pageid (int64) | title (string) | revid (int64) | description (string, nullable) | categories (list) | markdown (string)
---|---|---|---|---|---
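The row schema above can be modeled as a simple record type. This is a minimal sketch, not the dataset's actual loading API (which is not specified here); the `WikipediaRow` class and `parse_categories` helper are hypothetical names, and the sketch assumes the raw `categories` cell is serialized as a JSON array, as the rows below suggest:

```python
import json
from dataclasses import dataclass
from typing import Optional, List


@dataclass
class WikipediaRow:
    """One row of the table above: a snapshot of a Wikipedia page."""
    pageid: int                    # e.g. 12424458
    title: str                     # e.g. "Guadeloupe amazon"
    revid: int                     # revision id, e.g. 1167009304
    description: Optional[str]     # short description; nullable
    categories: List[str]          # decoded from the raw JSON-array cell
    markdown: str                  # full article text in markdown


def parse_categories(cell: str) -> List[str]:
    """Decode a raw `categories` cell, assumed to be a JSON array of strings."""
    return json.loads(cell)


row = WikipediaRow(
    pageid=12_424_458,
    title="Guadeloupe amazon",
    revid=1_167_009_304,
    description="Hypothetical extinct species of parrot from the Caribbean",
    categories=parse_categories('["Amazon parrots", "Birds of Guadeloupe"]'),
    markdown="The Guadeloupe amazon or Guadeloupe parrot ...",
)
```

The dataclass is only a convenience for downstream code; the raw dump itself remains pipe-delimited, with the multi-line `markdown` cell terminated by a lone `|`.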
12,424,458 | Guadeloupe amazon | 1,167,009,304 | Hypothetical extinct species of parrot from the Caribbean |
[
"Amazon parrots",
"Bird extinctions since 1500",
"Birds described in 1789",
"Birds of Guadeloupe",
"Controversial parrot taxa",
"Endemic fauna of Guadeloupe",
"Extinct birds of the Caribbean",
"Hypothetical extinct species",
"Taxa named by Johann Friedrich Gmelin",
"Taxonomy articles created by Polbot"
] |
The Guadeloupe amazon or Guadeloupe parrot (Amazona violacea) is a hypothetical extinct species of parrot that is thought to have been endemic to the Lesser Antillean island region of Guadeloupe. Mentioned and described by 17th- and 18th-century writers, it received a scientific name in 1789. It was moved to the genus Amazona in 1905, and is thought to have been related to, or possibly the same as, the extant imperial amazon. A tibiotarsus and an ulna bone from the island of Marie-Galante may belong to the Guadeloupe amazon. In 1905, a species of extinct violet macaw was also claimed to have lived on Guadeloupe, but in 2015, it was suggested to have been based on a description of the Guadeloupe amazon.
According to contemporary descriptions, the head, neck and underparts of the Guadeloupe amazon were mainly violet or slate, mixed with green and black; the back was brownish green; and the wings were green, yellow and red. It had iridescent feathers, and was able to raise a "ruff" of feathers around its neck. The bird fed on fruits and nuts, and the male and female took turns sitting on the nest. It was eaten by French settlers, who also destroyed its habitat. Rare by 1779, it appears to have become extinct by the end of the 18th century.
## Taxonomy
The Guadeloupe amazon was first described in 1654 by the French botanist Jean-Baptiste Du Tertre, who also wrote about and illustrated the bird in 1667. The French clergyman Jean-Baptiste Labat described the bird in 1742, and it was mentioned in later natural history works by writers such as Mathurin Jacques Brisson, Comte de Buffon, and John Latham; the latter gave it the name "ruff-necked parrot". The German naturalist Johann Friedrich Gmelin coined the scientific name Psittacus violaceus for the bird in his 1789 edition of Systema Naturae, based on the writings of Du Tertre, Brisson, and Buffon. The specific name violaceus means "violet".
In 1891, the Italian zoologist Tommaso Salvadori included Psittacus violaceus in a list of synonyms of the red-fan parrot (Deroptyus accipitrinus), a South American species. In 1905, the American zoologist Austin Hobart Clark pointed out that the colouration of the two species was dissimilar (their main similarity being a frill on the neck), and that Buffon stated that the parrot of Guadeloupe was not found in Cayenne where the red-fan parrot lives. Clark instead suggested that the Guadeloupe species was most closely related to the extant, similarly coloured imperial amazon (Amazona imperialis) of Dominica. He therefore placed the Guadeloupe bird in the same genus, with the new combination Amazona violacea, and referred to it by the common name "Guadeloupe parrot". The name Amazona comes from the French word "Amazone", which Buffon had used to refer to parrots from the Amazonian rainforest. In 1967, the American ornithologist James Greenway suggested that the amazon of Guadeloupe may have formed a superspecies with the imperial amazon and the extinct Martinique amazon (Amazona martinicana), and was perhaps a subspecies of the former. He considered it a hypothetical extinct species since it was only known from old accounts.
In 2001, the American ornithologists Matthew Williams and David Steadman argued in favour of the idea that the early accounts were a solid basis for the Guadeloupe amazon's existence. They also reported a tibiotarsus bone found at the Folle Anse archaeological site on Marie-Galante, an island in the Guadeloupe region, which they found similar to that of the imperial amazon, but slightly shorter. Since Marie-Galante shares many modern bird species with Guadeloupe, they suggested that the bone belonged to the Guadeloupe amazon, and assigned it to A. cf. violacea (which implies the classification is uncertain). In 2004, Patricia Ottens-Wainright and colleagues pointed out that the early descriptions of the Guadeloupe amazon did not clearly determine whether it was a unique species or the same species as the imperial amazon. The ornithologists Storrs Olson and Edgar Maíz, writing in 2008, felt that the Guadeloupe amazon was probably the same as the imperial amazon. In contrast, the English ornithologist Julian P. Hume wrote in 2012 that, though the amazon species of Guadeloupe and Martinique were based on accounts rather than physical remains, they likely once existed, both because they were mentioned by trusted observers and on zoogeographical grounds. In 2015, the ecologists Monica Gala and Arnaud Lenoble stated that an ulna bone from Marie-Galante, which had been assigned to the extinct Lesser Antillean macaw (Ara guadeloupensis) by Williams and Steadman in 2001 and to the imperial amazon by Olson and Maíz in 2008, instead belonged to the Guadeloupe amazon.
### The "violet macaw"
In 1905, the British banker and zoologist Walter Rothschild named Anodorhynchus purpurascens, based on an old description of a deep violet parrot seen on Guadeloupe, found in an 1838 publication by a "Don de Navaret". He interpreted it as an extinct Anodorhynchus macaw due to its entirely blue colouration, and said the native Caribs called it "onécouli". Greenway suggested this "mythical macaw" may have been based on a careless description of the Guadeloupe amazon, or possibly an imported Lear's macaw (Anodorhynchus leari) from South America. He was unable to check the reference given by Rothschild, but suggested it may have been a publication by the Spanish historian Martín Fernández de Navarrete.
In 2000, the English writer Errol Fuller suggested the bird may have been an imported hyacinth macaw (Anodorhynchus hyacinthinus). In 2001, Williams and Steadman were also unable to find the reference listed by Rothschild, and concluded that the supposed species required further corroboration. The biologists James W. Wiley and Guy M. Kirwan were also unable to find the reference to the violet macaw in 2013, but pointed out an account by the Italian historian Peter Martyr d'Anghiera, who described how the Spanish took parrots that were mainly purple from Guadeloupe during the second voyage of Christopher Columbus.
In 2015, Lenoble reviewed overlooked historical Spanish and French texts, and identified the sources on which Rothschild had based the violet macaw. An 1828 publication by de Navarrete mentioned parrots on Guadeloupe during the second voyage of Columbus, but did not state their colour or include the term "onécouli". Lenoble instead pointed to a Carib-French dictionary by the French missionary Raymond Breton (who was on Guadeloupe from 1635 to 1654) which included terms for parrots, and the passage "onicoali is the Guadeloupe variety, which differs from the others being larger and violet, with red-lined wings". Lenoble concluded that this referred to the Guadeloupe amazon since Breton appears to have reserved the word parrot for birds smaller than macaws, and due to the consistent plumage pattern mentioned. Lenoble recognised all the elements of Rothschild's description in Breton's text, but suggested that Rothschild must have relied on a secondary source since he spelled the name differently. This source appears to have been a footnote in an 1866 article, which quoted Breton, but gave an incorrect citation. It used a francised version of the bird's name ("onécouli"), and implied it could have been a macaw. Lenoble therefore concluded that the supposed "violet macaw" was based on misidentified references to the Guadeloupe amazon, and that the Lesser Antillean macaw was the only macaw species that lived on Guadeloupe.
## Description
Du Tertre described the Guadeloupe amazon as follows in 1654:
> The Parrot of Guadeloupe is almost as large as a fowl. The beak and the eye are bordered with carnation. All the feathers of the head, neck, and underparts are of a violet color, mixed with a little green and black, and changeable like the throat of a pigeon. All the upper part of the back is brownish green. The long quills are black, the others yellow, green, and red, and it has on the wing-coverts two rosettes of rose color.
Labat described the bird as follows in 1742:
> The Parrots of these islands are distinguishable from those of the mainland of Guinea (? Guiana) by their different plumage; those of Guadeloupe are a little smaller than the Macaws. The head, neck, and underparts are slaty, with a few green and black feathers; the back is wholly green, the wings green, yellow, and red.
Clark noted that the iridescent feathers described are not unique to the Guadeloupe amazon, as other freshly killed amazons also show this to a greater or lesser degree, especially the Saint Vincent amazon (Amazona guildingii). He suggested that the black of the head and underparts of the Guadeloupe bird could have been the borders of the feathers, as seen in the imperial amazon, whereas the green may have been a sign of immaturity, as in the Saint Vincent amazon. He also likened the brownish-green upperparts to those of a young Saint Vincent amazon, and suggested that the red "rosettes" mentioned by Du Tertre may have been scattered red feathers among the wing coverts. Clark listed features of the imperial amazon which contrasted with those of the Guadeloupe amazon, such as its deep purple head and underparts, green upperparts, and wings with dark brown, purple, green, blue, and red feathers.
As well as being described as violet by Du Tertre and slate by Labat, the head and underparts of the bird were described as ashy blue by Brisson. Greenway suggested some of this discrepancy may have been because Labat confused the Guadeloupe amazon with the Martinique amazon, as he appears not to have distinguished between the birds. Hume consolidated these descriptions under the term "slaty-blue".
Rothschild featured an illustration of the Guadeloupe amazon, based on the early descriptions, by the Dutch artist John Gerrard Keulemans in his 1907 book Extinct Birds. In 1916, the American ornithologist Robert Ridgway criticised the illustration for differing from Du Tertre's description: Du Tertre apparently meant that only the proximal primary feathers were yellow, whereas in the illustration all the covert feathers are yellow apart from a red edge, and the head and underparts are slate.
## Behaviour and ecology
In 1654, Du Tertre described some behavioural traits of the Guadeloupe amazon, and listed items among its diet:
> When it erects the feathers of its neck, it makes a beautiful ruff about its head, which it seems to admire, as a peacock its tail. It has a strong voice, talks very distinctly, and learns quickly if taken young. It lives on the wild fruits which grow in the forests, except that it does not eat the manchioneel. Cotton seed intoxicates it, and affects it as wine does a man; and for that reason they eat it with great eagerness ... The flavor of its flesh is excellent, but changeable, according to the kind of food. If it eats cashew nuts, the flesh has an agreeable flavor of garlic; if 'bois des inde' it has a flavor of cloves and cinnamon; if on bitter fruits, it becomes bitter like gall. If it feeds on genips, the flesh becomes wholly black, but that does not prevent its having a very fine flavor. When it feeds on guavas it is at its best, and then the French commit great havoc among them.
Clark noted that the Saint Vincent amazon and other amazon species can also raise a "ruff" of feathers around their neck when excited.
In 1667, Du Tertre repeated his description of the Guadeloupe amazon, and added some details about its breeding behaviour:
> We had two which built their nest a hundred paces from our house in a large tree. The male and the female sat alternately, and came one after the other to feed at the house, where they brought their young when they were large enough to leave the nest.
## Extinction
In 1779, Buffon stated that the Guadeloupe amazon had become very rare, and indicated why it may have become extinct:
> We have never seen this parrot, and it is not found in Cayenne. It is even very rare in Guadeloupe today, for none of the inhabitants of that island have given us any information concerning it; but that is not extraordinary, for since the islands have been inhabited, the number of parrots has greatly diminished, and Dutertre remarks in particular of this one that the French colonists wage a terrible war on it in the season when it is especially fat and succulent.
Greenway suggested that both the French settlers and their slaves ate the Guadeloupe amazon as well as destroyed its habitat. The supposedly related imperial amazon survives in the steep mountain forests of Dominica. Guadeloupe is less mountainous than Dominica, more suitable for farming and, historically, has had a larger human population. Because of this, there would have been a greater pressure on the Guadeloupe amazon and it appears to have become extinct by the end of the 18th century. All the amazon species still extant on the West Indian islands are endangered, since they are trapped for the pet-trade and overhunted for food, and also because of destruction of their habitat.
|
21,232,865 | John Edward Brownlee sex scandal | 1,147,468,646 | 1934 scandal in Alberta, Canada |
[
"1934 in Alberta",
"1934 in case law",
"1934 in politics",
"Alberta political scandals",
"John Edward Brownlee",
"Political sex scandals in Canada"
] |
The John Brownlee sex scandal occurred in 1934 in Alberta, Canada, and forced the resignation of the provincial Premier, John Edward Brownlee. Brownlee was accused of seducing Vivian MacMillan, a family friend and a secretary in the office of Brownlee's attorney-general, in 1930, when she was 18 years old, and of continuing the affair for three years. MacMillan claimed that the married premier had told her that she must have sex with him for his own sake and that of his invalid wife. She had, she testified, relented after physical and emotional pressure. Brownlee called her story a fabrication, and suggested that it was the result of a conspiracy by MacMillan, her would-be fiancé, and several of Brownlee's political opponents in the Alberta Liberal Party.
MacMillan and her father sued Brownlee for seduction. After a sensational trial in June 1934, the six-man jury found in favour of the plaintiffs, awarding them \$10,000 and \$5,000, respectively. In an unusual move, the trial judge, William Ives, disregarded the jury's finding and dismissed the case. The Supreme Court of Canada eventually overturned the dismissal and awarded MacMillan \$10,000 in damages. This award was affirmed by the Judicial Committee of the Privy Council in Britain, Canada's highest court of appeal at the time. All of this was largely academic to Brownlee, who resigned after the jury's finding. In the next election, his United Farmers of Alberta were wiped out of the legislature, losing every seat.
## Background
John Brownlee became Premier of Alberta in 1925 as the leader of the parliamentary caucus of the United Farmers of Alberta (UFA). Early in his premiership, he achieved a number of successes, including winning control of the province's natural resources from the federal government, but by 1933 the Great Depression was taking its toll on his government's popularity. Political forces were advocating radical overhauls of the financial system. The Co-operative Commonwealth Federation and elements of the UFA's grassroots favoured socialism and government ownership of the means of production, while the Alberta Liberal Party, many within the UFA, and William Aberhart's new provincial movement favoured social credit, although in differing forms and with differing levels of enthusiasm.
In 1934, Brownlee was embroiled in a sex scandal with major consequences for his political career. Those involved in the scandal gave widely disparate accounts of the surrounding facts; the parties agreed on only a minority of details. In 1930, Brownlee visited Edson while campaigning in that year's provincial election. While there, Allan MacMillan—the mayor of Edson and a political ally of Brownlee's—took him to a farmers' picnic. On the way to the event, Brownlee chatted with MacMillan's daughter, Vivian, then seventeen years old and unsure as to her future. The premier encouraged her to come to Edmonton and study business at Alberta College. She did so and, after graduating in June 1931, started working in the office of the provincial Attorney-General as a stenographer on July 3.
While in Edmonton, she became close to the Brownlee family. On July 5, 1933, while the rest of his family was vacationing at Sylvan Lake, Brownlee was taking MacMillan for a car ride when he noticed they were being followed. In the pursuing vehicle were John Caldwell, a suitor of MacMillan's and third-year medical student at the University of Alberta, and Neil MacLean, a prominent Edmonton lawyer and Liberal Party supporter who had been opposing counsel in the acrimonious and high profile divorce proceeding of Brownlee's Minister of Public Works, Oran McPherson. Brownlee made a series of sharp turns and reversals, in an effort to first ascertain whether he was indeed being followed and, once satisfied that he was, to evade the other car. Unable to do so, he dropped MacMillan off at her home and returned to his.
That August, Brownlee received a letter from MacLean reading in part "We have been instructed to commence action against you for damages for the seduction of Miss Vivian MacMillan." Later that month, he took advantage of a recess in the federal Royal Commission on Banking and Currency, of which he was a member, to visit Allan MacMillan in Edson. He spoke instead to Mrs. MacMillan, who initially refused to let him into the house and asked him to leave. She eventually relented and let him in; he told her that pursuing the matter could ruin Vivian's future, to which she responded "what about you?" Concluding that the meeting was pointless, Brownlee parted by announcing "I am not asking you to refrain from your action, but I want to tell you that the allegation is not true and I will face them frankly and answer any questions ... If its [sic] money you are after, I haven't got it."
On September 22, MacLean filed a statement of claim before Judge John R. Boyle on behalf of Allan and Vivian MacMillan. The claim was made under the Alberta Seduction Act, and sought damages of \$10,000 for Vivian and \$5,000 for Allan. It alleged that Brownlee, after arranging for Vivian's move from Edson to Edmonton, had seduced her in the fall of 1930 when she was eighteen, and had had regular sexual contact with her for a period of three years. Brownlee denied the allegations immediately (and made a rejected offer to resign from the Royal Commission) and on November 13 filed a counter-claim against Vivian MacMillan and John Caldwell, alleging that they had conspired to obtain money through false allegations.
## Vivian MacMillan's story
According to Vivian MacMillan, when she met Brownlee in 1930 he told her that she would "grow up to be a beautiful woman", urged her to move to Edmonton, and offered to arrange a government job for her. He further offered to act as guardian to her and allow her to live in his house until she found a place of her own. On his advice and assurances, she moved to Edmonton and, after graduating from Alberta College, received the stenographer's position that she claimed had been arranged for her by the premier.
Immediately after her arrival in Edmonton, she said, Brownlee had telephoned her—commenting that "a little birdie" had told him that she was in town—and invited her to his home to meet his family; she soon became a regular visitor there. She alleged that in October 1930, while Brownlee was driving her home after one such visit, the premier took her hand and asked her what she knew "about life". On her response that she knew probably as much as any girl of eighteen, he invited her out the next evening for what she presumed would be some advice. Instead, he drove her 6 miles (9.7 km) west of town on Highway 16 and parked on a side road before asking her to have sex with him. He said that he had been madly in love with her from the start, that he was lonely, that he and his wife had not lived together as man and wife in a long time, that his wife (an invalid) would be endangered by a pregnancy, and that he could not be premier any longer unless MacMillan agreed to have sex with him. He told her that if she refused him, he would be forced to resume his sexual relationship with his wife, and that this would likely kill her. MacMillan reacted fearfully, and asked if there was anything else she could do to help Brownlee and his wife; he replied that there was not.
The next week on another ride home, a similar conversation ensued, this one culminating in Brownlee forcing a resisting MacMillan into the car's back seat where he partially penetrated her against her will. Two weeks later, she alleged, they had complete consensual intercourse. After, when she expressed concern about becoming pregnant, he told her that "he knew of some pills that he would give me and if I took them at the end of each month before I menstruated that they would be very safe and there would not be any danger of me becoming pregnant." MacMillan recounted that their relationship continued in this way, with sex occurring an average of three times per week. In September 1931, she stayed in the Brownlee house for three days while Mrs. Brownlee was in Vancouver; she alleged that during that time, Brownlee had his son, who usually slept in Brownlee's room, transferred to a different room so that Brownlee and MacMillan could have sex.
Some of MacMillan's most sensational allegations concerned a six-week period in the spring of 1932 when she was filling in at the Brownlee household for an absent maid. She said that she slept in the maid's room, one of three bedrooms on the second floor of Brownlee's house; a second room was occupied by Brownlee and his son Jack, and the third by Florence Brownlee and her son Alan. During this six-week period, she claimed, she and Brownlee had had sex every night; Brownlee would signal her to leave her room by turning on the tap in the second floor bathroom, and then flush the toilet and walk in lockstep with her to mask the sound of her movement. Once in the premier's room, they would have sex next to his sleeping son, taking care to be quiet. She recounted how on one occasion Jack had seemed to stir, and Brownlee had turned on the light in the middle of intercourse to make sure that his son was all right.
MacMillan said that during the summer of 1932 she experienced a nervous breakdown (for which Florence Brownlee paid the hospital bills), and that she met and fell in love with Caldwell soon after. She resolved to end her affair with Brownlee but he reacted angrily, telling her that it would mean his wife's death and MacMillan's inability to find a job anywhere in Alberta. That evening, she confided the affair to her landlady. On October 31, 1932, she had dinner with Brownlee's sons and visited Brownlee, who was sick in bed. Despite her protestations that she was on her way to a Halloween party with Caldwell, he insisted that they have sex, which they did. Thereafter, the affair resumed. On another occasion, he called her away from her visiting mother to have sex with him at the legislature building.
In late January 1933, Caldwell proposed to her. She broke down and told him of the affair. She described his reaction as sympathetic, though he rescinded the marriage proposal. In May, at Caldwell's urging, she consulted a lawyer, but continued the affair until July 5, the night of the fateful drive.
MacMillan testified that for the duration of the affair she continued to have sex with Brownlee "from terror and because he told me it was my duty to do it and he seemed to have an influence over me which I could not break." She claimed that there had been no love accompanying the sex, and that it had been physically painful for her on each occasion.
## John Brownlee's story
Brownlee denied absolutely MacMillan's claims. He said that there had been no sexual activity between him and MacMillan, likening their relationship instead to that of an uncle and his favourite niece. To claims that he had induced MacMillan to move to Edmonton and arranged a position for her in the Attorney General's office, he asserted "in the thirteen years I have been in public life I have never promised any person in this Province a position." He denied having convinced MacMillan to move to Edmonton and stated that he had not even known that she had done so until Christopher Pattinson, Member of the Legislative Assembly (MLA) for Edson, told him. He further claimed that his sex life with Mrs. Brownlee was what he would consider normal for a husband and wife (which was corroborated by his wife).
He acknowledged that he had been driving MacMillan around the evening of July 5, 1933, when he was followed by Caldwell and MacLean, but gave a dramatically different account of his reasons for doing so. According to him, there had been talk of MacMillan joining his family at their rental cottage at Sylvan Lake that weekend provided that she could get the necessary time off work, and that evening he called her to see whether or not she had been able to. During the ensuing phone conversation, MacMillan told him that she had other problems bothering her, and asked if Brownlee would take her for a drive to discuss them. He agreed to do so, and it was during this drive that he noticed that he was being followed.
In support of this story, Brownlee pointed to investigational work by Harry Brace, a private detective in the employ of Attorney General John Lymburn. According to Brace, Caldwell had told at least three witnesses that he expected to soon receive a large amount of money from someone "high up in political life". He also specifically told one of Brace's agents that he had deliberately set out to frame Brownlee, that in selecting Neil MacLean as his lawyer he had deliberately chosen a Liberal (the Liberals were considered the major opposition to Brownlee's government at the time), and that if the Liberals won the next election there would be "nothing I want I won't be able to get". Disappointingly for Brownlee, Brace did not uncover evidence that MacMillan was lying about the affair itself: Caldwell, based on his comments to Brace's men, seemed very much under the impression that the affair had occurred exactly as claimed. Moreover, Brace found that Carl Snell, MacMillan's one-time suitor, claimed to have been told in 1932 that MacMillan was having a consensual affair with the premier.
Brownlee's defenders called into doubt MacLean's motivation for involvement in the case: according to rumour, MacLean had been involved in a drunk driving incident several years previous in which he had driven his car into a ditch. When another motorist had pulled him out, MacLean had attempted unsuccessfully to drive away with the chains still attached to his vehicle, for which he was charged. He had reputedly asked Brownlee, then the Attorney General, to have the charges dropped. Upon Brownlee's refusal, he had allegedly vowed to "get" him. Finally, Brownlee made a point of noting that, as a medical student, Caldwell would have been well-positioned to coach MacMillan on her claims about the pills she was taking to avoid pregnancy. According to Brownlee, the events alleged were a complete fabrication, the result of scheming by an opportunistic young medical student and his impressionable girlfriend, encouraged by a vindictive lawyer and unscrupulous political opponents.
## Legal processes
### Trial
The trial began in June 1934 before Justice William Ives with three days of testimony from MacMillan. Brownlee's lawyer, Arthur LeRoy Smith, used his cross examination to call into question almost everything MacMillan said. To refute her claim that Brownlee had convinced her to move to Edmonton, he entered into evidence a letter she had written to Alberta College seeking information on its programs, dated before she had even met Brownlee. He further demonstrated that on the evening of the seduction, which had allegedly taken place in a car on a side road west of Edmonton, the city had been engulfed in a blizzard. Moreover, the government car in which the seduction was supposed to have taken place had not been purchased until more than a year after that date. In response to her testimony that she had always slept in the maid's room while staying with the Brownlees, Smith produced letters showing that she had actually slept in Mrs. Brownlee's room. After MacMillan conceded her mistake, Smith noted that Mrs. Brownlee's room had a large deadbolt on the door: if she had feared Brownlee, why had she not used it? "Because I just did as Mr. Brownlee said," was the plaintiff's response. MacMillan, when questioned, admitted that the period during which she had been staying in the Brownlee home in the spring of 1932, which she had initially placed at six weeks, was actually only four. When she identified these four weeks as the last two weeks of April and the first two of May, Smith showed that Brownlee had been out of town for all but ten nights of that period.
Other witnesses for the plaintiffs included a former maid of Brownlee's, who testified that she had seen the premier pick MacMillan up in his car late one night, and MacMillan's landlady's daughter, who testified that she found MacMillan sobbing in her room one night. Allan MacMillan was also called: though he testified that Brownlee had encouraged his daughter to move to Edmonton and promised to forward information about Alberta College, he acknowledged that the premier had not followed through and not contacted her again until she was in Edmonton.
The defence called Brownlee, who recounted his version of events. He testified that he had been otherwise occupied on many of the days that he and MacMillan had supposedly had sex; in one case, he produced newspaper stories showing that he had been making a speech in Stettler at a time that MacMillan had claimed he was forcing himself upon her in Edmonton. In another, he testified that he was meeting with O. H. Snow, the mayor of Raymond. MacLean on cross-examination tried to paint Brownlee as a man of tremendous persuasive powers, recalling his time as a lawyer in Calgary, only to have Brownlee retort that he had only ever tried two cases, spending most of his time drafting commercial documents. MacLean also emphasized the \$1,400 that Lymburn as Attorney General had spent investigating the case, suggesting that this amounted to government funds being spent to vindicate Brownlee personally; outside of the courtroom, Lymburn responded that his office had received a complaint that an "Edmonton lawyer"—taken by all involved to be MacLean—had approached a young woman offering money to place Brownlee in a compromising position, and that, as a criminal allegation, it had been the obligation of his office to investigate. He further emphasized that, against his protestations, Brownlee had insisted on reimbursing the government for the full cost of the investigation.
After the premier's testimony was completed, Smith called his wife, Florence Brownlee. She supported her husband's account of MacMillan's relationship with the Brownlee family and reported that, when the premier drove MacMillan home at night, he was very seldom late returning. On cross-examination, she denied that she would have defended her husband if she believed him to be guilty. Additional witnesses for the defence included Brownlee's personal secretary, Civil Service Commissioner Frederick Smailes, and four legislature janitors. Smailes acknowledged knowing at the time of MacMillan's hiring that she was acquainted with Brownlee, but denied involvement on Brownlee's part in the decision to hire her, while the janitors denied ever seeing a young woman enter the premier's office in the evenings. Jessie Ellergert, who had worked for the Brownlees as a maid, said that she had no reason to believe that there was a sexual relationship between the premier and MacMillan; moreover, she specifically recalled the Halloween night MacMillan had referred to in her testimony, and testified that the household was far too bustling for the alleged sex to have occurred.
The trial concluded with a field trip, as the jury went to view both Brownlee's house and two stretches of road where MacMillan had claimed key encounters took place. Rainy weather meant that on more than one occasion the jurors and lawyers had to push cars out of the mud. Though one road essentially matched MacMillan's description, it was located next to a populated settlement rather than deserted as she had claimed. The other, in contrast to her description of it as a side road, was a busy highway. Upon the jury's return, Smith surprised them by announcing that Brownlee's counter-claim was being dropped; he said that there was no need to complicate the clear-cut issue of "seduction or no seduction" with evidence about a conspiracy on the part of MacMillan and Caldwell. Legal historian Patrick Brode criticized this decision, suggesting that the jury was expecting proof of a conspiracy and that, when this proof was not forthcoming, Brownlee's credibility was hurt.
Besides the factual issues that the jury was called on to adjudicate, there was a legal issue of what constituted "seduction" under the law. The basis of the claim was a two-hundred-year-old tort which allowed a man to sue anybody who impregnated his female servant. The basis for damages under such a claim was the servant's inability to perform her duties to the detriment of the employer. The tort was later broadened to allow the seductee's father to sue; only in statute in 1903 was the law amended to give standing to the woman herself. At issue was what damage, if any, she needed to show in order to have a cause of action. The defence argued that in all precedents there had been a pregnancy resulting, and that without one the plaintiffs could not claim damages. In response, MacLean emphasized the not entirely consensual nature of the alleged relationship. Brownlee himself responded that if the alleged relationship had been non-consensual, he should have been charged under the criminal law for rape, not sued for seduction; that the plaintiffs had not attempted to press criminal charges was evidence, he believed, of their bad faith and financial motivation.
After six days of testimony, closing arguments were given: Smith's lasted two hours and fifteen minutes and emphasized the discrepancies in MacMillan's story. MacLean's was a relatively brief forty minutes, in which he argued that the improbable and fantastic nature of his client's tale was evidence that she could not possibly have invented it. Ives then instructed the jurors, and defined "seduction" as "inducing a woman to part with her virtue ... [which] may be by any artful device that brings about her consent." After four hours and forty minutes the jury returned and announced its finding that Brownlee had seduced MacMillan in October 1930 when he had partially penetrated her, and that both she and her father had suffered damage in the amounts claimed. Ives immediately announced that he strongly disagreed with the jury's findings, and that "the evidence does not warrant them". On July 2, he issued his written ruling, overturning the jury's verdict and dismissing the action; his reason for doing so was what he viewed as the lack of damage being demonstrated by the plaintiffs. According to Ives, even if the facts had been exactly as MacMillan had described, as a matter of law the plaintiffs could not claim damages without a pregnancy or an illness.
### Media and public reception
The trial was covered in lurid detail, especially by the Edmonton Bulletin, which called it "the greatest drama ever to be heard in an Alberta court". The Bulletin was a Liberal paper, and MacLean had given it an advance copy of his statement of claim, which allowed MacMillan's allegations to be published and disseminated before the statement of claim was filed. The Bulletin was emphatically sympathetic to MacMillan in its coverage, and printed her detailed testimony (which included the dates and times of specific encounters) almost verbatim. Under the headline "Vivian Testifies to Harrowing Ordeal", it praised the young plaintiff as "bearing up with wonderful fortitude" and facing the ordeal "with courageous mien". Brownlee, in contrast, was a "love-torn, sex crazed victim of passion and jealousy, forcing his will upon her in parked autos and on country highways". The jury was not sequestered and was free to read these accounts. Edmontonians were no less enthralled than their newspaper, and many showed up to the courthouse early on the days of trial, hoping to get a seat. Towards the end of the trial, Ives revoked the Bulletin's press privileges at the trial and fined its publisher \$300 and a reporter \$100 for publishing writing "likely to inflame public opinion and interfere with the even-handed course of justice."
Media attention on the trial spread beyond the provincial and national borders: Time magazine published at least two articles on the trial in the United States, and the Daily Mail and Paris Midi covered it from across the Atlantic.
Reaction to the trial's outcome was mixed. The Bulletin was outraged, as was the Canadian Civil Liberties Protective Association, which called Ives' decision to overturn the jury's finding one that "set the clock back 300 years". Both organized subscriptions to finance an expected appeal. The Winnipeg Free Press called for an investigation of Ives for apparent favouritism towards Brownlee. The Vancouver Sun, on the other hand, sympathized with the premier, arguing that his "personal difficulties should not have been aired publicly". Brownlee's political allies, including Irene Parlby and Henry Wise Wood, remained loyal, with Wood keeping a large picture of Brownlee on the wall of his guest bedroom.
### Appeals
The plaintiffs appealed and the case went before the Alberta Supreme Court appeals division in January 1935. On February 2, by a 3–2 decision, the court upheld Ives' ruling. The majority ruling by Chief Justice Horace Harvey cast serious doubts on MacMillan's credibility, calling her story "quite unsupported by other evidence" and noting that she "showed a readiness to admit that she may have been mistaken as regards very positive statements previously made when by the questions it appeared there may be independent evidence she was wrong". In addition to agreeing with Ives on the points of law, he felt that the jury had not based its finding of fact on the evidence in the case. Justices Mitchell and Ford concurred. Justice Clarke, in dissent, agreed that MacMillan's story was unlikely, but expressed a willingness to defer to the jury on questions of fact. On the legal questions, he cited a precedent written by Justice Harvey himself in which the chief justice had argued that the inclusion of seduced women as potential plaintiffs under the Seduction Act proved that its framers intended a broader definition of damage than financial damage. Justice Lunney concurred. The court was unanimous in upholding Ives' dismissal of Allan MacMillan's action, and he did not appeal further.
Not satisfied with the verdict, the Bulletin again organized a campaign to fund an appeal, which was submitted to the Supreme Court of Canada; on March 1, 1937, Ives' decision was overturned. Chief Justice Lyman Duff, writing for the majority, accepted the jury's finding of fact and, echoing Justice Clarke, concluded that the framers of the Alberta Seduction Act had not intended that damage to a seductee be required to be the same as those to her father or employer (i.e. financial) in order to be actionable. The court ordered Brownlee to pay \$10,000 in damages to MacMillan, plus trial costs. Henry Hague Davis in dissent focussed less on the questions of law and more on the evidence in the case, and argued that the jury's finding of fact was perverse and that the appeal should be dismissed.
After the Supreme Court ruling, Brownlee settled with MacMillan, but still desired to clear his name. On July 1, 1937, the federal government by Order in Council gave him leave to appeal to the Judicial Committee of the British Privy Council, at the time Canada's highest court of appeal. On March 11 and 12, 1940, the committee heard Brownlee's appeal. It was denied, as the committee endorsed the Supreme Court of Canada's focus on statutory interpretation.
## Legacy
For John Brownlee's political career, Ives' ruling and the subsequent appeals were irrelevant: once the jury ruled in MacMillan's favour, he immediately announced that he would resign as soon as a replacement could be found. On July 10, 1934, he was succeeded as Premier by Richard Gavin Reid, his government's Treasurer and Minister of Health and Municipal Affairs. Brownlee stayed on as MLA and sought to retain his Ponoka seat in the 1935 provincial election, but was trounced by Edith Rogers of William Aberhart's Alberta Social Credit League. Not a single UFA member won re-election as Aberhart's movement and its promises of innovative solutions to the western world's economic problems rode to a decisive victory. In evaluating Social Credit's victory, historians unanimously cite the province's dire economic straits as the main factor, though University of Alberta historian David Elliott has acknowledged that "Aberhart and his cause were also helped" by the seduction scandal. This view has been endorsed by University of Western Ontario sociologist Edward Bell. John Barr, in his history of the Alberta Social Credit Party, is more dismissive, calling it "unlikely" that the scandal was a major factor in the UFA's defeat.
Brode acknowledges that the question of whether Brownlee seduced MacMillan "defies any definitive answer" but says that the evidence presented in the trial did not justify a finding that he did, and speculates that if MacMillan had brought her suit in a later generation she would have been "laughed out of court". Lakeland College historian and Brownlee biographer Franklin Foster does not take a position on whether or not Brownlee was guilty of seduction, but hints that a likely truth might lie "between the two extremes" of the parties' claims: that Brownlee and MacMillan did have a consensual affair which was then hijacked and exploited by the premier's more opportunistic and vengeful opponents. He leaves little doubt that he considers the behaviour of the Edmonton Bulletin and of the Liberal Party, especially its leader, William R. Howson, to have been profoundly unethical. Athabasca University historian Alvin Finkel has criticized Foster for being too friendly towards Brownlee, saying that he does not consider the scandal sufficiently from MacMillan's perspective.
A play at the 2008 Edmonton International Fringe Festival, Respecting the Action for Seduction: The Brownlee Affair, was based on the scandal, and received average to above average reviews.
After leaving office, John Brownlee returned to the practice of law. He died in 1961. Vivian MacMillan stayed out of the limelight. She did not marry Caldwell, and returned to Edson, where on August 7, 1935, she wed confectioner Henry Sorenson. Following her husband's death, she became the bookkeeper for a Calgary construction company. After an affair, she married her boss, Frank Howie, in 1955. Vivian Howie died in 1980.
|
41,959,168 |
Capon Lake Whipple Truss Bridge
| 1,112,106,155 |
Bridge in West Virginia
|
[
"1874 establishments in West Virginia",
"1938 establishments in West Virginia",
"Bridges completed in 1874",
"Bridges completed in 1938",
"Buildings and structures in Hampshire County, West Virginia",
"Former road bridges in the United States",
"National Register of Historic Places in Hampshire County, West Virginia",
"Pedestrian bridges in West Virginia",
"Pedestrian bridges on the National Register of Historic Places",
"Relocated buildings and structures in West Virginia",
"Road bridges on the National Register of Historic Places in West Virginia",
"Transportation in Hampshire County, West Virginia",
"Truss bridges in the United States",
"U.S. Route 50",
"Whipple truss bridges in the United States",
"Wrought iron bridges in the United States"
] |
The Capon Lake Whipple Truss Bridge (locally /ˈkeɪpən/), formerly known as South Branch Bridge or Romney Bridge, is a historic Whipple truss bridge in Capon Lake, West Virginia. It is located off Carpers Pike (West Virginia Route 259) and crosses the Cacapon River. The bridge formerly carried Capon Springs Road (County Route 16) over the river, connecting Capon Springs and Capon Lake.
The bridge's Whipple truss technology was developed by civil engineer Squire Whipple in 1847. J. W. Murphy further modified Whipple's truss design in 1859 by designing the first truss bridge with pinned eyebar connections. The design of the Capon Lake Whipple Truss Bridge incorporates Murphy's later modifications with double-intersections and horizontal chords, and is therefore considered a Whipple–Murphy truss bridge. The Capon Lake Whipple Truss Bridge is West Virginia's oldest remaining example of a Whipple truss bridge and its oldest extant metal truss bridge.
The Capon Lake Whipple Truss Bridge was originally constructed in 1874 as part of the South Branch Bridge (or alternatively, the Romney Bridge), a larger two-span Whipple truss bridge conveying the Northwestern Turnpike (U.S. Route 50) across the South Branch Potomac River near Romney. The larger Whipple truss bridge replaced an 1838 wooden covered bridge that was destroyed during the American Civil War. In 1874, T. B. White and Sons were charged with the construction of a Whipple truss bridge over the South Branch; that bridge served travelers along the Northwestern Turnpike for 63 years until a new bridge was constructed in 1937.
Dismantled in 1937, the bridge was relocated to Capon Lake in southeastern Hampshire County to carry Capon Springs Road (County Route 16) between West Virginia Route 259 and Capon Springs. The bridge was dedicated on August 20, 1938. In 1991, a new bridge was completed to the south, and the Capon Lake Whipple Truss Bridge was preserved in place by the West Virginia Division of Highways, due to its rarity, age, and engineering significance. The Capon Lake Whipple Truss Bridge was listed on the National Register of Historic Places on December 15, 2011.
## Geography and setting
The Capon Lake Whipple Truss Bridge is located in a predominantly rural agricultural and forested area of southeastern Hampshire County within the Cacapon River valley. Baker Mountain, a forested narrow anticlinal mountain ridge, rises to the immediate west, and the western rolling foothills of the anticlinal Great North Mountain rise to the bridge's east. The confluence of Capon Springs Run with the Cacapon River lies just north (downstream) of the bridge. George Washington National Forest is located to the bridge's southeast, covering the forested area south of Capon Springs Road.
The bridge is located along Carpers Pike (West Virginia Route 259) in the unincorporated community of Capon Lake, 2.05 miles (3.30 km) southwest of Yellow Spring and 6.77 miles (10.90 km) northeast of the town of Wardensville. The historic Capon Springs Resort and the unincorporated community of Capon Springs are located 3.5 miles (5.6 km) east of Capon Lake on Capon Springs Road (West Virginia Secondary Route 16). The bridge is located immediately north (downstream) of the intersection of Carpers Pike with Capon Springs Road, which is carried across the Cacapon River via the current Capon Lake Bridge, a girder bridge built in 1991 to replace the Whipple truss bridge for conveying vehicle traffic. The property containing the Capon Lake Whipple Truss Bridge is less than 1 acre (0.40 ha) in size.
## Architecture
The Capon Lake Whipple Truss Bridge is an early example of the use of metal truss bridge load-bearing superstructure technology, which defined highway bridge design well into the 20th century. Because of "its uncommon innovative design and age", the bridge is one of West Virginia's most historically significant bridges. It is the oldest remaining example of a Whipple truss bridge in West Virginia, and the oldest extant metal truss bridge in the state. The metal truss technology of the bridge displays distinctive innovations developed by the prominent civil engineers and bridge designers Squire Whipple and J. W. Murphy; the innovations are evident in the bridge's double-intersection diagonals and counter-diagonals with pin connections.
Approximately 20 feet (6.1 m) in width and 176 feet (54 m) in length, the bridge is built atop a reinforced concrete abutment and pier. Its truss structure exhibits a double-intersection configuration, constructed of 14 bays, each measuring approximately 11 feet (3.4 m) wide and 23 feet (7.0 m) in height, with the diagonals extending across two bays each. The bridge is fabricated of wrought iron bracketed with pins. Spanning the full length of the bridge is a wooden pedestrian walkway that consists of an observation deck and wooden seating near the bridge's midspan.
## History
### Whipple truss development
The bridge's Whipple truss technology was developed in 1847 by civil engineer Squire Whipple, who received a patent from the U.S. Patent Office the same year. Whipple was one of the first structural engineers to apply scientific and mathematical analysis of the forces and stresses in framed structures to bridge design, and his groundbreaking 1847 book, A Work on Bridge Building, had a significant influence on bridge engineering. Whipple's truss bridge design incorporated double-intersection diagonals into the standard Pratt truss, thus allowing the diagonals to extend across two truss bays. Engineer J. W. Murphy further modified Whipple's truss design in 1859 when he designed the first truss bridge with pinned eyebar connections, which utilized pins instead of trunnions. Murphy's design removed the need for riveted connections and allowed for easier and more widespread construction of truss bridges. In 1863, Murphy designed the first pin-connected truss bridge with both wrought iron tension and compression components and cast iron joint blocks and pedestals. Murphy's truss design consisted of double-intersection counter-diagonals, and along with the eyebar and pin connections, permitted longer iron bridge spans.
The technological design advances made by Whipple and Murphy, in addition to further advances in steel and iron fabrication, made wrought iron truss bridges a major industry in the United States. The Capon Lake bridge was a Whipple–Murphy truss bridge, since it incorporated Murphy's later modifications with double-intersections and horizontal chords. At the time of the bridge's original fabrication in 1874, metal truss bridges were ordered from catalogs by county courts and other entities responsible for transportation construction and maintenance. These entities provided the desired width, length, and other specifications, and the truss materials were shipped to the construction site and assembled by local construction teams. Metal truss bridges were more economically feasible, could span longer distances, and were simpler to construct than stone bridges, and they were more durable than wooden bridges. They were also marketed as detachable and transportable structures that could be dismantled and reassembled. The technology used in the Capon Lake Whipple Truss Bridge revolutionized transport throughout West Virginia. While the Whipple truss bridge had waned in popularity by the 1890s, the bridges were commonly disassembled and re-erected for use on secondary roads, as was the case with the Capon Lake Whipple Truss Bridge in 1938.
### T. B. White and Sons
The construction company that built the Capon Lake Whipple Truss Bridge, T. B. White and Sons, was established in 1868. Its founder Timothy B. White had been a carpenter and contractor in New Brighton, Pennsylvania since the 1840s. White also operated factories for iron cars and woolen mill machinery until 1859, when he began to concentrate solely on bridge construction. White's bridge company operated from a factory on the Beaver River in New Brighton until the factory was destroyed by fire in 1878. After the fire, the company relocated across the river to Beaver Falls and restructured as the Penn Bridge and Machine Works. In addition to iron truss bridges, the company produced a range of structural and architectural components and continued to expand; it employed over 500 workers by 1908. Penn Bridge and Machine Works fended off purchase by the American Bridge Company and continued to operate independently, unlike similar small bridge companies founded in the 19th century. The most prolific of its kind in the Pittsburgh region, the company was responsible for the construction of bridges throughout the United States.
### South Branch Bridge
The Capon Lake Whipple Truss Bridge was originally constructed in 1874 as part of the South Branch Bridge (or the Romney Bridge), a larger two-span Whipple truss bridge conveying the Northwestern Turnpike (U.S. Route 50) across the South Branch Potomac River 0.57 miles (0.92 km) west of Romney. The 1874 Whipple truss bridge across the South Branch replaced an 1838 wooden covered bridge that had been chartered by the Virginia General Assembly during the construction of the Northwestern Turnpike. Before the construction of the covered bridge in 1838, a public ferry conveyed traffic across the river. Isaac Parsons (1752–1796) operated a ferry there following its establishment by an act of the Virginia General Assembly in October 1786. The 1838 covered bridge remained in use until it was destroyed by retreating Confederate forces during the American Civil War. Throughout the course of the war, Romney reportedly changed hands 56 times between Confederate and Union forces, and the crossing of the South Branch Potomac River served as a strategic point due to its position along the Northwestern Turnpike, an important east–west route.
Following the conclusion of the war, nearly all bridges along the Northwestern Turnpike had been destroyed, including the South Branch Bridge. In order to restore local businesses and industry, Hampshire County citizens called a meeting and steps were taken at the local level to proceed with the construction of new bridges. Local citizens and the South Branch Intelligencer newspaper of Romney campaigned for the immediate replacement of the bridge because of "continual risk, danger and inconveniences arising from want of the South Branch Bridge at Col. Gibson's (destroyed during the war)...". Hampshire County began issuing bonds for the construction of a new bridge over the South Branch in 1868, and by 1874, construction of the Whipple truss bridge had commenced. T. B. White and Sons were charged with the bridge's construction.
The South Branch Intelligencer published periodic updates on the progress of the South Branch Bridge's construction. According to the newspaper, the bridge was scheduled to be completed by July 1875. During the course of construction, John Ridenour lost a finger while working on the bridge. The new South Branch Bridge was completed well ahead of schedule in October 1874. The October 12, 1874, edition of the South Branch Intelligencer characterized the new bridge as a "complete, handsome and durable structure", and further recounted that "the contractors, Messrs. White & Sons, New Brighton, Pennsylvania 'Penn Bridge & Machine Works,' have given us, in general opinion, a first rate, durable work, and deserve our best commendations.... We are confident that ours will realize a very handsome income and fully vindicate the wisdom of the County Court in voting its construction."
Following its construction in 1874, the Whipple truss bridge over the South Branch Potomac River served Romney and travelers along the Northwestern Turnpike for 63 years. In 1935, the West Virginia State Road Commission began organizing a project to replace the Whipple truss bridge, and construction of the new bridge had begun by 1936. In November of that year, a car collided with the south side of the eastern Whipple truss span, which knocked the span completely off its eastern abutment. The car plunged into the South Branch Potomac River, followed by the compromised truss span, which collapsed on top of the car. Unaware of the span's collapse, a car traveling from the west drove off the end of the west span at the bridge's center pier, and fell onto the collapsed span. According to the Hampshire Review, the only serious injury sustained was a broken wooden leg. Following the collapse of the eastern Whipple truss span, a temporary wooden span was hastily constructed between the western truss span and the eastern abutment, so that traffic was uninterrupted until the new bridge was completed and opened on June 21, 1937. The 1937 bridge was used until 2010 when it was replaced by the current South Branch Bridge.
### Capon Lake Bridge
Because Whipple truss bridges were easily disassembled and re-erected, the remaining western span of the Whipple truss over the South Branch was dismantled in 1937 and relocated to Capon Lake in southeastern Hampshire County to convey Capon Springs Road (West Virginia Secondary Route 16) between West Virginia Route 259 and Capon Springs. According to Branson Himelwright, a Capon Springs resident who had been a construction worker involved in the re-erection of the Whipple truss span at Capon Lake, the only two ways to cross the Cacapon River to reach Capon Springs were to cross a swinging footbridge or ford the river. During the bridge's construction, a new pier and abutments were constructed to carry the Whipple truss span and a connected Pratt truss that had been salvaged from an unknown bridge. Himelwright and Jacob "Moss" Rudolph, who had also participated in the bridge's construction, stated in interviews that both the site excavation and concrete work for the pier and abutments were completed by hand.
The newly erected Capon Lake Bridge was dedicated on August 20, 1938, with a ceremony including musical performances by the Romney High School and Capon Springs Resort bands. Former West Virginia Governor and Capon Springs native Herman G. Kump, West Virginia State Road Commission Secretary Cy Hammill, and numerous other state officials were in attendance at the dedication.
In 1991, the new steel stringer Capon Lake Bridge was constructed 187 feet (57 m) to the southwest of the Capon Lake Whipple Truss Bridge, after which the Whipple truss bridge was closed to vehicle traffic. Due to its rarity, age, and engineering significance, West Virginia Division of Highways District 5 decided to preserve the Whipple truss bridge. During the bridge's restoration, the Pratt truss span was removed due to significant deterioration, and the roadway deck was also removed. A wooden pedestrian walkway and observation deck were constructed across the full span of the remaining truss bridge.
The Capon Lake Whipple Truss Bridge was listed on the National Register of Historic Places on December 15, 2011, for its "engineering significance as an excellent example of a Whipple/Murphy Truss bridge." Since its listing, the bridge has been maintained as a historic site for pedestrians by the West Virginia Division of Highways District 5. In 2012, the West Virginia Division of Highways, in association with the West Virginia Archives and the history department of the West Virginia Division of Culture and History, installed a historical marker at the northwestern entry to the bridge as part of the West Virginia Highway Historical Marker Program. The marker reads:
> First erected in 1874 as a two span bridge on US 50 near Romney, one span was moved here in 1938 and re-erected on a new foundation. The 17' wide by 176' long bridge is a Whipple–Murphy Truss. The state's oldest extant metal truss, the bridge is one of a few of its type in WV. Listed in the National Register of Historic Places in 2011.
## See also
- List of historic sites in Hampshire County, West Virginia
- National Register of Historic Places listings in Hampshire County, West Virginia
- Hayden Bridge (Springfield, Oregon) – another example of a notable Whipple–Murphy truss bridge
|
2,281,029 |
1989 Tour de France
| 1,171,634,526 |
Cycling race in France in 1989
|
[
"1989 Tour de France",
"1989 in French sport",
"1989 in road cycling",
"July 1989 sports events in Europe",
"Tour de France by year"
] |
The 1989 Tour de France was the 76th edition of the Tour de France, one of cycling's Grand Tours. The race consisted of 21 stages and a prologue, over 3,285 km (2,041 mi). It started on 1 July 1989 in Luxembourg before taking an anti-clockwise route through France to finish in Paris on 23 July. The race was won by Greg LeMond. It was the second overall victory for the American, who had spent the previous two seasons recovering from a near-fatal hunting accident. In second place was previous two-time Tour winner Laurent Fignon, ahead of Pedro Delgado, the defending champion.
Delgado started the race as the favourite, but lost almost three minutes on his principal rivals when he missed his start time in the prologue individual time trial. The race turned out to be a two-man battle between LeMond and Fignon, with the pair trading off the race leader's yellow jersey several times. Fignon managed to match LeMond in the prologue, but in the other three individual time trials he lost time to LeMond, who took advantage of aerodynamic elbow-rest handlebars formerly used in triathlon events. Delgado launched several attacks in the mountain stages to eventually finish third, while LeMond rode defensively to preserve his chances. Fignon rode well in the mountains, including a strong performance at Alpe d'Huez which gave him the race lead on stage 17.
In the closest Tour in history, LeMond was trailing Fignon by fifty seconds at the start of the final stage, an individual time trial into Paris. LeMond was not expected to be able to make up this deficit, but he completed the 24.5 km (15.2 mi) stage at an average speed of 54.545 km/h (33.893 mph), the fastest individual time trial ever ridden in the Tour de France up to that point, and won the stage. Fignon's time was fifty-eight seconds slower than LeMond's, costing him the victory and giving LeMond his second Tour title by a margin of only eight seconds. From stage 5 onward, LeMond and Fignon were the only two men to lead the race. The two riders were never separated by more than fifty-three seconds throughout the event. Owing to its competitive nature, the 1989 Tour is often ranked among the best in the race's history.
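As a rough cross-check, LeMond's stage time and the eight-second final margin both follow from the figures quoted above. This is only a sketch: the stage time itself is not stated in the text and is derived here from the quoted distance and average speed.

```python
# Hypothetical sanity check of the quoted figures. The distance, average
# speed, 50-second deficit, and 58-second difference come from the text;
# the stage time and final margin are derived from them.

distance_km = 24.5        # length of the final time trial into Paris
avg_speed_kmh = 54.545    # LeMond's reported average speed

# Stage time implied by the quoted distance and speed
stage_seconds = round(distance_km / avg_speed_kmh * 3600)
minutes, seconds = divmod(stage_seconds, 60)
print(f"LeMond's implied stage time: {minutes} min {seconds} s")  # 26 min 57 s

# Fignon started the stage 50 seconds ahead but rode 58 seconds slower
final_margin = 58 - 50
print(f"Final margin of victory: {final_margin} s")  # 8 s
```

The derived time of roughly 27 minutes is consistent with the text's claim that this was the fastest individual time trial ridden in the Tour up to that point.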
The winning team in the team classification placed four cyclists in the top ten of the general classification. The same team also won four of the five secondary individual classifications: Sean Kelly won both the points and intermediate sprints classifications, Gert-Jan Theunisse won the mountains classification and Steven Rooks won the combination classification. The young rider classification was won by Fabrice Philipot from the Toshiba team.
## Teams
The 1989 Tour had a starting field of twenty-two teams of nine cyclists. Prior to 1989, the Société du Tour de France, organisers of the Tour, chose freely which teams they invited to the event. For 1989, the sport's governing body, the Fédération Internationale de Cyclisme Professionnel (FICP), demanded that the highest-ranked teams in the FICP Road World Rankings receive an automatic invitation. The Tour organisers relented in exchange for being allowed to run the race over 23 days instead of the original 21-day period given by the FICP.
Eighteen teams received their invitations through the FICP rankings, while the organisers allocated four teams with wild cards. The Ariostea team would have been eligible to start through their ranking, but decided against competing. This allowed Greg LeMond's team to enter the race. Not invited was the Teka team, which failed to accumulate enough points in the World Rankings after Reimund Dietzen had to leave the 1989 Vuelta a España following a career-ending crash. Of the 198 cyclists starting the race, 39 were riding the Tour de France for the first time. The youngest rider was Jean-Claude Colotti, who turned 22 years old on the day of the prologue; the oldest, at 36 years and 139 days, was Helmut Wechselberger.
The teams entering the race were:
Qualified teams
Invited teams
## Pre-race favourites
Before the 1989 Tour began, Pedro Delgado (), the defending champion, was considered a strong favourite to win the race. He had taken the title the previous year in convincing fashion, with a lead of over seven minutes. Prior to the Tour, Delgado had also won the 1989 Vuelta a España, and was therefore considered to be in good form. However, controversy around a failed doping test during the 1988 Tour put a cloud of suspicion over the reigning champion.
Next to Delgado, Laurent Fignon () was also given a good chance for overall victory. The Frenchman had won the Tour in 1983 and 1984, but his form in subsequent years had been inconsistent. According to German news magazine Der Spiegel, the cycling world had "written [Fignon] off" during four years with few victories after 1984. In 1989 however, a victory at Milan–San Remo and more importantly, the three-week Grand Tour in Italy, the Giro d'Italia, had propelled Fignon back into the spotlight. A strong Super U team surrounding him was also considered to be in his favour.
Stephen Roche () had won the Tour in 1987 ahead of Delgado, but missed the race in 1988 with a knee injury. A strong spring season with victory at the Tour of the Basque Country, second place at Paris–Nice and a top-ten placing at the Giro d'Italia made it seem that Roche was finding his form again.
Several other riders were named as favourites for a high place in the general classification. Charly Mottet (), fourth overall in 1987, had won the Critérium du Dauphiné Libéré shortly before the Tour started and was ranked number one in the FICP Road World Rankings, a position that had been held by Sean Kelly () for five straight years. Kelly had never been a strong contender for the general classification, despite an overall victory at the 1988 Vuelta a España. Next to targeting a high place in the overall rankings, Kelly hoped to secure a record-breaking fourth win in the points classification. Kelly's PDM squad also had two talented Dutch riders in their ranks with hopes of a high finish: Steven Rooks, who had been second the year before, and Gert-Jan Theunisse. Since PDM had elected not to start either the Giro d'Italia or the Vuelta a España earlier in the season, the team hoped that their riders, without an additional three-week race in their legs, would be fresher than their rivals. Other favourites included Erik Breukink (Panasonic–Isostar), Andrew Hampsten (), Steve Bauer (), Fabio Parra () and Robert Millar ().
According to Sports Illustrated, Greg LeMond's "name was never mentioned among the pre-race favourites". LeMond had finished every Tour he had entered up to this point on the podium, including the first-ever victory for an American rider in 1986. His career was interrupted when he was accidentally shot by his brother-in-law in a hunting accident on Easter of 1987. About 60 pellets hit his body; his life was saved by emergency surgery, but LeMond struggled to return to professional cycling, leaving the successful PDM team at the end of 1988 and joining the relatively small ADR team. His team was not considered strong enough to help him during stage races, and ADR's financial troubles meant that LeMond had not been paid by his team in 1989 before the Tour started. Even the fee for late entry into the Tour was secured only when LeMond arranged additional sponsorship. With poor performances at both the inaugural Tour de Trump and the Giro d'Italia, LeMond's chances at the Tour de France looked slim. However, he had placed second in the final-stage individual time trial at the Giro, taking more than a minute out of eventual winner Fignon. This led to Super U's team manager Cyrille Guimard commenting to Fignon: "LeMond will be dangerous at the Tour."
## Route and stages
The route of the 1989 Tour was unveiled in October 1988. With a distance of 3,285 km (2,041 mi), it was the shortest edition of the Tour in more than eighty years. The race started on 1 July with a prologue individual time trial, followed by 21 stages. On the second day of the race, two stages were held: a plain road stage followed by a team time trial. There was a transfer from Wasquehal to Dinard on a rest day between stages 4 and 5, and a second transfer between L'Isle-d'Abeau and Versailles after the finish of the penultimate stage. The second rest day was after the mountain time trial stage 15. The race lasted 23 days, including the two rest days, and ended on 23 July.
The race started outside France, in Luxembourg, and passed through the Wallonia region of Belgium, before taking an anti-clockwise route through France, starting in the northwest in Brittany before visiting the Pyrenees and then the Alps. The race consisted of seven mountain stages, two in the Pyrenees and five in the Alps. The highest point of elevation in the race was the Col du Galibier at 2,645 m (8,678 ft). In total there were five time trial events including the prologue. Pedro Delgado pointed to stage 17 up the Alpe d'Huez, one of the most prominent mountain-top finishes of the Tour, as the stage most likely to decide the outcome of the race. Unusually, the last of the time trials was held on the last stage of the race, finishing on the Champs-Élysées. This had been the idea of former race director Jean-François Naquet-Radiguet, who had taken over the position from Jacques Goddet and Félix Lévitan in May 1987. Naquet-Radiguet was unpopular in France and was replaced by Jean-Marie Leblanc before the 1989 route was announced, but the final-day time trial remained. It was the first time that the Tour ended with a time trial since 1968, when Jan Janssen overcame a 16-second deficit to Herman Van Springel to win the Tour by 38 seconds, the smallest margin up to 1989.
## Race overview
### Early stages
The prologue time trial in Luxembourg City was won by Erik Breukink, with the second to fourth places taken by Laurent Fignon, Sean Kelly, and Greg LeMond, all six seconds slower. The dominant story of the day was Pedro Delgado. Having warmed up away from the crowd a few hundred yards from the start ramp, he missed his start time, his team unable to find him. He finally set off 2:40 minutes after his designated start, with the missed time counted against him, and finished last on the stage, 2:54 minutes down on Breukink. Though he had conceded only a third of his winning margin from the previous year's Tour, and was therefore still not to be counted out, riders such as Fignon felt that "victory in the Tour was already a distant memory for him" at this stage.
Delgado would lose even more time on the second day of the race. On the first stage, Acácio da Silva () won from a breakaway group to become the first Portuguese rider to wear the yellow jersey. Delgado attacked from the peloton (the main field) on the final steep climb before the finish, but he was brought back into the field. In the team time trial in the afternoon, Delgado fell further back as he struggled to keep up and his Reynolds team finished last on the stage. He was still in last place on the general classification, almost ten minutes behind the yellow jersey. The race lead was retained by da Silva, but the victory went to Fignon's Super U team. Fignon was now in third place, having taken 51 seconds out of LeMond, whose ADR squad finished fifth.
The third stage, finishing at the racing circuit of Spa-Francorchamps, was won by Raúl Alcalá (), who got the best of a five-man breakaway group up the climb to the line. Da Silva retained the jersey and would do so the next day as well. The fourth stage, which contained cobbled sections, was won by Jelle Nijdam (Superconfex–Yoko). He rode away from the peloton 1.5 km (0.93 mi) before the finish and held on with three seconds in hand at the line.
In the stage 5 individual time trial, LeMond won the stage and with it the yellow jersey, taking the lead in the Tour by five seconds ahead of Fignon. Delgado placed second on the stage, 24 seconds behind, with Fignon in third a further 32 seconds behind. LeMond's victory was aided by the use of aerodynamic elbow-rest handlebars, formerly seen in triathlon events, which allowed him a more aerodynamic position on the bike. The 7-Eleven team had used them at the Tour de Trump earlier in the year and LeMond adopted them for the two flat time trials in the Tour de France to great effect. Fignon and his team manager Cyrille Guimard felt that the tribars were not within the regulations, since the rules only allowed three support points for the rider on the bike. However, they did not issue a complaint, a fact lamented by Fignon in his 2010 autobiography. Delgado rode a strong time trial, aided by favourable weather conditions as he competed in the dry, while later starters had to ride through the rain. Kelly meanwhile lost more than five minutes to LeMond, after having to throw up about 20 km (12 mi) into the stage.
Stage 6, the longest of the race, had little bearing on the main classifications, but produced a human interest story: French domestique Joël Pelier (BH) had never been watched in his professional career by his mother, who was dedicated to caring for Pelier's severely disabled sibling. Unbeknownst to Pelier, his parents were waiting for him at the finish line, with his brother in a residential home for the week. Pelier, spurred on by his team manager, attacked with 180 km (110 mi) of the windy and wet stage remaining. He held an advantage of up to 25 minutes at one point, but suffered during the later part. Eventually, he won the stage one-and-a-half minutes ahead of the field and had a teary reunion with his parents. It was then the second-longest breakaway in Tour de France history after Albert Bourlon's in 1947, and has since been surpassed by Thierry Marie.
The next two stages were relatively uneventful. Stage 7 was won by Etienne De Wilde () from a group of four riders who were slightly clear of the field. The next day, a four-man breakaway stayed clear of the peloton, with Martin Earley () taking victory. Fignon put in an attack during the stage, but was brought back.
### Pyrenees
The race entered the high mountains for the first time on the next two stages, as the Tour visited the Pyrenees. On stage 9 from Pau to Cauterets, future five-time Tour winner Miguel Induráin () attacked at the bottom of the Col d'Aubisque and led the race for the rest of the day. He was followed by two riders of the BH team, Anselmo Fuerte and Javier Murguialday. At one point Induráin was more than six minutes ahead of the group containing the race favourites; he slowed during the final ascent at Cauterets, but held on to the stage win ahead of Fuerte by 27 seconds. Behind them, Mottet attacked from the peloton, with Delgado following and soon overtaking him. Delgado finished third on the stage and regained 27 seconds on Fignon and LeMond. It was on this stage that Fignon started to complain about LeMond riding too defensively for strategic reasons, accusing him of not putting any work into counterattacks. He later wrote in his autobiography: "All he did was sit tight and take advantage of the work I put in. To be honest, it was extremely frustrating." LeMond defended his tactics, claiming that as the leader it was not for him to push. Former race winner Stephen Roche hit his already injured knee on his handlebars on the descent of the Col de Marie-Blanque, reaching the finish in extreme pain many minutes behind the other favourites. He did not start the next stage.
The second Pyrenean stage ended at the ski resort of Superbagnères. Robert Millar and Mottet attacked on the approach to the climb of the Col du Tourmalet. Behind them, Fignon suffered a weak moment on the climb and allegedly held on to a photographer riding on a motorcycle, without the race directors handing out any punishment for the offence. Delgado attacked towards the end of the climb and reached Millar and Mottet after the descent of the Tourmalet. Together, they reached Superbagnères, where Delgado was provoked by an over-enthusiastic spectator and threw a bidon at him. Delgado then moved clear but was recaptured by Millar, who took the stage victory. Mottet was third, 19 seconds down. Rooks and Gert-Jan Theunisse led the next group, containing Fignon, into the finish. Fignon attacked LeMond within the final kilometre of the stage, taking twelve seconds on the general classification and with it the yellow jersey. The order in the overall standings after these two mountain stages was Fignon ahead of LeMond by seven seconds, followed by Mottet a further 50 seconds behind. Delgado had moved up to fourth, now within three minutes of Fignon.
### Transition stages
Stage 11 from Luchon to Blagnac had a flat profile. Rudy Dhaenens () attacked from a six-man breakaway just as it was caught by the peloton. With just a couple of hundred metres left and the stage win almost certainly in his hands, Dhaenens misjudged his speed going through a corner and crashed, allowing the rest of the field to pass him. Mathieu Hermans won the stage for the Paternina team in a sprint finish. During the stage, the last remaining riders of their team abandoned: pre-race favourite Fabio Parra and José Hipolito Roncancio. Stage 12 was interrupted by a protest by ecologists against a new waste plant, which aided a breakaway by Valerio Tebaldi () and Giancarlo Perini (). A crash in the peloton, involving about 30 riders, further hindered the field. Tebaldi won the two-man sprint to take the stage 21 minutes ahead of the main field, the second-highest post-World War II margin between a stage winner and the rest of the field, behind only José Viejo's 22:50 minutes on stage 11 of the 1976 Tour de France.
Stage 13 was held on the Bastille Day bicentenary, the 200th anniversary of the Storming of the Bastille during the French Revolution and France's main national holiday. The PDM team, at the insistence of their team director, attacked in the feed zone of the stage, thereby violating the unwritten rules of the field. They were brought back, but the acceleration split the peloton into two parts. Fignon attacked from the first group with Mottet. Both stayed out ahead for about an hour before being recaptured. Then, two other riders broke free, Vincent Barteau () and Jean-Claude Colotti. Barteau left Colotti behind on the hills around the finishing city of Marseille and went on to win the stage. Delgado was handed a ten-second time penalty for illegally accepting food outside the feed zone, while pre-race favourite Breukink retired from the race about 30 km (19 mi) from the finish.
The next stage saw a repeat of Nijdam's stage 4 exploits: he again broke free of the field shortly before the finish line and held off his pursuers to take victory. The day's breakaway, which included Luis Herrera (Café de Colombia), a pre-race contender who had so far disappointed, was caught within the last 1.5 km (0.93 mi) of the stage.
### Alps
The following five stages took the riders through the Alps. The first of these, stage 15, was an individual time trial to the ski station at Orcières-Merlette. Induráin set an early standard with a time of 1:11:25 hours, only to be outdone six minutes later by Rooks, who improved on Induráin's time by 43 seconds. Theunisse was fastest up the second of the two climbs of the course, gathering more points in the mountains classification to move clear of rival Millar. Of the true contenders for the overall victory, Delgado set the fastest times at all the check points, but slowed on the last climb to eventually finish the stage fourth, 48 seconds down on Rooks. He attributed his time loss to a worsening callus that would hamper him for the remainder of the race. Behind Rooks, Marino Lejarreta (Paternina) was second fastest, moving into the top five overall. LeMond meanwhile took back the yellow jersey from Fignon, finishing fifth on the stage, just nine seconds slower than Delgado. His advantage over Fignon at this stage was 40 seconds.
The day after the second rest day, on the stage from Gap to Briançon, the riders faced the climbs of the Col de Vars and the Col d'Izoard. On the first of these climbs, Fignon and Mottet dropped back from the group of favourites, while LeMond stayed close to Delgado. Fignon and Mottet got back to the group on the descent, only for Delgado to attack on the climb of the Izoard. He was joined by LeMond, Theunisse and Mottet, while Fignon fell back once again. LeMond attacked after the descent, but on the slightly rising road towards the finish, Delgado made contact again to finish within the same time. Fignon meanwhile lost 13 seconds on LeMond. The two were now separated by 53 seconds in the general classification, the widest the margin would be the entire Tour. Up ahead, Pascal Richard () won the stage from a breakaway.
Stage 17, which finished at Alpe d'Huez, one of the most famous climbs in cycling, was expected to be the decisive part of the race overall. Theunisse, wearing the polka-dot jersey as leader of the mountains classification, attacked on the first climb, the Col du Galibier. He was joined by two more riders, Franco Vona () and Laurent Biondi (Fagor–MBK) before the ascent of the Col de la Croix de Fer, but rode away from his breakaway companions before he reached the summit. At Le Bourg-d'Oisans, the village before the climb to Alpe d'Huez starts, he led the group of favourites by more than four minutes and held on to win the stage by over a minute. Behind him, the battle for the yellow jersey intensified. Fignon, LeMond and Delgado entered the climb together and Fignon instantly attacked at the first hairpin bend. LeMond stuck to his wheel, but Guimard, knowing LeMond well from their days together at the Renault team, saw that he was struggling. He drew his team car level with Fignon and ordered him to attack again with 4 km (2.5 mi) to go to the finish line. LeMond fell back and only Delgado was able to keep up with Fignon. The two reached the finish together, 1:19 minutes ahead of LeMond. Fignon thus regained the yellow jersey, with an advantage of 26 seconds over LeMond.
The following stage to Villard-de-Lans featured a breakaway, including previous stage winners Millar and Richard. They were brought back once the race reached the Côte de Saint-Nizier climb, with only Herrera left ahead of the peloton. As the field made contact with Herrera, Fignon attacked. LeMond, Delgado and Theunisse followed him, but their unwillingness to work together allowed Fignon to extend his advantage. He passed the summit of the climb 15 seconds clear of his pursuers and in the valley behind, a group containing Alcalá and Kelly caught up to the three chasers. Fignon started the final 3 km (1.9 mi) climb up to the finish with a margin of 45 seconds. He took the stage win, but his advantage was reduced to 24 seconds by the time LeMond crossed the line, meaning that the difference between the two was now 50 seconds in the overall standings.
Stage 19 was the last one in the Alps and finished in Aix-les-Bains. Already by the second climb of the day, the Col de Porte, the top four riders in the general classification, Fignon, LeMond, Delgado and Theunisse, joined by seventh-placed Lejarreta, had pulled clear. Behind, Mottet, sitting fifth overall, was struggling and would relinquish his position to Lejarreta by the end of the stage. On the race's last climb, the Col du Granier, LeMond attacked repeatedly, but Fignon followed him every time. Going into the town of Chambéry, site of the World Championships one month later, Lejarreta misjudged a roundabout and crashed, taking all riders with him but Delgado, who waited for the others to remount and join him. The five riders settled the stage win in a sprint finish, with LeMond taking the honours, although the difference between him and Fignon in the overall standings remained at 50 seconds.
### Finale
With the final-stage time trial to Paris looming ahead, the field took a steady pace on stage 20 from Aix-les-Bains to L'Isle d'Abeau. Fignon put in a less-than-serious attack, but was quickly brought back. In the run-in to the finish, Phil Anderson () attacked, but was recaptured. Then, about 275 m (301 yd) from the line, Nijdam attempted to go for a third stage win, but was beaten to the line by Giovanni Fidanza (Chateau d'Ax), with Kelly taking third.
After stage 19, Fignon had developed saddle sores, which gave him pain and made it impossible to sleep the night before the time trial. He was, however, still confident that he would not lose his 50-second advantage over LeMond during the 24.5 km (15.2 mi) from Versailles to the Champs-Élysées. In the final-day time trial, LeMond again opted for the aerodynamic handlebars, a tear-drop helmet, and a rear disc wheel. Fignon meanwhile used two disc wheels but ordinary handlebars, and rode bareheaded, his ponytail moving in the wind. When Fignon reached the half-distance time check, LeMond had taken 21 seconds out of his lead. LeMond finished with a time of 26:57 minutes, the fastest-ever time trial in the history of the Tour, at 54.545 km/h (33.893 mph). As LeMond collapsed on the floor from exhaustion, Fignon made his way to the finish. He ended with a time of 27:55 minutes. With an average speed of 52.66 km/h (32.72 mph), it was the fastest time trial he had ever ridden. Nevertheless, he finished third on the stage, 58 seconds down on LeMond, and therefore lost the race by the narrow margin of eight seconds. A November 1989 Bicycling article, supported by wind-tunnel data, estimated that LeMond may have gained one minute on Fignon through the use of the new aerobars. Eight seconds remains the smallest winning margin in Tour de France history. This was the final Tour de France stage win of LeMond's career. Further down the classification, Millar lost ninth place to Kelly during the final-day time trial. Hermans became the second-ever stage winner to finish the Tour de France in last place, after Pietro Tarchini in 1947. He had already been the lanterne rouge in the 1987 Tour. Of the 198 starters, 138 reached the finish of the last stage in Paris.
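The figures reported for the final time trial are internally consistent, as a short Python sketch (using only the distance, times, and 50-second pre-stage lead given above) confirms:

```python
# Check the arithmetic of the 1989 final time trial:
# 24.5 km, LeMond 26:57 vs. Fignon 27:55, Fignon leading by 50 s overall.

def to_seconds(minutes, seconds):
    """Convert a minutes:seconds time to total seconds."""
    return minutes * 60 + seconds

distance_km = 24.5
lemond = to_seconds(26, 57)   # 1617 s
fignon = to_seconds(27, 55)   # 1675 s

speed_kmh = distance_km / lemond * 3600   # average speed over the stage
stage_gap = fignon - lemond               # seconds Fignon lost on the stage
final_margin = stage_gap - 50             # minus his 50-second overall lead

print(round(speed_kmh, 3))  # 54.545 km/h
print(stage_gap)            # 58 seconds
print(final_margin)         # 8-second overall win for LeMond
```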
### Aftermath
LeMond's unexpected Tour victory resulted in significant media attention, with sports writer Nige Tassell describing it in 2017 as "now the biggest sports story of them all". Not only had LeMond overcome a significant time deficit, he had also won the Tour after coming back from a near-fatal hunting accident. Owing to its small margin of victory and exciting racing, the 1989 Tour has repeatedly been named as one of the best editions of all time. In 2009, journalist Keith Bingham called it "the greatest Tour of them all", while Cyclingnews.com in 2013 described it as "arguably the best [Tour] there's ever been". American media, traditionally not overly interested in cycling, made LeMond's victory headline news, and TV broadcasters interrupted their regular programming to report it. Sports Illustrated, which named LeMond their Sportsperson of the Year, called it a "heroic comeback". A month after the Tour, LeMond also won the Road World Championship race in Chambéry, with Fignon coming in sixth. Owing to LeMond's overall victory, ADR received the largest share of the prize money, at £185,700, followed by £129,000 for PDM and £112,700 for Super U. However, ADR became the lowest-ranked team in the history of the Tour up to that point to include the overall winner, placing 17th in the team classification. Apart from LeMond, only three other ADR riders finished the race, all more than two hours behind him.
Still not having been paid by ADR, LeMond signed a three-year contract worth \$5.5 million with for the 1990 season, the then-richest contract in the sport's history. He would go on to win a third Tour the following year, before finishing seventh in 1991 and retiring in 1994. Fignon on the other hand struggled with the disappointment of losing the Tour by such a small margin and the ridicule directed at him because of it. He continued his career, without much success, never coming close to winning the Tour again. A sixth-place finish in 1991 was followed by 23rd overall the following year. He retired in 1993, having won his last Tour stage in 1992.
## Classification leadership and minor prizes
There were several classifications in the 1989 Tour de France. The most important was the general classification, calculated by adding each cyclist's finishing times on each stage. The cyclist with the least accumulated time was the race leader, identified by the yellow jersey; the winner of this classification is considered the winner of the Tour. Just as in the three previous editions of the Tour, no time bonuses (time subtracted) were awarded at the finish of each stage. However, during the first half of the race, the first three riders crossing an intermediate sprint were given 6, 4, and 2 bonus seconds respectively. Greg LeMond won the general classification. Laurent Fignon spent the most stages as leader, nine to LeMond's eight. During the race, the leader changed seven times. The only other riders to lead the general classification in 1989 were Erik Breukink, for one day after the prologue, and Acácio da Silva, for the four days that followed. Pedro Delgado wore the yellow jersey on the prologue as the winner of the previous edition.
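The rules above amount to a simple computation, sketched here in Python (the riders, times, and sprint results are invented for illustration):

```python
# Sketch of the 1989 general-classification rules: sum each rider's stage
# times, subtract intermediate-sprint bonuses (6/4/2 s for the first three
# across, first half of the race only), and rank by lowest total.

SPRINT_BONUS = [6, 4, 2]  # bonus seconds for an intermediate sprint

def general_classification(stage_times, sprint_results):
    """stage_times: {rider: [finishing time in seconds per stage]};
    sprint_results: list of top-three rider lists, one per sprint."""
    totals = {rider: sum(times) for rider, times in stage_times.items()}
    for top_three in sprint_results:
        for rider, bonus in zip(top_three, SPRINT_BONUS):
            totals[rider] -= bonus
    # the lowest accumulated time leads and wears the yellow jersey
    return sorted(totals.items(), key=lambda item: item[1])

times = {"A": [3600, 3650], "B": [3590, 3655]}  # invented data
sprints = [["A", "B"]]                           # A first, B second
print(general_classification(times, sprints))    # [('B', 7241), ('A', 7244)]
```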
Additionally, there was a points classification, where cyclists were given points for finishing among the best in a stage finish, or in intermediate sprints. The cyclist with the most points led the classification, and was identified with a green jersey. High finishes on flat stages awarded more points, 45 for the winner down to 1 point for 25th place. In mountain stages and time trials, 15 points were given to the winner down to 1 point for 15th. The first three riders across intermediate sprints received points; 4, 2, and 1 respectively. Sean Kelly won this classification for a record fourth time, a record since broken by both Erik Zabel and Peter Sagan.
There was also a mountains classification. The organisation had categorised some climbs as either hors catégorie (beyond categorization), first, second, third, or fourth-category, with the lower-numbered categories representing harder climbs. Points for this classification were won by the first cyclists that reached the top of these climbs, with more points available for the higher-categorised climbs. Mountains ranked hors catégorie gave 40 points for the first rider across, with the subsequent categories giving 30, 20, 7, and 4 points to the first at the summit respectively. The cyclist with the most points led the classification, and wore a white jersey with red polka dots. Gert-Jan Theunisse won the mountains jersey with a lead of over 100 points.
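The point values given above can be expressed as a small lookup table; this sketch models only the first-rider points stated in the text (riders placed lower on a climb also scored, on a scale the article does not detail), and the example climbs are invented:

```python
# Mountains-classification scoring for the first rider over each summit,
# per the 1989 category values: HC 40, then 30/20/7/4 for categories 1-4.

POINTS_FOR_FIRST = {
    "HC": 40,  # hors catégorie, the hardest climbs
    1: 30,
    2: 20,
    3: 7,
    4: 4,
}

def mountains_points(summits_won):
    """summits_won: categories of the climbs a rider crossed first."""
    return sum(POINTS_FOR_FIRST[category] for category in summits_won)

# e.g. first over one hors catégorie and two first-category climbs
print(mountains_points(["HC", 1, 1]))  # 100
```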
For the last time, there was a combination classification. This classification was calculated as a combination of the other classifications (except young rider), with the leader of each classification receiving 25 points down to one point for 25th place. The leader wore the combination jersey. Steven Rooks won this classification, defending his title from the previous year's Tour. Also for the last time, the intermediate sprints classification was calculated. This classification had similar rules to the points classification, but points were only awarded at intermediate sprints. Its leader wore a red jersey. During the first half of the race, 6, 4, and 2 points were awarded; in the second half the numbers were increased to 15, 10, and 5 points respectively. Kelly won the jersey for a third time in his career, which made him the record winner in this classification.
The sixth individual classification was the young rider classification, which was decided in the same way as the general classification, but was limited to riders under 25 years old. In 1989, for the first time since 1975, the leading rider in this classification did not wear the white jersey that had previously been used to identify the classification leader. The white jersey would be reintroduced in 2000. While no jersey was given to the leader, he was still marked by wearing the logo of the European Union on his shoulder. This was the only edition of the race in which this badge was used. The Café de Colombia team forgot to register their rider Alberto Camargo for the classification even though he would have been eligible. Had he been entered, he would have been the winner, as he finished the race in 20th place. Instead, the classification was won by Fabrice Philipot (), who had placed 24th overall.
For the team classification, the times of the best three cyclists per team on each stage were added; the leading team was the team with the lowest total time. The riders in the team that led this classification wore yellow caps. The team points classification was abolished ahead of the 1989 Tour. However, the combined points from the points classification would be used as a tiebreaker if two teams were to be tied on time. The team classification was won by .
In addition, there was a combativity award given after each mass-start stage to the cyclist considered most combative. The decision was made by a jury composed of journalists who gave points. The cyclist with the most points from votes in all stages led the combativity classification. Fignon won this classification and was given the overall super-combativity award. Commemorating the bicentennial anniversary of the French Revolution, a cash prize of 17,890 francs was given out to the first rider passing the 1,789th kilometre of the race at Martres-Tolosane, on stage 11 between Luchon and Blagnac. The prize was taken by Jos Haex (). The Souvenir Henri Desgrange was given in honour of Tour founder Henri Desgrange to the first rider to pass the summit of the Col du Galibier on stage 17. This prize was won by Laurent Biondi (). There was also a Souvenir in honour of five-time Tour winner Jacques Anquetil, who had died two years before, given to the rider who wore the yellow jersey for the most days. This award was won by Fignon, who held the jersey for nine days.
- – On stages 2 and 3, da Silva and Søren Lilholt () were tied on points in the points classification. As da Silva had won a stage, he was considered the leader of that classification, since the number of stage wins served as a tie-breaker. However, he was also leading the general classification, thus wearing the yellow jersey. As Lilholt was leading the intermediate sprint classification, he was wearing the red jersey, while the green points jersey was worn by third-placed Sean Kelly.
- – As da Silva was wearing the yellow jersey and Lilholt, second in the mountains classification, was wearing the red jersey, third placed Roland Le Clerc () wore the polka-dot jersey during stages 2 and 3.
- – Thierry Claveyrolat () left the race on stage 9, thus relinquishing the lead to Induráin.
## Final standings
### General classification
### Points classification
### Mountains classification
### Combination classification
### Intermediate sprints classification
### Young rider classification
### Team classification
### Combativity classification
## FICP ranking
Riders in the Tour competed individually for points that contributed towards the FICP individual ranking. At the end of the Tour, Laurent Fignon replaced Charly Mottet as the leader of the ranking.
## Doping
In total, 87 doping tests were performed during the 1989 Tour de France; all of them were negative. The tests were carried out by the Union Cycliste Internationale's medical inspector, Gerry McDaid.
|
415,109 |
Jo Stafford
| 1,172,344,469 |
American singer (1917–2008)
|
[
"1917 births",
"2008 deaths",
"American jazz singers",
"American parodists",
"American women jazz singers",
"Burials at Holy Cross Cemetery, Culver City",
"Capitol Records artists",
"Catholics from California",
"Columbia Records artists",
"Converts to Roman Catholicism",
"Dot Records artists",
"Grammy Award winners",
"Jazz musicians from California",
"Jo Stafford",
"Long Beach Polytechnic High School alumni",
"Parody musicians",
"People from Coalinga, California",
"Reprise Records artists",
"Singers from California",
"The Pied Pipers members",
"Torch singers",
"Traditional pop music singers"
] |
Jo Elizabeth Stafford (November 12, 1917 – July 16, 2008) was an American traditional pop music singer, whose career spanned five decades from the late 1930s to the early 1980s. Admired for the purity of her voice, she originally underwent classical training to become an opera singer before following a career in popular music, and by 1955 had achieved more worldwide record sales than any other female artist. Her 1952 song "You Belong to Me" topped the charts in the United States and United Kingdom, becoming the second single to top the UK Singles Chart, and the first by a female artist to do so.
Born in remote oil-rich Coalinga, California, near Fresno in the San Joaquin Valley, Stafford made her first musical appearance at age 12. While still at high school, she joined her two older sisters to form a vocal trio named the Stafford Sisters, who found moderate success on radio and in film. In 1938, while the sisters were part of the cast of Twentieth Century Fox's production of Alexander's Ragtime Band, Stafford met the future members of the Pied Pipers and became the group's lead singer. Bandleader Tommy Dorsey hired them in 1939 to perform vocals with his orchestra. From 1940 to 1942, the group often performed with Dorsey's new male singer, Frank Sinatra.
In addition to her singing with the Pied Pipers, Stafford was featured in solo performances with Dorsey. After leaving the group in 1944, she recorded a series of pop songs now regarded as standards for Capitol Records and Columbia Records. Many of her recordings were backed by the orchestra of Paul Weston. She also performed duets with Gordon MacRae and Frankie Laine. Her work with the United Service Organizations giving concerts for soldiers during World War II earned her the nickname "G.I. Jo". Starting in 1945, Stafford was a regular host of the National Broadcasting Company (NBC) radio series The Chesterfield Supper Club and later appeared in television specials—including two series called The Jo Stafford Show, in 1954 in the U.S. and in 1961 in the UK.
Stafford married twice, first in 1937 to musician John Huddleston (the couple divorced in 1943), then in 1952 to Paul Weston, with whom she had two children. She and Weston developed a comedy routine in which they assumed the identity of an incompetent lounge act named Jonathan and Darlene Edwards, parodying well-known songs. The act proved popular at parties and among the wider public when the couple released an album as the Edwardses in 1957. In 1961, the album Jonathan and Darlene Edwards in Paris won Stafford her only Grammy Award for Best Comedy Album, and was the first commercially successful parody album. Stafford largely retired as a performer in the mid-1960s, but continued in the music business. She had a brief resurgence in popularity in the late 1970s when she recorded a cover of the Bee Gees hit "Stayin' Alive" as Darlene Edwards. In the 1990s, she began re-releasing some of her material through Corinthian Records, a label founded by Weston. She died in 2008 in Century City, Los Angeles, and is interred with Weston at Holy Cross Cemetery, Culver City. Her work in radio, television, and music is recognized by three stars on the Hollywood Walk of Fame.
## Early years
Jo Elizabeth Stafford was born in Coalinga, California, in 1917, to Grover Cleveland Stafford and Anna Stafford (née York)—a second cousin of World War I hero Sergeant Alvin York. She was the third of four children. She had two older sisters, Christine and Pauline, and one younger sister, Betty. Both her parents enjoyed singing and sharing music with their family. Stafford's father hoped for success in the California oil fields when he moved his family from Gainesboro, Tennessee, but worked in a succession of unrelated jobs. Her mother was an accomplished banjo player, playing and singing many of the folk songs that influenced Stafford's later career. Anna insisted that her children should take piano lessons, but Jo was the only one among her sisters who took a keen interest in it, and through this, she learned to read music.
Stafford's first public singing appearance was in Long Beach, where the family lived when she was 12. She sang "Believe Me, If All Those Endearing Young Charms", a Stafford family favorite. Her second was far more dramatic. As a student at Long Beach Polytechnic High School with the lead in the school musical, she was rehearsing on stage when the 1933 Long Beach earthquake destroyed part of the school. With her mother's encouragement, Stafford originally planned to become an opera singer and studied voice as a child, taking private lessons from Foster Rucker, an announcer on California radio station KNX. Because of the Great Depression, she abandoned that idea and joined her older sisters Christine and Pauline in a popular vocal group, the Stafford Sisters. The two older Staffords were already part of a trio with an unrelated third member when the act got a big booking at Long Beach's West Coast Theater. Pauline was too ill to perform, and Jo was drafted in to take her place so they could keep the engagement. She asked her glee club teacher for a week's absence from school, saying her mother needed her at home, and this was granted. The performance was a success, and Jo became a permanent member of the group.
The Staffords' first radio appearance was on Los Angeles station KHJ as part of The Happy Go Lucky Hour when Jo was 16, a role they secured after hopefuls at the audition were asked if they had their own musical accompanist(s). Christine Stafford said that Jo played piano, and the sisters were hired, though Jo had not previously given a public piano performance. The Staffords were subsequently heard on KNX's The Singing Crockett Family of Kentucky and California Melodies, a network radio show aired on the Mutual Broadcasting System. While Stafford worked on The Jack Oakie Show, she met John Huddleston, a backing singer on the program, and they were married in October 1937. The couple divorced in 1943.
The sisters found work in the film industry as backup vocalists, and immediately after graduating from high school, Jo worked on film soundtracks. The Stafford Sisters made their first recording, "Let's Get Together and Swing" with Louis Prima, in 1936. In 1937, Jo worked behind the scenes with Fred Astaire on the soundtrack of A Damsel in Distress, creating the arrangements for the film, and with her sisters she arranged the backing vocals for "Nice Work If You Can Get It". Stafford said that her arrangement had to be adapted because Astaire had difficulty with some of the syncopation. In her words: "The man with the syncopated shoes couldn't do the syncopated notes".
## The Pied Pipers
By 1938, the Staffords were involved with Twentieth Century Fox's production of Alexander's Ragtime Band. The studio brought in many vocal groups to work on the film, including the Four Esquires, the Rhythm Kings, and the King Sisters, who began to sing and socialize between takes. The Stafford Sisters, the Four Esquires and the Rhythm Kings became a new vocal group called the Pied Pipers. Stafford later said, "We started singing together just for fun, and these sessions led to the formation of an eight-voice singing group that we christened 'The Pied Pipers'". The group's eight members were Stafford, John Huddleston, Hal Hooper, Chuck Lowry, Bud Hervey, George Tait, Woody Newbury, and Dick Whittinghill.
As the Pied Pipers, they worked on local radio and movie soundtracks. When Alyce and Yvonne King threw a party for their boyfriends' visit to Los Angeles, the group was invited to perform. The King Sisters' boyfriends were Tommy Dorsey's arrangers Axel Stordahl and Paul Weston, who became interested in the group. Weston said the group's vocals were unique for its time and that their vocal arrangements were much like those for orchestral instruments.
Weston persuaded Dorsey to audition the group in 1938, and the eight drove together to New York City. Dorsey liked them and signed them for 10 weeks. After their second broadcast, the sponsor, who was visiting from overseas, heard the group sing "Hold Tight (Want Some Seafood Mama)". Until this point, the sponsor knew only that he was paying for Dorsey's program and that its ratings were very good; transcription discs mailed to him by his advertising agency always arrived broken. He thought that the performance was terrible, and pressured the advertising agency representing his brand to fire the group. They stayed in New York for several months, landing one job that paid them \$3.60 each, and they recorded some material for RCA Victor Records. Weston later said that he and Stordahl felt responsible for the group, since Weston had arranged their audition with Dorsey. After six months in New York and with no work there for them, the Pied Pipers returned to Los Angeles, where four of their members left the group to seek regular employment. Shortly afterwards, Stafford received a telephone call from Dorsey, who told her he wished to hire the group, but wanted only four of them, including Stafford. After she agreed to the offer, the remaining Pied Pipers—Stafford, Huddleston, Lowry, and Wilson—traveled to Chicago in 1939. The decision led to success for the group, especially Stafford, who featured in both collective and solo performances with Dorsey's orchestra.
When Frank Sinatra joined the Dorsey band, the Pied Pipers provided backing vocals for his recordings. Their version of "I'll Never Smile Again" topped the Billboard Chart for 12 weeks in 1940 and helped to establish Sinatra as a singer. Stafford, Sinatra, and the Pied Pipers toured extensively with Dorsey during their three years as part of his orchestra, giving concerts at venues across the United States. Stafford made her first solo recording—"Little Man with a Candy Cigar"—in 1941, after Dorsey agreed to her request to record solo. Her public debut as a soloist with the band occurred at New York's Hotel Astor in May 1942. Bill Davidson of Collier's reported in 1951 that because Stafford weighed in excess of 180 lb, Dorsey was reluctant to give her a leading vocal role in his orchestra, believing she was not sufficiently glamorous for the part. However, Peter Levinson's 2005 biography of Dorsey offers a different account: Stafford recalled that she was overweight, but said Dorsey did not try to hide her because of it.
In November 1942, the Pied Pipers had a disagreement with Dorsey when he fired Clark Yocum, a guitarist and vocalist who had replaced Billy Wilson in the lineup, after Yocum mistakenly gave the bandleader wrong directions at a railroad station in Portland, Oregon. The remaining three members then quit in an act of solidarity. At the time, the number-one song in the United States was "There Are Such Things" by Frank Sinatra and the Pied Pipers. Sinatra also left Dorsey that year. Following their departure from the orchestra, the Pied Pipers played a series of vaudeville dates in the Eastern United States; when they returned to California, they were signed to appear in the 1943 Universal Pictures movie Gals Incorporated. From there, they joined the NBC Radio show Bob Crosby and Company. In addition to working with Bob Crosby, they also appeared on radio shows hosted by Sinatra and Johnny Mercer, and were one of the first groups signed to Mercer's new label, Capitol Records, which was founded in 1942. Weston, who left Dorsey's band in 1940 to work with Dinah Shore, became music director at Capitol.
## Solo career
### Capitol Records and United Service Organization
While Stafford was still working for Dorsey, Johnny Mercer told her, "Some day I'm going to have my own record company, and you're going to record for me." She subsequently became the first solo artist signed to Capitol after leaving the Pied Pipers in 1944. A key figure in helping Stafford to develop her solo career was Mike Nidorf, an agent who first heard her as a member of the Pied Pipers while he was serving as a captain in the United States Army. Having previously discovered artists such as Glenn Miller, Artie Shaw, and Woody Herman, Nidorf was impressed by Stafford's voice, and contacted her when he was demobilized in 1944. After she agreed to let him represent her, he encouraged her to reduce her weight and arranged a string of engagements that raised her profile and confidence.
The success of Stafford's solo career led to a demand for personal appearances, and from February 1945, she embarked on a six-month residency at New York's La Martinique nightclub. Her performance was well received; an article in the July 1945 edition of Band Leaders magazine described it as "sensational", but Stafford did not enjoy singing before live audiences, and it was the only nightclub venue she ever played. Speaking about her discomfort with live performances, Stafford told The New Yorker's Nancy Franklin in a 1996 interview, "I'm basically a singer, period, and I think I'm really lousy up in front of an audience—it's just not me."
Stafford's tenure with the United Service Organizations during World War II, which often had her perform for soldiers stationed in the U.S., led to her acquiring the nickname "G.I. Jo". On returning from the Pacific theater, a veteran told Stafford that the Japanese would play her records on loudspeakers in an attempt to make the U.S. troops homesick enough to surrender. She replied personally to all the letters she received from servicemen. Stafford was a favorite of many servicemen during both World War II and the Korean War; her recordings received extensive airplay on the American Forces radio and in some military hospitals at lights-out. Stafford's involvement with servicemen led to an interest in military history and a sound knowledge of it. Years after World War II, Stafford was a guest at a dinner party with a retired naval officer. When the discussion turned to a wartime action off Mindanao, the officer tried to correct Stafford, who held to her point. He countered her by saying, "Madame, I was there". A few days after the party, Stafford received a note of apology from him, saying he had reread his logs and that she was correct.
### Chesterfield Supper Club, duets, and Voice of America
Beginning on December 11, 1945, Stafford hosted the Tuesday and Thursday broadcasts of the NBC musical variety radio program The Chesterfield Supper Club. On April 5, 1946, the entire cast, including Stafford and Perry Como, participated in the first commercial radio broadcast from an airplane. The initial plan was to use the stand-mounted microphones used in studios, but when these proved to be problematic, the cast switched to hand-held microphones, which became difficult to hold because of the plane's cabin pressure. Three flights were made that day: a rehearsal in the afternoon, then two in the evening—one for the initial 6:00 pm broadcast and another at 10:00 pm for the West Coast broadcast.
Stafford moved from New York to California in November 1946, continuing to host Chesterfield Supper Club from Hollywood. In 1948, she restricted her appearances on the show to Tuesdays, and Peggy Lee hosted the Thursday broadcasts. Stafford left the show when it was expanded to 30 minutes, making her final appearance on September 2, 1949. She returned to the program in 1954; it ended its run on NBC Radio the following year. During her time with Chesterfield Supper Club, Stafford revisited some of the folk music she had enjoyed as a child. Weston, her conductor on the program, suggested using some of the folk music for the show. With her renewed interest in folk tunes came an interest in folklore; Stafford established a contest to award a prize to the best collection of American folklore submitted by a college student. The annual Jo Stafford Prize for American Folklore was handled by the American Folklore Society, with the first prize of \$250 awarded in 1949.
Stafford continued to record. She duetted with Gordon MacRae on a number of songs. In 1948, their version of "Say Something Sweet to Your Sweetheart" sold over a million copies. The following year, they repeated their success with "My Happiness", and Stafford and MacRae recorded "Whispering Hope" together. Stafford began hosting a weekly program on Radio Luxembourg in 1950; working unpaid, she recorded the voice portions of the shows in Hollywood. At the time, she was hosting Club Fifteen with Bob Crosby for CBS Radio.
Weston moved from Capitol to Columbia Records, and in 1950, Stafford followed suit. Content and very comfortable working with him, Stafford had had a clause inserted in her contract with Capitol stating that if Weston left that label, she would automatically be released from her obligations to them. When that happened, Capitol wanted Stafford to record eight more songs before December 15, 1950, and she found herself in the unusual position of simultaneously working for two competing record companies, a rarity in an industry where musicians were seen as assets. In 1954, Stafford became the second artist after Bing Crosby to sell 25 million records for Columbia. She was presented with a diamond-studded disc to mark the occasion.
In 1950, Stafford began working for Voice of America (VOA), the U.S. government broadcaster transmitting programmes overseas to undermine the influence of communism. She presented a weekly show that aired in Eastern Europe, and Collier's published an article about the program in its April 21, 1951, issue that discussed her worldwide popularity, including in countries behind the Iron Curtain. The article, titled "Jo Stafford: Her Songs Upset Joe Stalin", earned her the wrath of the U.S. Communist Daily Worker newspaper, which published a column critical of Stafford and VOA.
### Marriage to Paul Weston and later career
Although Stafford and Paul Weston had known each other since their introduction at the King Sisters' party, they did not become romantically involved until 1945, when Weston traveled to New York to see Stafford perform at La Martinique. They were married in a Roman Catholic ceremony on February 26, 1952, before which Stafford converted to Catholicism. The wedding was conducted at St Gregory's Catholic Church in Los Angeles by Father Joe Kearney, a former guitarist with the Bob Crosby band, who left the music business, trained as a priest, and served as head of the Catholic Labor Institute. The couple left for Europe for a combined honeymoon and business trip: Stafford had an engagement at the London Palladium. Stafford and Weston had two children; Tim was born in 1952 and Amy in 1956. Both children followed their parents into the music industry. Tim Weston became an arranger and producer who took charge of Corinthian Records, his father's music label, and Amy Weston became a session singer, performing with a trio, Daddy's Money, and singing in commercials.
In the 1950s, Stafford recorded a string of duets with Frankie Laine, six of which charted. Their duet of the Hank Williams song "Hey Good Lookin'" made the top 10 in 1951. She had her best-known hits—"Jambalaya", "Shrimp Boats", "Make Love to Me", and "You Belong to Me"—around this time. "You Belong to Me" was Stafford's biggest hit, topping the charts in the United States and the United Kingdom. In the UK, it was the first song by a female singer to top the chart. The record first appeared on U.S. charts on August 1, 1952, and remained there for 24 weeks. In the UK, it entered the charts on November 14, 1952, at number 2, reached number one on January 16, 1953, and stayed on the charts for 19 weeks. In a July 1953 interview, Paul Weston said his wife's big hit was really the "B" side of the single "Pretty Boy", which both Weston and Columbia Records believed would be the big seller.
In 1953, Stafford signed a four-year \$1 million deal with CBS-TV. She hosted the 15-minute The Jo Stafford Show on CBS from 1954 to 1955, with Weston as her conductor and music arranger. She appeared on NBC's Club Oasis in 1958, and on the ABC series The Pat Boone Chevy Showroom in 1959. In the early 1960s, Stafford hosted a series of television specials called The Jo Stafford Show, which centered on music. The shows were produced in England and featured British and American guests including Claire Bloom, Stanley Holloway, Ella Fitzgerald, Mel Tormé, and Rosemary Clooney.
Both Stafford and Weston returned to Capitol in 1961. During her second stint at Capitol, Stafford also recorded for Sinatra's label Reprise Records. The albums issued by Reprise were released between 1961 and 1964, and were mostly remakes of songs from her past. Sinatra sold Reprise to Warner Bros. in 1963, and they retargeted the label at a teenage audience, letting go many of the original artists who had signed up with Sinatra. In late 1965, both Stafford and Weston signed to Dot Records.
## Comedy performances
During the 1940s, Stafford briefly performed comedy songs under the name "Cinderella G. Stump" with Red Ingle and the Natural Seven. In 1947, she recorded a hillbilly-style parody of "Temptation", pronouncing its title "Tim-tayshun". Stafford created Stump after Weston suggested her for the role when Ingle said his female vocalist was unavailable for the recording session. After meeting Ingle at a recording studio, she gave an impromptu performance. The speed of her voice was intentionally increased for the song, giving it the hillbilly sound, and the listening public did not initially know that her voice was on the record. Because it was a lighthearted, impromptu performance and she accepted the standard scale pay, Stafford waived all royalties from the record. Stafford, along with Ingle and Weston, made a personal appearance tour in 1949, and she performed "Temptation" as Cinderella G. Stump. Stafford and Ingle performed the song on network television in 1960 for Startime. Stafford recorded a second song with Ingle in 1948. "The Prisoner of Love's Song" was a parody of "Prisoner of Love", and featured in an advertisement for Capitol releases in the January 8, 1949, edition of Billboard magazine.
Throughout the 1950s, Stafford and Weston entertained party guests by performing skits in which they impersonated a poor lounge act. Stafford sang off-key in a high pitched voice and Weston played songs on the piano in unconventional rhythms. Weston began his impression of an unskilled pianist in or around 1955, assuming the guise "when things got a little quiet, or when people began taking themselves too seriously at a Hollywood party." He put on an impromptu performance of the act the following year at a Columbia Records sales convention in Key West, Florida, after hearing a particularly bad hotel pianist. The audience was very appreciative of his rendition of "Stardust", particularly Columbia executives George Avakian and Irving Townsend, who encouraged Weston to make an album of such songs. Avakian named Weston's character Jonathan Edwards, for the 18th century Calvinist preacher of the same name, and asked him to record under this alias. Weston worried that he might not be able to find enough material for an entire album, and he asked his wife to join the project. Stafford named her off-key vocalist persona Darlene Edwards.
Stafford's creation of Darlene Edwards had its roots in the novelty songs that Mitch Miller, the head of Columbia's artists and repertoire department, had been selecting for her to sing. These included songs such as "Underneath the Overpass", and because she did not agree with Miller's music choices for her, Stafford and her studio musicians often recorded their own renditions of the music, performing the songs according to their feelings about them. Because she had some unused studio time at a 1957 recording session, as a joke Stafford recorded a track as Darlene Edwards. Those who heard bootlegs of the recording responded positively, and later that year, Stafford and Weston recorded an album of songs as Jonathan and Darlene, entitled The Piano Artistry of Jonathan Edwards.
As a publicity stunt, Weston and Stafford claimed that Jonathan and Darlene Edwards were a New Jersey lounge act they had discovered, and denied any personal connection. This ruse led to much speculation about the Edwardses' identities. In an article titled "Two Right Hands" in September 1957, Time reported that some people believed the performers were Harry and Margaret Truman, but the same piece identified Weston and Stafford as the Edwardses. In 1958, Stafford and Weston appeared as the Edwardses on Jack Benny's television program Shower of Stars, and in 1960 on The Garry Moore Show. The Piano Artistry of Jonathan Edwards was followed up with an album of popular music standards, Jonathan and Darlene Edwards in Paris, which was released in 1960 and won that year's Grammy Award for Best Comedy Album. The Academy issued two awards for the category that year; Bob Newhart also received an award for "Spoken Word Comedy" for his album The Button-Down Mind Strikes Back! The Grammy was Stafford's only major award.
The couple continued to release comedy albums for several years, and in 1977 released a cover of the Bee Gees' "Stayin' Alive" as a single, with an Edwards interpretation of Helen Reddy's "I Am Woman" as its "B" side. The same year, a brief resurgence in the popularity of Jonathan and Darlene albums occurred when their cover of "Carioca" was featured as the opening and closing theme to The Kentucky Fried Movie. Their last release, Darlene Remembers Duke, Jonathan Plays Fats, was issued in 1982. To mark the occasion, an interview with Stafford and Weston—in which they assumed the persona of the Edwardses—appeared in the December 1982 edition of Los Angeles Magazine.
## Retirement and later life
In 1959, Stafford was offered a contract to perform at Las Vegas, but declined it to concentrate on her family life. Because she disliked continuously traveling for television appearances that took her away from her children, and no longer found the music business fun, she went into semiretirement in the mid-1960s. She retired fully in 1975. Except for the Jonathan and Darlene Edwards material, and re-recording her favorite song "Whispering Hope" with her daughter Amy in 1978, Stafford did not perform again until 1990, at a ceremony honoring Frank Sinatra. The Westons devoted more time to Share Inc., a charity aiding people with developmental disabilities in which they had been active for many years. Around 1983, Concord Records tried to persuade Stafford to change her mind and come out of retirement, but although an album was planned, she did not feel she would be satisfied with the finished product, and the project was shelved.
Stafford won a breach-of-contract lawsuit against her former record label Columbia in the early 1990s. Because of a clause concerning the payment of royalties in her contract, she secured the rights to all of the recordings she made with the company, including those Weston and she made as Jonathan and Darlene Edwards. After the lawsuit was settled, Stafford and her son Tim reactivated Corinthian Records, which Weston, a devout Christian, had started as a label for religious music in the 1970s, and they began releasing some of her old material.
In 1996, Paul Weston died of natural causes; Stafford continued to operate Corinthian Records. In 2006, she donated the couple's library, including music arrangements, photographs, business correspondence and recordings, to the University of Arizona. Stafford began suffering from congestive heart failure in October 2007, from which she died aged 90 on July 16, 2008. She was buried with her husband at the Holy Cross Cemetery in Culver City, California.
## Style, awards, and recognition
Stafford was admired by critics and the listening public for the purity of her voice, and was considered one of the most versatile vocalists of her era. Peter Levinson said that she was a coloratura soprano, whose operatic training allowed her to sing a natural falsetto. Her style encompassed a number of genres, including big band, ballads, jazz, folk and comedy. Music critic Terry Teachout described her as "rhythmically fluid without ever sounding self-consciously 'jazzy' ", while Rosemary Clooney said of her, "The voice says it all: beautiful, pure, straightforward, no artifice, matchless intonation, instantly recognizable. Those things describe the woman, too." Writing for the New York Sun, Will Friedwald described her 1947 interpretation of "Haunted Heart" as "effective because it's so subtle, because Stafford holds something back and doesn't shove her emotion in the listener's face." Nancy Franklin described Stafford's version of the folk song "He's Gone Away" as "wistful and tender, as if she had picked up a piece of clothing once worn by a loved one and begun singing." Frank Sinatra said, "It was a joy to sit on the bandstand and listen to her". The singer Judy Collins has cited Stafford's folk recordings as an influence on her own musical career. Country singer Patsy Cline was also inspired by Stafford's work.
In their guise of Jonathan and Darlene Edwards, Weston and Stafford earned admiration from their show-business peers. Pianist George Shearing was a fan and would play "Autumn in New York" in the style of Edwards if he knew the couple were in the audience. Ray Charles also enjoyed their performance. Art Carney, who played Ed Norton in the comedy series The Honeymooners, once wrote the Edwardses a fan letter as Norton. However, not everybody appreciated the Edwards act. Mitch Miller blamed the couple's 1962 album Sing Along With Jonathan and Darlene Edwards for ending his sing-along albums and television show, while in 2003, Stafford told Michael Feinstein that the Bee Gees had disliked the Edwardses' version of "Stayin' Alive".
In 1960, Stafford said working closely with Weston had good and bad points. His knowledge of her made arranging her music easy for him, but sometimes it caused difficulties. Weston knew Stafford's abilities and would write or arrange elaborate music because he knew she was capable of performing it. She also said she did not believe she could perform in Broadway musicals because she thought her voice was not powerful enough for stage work. In 2003, she recalled that rehearsal time was often limited before she recorded a song, and how Weston would sometimes slip musical arrangements under the bathroom door as she was in the bath getting ready to go to the studio.
Her work in radio, television, and music is recognized by three stars on the Hollywood Walk of Fame. In 1952, listeners of Radio Luxembourg voted Stafford their favorite female singer. The New York Fashion Academy named her one of the Best Dressed Women of 1955. Songbirds magazine has reported that, by 1955, Stafford had amassed more worldwide record sales than any other female artist, and that she was ranked fifth overall. She was nominated in the Best Female Singer category at the 1955 Emmy Awards. She won a Grammy for Jonathan and Darlene Edwards in Paris, and The Pied Pipers' recording of "I'll Never Smile Again" was inducted into the Grammy Hall of Fame in 1982, as was Stafford's version of "You Belong to Me" in 1998. She was inducted into the Big Band Academy of America's Golden Bandstand in April 2007. Stafford and Weston were founding members of the National Academy of Recording Arts and Sciences.
Stafford's music has been referenced in popular culture. Her recording of "Blues in the Night" features in a scene of James Michener's novel The Drifters (1971), while a Marine Corps sergeant major in Walter Murphy's The Vicar of Christ (1979) hears a radio broadcast of her singing "On Top of Old Smoky" shortly before a battle in Korea. Commenting on the latter reference for his 1989 book Singers and the Song, which includes a chapter about Stafford, author Gene Lees says, "[it] somehow sets Stafford's place in the American culture. You're getting pretty famous when your name turns up in crossword puzzles; you are woven into a nation's history when you turn up in its fiction."
## Discography
## Film and television
Stafford appeared in films from the 1930s onwards, including Alexander's Ragtime Band. Her final on-screen appearance was in the Frank Sinatra tribute Sinatra 75: The Best Is Yet to Come in 1990. She declined several offers of television work because she was forced to memorize scripts (as she was unable to read the cue cards without her glasses), and the bright studio lights caused her discomfort.
## Publications
|
2,908,928 |
MAUD Committee
| 1,170,452,349 |
British nuclear weapons research group, 1940–1941
|
[
"1940 establishments in the United Kingdom",
"1940 in military history",
"1940 in politics",
"Code names",
"Military history of the United Kingdom during World War II",
"Nuclear history of the United Kingdom",
"Science and technology during World War II",
"Secret military programs"
] |
The MAUD Committee was a British scientific working group formed during the Second World War. It was established to perform the research required to determine if an atomic bomb was feasible. The name MAUD came from a strange line in a telegram from Danish physicist Niels Bohr referring to his housekeeper, Maud Ray.
The MAUD Committee was founded in response to the Frisch–Peierls memorandum, which was written in March 1940 by Rudolf Peierls and Otto Frisch, two physicists who were refugees from Nazi Germany working at the University of Birmingham under the direction of Mark Oliphant. The memorandum argued that a small sphere of pure uranium-235 could have the explosive power of thousands of tons of TNT.
The chairman of the MAUD Committee was George Thomson. Research was split among four different universities: the University of Birmingham, the University of Liverpool, the University of Cambridge and the University of Oxford, each having a separate programme director. Various means of uranium enrichment were examined, as was nuclear reactor design, the properties of uranium-235, the use of the then-hypothetical element plutonium, and theoretical aspects of nuclear weapon design.
After fifteen months of work, the research culminated in two reports, "Use of Uranium for a Bomb" and "Use of Uranium as a Source of Power", known collectively as the MAUD Report. The report discussed the feasibility and necessity of an atomic bomb for the war effort. In response, the British created a nuclear weapons project, code named Tube Alloys. The MAUD Report was made available to the United States, where it energised the American effort, which eventually became the Manhattan Project. The report was also revealed to the Soviet Union by its atomic spies, and helped start the Soviet atomic bomb project.
## Origins
### The discovery of nuclear fission
The neutron was discovered by James Chadwick at the Cavendish Laboratory at the University of Cambridge in February 1932. Two months later, his Cavendish colleagues John Cockcroft and Ernest Walton split lithium atoms with accelerated protons. In December 1938, Otto Hahn and Fritz Strassmann at Hahn's laboratory in Berlin-Dahlem bombarded uranium with slow neutrons, and discovered that barium had been produced. Hahn wrote to his colleague Lise Meitner, who, with her nephew Otto Frisch, proved that the uranium nucleus had been split. They published their finding in Nature in 1939. This phenomenon was a new type of nuclear disintegration, and was more powerful than any seen before. Frisch and Meitner calculated that the energy released by each disintegration was approximately 200 megaelectronvolts (MeV), or 32 picojoules (pJ). By analogy with the division of biological cells, they named the process "fission".
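The figures quoted above can be checked with a short back-of-the-envelope calculation. The following sketch is added for illustration only; the constants are standard physical values, not drawn from the original sources:

```python
# Check that 200 MeV per fission is about 32 pJ, and estimate the
# energy released by complete fission of one kilogram of uranium-235.
EV_TO_J = 1.602e-19       # joules per electronvolt
AVOGADRO = 6.022e23       # atoms per mole
MOLAR_MASS_U235 = 0.235   # kg per mole (approximate)
TNT_J_PER_TON = 4.184e9   # joules per ton of TNT (conventional value)

energy_per_fission_j = 200e6 * EV_TO_J                 # ~3.2e-11 J, i.e. ~32 pJ
atoms_per_kg = AVOGADRO / MOLAR_MASS_U235              # ~2.6e24 atoms
energy_per_kg_j = energy_per_fission_j * atoms_per_kg  # ~8e13 J

print(f"{energy_per_fission_j * 1e12:.0f} pJ per fission")
print(f"{energy_per_kg_j / TNT_J_PER_TON:.0f} tons of TNT per kg")
```

Complete fission of a kilogram of uranium-235 thus releases energy of the order of 10<sup>13</sup> J, equivalent to roughly twenty thousand tons of TNT, which is why a weapon based on fission promised to be so extraordinarily powerful.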
Niels Bohr and John A. Wheeler applied the liquid drop model developed by Bohr and Fritz Kalckar to explain the mechanism of nuclear fission. Bohr had the insight that fission at low energies was principally due to the uranium-235 isotope, while at high energies it was mainly due to the more abundant uranium-238 isotope. The former makes up just 0.7% of natural uranium, while the latter accounts for 99.3%. Frédéric Joliot-Curie and his Paris colleagues Hans von Halban and Lew Kowarski raised the possibility of a nuclear chain reaction in a paper published in Nature in April 1939. It was apparent to many scientists that, in theory at least, an extremely powerful explosive could be created, although most still considered an atomic bomb an impossibility. The term was already familiar to the British public through the writings of H. G. Wells, in his 1913 novel The World Set Free.
### British response
In Britain, a number of scientists considered whether an atomic bomb was practical. At the University of Liverpool, Chadwick and the Polish refugee scientist Joseph Rotblat tackled the problem, but their calculations were inconclusive. At Cambridge, Nobel Prize in Physics laureates George Paget Thomson and William Lawrence Bragg wanted the government to take urgent action to acquire uranium ore. The main source of this was the Belgian Congo, and they were worried that it could fall into German hands. Unsure as to how to go about this, they spoke to Sir William Spens, the master of Corpus Christi College, Cambridge. In April 1939, he approached Sir Kenneth Pickthorn, the local Member of Parliament, who took their concerns to the Secretary of the Committee for Imperial Defence, Major General Hastings Ismay. Ismay in turn asked Sir Henry Tizard for an opinion. Like many scientists, Tizard was sceptical of the likelihood of an atomic bomb being developed, reckoning the odds of success at 100,000 to 1.
Even at such long odds, the danger was sufficiently great to be taken seriously. Lord Chatfield, the Minister for Coordination of Defence, checked with the Treasury and Foreign Office, and found that the Belgian Congo uranium was owned by the Union Minière du Haut Katanga company. Its British vice-president, Lord Stonehaven, arranged a meeting with the Belgian president of the company, Edgar Sengier. Since Union Minière management were friendly towards Britain, it was not considered necessary to immediately acquire the uranium, but Tizard's Committee for the Scientific Survey of Air Warfare (CSSAW) was directed to continue the research into the feasibility of atomic bombs. Thomson, at Imperial College London, and Mark Oliphant, an Australian physicist at the University of Birmingham, were each tasked with carrying out a series of experiments on uranium. By February 1940, Thomson's team had failed to create a chain reaction in natural uranium, and he had decided that it was not worth pursuing.
### Frisch–Peierls memorandum
At Birmingham, Oliphant's team had reached a different conclusion. Oliphant had delegated the task to Frisch and Rudolf Peierls, two German refugee scientists who could not work on Oliphant's radar project because they were enemy aliens, and therefore lacked the requisite security clearance. Francis Perrin had defined a critical mass of uranium to be the smallest amount that could sustain a chain reaction, and had calculated it to be about 40 tonnes (39 long tons; 44 short tons). He reckoned that if a neutron reflector were placed around it, this might be reduced to 12 tonnes (12 long tons; 13 short tons). In a theoretical paper written in 1939, Peierls attempted to simplify the problem by using the fast neutrons produced by fission, thus omitting consideration of a neutron moderator. He too believed the critical mass of a sphere of uranium to be "of the order of tons".
However, Bohr had contended that the uranium-235 isotope was far more likely to capture neutrons and undergo fission, even with neutrons of the low energies produced by fission. Frisch began experimenting with uranium enrichment through thermal diffusion. Progress was slow; the required equipment was not available, and the radar project had first call on the available resources. He wondered what would happen if he was able to produce a sphere of pure uranium-235. When he used Peierls' formula to calculate its critical mass, he received a startling answer: less than a kilogram would be required. Frisch and Peierls produced the Frisch–Peierls memorandum in March 1940. In it they reported that a five-kilogram bomb would be equivalent to several thousand tons of dynamite, and that even a one-kilogram bomb would be impressive. Because of the potential radioactive fallout, they thought that the British might find its use morally unacceptable.
Oliphant took the Frisch–Peierls memorandum to Tizard in March 1940. He passed it on to Thomson, who discussed it with Cockcroft and Oliphant. They also heard from Jacques Allier of the French Deuxième Bureau, who had been involved in the removal of the entire stock of heavy water from Norway. He told them of the interest the Germans had shown in the heavy water, and in the activity of the French researchers in Paris. Immediate action was taken: the Ministry of Economic Warfare was asked to secure stocks of uranium oxide in danger of being captured by the Germans; British intelligence agencies were asked to investigate the activities of German nuclear scientists; and A. V. Hill, the British Scientific Attaché in Washington, was asked to find out what the Americans were up to. Hill reported that the Americans had scientists investigating the matter, but they did not think that any military applications would be found.
## Organisation
A committee was created as a response to the Frisch–Peierls memorandum. It held its first meeting on 10 April 1940, in the ground-floor main committee room of the Royal Society in Burlington House in London. Its meetings were invariably held there. The original members were Thomson, Chadwick, Cockcroft, Oliphant and Philip Moon; Patrick Blackett, Charles Ellis and Norman Haworth were subsequently added, along with a representative of the Director of Scientific Research at the Ministry of Aircraft Production (MAP). The MAUD Committee held its first two meetings in April 1940 before it was formally constituted by CSSAW. CSSAW was abolished in June 1940, and the MAUD Committee then came directly under the MAP. Thomson chaired the committee, and initially acted as its secretary as well, writing up the minutes in longhand on foolscap, until the MAP provided a secretary.
At first the new committee was named the Thomson Committee after its chairman, but this was soon exchanged for the more unassuming MAUD Committee. Many assumed MAUD was an acronym, but it was not; the name arose in an unusual way. On 9 April 1940, the day Germany invaded Denmark, Niels Bohr sent a telegram to Frisch. The telegram ended with a strange line: "Tell Cockcroft and Maud Ray Kent". At first it was thought to be a coded reference to radium or other vital atomic-weapons-related information, hidden in an anagram; one suggestion was to replace the "y" with an "i", producing "radium taken". When Bohr returned to England in 1943, it was discovered that the message was addressed to John Cockcroft and to Bohr's housekeeper Maud Ray, who was from Kent. Thus the committee was named the MAUD Committee. Although the initials stood for nothing, it was officially the MAUD Committee, not the Maud Committee.
Because the project was top secret, only British-born scientists were considered for membership. Despite their early contributions, Peierls and Frisch were not allowed to join the MAUD Committee because, in wartime, it was considered a security risk to have enemy aliens in charge of a sensitive project. In September 1940, the Technical Sub-Committee was formed, with Peierls and Frisch as members. Halban, however, did not take his exclusion from the MAUD Committee with as good grace as Frisch and Peierls. In response, two new committees, the MAUD Policy Committee and the MAUD Technical Committee, were created in March 1941 to replace the MAUD Committee and the Technical Sub-Committee. Unlike the original two committees, they had written terms of reference. The terms of reference of the MAUD Policy Committee were:
1. To supervise on behalf of the Director of Scientific Research, MAP, an investigation into the possibilities of uranium as contributing to the war effort; and
2. To consider the recommendations of the MAUD Technical Committee and to advise the Director of Scientific Research accordingly.
Those of the MAUD Technical Committee were:
1. To consider the problems arising in the uranium investigation;
2. To recommend to the MAUD Policy Committee the experimental work necessary to establish the technical possibilities; and
3. To ensure co-operation between the various groups of investigators.
The MAUD Policy Committee was kept small and included only one representative from each university laboratory. Its members were: Blackett, Chadwick, Cockcroft, Ellis, Haworth, Franz Simon, Thomson and the Director of Scientific Research at the MAP. The MAUD Technical Committee's members were: Moses Blackman, Egon Bretscher, Norman Feather, Frisch, Halban, C. H. Johnson, Kowarski, Wilfrid Mann, Moon, Nevill Mott, Oliphant, Peierls and Thomson. Its meetings were normally attended by Winston Churchill's scientific advisor, Frederick Lindemann, or his representative, and a representative of Imperial Chemical Industries (ICI). Basil Dickins from the MAP acted as the secretary of the Technical Committee. Thomson chaired both committees.
## Activity
The MAUD Committee's research was split among four different English universities: the University of Birmingham, the University of Liverpool, the University of Cambridge and the University of Oxford. At first the research was paid for out of the universities' funds. Only in September 1940 did government funding become available. The MAP signed contracts that gave £3,000 to the Cavendish Laboratory at Cambridge (later increased to £6,500), £1,000 (later increased to £2,000) to the Clarendon Laboratory at Oxford, £1,500 to Birmingham, and £2,000 to Liverpool. The universities were reimbursed for expenses by the MAP, which also began to pay some of the salaries of the universities' staff. However, Chadwick, Peierls, Simon and other professors, along with some research staff, were still paid out of university funds. The government also placed a £5,000 order for 5 kilograms (11 lb) of uranium hexafluoride with ICI. Uranium oxide was purchased from the Brandhurt Company, which sourced it from America. Wartime shortages impacted many areas of research, requiring the MAP to write to firms requesting priority for items required by the scientists.
There were also shortages of manpower, as chemists and physicists had been diverted to war work. Of necessity, the universities employed many aliens or ex-aliens. The MAP was initially opposed to their employment on security grounds, especially as most were from enemy or occupied countries. Their employment was only made possible because they were employed by the universities and not the MAP, which was not allowed to employ enemy aliens. The MAP gradually came around to accepting their employment on the project. It protected some from internment, and provided security clearances. There were restrictions on where enemy aliens could work and live, and they were not allowed to own cars, so dispensations were required to allow them to visit other universities. "And so," wrote historian Margaret Gowing, "the greatest of all the wartime secrets was entrusted to scientists excluded for security reasons from other war work."
### University of Liverpool
The division of the MAUD Committee at Liverpool was led by Chadwick, who was assisted by Frisch, Rotblat, Gerry Pickavance, Maurice Pryce and John Riley Holt. The division at Liverpool also controlled a small team at the University of Bristol that included Alan Nunn May and Cecil Frank Powell. At Liverpool they focused on the separation of isotopes through thermal diffusion as was suggested in the Frisch–Peierls memorandum.
This process was based on the fact that when a mixture of two gases passes through a temperature gradient, the heavier gas tends to concentrate at the cold end and the lighter gas at the warm end. That this can be used as a means of isotope separation was first demonstrated by Klaus Clusius and Gerhard Dickel in Germany in 1938, who used it to separate isotopes of neon. They used an apparatus called a "column", consisting of a vertical tube with a hot wire down the centre. The advantage of the technique was that it was simple in design and there were no moving parts. But it could take months to reach equilibrium, required a lot of energy, and needed high temperatures that could cause a problem with the uranium hexafluoride.
Another line of research at Liverpool was measuring the fission cross section of uranium-235, on which Frisch and Peierls' calculations depended. They had assumed that almost every collision between a neutron of any energy and a uranium-235 nucleus would produce a fission. The value they used for the fission cross section was that published by French researchers in 1939, but data published by the Americans in the 15 March and 15 April 1940 issues of the American journal Physical Review indicated that it was much smaller.
No pure uranium-235 was available, so experiments at Liverpool were conducted with natural uranium. The results were inconclusive, but tended to support Frisch and Peierls. By March 1941, Alfred Nier had managed to produce a microscopic amount of pure uranium-235 in the United States, and a team under Merle Tuve at the Carnegie Institution of Washington was measuring the cross section. The uranium-235 was too valuable to send a sample to Britain, so Chadwick sent the Americans a list of measurements he wanted them to carry out. The final result was that the cross section was smaller than Frisch and Peierls had assumed, but the resulting critical mass was still only about eight kilograms.
Meanwhile, Pryce investigated how long a runaway nuclear chain reaction in an atomic bomb would continue before it blew itself apart. He calculated that, since the neutrons produced by fission have an energy of about 1 MeV (0.16 pJ), they travel at about 1.4×10<sup>9</sup> cm/s. The major part of the chain reaction would be completed in a time of the order of 10×10<sup>−8</sup> s (ten "shakes"). From 1 to 10 per cent of the fissile material would fission in this time; but even an atomic bomb with 1 per cent efficiency would release as much energy as 180,000 times its weight in TNT.
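Pryce's figures can be reproduced from first principles. This is an illustrative sketch, not his actual working; the non-relativistic approximation v = sqrt(2E/m) is adequate at 1 MeV, and the energy densities used in the second step are approximate textbook values:

```python
import math

# Speed of a 1 MeV neutron, v = sqrt(2E/m) (non-relativistic).
E_J = 1e6 * 1.602e-19      # 1 MeV in joules (about 0.16 pJ)
M_NEUTRON = 1.675e-27      # neutron mass in kg

v_cm_per_s = math.sqrt(2 * E_J / M_NEUTRON) * 100
print(f"neutron speed ~ {v_cm_per_s:.1e} cm/s")   # of the order of 1.4e9 cm/s

# Energy ratio for a bomb fissioning 1 per cent of its material:
# complete fission of U-235 yields ~8.2e13 J/kg; TNT yields ~4.2e6 J/kg.
ratio = 0.01 * 8.2e13 / 4.2e6
print(f"1% efficiency ~ {ratio:.0f} times its weight in TNT")
```

The second figure comes out at roughly 200,000, the same order of magnitude as Pryce's estimate of 180,000.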
### University of Oxford
The division of the MAUD Committee at Oxford was led by Simon. As a German émigré, he was only able to get involved after Peierls vouched for him, pointing out that Simon had already begun research on isotope separation, which would give the project a head start by his participation. The Oxford team was mostly composed of non-British scientists, including Nicholas Kurti, Kurt Mendelssohn, Heinrich Kuhn, Henry Shull Arms and Heinz London. They concentrated on isotope separation with a method known as gaseous diffusion.
This is based on Graham's law, which states that the rate of effusion of a gas through a porous barrier is inversely proportional to the square root of the gas's molecular mass. In a container with a porous barrier containing a mixture of two gases, the lighter molecules will pass out of the container more rapidly than the heavier molecules. The gas leaving the container is slightly enriched in the lighter molecules, while the residual gas is slightly depleted. Simon's team conducted experiments with copper gauze as the barrier. Because uranium hexafluoride, the only known gas containing uranium, was both scarce and difficult to handle, a mixture of carbon dioxide and water vapour was used to test it.
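The single-stage separation factor that makes gaseous diffusion of uranium hexafluoride so laborious follows directly from Graham's law. The sketch below is for illustration and is not drawn from the Oxford team's papers:

```python
import math

# Ideal separation factor for one diffusion stage: the ratio of effusion
# rates of the two UF6 species, alpha = sqrt(m_heavy / m_light).
M_F = 18.998                      # fluorine molar mass (single isotope)
M_U235, M_U238 = 235.044, 238.051

m_light = M_U235 + 6 * M_F        # 235-UF6, about 349
m_heavy = M_U238 + 6 * M_F        # 238-UF6, about 352

alpha = math.sqrt(m_heavy / m_light)
print(f"alpha = {alpha:.5f}")
```

With alpha only about 1.004, each stage enriches the gas very slightly, which is why Simon's plant design described below required thousands of separation units arranged in cascaded stages.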
The result of this work was a report from Simon on the "Estimate of the Size of an Actual Separation Plant" in December 1940. He described an industrial plant capable of producing a kilogram per day of uranium enriched to 99 per cent uranium-235. The plant would use 70,000 square metres (750,000 sq ft) of membrane barriers, in 18,000 separation units in 20 stages. The plant would cover 40 acres (16 ha), the machinery would weigh 70,000 long tons (71,000 t) and consume 60,000 kW of power. He estimated that it would take 12 to 18 months to build at a cost of around £4 million, require 1,200 workers to operate, and cost £1.5 million per annum to run. "We are confident that the separation can be performed in the way described", he concluded, "and we even believe that the scheme is, in view of its object, not unduly expensive of time, money and effort."
### University of Cambridge
The division of the MAUD Committee at Cambridge was jointly led by Bragg and Cockcroft. It included Bretscher, Feather, Halban, Kowarski, Herbert Freundlich and Nicholas Kemmer. Paul Dirac assisted as a consultant, although he was not formally part of the team. On 19 June 1940, following the German invasion of France, Halban, Kowarski and other French scientists and their families, along with their precious stock of heavy water, were brought to England by the Earl of Suffolk and Major Ardale Golding on the steamer Broompark. The heavy water, valued at £22,000, was initially kept at HM Prison Wormwood Scrubs, but was later secretly stored in the library at Windsor Castle. The French scientists moved to Cambridge, where they conducted experiments that conclusively showed that a nuclear chain reaction could be produced in a mixture of uranium oxide and heavy water.
In a paper written shortly after they arrived in England, Halban and Kowarski theorised that slow neutrons could be absorbed by uranium-238, forming uranium-239. A letter by Edwin McMillan and Philip Abelson published in the Physical Review on 15 June 1940 stated that this decayed to an element with an atomic number of 93, and then to one with an atomic number of 94 and mass of 239, which, while still radioactive, was fairly long-lived. That a letter on such a sensitive subject could still be published irked Chadwick, and he asked for an official protest to be sent to the Americans, which was done.
Bretscher and Feather argued, on theoretical grounds, that this element might be capable of fission by both fast and slow neutrons like uranium-235. If so, this promised another path to an atomic bomb, as it could be bred from the more abundant uranium-238 in a nuclear reactor, and separation from uranium could be by chemical means, as it was a different element, thereby avoiding the necessity for isotope separation. Kemmer suggested that since uranium was named after the planet Uranus, element 93 could be named neptunium and 94 plutonium after the next two planets. Later it was discovered that the Americans had independently adopted the same names, following the same logic. Bretscher and Feather went further, theorising that irradiation of thorium could produce a new isotope of uranium, uranium-233, which might also be susceptible to fission by both fast and slow neutrons. In addition to this work, Eric Rideal studied isotope separation through centrifugation.
### University of Birmingham
The division of the MAUD Committee at Birmingham was led by Peierls. He was assisted by Haworth, Johnson and, from 28 May 1941, Klaus Fuchs. Haworth led the chemists in studying the properties of uranium hexafluoride. One thing in its favour was that fluorine has only one isotope, so any difference in weight between two molecules is solely due to the different isotope of uranium.
Otherwise, uranium hexafluoride was far from ideal. It solidified at 120 °F (49 °C), was corrosive, and reacted with many substances, including water. It was therefore difficult and dangerous to handle. However, a search by the chemists at Birmingham failed to uncover another gaseous compound of uranium. Lindemann used his influence with Lord Melchett, a director of ICI, to get ICI to produce uranium hexafluoride on an industrial scale. ICI's hydrofluoric acid plant was out of commission, and required extensive repairs, so the quote for a kilogram of uranium hexafluoride came to £5,000. Nonetheless, the order was placed in December 1940. ICI also explored methods of producing pure uranium metal.
Peierls and his team worked on the theoretical problems of a nuclear bomb. In essence, they were in charge of finding out the technical features of the bomb. Along with Fuchs, Peierls also interpreted all the experimental data from the other laboratories. He examined the different processes by which they were obtaining isotopes. By the end of the summer in 1940, Peierls preferred gaseous diffusion to thermal diffusion.
A paper was received from the United States in which George Kistiakowsky argued that a nuclear weapon would do very little damage, as most of the energy would be expended heating the surrounding air. A chemical explosive generates very hot gases in a confined space, but a nuclear explosion will not do this. Peierls, Fuchs, Geoffrey Taylor and J. G. Kynch worked out the hydrodynamics to refute Kistiakowsky's argument. Taylor produced a paper on "The Formation of a Blast Wave by a Very Intense Explosion" in June 1941.
## Reports
The first draft of the final report of the MAUD Committee was written by Thomson in June 1941, and circulated among members of the committee on 26 June, with instructions that the next meeting on 2 July would discuss it. A considerable amount of editing was done, mainly by Chadwick. At this stage it was divided into two reports. The first was on "Use of Uranium for a Bomb"; the second on "Use of Uranium as a Source of Power". They consolidated all the research and experiments the MAUD Committee had completed. The report opened with a statement that:
> We should like to emphasize at the beginning of this report that we entered the project with more skepticism than belief, though we felt it was a matter which had to be investigated. As we proceeded we became more and more convinced that release of atomic energy on a large scale is possible and that conditions can be chosen which would make it a very powerful weapon of war. We have now reached the conclusion that it will be possible to make an effective uranium bomb which, containing some 25 lb of active material, would be equivalent as regards destructive effect to 1,800 tons of TNT and would also release large quantities of radioactive substances which would make places near to where the bomb exploded dangerous to human life for a long period.
The first report concluded that a bomb was feasible. It described it in technical detail, and provided specific proposals for developing it, including cost estimates. A plant to produce one kilogram of uranium-235 per day was estimated to cost £5 million and would require a large skilled labour force that was also needed for other parts of the war effort. It could be available in as little as two years. The amount of damage that it would do was estimated to be similar to that of the Halifax explosion in 1917, which had devastated everything in a 1/4-mile (0.40 km) radius. The report warned that Germany had shown interest in heavy water, and although this was not considered useful for a bomb, the possibility remained that Germany could also be working on the bomb.
The second report was shorter. It recommended that Halban and Kowarski should move to the US where there were plans to make heavy water on a large scale. Plutonium might be more suitable than uranium-235, and plutonium research should continue in Britain. It concluded that the controlled fission of uranium could be used to generate heat energy for use in machines, and provide large quantities of radioisotopes which could be used as substitutes for radium. Heavy water or possibly graphite might serve as a moderator for the fast neutrons. In conclusion though, while the nuclear reactor had considerable promise for future peaceful uses, the committee felt that it was not worth considering during the present war.
## Outcome
### United Kingdom
In response to the MAUD Committee report, a nuclear weapons programme was launched. To co-ordinate the effort, a new directorate was created, with the deliberately misleading name of Tube Alloys for security purposes. Sir John Anderson, the Lord President of the Council, became the minister responsible, and Wallace Akers from ICI was appointed the director of Tube Alloys. Tube Alloys and the Manhattan Project exchanged information, but did not initially combine their efforts, ostensibly over concerns about American security. Ironically, it was the British project that had already been penetrated by atomic spies for the Soviet Union. The most significant of them at this time was John Cairncross, a member of the notorious Cambridge Five, who worked as the private secretary to Lord Hankey, a minister without portfolio in the War Cabinet. Cairncross provided the NKVD with information from the MAUD Committee.
The United Kingdom did not have the manpower or resources of the United States, and despite its early and promising start, Tube Alloys fell behind its American counterpart and was dwarfed by it. The British considered producing an atomic bomb without American help, but the project would have needed overwhelming priority, the projected cost was staggering, disruption to other wartime projects was inevitable, and it was unlikely to be ready in time to affect the outcome of the war in Europe.
At the Quebec Conference in August 1943, Churchill and Roosevelt signed the Quebec Agreement, which merged the two national projects. The Quebec Agreement established the Combined Policy Committee and the Combined Development Trust to co-ordinate their efforts. The 19 September 1944 Hyde Park Agreement extended both commercial and military co-operation into the post-war period.
A British mission led by Akers assisted in the development of gaseous diffusion technology at the SAM Laboratories in New York. Another, headed by Oliphant, assisted with that of the electromagnetic separation process at the Berkeley Radiation Laboratory. Cockcroft became the director of the joint British-Canadian Montreal Laboratory. A British mission to the Los Alamos Laboratory was led by Chadwick, and later Peierls, which included several of Britain's most eminent scientists. As overall head of the British Mission, Chadwick forged a close and successful partnership, and ensured that British participation was complete and wholehearted.
### United States
In response to the 1939 Einstein–Szilard letter, President Franklin D. Roosevelt had created an Advisory Committee on Uranium in October 1939, chaired by Lyman Briggs. Research concentrated on slow fission for power production, but with a growing interest in isotope separation. In June 1941, Roosevelt created the Office of Scientific Research and Development (OSRD), with Vannevar Bush as its director, personally responsible to the President. The Uranium Committee became the Uranium Section of the OSRD, which was soon renamed the S-1 Section for security reasons.
Bush engaged Arthur Compton, a Nobel Prize winner, and the National Academy of Sciences. His report was issued on 17 May 1941. It endorsed a stronger effort, but did not address the design or manufacture of a bomb in any detail. Information from the MAUD Committee came from British scientists travelling to the United States, notably the Tizard Mission, and from American observers at the MAUD Committee meetings in April and July 1941. Cockcroft, who was part of the Tizard Mission, reported that the American project lagged behind the British one, and was not proceeding as fast.
Britain was at war and felt an atomic bomb was urgent, but the US was not yet at war. It was Oliphant who pushed the American programme into action. He flew to the United States in late August 1941, ostensibly to discuss the radar programme, but actually to find out why the United States was ignoring the MAUD Committee's findings. Oliphant reported: "The minutes and reports had been sent to Lyman Briggs, who was the Director of the Uranium Committee, and we were puzzled to receive virtually no comment. I called on Briggs in Washington, only to find out that this inarticulate and unimpressive man had put the reports in his safe and had not shown them to members of his committee. I was amazed and distressed."
Oliphant met with the S-1 Section. Samuel K. Allison was a new committee member, an experimental physicist and a protégé of Compton at the University of Chicago. Oliphant "came to a meeting", Allison recalled, "and said 'bomb' in no uncertain terms. He told us we must concentrate every effort on the bomb and said we had no right to work on power plants or anything but the bomb. The bomb would cost 25 million dollars, he said, and Britain did not have the money or the manpower, so it was up to us."
Oliphant then visited his friend Ernest Lawrence, an American Nobel Prize winner, to explain the urgency. Lawrence contacted Compton and James B. Conant, who received a copy of the final MAUD Report from Thomson on 3 October 1941. Harold Urey, also a Nobel Prize winner, and George B. Pegram were sent to the UK to obtain more information. In January 1942, the OSRD was empowered to engage in large engineering projects in addition to research. Without the help of the MAUD Committee the Manhattan Project would have started months behind; instead, the Americans were able to begin by considering how to create a bomb, rather than whether it was possible. Gowing noted that "events that change a time scale by only a few months can nevertheless change history." On 16 July 1945, the Manhattan Project detonated the first atomic bomb in the Trinity nuclear test.
### Soviet Union
The Soviet Union received details of British research from its atomic spies Klaus Fuchs, Engelbert Broda and Cairncross. Lavrentiy Beria, the head of the NKVD, gave a report to the General Secretary of the Communist Party of the Soviet Union, Joseph Stalin, in March 1942 that included the MAUD reports and other British documents passed by Cairncross. In 1943 the NKVD obtained a copy of the final report by the MAUD Committee. This led Stalin to order the start of a Soviet programme, although it had very limited resources. Igor Kurchatov was appointed director of the nascent programme later that year. Six years later, on 29 August 1949, the Soviet Union tested an atomic bomb.
|
31,010,416 |
Air-tractor sledge
| 1,160,304,664 |
First plane taken to Antarctica
|
[
"1911 in Antarctica",
"1911 in Australia",
"1912 in Antarctica",
"1913 in Antarctica",
"1914 in Antarctica",
"Australasian Antarctic Expedition",
"Exploration of Antarctica",
"Heroic Age of Antarctic Exploration"
] |
The air-tractor sledge was a converted fixed-wing aircraft taken on the 1911–1914 Australasian Antarctic Expedition, the first plane to be taken to the Antarctic.
Expedition leader Douglas Mawson had planned to use the Vickers R.E.P. Type Monoplane as a reconnaissance and search and rescue tool, and to assist in publicity, but the aircraft crashed heavily during a test flight in Adelaide, only two months before Mawson's scheduled departure date. The plane was nevertheless sent south with the expedition, after having been stripped of its wings and metal sheathing from the fuselage.
Engineer Frank Bickerton spent most of the 1912 winter working to convert it to a sledge, fashioning brakes from a pair of geological drills and a steering system from the plane's landing gear. It was first tested on 15 November 1912, and subsequently assisted in laying depots for the summer sledging parties, but its use during the expedition was minimal.
Towing a train of four sledges, the air-tractor accompanied a party led by Bickerton to explore the area to the west of the expedition's base at Cape Denison. The freezing conditions resulted in the jamming of the engine's pistons after just 10 miles (16 km), and the air-tractor was left behind. Some time later it was dragged back to Cape Denison, and its frame was left on the ice when the expedition returned home in 1913.
In 2008, a team from the Mawson's Huts Foundation began searching for the remains of the air-tractor sledge; a seat was found in 2009, and fragments of the tail assembly a year later. Surveys carried out with ground-penetrating radar and other instruments in 2009 and 2010 indicate that the air-tractor, or parts of it, is still buried under three metres (10 ft) of ice where it was abandoned at Cape Denison.
## Background
Douglas Mawson had accompanied Ernest Shackleton's 1907–09 British Antarctic Expedition. Along with Edgeworth David and Alistair Mackay, he had been part of a man-hauled sledging expedition, the first to reach the area of the South Magnetic Pole. Upon his return from Antarctica, he resumed his post as geology lecturer at the University of Adelaide. Despite an offer from Robert Falcon Scott to join his Terra Nova Expedition to reach the Geographic South Pole, Mawson began planning his own Antarctic expedition. Mawson's plan, which led to the Australasian Antarctic Expedition, envisaged three bases on the Antarctic continent, collectively surveying much of the coast directly south of Australia. He approached Shackleton, who not only approved of his plan but was prepared to lead the expedition himself. Although Shackleton withdrew from the expedition in December 1910, he continued to assist Mawson with publicity and fund-raising.
### Purchase
Mawson travelled to Britain in early 1911 to raise funds, hire crew, and purchase equipment. He considered taking a plane to the Antarctic, which could work as a reconnaissance tool, transport cargo, and assist with search and rescue. Crucially, as no plane had yet been taken to the continent, it could also be used to generate publicity. Unsure of the type of plane he should take, but considering a Blériot, Mawson mentioned his plans to Scott's wife Kathleen Scott, an aircraft enthusiast. She recommended he take a monoplane, and conveyed his interest to Lieutenant Hugh Evelyn Watkins of the Essex Regiment. Watkins had connections with the ship and aircraft manufacturer Vickers Limited, which had recently entered into a licence agreement to build and sell aircraft in Britain designed by the Frenchman Robert Esnault-Pelterie. In a letter to Mawson on 18 May, Kathleen wrote:
> I believe I can help you about aeroplanes. I think you can do far better than a Bleriot ... There is a machine that the Vickers people have bought which is infinitely more stable, heavier and more solid and will carry more weight. Its cost is £1000, but I think it could be worked to get it for £700 or even less ... A man I know who had only before driven biplanes, drove it and it stayed up half an hour, which speaks very well for its stability ... If you think it's worth considering, I can let you meet the man concerned early next week and he can show you the machine and take you up in it.
On Kathleen Scott's advice, Mawson purchased a Vickers R.E.P. Type Monoplane, one of only eight built. It was fitted with a five-cylinder R.E.P. engine developing 60 horsepower (45 kW), and had a maximum range of 300 miles (480 km) at a cruising speed of 48 knots (89 km/h; 55 mph). Its wingspan was 47 feet (14 m), and its length 36 feet (11 m). The pilot used a joystick for pitch and roll, with lateral control by wing warping. Mawson opted for a two-seater version, in a tandem arrangement, with a spare ski undercarriage. The total bill, dated 17 August 1911, came to £955 4s 8d. Mawson hired Watkins to fly the plane, and Frank Bickerton to accompany the expedition as its engineer. After Vickers tested the aircraft at Dartford and Brooklands, P&O shipped the plane to Adelaide aboard the steamship Macedonia, at half the usual rate of freight.
### Crash
A series of public demonstrations were planned in Australia to assist in fund-raising, the first of which was scheduled for 5 October 1911 at the Cheltenham Racecourse in Adelaide. During a test flight the day before, excessive pressure in the fuel tank caused it to rupture, almost blinding Watkins. That problem resolved, Watkins took Frank Wild, whom Mawson had hired to command a support base during the expedition, on another test flight the morning of the demonstration. In Watkins' account, which he addressed to Vickers' Aviation Department, he wrote: "[we were] about 200 ft. up. I got into a fierce tremor, and then into an air pocket, and was brought down about 100 ft., got straight, and dropped into another, almost a vacuum. That finished it. We hit the ground with an awful crash, both wings damaged, one cylinder broken, and the Nose bent up, the tail in half, etc."
Although the two men were only slightly injured, the plane was damaged beyond repair. Mawson decided to salvage the plane by converting it into a motorised sledge. He fitted the skis, and removed the wings and most of the sheathing to save weight. In his official account of the expedition, The Home of the Blizzard, Mawson wrote that the advantages of this "air-tractor sledge" were expected to be "speed, steering control, and comparative safety from crevasses owing to the great length of the runners". No longer needing a pilot, and believing him to be responsible for the crash, Mawson dismissed Watkins.
The air-tractor sledge was taken to Hobart, where the expedition ship SY Aurora was being loaded. It was secured on board in a crate lined with tin, which weighed far more than the sledge itself, on top of the ship's forecastle and two boat-skids. To fuel the sledge, along with the motor launch and the wireless equipment, the Aurora also carried 4,000 imperial gallons (18,000 L) of benzine and 1,300 imperial gallons (5,900 L) of kerosene. Fully loaded, the ship left Hobart on 2 December 1911.
## In Antarctica
The Aurora reached the Antarctic mainland on 8 January 1912, after a two-week stop on Macquarie Island to establish a wireless relay station and research base. The expedition's main base was established in Adélie Land, at Cape Denison in Commonwealth Bay. While the Aurora was unloading, a violent whirlwind lifted the 300-pound (140 kg) lid off the air-tractor's crate, throwing it 50 yards (46 m). The main hut was erected immediately, but the strong winds meant that work on the air-tractor's hangar was delayed until March. When the winds abated, a 10-foot (3.0 m) by 35-foot (11 m) hangar was constructed next to the main hut, from empty packing cases.
Bickerton began work on the air-tractor sledge on 14 April 1912. His first job was to repair the sledge, which had been damaged in transit when a violent storm hit the Aurora. A giant wave had slammed into the crate containing the sledge, driving the fuselage 4 feet (1.2 m) through its side. With the repair completed, Bickerton began the serious work of converting the plane into a sledge. He constructed brakes from a pair of geological drills, and a steering system from the landing gear. Bickerton painted the engine and fuel tank black to absorb heat better and protect them from freezing. By June he had the engine running properly, and during a lull in the winds in early September he fitted the skis. Finally, he raised the fuselage 5 feet (1.5 m) off the ground to allow the propeller free movement.
On 27 October 1912, Mawson outlined the summer sledging program. Seven sledging parties would depart from Cape Denison, surveying the coast and interior of Adélie Land and neighbouring King George V Land. They were required to return to the base by 15 January, when the Aurora was due to depart; any later, it was feared, and she would be trapped by ice. Bickerton was to lead one of the parties, which would use the air-tractor to haul four sledges and explore the coast to the west of the hut. Most of the parties left in early November, but Bickerton's Western party delayed its departure until December, in the hope of avoiding the ferocious winter winds. Those same winds slowed work on the air-tractor sledge, and its first trial took place on 15 November, between the main base and Aladdin's Cave, a depot which had been established on the plateau above Cape Denison. The air-tractor reached a speed of 20 miles per hour (32 km/h), covering the 5 miles (8.0 km), expedition member Charles Laseron recorded, "in great style". Soon, the sledge began hauling cargo up the slope, laying depots for the summer sledging parties.
### Broken
The Western party left Cape Denison on 3 December 1912. Accompanying Bickerton and the air-tractor were cartographer Alfred Hodgeman and surgeon Leslie Whetter. The air-tractor made slow progress hauling its train of sledges, and about 10 miles (16 km) out from the base its engine began experiencing difficulty. Bickerton shut it down and the three set up camp. At 4 am the next morning the party set off again, but the engine continued to struggle; oil ejecting from an idle cylinder, and that cylinder's lack of compression, led Bickerton to suspect broken piston rings as the root of the problem, a repair he expected to take only a matter of hours. As he later recorded, "These thoughts were brought to a sudden close by the engine, without any warning, pulling up with such a jerk that the propeller was smashed. On moving the latter, something fell into the oil in the crank-case and fizzled, while the propeller could only be swung through an angle of about 30 [degrees]."
The party continued without the air-tractor, man-hauling the sledges to a point 158 miles (254 km) west of Cape Denison, and returned to base on 18 January 1913. Mawson's Far Eastern Party failed to return, and six men, including Bickerton, remained for an extra winter. On 8 February, just hours after the Aurora, having waited for three weeks, left Commonwealth Bay, Mawson staggered alone into base, his colleagues Belgrave Edward Sutton Ninnis and Xavier Mertz dead. As Mawson was being nursed back to health, Bickerton dragged the air-tractor sledge back to base to diagnose the reason for its failure. He found that the freezing conditions had caused the engine oil to congeal, jamming the pistons. He abandoned the sledge at Boat Harbour, next to the base. When the Aurora returned to Cape Denison for the final time on 13 December 1913, only the engine and propeller were taken back to Australia.
## Recovery efforts
The bill for the plane remained unpaid. In 1914 Vickers reminded Mawson, who had apparently forgotten the outstanding debt. Mawson wrote to Vickers director Sir Trevor Dawson in November 1916, requesting the company write off the bill as a donation. His company buoyed by armaments contracts, Dawson agreed. The next expedition to take a plane to the Antarctic was Shackleton's 1921–22 Quest Expedition, but the Avro Baby remained grounded owing to missing parts. Not until 16 November 1928—when Hubert Wilkins and Carl Ben Eielson flew for 20 minutes around Deception Island, just over a year before Admiral Richard Evelyn Byrd's first flight over the South Pole—was a plane airborne in the Antarctic.
The frame of the air-tractor sledge remained on the ice at Boat Harbour where Bickerton had left it. The last expedition to Cape Denison to see the frame was in 1976; the next expedition, in 1981, could find no trace of it. The ice in that location does not move, and the implication is that the frame sank through the ice. It is therefore possible the frame is still there.
In 2007–08 a team from the Mawson's Huts Foundation began to search for the remnants of the plane. Using photographs from 1913, 1931 and 1976, the team derived transits between the frame and distant objects, narrowing its location to a small area of ice about 50 m from the hut. Comparison with a 1931 photograph by Frank Hurley confirmed this location.
The following summer (2008–09), the team extensively surveyed the area where they believed the air-tractor to be, using ground-penetrating radar. A 3-metre-deep (10 ft) trench was dug in a promising area, but nothing was found except fragments of seaweed, indicating that the overlying ice must have melted at some time in the past. Temperature records from the nearby Dumont d'Urville Station showed that there had been extended periods (each of about six weeks) of above-average temperatures in 1976 and 1981, suggesting the ice around the harbour could have melted. Dr Chris Henderson, the leader of the team, believes "the frame sank in situ to the rock surface, three metres below the present ice surface".
The next year (the 2009–10 season), a further search was undertaken using differential GPS, bathymetry equipment, ice augers, a magnetometer and a metal detector, whose sensor was lowered down the auger holes after drilling. The ice showed signs of having extensively melted in the past; it was about 3 metres (10 ft) thick and covered smooth rock that extended northwards to become the harbour bottom. Visual examination of the harbour bottom during the bathymetry survey did not reveal any fragments of the frame in the first 30 metres of the harbour.
The most significant findings from the ice survey were a positive reading from the metal detector and a strong echo from the ground-penetrating radar, both from the small area where the frame is assumed to have sunk.
Parts of the air-tractor are already known to exist: the Australian Antarctic Division holds one wheel from the frame and its ice-rudder, both of which were found in the harbour. In January 2009 the remains of a seat from the air-tractor were found in rocks near the hut, about 200 metres (660 ft) from where the team believes the frame to be buried. On 1 January 2010, a day of unusually low tide, four small capping pieces from the end section of the tail were found by the edge of the harbour. The tail and a section of fuselage had been removed from the rest of the air-tractor before it was abandoned in 1913, so this discovery did not shed much light on the location of the rest of the frame, but it suggests that "the frame, or parts of it, can survive for nearly 100 years in this environment".
The team returned to Cape Denison over the 2010–11 summer, but the crash of a French helicopter near Dumont d'Urville Station in October 2010 forced deployment of a much reduced team with no resources to continue the search.
The findings to date (2011) suggest that one or more metal objects lie at a depth of 3 metres (10 ft), on rock, in the location where the frame was last seen in 1976. These are likely to be the remains of Mawson's air-tractor, but confirmation awaits a future opportunity.
## See also
- Heroic Age of Antarctic Exploration
- Aerosled, propeller-driven sledge
- Hydrocopter
- Screw-propelled vehicle
|
12,717 |
Giraffe
| 1,171,592,408 |
Tall African ungulate
|
[
"Articles containing video clips",
"Fauna of Sub-Saharan Africa",
"Giraffes",
"Herbivorous mammals",
"Mammal genera",
"Mammals of Africa",
"National symbols of Tanzania",
"Vulnerable animals",
"Vulnerable biota of Africa"
] |
The giraffe is a large African hoofed mammal belonging to the genus Giraffa. It is the tallest living terrestrial animal and the largest ruminant on Earth. Traditionally, giraffes were thought to be one species, Giraffa camelopardalis, with nine subspecies. Most recently, researchers proposed dividing them into up to eight extant species due to new research into their mitochondrial and nuclear DNA, as well as morphological measurements. Seven other extinct species of Giraffa are known from the fossil record.
The giraffe's chief distinguishing characteristics are its extremely long neck and legs, its horn-like ossicones, and its spotted coat patterns. It is classified under the family Giraffidae, along with its closest extant relative, the okapi. Its scattered range extends from Chad in the north to South Africa in the south, and from Niger in the west to Somalia in the east. Giraffes usually inhabit savannahs and woodlands. Their food source is leaves, fruits, and flowers of woody plants, primarily acacia species, which they browse at heights most other herbivores cannot reach.
Lions, leopards, spotted hyenas, and African wild dogs may prey upon giraffes. Giraffes live in herds of related females and their offspring or bachelor herds of unrelated adult males, but are gregarious and may gather in large aggregations. Males establish social hierarchies through "necking", combat bouts where the neck is used as a weapon. Dominant males gain mating access to females, which bear sole responsibility for rearing the young.
The giraffe has intrigued various ancient and modern cultures for its peculiar appearance, and has often been featured in paintings, books, and cartoons. It is classified by the International Union for Conservation of Nature (IUCN) as vulnerable to extinction and has been extirpated from many parts of its former range. Giraffes are still found in numerous national parks and game reserves, but estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. More than 1,600 were kept in zoos in 2010.
## Etymology
The name "giraffe" has its earliest known origins in the Arabic word zarāfah (زرافة), ultimately from Persian زُرنَاپَا (zurnāpā), a compound of زُرنَا (zurnā, “flute, zurna”) and پَا (pā, “leg”). In early Modern English the spellings jarraf and ziraph were used, probably taken directly from the Arabic; the Middle English form was gerfauntz. The Italian form giraffa arose in the 1590s. The modern English form developed around 1600 from the French girafe.
"Camelopard" /kəˈmɛləˌpɑːrd/ is an archaic English name for the giraffe; it derives from the Ancient Greek καμηλοπάρδαλις (kamēlopárdalis), from κάμηλος (kámēlos), "camel", and πάρδαλις (párdalis), "leopard", referring to its camel-like shape and leopard-like colouration.
## Taxonomy
### Evolution
The giraffe is one of only two living genera of the family Giraffidae in the order Artiodactyla, the other being the okapi. They are ruminants of the clade Pecora, along with Antilocapridae (pronghorns), Cervidae (deer), Bovidae (cattle, antelope, goats and sheep) and Moschidae (musk deer). A 2019 genome study (cladogram below) finds that Giraffidae are a sister taxon to Antilocapridae, with an estimated split of over 20 million years ago.
The family Giraffidae was once much more extensive, with over 10 fossil genera described. The elongation of the neck appears to have started early in the giraffe lineage. Comparisons between giraffes and their ancient relatives suggest vertebrae close to the skull lengthened earlier, followed by lengthening of vertebrae further down. One early giraffid ancestor was Canthumeryx, which has been dated variously to have lived 25–20 mya, 17–15 mya or 18–14.3 mya and whose deposits have been found in Libya. This animal resembled an antelope and had a medium-sized, lightly built body. Giraffokeryx appeared 15–12 mya on the Indian subcontinent and resembled an okapi or a small giraffe, and had a longer neck and similar ossicones. Giraffokeryx may have shared a clade with more massively built giraffids like Sivatherium and Bramatherium.
Giraffids like Palaeotragus, Shansitherium and Samotherium appeared 14 mya and lived throughout Africa and Eurasia. These animals had broader skulls with reduced frontal cavities. Paleotragus resembled the okapi and may have been its ancestor. Others find that the okapi lineage diverged earlier, before Giraffokeryx. Samotherium was a particularly important transitional fossil in the giraffe lineage, as the length and structure of its cervical vertebrae were between those of a modern giraffe and an okapi, and its neck posture was likely similar to the former's. Bohlinia, which first appeared in southeastern Europe and lived 9–7 mya, was likely a direct ancestor of the giraffe. Bohlinia closely resembled modern giraffes, having a long neck and legs and similar ossicones and dentition.
Bohlinia colonised China and northern India and gave rise to the genus Giraffa, which reached Africa around 7 mya. Climate changes led to the extinction of the Asian giraffes, while the African giraffes survived and radiated into new species. Living giraffes appear to have arisen around 1 mya in eastern Africa during the Pleistocene. Some biologists suggest the modern giraffes descended from G. jumae; others find G. gracilis a more likely candidate. G. jumae was larger and more robust, while G. gracilis was smaller and more slender.
The changes from extensive forests to more open habitats, which began 8 mya, are believed to be the main driver for the evolution of giraffes. During this time, tropical plants disappeared and were replaced by arid C4 plants, and a dry savannah emerged across eastern and northern Africa and western India. Some researchers have hypothesised that this new habitat, coupled with a different diet, including acacia species, may have exposed giraffe ancestors to toxins that caused higher mutation rates and a higher rate of evolution. The coat patterns of modern giraffes may also have coincided with these habitat changes. Asian giraffes are hypothesised to have had more okapi-like colourations.
The giraffe genome is around 2.9 billion base pairs in length, compared to the 3.3 billion base pairs of the okapi. Of the proteins in giraffe and okapi genes, 19.4% are identical. The divergence of giraffe and okapi lineages dates to around 11.5 mya. A small group of regulatory genes in the giraffe appear to be responsible for the animal's height and associated circulatory adaptations.
### Species and subspecies
The International Union for Conservation of Nature (IUCN) currently recognises only one species of giraffe with nine subspecies.
Carl Linnaeus originally classified living giraffes as one species in 1758. He gave it the binomial name Cervus camelopardalis. Mathurin Jacques Brisson coined the generic name Giraffa in 1762. During the 1900s, various taxonomies with two or three species were proposed. A 2007 study on the genetics of giraffes using mitochondrial DNA suggested at least six lineages could be recognised as species. A 2011 study using detailed analyses of the morphology of giraffes, and application of the phylogenetic species concept, described eight species of living giraffes. A 2016 study also concluded that living giraffes consist of multiple species. The researchers suggested the existence of four species, which have not exchanged genetic information between each other for 1 to 2 million years.
A 2020 study showed that depending on the method chosen, different taxonomic hypotheses recognizing from two to six species can be considered for the genus Giraffa. That study also found that multi-species coalescent methods can lead to taxonomic over-splitting, as those methods delimit geographic structures rather than species. The three-species hypothesis, which recognises G. camelopardalis, G. giraffa, and G. tippelskirchi, is highly supported by phylogenetic analyses and also corroborated by most population genetic and multi-species coalescent analyses. A 2021 whole genome sequencing study suggests the existence of four distinct species and seven subspecies.
The cladogram below shows the phylogenetic relationship between the four proposed species and seven subspecies based on the genome analysis. Note the eight lineages correspond to eight of the traditional subspecies in the one species hypothesis. The Rothschild giraffe is subsumed into G. camelopardalis camelopardalis.
The following table compares the different hypotheses for giraffe species. The description column shows the traditional nine subspecies in the one species hypothesis.
The first extinct species to be described was Giraffa sivalensis (Falconer and Cautley, 1843), a reevaluation of a vertebra that was initially described as a fossil of the living giraffe. While taxonomic opinion may be lacking on some names, the extinct species that have been published include:
- Giraffa gracilis
- Giraffa jumae
- Giraffa pomeli
- Giraffa priscilla
- Giraffa punjabiensis
- Giraffa pygmaea
- Giraffa sivalensis
- Giraffa stillei
## Characteristics
Fully grown giraffes stand 4.3–5.7 m (14–19 ft) tall, with males taller than females. The average weight is 1,192 kg (2,628 lb) for an adult male and 828 kg (1,825 lb) for an adult female. Despite its long neck and legs, the giraffe's body is relatively short. The skin is mostly gray or tan, and can reach a thickness of 20 mm (0.79 in). The 80–100 cm (31–39 in) long tail ends in a long, dark tuft of hair and is used as a defense against insects.
The coat has dark blotches or patches, which can be orange, chestnut, brown, or nearly black, surrounded by light hair, usually white or cream coloured. Male giraffes become darker as they grow old. The coat pattern has been claimed to serve as camouflage in the light and shade patterns of savannah woodlands. When standing among trees and bushes, they are hard to see at even a few metres distance. However, adult giraffes move about to gain the best view of an approaching predator, relying on their size and ability to defend themselves rather than on camouflage, which may be more important for calves. Each giraffe has a unique coat pattern. Calves inherit some coat pattern traits from their mothers, and variation in some spot traits is correlated with calf survival. The skin under the blotches may regulate the animal's body temperature, being sites for complex blood vessel systems and large sweat glands.
The fur may give the animal chemical defense, as its parasite repellents give it a characteristic scent. At least 11 main aromatic chemicals are in the fur, although indole and 3-methylindole are responsible for most of the smell. Because males have a stronger odour than females, it may also have a sexual function.
### Head
Both sexes have prominent horn-like structures called ossicones, which can reach 13.5 cm (5.3 in). They are formed from ossified cartilage, covered in skin and fused to the skull at the parietal bones. Being vascularised, the ossicones may have a role in thermoregulation, and are used in combat between males. Appearance is a reliable guide to the sex or age of a giraffe: the ossicones of females and young are thin and display tufts of hair on top, whereas those of adult males tend to be bald and knobbed on top. A lump, which is more prominent in males, emerges in the middle of the skull. Males develop calcium deposits that form bumps on their skulls as they age. Multiple sinuses lighten a giraffe's skull. However, as males age, their skulls become heavier and more club-like, helping them become more dominant in combat. The occipital condyles at the bottom of the skull allow the animal to tip its head over 90 degrees and grab food on the branches directly above them with the tongue.
With eyes located on the sides of the head, the giraffe has a broad visual field from its great height. Compared to other ungulates, giraffe vision is more binocular and the eyes are larger with a greater retinal surface area. Giraffes may see in colour and their senses of hearing and smell are sharp. The ears are movable and the nostrils are slit-shaped, possibly to withstand blowing sand. The giraffe's tongue is about 45 cm (18 in) long. It is black, perhaps to protect against sunburn, and can grasp foliage and delicately pick off leaves. The upper lip is flexible and hairy to protect against sharp prickles. The upper jaw has a hard palate instead of front teeth. The molars and premolars are wide with low crowns on the surface.
### Neck
The giraffe has an extremely elongated neck, which can be up to 2.4 m (7 ft 10 in) in length. Along the neck is a mane made of short, erect hairs. The neck typically rests at an angle of 50–60 degrees, though juveniles are closer to 70 degrees. The long neck results from a disproportionate lengthening of the cervical vertebrae, not from the addition of more vertebrae. Each cervical vertebra is over 28 cm (11 in) long. The cervical vertebrae comprise 52–54 per cent of the length of the giraffe's vertebral column, compared with the 27–33 per cent typical of similar large ungulates, including the giraffe's closest living relative, the okapi. This elongation largely takes place after birth, perhaps because giraffe mothers would have a difficult time giving birth to young with the same neck proportions as adults. The giraffe's head and neck are held up by large muscles and a nuchal ligament, which are anchored by long thoracic vertebrae spines, giving them a hump.
The giraffe's neck vertebrae have ball and socket joints. The point of articulation between the cervical and thoracic vertebrae of giraffes is shifted to lie between the first and second thoracic vertebrae (T1 and T2), unlike in most other ruminants, where the articulation is between the seventh cervical vertebra (C7) and T1. This allows C7 to contribute directly to increased neck length and has given rise to the suggestion that T1 is actually C8, and that giraffes have added an extra cervical vertebra. However, this proposition is not generally accepted, as T1 has other morphological features, such as an articulating rib, deemed diagnostic of thoracic vertebrae, and because exceptions to the mammalian limit of seven cervical vertebrae are generally characterised by increased neurological anomalies and maladies.
There are several hypotheses regarding the evolutionary origin and maintenance of elongation in giraffe necks. Charles Darwin originally suggested the "competing browsers hypothesis", which has been challenged only recently. It suggests that competitive pressure from smaller browsers, like kudu, steenbok and impala, encouraged the elongation of the neck, as it enabled giraffes to reach food that competitors could not. This advantage is real, as giraffes can and do feed up to 4.5 m (15 ft) high, while even quite large competitors, such as kudu, can feed up to only about 2 m (6 ft 7 in) high. There is also research suggesting that browsing competition is intense at lower levels, and giraffes feed more efficiently (gaining more leaf biomass with each mouthful) high in the canopy. However, scientists disagree about just how much time giraffes spend feeding at levels beyond the reach of other browsers, and a 2010 study found that adult giraffes with longer necks actually suffered higher mortality rates under drought conditions than their shorter-necked counterparts. This study suggests that maintaining a longer neck requires more nutrients, which puts longer-necked giraffes at risk during a food shortage.
Another theory, the sexual selection hypothesis, proposes that the long necks evolved as a secondary sexual characteristic, giving males an advantage in "necking" contests (see below) to establish dominance and obtain access to sexually receptive females. In support of this theory, necks are longer and heavier for males than females of the same age, and males do not employ other forms of combat. However, one objection is that it fails to explain why female giraffes also have long necks. It has also been proposed that the neck serves to give the animal greater vigilance.
### Legs, locomotion and posture
A giraffe's front and back legs are about the same length. The radius and ulna of the front legs are articulated by the carpus, which, while structurally equivalent to the human wrist, functions as a knee. It appears that a suspensory ligament allows the lanky legs to support the animal's great weight. The hooves of large male giraffes reach 31 cm × 23 cm (12.2 in × 9.1 in) in diameter. The fetlock of the leg is low to the ground, allowing the hoof to better support the animal's weight. Giraffes lack dewclaws and interdigital glands. While the pelvis is relatively short, the ilium has stretched out crests.
A giraffe has only two gaits: walking and galloping. Walking is done by moving the legs on one side of the body, then doing the same on the other side. When galloping, the hind legs move around the front legs before the latter move forward, and the tail will curl up. The movements of the head and neck provide balance and control momentum while galloping. The giraffe can reach a sprint speed of up to 60 km/h (37 mph), and can sustain 50 km/h (31 mph) for several kilometres. Giraffes would probably not be competent swimmers as their long legs would be highly cumbersome in the water, although they might be able to float. When swimming, the thorax would be weighed down by the front legs, making it difficult for the animal to move its neck and legs in harmony or keep its head above the water's surface.
A giraffe rests by lying with its body on top of its folded legs. To lie down, the animal kneels on its front legs and then lowers the rest of its body. To get back up, it first gets on its front knees and positions its backside on top of its hindlegs. It then pulls the backside upwards, and the front legs straighten again. At each stage, the animal swings its head for balance. If the giraffe wants to reach down to drink, it either spreads its front legs or bends its knees. Studies in captivity found the giraffe sleeps intermittently around 4.6 hours per day, mostly at night. It usually sleeps lying down; however, standing sleeps have been recorded, particularly in older individuals. Intermittent short "deep sleep" phases while lying are characterised by the giraffe bending its neck backwards and resting its head on the hip or thigh, a position believed to indicate paradoxical sleep.
### Internal systems
In mammals, the left recurrent laryngeal nerve is longer than the right; in the giraffe, it is over 30 cm (12 in) longer. These nerves are longer in the giraffe than in any other living animal; the left nerve is over 2 m (6 ft 7 in) long. Each nerve cell in this path begins in the brainstem and passes down the neck along the vagus nerve, then branches off into the recurrent laryngeal nerve which passes back up the neck to the larynx. Thus, these nerve cells have a length of nearly 5 m (16 ft) in the largest giraffes. Despite its long neck and large skull, the brain of the giraffe is typical for an ungulate. Evaporative heat loss in the nasal passages keeps the giraffe's brain cool. The shape of the skeleton gives the giraffe a small lung volume relative to its mass. Its long neck gives it a large amount of dead space, in spite of its narrow windpipe. The giraffe also has a high tidal volume so the balance of dead space and tidal volume is much the same as other mammals. The animal can still provide enough oxygen for its tissues, and it can increase its respiratory rate and oxygen diffusion when running.
The giraffe's circulatory system has several adaptations to compensate for its great height. Its 11 kg (25 lb) and 60 cm (2 ft) heart must generate approximately double the blood pressure required for a human to maintain blood flow to the brain. As such, the wall of the heart can be as thick as 7.5 cm (3.0 in). Giraffes have relatively high heart rates for their size, at 150 beats per minute. When the animal lowers its head, the blood rushes down fairly unopposed and a rete mirabile in the upper neck, with its large cross-sectional area, prevents excess blood flow to the brain. When the head is raised again, the blood vessels constrict and push blood into the brain so the animal does not faint. The jugular veins contain several (most commonly seven) valves to prevent blood flowing back into the head from the inferior vena cava and right atrium while the head is lowered. Conversely, the blood vessels in the lower legs are under great pressure because of the weight of fluid pressing down on them. To solve this problem, the skin of the lower legs is thick and tight, preventing too much blood from pouring into them.
Giraffes have oesophageal muscles that are strong enough to allow regurgitation of food from the stomach up the neck and into the mouth for rumination. They have four-chambered stomachs, which are adapted to their specialized diet. The intestines of an adult giraffe measure more than 70 m (230 ft) in length and have a relatively small ratio of small to large intestine. The giraffe has a small, compact liver. In fetuses there may be a small gallbladder that vanishes before birth.
## Behaviour and ecology
### Habitat and feeding
Giraffes usually inhabit savannahs and open woodlands. They prefer areas dominated by Acacieae, Commiphora, Combretum and Terminalia trees over Brachystegia woodland, which is more densely spaced. The Angolan giraffe can be found in desert environments. Giraffes browse on the twigs of trees, preferring those of the subfamily Acacieae and the genera Commiphora and Terminalia, which are important sources of calcium and protein to sustain the giraffe's growth rate. They also feed on shrubs, grass and fruit. A giraffe eats around 34 kg (75 lb) of plant matter daily. When stressed, giraffes may chew on large branches, stripping them of bark. Giraffes are also recorded to chew old bones.
During the wet season, food is abundant and giraffes are more spread out, while during the dry season, they gather around the remaining evergreen trees and bushes. Mothers tend to feed in open areas, presumably to make it easier to detect predators, although this may reduce their feeding efficiency. As a ruminant, the giraffe first chews its food, then swallows it for processing and then visibly passes the half-digested cud up the neck and back into the mouth to chew again. The giraffe requires less food than many other herbivores because the foliage it eats has more concentrated nutrients and it has a more efficient digestive system. The animal's faeces come in the form of small pellets. When it has access to water, a giraffe will go no more than three days without drinking.
Giraffes have a great effect on the trees that they feed on, delaying the growth of young trees for some years and giving "waistlines" to trees that are too tall. Feeding is at its highest during the first and last hours of daytime. Between these hours, giraffes mostly stand and ruminate. Rumination is the dominant activity during the night, when it is mostly done lying down.
### Social life
Giraffes are usually found in groups that vary in size and composition according to ecological, anthropogenic, temporal, and social factors. Traditionally, the composition of these groups had been described as open and ever-changing. For research purposes, a "group" has been defined as "a collection of individuals that are less than a kilometre apart and moving in the same general direction". More recent studies have found that giraffes have long-lasting social groups or cliques based on kinship, sex or other factors, and these groups regularly associate with other groups in larger communities or sub-communities within a fission–fusion society. Proximity to humans can disrupt social arrangements. Masai giraffes in Tanzania sort themselves into different subpopulations of 60–90 adult females with overlapping ranges, each of which differs in reproductive rates and calf mortality. Dispersal is male biased, and can include spatial and/or social dispersal. Adult female subpopulations are connected by males into supercommunities of around 300 animals.
The number of giraffes in a group can range from one up to 66 individuals. Giraffe groups tend to be sex-segregated, although mixed-sex groups made up of adult females and young males also occur. Female groups may be matrilineally related. Generally, females are more selective than males in choosing which individuals of the same sex they associate with. Particularly stable giraffe groups are those made of mothers and their young, which can last weeks or months. Young males also form groups and will engage in playfights. However, as they get older, males become more solitary but may also associate in pairs or with female groups. Giraffes are not territorial, but they have home ranges that vary according to rainfall and proximity to human settlements. Male giraffes occasionally roam far from areas that they normally frequent.
Early biologists suggested giraffes were mute and unable to create enough air flow to vibrate their vocal folds. On the contrary, they have been recorded communicating using snorts, sneezes, coughs, snores, hisses, bursts, moans, grunts, growls and flute-like sounds. During courtship, males emit loud coughs. Females call their young by bellowing. Calves will emit bleats, mooing and mewing sounds. Snorting and hissing are associated with vigilance. During nighttime, giraffes appear to hum to each other. There is some evidence that giraffes use Helmholtz resonance to create infrasound. They also communicate with body language. Dominant males display to other males with an erect posture; holding the chin and head up while walking stiffly and displaying their side. The less dominant show submissiveness by dropping the head and ears, lowering the chin and fleeing.
### Reproduction and parental care
Reproduction in giraffes is broadly polygamous: a few older males mate with the fertile females. Females can reproduce throughout the year and experience oestrus cycling approximately every 15 days. Female giraffes in oestrous are dispersed over space and time, so reproductive adult males adopt a strategy of roaming among female groups to seek mating opportunities, with periodic hormone-induced rutting behaviour approximately every two weeks. Males prefer young adult females over juveniles and older adults.
Male giraffes assess female fertility by tasting the female's urine to detect oestrus, in a multi-step process known as the flehmen response. Once an oestrous female is detected, the male will attempt to court her. When courting, dominant males will keep subordinate ones at bay. A courting male may lick a female's tail, lay his head and neck on her body or nudge her with his ossicones. During copulation, the male stands on his hind legs with his head held up and his front legs resting on the female's sides.
Giraffe gestation lasts 400–460 days, after which a single calf is normally born, although twins occur on rare occasions. The mother gives birth standing up. The calf emerges head and front legs first, having broken through the fetal membranes, and falls to the ground, severing the umbilical cord. A newborn giraffe is 1.7–2 m (5.6–6.6 ft) tall. Within a few hours of birth, the calf can run around and is almost indistinguishable from a one-week-old. However, for the first one to three weeks, it spends most of its time hiding, its coat pattern providing camouflage. The ossicones, which have lain flat in the womb, rise up within a few days.
Mothers with calves will gather in nursery herds, moving or browsing together. Mothers in such a group may sometimes leave their calves with one female while they forage and drink elsewhere. This is known as a "calving pool". Calves are at risk of predation, and a mother giraffe will stand over them and kick at an approaching predator. Females watching calving pools will only alert their own young if they detect a disturbance, although the others will take notice and follow. Allo-sucking, where a calf will suckle a female other than its mother, has been recorded in both wild and captive giraffes. Calves first ruminate at four to six months and stop nursing at six to eight months. Young may not reach independence until they are 14 months old. Females become sexually mature when they are four years old, while males become mature at four or five years. Spermatogenesis in male giraffes begins at three to four years of age. Males must wait until they are at least seven years old to gain the opportunity to mate.
### Necking
Male giraffes use their necks as weapons in combat, a behaviour known as "necking". Necking is used to establish dominance and males that win necking bouts have greater reproductive success. This behaviour occurs at low or high intensity. In low-intensity necking, the combatants rub and lean on each other. The male that can keep itself more upright wins the bout. In high-intensity necking, the combatants will spread their front legs and swing their necks at each other, attempting to land blows with their ossicones. The contestants will try to dodge each other's blows and then prepare to counter. The power of a blow depends on the weight of the skull and the arc of the swing. A necking duel can last more than half an hour, depending on how well matched the combatants are. Although most fights do not lead to serious injury, there have been records of broken jaws, broken necks, and even deaths.
After a duel, it is common for two male giraffes to caress and court each other. Such interactions between males have been found to be more frequent than heterosexual coupling. In one study, up to 94 percent of observed mounting incidents took place between males. The proportion of same-sex activities varied from 30 to 75 percent. Only one percent of same-sex mounting incidents occurred between females.
### Mortality and health
Giraffes have high adult survival probability, and an unusually long lifespan compared to other ruminants, up to 38 years. Adult female survival is significantly correlated with the number of social associations. Because of their size, eyesight and powerful kicks, adult giraffes are mostly safe from predation, with lions being their only major threats. Calves are much more vulnerable than adults and are also preyed on by leopards, spotted hyenas and wild dogs. A quarter to a half of giraffe calves reach adulthood. Calf survival varies according to the season of birth, with calves born during the dry season having higher survival rates.
The local, seasonal presence of large herds of migratory wildebeests and zebras reduces predation pressure on giraffe calves and increases their survival probability. In turn, it has been suggested that other ungulates may benefit from associating with giraffes, as their height allows them to spot predators from further away. Zebras were found to assess predation risk by watching giraffes and spend less time looking around when giraffes are present.
Some parasites feed on giraffes. They are often hosts for ticks, especially in the area around the genitals, which has thinner skin than other areas. Tick species that commonly feed on giraffes are those of the genera Hyalomma, Amblyomma and Rhipicephalus. Giraffes may rely on red-billed and yellow-billed oxpeckers to clean them of ticks and alert them to danger. Giraffes host numerous species of internal parasites and are susceptible to various diseases. They were victims of the (now eradicated) viral illness rinderpest. Giraffes can also suffer from a skin disorder, which comes in the form of wrinkles, lesions or raw fissures. As many as 79% of giraffes show symptoms of the disease in Ruaha National Park, but it did not cause mortality in Tarangire and is less prevalent in areas with fertile soils.
## Human relations
### Cultural significance
With its lanky build and spotted coat, the giraffe has been a source of fascination throughout human history, and its image is widespread in culture. It has represented flexibility, far-sightedness, femininity, fragility, passivity, grace, beauty and the continent of Africa itself.
Giraffes were depicted in art throughout the African continent, including that of the Kiffians, Egyptians, and Kushites. The Kiffians were responsible for a life-size rock engraving of two giraffes, dated 8,000 years ago, that has been called the "world's largest rock art petroglyph". How the giraffe got its height has been the subject of various African folktales. The Tugen people of modern Kenya used the giraffe to depict their god Mda. The Egyptians gave the giraffe its own hieroglyph; 'sr' in Old Egyptian and 'mmy' in later periods.
Giraffes have a presence in modern Western culture. Salvador Dalí depicted them with burning manes in some of his surrealist paintings. Dalí considered the giraffe to be a masculine symbol, and a flaming giraffe was meant to be a "masculine cosmic apocalyptic monster". Several children's books feature the giraffe, including David A. Ufer's The Giraffe Who Was Afraid of Heights, Giles Andreae's Giraffes Can't Dance and Roald Dahl's The Giraffe and the Pelly and Me. Giraffes have appeared in animated films, as minor characters in Disney's The Lion King and Dumbo, and in more prominent roles in The Wild and the Madagascar films. Sophie the Giraffe has been a popular teether since 1961. Another famous fictional giraffe is the Toys "R" Us mascot Geoffrey the Giraffe.
The giraffe has also been used for some scientific experiments and discoveries. Scientists have used the properties of giraffe skin as a model for astronaut and fighter pilot suits because the people in these professions are in danger of passing out if blood rushes to their legs. Computer scientists have modeled the coat patterns of several subspecies using reaction–diffusion mechanisms. The constellation of Camelopardalis, introduced in the seventeenth century, depicts a giraffe. The Tswana people of Botswana traditionally see the constellation Crux as two giraffes—Acrux and Mimosa forming a male, and Gacrux and Delta Crucis forming the female.
### Captivity
The Egyptians were among the earliest people to keep giraffes in captivity and shipped them around the Mediterranean. The giraffe was among the many animals collected and displayed by the Romans. The first one in Rome was brought in by Julius Caesar in 46 BC. With the fall of the Western Roman Empire, the housing of giraffes in Europe declined. During the Middle Ages, giraffes were known to Europeans through contact with the Arabs, who revered the giraffe for its peculiar appearance.
Individual captive giraffes were given celebrity status throughout history. In 1414, a giraffe from Malindi was taken to China by explorer Zheng He and placed in a Ming dynasty zoo. The animal was a source of fascination for the Chinese people, who associated it with the mythical Qilin. The Medici giraffe was a giraffe presented to Lorenzo de' Medici in 1486. It caused a great stir on its arrival in Florence. Zarafa, another famous giraffe, was brought from Egypt to Paris in the early 19th century as a gift for Charles X of France. A sensation, the giraffe was the subject of numerous memorabilia or "giraffanalia".
Giraffes have become popular attractions in modern zoos, though keeping them healthy is difficult as they require vast areas and need to eat large amounts of browse. Captive giraffes in North America and Europe appear to have a higher mortality rate than in the wild; the most common causes being poor husbandry, nutrition and management. Giraffes in zoos display stereotypical behaviours, particularly the licking of inanimate objects and pacing. Zookeepers may offer various activities to stimulate giraffes, including training them to take food from visitors. Stables for giraffes are built particularly high to accommodate their height.
### Exploitation
Giraffes were probably common targets for hunters throughout Africa. Different parts of their bodies were used for different purposes. Their meat was used for food. The tail hairs served as flyswatters, bracelets, necklaces, and threads. Shields, sandals, and drums were made using the skin, and the strings of musical instruments were made from the tendons. In Buganda, the smoke of burning giraffe skin was traditionally used to treat nose bleeds. The Humr people of Kordofan consume the drink Umm Nyolokh, which is prepared from the liver and bone marrow of giraffes. Richard Rudgley hypothesised that Umm Nyolokh might contain DMT. The drink is said to cause hallucinations of giraffes, believed to be the giraffes' ghosts, by the Humr.
## Conservation status
In 2016, giraffes were assessed as Vulnerable from a conservation perspective by the IUCN. In 1985, it was estimated there were 155,000 giraffes in the wild. This had declined to over 140,000 by 1999. Estimates as of 2016 indicate there are approximately 97,500 members of Giraffa in the wild. The Masai and reticulated subspecies are endangered, and the Rothschild subspecies is near threatened. The Nubian subspecies is critically endangered.
The primary causes for giraffe population declines are habitat loss and direct killing for bushmeat markets. Giraffes have been extirpated from much of their historic range, including Eritrea, Guinea, Mauritania and Senegal. They may also have disappeared from Angola, Mali, and Nigeria, but have been introduced to Rwanda and Eswatini. As of 2010, there were more than 1,600 in captivity at Species360-registered zoos. Habitat destruction has hurt the giraffe. In the Sahel, the need for firewood and grazing room for livestock has led to deforestation. Normally, giraffes can coexist with livestock, since they avoid direct competition by feeding above them. In 2017, severe droughts in northern Kenya led to increased tensions over land and the killing of wildlife by herders, with giraffe populations being particularly hit.
Protected areas like national parks provide important habitat and anti-poaching protection to giraffe populations. Community-based conservation efforts outside national parks are also effective at protecting giraffes and their habitats. Private game reserves have contributed to the preservation of giraffe populations in eastern and southern Africa. The giraffe is a protected species in most of its range. It is the national animal of Tanzania and is protected by law; unauthorised killing can result in imprisonment. The UN-backed Convention on Migratory Species selected giraffes for protection in 2017. In 2019, giraffes were listed under Appendix II of the Convention on International Trade in Endangered Species (CITES), which means international trade including in parts/derivatives is regulated.
Translocations are sometimes used to augment or re-establish diminished or extirpated populations, but these activities are risky and difficult to undertake using the best practices of extensive pre- and post-translocation studies and ensuring a viable founding population. Aerial survey is the most common method of monitoring giraffe population trends in the vast roadless tracts of African landscapes, but aerial methods are known to undercount giraffes. Ground-based survey methods are more accurate and can be used in conjunction with aerial surveys to make accurate estimates of population sizes and trends.
## See also
- Fauna of Africa
- Giraffe Centre
- Giraffe Manor - hotel in Nairobi with giraffes
|
3,804,776 |
No. 37 Squadron RAAF
| 1,149,531,961 |
Royal Australian Air Force transport squadron
|
[
"Air force transport squadrons",
"Military units and formations established in 1943",
"RAAF squadrons"
] |
No. 37 Squadron is a Royal Australian Air Force (RAAF) medium tactical airlift squadron. It operates Lockheed Martin C-130J Hercules aircraft from RAAF Base Richmond, New South Wales. The squadron has seen active service flying transport aircraft during World War II, the Vietnam War, the wars in Afghanistan and Iraq, and the military intervention against ISIL. It has also supported Australian humanitarian and peacekeeping operations around the world, including in Somalia, East Timor, Bali, Papua New Guinea, and the Philippines.
The squadron was formed at RAAF Station Laverton, Victoria, in July 1943, and equipped with Lockheed C-60 Lodestars that it operated in Australia, New Guinea and the Dutch East Indies. Towards the end of the war it began flying Douglas C-47 Dakotas. It became part of No. 86 (Transport) Wing, headquartered at RAAF Station Schofields, New South Wales, in 1946 but was disbanded two years later. In response to Australia's increasing air transport needs during the Vietnam War, the squadron was re-formed at Richmond in February 1966, and equipped with the C-130E Hercules. It began converting to the C-130J model in 1999, and between 2006 and 2012 also operated C-130Hs formerly of No. 36 Squadron. No. 37 Squadron came under the control of a re-formed No. 86 Wing from 1987 until 2010, when it was transferred to No. 84 Wing.
## Role and equipment
No. 37 Squadron is tasked with medium tactical airlift in Australia and overseas, transporting troops and cargo, and conducting medical evacuation, search-and-rescue, and airdrop missions. It is located at RAAF Base Richmond, New South Wales, and controlled by No. 84 Wing, which is part of Air Mobility Group. As of July 2013, the squadron comprised more than 400 personnel organised into four flights of aircrew, an administrative and operational section, and a maintenance section responsible for day-to-day aircraft servicing as well as regular maintenance cycles of six weeks' duration. Intermediate and heavy maintenance is contracted to Airbus Group Australia Pacific (airframe) and StandardAero (engines). No. 37 Squadron's motto is "Foremost".
The squadron operates twelve Lockheed Martin C-130J Hercules, which entered service in 1999. The aircraft are generally crewed by two pilots and a loadmaster, the latter being responsible for the loading, carriage and unloading of cargo and passengers. The C-130J can carry 19,500 kilograms (43,000 lb) of cargo, or 120 passengers. It has a range of over 6,800 km (4,200 miles) without payload, and is able to operate from short and unsealed airstrips. From 1999 to 2017, No. 285 Squadron operated a C-130J flight simulator at Richmond and was responsible for training No. 37 Squadron's aircrew and maintenance personnel; its role and most of its personnel were subsequently transferred to No. 37 Squadron's Training Flight. No. 37 Squadron maintains a detachment of two aircraft at Al Minhad Air Base in the United Arab Emirates to support operations in the Middle East Region under Operation Accordion. The C-130Js are expected to remain in RAAF service until 2030.
## History
### World War II and aftermath
No. 37 (Transport) Squadron was formed on 15 July 1943 at RAAF Station Laverton, Victoria, with a staff of two officers and thirteen airmen. Its first commanding officer, Squadron Leader Neville Hemsworth (late of No. 34 Squadron), arrived on 21 July, and its first aircraft, a single-engined Northrop Delta (also formerly of No. 34 Squadron), was delivered on 2 August. The squadron was allocated the first of a batch of ten twin-engined Lockheed C-60 Lodestar transports on 23 August. The Delta was written off following an accident on 30 September. By then the squadron's staff numbered 190, including forty-five officers. It was declared operational on 11 October 1943, undertaking regular courier flights across Australia to destinations including Perth, Western Australia; Darwin and Alice Springs, Northern Territory; Adelaide, South Australia; Maryborough, Queensland; and Launceston, Tasmania.
By mid-1944, the squadron had expanded its operations to New Guinea, making courier flights to Merauke initially, and later Wewak, Noemfoor and Hollandia. It transferred to Essendon, Victoria, on 1 September. The unit was now one of eight Australian transport squadrons, all of which operated under the control of RAAF Headquarters, Melbourne. Their primary duty was supporting the Australian military, though they could also be released for urgent requests by General Douglas MacArthur's South West Pacific Area headquarters. A Lodestar crashed and burned on takeoff at Merauke on 26 January 1945 but all aboard escaped injury; it was the only hull loss suffered by the type in Australian service. No. 37 Squadron received its first three Douglas C-47 Dakotas the following month, and by the end of March had a complement of eighteen aircraft: nine Dakotas, seven Lodestars, a Douglas DC-2, and a de Havilland Tiger Moth. The next month it began operating detachments out of Parafield, South Australia, and Morotai in the Dutch East Indies. On 6 July 1945, one of the squadron's Dakotas transported the body of Prime Minister John Curtin from Canberra to Perth for burial. By September 1945, No. 37 Squadron's strength was 357 staff, including 111 officers, sixteen Dakotas, two Lodestars, a DC-2, and a Tiger Moth.
Following the end of hostilities, No. 37 Squadron repatriated former prisoners of war from Singapore to Australia. On 27 July 1946, it moved to RAAF Station Schofields, New South Wales, where it came under the control of No. 86 (Transport) Wing along with Nos. 36 and 38 Squadrons, also operating Dakotas. Another unit of No. 86 Wing, No. 486 (Maintenance) Squadron, was responsible for servicing the Dakotas. On 30 September 1946, No. 37 Squadron was assigned the regular courier service to Japan that had previously been flown by No. 36 Squadron, to support the British Commonwealth Occupation Force. In January 1947, No. 37 Squadron handed over the Japan courier run to No. 38 Squadron, and the following month took over the Lae courier service previously flown by No. 36 Squadron; the Rabaul courier run was added in April. By the end of 1947, No. 37 Squadron's personnel numbered fifty-six, including twenty-four officers, and it had an average of ten Dakotas on strength. The unit was disbanded at Schofields on 24 February 1948.
### Re-establishment
On 27 September 1965, Minister for Air Peter Howson announced that No. 37 Squadron was to be re-raised to operate twelve Lockheed C-130E Hercules transport aircraft that had been purchased by the Federal government; the new aircraft would allow the RAAF to support Australian deployments in South East Asia while continuing to meet its domestic commitments. The squadron was formed at RAAF Base Richmond on 21 February 1966, under the command of Wing Commander Ron McKimm. It joined No. 36 Squadron, which had been operating C-130A Hercules since 1958. No. 486 Squadron, disbanded in 1964, was re-formed at Richmond to provide maintenance for both Hercules squadrons; major repairs and upgrades to the C-130s were the responsibility of No. 2 Aircraft Depot (later No. 503 Wing). As the C-130E had a longer range and could carry a greater payload than the C-130A, No. 37 Squadron was generally assigned strategic tasks, while No. 36 Squadron's responsibilities were primarily tactical in nature. No. 37 Squadron began taking delivery of its C-130Es in August, and by the end of September its staff numbered eighty-six, including twenty-one officers. In February 1967, the squadron commenced long-range missions in support of Australian forces in the Vietnam War, including aero-medical evacuations conveying wounded soldiers back to Australia, generally via RAAF Base Butterworth, Malaysia. Initially both C-130A and E models were employed for such evacuations, but only C-130Es were assigned to this task from May 1967, as they offered more comfortable conditions and were capable of flying directly between South Vietnam and Australia if required. By the end of February 1968, No. 37 Squadron had a strength of 207 personnel: eighty-five aircrew, including fifty-one officers, and 122 ground staff, including three officers. The squadron transported the last Australian forces out of Vietnam in December 1972, following the Federal government's decision to withdraw from the conflict.
As well as participating in military exercises and overseas peacekeeping commitments, the Hercules became a familiar sight in the Southern Pacific, called on for relief operations following many natural disasters including tsunamis in New Guinea, cyclones in the Solomons and Tonga, and fires and floods throughout Australia. It played a major role in the evacuation of civilians following Cyclone Tracy in Darwin in 1974–75; a No. 37 Squadron C-130E was the first aircraft to touch down in Darwin following the disaster. The squadron contributed eleven aircraft to the relief effort, carrying 4,400 passengers and 1,300,000 pounds (590,000 kg) of cargo. No. 37 Squadron aircraft took part in Operation Babylift, the US-led effort to evacuate the orphaned children of American servicemen from Vietnam in April 1975. Later that month, two of the squadron's aircraft were assigned to the United Nations (UN) to transport supplies throughout South East Asia; the C-130s' Australian roundels were painted over with UN symbols to signify the mission's neutrality. Commencing operations in May, the aircraft flew supplies into Laos and transported cargo between Thailand, Butterworth, Hong Kong and Singapore, completing ninety-one sorties by the time the mission ended in early June. The Hercules also evacuated Australian embassy personnel from Saigon, South Vietnam, and Phnom Penh, Cambodia, following the end of the Vietnam War. No. 37 Squadron was awarded the Gloucester Cup by the Governor General in June 1976 for its performance in 1974–75.
In January–February 1979, two No. 37 Squadron C-130Es evacuated Australian and other foreign embassy staff from Tehran, shortly before the collapse of royal rule during the Iranian Revolution. The same year, the squadron began operations with two ex-Qantas Boeing 707s, handing them over to No. 33 Flight (later No. 33 Squadron) at the beginning of 1981. No. 37 Squadron transported the Popemobiles on John Paul II's 1986 tour of Australia; its other unusual cargoes have included a stud bull presented to the Chinese government, kangaroos and sheep to Malaysia, and an exhibition of China's Entombed Warriors. In February 1987, the unit again joined No. 36 Squadron, along with No. 33 Squadron, as part of a re-formed No. 86 Wing under the newly established Air Lift Group (later Air Mobility Group). The following year, No. 37 Squadron achieved 200,000 accident-free flying hours on the Hercules. Members of the Australian public flew aboard the C-130s when the aircraft were employed by the Federal government to provide transport during the 1989 Australian pilots' dispute, which curtailed operations by the two domestic airlines. In December 1990 and January 1991, a detachment of C-130s from Nos. 36 and 37 Squadrons flew missions to Dubai in support of Australia's naval contribution to the Gulf War. No. 37 Squadron transported Australian troops to Somalia as part of Operation Solace in January 1993, and provided a shuttle service between Kenya and Somalia during May. No. 486 Squadron was disbanded in October 1998, having transferred its C-130 maintenance functions to Nos. 36 and 37 Squadrons. No. 37 Squadron began re-equipping with new-model C-130J Hercules in September 1999. Its aircraft formed part of a detachment of C-130s supporting INTERFET forces in East Timor between September 1999 and February 2000, under Operation Warden. The last C-130Es were taken out of service in November 2000. No. 37 Squadron was awarded the Gloucester Cup in 2001, the same year it took delivery of its twelfth and final C-130J. Five C-130s of Nos. 36 and 37 Squadrons participated in relief efforts following the Bali bombings in October 2002.
In September 2004, aircraft from No. 37 Squadron joined the rotating detachment of C-130s established by No. 36 Squadron in the Middle East Area of Operations (MEAO) in February 2003, following the invasion of Iraq; the C-130Js were required to be fitted with self-protection equipment before deploying to the MEAO. No. 37 Squadron was strengthened to create a "super squadron" on 17 November 2006, when its force of twelve C-130Js was augmented by No. 36 Squadron's twelve C-130Hs, prior to the latter re-equipping with Boeing C-17 Globemasters and relocating to RAAF Base Amberley, Queensland. Two of the C-130s joined DHC-4 Caribous from No. 38 Squadron as part of the RAAF's initial contribution to Operation Papua New Guinea Assist following Cyclone Guba in November 2007. No. 37 Squadron took over full responsibility for the Hercules detachment to the MEAO in mid-2008, and in March 2010 one of its C-130Js completed the detachment's 20,000th hour of flying operations. The squadron was transferred from No. 86 Wing to No. 84 Wing on 1 October 2010, as part of a restructure of Air Lift Group. It was presented with the Gloucester Cup for its proficiency in 2011 at a ceremony on 31 May 2012. The C-130Hs were retired the same year, the last pair at Richmond on 30 November. In January 2013, No. 37 Squadron undertook a successful search-and-rescue mission for Alain Delord, a missing round-the-world yachtsman who was found approximately 500 nautical miles (930 km) south of Tasmania. Crews located Delord adrift in a life raft before airdropping supplies, maintaining watch and ultimately guiding in a rescue vessel fifty-eight hours later.
No. 37 Squadron was awarded the Gloucester Cup for proficiency in March 2013. It celebrated its 70th anniversary on 17 July, undertaking a two-ship flight over Sydney and the Blue Mountains. That November, the squadron deployed to the Philippines to participate in humanitarian relief operations in the wake of Typhoon Haiyan. In August 2014, aircraft from No. 37 Squadron based in the Middle East were involved in the airdrop of humanitarian supplies to civilians in Iraq following an offensive by Islamic State forces. The first drop occurred on the night of 13/14 August, when one of the squadron's C-130Js took part in a 16-aircraft mission including US C-17s and C-130Hs and a British C-130J that delivered supplies to Yezidi civilians trapped on Mount Sinjar. According to the Australian Department of Defence, it was the RAAF's "most complex operational humanitarian air drop mission in more than a decade". A second drop was conducted to deliver supplies to isolated civilians in the northern Iraqi town of Amirli. By September 2014, the RAAF's C-130Js had accumulated over 100,000 flying hours. Later that month, a C-130J took part in the airlift of arms and munitions to forces in Kurdish-controlled northern Iraq; the involvement of RAAF transport aircraft in operations in Iraq is ongoing. From 7 to 10 December 2015, a C-130J of No. 37 Squadron flying out of Guam joined American and Japanese aircraft in Operation Christmas Drop, a humanitarian aerial supply operation in the west Pacific and Micronesia. No. 37 Squadron was awarded the Meritorious Unit Citation in the Queen's Birthday Honours on 13 June 2016 for "sustained outstanding service in warlike operations throughout the Middle East Area of Operations over the period January 2002 to June 2014". The squadron commemorated sixty years of RAAF Hercules operations in December 2018, and twenty years of C-130J operations in September 2019.
One of the C-130s flew from Australia to Antarctica in February 2020, the first time a RAAF Hercules had done so since 1983, to provide equipment for the Australian Antarctic Division near Casey Station. The squadron was awarded the RAAF Maintenance Trophy in April 2023.
## See also
- Lockheed C-130 Hercules in Australian service
|
38,817,262 |
Fantastic Novels
| 1,109,612,637 |
US pulp science fiction magazine
|
[
"Bimonthly magazines published in the United States",
"Defunct science fiction magazines published in the United States",
"Fantasy fiction magazines",
"Magazines disestablished in 1951",
"Magazines established in 1940",
"Magazines published in New York City",
"Pulp magazines",
"Science fiction magazines established in the 1940s"
] |
Fantastic Novels was an American science fiction and fantasy pulp magazine published by the Munsey Company of New York from 1940 to 1941, and again by Popular Publications, also of New York, from 1948 to 1951. It was a companion to Famous Fantastic Mysteries. Like that magazine, it mostly reprinted science fiction and fantasy classics from earlier decades, such as novels by A. Merritt, George Allan England, and Victor Rousseau, though it occasionally published reprints of more recent work, such as Earth's Last Citadel, by Henry Kuttner and C. L. Moore.
The magazine lasted for five issues in its first incarnation, and for another twenty in the revived version from Popular Publications. Mary Gnaedinger edited both series; her interest in reprinting Merritt's work helped make him one of the better-known fantasy writers of the era. A Canadian edition from 1948 to 1951 reprinted seventeen issues of the second series; two others were reprinted in Great Britain in 1950 and 1951.
## Publication history
In the early 20th century, science fiction stories were frequently published in popular magazines, with the Munsey Company, a major pulp magazine publisher, printing a great deal of science fiction. In 1926, Amazing Stories became the first pulp magazine specializing in science fiction. Munsey continued to print sf in Argosy during the 1930s, and in 1939 took advantage of the new genre's growing popularity by launching Famous Fantastic Mysteries, a vehicle to reprint the most popular fantasy and sf stories from the Munsey magazines.
The new title immediately became successful, and demand for reprints of old favorites was such that Munsey decided to launch an additional magazine, Fantastic Novels, in July 1940, edited, like Famous Fantastic Mysteries, by Mary Gnaedinger. The two magazines were placed on bimonthly schedules, arranged to alternate with each other, though the schedule slipped slightly with the fifth issue of Fantastic Novels, dated April 1941 but following the January 1941 issue. Fantastic Novels was suspended after that issue and merged with Famous Fantastic Mysteries. The stated reason was that Famous Fantastic Mysteries "is apparently the favorite title", but it seems likely that production difficulties caused by World War II played a part. The June 1941 and August 1941 issues of Famous Fantastic Mysteries both carried the slogan "Combined with Fantastic Novels Magazine" on the cover.
Fantastic Novels reappeared in 1948 through Popular Publications, which had acquired Famous Fantastic Mysteries from Munsey at the end of 1942. Gnaedinger remained editor of Famous Fantastic Mysteries when Popular took over, and was editor of the second incarnation of Fantastic Novels. The March 1948 issue, the first of the new series, was catalogued volume 1, number 6, as if there had been no break in publication. This version lasted for a further 20 issues, ending without notice with the June 1951 issue. It was apparently a sudden decision; the final issue had announced plans to reprint Otis Adelbert Kline's Maza of the Moon.
## Contents
Fantastic Novels came into existence because of the demand from readers of Famous Fantastic Mysteries for book-length reprints. Gnaedinger observed that "Everyone seems to have realized that although [the] set-up of five to seven stories with two serials running, was highly satisfactory, that the long list of novels would have to be speeded up somehow". When the new magazine was launched, Famous Fantastic Mysteries was partway through serialization of Austin Hall and Homer Eon Flint's The Blind Spot, with the third episode appearing in the May/June 1940 issue. Rather than complete the serialization, Gnaedinger printed the novel in its entirety in the first issue of Fantastic Novels, ensuring that readers of Famous Fantastic Mysteries would also acquire the new magazine. Over the next four issues she printed Ray Cummings' People of the Golden Atom, Ralph Milne Farley's The Radio Beasts, and two novels by A. Merritt: The Snake Mother and The Dwellers in the Mirage. Gnaedinger's interest in reprinting Merritt's work helped make him one of the better-known fantasy writers of the era.
In the second series, from 1948 to 1951, Gnaedinger continued to reprint work by Merritt, along with other reader favorites from the Munsey years. Works by George Allan England, Victor Rousseau, Ray Cummings, and Francis Stevens (the pen name of Gertrude Barrows Bennett) appeared, as well as occasional reprints of more recent work, such as Earth's Last Citadel, by Henry Kuttner and C. L. Moore, which had been serialized in Argosy in 1943. When first Fantastic Novels and, two years later, Famous Fantastic Mysteries ceased publication in the early 1950s, it was likely because the audience for science fiction had grown too sophisticated for these early works.
Each issue, except the last one, featured a lead novel with additional short fiction. The cover artwork was mostly by Virgil Finlay, Lawrence Stevens, Peter Stevens, and Norman Saunders, with one early cover contributed by Frank R. Paul.
## Bibliographic details
Mary Gnaedinger edited Fantastic Novels for both the Munsey and Popular Publications series. Five issues appeared between July 1940 and April 1941, and an additional twenty from March 1948 to June 1951. The schedule was bimonthly, with only two irregularities: the issues that would have been dated March 1941 and March 1951 were each delayed by a month. The volume numbering was regular throughout, with four volumes of six numbers, and a final fifth volume of one number. The magazine was printed in pulp format throughout both series, and was priced at 20 cents for the first two issues; then 10 cents for the remainder of the first series and 25 cents for issues in the second series. Fantastic Novels was 144 pages for the first two issues, 128 pages for two issues, and 112 pages for the last issue of the first series; it was 132 pages from the start of the second series until the November 1950 issue, and then 128 pages for January 1951, and 112 pages for the last two issues.
A Canadian reprint edition ran from September 1948 to June 1951; these were published by the Toronto-based New Publications. They were half an inch taller than the U.S. editions and used different back-cover advertisements, but were otherwise identical to the U.S. issues of the same date. Two issues were released in Britain: a single issue was released in March 1950; it was a copy of the November 1949 U.S. issue but was neither numbered nor dated. The other British issue was a copy of the May 1949 issue, cut to only 64 pages; it was released in June 1951 and was undated but numbered 1. Both these issues were published by Pemberton's and distributed by Thorpe & Porter.
|
746,586 |
M-28 (Michigan highway)
| 1,143,394,832 |
State highway in Michigan, United States
|
[
"Lake Superior Circle Tour",
"M-28 (Michigan highway)",
"State highways in Michigan",
"Transportation in Alger County, Michigan",
"Transportation in Baraga County, Michigan",
"Transportation in Chippewa County, Michigan",
"Transportation in Gogebic County, Michigan",
"Transportation in Houghton County, Michigan",
"Transportation in Luce County, Michigan",
"Transportation in Marquette County, Michigan",
"Transportation in Ontonagon County, Michigan",
"Transportation in Schoolcraft County, Michigan"
] |
M-28 is an east–west state trunkline highway that traverses nearly all of the Upper Peninsula of the U.S. state of Michigan, from Wakefield to near Sault Ste. Marie in Bruce Township. Along with US Highway 2 (US 2), M-28 forms a pair of primary highways linking the Upper Peninsula from end to end, providing a major access route for traffic from Michigan and Canada along the southern shore of Lake Superior. M-28 is the longest state trunkline in Michigan numbered with the "M-" prefix at 290.373 miles (467.310 km). The entire highway is listed on the National Highway System, while three sections of M-28 are part of the Lake Superior Circle Tour. M-28 also carries two memorial highway designations along its route.
Throughout its course across the Upper Peninsula, M-28 passes through forested woodlands, bog swamps, urbanized areas, and along the Lake Superior shoreline. Sections of roadway cross the Ottawa National Forest and both units of the Hiawatha National Forest. Some of the other landmarks accessible from M-28 include the Seney Stretch, Seney National Wildlife Refuge and several historic bridges.
M-28 is an original trunkline designation, dating to the 1919 formation of the state's trunkline system. The original highway was much shorter than the current version. M-28 was expanded eastward to the Sault Ste. Marie area in the late 1920s. The western end has been expanded twice to different locations on the Wisconsin state line. Other changes along the routing have led to the creation of three different business loops at various times, with one still extant. Future changes, proposed by Marquette County but not accepted by the Michigan Department of Transportation (MDOT), could see M-28 rerouted over County Road 480 (CR 480).
## Route description
M-28 is a major highway for Michigan and Canadian traffic along the south shore of Lake Superior. It forms the northern half of a pair of primary trunklines linking the Upper Peninsula from end to end; US 2 is the southern partner. The 290.373-mile (467.310 km) highway is mostly two lanes and undivided, except for sections that are concurrent with US 41 near Marquette. The "Marquette Bypass" portion of US 41/M-28 is a four-lane expressway, and other segments of the highway in Marquette County also have four lanes. The entire route is part of the National Highway System, and three sections of the trunkline are part of the Lake Superior Circle Tour.
### Western terminus to Shingleton
In the west, M-28 begins at a signalized intersection with US 2 in Wakefield. Heading north, the highway passes Sunday Lake on its way out of town. After crossing into southwestern Ontonagon County and the Eastern Time Zone, the trunkline highway skirts the northern shore of Lake Gogebic, running concurrently with M-64. The first section of M-28 designated as a part of the Lake Superior Circle Tour is from the western terminus to the eastern junction with M-64 in Bergland, where the Circle Tour turns north along M-64, leaving M-28. Here, M-28 has its lowest traffic counts; in MDOT's 2013 survey, the road carried an average annual daily traffic (AADT) of only 1,425 vehicles on a section of highway between Bergland and the US 45 intersection in Bruce Crossing. The trunkline runs through heavily forested areas of southern Houghton and Baraga counties. At the eastern junction with US 41 near Covington, M-28 receives the Circle Tour designation again and exits the Ottawa National Forest.
In Baraga and Marquette counties, US 41/M-28 passes through hilly terrain before entering the urban areas of Ishpeming, Negaunee and Marquette. Approximately 13,000–17,000 vehicles use this section from Ishpeming eastward through Negaunee. West of the city of Marquette, US 41/M-28 had a peak 2013 AADT of 32,805 vehicles in Marquette Township along a retail and business corridor. This peak level is sustained until the start of the Marquette Bypass, where the traffic returns to the 16,500-vehicle and higher levels seen in Ishpeming and Negaunee. South of the city of Marquette, traffic counts once again climb to 19,620 vehicles. In Chocolay Township the AADT drops to 8,840 vehicles before tapering off to 3,065 vehicles by the county line.
At the Ishpeming–Negaunee city line, M-28 changes memorial highway designations. From the western terminus to this point, M-28 is called the "Veterans Memorial Highway", but it becomes the "D. J. Jacobetti Memorial Highway" to honor the longest-serving member of the Michigan Legislature, Dominic J. Jacobetti. The Jacobetti Highway designation ends at the eastern M-123 junction in Chippewa County.
Between Marquette and Munising, M-28 closely parallels the Lake Superior shoreline, providing scenic views of the lake and its "lonesome sandy beaches". The Lakenenland sculpture park is located in Chocolay Township near Shot Point in eastern Marquette County. This roadside attraction is owned by Tom Lakenen and features fanciful works of art made of scrap iron. Near the community of Au Train, M-28 crosses into the western unit of the Hiawatha National Forest. West of Munising is a ferry dock offering transport to the Grand Island National Recreation Area, and at Munising there is easy access to Pictured Rocks National Lakeshore. The roadway also features variable-message signs to warn motorists of winter weather-related traffic closures along the lakeshore. Installed at the US 41 and M-94 junctions, the signs advise motorists which sections of roadway are closed. Per MDOT policy, only snowplows are allowed on these sections during a closure. The highway exits the Hiawatha National Forest at the Alger County–Schoolcraft County line along the Seney Stretch.
### Seney Stretch
The portion of M-28 between Seney and Shingleton, called the Seney Stretch, is 25 miles (40 km) of "straight-as-an-arrow highway" across the Great Manistique Swamp, "though others claim it's 50 miles [80 km], only because it seems longer." The Seney Stretch is the longest such section of highway in the state, and "one of the longest stretches of curveless highway east of the Mississippi." The highway has been called the "state's most boring route" by the Michigan Economic Development Corporation (MEDC) and Hunts' Guide; its straightness and flatness over a great distance are the reasons given for that reputation.
The road across the swamp was constructed parallel to the line of the Duluth, South Shore and Atlantic Railway (later the Soo Line Railroad). It was first numbered as a part of M-25 when that designation was used along today's M-28 east of US 41. The most significant changes made to the stretch since its original construction were the addition of passing relief lanes and a full-scale, year-round rest area in 1999.
Part of the Seney Stretch forms the northern border of the Seney National Wildlife Refuge. Established in 1935, this refuge is a managed wetland in Schoolcraft County. It has an area of 95,212 acres (385 km<sup>2</sup>), and contains the Strangmoor Bog National Natural Landmark within its boundaries.
### Seney to eastern terminus
Past Seney, M-28 once again enters woodlands on the eastern end of the Upper Peninsula. In Luce County, the roadway passes through the community of McMillan en route to Newberry. The Circle Tour departs M-28 to follow M-123 at Newberry, looping north to the Tahquamenon Falls State Park. East of town, the road passes Luce County Airport off of Luce CR 399. From there, M-28 crosses the east and west branches of the Sage River and passes south of Soo Junction, before the Chippewa County border.
In Chippewa County, M-28 begins bending slightly east-northeastward. Hulbert Lake is located south of Hulbert; north of the lake, the highway enters the eastern unit of the Hiawatha National Forest. At the eastern junction of M-28 and M-123 near Eckerman and Strongs, the Circle Tour returns to M-28 and the D.J. Jacobetti Memorial Highway designation ends. The highway leaves the eastern unit of the Hiawatha National Forest between the communities of Raco and Brimley. M-221 leads north from the main highway on an old routing of M-28 to connect to the community of Brimley and the Bay Mills Indian Community. Brimley State Park is just east of Brimley on the old 6 Mile Road alignment of M-28. The highway meets Interstate 75 (I-75) at exit 386, and the Lake Superior Circle Tour departs M-28 to follow I-75. This interchange is just west of H-63/Mackinac Trail, a former segment of US 2. M-28 continues three miles (4.8 km) farther to its eastern terminus with M-129.
### Services
Along the routing of M-28, MDOT has established several roadside parks and rest areas. Two of these are in Ontonagon County near Ewen and Trout Creek. A park with a picnic area and a footbridge lies near Tioga Creek in Baraga County east of the US 41 junction. In Michigamme a scenic turnout and a roadside park overlook Lake Michigamme, and along Lake Superior south of Marquette is a tourist information center built as a log cabin. East of the H-01 junction in Au Train is a roadside park that includes Scott Falls. Further east, a year-round rest area is located on the western end of the Seney Stretch. Three other roadside parks lie east of Harvey in Shelter Bay, on the shores of Deer Lake and west of Newberry.
## History
### Mainline history
Formed by July 1, 1919, M-28 began in Wakefield at a junction with then M-12 and ran roughly along the current alignment to end at M-15, 6.5 miles (10.5 km) east of Covington. These two termini roughly correspond to the modern US 2 and western US 41 junctions respectively. M-28 was extended in 1927 along US 41 into Marquette County and east over M-25 through Chatham, Munising, and Newberry, before ending in downtown Sault Ste. Marie. At Negaunee, M-28 took the former routing of M-15 between Negaunee and Marquette for 10 miles (16 km) while US 41 ran along a portion of M-35. This southern loop routing of M-28 lasted until approximately 1936, when M-28 was moved to US 41, and the former route became CR 492. A new routing of M-28 in the Newberry area opened later that year, and a new M-28A (later Bus. M-28) existed until 1953. Another realignment in 1937 marked the transfer of M-28 out of downtown Ishpeming and Negaunee. This former routing later became Bus. M-28.
In the late 1930s, a highway numbered M-178 was designated between M-28 south of Munising and M-94 in town. In 1941, the routings of M-28 and M-94 were reversed between Harvey and Munising, and M-28 supplanted the M-178 designation completely. Since then, M-28 has run along the lakeshore through Au Train. In 1942, M-28 was extended along US 2 to the state line at Ironwood, and its eastern end through Brimley was moved to a new alignment ending at US 2 in Dafter. The eastern end was moved along US 2 back to Sault Ste. Marie in 1948, though the terminus was returned to Dafter in 1950.
From 1952 to 1962, M-28 crossed US 2 at Wakefield going south and stopped at the Wisconsin border, connecting with a county road. This segment of the highway (now Gogebic CR 519) was transferred back to the county in 1962. M-94 previously looped along Munising-Van Meer-Shingleton Road (now H-58 and H-15) north of M-28 between Munising and Shingleton. This routing was abandoned on November 7, 1963, in favor of the current concurrency. The last significant change to the M-28 routing occurred on March 3, 1989, when the eastern terminus was moved east to M-129.
MDOT unveiled plans on March 31, 2009, to rebuild the intersection between Front Street and the eastern end of the Marquette Bypass during 2010. The previous intersection configuration dated back to the 1960s and had been labeled as "dangerous and [causing] significant traffic delays" by the designers of the replacement. A traffic study concluded in 2007 that the intersection would need either a roundabout or a traffic signal with several turning lanes to accommodate the traffic needs in the area. MDOT decided in favor of a two-lane, 150-foot (46 m) roundabout retaining the right-turn lanes from the previous intersection layout. These lanes allow right-turning traffic to bypass the circle at the center of the intersection.
Construction started on the project in May. A section of the intersection was opened in July to traffic from the south that turns west. The lanes northbound into downtown were opened in the beginning of August, and the city held a ribbon cutting ceremony on August 19, 2010. The remaining lanes were opened the next day.
### Historic bridges
MDOT has highlighted five historic bridges along the route of M-28 on the MDOT website. In Interior Township, Ontonagon County, the highway crosses the Ontonagon River over a bridge built in 1929. Designed by the State Highway Department and built by the firm of Meads and Anderson, the Ontonagon River Bridge is one of only three steel arch bridges in Michigan. The main span arch is 150 feet (46 m) long. A former routing of M-28 in Covington Township crosses the Rock River. Although this section was bypassed by a new alignment of the trunkline in 1924, the bridge remains complete "with corbeled bulkheads and six panels recessed in the concrete spandrel walls." The corbels and spandrels are decorative features found in the concrete sides of the bridge.
Today, drivers cannot use the Peshekee River Bridge south of US 41/M-28 in western Marquette County's Michigamme Township. The bridge was listed on the National Register of Historic Places in 1999 as "Trunk Line Bridge No. 1" for its engineering and architectural significance. MDOT has listed it as "one of Michigan's most important vehicular bridges." It was the first bridge designed by the Michigan State Highway Department, the forerunner to MDOT, in 1914. It was bypassed by a newer bridge built over the Peshekee River on US 41/M-28 and was subsequently abandoned as a roadway. The replacement bridge was itself bypassed and demolished in 1995.
The next historic bridge listed by MDOT along M-28 is over the Sand River in Onota Township in Alger County. While not visible to motorists, the bridge, constructed in 1939, is the longest rural rigid-frame span in Michigan. Most bridges of this type were built in urban locations, and soil conditions in the state limit locations for this style of bridge. The bridge over the East Branch of the Tahquamenon River in Chippewa County was built in 1926 as a "formative exercise in what would evolve into a state standard design." The 55-foot (17 m) bridge was built with nine lines of I-beams encased in concrete. Only one other bridge in Michigan was built with such concrete encasement.
## Future
In its August 24, 2005 edition, the Marquette Mining Journal reported that the Marquette County Board and the County Road Commission were negotiating with MDOT to transfer jurisdiction of Marquette County Road 480 to the state. Several routing options have been discussed, though all would make CR 480 a part of M-28. Cost was the primary reason given for rerouting M-28 along CR 480: "The road commission receives about \$50,000 a year in state gas tax money but spends about \$100,000 to maintain CR 480 because of the type and volume of traffic it receives." Transferring CR 480 to the state would shift those maintenance costs to MDOT as well.
MDOT has indicated that it has not requested jurisdiction; rather, if it assumed control of the route, the community would need to support a through route. Several proposals have arisen, including creating a "spur" from US 41/M-28 through the east end of Ishpeming to meet CR 480 west of Negaunee. This spur would pass through recently reopened former mining "caving grounds" and to the south of the Mather A & B Mine complex. According to Gerry Corkin, Marquette County Board Chairman, "the land that was purchased by Ishpeming and Negaunee, the mining company land, this has the potential to help in the development of that if this is compatible. I think both cities will be interested in taking a look at what the land uses are and where this [spur] would push through."
The spur proposal would open land to development between the downtown areas of the two cities. If jurisdiction is transferred, and M-28 is routed over CR 480 as proposed, M-28 would leave the concurrency with US 41 near Teal Lake in Negaunee, and cross the caving grounds west of downtown to connect to Rail Street. Rail Street would serve as the connector to CR 480, which ends at the intersection of Rail and Ann streets and Healey Avenue. Proposals indicate two routing options for the east end of CR 480. One would route M-28 back along US 41 from Beaver Grove north of the CR 480 eastern terminus to the existing M-28 in Harvey. A second would route it along CR 551/Cherry Creek Road from CR 480 to M-28 in Harvey.
## Business loops
There have been three business loops for M-28: Ishpeming–Negaunee, Marquette and Newberry. Only the business loop serving Ishpeming and Negaunee is still a state-maintained trunkline. US 41/M-28 was relocated to bypass the two cities' downtowns in 1937. The highway through downtown Ishpeming and Negaunee later carried the ALT US 41/ALT M-28 designation before being designated Bus. M-28 in 1958. The western end of the business loop was transferred to local government control when Bus. M-28 was moved along Lakeshore Drive in 1999.
Bus. US 41 in Marquette was first shown on a map in 1964 after the construction of the Marquette Bypass. It was later designated Bus. US 41/Bus. M-28 on a map in 1975; this second designation was removed from maps by 1982. The entire business loop was turned back to local control in a "route swap" between the City of Marquette and MDOT announced in early 2005. The proposal transferred jurisdiction over the unsigned M-554 and the business route from the state to the city, while the state would take jurisdiction over a segment of McClellan Avenue to be used to extend M-553 to US 41/M-28. In addition, MDOT would pay \$2.5 million for reconstruction work planned for 2007. The transfer would increase Marquette's operational and maintenance liability expenses by \$26,000 and place the financial burden of the future replacement of a stop light on the city. On October 10, 2005, MDOT and Marquette transferred jurisdiction over the three roadways, and Bus. US 41 was decommissioned when the local government took control of Washington and Front streets. The 2006 maps no longer showed the former business loop.
The Newberry business loop was designated M-28A from 1936 until 1952. MSHD maps showed it signed as Bus. M-28 in 1952, before the route was turned back to local control in 1953.
## Major intersections
## See also
- List of longest state highways in the United States
# McDonnell Douglas AV-8B Harrier II

*Anglo-American second-generation V/STOL ground-attack aircraft*
The McDonnell Douglas (now Boeing) AV-8B Harrier II is a single-engine ground-attack aircraft that constitutes the second generation of the Harrier family, capable of vertical or short takeoff and landing (V/STOL). The aircraft is primarily employed on light attack or multi-role missions, ranging from close air support of ground troops to armed reconnaissance. The AV-8B is used by the United States Marine Corps (USMC), the Spanish Navy, and the Italian Navy. A variant of the AV-8B, the British Aerospace Harrier II, was developed for the British military, while another, the TAV-8B, is a dedicated two-seat trainer.
The project that eventually led to the AV-8B's creation started in the early 1970s as a cooperative effort between the United States and United Kingdom, aimed at addressing the operational inadequacies of the first-generation Hawker Siddeley Harrier. Early efforts centered on a larger, more powerful Pegasus engine to dramatically improve the capabilities of the Harrier. Because of budgetary constraints, the UK abandoned the project in 1975. Following the UK's withdrawal, McDonnell Douglas extensively redesigned the earlier AV-8A Harrier to create the AV-8B. While retaining the general layout of its predecessor, the aircraft incorporates a new, larger composite wing with an additional hardpoint on each side, an elevated cockpit, a redesigned fuselage and other structural and aerodynamic refinements. The aircraft is powered by an upgraded version of the Pegasus. The AV-8B made its maiden flight in November 1981 and entered service with the USMC in January 1985. Later upgrades added a night-attack capability and radar, resulting in the AV-8B(NA) and AV-8B Harrier II Plus versions, respectively. An enlarged version named Harrier III was also studied but not pursued. The UK, through British Aerospace, re-joined the improved Harrier project as a partner in 1981, giving it a significant work-share in the project. Following corporate mergers in the 1990s, Boeing and BAE Systems have jointly supported the program. Approximately 340 aircraft were produced in a 22-year production program that ended in 2003.
Typically operated from small aircraft carriers, large amphibious assault ships and simple forward operating bases, AV-8Bs have participated in numerous military and humanitarian operations, proving themselves versatile assets. U.S. Army General Norman Schwarzkopf named the USMC Harrier II as one of several important weapons in the Gulf War. It also served in Operation Enduring Freedom in Afghanistan, the Iraq War and subsequent War in Iraq, along with Operation Odyssey Dawn in Libya in 2011. Italian and Spanish Harrier IIs have taken part in overseas conflicts in conjunction with NATO coalitions. During its service history, the AV-8B has had a high accident rate, related to the percentage of time spent in critical take-off and landing phases. USMC and Italian Navy AV-8Bs are being replaced by the Lockheed Martin F-35B Lightning II, with the former expected to operate its Harriers until 2025.
## Development
### Origins
In the late 1960s and early 1970s, the first-generation Harriers entered service with the Royal Air Force (RAF) and USMC but were handicapped in range and payload. In short takeoff and landing configuration, the AV-8A (the American designation for the Harrier) carried less than half the 4,000 lb (1,800 kg) payload of the smaller A-4 Skyhawk, over a more limited radius. To address this, Hawker Siddeley and McDonnell Douglas began joint development of a more capable version of the Harrier in 1973. Early efforts concentrated on an improved Pegasus engine, designated the Pegasus 15, which was being tested by Bristol Siddeley. Although more powerful, the engine's diameter was 2.75 in (70 mm) too large for it to fit easily into the Harrier.
In December 1973, a joint American and British team completed a project document defining an advanced Harrier powered by the Pegasus 15 engine. The advanced Harrier was intended to replace the original RAF and USMC Harriers, as well as the USMC's A-4 Skyhawk. Because the project aimed to double the AV-8's payload and range, the aircraft was unofficially named the AV-16. The British government pulled out of the project in March 1975 owing to decreased defense funding, rising costs, and the RAF's insufficient 60-aircraft requirement. With development costs estimated at around £180–200 million (1974 British pounds), the United States was unwilling to fund development by itself and ended the project later that year.
Despite the project's termination, the two companies continued to take different paths toward an enhanced Harrier. Hawker Siddeley focused on a new larger wing that could be retrofitted to existing operational aircraft, while McDonnell Douglas independently pursued a less ambitious, though still expensive, project catering to the needs of the U.S. military. Using knowledge gleaned from the AV-16 effort, though dropping some items—such as the larger Pegasus engine—McDonnell Douglas kept the basic structure and engine for an aircraft tailored for the USMC.
### Designing and testing
As the USMC wanted a substantially improved Harrier without the development of a new engine, the plan for Harrier II development was authorized by the United States Department of Defense (DoD) in 1976. The United States Navy (USN), which had traditionally procured military aircraft for the USMC, insisted that the new design be verified with flight testing. McDonnell Douglas modified two AV-8As with new wings, revised intakes, redesigned exhaust nozzles, and other aerodynamic changes; the modified forward fuselage and cockpit found on all subsequent aircraft were not incorporated on these prototypes. Designated YAV-8B, the first converted aircraft flew on 9 November 1978. The aircraft performed three vertical take-offs and hovered for seven minutes at Lambert–St. Louis International Airport. The second aircraft followed on 19 February 1979 but crashed that November because of an engine flameout; the pilot ejected safely. Flight testing of these modified AV-8s continued into 1979. The results showed greater than expected drag, hampering the aircraft's maximum speed. Further refinements to the aerodynamic profile yielded little improvement. Positive test results in other areas, including payload, range, and V/STOL performance, led to the award of a development contract in 1979. The contract stipulated a procurement of 12 aircraft initially, followed by a further 324.
Between 1978 and 1980, the DoD and USN repeatedly attempted to terminate the AV-8B program. There had previously been conflict between the USMC and USN over budgetary issues. At the time, the USN wanted to procure A-18s for its ground attack force and, to cut costs, pressured the USMC to adopt the similarly designed F-18 fighter instead of the AV-8B to fulfill the role of close air support (both designs were eventually amalgamated to create the multirole F/A-18 Hornet). Despite these bureaucratic obstacles, in 1981 the DoD included the Harrier II in its annual budget and five-year defense plan. The USN declined to participate in the procurement, citing the limited range and payload compared with conventional aircraft.
In August 1981, the program received a boost when British Aerospace (BAe) and McDonnell Douglas signed a memorandum of understanding, marking the UK's re-entry into the program. The British government was enticed by the lower cost of acquiring Harriers promised by a large production run, and the fact that the U.S. was shouldering the expense of development. Under the agreement, BAe was relegated to the position of a subcontractor, instead of the full partner status that would have been the case had the UK not left the program. Consequently, the company received, in man-hours, 40% of the airframe work-share. Aircraft production took place at McDonnell Douglas' facilities in suburban St. Louis, Missouri, and manufacturing by BAe at its Kingston and Dunsfold facilities in Surrey, England. Meanwhile, 75% work-share for the engine went to Rolls-Royce, which had absorbed Bristol Siddeley, with the remaining 25% assigned to Pratt & Whitney. The two companies planned to manufacture 400 Harrier IIs, with the USMC expected to procure 336 aircraft and the RAF to procure 60.
Four full-scale development (FSD) aircraft were constructed. The first of these, used mainly for testing performance and handling qualities, made its maiden flight on 5 November 1981. The second and third FSD aircraft, which introduced wing leading-edge root extensions and revised engine intakes, first flew in April the following year; the fourth followed in January 1984. The first production AV-8B was delivered to the Marine Attack Training Squadron 203 (VMAT-203) at Marine Corps Air Station Cherry Point on 12 December 1983, and officially handed over one month later. The last of the initial batch of 12 was delivered in January 1985 to the front-line Marine Attack Squadron 331. These aircraft had F402-RR-404A engines, with 21,450 lb (95.4 kN) of thrust; aircraft from 1990 onwards received upgraded engines.
### Upgrades
During the initial pilot conversion course, it became apparent that the AV-8B exhibited flight characteristics different from the AV-8A. These differences, as well as the digital cockpit fitted instead of the analog cockpit of the TAV-8A, necessitated additional pilot training. In 1984, funding for eight AV-8Bs was diverted to the development of a two-seat TAV-8B trainer. The first of the 28 TAV-8Bs eventually procured had its maiden flight on 21 October 1986. This aircraft was delivered to VMAT-203 on 24 July 1987; the TAV-8B was also ordered by Italy and Spain.
With export interest from Brazil, Japan, and Italy serving as a source of encouragement to continue development of the Harrier II, McDonnell Douglas commenced work on a night-attack variant in 1985. With the addition of an infrared sensor and cockpit interface enhancements, the 87th production single-seat AV-8B became the first Harrier II to be modified for night attacks, leaving the McDonnell Douglas production line in June 1987. Flight tests proved successful and the night attack capability was validated. The first of 66 AV-8B(NA)s was delivered to the USMC in September 1989. An equivalent version of the AV-8B(NA) also served with the RAF under the designation GR7; earlier GR5 aircraft were subsequently upgraded to GR7 standards.
In June 1987, as a private venture, BAe, McDonnell Douglas, and Smiths Industries agreed on the development of what was to become the AV-8B Plus with the addition of radar and increased missile compatibility. The agreement was endorsed by the USMC and, after much consideration, the Spanish and Italian navies developed a joint requirement for a fleet of air-defense Harriers. The United States, Spain, and Italy signed an MoU in September 1990 to define the responsibilities of the three countries and establish a Joint Program Office to manage the program. On 30 November 1990, the USN, acting as an agent for the three participating countries, awarded McDonnell Douglas the contract to develop the improved Harrier. The award was followed by an order from the USMC in December 1990 for 30 new aircraft, and 72 rebuilt from older aircraft. Italy ordered 16 Harrier II Plus and two twin-seat TAV-8B aircraft, while Spain signed a contract for eight aircraft. Production of the AV-8B Harrier II Plus was conducted, in addition to McDonnell Douglas' plant, at CASA's facility in Seville, Spain, and Alenia Aeronautica's facility in Turin, Italy. The UK also participated in the program by manufacturing components for the AV-8B.
Production was authorized on 3 June 1992. The maiden flight of the prototype took place on 22 September, marking the start of a successful flight-test program. The first production aircraft made its initial flight on 17 March 1993. Deliveries of new aircraft took place from April 1993 to 1995. At the same time, the plan to remanufacture existing AV-8Bs to the Plus standard proceeded. On 11 March 1994, the Defense Acquisition Board approved the program, which initially involved 70 aircraft, with four converted in fiscal year 1994. The program planned to use new and refurbished components to rebuild aircraft at a lower cost than manufacturing new ones. Conversion began in April 1994, and the first aircraft was delivered to the USMC in January 1996.
### End of production and further improvements
In March 1996, the U.S. General Accounting Office (GAO) stated that it was less expensive to buy Harrier II Plus aircraft outright than to remanufacture existing AV-8Bs. The USN estimated the cost for remanufacture of each aircraft to be US\$23–30 million, instead of \$30 million for each new-built aircraft, while the GAO estimated the cost per new aircraft at \$24 million. Nevertheless, the program continued and, in 2003, the 72nd and last AV-8B to be remanufactured for the USMC was delivered. Spain also participated in the program, the delivery of its last refurbished aircraft occurring in December 2003, which marked the end of the AV-8B's production; the final new AV-8B had been delivered in 1997.
In the 1990s, Boeing and BAE Systems assumed management of the Harrier family following corporate mergers that saw Boeing acquire McDonnell Douglas and BAe acquire Marconi Electronic Systems to form BAE Systems. Between 1969 and 2003, 824 Harriers of all models were delivered. In 2001, Flight International reported that Taiwan might meet its requirement for a V/STOL aircraft by purchasing AV-8Bs outfitted with the F-16 Fighting Falcon's APG-66 radar. A Taiwanese purchase would have allowed the production line to stay open beyond 2005. Despite the possibility of leasing AV-8Bs, interest in the aircraft waned as the country switched its intentions to procuring the F-35 and upgrading its fleet of F-16s.
Although there have been no new AV-8B variants, in 1990 McDonnell Douglas and British Aerospace began discussions on an interim aircraft between the AV-8B and the next generation of advanced V/STOL aircraft. The Harrier III would have presented an "evolutionary approach to get the most from the existing aircraft", as many of the structures employed on the Sea Harrier and AV-8B would be used. The wing and the torsion box were to be enlarged to accommodate extra fuel and hardpoints to improve the aircraft's endurance. Because of the increase in size, the wing would have had folding wingtips. To meet the heavier weight of the aircraft, Rolls-Royce was expected to design a Pegasus engine variant that would have produced 4,000 lbf (18 kN) more thrust than the latest production variant at the time. The Harrier III would have carried weapons such as AIM-120 AMRAAM and AIM-132 ASRAAM missiles. Boeing and BAE Systems continued studying the design until the early 2000s, when the project was abandoned.
In 2013, the USMC was studying potential enhancements to keep the AV-8B Harrier IIs up to date until its planned retirement, such as a helmet-mounted cueing system. It is also predicted that additional work on the aircraft's radars and sensor systems may take place. The USMC's Harrier II fleet was planned to remain in service until 2030, owing to delays with the F-35B and the fact that the Harriers have more service life left than USMC F/A-18 Hornets. However, by 2014 the USMC had decided to retire the AV-8B sooner because changing the transition orders of Harrier II and Hornet fleets to the Lightning II would save \$1 billion. The F-35B began replacing the AV-8B in 2016, with the AV-8B expected to continue service until 2025. Meanwhile, the AV-8B is to receive revamped defensive measures, updated data-link capability and targeting sensors, and improved missiles and rockets, among other enhancements.
## Design
### Overview
The AV-8B Harrier II is a subsonic attack aircraft of metal and composite construction that retains the basic layout of the Hawker Siddeley Harrier, with horizontal stabilizers and shoulder-mounted wings featuring prominent anhedral (downward slope). The aircraft is powered by a single Rolls-Royce Pegasus turbofan engine, which has two intakes and four synchronized vectorable nozzles close to its turbine. Two of these nozzles are located near the forward, cold end of the engine and two are near the rear, hot end of the engine. This arrangement contrasts with most fixed-wing aircraft, which have engine nozzles only at the rear. The Harrier II also has smaller valve-controlled nozzles in the nose, tail, and wingtips to provide control at low airspeeds.
The AV-8B is equipped with one centerline fuselage hardpoint and six wing hardpoints (compared to four wing hardpoints on the original Harrier), along with two fuselage stations for a 25 mm GAU-12 cannon and ammunition pack. These hardpoints give it the ability to carry a total of 9,200 lb (4,200 kg) of weapons, including air-to-air, air-to-surface, and anti-ship missiles, as well as unguided and guided bombs. The aircraft's internal fuel capacity is 7,500 lb (3,400 kg), up 50% compared to its predecessor. Additional fuel can be carried in external drop tanks on the hardpoints, giving the aircraft a maximum ferry range of 2,100 mi (3,300 km) and a combat radius of 300 nmi (556 km). The AV-8B can also receive additional fuel via aerial refueling using the probe-and-drogue system. The British Aerospace Harrier II, a variant tailored to the RAF, uses different avionics and has one additional missile pylon on each wing.
The Harrier II retains the tandem landing gear layout of the first-generation Harriers, although each outrigger landing gear leg was moved from the wingtip to mid-span for a tighter turning radius when taxiing. The engine intakes are larger than those of the first-generation Harrier and have a revised inlet. On the underside of the fuselage, McDonnell Douglas added lift-improvement devices, which capture the reflected engine exhaust when close to the ground, giving the equivalent of up to 1,200 lb (544 kg) of extra lift.
The technological advances incorporated into the Harrier II, compared with the original Harrier, significantly reduce the workload on the pilot. The supercritical wing, hands-on-throttle-and-stick (HOTAS) control principle, and increased engineered lateral stability make the aircraft fundamentally easier to fly. Ed Harper, general manager for the McDonnell Douglas Harrier II development program, summarizes: "The AV-8B looks a lot like the original Harrier and it uses the same operating fundamentals. It just uses them a lot better". A large cathode-ray tube multi-purpose display, taken from the F/A-18, makes up much of the instrument panel in the cockpit. It has a wide range of functions, including radar warning information and weapon delivery checklist. The pilots sit on UPC/Stencel 10B zero-zero ejection seats, meaning that they are able to eject from a stationary aircraft at zero altitude.
### Airframe
For the AV-8B, McDonnell Douglas redesigned the entire airframe of the Harrier, incorporating numerous structural and aerodynamic changes. To improve visibility and better accommodate the crew and avionics hardware, McDonnell Douglas elevated the cockpit by 10.5 in (27 cm) and redesigned the canopy. This improved the forward (17° down), side (60°), and rear visibility. The front fuselage is composed of a molded skin with an epoxy-based core sandwiched between two carbon-fiber sheets. To compensate for the changes in the front fuselage, the rear fuselage was extended by 18 in (46 cm), and the taller vertical stabilizer of the Sea Harrier was used. The tail assembly is made up of composites to reduce weight.
Perhaps the most thorough redesign was of the wing, the objective being to match the performance of the cancelled AV-16 while retaining the Pegasus engine of the AV-8A. Engineers designed a new, one-piece supercritical wing, which improves cruise performance by delaying the rise in drag and increasing lift-to-drag ratio. Made of composites, the wing is thicker and has a longer span than that of the AV-8A. Compared to the AV-8A's wing, it has a higher aspect ratio, reduced sweep (from 40° to 37°), and an area increased from 200 sq ft (18.6 m<sup>2</sup>) to 230 sq ft (21.4 m<sup>2</sup>). The wing has a high-lift configuration, employing flaps that deploy automatically when maneuvering, and drooped ailerons. Using the leading edge root extensions, the wing allows for a 6,700 lb (3,035 kg) increase in payload compared with the first-generation Harriers after a 1,000 ft (300 m) takeoff roll. Because the wing is almost exclusively composite, it is 330 lb (150 kg) lighter than the AV-8A's smaller wing.
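The quoted wing-area figures can be cross-checked with a simple unit conversion. This illustrative snippet (not part of the source) converts the square-foot values above to square metres:

```python
# Illustrative cross-check of the wing-area figures quoted above.
# 1 ft = 0.3048 m exactly, so 1 sq ft = 0.3048**2 sq m.
SQFT_TO_SQM = 0.3048 ** 2

for area_sqft in (200, 230):
    print(f"{area_sqft} sq ft = {area_sqft * SQFT_TO_SQM:.1f} m^2")
```

The conversion gives about 18.6 m² for the AV-8A's 200 sq ft wing and 21.4 m² for the AV-8B's 230 sq ft wing, matching the parenthetical values in the text.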
The Harrier II was the first combat aircraft to extensively employ carbon-fiber composite materials, exploiting their light weight and high strength; they are used in the wings, rudder, flaps, nose, forward fuselage, and tail. Twenty-six percent of the aircraft's structure is made of composites, reducing its weight by 480 lb (217 kg) compared to a conventional metal structure.
### Differences between versions
Most of the first "day attack" AV-8B Harrier IIs were upgraded to Night Attack Harrier or Harrier II Plus standards, with the remainder being withdrawn from service. The AV-8B cockpit was also used for early trials of direct voice input, which allows the pilot to issue instructions to the aircraft with voice commands, using a system developed by Smiths Industries. The main attack avionics system in the original aircraft was the nose-mounted Hughes AN/ASB-19 angle-rate bombing system, which combined a TV imager and laser tracker to provide a highly accurate targeting capability. Defensive equipment includes several AN/ALE-39 chaff-flare dispensers, an AN/ALR-67 radar warning receiver, and an AN/ALQ-126C jammer pod.
The trainer version of the AV-8B is the TAV-8B, seating two pilots in tandem. Among other changes, the forward fuselage features a 3 ft 11 in (1.19 m) extension to accommodate the second cockpit. To compensate for the slight loss of directional stability, the vertical stabilizer's area was enlarged through increases in chord (length of the stabilizer's root) and height. USMC TAV-8Bs feature the AV-8B's digital cockpit and new systems but have only two hardpoints and are not combat capable. Initial TAV-8Bs were powered by a 21,450 lbf (95.4 kN) F402-RR-406A engine, while later examples were fitted with the 23,000 lbf (105.8 kN) F402-RR-408A. In the early 2000s, 17 TAV-8Bs were upgraded to include a night-attack capability, the F402-RR-408 engine, and software and structural changes.
Fielded in 1991, the Night Attack Harrier was the first upgrade of the AV-8B. It differed from the original aircraft in having a forward-looking infrared (FLIR) camera added to the top of the nose cone, a wide Smiths Industries head-up display (HUD), provisions for night vision goggles, and a Honeywell digital moving map system. The FLIR uses thermal imaging to identify objects by their heat signatures. The variant was powered by the F402-RR-408 engine, which featured an electronic control system and was more powerful and reliable. The flare and chaff dispensers were moved, and the ram-air intake was lengthened at the fin's base. Initially known as the AV-8D, the night-attack variant was designated the AV-8B(NA).
The Harrier II Plus is very similar to the Night Attack variant, with the addition of an APG-65 multi-mode pulse-Doppler radar in an extended nose, allowing it to launch advanced beyond-visual-range missiles such as the AIM-120 AMRAAM. To make additional space for the radar, the angle-rate bombing system was removed. The radars used were taken from early F/A-18 aircraft, which had been upgraded with the related APG-73. According to aviation author Lon Nordeen, the changes "had a slight increase in drag and a bit of additional weight, but there really was not much difference in performance between the [–408-powered] Night Attack and radar Harrier II Plus aircraft".
## Operational history
### United States Marine Corps
The AV-8B underwent standard evaluation to prepare for its USMC service. In the operational evaluation (OPEVAL), lasting from 31 August 1984 to 30 March 1985, four pilots and a group of maintenance and support personnel tested the aircraft under combat conditions. The aircraft was graded for its ability to meet its mission requirements for navigating, acquiring targets, delivering weapons, and evading and surviving enemy actions, all at the specified range and payload limits. The first phase of OPEVAL, running until 1 February 1985, required the AV-8B to fly both deep and close air support missions (deep air support missions do not require coordination with friendly ground forces) in concert with other close-support aircraft, as well as flying battlefield interdiction and armed reconnaissance missions. The aircraft flew from military installations at Marine Corps Base Camp Pendleton and Naval Air Weapons Station China Lake in California; Canadian Forces Base Cold Lake in Canada; and Marine Corps Air Station Yuma in Arizona.
The second phase of OPEVAL, which took place at MCAS Yuma from 25 February to 8 March, required the AV-8B to perform fighter escort, combat air patrol, and deck-launched intercept missions. Although the evaluation identified shortfalls in the design (subsequently rectified), OPEVAL was deemed successful. The AV-8B Harrier II reached initial operating capability (IOC) in January 1985 with USMC squadron VMA-331.
The AV-8B saw extensive action in the Gulf War of 1990–91. Aircraft based on USS Nassau and Tarawa, and at on-shore bases, initially flew training and support sorties, as well as practicing with coalition forces. The AV-8Bs were to be held in reserve during the initial phase of the preparatory air assault of Operation Desert Storm. The type was first used in the war on the morning of 17 January 1991, when a call for air support from an OV-10 Bronco forward air controller against Iraqi artillery shelling Khafji and an adjacent oil refinery brought the AV-8B into combat. The following day, USMC AV-8Bs attacked Iraqi positions in southern Kuwait. Throughout the war, AV-8Bs performed armed reconnaissance and worked in concert with coalition forces to destroy targets.
During Operations Desert Shield and Desert Storm, 86 AV-8Bs amassed 3,380 flights and about 4,100 flight hours, with a mission availability rate of over 90%. Five AV-8Bs were lost to enemy surface-to-air missiles, and two USMC pilots were killed. The AV-8B had an attrition rate of 1.5 aircraft for every 1,000 sorties flown. U.S. Army General Norman Schwarzkopf later named the AV-8B among the seven weapons—along with the F-117 Nighthawk and AH-64 Apache—that played a crucial role in the war. In the aftermath of the war, from 27 August 1992 until 2003, USMC AV-8Bs and other aircraft patrolled Iraqi skies in support of Operation Southern Watch. The AV-8Bs launched from amphibious assault ships in the Persian Gulf and from forward operating bases such as Ali Al Salem Air Base, Kuwait.
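The stated attrition rate follows directly from the sortie and loss counts. As a quick sketch (assuming only the figures quoted above), the rate can be recomputed:

```python
# Recompute the Gulf War attrition rate from the figures quoted above:
# 5 AV-8Bs lost over 3,380 sorties flown.
sorties = 3380
losses = 5
rate_per_1000 = losses / sorties * 1000
print(f"{rate_per_1000:.1f} aircraft lost per 1,000 sorties")
```

This works out to roughly 1.5 losses per 1,000 sorties, consistent with the attrition rate given in the text.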
In 1999, the AV-8B participated in NATO's bombing of Yugoslavia during Operation Allied Force. Twelve Harriers were split evenly between the 24th and 26th Marine Expeditionary Units (MEU). AV-8Bs of the 24th MEU were introduced into combat on 14 April and over the next 14 days flew 34 combat air support missions over Kosovo. During their six-month deployment aboard USS Nassau, 24th MEU Harriers averaged a high mission-capable rate of 91.8%. On 28 April, the 24th MEU was relieved by the 26th MEU, based on USS Kearsarge. The first combat sorties of the unit's AV-8Bs occurred two days later, one aircraft being lost. The 26th MEU remained in the theater of operations until 28 May, when it was relocated to Brindisi, Italy.
USMC AV-8Bs took part in Operation Enduring Freedom in Afghanistan from 2001. The USMC 15th MEU arrived off the coast of Pakistan in October 2001. Operating from the unit's ships, four AV-8Bs began attack missions into Afghanistan on 3 November 2001. The 26th MEU and its AV-8Bs joined 15th MEU later that month. In December 2001, two AV-8Bs first deployed to a forward base at Kandahar in Afghanistan. More AV-8Bs were deployed with other USMC units to the region in 2002. The VMA-513 squadron deployed six Night Attack AV-8Bs to Bagram in October 2002. These aircraft each carried a LITENING targeting pod to perform reconnaissance missions along with attack and other missions, primarily at night.
The aircraft participated in the Iraq War in 2003, acting primarily in support of USMC ground units. During the initial action, 60 AV-8Bs were deployed on ships such as USS Bonhomme Richard and Bataan, from which over 1,000 sorties were flown throughout the war. When possible, land-based forward arming and refuelling points were set up to enable prompt operations. USMC commander Lieutenant General Earl B. Hailston said that the Harriers were able to provide 24-hour support for ground forces, and noted that "The airplane ... became the envy of pilots even from my background ... there's an awful lot of things on the Harrier that I've found the Hornet pilots asking me [for] ... We couldn't have asked for a better record".
USMC sources documented the Harrier as holding an 85% aircraft availability record in the Iraq War; in just under a month of combat, the aircraft flew over 2,000 sorties. When used, the LITENING II targeting pod achieved greater than 75% kill effectiveness on targets. In a single sortie from USS Bonhomme Richard, a wave of Harriers inflicted heavy damage on a Republican Guard tank battalion in advance of a major ground assault on Al Kut. Harriers regularly operated in close support roles for friendly tanks, one of the aircraft generally carrying a LITENING pod. Despite the Harrier's high marks, the limited amount of time that each aircraft could remain on station, around 15–20 minutes, led to some calls from within the USMC for the procurement of AC-130 gunships, which could loiter for six hours and had a heavier close air support capability than the AV-8B. AV-8Bs were later used in combination with artillery to provide constant fire support for ground forces during heavy fighting in 2004 around the insurgent stronghold of Fallujah. The urban environment there required extreme precision for airstrikes.
On 20 March 2011, USMC AV-8Bs were launched from USS Kearsarge in support of Operation Odyssey Dawn, enforcing the UN no-fly zone over Libya. They carried out airstrikes on Sirte on 5 April 2011. Multiple AV-8Bs were involved in the defense of a downed F-15E pilot, attacking approaching Libyans prior to the pilot's extraction by a MV-22 Osprey. In addition to major conflicts, USMC AV-8Bs have been deployed in support of contingency and humanitarian operations, providing fixed-wing air cover and armed reconnaissance. The aircraft served in Somalia throughout the 1990s, Liberia (1990, 1996, and 2003), Rwanda (1994), Central African Republic (1996), Albania (1997), Zaire (1997), and Sierra Leone (1997).
The AV-8B is to be replaced by the F-35B version of the Lockheed Martin F-35 Lightning II, which was planned to enter service in 2012. The USMC had sought a replacement since the 1980s and has argued strongly in favor of the development of the F-35B. The Harrier's performance in Iraq, including its ability to use forward operating bases, reinforced the need for a V/STOL aircraft in the USMC arsenal. In November 2011, the USN purchased the UK's fleet of 72 retired BAe Harrier IIs (63 single-seat GR.7/9/9As plus 9 twin-seat T.12/12As) and replacement engines to provide spares for the existing USMC Harrier II fleet. Although the March 2012 issue of the magazine AirForces Monthly states that the USMC intended to fly some of the ex-British Harrier IIs, instead of using them just for spare parts, the Naval Air Systems Command (NAVAIR) has since stated that the USMC has never had any plans to operate those Harriers.
On 14 September 2012, a Taliban raid destroyed six AV-8Bs and severely damaged two others while they were parked on the ramp at Camp Bastion in Afghanistan's Helmand Province. All of the aircraft belonged to VMFA-211. The two damaged AV-8Bs were flown out of Afghanistan in the hours after the attack. The attack was described as "the worst loss of U.S. airpower in a single incident since the Vietnam War." The lost aircraft were quickly replaced by those from VMA-231.
On 27 July 2014, USS Bataan began deploying USMC AV-8Bs over Iraq to provide surveillance of Islamic State (IS) forces. Surveillance operations continued after the start of Operation Inherent Resolve against IS militants. In early September 2014, a USMC Harrier from the 22nd MEU struck an IS target near the Haditha Dam in Iraq, marking the first time a USMC unit dropped ordnance in the operation. On 1 August 2016, USMC Harriers from USS Wasp began strikes against ISIL in Libya as part of manned and unmanned airstrikes on targets near Sirte, launching at least five times within two days.
### Italian Navy
In the late 1960s, following a demonstration of the Hawker Siddeley Harrier on the Italian Navy (Marina Militare) helicopter carrier Andrea Doria, the country began investigating the possibility of acquiring the Harrier. Early efforts were hindered by a 1937 Italian law that prohibited the navy from operating fixed-wing aircraft because they were the domain of the air force. In early 1989, the law was changed to allow the navy to operate any fixed-wing aircraft with a maximum weight of over 3,300 lb (1,500 kg). Following a lengthy evaluation of the Sea Harrier and AV-8B, an order was placed for two TAV-8Bs in May 1989. Soon, a contract for a further 16 AV-8B Plus aircraft was signed. After the TAV-8Bs and the first three AV-8Bs, all subsequent Italian Navy Harriers were locally assembled by Alenia Aeronautica from kits delivered from the U.S. The two-seaters, the first to be delivered, arrived at Grottaglie in August 1991. They were used for proving flights with the navy's helicopter carriers and on the light aircraft carrier Giuseppe Garibaldi.
In early 1994, the initial batch of U.S.-built aircraft arrived at MCAS Cherry Point for pilot conversion training. The first Italian-assembled Harrier was rolled out the following year. In mid-January 1995, Giuseppe Garibaldi set off from Taranto to Somalia with three Harriers on board to maintain stability following the withdrawal of UN forces. The Harriers, flown by five Italian pilots, accumulated more than 100 flight hours and achieved 100% availability during the three-month deployment, performing reconnaissance and other missions. The squadron returned to port on 22 March.
In 1999, Italian AV-8Bs were used for the first time in combat missions when they were deployed aboard Giuseppe Garibaldi, which was participating in Operation Allied Force in Kosovo. Italian pilots conducted more than 60 sorties alongside other NATO aircraft, attacking the Yugoslav army and paramilitary forces and bombing the country's infrastructure with conventional and laser-guided bombs.
In 2000, the Italian Navy was looking to acquire 7 additional remanufactured aircraft to equip Giuseppe Garibaldi and a new carrier, Cavour. Existing aircraft, meanwhile, were updated to allow them to carry AIM-120 AMRAAMs and Joint Direct Attack Munition guided bombs. From November 2001 to March 2002, eight AV-8Bs were embarked aboard Giuseppe Garibaldi and were deployed to the Indian Ocean in support of Operation Enduring Freedom. The aircraft, equipped with LGBs, operated throughout January and February 2002, during which 131 missions were logged for a total of 647 flight hours.
In 2011, Italian Harriers, operating from Giuseppe Garibaldi, worked alongside Italian Typhoons and aircraft of other nations during Operation Unified Protector, part of the 2011 military intervention in Libya. They conducted airstrikes as well as intelligence and reconnaissance sorties over Libya, using the Litening targeting pods while armed with AIM-120 AMRAAMs and AIM-9 Sidewinders. In total, Italian military aircraft delivered 710 guided bombs and missiles during sorties: Italian Air Force Tornados and AMX fighter bombers delivered 550 bombs and missiles, while the eight Italian Navy AV-8Bs flying from Giuseppe Garibaldi dropped 160 guided bombs during 1,221 flight hours.
Italian Navy AV-8Bs are slated to be replaced by 15 (originally 22) F-35Bs, which will form the air wing of Cavour.
### Spanish Navy
Spain, already using the AV-8S Matador, became the first international operator of the AV-8B by signing an order for 12 aircraft in March 1983. Designated VA-2 Matador II by the Spanish Navy (Armada Española), this variant is known as EAV-8B by McDonnell Douglas. Pilot conversion took place in the U.S. On 6 October 1987, the first three Matador IIs were delivered to Naval Station Rota. The new aircraft were painted in a two-tone matte grey finish, similar to U.S. Navy aircraft, and deliveries were complete by 1988.
BAe test pilots cleared the aircraft carrier Príncipe de Asturias for Harrier operations in July 1989. The carrier, which replaced the World War II-era Dédalo, has a 12° ski-jump ramp. It was originally planned that the first unit to operate the aircraft would be the 8<sup>a</sup> Escuadrilla, but this unit was disbanded on 24 October 1986, following the sale of Spain's AV-8S Matadors to Thailand. Instead, 9<sup>a</sup> Escuadrilla was formed on 29 September 1987 to become part of the Alpha Carrier Air Group and operate the EAV-8B.
In March 1993, under the September 1990 Tripartite MoU between the U.S., Italy, and Spain, eight EAV-8B Plus Matadors were ordered, along with a twin-seat TAV-8B. Deliveries of the Plus-standard aircraft started in 1996. On 11 May 2000, Boeing and NAVAIR finalized a contract to remanufacture Spanish EAV-8Bs to bring them up to Plus standard. Boeing said the deal required it to remanufacture two EAV-8Bs, with an option for another seven aircraft; other sources say the total was 11 aircraft. The remanufacture allowed the aircraft to carry four AIM-120 AMRAAMs, enhanced the pilot's situational awareness through the installation of new radar and avionics, and provided a new engine. Eventually, five aircraft were modified, the last delivered on 5 December 2003.
Spanish EAV-8Bs joined Operation Deny Flight, enforcing the UN's no-fly zone over Bosnia and Herzegovina. Spain did not send its aircraft carrier to participate in the Iraq War in 2003, instead deploying F/A-18s and other aircraft to Turkey to defend that country against potential Iraqi attacks. From 2007, Spain sought a replacement for its Harrier IIs, with the F-35B the likely option. In May 2014, however, the Spanish government announced that it had decided to extend the aircraft's service life beyond 2025 due to a lack of funds for a replacement.
Following the decommissioning of Príncipe de Asturias in February 2013, the sole naval platform from which Spanish Harrier IIs can operate is the amphibious assault ship Juan Carlos I.
## Variants
YAV-8B: Two prototypes converted in 1978 from existing AV-8A airframes (BuNo 158394 and 158395).
AV-8B Harrier II: The initial "day attack" variant.
AV-8B Harrier II Night Attack: Improved version with FLIR, an upgraded cockpit with night-vision goggle compatibility, and the more powerful Rolls-Royce Pegasus 11 engine.
AV-8B Harrier II Plus: Similar to the Night Attack variant, with the addition of an APG-65 radar and separate targeting pod. It is used by the USMC, Spanish Navy, and Italian Navy. Forty-six were built.
TAV-8B Harrier II: Two-seat trainer version.
EAV-8B Matador II: Company designation for the Spanish Navy version.
EAV-8B Matador II Plus: The AV-8B Harrier II Plus, ordered for the Spanish Navy.
Harrier GR5, GR7, GR9: See British Aerospace Harrier II.
## Operators
Italy
- Italian Navy
- Gruppo Aerei Imbarcati (1991–present)
Spain
- Spanish Navy
- 9a Escuadrilla Aeronaves (1987–present)
United States
- United States Marine Corps
- VMA-211 "Wake Island Avengers" (1990–2016)
- VMA-214 "The Black Sheep" (1989–2022)
- VMA-223 "Bulldogs" (1987–present)
- VMA-231 "Ace of Spades" (1985–present)
- VMA-311 "Tomcats" (1988–2020)
- VMA-331 "Bumblebees" (1985–1992)
- VMA-513 "Flying Nightmares" (1987–2013)
- VMA-542 "Tigers" (1986–2023)
- VMAT-203 "Hawks" (1983–2021)
- United States Navy
- VX-9 "The Vampires" (unknown)
- VX-31 "Dust Devils" (unknown–present)
## Accidents
During its service with the USMC, the Harrier has had an accident rate three times that of the Corps' F/A-18s. As of July 2013, approximately 110 aircraft have been damaged beyond repair since the type entered service in 1985, the first accident occurring in March 1985. The Los Angeles Times reported in 2003 that the Harrier family had the highest rate of major accidents among military aircraft in service at that time, with 148 accidents and 45 people killed. Author Lon Nordeen notes that several other USMC single-engine strike aircraft, like the A-4 Skyhawk and A-7 Corsair II, had higher accident rates.
Accidents have in particular been connected to the proportionally large amount of time the aircraft spends taking off and landing, which are the most critical phases of flight. The AV-8 was dubbed a "widow maker" by some in the military. Further analysis suggests that senior U.S. Marine officers never fully understood the uniqueness of the aircraft, and that cutbacks in senior maintenance personnel and pilot mistakes had a disastrous effect on the safety of the American-operated AV-8B, unfairly earning it a negative reputation in the U.S. press.
## Aircraft on display
AV-8B
- BuNo 161396 – National Museum of the Marine Corps, Triangle, Virginia
- BuNo 161397 – Carolinas Aviation Museum, Charlotte, North Carolina
## Specifications (AV-8B Harrier II Plus)
## Popular culture
As part of its 1996 Pepsi Stuff marketing campaign, Pepsi ran an advertisement promising a Harrier jet to anyone who collected 7 million Pepsi Points, a gag that backfired when a participant attempted to take advantage of the ability to buy additional points for 10 cents each to claim a jet for US\$700,000. When Pepsi turned him down, a lawsuit ensued, in which the judge ruled that any reasonable person would conclude that the advertisement was a joke.
# 2021 British Open
The 2021 British Open (officially the 2021 Matchroom.live British Open) was a professional snooker event played from 16 to 22 August 2021 at the Morningside Arena, Leicester, England. It was the 2021 edition of the British Open event, and the first since the 2004 British Open. It was the second ranking event of the 2021–22 snooker season, following the 2021 Championship League and preceding the 2021 Northern Ireland Open. It was broadcast by ITV Sport in the UK, and sponsored by Matchroom Sport. The winner received £100,000 from a total prize pool of £470,000.
All rounds in the tournament were played after a random draw made under a single-elimination format with no seeded players. The first four rounds, from the last 128 to the last 16, were played as best-of-five-frame matches, the quarter-finals and semi-finals as best-of-seven-frame matches, and the final as a best-of-eleven-frame match. John Higgins, the defending champion from 2004, lost 1–3 to Ricky Walden in the third round. Mark Williams defeated Gary Wilson 6–4 in the final to win the 24th ranking title of his career. The event featured 32 century breaks, including two maximum breaks. Higgins made his 12th maximum break in professional competition in the first frame of his first-round win over Alexander Ursenbacher, and Ali Carter made his third maximum break in the second frame of his fourth-round match against Elliot Slessor.
## Format
The British Open is a snooker event first held in 1980 as the British Gold Cup. The event changed names to the British Open for the 1985 event won by Silvino Francisco. The 2021 tournament was held from 16 to 22 August 2021 at the Morningside Arena in Leicester, England. It was the first British Open event in 17 years, the last being played in 2004. It was the second ranking event of the 2021–22 snooker season, following the 2021 Championship League, and preceding the Northern Ireland Open. John Higgins was the defending champion, having defeated Stephen Maguire 9–6 in the 2004 final, to win his 16th ranking title. The event was broadcast by ITV4 in the United Kingdom, Eurosport in Europe; Liaoning TV, Superstar online, Kuaishou, Migu, Youku, Zhibo.tv and Huya Live in China; Now TV in Hong Kong; Sports cast in Taiwan; True Sports in Thailand; DAZN in Canada, Astrosport in Australia and by Matchroom Sport in all other territories. Matchroom also sponsored the event.
The event featured all 128 players on the World Snooker Tour, with no seedings and a random draw after each round. Matches were played as the best of five frames until the quarter-finals and semi-finals, which were played as best-of-seven-frame matches. The final was played as the best of eleven frames.
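The knockout structure described above can be sketched programmatically (a minimal illustration; the mapping of rounds to match lengths follows the description in this section):

```python
# Sketch of the 128-player single-elimination format described above.
# "Best of N" means the first player to win N // 2 + 1 frames takes the match.
rounds = [
    ("Last 128", 128, 5),
    ("Last 64", 64, 5),
    ("Last 32", 32, 5),
    ("Last 16", 16, 5),
    ("Quarter-finals", 8, 7),
    ("Semi-finals", 4, 7),
    ("Final", 2, 11),
]

total_matches = sum(players // 2 for _, players, _ in rounds)
print(total_matches)  # 127 matches decide a 128-player knockout

for name, players, best_of in rounds:
    frames_to_win = best_of // 2 + 1
    print(f"{name}: {players} players, best of {best_of} (first to {frames_to_win})")
```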
### Prize fund
The tournament had a total prize fund of £470,000, the winner receiving £100,000. A breakdown of prize money for this event is shown below:
- Winner: £100,000
- Runner-up: £45,000
- Semi-final: £20,000
- Quarter-final: £12,000
- Last 16: £7,000
- Last 32: £5,000
- Last 64: £3,000
- Highest break: £5,000
- Total: £470,000
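As a quick sanity check, the listed amounts multiplied by the number of players paid at each stage reproduce the £470,000 total (a minimal sketch; the per-stage recipient counts are inferred from the 128-player draw, with each stage paying the players eliminated there):

```python
# Verify that the prize breakdown above sums to the stated £470,000 total.
payouts = {
    "Winner": (100_000, 1),
    "Runner-up": (45_000, 1),
    "Semi-final": (20_000, 2),    # two losing semi-finalists
    "Quarter-final": (12_000, 4),
    "Last 16": (7_000, 8),
    "Last 32": (5_000, 16),
    "Last 64": (3_000, 32),
    "Highest break": (5_000, 1),
}

total = sum(amount * recipients for amount, recipients in payouts.values())
print(f"£{total:,}")  # £470,000
```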
## Summary
The first round was played from 16 to 18 August as the best of five frames. On the first day, defending champion Higgins made his 12th competitive maximum break, in the first frame of his 3–1 win against Alexander Ursenbacher. At 46 years and 90 days, Higgins broke his own record as the oldest man to make a maximum break in competition, having previously set it with a maximum at the Championship League in October 2020. Higgins thus became the player with the second-most maximum breaks, behind only Ronnie O'Sullivan with 15, moving ahead of Stephen Hendry.
World number one Judd Trump trailed 1–2 to Mitchell Mann, but won 3–2. Despite making two century breaks, Kyren Wilson was defeated 2–3 by Ashley Hugill. Mark Allen and Reanne Evans, who had been in a relationship between 2005 and 2008 and had since been in legal battles over child maintenance, met in the first round, their first professional meeting. Evans refused to shake hands with Allen before the match and led 2–1, but missed a pot, and Allen completed a clearance to win the contest. The finalists of the 2021 World Snooker Championship, Shaun Murphy and Mark Selby, met in the first round; world champion Selby won the match 3–2. Four-time winner Hendry met Chris Wakelin in the first round. Hendry won the match 3–2, his first main tournament win since rejoining the tour in 2020 after retiring in 2012. Lukas Kleckers won the final two frames against Masters champion Yan Bingtao to win 3–2.
The second round was played on 18 and 19 August as the best of five frames. The 1997 winner Mark Williams recovered from 0–2 down against Dominic Dale to win 3–2. Iranian player Hossein Vafaei defeated Allen 3–2, despite having never beaten him in four prior meetings. Stephen Maguire defeated Martin O'Donnell 3–2, but complained about the slow play of O'Donnell, who had averaged more than 30 seconds per shot. Ali Carter described playing reigning world champion Selby as a "dream draw", and won the match 3–0. Higgins trailed Cao Yupeng 1–2, but made breaks of 95 and 96 to win the match; he had lost the third frame after missing a shot, which he blamed on a stray hair on the cue ball. Hendry played Gary Wilson and lost 0–3. Neither player made a break above 50. Wilson called the performance "an embarrassment".
The third and fourth rounds were played on 20 August, also as the best of five frames. Trump played Elliot Slessor, losing 2–3. The loss meant that Selby would be ranked as the world number one after the event. Slessor went on to face Carter in the fourth round. Carter made the second maximum break of the event in the second frame of the match, but won only that frame, losing 1–3. Ricky Walden completed a 3–1 win over Higgins, and then defeated Ross Muir by the same scoreline. Williams made breaks of 71 and 70 as he defeated Liam Highfield 3–0. Williams played Zhang Jiankang in the fourth round, where he was the sole player from the top 16 remaining. Zhang led 2–1 and was within four points of winning the match, but missed a routine pot and eventually lost 2–3. David Gilbert, who had won his first ranking event at the preceding Championship League, reached the quarter-finals, where he drew Wilson, who had defeated Vafaei.
The quarter-finals were played on 21 August as the best of seven frames. Gilbert played Wilson, and led both 2–0 and 3–2, but lost the match after missing a pot using a rest. Slessor met Zhou Yuelong and won 4–3 to reach his second ranking event semi-final. Williams played Ricky Walden in the quarter-finals, winning 4–3 on the final black, despite Walden making four breaks over 50. Williams commented that despite the win, he "couldn't string three pots together", and that the players he had faced in the tournament had lost matches, rather than him winning them. Robertson played Lu Ning in the final quarter-final match, winning 4–2. The semi-finals were also contested on 21 August as the best of seven frames. Wilson played Slessor and trailed 0–2, but won the next three frames with breaks of 67, 68 and 100 to lead 3–2, before Slessor made a 125 break to force a deciding frame. Wilson won the frame to reach his second ranking final. Williams completed breaks of 60, 73 and 58 in a 4–1 win over Robertson.
The final was played between Williams and Wilson on 22 August as the best of eleven frames. Williams won the opening frame, before Wilson tied the match in frame two. A break of 111 in frame three for Williams was his first century break of the event, before Wilson won frame four. Wilson won frame five with a break of 101 to lead the match for the first time, before Williams won back-to-back frames to lead 4–3. Wilson won frame eight to tie the match, but Williams won the next two frames to complete a 6–4 victory. This was Williams's 24th ranking event title, and his second British Open title, 24 years after his first in 1997. Aged 46, Williams was the third-oldest winner of a ranking event, behind only Ray Reardon in 1982 (aged 50) and Doug Mountjoy in 1989 (46). Williams commented that he had been lucky to progress to the final, but that his performance in the final was his best of the tournament. Wilson said he was "bitterly disappointed" not to win.
## Tournament draw
The results from the event are shown below; players in bold denote match winners. Kurt Maflin withdrew from the event (denoted by w/d) and his opponent received a walkover (w/o).
### Top half
#### Section 1
#### Section 2
#### Section 3
#### Section 4
### Bottom half
#### Section 5
#### Section 6
#### Section 7
#### Section 8
### Finals
### Final
The frame scores for the final are shown below. Numbers in brackets show breaks made during that frame.
## Century breaks
There were 32 century breaks made during the event. Both Higgins and Carter compiled maximum breaks of 147 during the event. Higgins made one in the first frame of his first-round win over Ursenbacher, while Carter's maximum was completed during the second frame of his fourth-round loss to Slessor.
- 147, 107 – Ali Carter
- 147 – John Higgins
- 135, 112 – David Gilbert
- 134 – Zhang Anda
- 133 – Yuan Sijun
- 129, 125 – Elliot Slessor
- 126 – Jimmy Robertson
- 124, 109 – Zhou Yuelong
- 121 – Hossein Vafaei
- 118, 117 – Luca Brecel
- 118 – Barry Hawkins
- 117 – Michael Holt
- 117 – Michael White
- 115, 111 – Mark Williams
- 115, 101 – Kyren Wilson
- 114, 106, 101, 100 – Gary Wilson
- 111 – Lu Ning
- 110 – Anthony McGill
- 108 – Wu Yize
- 107 – Jordan Brown
- 104 – Ian Burns
- 104 – Anthony Hamilton
# Martin Keamy
First Sergeant Martin Christopher Keamy is a fictional character played by Kevin Durand in the fourth and sixth seasons of the American ABC television series Lost. Keamy is introduced in the fifth episode of the fourth season as a crew member aboard the Kahana, a freighter anchored offshore from the island where most of Lost takes place. In the second half of the season, Keamy serves as the primary antagonist. He is the leader of a mercenary team hired by billionaire Charles Widmore (played by Alan Dale) that is sent to the island on a mission to capture Widmore's enemy Ben Linus (Michael Emerson) from his home, then torch the island.
Unlike Lost's ensemble of characters who, according to the writers, each have good and bad intentions, the writers have said that Keamy is evil and knows it. Durand was contacted for the role after one of Lost's show runners saw him in the 2007 film 3:10 to Yuma. Like other Lost actors, Durand was not informed of his character's arc when he accepted the role. Throughout Durand's nine-episode stint as a guest star in the fourth season, little was revealed regarding Keamy's life prior to his arrival on the island and Durand cited this as a reason why the audience "loved to hate" his villainous character. Critics praised the writers for breaking Lost tradition and creating a seemingly heartless character, while Durand's performance and appearance were also reviewed positively. Keamy returned in the final season for a tenth and eleventh appearance.
## Arc
Originally from Las Vegas, Nevada, Martin Keamy was a First Sergeant of the United States Marine Corps, serving with distinction from 1996 to 2001. In the three years before the events of Lost in 2004, he worked with various mercenary organizations in Uganda. In fall 2004, Keamy is hired by Widmore to lead a mercenary team to the island via freighter then helicopter and extract Ben for a large sum of money. Once he captures Ben, Keamy has orders to kill everyone on the island (including the forty-plus survivors of the September 22, 2004 crash of Oceanic Airlines Flight 815: the protagonists of the series) by torching it.
Keamy boards the freighter Kahana in Suva, Fiji sometime between December 6 and December 10. On the night of December 25, helicopter pilot Frank Lapidus (Jeff Fahey) flies Keamy and his mercenary team, which consists of Omar (Anthony Azizi), Lacour, Kocol, Redfern and Mayhew, to the island. On December 27, the team ambushes several islanders in the jungle, taking Ben's daughter Alex Linus (Tania Raymonde) hostage and killing her boyfriend Karl (Blake Bashoff) and her mother Danielle Rousseau (Mira Furlan). The team infiltrates the Barracks compound where Ben resides, blowing up the house of 815 survivor Claire Littleton (Emilie de Ravin) and fatally shooting three 815 survivors (played by extras). Keamy attempts to negotiate for Ben's surrender in exchange for the safe release of Alex. Believing that he is bluffing, Ben does not comply, and Keamy shoots Alex dead. Ben retaliates by summoning the island's smoke monster, which brutally assaults the mercenaries and fatally wounds Mayhew.
Upon returning to the freighter, Keamy unsuccessfully attempts to kill Michael Dawson (Harold Perrineau), whom he has discovered is Ben's spy, then obtains the "secondary protocol" from a safe. The protocol contains instructions from Widmore for finding Ben if he finds out Keamy's intention to torch the island, which he apparently had. The protocol contains details about a 1980s research station called the "Orchid" that was previously run by a group of scientists working for the Dharma Initiative. Keamy is also informed by Captain Gault that Keamy and his mercenary squad may be suffering from some sort of mental sickness, a notion Keamy dismisses. Later in the day, Omar straps a dead man's switch to Keamy, rigged to detonate C4 on the freighter if Keamy's heart stops beating. That night, Frank refuses to fly the mercenaries to the island. In a display of power, Keamy slits the throat of the ship's doctor Ray (Marc Vann) and throws him overboard and later outdraws and shoots Captain Gault (Grant Bowler) during a tense standoff. Frank flies the remaining five mercenaries back to the island. On December 30, the team apprehends Ben at the Orchid and takes him to the chopper where they are ambushed and killed by Ben's people—referred to as the "Others" by the 815 survivors—and 815 survivors Kate Austen (Evangeline Lilly) and Sayid Jarrah (Naveen Andrews). After a chase to recapture Ben and a brawl with Sayid, Keamy is shot in the back by Richard Alpert (Nestor Carbonell), who leaves him for dead, unaware of Keamy's bulletproof vest. Later, Keamy descends into the Orchid's underground level via its elevator to stalk Ben, who hides in the shadows. Goading Ben with taunts about his daughter's death, Keamy is ambushed by Ben, who beats him into submission with an expandable baton before stabbing him repeatedly in the neck. 
Though John Locke (Terry O'Quinn) attempts to save his life for the sake of the freighter, Keamy dies, and the dead man's switch detonates the explosives on the freighter, killing nearly everyone aboard.
In the afterlife, Keamy is a business associate of Mr. Paik, Sun's (Yunjin Kim) father. Mr. Paik sends Jin (Daniel Dae Kim) to LA to give Keamy a watch and \$25,000, intended to be Keamy's reward for killing Jin. However, the money is confiscated at customs in LAX, and Keamy is disappointed to discover it missing. He takes Jin to a restaurant and has him tied up in a freezer. Shortly after, Omar, one of Keamy's henchmen, captures Sayid and brings him to the same restaurant. Keamy explains to Sayid that his brother has been shot because he borrowed money and failed to pay it back. After Keamy threatens Sayid's family, Sayid retaliates and shoots Keamy in the chest, presumably killing him.
## Personality
During the casting process, Keamy was described as a military type in his late twenties who does not question orders. Chris Carabott of IGN wrote that "in a show that features characters fraught with uncertainty, Keamy is the polar opposite and his Marine mentality definitely sets him apart. His team has a physical advantage and with the help of Mr. Widmore, they have a tactical advantage as well. Keamy is like a bulldog being thrown into a cage full of kittens (except for [Iraqi military torturer] Sayid)". Jay Glatfelter of The Huffington Post stated that "Keamy is Crazy! ... out of all the bad guys on the Island—past, present, and future—Keamy has to be one of the most dangerous ones. Not because of how big he is, or the weaponry, but his willingness to kill at the drop of a hat. That doesn't bode well for our Losties [protagonists]." Co-show runner/executive producer/writer Carlton Cuse has stated that he and the other writers create "complex" characters because they "are interested in exploring how good and evil can be embodied in the same characters and [are also intrigued by] the struggles we all have[,] to overcome the dark parts of our souls"; however, he later clarified that there is an exception: "Keamy's bad, he knows he's bad, but he's ... a guy that does the job." Damon Lindelof stated that "the great thing about Keamy is that he is like a ... merciless survivor. [There]'s this great moment [in the season finale] where he just sort of hackie-sacks [a grenade thrown at him] over to where [his ally] Omar is standing. Omar is certainly an acceptable casualty as far as Keamy is concerned." According to a featurette in the Lost: The Complete Fourth Season – The Expanded Experience DVD set, Keamy likes "heavy weaponry" and "physical fitness" and dislikes "negotiations" and "doctors".
## Development
A remake of the 1957 film 3:10 to Yuma opened in theaters on September 7, 2007. Lost's co-show runner/executive producer/head writer/co-creator Damon Lindelof enjoyed Kevin Durand's supporting performance as Tucker and checked to see if he was available for a role on Lost. The casting director had Durand read a page of dialogue for the new character Keamy; Durand was offered the role in early October and he traveled to Honolulu in Hawaii—where Lost is filmed on location—by October 17, 2007. A former stand-up comic and rapper from Thunder Bay, Ontario, Canada, with the stage name "Kevy D", Durand had seen only around six episodes of Lost by the time he won the part. When he was shooting, he was confused by the story, later stating "I didn't want to know anything or be attached to anybody. I'm glad I didn't. But now that I'm on it, I'll watch all of it." Durand revealed his appreciation for the cast, crew and scripts and the fact that he had the chance to act as someone with a similar physical appearance to himself, as he had previously done roles that had not prompted recognition from viewers on the street.
Durand was never informed of his character's arc and only learned more of Keamy's importance to the plot as he received new scripts; thus, he was thrilled when the role was expanded for his third appearance, in "The Shape of Things to Come", in which he kills Alex; Durand compared his excitement to that of "a kid in a candy store." He also stated that "you really don't know what's going to happen in the next episode and you get the scripts pretty late, so it is pretty secretive and it's kind of exciting that way [because] you're really forced to get in the moment and say the words and play the guy". Durand was initially met with negative reaction from fans on the street for this action and he defended his murderous character by arguing that it was actually more Ben's fault for failing to negotiate with Keamy; later, fans warmed up to Keamy. Despite the antagonist's increasing popularity and fanbase, it became apparent to Durand that fans were hoping for Keamy's death in what promised to be a showdown in the season finale. Throughout his nine-episode run, Keamy never receives an episode in which his backstory is developed through flashbacks, and Durand holds this partially responsible for the negative reaction to his character, saying that the audience "[has not] really seen anything outside of Keamy's mission, so I think they definitely want him put down." Following the season's conclusion, Durand stated that he would not be surprised if his character returned in the fifth season, concluding that "Lost was really fun. If I can have that experience in any genre, I'd take it."
Durand returned for the sixth-season episodes "Sundown" and "The Package", following a twenty-two episode absence since his character's death in the fourth-season finale. Keamy appears in the "flash sideways" parallel timeline in September 2004 working for Sun Kwon's father Mr. Paik to assassinate her new husband Jin Kwon (Daniel Dae Kim) upon the couple's arrival in Los Angeles. Keamy and his sidekick Omar are also extorting money from Sayid's brother Omer, prompting Sayid to shoot them both, which leads to Jin's rescue.
## Reception
Professional television critics deemed Martin Keamy a welcome addition to the cast. Jeff Jensen of Entertainment Weekly commented that Kevin Durand "is emerging as a real find this season; he plays that mercenary part with a scene-stealing mix of menace and damaged vulnerability." After Jensen posted what he thought were the fifteen best moments of the season, the New York Post's Jarett Wieselman "ha[d] to complain about one glaring omission from EW's list: Martin Keamy. I have loved this character all season long—and not just solely for [his] physical attributes ... although those certainly don't hurt." Alan Sepinwall of The Star-Ledger reflected, "He was only on the show for a season and not featured all that much in that season, but Kevin Durand always made an impression as Keamy. Lots of actors might have his sheer physical size, but there's a sense of danger (insanity?) that you can't build at the gym, you know?" IGN's Chris Carabott wrote that "Keamy is one of the more striking new additions to Lost [in the fourth] season ... and is a welcome addition to the Lost universe." Maureen Ryan of The Chicago Tribune stated that Keamy has "so much charisma" and she would "rather find out more about [him] than most of the old-school Lost characters". TV Guide's Bruce Fretts agreed with a reader's reaction to Durand's "chilling portrayal" of Keamy and posted it in his weekly column. The reader, nicknamed "huntress", wrote "love him or hate him, nobody is neutral when it comes to Keamy, which is the hallmark of a well-played villain. Even the camera seems to linger on Durand, who conveys malice with just a look or tilt of his head. This role should give Durand's career a well-deserved boost". Following his demise, Whitney Matheson of USA Today noted that "it seems Keamy, Lost's camouflaged baddie, is turning into a bit of a cult figure." 
A "hilarious" blog containing Keamy digitally edited into various photographs, posters and art titled "Keamy's Paradise" was set up in early June 2008. TV Squad's Bob Sassone thought that the blog was "a great idea" and "funny" and he called Keamy "the Boba Fett of Lost". In 2009, Kevin Durand was nominated for a Saturn Award for Best Guest Starring Role in a Television Series.
Reaction to the antagonist's death was mixed. Kristin Dos Santos of E! criticized the writing for Keamy when he futilely asks Sayid where his fellow 815 survivors are so that he can kill them, but enjoyed his attractive physique, writing that "that guy is deep-fried evil, and he must die horribly for what he did to Alex, but in the meantime, well, he's certainly a well-muscled young man". The Huffington Post's Jay Glatfelter also called for Keamy's death, stating that "nothing would be better to me than him getting run over by Hurley's Dharma Bus", alluding to a scene in the third-season finale. Dan Compora of SyFy Portal commented that "Keamy took a bit too long to die. Yes, he was wearing a bulletproof vest so it wasn't totally unexpected, but it was a bit predictable." In a review of the season finale, Erin Martell of AOL's TV Squad declared her disappointment in the conclusion of Keamy's arc, stating that "it's always a shame when the hot guys die, [especially when] Kevin Durand did an amazing job with the character ... he'll be missed." In a later article titled "Lost Season Four Highlights", Martell noted Durand's "strong performance" that was "particularly fun to watch" and wrote that "we [the audience] all know that Widmore's the big bad, but Keamy became the face of evil on the island in his stead."
|
16,894 |
Kylie Minogue
| 1,173,736,776 |
Australian singer and actress (born 1968)
|
[
"1968 births",
"20th-century Australian actresses",
"20th-century Australian women singers",
"21st-century Australian actresses",
"21st-century Australian women singers",
"ARIA Award winners",
"ARIA Hall of Fame inductees",
"Actresses from Melbourne",
"Australian LGBT rights activists",
"Australian Officers of the Order of the British Empire",
"Australian child actresses",
"Australian dance musicians",
"Australian emigrants to England",
"Australian film actresses",
"Australian people of English descent",
"Australian people of Welsh descent",
"Australian soap opera actresses",
"Australian sopranos",
"Australian television personalities",
"Australian video game actresses",
"Australian voice actresses",
"Australian women in electronic music",
"Australian women pop singers",
"BT Digital Music Awards winners",
"Brit Award winners",
"Capitol Records artists",
"Chevaliers of the Ordre des Arts et des Lettres",
"Dance-pop musicians",
"Freestyle musicians",
"Gold Logie winners",
"Grammy Award winners for dance and electronic music",
"Helpmann Award winners",
"Kylie Minogue",
"Living people",
"MTV Europe Music Award winners",
"Musicians from Melbourne",
"NME Awards winners",
"Nu-disco musicians",
"Officers of the Order of Australia",
"Parlophone artists",
"People from Surrey Hills, Victoria",
"Singers from Melbourne",
"Synth-pop singers",
"Warner Records artists",
"Women television personalities"
] |
Kylie Ann Minogue AO OBE (/mɪˈnoʊɡ/; born 28 May 1968) is an Australian singer, songwriter and actress. Minogue is the highest-selling female Australian artist of all time, having sold over 80 million records worldwide. She has been recognised for reinventing herself in music as well as fashion, and is referred to by the European press as the "Princess of Pop" and a style icon. Her accolades include a Grammy Award, three Brit Awards and seventeen ARIA Music Awards.
Born and raised in Melbourne, Minogue first achieved recognition starring in the Australian soap opera Neighbours, playing tomboy mechanic Charlene Robinson. She gained prominence as a recording artist in the late 1980s and released four bubblegum and dance-pop-influenced studio albums produced by Stock Aitken Waterman. By the early 1990s, she had amassed several top ten singles in the UK and Australia, including "I Should Be So Lucky", "The Loco-Motion", "Especially for You", "Hand on Your Heart", and "Better the Devil You Know". Taking more creative control over her music, Minogue signed with Deconstruction Records in 1993 and released Kylie Minogue (1994) and Impossible Princess (1997), both of which received positive reviews. She returned to mainstream dance-oriented music with 2000's Light Years, including the number-one hits "Spinning Around" and "On a Night Like This". The follow-up, Fever (2001), was an international breakthrough for Minogue, becoming her best-selling album to date. Two of its singles, "Love at First Sight" and "In Your Eyes", became hits, with its lead single, "Can't Get You Out of My Head" becoming one of the most successful singles of the 2000s, selling over five million units.
Minogue continued reinventing her image and experimenting with a range of genres on her subsequent albums, which spawned successful singles such as "Slow", "I Believe in You", "2 Hearts", "All the Lovers", "Dancing" and "Padam Padam". On the UK charts, she is the only female artist to have scored a chart-topping album and a top-ten single in every decade from the 1980s to the 2020s.
In film, Minogue made her debut in The Delinquents (1989). She has also appeared in the films Street Fighter (1994) as Cammy, Moulin Rouge! (2001), Holy Motors (2012) and San Andreas (2015). In reality television, she appeared as a judge on the third series of The Voice UK and on The Voice Australia, both in 2014. Her other ventures include product endorsements, books, fashion, charitable work and a wine brand.
Minogue was appointed an Officer of the Order of the British Empire in the 2008 New Year Honours for services to music. She was appointed by the French government as a Chevalier (knight) of the Ordre des Arts et des Lettres for her contribution to the enrichment of French culture. While touring in 2005, Minogue was diagnosed with breast cancer. She was awarded an honorary Doctor of Health Science (D.H.Sc.) degree by Anglia Ruskin University in 2011 for her work in raising awareness for breast cancer. At the 2011 ARIA Music Awards, Minogue was inducted by the Australian Recording Industry Association into the ARIA Hall of Fame. She was appointed Officer of the Order of Australia (AO) in the 2019 Australia Day Honours.
## Life and career
### 1968–1986: Early life and career beginnings
Kylie Ann Minogue was born at Bethlehem Hospital in Caulfield South, a suburb of Melbourne, Victoria, on 28 May 1968, to car company accountant Ronald Charles Minogue and his wife Carol Ann (née Jones), a former ballet dancer. Both parents had moved to Australia in 1958 as part of an assisted migration scheme on the ship Fairsea. Also aboard were the Gibb family of later Bee Gees fame. Minogue is of English and Welsh descent (though her surname is of Irish origin) and was named after the Nyungar word for "boomerang". She is the eldest of three children: her brother, Brendan Minogue, is a news cameraman in Australia, and her sister, Dannii Minogue, is a singer and television host. The family frequently moved around various suburbs in Melbourne to manage their living expenses, which Minogue found unsettling as a child. She would often stay at home reading, sewing, and learning to play violin and piano. When they moved to Surrey Hills, Victoria, she went on to Camberwell High School. During her schooling years, she found it difficult to make friends. She completed her HSC with subjects including art, graphics and English. Minogue described herself as being of "average intelligence" and "quite modest" during her high school years. Growing up, she and her sister Dannii took singing and dancing lessons.
A 10-year-old Minogue accompanied Dannii to an audition arranged by the sisters' aunt, Suzette, and, while producers found Dannii too young, Alan Hardy gave Minogue a minor role in soap opera The Sullivans (1979). She also appeared in another small role in Skyways (1980). In 1985, she was cast in one of the lead roles in The Henderson Kids. Minogue took time off school to film The Henderson Kids and while Carol was not impressed, Minogue felt that she needed the independence to make it into the entertainment industry. During filming, co-star Nadine Garner labelled Minogue "fragile" after producers yelled at her for forgetting her lines; she would often cry on set. Minogue was dropped from the second season of the show after producer Alan Hardy felt the need for her character to be "written off". In retrospect, Hardy stated that removing her from the show "turned out to be the best thing for her". Interested in following a career in music, Minogue made a demo tape for the producers of weekly music program Young Talent Time, which featured Dannii as a regular performer. Minogue gave her first television singing performance on the show in 1985 but was not invited to join the cast. Minogue was cast in the soap opera Neighbours in 1986, as Charlene Mitchell, a schoolgirl turned garage mechanic. Neighbours achieved popularity in the UK, and a story arc that created a romance between her character and the character played by Jason Donovan culminated in a wedding episode in 1987 that attracted an audience of 20 million viewers. Minogue became the first person to win four Logie Awards in one year and was the youngest recipient of the "Gold Logie Award for Most Popular Personality on Australian Television", with the result determined by public vote.
### 1987–1989: Kylie and Enjoy Yourself
During a Fitzroy Football Club benefit concert, Minogue performed "I Got You Babe" as a duet with fellow actor John Waters, and "The Loco-Motion" as an encore. Producer Greg Petherick arranged for Minogue to record a demo of the latter song, re-titled as "Locomotion". The demo was sent to the head of Mushroom Records, Michael Gudinski, who decided to sign Minogue in early 1987 based on her popularity from Neighbours. The track was first recorded in big band style, but was later given a completely new backing track by producer Mike Duffy, inspired by the hi-NRG sound of UK band Dead or Alive. "Locomotion" was released as her debut single in Australia on 13 July 1987, the week after the Neighbours wedding episode premiered. The single became the best-selling single of the decade in Australia according to the Kent Music Report. The success of "Locomotion" resulted in Minogue travelling to London to work with record producing trio Stock Aitken Waterman in September 1987. They knew little of Minogue and had forgotten that she was arriving; as a result, they wrote "I Should Be So Lucky" while she waited outside the studio. The track was written and recorded in under 40 minutes. Although Minogue needed to be convinced to work with Stock Aitken Waterman again after feeling she had been disrespected during her first recording session, more sessions with the producers occurred from February to April 1988 in London and Melbourne, where the singer was filming her last episodes for Neighbours. The trio ended up composing and producing all the tracks on the forthcoming album and produced a new version of "The Loco-Motion".
Producer Pete Waterman justified the highly controversial decision to re-record the latter track by claiming Minogue's platinum-selling Australian version was poorly produced, but Mike Duffy instead blamed the decision on Waterman's alleged wish to claim the prestige and royalties from the track's placement on the soundtrack of the 1988 film Arthur 2: On the Rocks.
Minogue's self-titled debut album, Kylie, was released in July 1988. The album is a collection of dance-oriented pop tunes and spent more than a year on the UK Albums Chart, including several weeks at number one, eventually becoming the best-selling album of the 1980s by a female artist. It went gold in the United States, while the single "The Locomotion" reached number three on the U.S. Billboard Hot 100 chart, and number one on the Canadian dance chart. The single "Got to Be Certain" became her third consecutive number one single on the Australian music charts. Later in the year, she left Neighbours to focus on her music career. Minogue also collaborated with Jason Donovan on the song "Especially for You", after intense demand for the duet from the public, media and retailers overcame her initial reservations. The track peaked at number-one in the United Kingdom and, in December 2014, sold its one millionth copy in the UK. Minogue was sometimes referred to as "the Singing Budgie" by her detractors over the coming years. In a review of the album Kylie for AllMusic, Chris True described the tunes as "standard, late-80s ... bubblegum", but added, "her cuteness makes these rather vapid tracks bearable". For "Locomotion", she received the ARIA Award for the year's highest-selling single. "I Should Be So Lucky" reached number one in the United Kingdom, Australia, Germany, Finland, Switzerland, Israel and Hong Kong, and Minogue won her second consecutive ARIA Award for the year's highest-selling single, as well as a "Special Achievement Award".
Minogue's second album, Enjoy Yourself, was released in October 1989. It was a success in the United Kingdom, Europe, New Zealand, Asia and Australia and spawned the UK number-one singles "Hand on Your Heart" and "Tears on My Pillow". However, it failed to sell well throughout North America, and Minogue was dropped by her American record label Geffen Records. She then embarked on her first concert tour, the Enjoy Yourself Tour, in the United Kingdom, Europe, Asia and Australia in February 1990. She was also one of the featured vocalists on the remake of "Do They Know It's Christmas?". Minogue's debut film, The Delinquents, was released in December 1989. The movie received mixed reviews from critics but proved popular with audiences. In the UK it grossed more than £200,000, and in Australia, it was the fourth-highest-grossing local film of 1989 and the highest-grossing local film of 1990. From 1989 to 1991, Minogue dated INXS frontman Michael Hutchence.
### 1990–1992: Rhythm of Love, Let's Get to It and Greatest Hits
Unhappy with her level of creative input on her first two albums, Minogue worked with her manager Terry Blamey and her Australian label Mushroom Records to force a change in her relationship with SAW, and to push for a more mature sound. Her decision to collaborate with US producers on the new record pressured SAW into conceding the singer greater creative input, and they gave her the right of veto on mixes, which she exercised.
Minogue's third album, Rhythm of Love, was released in November 1990 and was described as "leaps and bounds more mature" than her previous albums by AllMusic's Chris True. The album did not match the commercial success of its predecessors, but three of its singles – "Better the Devil You Know", "Step Back in Time" and "Shocked" – reached the top ten in both the UK and Australia.
Entertainment Weekly's Ernest Macias observed that, in Rhythm of Love, Minogue "presented a more mature and sexually-fueled image". Macias also pointed out that the album "showcases the beginning of Minogue's career as a pop icon, propelled by her angelic vocals, sensual music videos, chic fashion, and distinct dance sound." Her relationship with Michael Hutchence was also seen as part of her departure from her earlier persona. The making of the "Better the Devil You Know" music video was the first time Minogue "felt part of the creative process". She said: "I wasn't in charge but I had a voice. I'd bought some clothes on King's Road for the video. I saw a new way to express my point of view creatively." To promote the album, Minogue embarked on the Rhythm of Love Tour in February 1991.
Minogue's fourth album, Let's Get to It, was released in October 1991 and reached number 15 on the UK Albums Chart. It was her first album to fail to reach the top ten. While the first single from the album, "Word Is Out", became her first single to miss the top ten of the UK Singles Chart, subsequent singles "If You Were with Me Now" and "Give Me Just a Little More Time" both reached the top five. In support of the album, she embarked on the Let's Get to It Tour in October. She later expressed her opinion that she was stifled by Stock, Aitken and Waterman, saying, "I was very much a puppet in the beginning. I was blinkered by my record company. I was unable to look left or right." Her first best-of album, simply titled Greatest Hits, was released in August 1992. It reached number one in the United Kingdom and number three in Australia. The singles from the album, "What Kind of Fool" and her cover version of Kool & the Gang's "Celebration", both reached the top 20 of the UK Singles Chart.
### 1993–1998: Kylie Minogue and Impossible Princess
Minogue's signing with Deconstruction Records in 1993 marked a new phase in her career. Her fifth album, Kylie Minogue, was released in September 1994 and was a departure from her previous efforts as it "no longer featured the Stock-Aitken-Waterman production gloss", with critics praising Minogue's vocals and the album production. It was produced by dance music producers the Brothers in Rhythm, namely Dave Seaman and Steve Anderson, who had previously produced "Finer Feelings", her last single with PWL. As of 2015, Anderson continued to be Minogue's musical director. The album peaked at number four on the UK Albums Chart and was certified gold in the country. Its lead single, "Confide in Me", spent four weeks at number one on the Australian singles chart. The next two singles from the album, "Put Yourself in My Place" and "Where Is the Feeling?", reached the top 20 on the UK Singles Chart.
During this period, Minogue made a guest appearance as herself in an episode of the comedy The Vicar of Dibley. Director Steven E. de Souza saw Minogue's cover photo in Australia's Who Magazine as one of "The 30 Most Beautiful People in the World" and offered her a role opposite Jean-Claude Van Damme in the film Street Fighter. The film was a moderate success, earning US\$70 million in the US, but received poor reviews, with The Washington Post's Richard Harrington calling Minogue "the worst actress in the English-speaking world". She had a minor role in the 1996 film Bio-Dome starring Pauly Shore and Stephen Baldwin. She also appeared in the 1995 short film Hayride to Hell and in the 1997 film Diana & Me. In 1995, Minogue collaborated with Australian artist Nick Cave for the song "Where the Wild Roses Grow". Cave had been interested in working with Minogue since hearing "Better the Devil You Know", saying it contained "one of pop music's most violent and distressing lyrics". The music video for their song was inspired by John Everett Millais's painting Ophelia (1851–1852), and showed Minogue as the murdered woman, floating in a pond as a serpent swam over her body. The single received widespread attention in Europe, where it reached the top 10 in several countries, and reached number two in Australia. The song won ARIA Awards for "Song of the Year" and "Best Pop Release". Following concert appearances with Cave, Minogue recited the lyrics to "I Should Be So Lucky" as poetry in London's Royal Albert Hall.
By 1997, Minogue was in a relationship with French photographer Stéphane Sednaoui, who encouraged her to develop her creativity. Inspired by a mutual appreciation of Japanese culture, they created a visual combination of "geisha and manga superheroine" for the photographs taken for Minogue's sixth album, Impossible Princess, and the video for "GBI (German Bold Italic)", Minogue's collaboration with Towa Tei. She drew inspiration from the music of artists such as Shirley Manson and Garbage, Björk, Tricky and U2, and Japanese pop musicians such as Pizzicato Five and Towa Tei. The album featured collaborations with musicians including James Dean Bradfield and Sean Moore of the Manic Street Preachers. Impossible Princess garnered some negative reviews upon its release in 1997, but would be praised as Minogue's most personal and best work in retrospective reviews. In 2003, Slant Magazine's Sal Cinquemani called it a "deeply personal effort" and "Minogue's best album to date", while Evan Sawdey, from PopMatters, described Impossible Princess as "one of the most crazed, damn-near perfect dance-pop albums ever created" in a 2008 review. Though the album was mostly dance music, Minogue countered suggestions that she was trying to become an indie artist.
Acknowledging that she had attempted to escape the perceptions of her that had developed during her early career, Minogue commented that she was ready to "forget the painful criticism" and "accept the past, embrace it, use it". The music video for "Did It Again" paid homage to her earlier incarnations. The album was retitled Kylie Minogue in the UK following the death of Diana, Princess of Wales, and became the lowest-selling album of her career. At the end of the year, a campaign by Virgin Radio stated, "We've done something to improve Kylie's records: we've banned them." In Australia, the album was a success and spent 35 weeks on the album chart. Minogue's Intimate and Live tour in 1998 was extended due to demand. She gave several live performances in Australia, including the 1998 Sydney Gay and Lesbian Mardi Gras, and the opening ceremonies of Melbourne's Crown Casino, and Sydney's Fox Studios in 1999 (where she performed Marilyn Monroe's "Diamonds Are a Girl's Best Friend") as well as a Christmas concert in Dili, East Timor, in association with the United Nations Peace-Keeping Forces. She played a small role in Cut, a 2000 Australian film starring Molly Ringwald.
### 1999–2003: Light Years, Fever and Body Language
In 1999, Minogue performed a duet with the Pet Shop Boys on their Nightlife album and spent several months in Barbados performing in William Shakespeare's The Tempest. She then appeared in the film Sample People and recorded a cover version of Russell Morris's "The Real Thing" for the soundtrack. She signed with Parlophone in April, who wanted to re-establish Minogue as a pop artist. Her seventh studio album, Light Years, was released on 25 September 2000. NME magazine called it a "fun, perfectly-formed" record, which saw Minogue "dropping her considerable concern for cool and bouncing back to her disco-pop roots". It was a commercial success, becoming Minogue's first number-one album in her native Australia. The lead single, "Spinning Around", debuted atop the UK Singles Chart in July, making her only the second artist to have a number-one single in three consecutive decades (after American singer Madonna). Its accompanying video featured Minogue in revealing gold hotpants, which came to be regarded as a "trademark". Three other singles—"On a Night Like This", "Kids" (with English singer Robbie Williams), and "Please Stay"—peaked in the top ten in the United Kingdom.
An elaborate art book titled Kylie, featuring contributions by Minogue and creative director William Baker, was published by Booth-Clibborn in March 2000. At the time, she began a romantic relationship with model James Gooding. In October, Minogue performed at both the closing ceremony of the 2000 Sydney Olympics and the opening ceremony of the Paralympics. Her performance of ABBA's "Dancing Queen" was chosen as one of the most memorable Olympic closing ceremony moments by Kate Samuelson of TNT. The following year, she embarked on the On a Night Like This Tour, which was inspired by the style of Broadway shows and the musicals of the 1930s. She also made a brief cameo as The Green Fairy in Baz Luhrmann's Moulin Rouge!, which earned her an MTV Movie Award nomination in 2002. "Spinning Around" and Light Years consecutively won the ARIA Award for Best Pop Release in 2000 and 2001. In early 2001, she launched her own brand of underwear called Love Kylie in partnership with the Holeproof brand of Australian Pacific Brands.
In September 2001, Minogue released "Can't Get You Out of My Head", the lead single from her eighth studio album, Fever. It reached number one in over 40 countries and sold 5 million copies, becoming Minogue's most successful single to date. The accompanying music video featured the singer sporting a now-famous hooded white jumpsuit with a deep plunging neckline. The remaining singles—"In Your Eyes", "Love at First Sight" and "Come into My World"—all peaked in the top ten in Australia and the United Kingdom. Released on 1 October, Fever topped the charts in Australia, Austria, Germany, Ireland, and the United Kingdom, eventually achieving worldwide sales in excess of six million. Dominique Leone from Pitchfork praised its simple and "comfortable" composition, terming it a "mature sound from a mature artist, and one that may very well re-establish Minogue for the VH1 generation". The warm reception towards the album led to its release in the United States in February 2002 by Capitol Records, Minogue's first in 13 years. It debuted on the Billboard 200 at number three, her highest-charting album in the region, while peaking at number 10 on the Canadian Albums Chart.
To support the album, Minogue headlined her KylieFever2002 tour in Europe and Australia, which ran from April to August 2002. She performed several songs from the setlist in a series of Jingle Ball concerts in the United States in 2002–2003. In May 2002, Minogue and Gooding announced the end of their relationship after two and a half years. She received four accolades at the ARIA Music Awards of 2002, including Highest Selling Single and Single of the Year for "Can't Get You Out of My Head". That same year, she won her first Brit Award for International Female Solo Artist and Best International Album for Fever. In 2003, she received her first Grammy nomination for Best Dance Recording for "Love at First Sight", before winning the award for "Come into My World" the following year, marking the first time an Australian music artist had won in a major category since Men at Work in 1983.
In November 2003, Minogue released her ninth studio album, Body Language, following an invitation-only concert, titled Money Can't Buy, at the Hammersmith Apollo in London. The album downplayed the disco style and was inspired by 1980s artists such as Scritti Politti, The Human League, Adam and the Ants and Prince, blending their styles with elements of hip hop. The sales of the album were lower than anticipated after the success of Fever, though the first single, "Slow", was a number-one hit in the United Kingdom and Australia. Two more singles from the album were released: "Red Blooded Woman" and "Chocolate". In the US, "Slow" reached number-one on the club chart and received a Grammy Award nomination in the Best Dance Recording category. Body Language achieved first-week sales of 43,000 and declined significantly in the second week.
### 2004–2009: Ultimate Kylie, Showgirl and X
In November 2004, Minogue released her second official greatest hits album, entitled Ultimate Kylie. The album yielded two singles: "I Believe in You" and "Giving You Up". "I Believe in You" was later nominated for a Grammy Award in the category of Best Dance Recording. In March 2005, Minogue commenced her Showgirl: The Greatest Hits Tour. After performing in Europe, she travelled to Melbourne, where she was diagnosed with breast cancer, forcing her to cancel the tour. She underwent surgery in May 2005 and commenced chemotherapy treatment soon after. It was announced in January 2006 that she had finished chemotherapy and that the disease had shown "no recurrence" after the surgery. She continued her treatment over the following months. In December 2005, Minogue released the song "Over the Rainbow", a live recording from her Showgirl tour. Her children's book, The Showgirl Princess, written during her period of convalescence, was published in October 2006, and her perfume, Darling, was launched in November. The range was later augmented by eau de toilettes including Pink Sparkle, Couture and Inverse.
Minogue resumed the cancelled tour in November 2006, under the title Showgirl: The Homecoming Tour. Her dance routines had been reworked to accommodate her medical condition, with slower costume changes and longer breaks introduced between sections of the show to conserve her strength. The media reported that Minogue performed energetically, with The Sydney Morning Herald describing the show as an "extravaganza" and "nothing less than a triumph". She voiced Florence in the animated film The Magic Roundabout, based on the television series of the same name. She had recorded her part in 2002, three years before the film's 2005 European release. A year later, she reprised the role and recorded the theme song for the American edition, re-titled Doogal, which grossed \$26,691,243 worldwide.
In November 2007, Minogue released her tenth and much-discussed "comeback" album, X. The electro-styled album included contributions from Guy Chambers, Cathy Dennis, Bloodshy & Avant and Calvin Harris. The album received some criticism for the triviality of its subject matter in light of Minogue's experiences with breast cancer. X and its lead single, "2 Hearts", entered at number one on the Australian albums and singles charts, respectively. In the United Kingdom, X initially attracted lukewarm sales, although its commercial performance eventually improved. Follow-up singles from the album, "In My Arms" and "Wow", both peaked inside the top ten of the UK Singles Chart. In the US, the album was nominated at the 2009 Grammy Awards for Best Electronic/Dance Album.
Minogue began a relationship with French actor Olivier Martinez after meeting him at the 2003 Grammy Awards ceremony. They ended their relationship in February 2007, but remained on friendly terms. Minogue was reported to have been "saddened by false [media] accusations of [Martinez's] disloyalty". She defended Martinez, and acknowledged the support he had given during her treatment for breast cancer. As part of the promotion of her album, Minogue was featured in White Diamond, a documentary filmed during 2006 and 2007 as she resumed her Showgirl: The Homecoming Tour. She also appeared in The Kylie Show, which featured her performances as well as comedy sketches. She co-starred in the 2007 Doctor Who Christmas special episode, "Voyage of the Damned", as Astrid Peth. The episode was watched by 13.31 million viewers, the show's highest viewing figure since 1979. In February 2008, she launched her range of home furnishings, Kylie Minogue at Home. The venture launched a new collection in February 2018 to mark its tenth anniversary.
In May 2008, Minogue embarked on the European leg of the KylieX2008 tour, her most expensive tour to date with production costs of £10 million. The tour was generally acclaimed and sold well. She was then appointed a Chevalier of the French Ordre des Arts et des Lettres, the junior grade of France's highest cultural honour. In July, she was officially invested by the Prince of Wales as an Officer of the Order of the British Empire. She also won the "Best International Female Solo Artist" award at the Brit Awards 2008. In September, she made her Middle East debut as the headline act at the opening of Atlantis, The Palm, an exclusive hotel resort in Dubai, and from November, she continued her KylieX2008 tour, taking the show to cities across South America, Asia and Australia. The tour visited 21 countries, and was considered a success, with ticket sales estimated at \$70,000,000. In 2009, Minogue hosted the Brit Awards with James Corden and Mathew Horne. She then embarked on the For You, for Me tour which was her first North American concert tour. She was also featured in the Hindi movie, Blue, performing an A. R. Rahman song. Minogue was in a relationship with model Andrés Velencoso from 2008 to 2013.
### 2010–2012: Aphrodite, Anti Tour and The Abbey Road Sessions
In July 2010, Minogue released her eleventh studio album, Aphrodite. The album featured new songwriters and producers including Stuart Price as executive producer. Price also contributed to songwriting along with Minogue, Calvin Harris, Jake Shears, Nerina Pallot, Pascal Gabriel, Lucas Secon, Keane's Tim Rice-Oxley and Kish Mauve. The album received favourable reviews from most music critics; Rob Sheffield from Rolling Stone labelled the album Minogue's "finest work since 1997's underrated Impossible Princess" and Tim Sendra from Allmusic commended Minogue's choice of collaborators and producers, commenting that the album is the "work of someone who knows exactly what her skills are and who to hire to help showcase them to perfection". Aphrodite debuted at number one in the United Kingdom, exactly 22 years after her first UK number-one hit. The album's lead single, "All the Lovers", was a success and became her 33rd top ten single in the United Kingdom, though subsequent singles from the album—"Get Outta My Way", "Better than Today", and "Put Your Hands Up (If You Feel Love)"—failed to reach the top ten of the UK Singles Chart. However, all the singles released from the album topped the US Billboard Hot Dance Club Songs chart.
Minogue recorded a duet with synthpop duo Hurts on their song "Devotion", which was included on the group's album Happiness. She was then featured on Taio Cruz's single "Higher". The single was successful, peaking inside the top 20 of several charts and charting on the US Hot Dance Club Songs chart. In February 2011, Minogue became the first act to hold two of the top three spots on the US Dance/Club Play Songs survey, with "Better Than Today" at number one and "Higher" at number three. To conclude her recordings in 2010, she released the extended play A Kylie Christmas, which included covers of Christmas songs including "Let It Snow" and "Santa Baby". Minogue embarked on the Aphrodite: Les Folies Tour in February 2011, travelling to Europe, North America, Asia, Australia and Africa. With a stage set inspired by the birth of the love goddess Aphrodite and Grecian culture and history, it was greeted with positive reviews from critics, who praised the concept and the stage production. The tour was a commercial success, grossing US\$60 million and ranking at number six and 21 on the mid-year and annual Pollstar Top Concert Tours of 2011 respectively.
In March 2012, Minogue began a year-long celebration of her 25 years in the music industry, dubbed "K25". The anniversary started with her embarking on the Anti Tour in England and Australia, which featured b-sides, demos and rarities from her music catalogue. The tour was positively received for its intimate atmosphere and was a commercial success, grossing over two million dollars from four shows. She then released the single "Timebomb" in May, the greatest hits compilation album The Best of Kylie Minogue in June and the singles box-set K25 Time Capsule in October. She performed at various events around the world, including the Sydney Gay and Lesbian Mardi Gras, Elizabeth II's Diamond Jubilee Concert, and BBC Proms in the Park London 2012. Minogue released the compilation album The Abbey Road Sessions in October. The album contained reworked and orchestral versions of her previous songs. It was recorded at London's Abbey Road Studios and was produced by Steve Anderson and Colin Elliot. The album received favourable reviews from music critics and debuted at number two in the United Kingdom. The album spawned two singles, "Flower" and "On a Night Like This". Minogue returned to acting and starred in two films: a cameo appearance in the American independent film Jack & Diane and a lead role in the French film Holy Motors. Jack & Diane opened at the Tribeca Film Festival in April 2012, while Holy Motors opened at the 2012 Cannes Film Festival.
### 2013–2016: Kiss Me Once and Kylie Christmas
In January 2013, Minogue and her manager Terry Blamey, with whom she had worked since the start of her singing career, parted ways. The following month, she signed to Roc Nation for a management deal. In September, she was featured on Italian singer-songwriter Laura Pausini's single "Limpido", which was a number-one hit in Italy and received a nomination for "World's Best Song" at the 2013 World Music Awards. In the same month, Minogue was hired as a coach for the third series of BBC One's talent competition The Voice UK, alongside record producer and The Black Eyed Peas member will.i.am, Kaiser Chiefs' lead singer Ricky Wilson and singer Sir Tom Jones. The show opened with 9.35 million viewers in the UK, a large percentage increase from the second series, and accumulated an estimated 8.10 million viewers on average. Minogue's judging and personality on the show were singled out for praise. Ed Power from The Daily Telegraph gave the series premiere 3 stars, praising Minogue for being "glamorous, agreeably giggly [and] a card-carrying national treasure". In November, she was hired as a coach for the third season of The Voice Australia.
In March 2014, Minogue released her twelfth studio album, Kiss Me Once. The album featured contributions from Sia, Mike Del Rio, Cutfather, Pharrell Williams, MNEK and Ariel Rechtshaid. It peaked at number one in Australia and number two in the United Kingdom. The singles from the album, "Into the Blue" and "I Was Gonna Cancel", did not chart inside the top ten of the UK Singles Chart, peaking at number 12 and number 59 respectively. In August, Minogue performed a seven-song set at the closing ceremony of the 2014 Commonwealth Games, donning a custom Jean Paul Gaultier corset. In September, she embarked on the Kiss Me Once Tour. In January 2015, Minogue appeared as a guest vocalist on Giorgio Moroder's single "Right Here, Right Now".
In March, Minogue's contract with Parlophone Records ended, leaving her future music releases with Warner Music Group in Australia and New Zealand. The same month, she parted ways with Roc Nation. In April, Minogue played tech reporter Shauna in a two-episode arc on the ABC Family series Young & Hungry. Also in April, reality TV personality Kylie Jenner entered into a trademark dispute with Minogue in her attempt to establish the brand "Kylie", which Minogue had been trading under since the 1990s. The dispute was eventually resolved in Minogue's favour in 2017. Minogue also appeared as Susan Riddick in the disaster film San Andreas, released in May and starring American actor Dwayne Johnson and American actress Carla Gugino. In September 2015, an extended play with Fernando Garibay titled Kylie + Garibay was released. Garibay and Moroder served as producers for the extended play. In November, Minogue was a featured artist on the single "The Other Boys" by Australian DJ duo Nervo, alongside American singer Jake Shears and American record producer Nile Rodgers.
Minogue released her first Christmas album, Kylie Christmas, in November 2015. The following year, it was re-released as Kylie Christmas: Snow Queen Edition. A Christmas concert series at the Royal Albert Hall, London was held in both December 2015 and 2016, in support of the album. She also recorded the theme song "This Wheel's on Fire" for the soundtrack of the 2016 film Absolutely Fabulous: The Movie.
### 2017–2021: Golden, Step Back in Time: The Definitive Collection and Disco
In February 2017, Minogue signed a new record deal with BMG Rights Management. In December 2017, she and BMG struck a joint deal with Mushroom Group, under its sub-division label Liberator Music, to release her next album in Australia and New Zealand. Throughout 2017, Minogue worked with writers and producers for her fourteenth studio album, including Sky Adams and Richard Stannard, and recorded the album in London, Los Angeles and Nashville, with the latter profoundly influencing the record. Minogue's album Golden was released in April 2018 with "Dancing" serving as its lead single. The album debuted at number one in the UK and Australia. Tim Sendra from AllMusic labelled Golden "darn bold" for an artist of Minogue's longevity, stating "The amazing thing about the album, and about Minogue, is that she pulls off the country as well as she's pulled off new wave, disco, electro, murder ballads, and everything else she's done in her long career." Golden also received criticism, with Pitchfork's Ben Cardew claiming that it "sounds like someone playing at country music, rather than someone who understands it." In support of the album, she embarked on two tours, Kylie Presents Golden and the Golden Tour. Minogue was among the performers at The Queen's Birthday Party held at the Royal Albert Hall in April 2018. Minogue began dating Paul Solomons, the creative director of British GQ, in 2018.
In June 2019, Minogue released a greatest hits compilation, Step Back in Time: The Definitive Collection, featuring "New York City" as the lead single. The album reached number one in her native Australia and in the UK. In the same month, as part of her Summer 2019 tour, she made her debut performance at the Glastonbury Festival, fourteen years after her breast cancer diagnosis forced her to cancel her 2005 headlining slot. Performing in the "Legends slot", her set featured appearances from Australian musician Nick Cave and English musician Chris Martin. The set received rave reviews from critics, with The Guardian calling it "solid-gold", "peerless" and "phenomenal". The performance was the most-watched set of the BBC coverage, earning three million viewers and breaking the record for the most attended Glastonbury set in history. She also appeared in her own Christmas television special, Kylie's Secret Night, on Channel 4 in December 2019.
In May 2020, Minogue launched Kylie Minogue Wines in partnership with English beverages distributor Benchmark Drinks, with Rosé Vin de France serving as the debut product. Her prosecco rosé became the number-one branded prosecco in the UK, according to Nielsen Holdings data. The wine brand had sold over five million bottles by June 2022, and won a Golden Vines Award for entrepreneurship.
Following her Glastonbury performance, Minogue stated that she would like to create a "pop-disco album" and return to recording new material. Work continued on her fifteenth studio album during the COVID-19 pandemic in 2020, with Minogue using a home studio to record throughout lockdown. Alistair Norbury, president of BMG Rights Management, stated that Minogue was learning to record and engineer her own vocals in order to work apace during lockdown. In July 2020, "Say Something" was released as the first single from Disco. The album's second single, "Magic", and a promotional single, "I Love It", followed over the next few months.
Disco was released in November 2020, reaching number one in her native Australia and in the UK, where Minogue became the first female artist to achieve a number one album in five different decades, from the 1980s to the 2020s. In support of the album release, a livestream concert titled Infinite Disco was held the next day. In the same month, she was featured on Children in Need's charity single, "Stop Crying Your Heart Out". In December 2020, "Real Groove" was released as the album's third single, with a subsequent remix with English singer Dua Lipa released in the same month. In May 2021, Minogue featured on a remix of "Starstruck" with British band Years & Years. In November 2021, a reissue of Disco titled Disco: Guest List Edition was released, preceded by the single "A Second to Midnight" featuring Years & Years. The reissue also included duets with English singer Jessie Ware and American singer Gloria Gaynor.
### 2022–present: Tension
In February 2022, after living in London since the 1990s, Minogue relocated back to Melbourne, citing a desire to be closer to her family in Australia. In the same year, she began working on her sixteenth studio album. In July, she returned to her role in Neighbours as Charlene, for a brief appearance for the show's series finale.
Her sixteenth studio album, Tension, is scheduled for release in September 2023. Minogue described it as "a blend of personal reflection, club abandon and melancholic high". The lead single, "Padam Padam", entered the top ten in the United Kingdom, making her the only female artist to achieve UK top-ten singles in every decade from the 1980s to the 2020s. The album includes the song "10 Out of 10" with Dutch DJ and producer Oliver Heldens and the title track "Tension". Minogue is set to embark on a concert residency, More Than Just a Residency, in November at Voltaire at The Venetian in Las Vegas, Nevada. A television concert special, An Audience with Kylie, to be filmed at the Royal Albert Hall, is set to air on ITV later in the year.
## Artistry
Minogue explained that she first became interested in pop music during her adolescence: "I first got into pop music in 1981, I'd say. It was all about Prince, Adam + the Ants, that whole New Romantic period. Prior to that, it was the Jackson 5, Donna Summer, and my dad's records – the Stones and Beatles." She would also listen to the records of Olivia Newton-John and ABBA. Minogue said that she "wanted to be" Newton-John while growing up. Her producer, Pete Waterman, recalled Minogue during the early years of her music career with the observation: "She was setting her sights on becoming the new Prince or Madonna ... What I found amazing was that she was outselling Madonna four to one, but still wanted to be her." Minogue came to prominence in the music scene as a bubblegum pop singer and was deemed a "product of the Stock Aitken Waterman Hit Factory". Musician Nick Cave, who has worked with Minogue on several occasions, was a major influence on her artistic development. She told The Guardian: "He's definitely infiltrated my life in beautiful and profound ways." Throughout her career, Minogue's work has also been influenced by Cathy Dennis, D Mob, Scritti Politti, Björk, Tricky, U2 and Pizzicato Five, among others.
Minogue has been known for her soft soprano vocal range. Tim Sendra of AllMusic reviewed her album Aphrodite and said that Minogue's "slightly nasal, girl next door vocals serve her needs perfectly." According to Fiona MacDonald from Madison magazine, Minogue "has never shied away from making some brave but questionable artistic decisions". In musical terms, Minogue has worked with many genres in pop and dance music; her signature sound, however, has been contemporary disco. Her first studio albums with Stock, Aitken, and Waterman present a more bubblegum pop influence, with many critics comparing her to Madonna. Chris True from AllMusic reviewed her debut Kylie and found her music "standard late-'80s Stock-Aitken-Waterman bubblegum", though he stated that she presented the most personality of any 1980s recording artist. He said of her third album Rhythm of Love, from the early 1990s, "The songwriting is stronger, the production dynamic, and Kylie seems more confident vocally." At the time of her third studio album, "She began to trade in her cutesy, bubblegum pop image for a more mature one, and in turn, a more sexual one." True stated that during her relationship with Michael Hutchence, "her shedding of the near-virginal façade that dominated her first two albums, began to have an effect, not only on how the press and her fans treated her, but in the evolution of her music."
From Minogue's work on her sixth studio album, Impossible Princess, her songwriting and musical content began to change. She was constantly writing down words, exploring the form and meaning of sentences. She had written lyrics before, but called them "safe, just neatly rhymed words and that's that". Sal Cinquemani from Slant Magazine said that the album bears a resemblance to Madonna's Ray of Light (1998). He said that she took inspiration from "both the Britpop and electronica movements of the mid-'90s", saying that "Impossible Princess is the work of an artist willing to take risks". Her next effort, Light Years, is a disco-influenced dance-pop record, with AllMusic's Chris True calling it "Arguably one of the best disco records since the '70s". True stated that her eighth album, Fever, "combines the disco-diva comeback of Light Years with simple dance rhythms". Her ninth album, Body Language, was quite different from her past musical experiments, as it was a "successful" attempt at broadening her sound with electro and hip hop. Incorporating styles of dance music with funk, disco and R&B, the album was listed on Q's "Best Albums of 2003".
Critics said Minogue's tenth record X did not feature enough "consistency" and Chris True called the tracks "cold, calculated dance-pop numbers." Tim Sendra of AllMusic said that her eleventh album, Aphrodite, "rarely strays past sweet love songs or happy dance anthems" and "the main sound is the kind of glittery disco pop that really is her strong suit." Sendra found Aphrodite "One of her best, in fact." Minogue's fourteenth studio album, Golden was heavily influenced by country music, although maintaining her dance-pop sensibilities. Sal Cinquemani from Slant Magazine wrote that "Golden further bolsters Minogue's reputation for taking risks—and artfully sets the stage for her inevitable disco comeback."
## Public image
Minogue's efforts to be taken seriously as a recording artist were initially hindered by the perception that she had not "paid her dues" and was no more than a manufactured pop star exploiting the image she had created during her stint on Neighbours. Minogue acknowledged this viewpoint, saying, "If you're part of a record company, I think to a degree it's fair to say that you're a manufactured product. You're a product and you're selling a product. It doesn't mean that you're not talented and that you don't make creative and business decisions about what you will and won't do and where you want to go."
In 1993, Australian director Baz Luhrmann introduced Minogue to photographer Bert Stern, notable for his work with Marilyn Monroe. Stern photographed her in Los Angeles and, comparing her to Monroe, commented that Minogue had a similar mix of vulnerability and eroticism. Throughout her career, Minogue has chosen photographers who attempt to create a new "look" for her, and the resulting photographs have appeared in a variety of magazines, from the cutting edge The Face to the more traditionally sophisticated Vogue and Vanity Fair, making the Minogue face and name known to a broad range of people. Stylist William Baker has suggested that this is part of the reason she entered mainstream pop culture in Europe more successfully than many other pop singers who concentrate solely on selling records.
By 2000, Minogue was considered to have achieved a degree of musical credibility for having maintained her career longer than her critics had expected. Her progression from the wholesome "girl next door" to a more sophisticated performer with a flirtatious and playful persona attracted new fans. Her "Spinning Around" video led to some media outlets referring to her as "SexKylie", and sex became a stronger element in her subsequent videos. In September 2002, she was ranked 27 on VH1's 100 Sexiest Artists list. She was also named one of the 100 Hottest Women of All-Time by Men's Health in 2013. William Baker described her status as a sex symbol as a "double edged sword", observing that "we always attempted to use her sex appeal as an enhancement of her music and to sell a record. But now it has become in danger of eclipsing what she actually is: a pop singer." After 20 years as a performer, Minogue was described by BBC's Fiona Pryor as a fashion "trend-setter" and a "style icon who constantly reinvents herself". Pointing out the several reinventions in Minogue's image, Larissa Dubecki from The Age labelled her the "Mother of Reinvention".
Minogue has been inspired by and compared to American artist Madonna throughout her career. She received negative comments that her Rhythm of Love Tour in 1991 was too similar visually to Madonna's Blond Ambition World Tour, for which critics labelled her a Madonna wannabe. Writing for the Observer Music Monthly, Rufus Wainwright described her as "the anti-Madonna. Self-knowledge is a truly beautiful thing and Kylie knows herself inside out. She is what she is and there is no attempt to make quasi-intellectual statements to substantiate it. She is the gay shorthand for joy." Kathy McCabe for The Telegraph noted that Minogue and Madonna follow similar styles in music and fashion, but concluded, "Where they truly diverge on the pop-culture scale is in shock value. Minogue's clips might draw a gasp from some but Madonna's ignite religious and political debate unlike any other artist on the planet ... Simply, Madonna is the dark force; Kylie is the light force." Minogue has said of Madonna, "Her huge influence on the world, in pop and fashion, meant that I wasn't immune to the trends she created. I admire Madonna greatly but in the beginning she made it difficult for artists like me, she had done everything there was to be done", and "Madonna's the Queen of Pop, I'm the princess. I'm quite happy with that."
In January 2007, Madame Tussauds in London unveiled its fourth waxwork of Minogue. During the same week a bronze cast of her hands was added to Wembley Arena's "Square of Fame". In 2007, a bronze statue of Minogue was unveiled at Melbourne Docklands for permanent display.
In March 2010, Minogue was declared by researchers as the "most powerful celebrity in Britain". The study examined how marketers identify celebrity and brand partnerships. Mark Husak, head of Millward Brown's UK media practice, said: "Kylie is widely accepted as an adopted Brit. People know her, like her and she is surrounded by positive buzz". In 2016, according to the Sunday Times Rich List, Minogue had a net worth of £55 million.
In May 2020, Alison Boshoff of The New Zealand Herald labelled her the "great comeback queen of Pop, for springing back from any setback" in her life and career. In November 2020, Nick Levine of the BBC described her as "pop's most underestimated icon", adding "she's [Minogue] lasted more than 30 years by delivering pop songs with passion and panache, and retaining a quintessentially likeable persona along the way, in such a cutthroat industry." In June 2023, Barbara Ellen of The Guardian commented that "modesty, likeability and vulnerability" have aided Minogue's enduring appeal, 36 years after the 1987 single "I Should Be So Lucky" was released.
Minogue is regarded as a gay icon, which she has encouraged with comments including "I am not a traditional gay icon. There's been no tragedy in my life, only tragic outfits" and "My gay audience has been with me from the beginning ... they kind of adopted me." Her status as a gay icon has been attributed to her music, fashion sense and career longevity. Author Constantine Chatzipapatheodoridis wrote about Minogue's appeal to gay men in Strike a Pose, Forever: The Legacy of Vogue... and observed that she "frequently incorporates camp-inflected themes in her extravaganzas, drawing mainly from the disco scene, the S/M culture, and the burlesque stage." In Beautiful Things in Popular Culture (2007), Marc Brennan stated that Minogue's work "provides a gorgeous form of escapism". Minogue has explained that she first became aware of her gay audience in 1988, when several drag queens performed to her music at a Sydney pub, and she later saw a similar show in Melbourne. She said that she felt "very touched" to have such an "appreciative crowd", and this encouraged her to perform at gay venues throughout the world, as well as headlining the 1994 Sydney Gay and Lesbian Mardi Gras. Minogue has one of the largest gay followings in the world.
## Impact and legacy
Entertainment Weekly's Ernest Macias said that, by combining "a panache for fabulous fashion" with "her unequivocal disco-pop sound", Minogue "established herself as a timeless icon." Paula Joye of The Sydney Morning Herald wrote that "Minogue's fusion of fashion and music has made a huge contribution to the style zeitgeist." Fiona MacDonald, from fashion magazine Madison, acknowledged Minogue as "one of the handful of singers recognised around the world by her first name alone. ... And yet despite becoming an international music superstar, style icon and honorary Brit, those two syllables still seem as Australian as the smell of eucalyptus or a barbeque on a hot day." In 2009, the Victoria and Albert Museum "celebrated her influence on fashion" with an exhibition called Kylie Minogue: Image of a Pop Star.
In 2012, Dino Scatena of The Sydney Morning Herald wrote about Minogue: "A quarter of a century ago, a sequence of symbiotic events altered the fabric of Australian popular culture and set in motion the transformation of a 19-year-old soap actor from Melbourne into an international pop icon." Scatena also described her as "Australia's single most successful entertainer and a world-renowned style idol". In the same year, VH1 cited Minogue among its choices on the 100 Greatest Women in Music and the 50 Greatest Women of the Video Era.
Minogue has been recognised with many honorific nicknames, most notably the "Princess of Pop". Jon O'Brien of AllMusic reviewed her box-set The Albums 2000–2010 and stated that it "contains plenty of moments to justify her position as one of the all-time premier pop princesses." In January 2012, NME critics ranked her single "Can't Get You Out of My Head" at number four on their Greatest Pop Songs in History list. Channel 4 listed her as one of the world's greatest pop stars. In 2020, Rolling Stone Australia placed her at number three on its 50 Greatest Australian Artists of All Time list.
Minogue's work has influenced pop and dance artists including Dua Lipa, Jessie Ware, Alice Chater, Rina Sawayama, Kim Petras, Melanie C, Ricki-Lee Coulter, Years & Years singer Olly Alexander, September, Diana Vickers, The Veronicas, Slayyyter, Pabllo Vittar, Hailee Steinfeld and Paris Hilton. In 2007, French avant-garde guitarist Noël Akchoté released So Lucky, featuring solo guitar versions of tunes recorded by Minogue.
## Achievements
Minogue has received many accolades, including a Grammy Award, three Brit Awards, seventeen ARIA Music Awards, two MTV Video Music Awards, two MTV Europe Music Awards and six Mo Awards, including the Australian Performer of the Year in 2001 and 2003. In October 2007, she was honoured with the Music Industry Trusts Award in recognition of her 20-year career and was hailed as "an icon of pop and style", becoming the first female musician to receive the award. In July 2008, she was invested by Charles, then Prince of Wales, as an Officer of the Order of the British Empire. In April 2017, the Britain–Australia Society recognised Minogue with its 2016 award for outstanding contribution to the improving of relations and bilateral understanding between Britain and Australia. The citation reads: "In recognition of significant contributions to the Britain-Australia relationship as an acclaimed singer, songwriter, actor and iconic personality in both countries". The award was announced at a reception in Australia House but was personally presented the next day by Prince Philip, Patron of the Society, at Windsor Castle. In January 2019, she was appointed Officer of the Order of Australia in the Australia Day Honours.
In August 2004, she held the record for the most singles at number one in the ARIA singles chart, with nine. In November 2011, Minogue was inducted by the Australian Recording Industry Association into the ARIA Hall of Fame. In January 2011, Minogue received a Guinness World Records citation for having the most consecutive decades with top five albums in the UK, with all her albums doing so. In February 2011, she made history for having two songs inside the top three on the U.S. Dance Club Songs chart, with her singles "Better than Today" and "Higher" charting at one and three, respectively. In June 2012, the Official Charts Company reported that Minogue was the 12th-best-selling singer in the United Kingdom to date, and the third-best-selling female artist, having sold over 10.1 million singles. In December 2016, Billboard ranked her as the eighteenth most successful dance artist of all time. In November 2020, she became the only female artist to reach the top spot of the UK Albums Chart in five consecutive decades, when her studio album Disco reached number one. In June 2023, she became the only female artist to reach the top ten of the UK Singles Chart in every decade from the 1980s to the 2020s, when her single "Padam Padam" entered the top ten.
As of November 2020, Minogue has sold 80 million records worldwide. She is the most successful Australian female recording artist of all time. According to PRS for Music, her 2001 single "Can't Get You Out of My Head" was the most-played track of the 2000s, "after receiving the most airplay and live covers" in the decade.
## Personal life
### Health
Minogue was diagnosed with breast cancer at age 36 in May 2005, leading to the postponement of the remainder of her Showgirl: The Greatest Hits Tour and her withdrawal from the Glastonbury Festival. Her hospitalisation and treatment in Melbourne resulted in a brief but intense period of media coverage, particularly in Australia, where then Prime Minister John Howard issued a statement of support. As media and fans began to congregate outside the Minogue residence in Melbourne, Victorian Premier Steve Bracks warned the international media that any disruption of the Minogue family's rights under Australian privacy laws would not be tolerated.
Minogue underwent a lumpectomy on 21 May 2005 at Cabrini Hospital in Malvern and commenced chemotherapy treatment soon after. After the surgery, the disease "had no recurrence". On 8 July 2005, she made her first public appearance after surgery when she visited a children's cancer ward at Melbourne's Royal Children's Hospital. She returned to France, where she completed her chemotherapy treatment at the Institut Gustave-Roussy in Villejuif, near Paris. In January 2006, Minogue's publicist announced that she had finished chemotherapy, and her treatment continued over the following months. On her return to Australia for her concert tour, she discussed her illness and said that her chemotherapy treatment had been like "experiencing a nuclear bomb". While appearing on The Ellen DeGeneres Show in 2008, Minogue said that her cancer had originally been misdiagnosed. She commented, "Because someone is in a white coat and using big medical instruments doesn't necessarily mean they're right", but later spoke of her respect for the medical profession.
Minogue was acknowledged for the impact she made by publicly discussing her breast cancer diagnosis and treatment. In May 2008, the French Cultural Minister Christine Albanel said, "Doctors now even go as far as saying there is a 'Kylie effect' that encourages young women to have regular checks." Several scientific studies have examined how publicity around her case resulted in more women undergoing regular checks for cancer symptoms. Television host Giuliana Rancic cited Minogue's cancer story as "inspirational" when she too was diagnosed with breast cancer.
### Philanthropy
Minogue has helped fundraise on many occasions. In 1989, she participated in recording "Do They Know It's Christmas?" under the name Band Aid II to help raise money. In early 2010, Minogue, along with many other artists (under the name Helping Haiti), recorded a cover version of "Everybody Hurts". The single was a fundraiser to provide relief after the 2010 Haiti earthquake. She also spent a week in Thailand after the 2004 tsunami. During her 2011 Aphrodite World Tour, the 2011 Tōhoku earthquake and tsunami struck Japan, which was on her itinerary. She declared she would continue to tour there, stating, "I was here to do shows and I chose not to cancel. Why did I choose not to cancel? I thought long and hard about it and it wasn't an easy decision to make." While she was there, she and Australian Prime Minister Julia Gillard were star guests at an Australian Embassy fundraiser for the disaster. In January 2020, in response to the 2019–20 Australian bushfires, Minogue announced that she and her family were donating A\$500,000 towards immediate firefighting efforts and ongoing support.
In 2008, Minogue pledged her support for a campaign to raise money for abused children, to be donated to the British charities ChildLine and the National Society for the Prevention of Cruelty to Children. Around \$93 million was reportedly raised. She spoke out in relation to the cause, saying: "Finding the courage to tell someone about being abused is one of the most difficult decisions a child will ever have to make." Minogue is a frequent supporter of amfAR, The Foundation for AIDS Research, hosting the amfAR Inspiration Gala in Los Angeles in 2010. She has also attended amfAR fundraising benefits in Cannes, and performed at galas for the charity in São Paulo and Hong Kong.
Since Minogue's breast cancer diagnosis in 2005, she has been a sponsor and ambassador for the cause. In May 2010, she fronted a breast cancer awareness campaign for the first time. She later spoke about the cause, saying, "It means so much to me to be part of this year's campaign for Fashion Targets Breast Cancer. I wholeheartedly support their efforts to raise funds for the vital work undertaken by Breakthrough Breast Cancer." For the cause, she "posed in a silk sheet emblazoned with the distinctive target logo of Fashion Targets Breast Cancer" for photographer Mario Testino. In April 2014, Minogue launched One Note Against Cancer, a campaign to raise funds and awareness for French cancer research charity APREC (The Alliance for Cancer Research). As part of the campaign, Minogue released the single "Crystallize", with fans able to bid via online auction to own each of the song's 4,408 notes. The proceeds of the auction were donated to APREC, with the names of the successful bidders appearing in the accompanying music video's credits.
### Relationships
Minogue's relationships have included Australian singer and actor Jason Donovan, Australian singer Michael Hutchence, model James Gooding, French actor Olivier Martinez and Spanish model and actor Andrés Velencoso. In 1994, she had an affair with co-star Jean-Claude Van Damme while shooting Street Fighter in Thailand. In 2016, she was engaged to British actor Joshua Sasse, with their relationship ending in 2017. From 2018 to 2023, she was in a relationship with British GQ creative director Paul Solomons.
## Discography
- Kylie (1988)
- Enjoy Yourself (1989)
- Rhythm of Love (1990)
- Let's Get to It (1991)
- Kylie Minogue (1994)
- Impossible Princess (1997)
- Light Years (2000)
- Fever (2001)
- Body Language (2003)
- X (2007)
- Aphrodite (2010)
- Kiss Me Once (2014)
- Kylie Christmas (2015)
- Golden (2018)
- Disco (2020)
- Tension (2023)
## Concert tours and residencies
Headlining tours
- Disco in Dream (1989)
- Enjoy Yourself Tour (1990)
- Rhythm of Love Tour (1991)
- Let's Get to It Tour (1991)
- Intimate and Live (1998)
- On a Night Like This (2001)
- KylieFever2002 (2002)
- Showgirl: The Greatest Hits Tour (2005)
- Showgirl: The Homecoming Tour (2006–07)
- KylieX2008 (2008–09)
- For You, for Me (2009)
- Aphrodite: Les Folies Tour (2011)
- Anti Tour (2012)
- Kiss Me Once Tour (2014–15)
- Summer 2015 (2015)
- A Kylie Christmas (2015–16)
- Kylie Presents Golden (2018)
- Golden Tour (2018–19)
- Summer 2019 (2019–20)
Residencies
- More Than Just a Residency (2023–24)
## Filmography
- Neighbours
- The Delinquents
- Street Fighter
- The Vicar of Dibley
- Moulin Rouge!
- Kath & Kim
- Doctor Who
- Jack & Diane
- Holy Motors
- San Andreas
- Young & Hungry
- Galavant
- Swinging Safari
## See also
- Kylie Minogue products
|
2,209,145 |
Draped Bust dollar
| 1,031,521,611 |
United States dollar coin minted from 1795 to 1803
|
[
"1795 introductions",
"Eagles on coins",
"Goddess of Liberty on coins",
"United States dollar coins",
"United States silver coins"
] |
The Draped Bust dollar is a United States dollar coin minted from 1795 to 1803, and was reproduced, dated 1804, into the 1850s. The design succeeded the Flowing Hair dollar, which began mintage in 1794 and was the first silver dollar struck by the United States Mint. The designer is unknown, though the distinction is usually credited to artist Gilbert Stuart. The model is also unknown, though Ann Willing Bingham has been suggested.
In October 1795, newly appointed Mint Director Elias Boudinot ordered that the legal fineness of 0.892 (89.2%) silver be used for the dollar rather than the unauthorized fineness of 0.900 (90%) silver that had been used since the denomination was first minted in 1794. Due largely to a decrease in the amount of silver deposited at the Philadelphia Mint, coinage of silver dollars declined throughout the latter years of the 18th century. In 1804, coinage of silver dollars was halted; the last date used during regular mint production was 1803.
In 1834, silver dollar production was temporarily restarted to supply a diplomatic mission to Asia with a special set of proof coins. Officials mistakenly believed that dollars had last been minted with the date 1804, prompting them to use that date rather than the year in which the coins were actually struck. A limited number of 1804 dollars were struck by the Mint in later years, and they remain rare and valuable.
## Background
Coinage began on the first United States silver dollar, known as the Flowing Hair dollar, in 1794 following the construction and staffing of the Philadelphia Mint. The Coinage Act of 1792 called for the silver coinage to be struck in an alloy consisting of 89.2% silver and 10.8% copper. However, Mint officials were reluctant to strike coins with the unusual fineness, so it was decided to strike them in an unauthorized alloy of 90% silver instead. This caused depositors of silver to lose money when their metal was coined. During the second year of production of the Flowing Hair dollar, it was decided that the denomination would be redesigned. It is unknown what prompted this change or who suggested it, though numismatic historian R.W. Julian speculates that Henry William de Saussure, who was named Director of the Mint on July 9, 1795, may have suggested it, as he had stated a redesign of the American coinage as one of his goals before taking office. It is also possible that the Flowing Hair design was discontinued owing to much public disapproval.
### Design
Though the designer of the coin is unknown, artist Gilbert Stuart is widely acknowledged to have been its creator; Mint Director James Ross Snowden began researching the early history of the United States Mint and its coinage in the 1850s, during which time he interviewed descendants of Stuart who claimed that their ancestor was the designer. It has been suggested that Philadelphia socialite Ann Willing Bingham posed as the model for the coin. Several sketches were approved by Mint engraver Robert Scot and de Saussure and sent to President George Washington and Secretary of State Thomas Jefferson to gain their approval.
After approval was received, the designs were sent to artist John Eckstein to be rendered into plaster models; during that time, plaster models were used as a guide to cutting the dies, which was done by hand. Eckstein, who was dismissed by Walter Breen as a "local artistic hack" and described by a contemporary artist as a "thorough-going drudge" due to his willingness to carry out most painting or sculptural tasks at the request of clients, was paid thirty dollars for his work preparing models for both the obverse Liberty and reverse eagle and wreath. After the plaster models were created, the engravers of the Philadelphia Mint (including Scot) began creating hubs that would be used to make dies for the new coins.
## Production
It is unknown exactly when production of the new design began, as precise records relating to design were not kept at that time. R.W. Julian, however, places the beginning of production in either late September or early October 1795, while Taxay asserts that the first new silver dollars were struck in October. In September 1795, de Saussure wrote his resignation letter to President Washington. In his letter, de Saussure mentioned the unauthorized silver standard and suggested that Congress be urged to make the standard official, but this was not done. In response to de Saussure's letter, Washington expressed his displeasure at the resignation, stating that he had viewed de Saussure's tenure with "entire satisfaction". As de Saussure's resignation would not take effect until October, the president was given time to select a replacement.
The person chosen to fill the position was statesman and former congressman Elias Boudinot. Upon assuming his duties at the Mint on October 28, Boudinot was informed of the silver standard that had been used since the first official silver coins were struck. He immediately ordered that this practice cease and that coinage begin in the 89.2% fineness approved by the Coinage Act of 1792. Production of 1795 dollars (including both the Flowing Hair and Draped Bust types) totalled 203,033 pieces. It is estimated that approximately 42,000 dollars were struck bearing the Draped Bust design. Boudinot soon ordered that production of minor denominations be increased. Later, assayer Albian Cox died suddenly from a stroke in his home on November 27, 1795, leaving the vital post of assayer vacant. This, together with Boudinot's increased focus on smaller denominations, as well as a lull in private bullion deposits (the fledgling Mint's only source of bullion), caused a decrease in silver dollar production in 1796. The total mintage for 1796 was 79,920, which amounts to an approximate 62% reduction from the previous year's total.
Bullion deposits continued to decline, and in 1797, silver dollar production reached the lowest point since 1794 with a mintage of just 7,776 pieces. During this time, silver deposits declined to such an extent that Thomas Jefferson personally deposited 300 Spanish dollars in June 1797. In April 1797, an agreement was reached between the Mint and the Bank of the United States. The Bank agreed to supply the Mint with foreign silver on the condition that the Bank would receive their deposits back in silver dollars. The Mint was closed between August and November 1797 due to the annual yellow fever epidemic in Philadelphia; that year's epidemic took the life of the Mint's treasurer, Dr. Nicholas Way. In November 1797, the Bank deposited approximately \$30,000 worth of French silver. In early 1798, the reverse was changed from the small, perched eagle to a heraldic eagle similar to that depicted on the Great Seal of the United States. The agreement reached with the Bank of the United States along with other bullion depositors (including Boudinot) led to an increase in the number of silver dollars coined; mintage for both the small and heraldic eagle types totalled 327,536. Mintage numbers for the dollar remained high through 1799, with 423,515 struck that year.
Toward the end of the 18th century, many of the silver dollars produced by the Mint were shipped to and circulated or melted in China in order to satisfy the great demand for silver bullion in that nation. In 1800, silver deposits once again began to decline, and the total silver dollar output for that year was 220,920. In 1801, following complaints from the public and members of Congress regarding the lack of small change in circulation, Boudinot began requesting that silver depositors receive smaller denominations rather than the routinely requested silver dollars, in an effort to supply the nation with more small change. Production dropped to 54,454 silver dollars in 1801 and 41,650 in 1802, after Boudinot was able to convince many depositors to accept their silver in the form of small denominations. Although silver bullion deposits at the Mint had increased, Boudinot attempted to end silver dollar production in 1803, favoring half dollars instead. Mintage of the 1803 dollar continued until March 1804, when production of silver dollars ceased entirely. In total, 85,634 dollars dated 1803 were struck. Following a formal request from the Bank of the United States, Secretary of State James Madison officially suspended silver dollar and gold eagle production in 1806, although minting of both had ended two years earlier.
## 1804 dollars
In 1831, Mint Director Samuel Moore filed a request through the Treasury asking President Andrew Jackson to once again allow the coinage of silver dollars; the request was approved on April 18. In 1834, Edmund Roberts was selected as an American commercial representative to Asia, including the kingdoms of Muscat and Siam. Roberts recommended that the dignitaries be given a set of proof coins. The State Department ordered two sets of "specimens of each kind [of coins] now in use, whether of gold, silver, or copper". Though the minting of dollars had been approved in 1831, none had been struck since 1804. After consulting with Chief Coiner Adam Eckfeldt (who had worked at the Mint since its opening in 1792), Moore determined that the last silver dollars struck were dated 1804. Unknown to either of them, the last production in March 1804 was actually dated 1803. Since they believed that the last striking was dated 1804, it was decided to strike the presentation pieces with that date as well. It is unknown why the current date was not used, but R.W. Julian suggests that this was done to prevent coin collectors from being angered over the fact that they would be unable to obtain the newly dated coins.
The first two 1804 dollars (as well as the other coins for the sets) were struck in November 1834. Soon, Roberts' trip was expanded to Indo-China (then known as Annam) and Japan, so two additional sets were struck. The pieces struck under the auspices of the Mint are known as Class I 1804 dollars, and eight of that type are known to exist today. Roberts left for his trip in April 1835, and he presented one set each to the Sultan of Muscat and the King of Siam. The gift to the Sultan of Muscat was part of an exchange of diplomatic gifts that resulted in the Sultan presenting the Washington Zoo with a full-grown lion and lioness. Roberts fell ill in Bangkok and was taken to Macao, where he died in June 1835. Following Roberts' death, the remaining two sets were returned to the Mint without being presented to the dignitaries.
### Collecting
Most coin collectors became aware of the 1804 dollar in 1842, when Jacob R. Eckfeldt (son of Adam Eckfeldt) and William E. Du Bois published a book entitled A Manual of Gold and Silver Coins of All Nations, Struck Within the Past Century. In the volume, several coins from the Mint's coin cabinet, including an 1804 dollar, were reproduced by tracing a pantograph stylus over an electrotype of the coins. In May 1843, numismatist Matthew A. Stickney was able to obtain an 1804 dollar from the Mint's coin cabinet by trading a rare pre-federal United States gold coin. Due to an increase in the demand for rare coins, Mint officials, including Director Snowden, began minting an increasing number of coin restrikes in the 1850s. Several 1804 dollars were struck, and some were sold for personal profit on the part of Mint officials. When he discovered this, Snowden bought back several of the coins. One such coin, which Snowden later added to the Mint cabinet, was struck over an 1857 shooting thaler and became known as Class II, the only such piece of that type known to exist today. Six pieces with edge lettering applied after striking became known as Class III dollars.
By the end of the 19th century, the 1804 dollar had become the most famous and widely discussed of all American coins. In 1867, one of the original 1804 dollars was sold at auction for \$750. Seven years later, on November 27, 1874, a specimen sold for \$700. In the early 20th century, coin dealer B. Max Mehl began marketing the 1804 dollar as the "King of American Coins". The coins continued to gain popularity throughout the 20th century, and the price reached an all-time high in 1999, when an example graded Proof-68 was sold at auction for \$4,140,000.
|
5,026,007 |
John Y. Brown (politician, born 1835)
| 1,147,119,353 |
19th-century American politician
|
[
"1835 births",
"1904 deaths",
"19th-century American politicians",
"American Presbyterians",
"American lawyers admitted to the practice of law by reading law",
"Censured or reprimanded members of the United States House of Representatives",
"Centre College alumni",
"Democratic Party governors of Kentucky",
"Democratic Party members of the United States House of Representatives from Kentucky",
"Kentucky lawyers",
"People from Hardin County, Kentucky",
"Politicians from Louisville, Kentucky"
] |
John Young Brown (June 28, 1835 – January 11, 1904) was a politician from the U.S. Commonwealth of Kentucky. He represented the state in the United States House of Representatives and served as its 31st governor. Brown was elected to the House of Representatives for three non-consecutive terms, each of which was marred by controversy. He was first elected in 1859, despite his own protests that he was not yet twenty-five years old, the minimum age set by the Constitution for serving in the legislature. The voters of his district elected him anyway, but he was not allowed to take his seat until the Congress' second session, after he was of legal age to serve. After moving to Henderson, Kentucky, Brown was elected from that district in 1866. On this occasion, he was denied his seat because of alleged disloyalty to the Union during the Civil War. Voters in his district refused to elect another representative, and the seat remained vacant throughout the term to which Brown was elected. After an unsuccessful gubernatorial bid in 1871, Brown was again elected to the House in 1872 and served three consecutive terms. During his final term, he was officially censured for delivering a speech excoriating Massachusetts Representative Benjamin F. Butler. The censure was later expunged from the congressional record.
After his service in the House, Brown took a break from politics, but re-entered the political arena as a candidate for governor of Kentucky in 1891. He secured the Democratic nomination in a four-way primary election, then convincingly won the general election over his Republican challenger, Andrew T. Wood. Brown's administration, and the state Democratic Party, were split between gold standard supporters (including Brown) and supporters of the free coinage of silver. Brown's was also the first administration to operate under the Kentucky Constitution of 1891, and most of the legislature's time was spent adapting the state's code of laws to the new constitution. Consequently, little of significance was accomplished during Brown's term.
Brown hoped the legislature would elect him to the U.S. Senate following his term as governor. Having already alienated the free silver faction of his party, he backed "Goldbug" candidate Cassius M. Clay, Jr. for the Democratic nomination in the upcoming gubernatorial election. However, the deaths of two of Brown's children ended his interest in the gubernatorial race and his own senatorial ambitions. At the Democratic nominating convention of 1899, candidate William Goebel used questionable tactics to secure the gubernatorial nomination, and a disgruntled faction of the party held a separate nominating convention, choosing Brown to oppose Goebel in the general election. Goebel was eventually declared the winner of the election, but was assassinated. Brown became the legal counsel for former Kentucky Secretary of State Caleb Powers, an accused conspirator in the assassination. Brown died in Henderson on January 11, 1904.
## Early life
John Young Brown was born on June 28, 1835, in Claysville (near Elizabethtown), Hardin County, Kentucky. He was the son of Thomas Dudley and Elizabeth (Young) Brown. His father served in the state legislature and was a delegate to the 1849 state constitutional convention. Two of his uncles, Bryan Rust Young and William Singleton Young, served as U.S. Representatives. Brown spent much time with his father at the state capitol, which sparked his early interest in politics.
Brown received his early education in the schools of Elizabethtown, and in 1851, at the age of sixteen, matriculated at Centre College in Danville, Kentucky. In 1855, he graduated from Centre and returned to Hardin County to read law. He was admitted to the bar in 1857 and opened his practice in Elizabethtown. His reputation as an orator put him in high demand, but his zealous criticism of the Know Nothing Party drew threats against his life.
Brown married Lucie Barbee in 1857, but she died the following year. In September 1860, he married Rebecca Hart Dixon, the daughter of former U.S. Senator Archibald Dixon. The couple had eight children.
## U.S. House of Representatives
At a meeting of local Democrats in Bardstown, Kentucky, in 1859, Brown was nominated to oppose Joshua Jewett for Jewett's seat in the House of Representatives. Despite Brown's protests that he was more than a year younger than the legal age to serve, he was elected over Jewett by about two thousand votes. He did not take his seat until the second congressional session because of his age. He became a member of the Douglas National Committee in 1860 and engaged in a series of debates with supporters of John C. Breckinridge's presidential candidacy, including Breckinridge's cousin, William Campbell Preston Breckinridge.
It is not clear exactly when Brown relocated to Henderson, Kentucky. Confederate officer Stovepipe Johnson recounts that Brown was among the city leaders who welcomed him to Henderson in early 1862, but other sources state that Brown did not settle in Henderson until after the war. His sympathies during the war were decidedly with the Confederacy.
Brown was re-elected to the House of Representatives in 1866. His seat was declared vacant, however, because of his alleged disloyalty during the war. Voters in his district refused to elect anyone else to fill the vacancy, and Governor John W. Stevenson filed an official protest of the House's action, but the seat remained unfilled throughout the Fortieth Congress.
Governor Stevenson resigned his office to accept a seat in the U.S. Senate, and the remainder of his term was filled by President Pro Tem of the Senate Preston Leslie. When Leslie, who enjoyed only lukewarm support from his party, sought the Democratic gubernatorial nomination in 1871, Brown's name was among those put in nomination against his; after a few ballots, however, it became clear that Brown would not be able to gain a majority, and his supporters abandoned their support of him in favor of other candidates. The following year, Brown was re-elected to the House of Representatives by an overwhelming vote of 10,888 to 457 and was allowed to assume his seat. He was twice re-elected, serving until 1877.
Brown's most notable action in the House was a speech he made on February 4, 1875, in response to Massachusetts Representative Benjamin F. Butler's call to pass the Civil Rights Act of 1875. Referring to comments Butler had made the previous day about lawlessness against African-Americans in the South, Brown claimed that unjust charges had been made against Southerners by an individual "who is outlawed in his own home by respectable society, whose name is synonymous with falsehood, who is the champion, and has been on all occasions, of fraud; who is the apologist of thieves, who is such a prodigy of vice and meanness that to describe him would sicken the imagination and exhaust invective." Brown continued by referencing notorious Scottish murderer William Burke, whose method of murdering his victims became known as "Burking." At this point in the speech, Speaker of the House James G. Blaine interrupted Brown, asking if he was referring to a member of the House; Brown gave an ambiguous response before continuing: "If I wished to describe all that was pusillanimous in war, inhuman in peace, forbidden in morals, and infamous in politics, I should call it 'Butlerizing'." The House gallery exploded in protest at Brown's remark, and incensed Republican legislators called for Brown's immediate expulsion. Though not expelled, he was officially censured by the House for the use of unparliamentary language. The censure was expunged from the record by a subsequent Congress.
## 1891 gubernatorial election
Following his service in the House, Brown resumed his law practice in Louisville, Kentucky. In 1891, he was a candidate for the Democratic gubernatorial nomination. The other candidates included Cassius Marcellus Clay, Jr., son of former Congressman Brutus J. Clay and nephew of abolitionist Cassius Marcellus Clay; Dr. John Daniel Clardy, later to be elected a U.S. Representative; and Attorney General Parker Watkins Hardin. The party was split between supporters of corporations, such as the Louisville and Nashville Railroad, and supporters of agrarian interests. Another split was between the more conservative Bourbon Democrats, who supported maintaining the gold standard, and more progressive Democrats, who called for the free coinage of silver. Agrarian voters were about equally split between Clay and Clardy, while Free Silver Democrats were about equally split between Hardin and Clardy. Having lived in the agrarian western part of the state for most of his life, and never having alienated the powerful Farmers' Alliance, Brown was acceptable to most agrarian interests, while the Louisville and Nashville Railroad felt he was a moderate on the issue of corporate regulation. Bourbon Democrats were also pleased with his sound money stand.
Entering the Democratic nominating convention, Brown seemed to be the favorite for the nomination. On the first ballot, he garnered the most votes (275), leading Clay (264), Clardy (190), and Hardin (186). Over the next nine ballots, the vote counts changed little. Finally, the convention chairman announced that the candidate receiving the fewest votes on the next ballot would be dropped from the voting. Clardy received the fewest votes, and on the next ballot, his supporters divided almost equally between the remaining three candidates. Hardin was the next candidate to be dropped, and Brown received a majority over Clay on the thirteenth ballot.
The Republicans nominated Andrew T. Wood, a lawyer from Mount Sterling, who had failed in earlier elections for Congress and state attorney general. Concurrently with the 1891 gubernatorial election, the state's voters would decide whether to ratify a proposed new constitution for the state. The divided Democrats had taken no stand on the document as part of their convention's platform, and Wood spent much of the campaign trying to get Brown to declare his support for or opposition to it. About six weeks before the election, Brown, sensing strong public support for the new constitution, finally came out in favor of it. For the remainder of the race, Wood touted an alleged conspiracy between Brown and the Louisville and Nashville Railroad to thwart meaningful corporate regulations, but the issue failed to gain much traction.
Both Democrats and Republicans were concerned about the presence of S. Brewer Erwin, nominee of the newly formed Populist Party, in the race; he enjoyed strong support for a third-party candidate, despite the fact that many believed his party's platform was too radical. Democrats, who were used to carrying the agrarian vote by a wide margin, were especially concerned that the Farmers' Alliance, consisting of over 125,000 members in Kentucky, would endorse Erwin. This did not occur, however, and in the general election, Brown defeated Wood by a vote of 144,168 to 116,087. Though he won the election, Brown had not won a majority of the votes; Populist Erwin captured 25,631 votes – 9 percent of the total cast – and a Prohibition candidate received 3,292 votes.
## Governor of Kentucky
Turmoil marked the legislative sessions of Brown's term; his supporters had been either unwilling or unable to influence the rest of the Democratic slate, and tensions over the currency issue soon split the administration. Attorney General William Jackson Hendricks, Treasurer Henry S. Hale, and Auditor Luke C. Norman were all free silver supporters and feuded with Brown and his (appointed) secretary of state, John W. Headley, throughout Brown's term. Over time, the rift deepened and spread to the entire Democratic party. Brown also frequently clashed with the legislature and vetoed several of the bills it passed; none of his vetoes were ever overridden.
When the General Assembly convened on the last day of 1891, Brown reported that he had appointed a commission to study the impact of the new constitution on the state's existing laws. He also announced that the state's present budget deficit was \$229,000 and was expected to reach almost half a million dollars by the end of 1893. With these two large issues facing it, the Assembly was in session almost continuously from December 1891 to July 1893. The length of the session earned it a derisive nickname – the "Long Parliament". Part of the reason for the extended session was each chamber's difficulty in achieving a quorum; a Louisville newspaper reported that, for an entire month, the largest attendance in the House of Representatives was 61 of 100 members. Consequently, some bills were passed by a plurality instead of a majority of the legislators. Fearing that these bills would be challenged in court, Brown vetoed them.
During the session, Brown secured the termination of a statewide geological survey, deeming it too expensive. By constitutional mandate, the regular session ended August 16, but Brown convened a special session of the legislature on August 25 because important bills that he had vetoed needed to be rewritten and passed, and because some bills he had signed needed to be amended to comply with the new constitution. Major legislation advocated by Brown and passed by the General Assembly included improvements in tax collection processes and tighter controls on corporations. Among the measures not specifically advocated by Brown that were enacted by the General Assembly was a measure racially segregating the state's railroad cars, called the "separate coach law". The special session lasted until November 1.
Brown won acclaim from the railroad companies for vetoing a proposed railroad tax increase, but soon drew their ire for preventing the merger of the state's two largest railways, the Louisville and Nashville Railroad and the Chesapeake and Ohio Railway. The Mason and Foard Company, which leased convict labor to build railroads, resented Brown's prison reforms. Brown accused his predecessor, Simon Bolivar Buckner, of illegally allowing Mason and Foard to use convict labor, a charge Buckner vehemently denied.
During the 1894 legislative session, Brown advocated and won passage of several government efficiency measures, including a bill to transfer certain state governmental expenses to the counties, a bill to reform state printing contracts, and measures clarifying laws governing asylums and charitable institutions. The most significant bill, and the one that generated the most debate, was a law giving married women individual property rights for the first time in state history. Other measures passed during the session included a basic coal safety measure, a common school statute, a measure prohibiting collusive bidding on tobacco, new regulations on grain warehouses, and a law providing free turnpikes. Measures advocated by Brown but not enacted by the Assembly included broadening the powers of the state railroad commission, establishing the offices of state bank inspector and superintendent of public printing, and reforming prison management, including separate detention of adolescent criminals. Brown also lobbied for the abolition of the state parole board; when the Assembly refused, Brown vowed to ignore the board's recommendations.
Mob violence was prevalent in Kentucky during Brown's tenure as governor. From 1892 to 1895, there were fifty-six lynchings in the state. During one notable incident, a Cincinnati judge refused to extradite a black man suspected of shooting a white man in Kentucky. The judge's decision was based on his opinion that the accused was likely to be the victim of mob violence if returned to Kentucky. In disputing the judge's decision, Governor Brown attempted to justify some of the violence that had occurred in the state's past, declaring "It is much to be regretted that we have occasionally had mob violence in this Commonwealth, but it has always been when the passions of the people have been inflamed by the commission of the most atrocious crimes."
## Later life and death
It was widely known that Brown desired election to the U.S. Senate when his gubernatorial term expired in 1896. The leading Democratic candidates to succeed Brown as governor were his old rivals, Cassius M. Clay, Jr. and Parker Watkins Hardin, and Brown believed he would need his eventual successor's support to secure the Senate seat. Having already alienated Hardin and his free silver allies, Brown threw his support to Clay. Family tragedy would soon remove his interest in the race, however. On October 30, 1894, Brown's teenage daughter Susan died of tuberculosis. A few months later, his son, Archibald Dixon Brown, divorced his wife; it was subsequently discovered that he had been carrying on an extramarital affair. Acting on an anonymous tip, his lover's husband found the couple at a brothel in Louisville; drawing his pistol, he shot his wife and Archibald Brown, killing them both. Of the series of family tragedies, Governor Brown wrote to Clay, "I shall not be a candidate for the Senate. The calamities of my children, which have recently befallen, have utterly unfitted me for the contest. My grief is so severe that, like a black vampire of the night, it seems to have sucked dry the very arteries and veins of my ambition." Clay went on to lose the nomination to Hardin. Brown refused to endorse Hardin, and the fractured Democratic party watched as the Republicans elected William O. Bradley, the party's first-ever governor of Kentucky. Despite Brown's proclaimed lack of interest in the Senate seat, he received one vote during the tumultuous 1896 Senate election to replace Senator J. C. S. Blackburn.
After his term as governor, Brown again returned to his legal practice in Louisville. He was an unsuccessful candidate for the House of Representatives in 1896, losing to Republican Walter Evans. He would later claim that he had only run in order to improve Democratic voter turnout for William Jennings Bryan's 1896 presidential bid. Prior to the 1899 Democratic nominating convention, Brown was mentioned as a possible gubernatorial nominee, but he declined to become a candidate. When the convention began, he was mentioned as a candidate for convention chairman, but he also refused to serve in this capacity.
Despite his proclaimed lack of interest in the gubernatorial nomination, Brown's name was entered as a candidate on the first ballot, along with Parker Watkins Hardin, former Congressman William J. Stone, and William Goebel, President Pro Tempore of the state senate. The convention was thrown into chaos when a widely known agreement between Stone and Goebel – designed to get Hardin out of the race – broke down. As balloting continued over the next four days (Sunday excepted) with no candidate receiving a majority, Brown continued to receive a few votes on each ballot. Finally, the convention delegates decided to drop the candidate with the lowest vote total until one candidate received a majority; this resulted in the nomination of Goebel a few ballots later.
Following the convention, disgruntled Democrats began to talk about rejecting their party's nominee and holding another nominating convention. Brown became the leader of this group, styled the "Honest Election League". Plans for the new convention were made at a meeting held August 2, 1899, in Lexington, Kentucky. The nomination was made official at a convention held in that city on August 16. In addition to Brown, the Honest Election League nominated a full slate of candidates for the other state offices.
Brown opened his campaign with a speech at Bowling Green on August 26, 1899. He answered many allegations that had been made about him, including claims that he had secretly been seeking the Democratic gubernatorial nomination all along, that he had ambitions of succeeding Senator William Joseph Deboe, and that following the nominating convention, he had agreed to speak on behalf of the Goebel ticket. Brown conceded that he desired Senator Deboe's senate seat and that he had agreed to accept the gubernatorial nomination if it had been offered to him, but he denied that he had ever agreed to speak on Goebel's behalf. Outgoing Senator Blackburn also charged that Brown was bolting the party again, just as he had in supporting Stephen Douglas over John C. Breckinridge for president in 1860. Brown replied by quoting an article from William Jennings Bryan's Omaha World-Herald that asserted the right of an individual to vote against the nominee of his party if the individual deemed the nominee unfit.
Due to his age and ill health, Brown was able to speak only once per week. At a campaign event in Madisonville, he challenged Goebel to a debate, but Goebel ignored the challenge. Brown, and other speakers enlisted on behalf of his campaign, frequently called attention to Goebel's refusal to acknowledge the challenge or agree to a debate. When William Jennings Bryan came to the state to campaign with Goebel, Brown sent him a letter challenging him to repudiate Goebel's nomination because of the broken agreement between Goebel and Stone. Bryan refused to comment on the events of the convention and stressed the importance of party loyalty. He denounced the Honest Election League's convention as irregular and invalid.
Brown's campaign faltered as the race drew to a close. Two weeks prior to the election, Brown was injured in a fall at Leitchfield; as a result of the injury, he was confined to his home and unable to deliver campaign speeches, despite several attempts to allow him to speak from a chair or wheelchair. The final vote count gave Republican William S. Taylor a small plurality with 193,714 votes to Goebel's 191,331; Brown garnered only 12,140 votes.
Goebel challenged the vote returns in several counties. While the challenges were being adjudicated, Goebel was shot by an unknown assassin; Goebel was ultimately declared the winner of the election, but died of his wounds two days after being sworn into office. Among those charged in Goebel's murder was Governor Taylor's Secretary of State, Caleb Powers. Powers employed Brown as his legal counsel during his first trial, which ended in a conviction in July 1900. Brown died January 11, 1904, in Henderson and was buried at the Fernwood Cemetery in that city. He was the namesake of, but not related to, 20th century Kentucky Congressman John Y. Brown Sr.
## See also
- List of United States representatives expelled, censured, or reprimanded
|
22,328,666 | Death of Ian Tomlinson | 1,158,083,688 | London man killed by Met. Police in 2009 | ["2009 deaths", "2009 in London", "2010s trials", "April 2009 events in the United Kingdom", "Criminal trials that ended in acquittal", "Deaths by beating in the United Kingdom", "Deaths by person in London", "Deaths by violence in the United Kingdom", "G20", "History of the City of London", "Manslaughter trials", "Metropolitan Police operations", "Police brutality in the United Kingdom", "Police misconduct in England", "Protest-related deaths", "Protests in London", "Trials in London", "Victims of police brutality"] |
Ian Tomlinson (7 February 1962 – 1 April 2009) was a newspaper vendor who collapsed and died in the City of London after being struck by a police officer during the 2009 G-20 summit protests. After an inquest jury returned a verdict of unlawful killing, the officer, Simon Harwood, was prosecuted for manslaughter. He was found not guilty but was dismissed from the police service for gross misconduct. Following civil proceedings, the Metropolitan Police Service paid Tomlinson's family an undisclosed sum and acknowledged that Harwood's actions had caused Tomlinson's death.
The first post-mortem concluded that Tomlinson had suffered a heart attack, but a week later The Guardian published a video of Harwood, a constable with London's Metropolitan Police, striking Tomlinson on the leg with a baton, then pushing him to the ground. Tomlinson was not a protester, and at the time he was struck he was trying to make his way home through the police cordons. He walked away after the incident, but collapsed and died minutes later.
After the Independent Police Complaints Commission (IPCC) began a criminal inquiry, further autopsies indicated that Tomlinson had died from internal bleeding caused by blunt force trauma to the abdomen, in association with cirrhosis of the liver. The Crown Prosecution Service (CPS) decided not to charge Harwood, because the disagreement between the first and later pathologists meant they could not show a causal link between the death and alleged assault. That position changed in 2011; after the verdict of unlawful killing, the CPS charged Harwood with manslaughter. He was acquitted in 2012 and dismissed from the service a few months later.
Tomlinson's death sparked a debate in the UK about the relationship between the police, media and public, and the independence of the IPCC. In response to the concerns, the Chief Inspector of Constabulary, Denis O'Connor, published a 150-page report in November 2009 that aimed to restore Britain's consent-based model of policing.
## Background
### Ian Tomlinson
Tomlinson was born to Jim and Ann Tomlinson in Matlock, Derbyshire. He moved to London when he was 17 to work as a scaffolder. At the time of his death, at the age of 47, he was working casually as a vendor for the Evening Standard, London's evening newspaper. Married twice with nine children, including stepchildren, Tomlinson had a history of alcoholism, as a result of which he had been living apart from his second wife, Julia, for 13 years, and had experienced long periods of homelessness. He had been staying since 2008 in the Lindsey Hotel, a shelter for the homeless on Lindsey Street, Smithfield, EC1. At the time of his death, he was walking across London's financial district in an effort to reach the Lindsey Hotel, his way hampered at several points by police lines. The route he took was his usual way home from a newspaper stand on Fish Street Hill outside Monument tube station, where he worked with a friend, Barry Smith.
### London police, IPCC
With over 31,000 officers, the Metropolitan Police Service (the Met) is the largest police force in the United Kingdom, responsible for policing Greater London, except for the financial district, the City of London. The latter has its own force, the City of London Police. The Met's commissioner at the time was Sir Paul Stephenson; the City of London Police commissioner was Mike Bowron. Responsibility for supervising the Met falls to the Metropolitan Police Authority, chaired by the Mayor of London, at the time Boris Johnson.
The officer seen pushing Tomlinson was a constable with the Met's Territorial Support Group (TSG), whose officers are identified by the "U" on their shoulder numbers. The TSG specializes in public-order policing; its officers wear military-style helmets, flame-retardant overalls, stab vests and balaclavas. Its operational commander at the time was Chief Superintendent Mick Johnson.
The Independent Police Complaints Commission (IPCC) began to operate in 2004; its chair when Tomlinson died was Nick Hardwick. Created by the Police Reform Act 2002, the commission replaced the Police Complaints Authority (PCA) following public dissatisfaction with the latter's relationship with the police. Unlike the PCA, the IPCC operates independently of the Home Office, which is the Government department responsible for criminal justice and policing in England and Wales.
### Operation Glencoe
The G20 security operation, codenamed "Operation Glencoe", was a "Benbow operation", which meant the Met, City of London Police and the British Transport Police worked under one Gold commander, in this case Bob Broadhurst of the Met.
There were six protests on 1 April 2009: a security operation at ExCeL London, a Stop the War march, a Free Tibet protest outside the Chinese Embassy, a People & Planet protest, a Climate Camp protest, and a protest outside the Bank of England. Over 4,000 protesters were at the Climate Camp and the same number at the Bank of England. On 1 April over 5,500 police officers were deployed and the following day 2,800, at a cost of £7.2 million. Officers worked 14-hour shifts that ended at midnight; they slept on the floor of police stations, were not given a chance to eat, and were back on duty at 7 am. This was viewed as having contributed to the difficulties they faced.
The Bank of England protesters were held in place from 12.30 pm until 7.00 pm using a process police called "containment" and the media called "kettling"—corralling protesters into small spaces until the police dispersed them. At 7 pm senior officers decided that "reasonable force" could be used to disperse the protesters around the bank. Between 7:10 and 7:40 pm the crowd surged toward the police, missiles were thrown, and the police pushed back with their shields. Scuffles broke out and arrests were made. This was the situation Tomlinson wandered into as he tried to make his way home.
## Incident
### Earlier encounter with police
Several newspapers published images of Tomlinson's first encounter with police that evening. According to Barry Smith, Tomlinson left the newspaper stand outside Monument tube station at around 7 pm. An eyewitness, IT worker Ross Hardy, said Tomlinson was on Lombard Street, drunk and refusing to move; a police van nudged him on the back of the legs, Hardy said, and when that did not work he was moved by four police officers wearing personal protective equipment. On 16 April The Guardian published three images of Tomlinson on Lombard Street.
Tomlinson stayed on Lombard Street for another half-hour, then made his way to King William Street, toward two lines of police cordons, where police had "kettled" thousands of protesters near the Bank of England. At 7:10 pm he doubled back on himself, walking up and down Change Alley where he encountered more cordons. Five minutes later he was on Lombard Street again, crossed it, walked down Birchin Lane, and reached Cornhill at 7:10–7:15 pm.
A few minutes later he was at the northern end of a pedestrian precinct, Royal Exchange Passage (formally called Royal Exchange Buildings), near the junction with Threadneedle Street, where a further police cordon stopped him from proceeding. He turned to walk south along Royal Exchange Passage instead, where, minutes before he arrived, officers had clashed with up to 25 protesters. Riot police from the Met's TSG, accompanied by City of London police dog handlers, had arrived there from the cordon in Threadneedle Street to help their colleagues.
### Encounter with officer
Police officers followed Tomlinson as he walked 50 yards (46 m) along the street. He headed towards Threadneedle Street, but again ran into police cordons and doubled back on himself towards Cornhill. According to a CPS report, he was bitten on the leg by a police dog at 7:15 pm, when a dog handler tried to move him out of the way, but he appeared not to react to it.
The same group of officers approached Tomlinson outside a Montblanc store at the southern end of Royal Exchange Passage, near the junction with Cornhill. He was walking slowly with his hands in his pockets; according to an eyewitness, he was saying that he was trying to get home.
The first Guardian video shows one officer lunge at Tomlinson from behind, strike him across the legs with a baton and push him back, causing him to fall. On 8 April Channel 4 News released their own footage, which showed the officer's arm swing back to head height before bringing it down to hit Tomlinson on the legs with the baton. Another video obtained by The Guardian on 21 April shows Tomlinson standing by a bicycle rack, hands in his pockets, when the police approach him. After he is hit, he can be seen scraping along the ground on the right side of his forehead; eyewitnesses spoke of hearing a noise as his head hit the ground.
### Collapse
Tomlinson can be seen briefly remonstrating with police as he sits on the ground. None of the officers offered assistance. After being helped to his feet by a protester, Tomlinson walked 200 feet (60 m) along Cornhill, where he collapsed at around 7:22 pm outside 77 Cornhill. Witnesses say he appeared dazed, eyes rolling, skin grey. They also said he smelled of alcohol. An ITV News photographer tried to give medical aid, but was forced away by police, as was a medical student. Police medics attended to Tomlinson, who was pronounced dead on arrival at hospital.
## Simon Harwood
### Background
Simon Harwood, the officer who struck and pushed Tomlinson, was a police constable with the Territorial Support Group (TSG) at Larkhall Lane police station in Lambeth, South London. Harwood had faced 10 complaints in 12 years, nine of which had been dismissed or unproven. The complaint that was upheld involved unlawful access to the Police National Computer. The complaints included a road rage incident in or around 1998 while he was on sick leave, during which he reportedly tried to arrest the other driver, who alleged that Harwood had used unnecessary force. On Friday 14 September 2001, before the case was heard by a discipline board, Harwood retired on medical grounds. Three days later, on Monday 17 September, he rejoined the Met as a civilian computer worker.
In May 2003 Harwood joined the Surrey Police as a constable. Surrey Police said he was frank about his history. In January 2004 he was alleged to have assaulted a man during a raid on a home. In November 2004, on his request, Harwood was transferred back to the Met. There were three more complaints after that, before the incident with Tomlinson.
### On the day
Harwood was involved in several confrontations on the day of Tomlinson's death. He had been on duty since 5 am, assigned as a driver, and had spent most of the day in his vehicle. While parked on Cornhill in the evening, he saw a man write "all cops are bastards" on the side of another police van, and left his vehicle to attempt to arrest the man. The man resisted arrest, and his head collided with a van door, triggering a response from the crowd that made Harwood believe it was unsafe to return to his vehicle. He told the inquest that he had been hit on the head, had fallen over, lost his baton, had been attacked by the crowd and feared for his life, but later acknowledged this had not happened.
Shortly after his attempted arrest of the graffiti man, Harwood swung a coat at a protester, pulled a BBC cameraman to the ground, used a palm strike against one man, and at 7:19 pm pushed another man to the ground for allegedly threatening a police dog handler. It was seconds after this that he saw Tomlinson standing with his hands in his pockets beside a bicycle rack, being told by police to move away. Harwood told the inquest he made a "split-second decision" that there was justification for engagement, then struck Tomlinson on the thigh with his baton and pushed him to the ground. He said it was a "very poor push" and he had been shocked when Tomlinson fell. Harwood made no mention of the incident in his notebook; he told the inquest he had forgotten about it.
### Identification
Newspapers did not release Harwood's name until July 2010. On the day of the incident, he appeared to have removed his shoulder number and covered the bottom of his face with his balaclava. Simon Israel of Channel 4 News reported a detailed description of the officer on 22 April 2009; the IPCC sought but failed to obtain an injunction to prevent Channel 4 broadcasting the description, alleging that it might prejudice their inquiry. Fifteen months later, when announcing in July 2010 that no charges would be brought against Harwood, the Crown Prosecution Service still referred to him as "PC A." It was only on that day that newspapers decided to name him.
Harwood said he first realized on 8 April, when he saw the Guardian video, that Tomlinson had died. He reportedly collapsed at home and had to be taken to hospital by ambulance. Harwood and three colleagues made themselves known to the IPCC that day.
## Early accounts
### First police statement
The Met issued its first statement on 1 April at 11:36 pm, four hours after Tomlinson died, a statement approved by the IPCC's regional director for London. The statement said that police had been alerted that a man had collapsed and were attacked by "a number of missiles" as they tried to save his life, an allegation that was inaccurate, according to later media reports.
According to Nick Davies in The Guardian, the statement was the result of an intense argument in the Met's press office, after an earlier draft had been rejected. He wrote that both the Met and IPCC said the statement represented the truth as they understood it at the time, and that there had been no allegation at that point that Tomlinson had come into contact with police. Davies asked why the IPCC were involved if they had not realized there had been police contact. He alleged that senior sources within the Met said privately that the assault on Tomlinson had been spotted by the police control room at Cobalt Street in south London, and that a chief inspector on the ground had also reported it. The Met issued a statement saying they had checked with every chief inspector who had been part of Operation Glencoe, and that none of them had called in such a report.
### First eyewitness accounts
On 2 April the Met handed responsibility for the investigation to the City of London police; the officer in charge was Detective Superintendent Anthony Crampton. After police briefings, the Evening Standard reported on 2 April that "police were bombarded with bricks, bottles and planks of wood" as they tried to save Tomlinson, forced by a barrage of missiles to carry him to a safe location to give him mouth-to-mouth resuscitation.
Eyewitnesses said the story was inaccurate. They said protesters had provided first aid and telephoned for medical help. Others said that one or two plastic bottles had been thrown by people unaware of Tomlinson's situation, but other protesters had told them to stop. According to The Times, an analysis of television footage and photographs showed just one bottle, probably plastic, being thrown. Video taken by eyewitness Nabeela Zahir, published by The Guardian on 9 April, shows one protester shouting, "There is someone hurt here. Back the fuck up." Another voice says, "There's someone hurt. Don't throw anything."
### Officers report the incident
Three police constables from the Hammersmith and Fulham police station—Nicholas Jackson, Andrew Moore, and Kerry Smith—told their supervisor, Inspector Wynne Jones, on 3 April that they had witnessed the incident. They can be seen in The Guardian video standing next to Tomlinson. Jackson was the first to tell the inspector; officers then contacted Moore and Smith, who had been standing next to Jackson at the time.
Jackson, Moore and Smith did not recognize Simon Harwood, the officer who struck Tomlinson, and according to the newspaper assumed he was with the City of London police. This was four days before The Guardian published the video. The inspector passed this information at 4:15 pm on 3 April to Detective Inspector Eddie Hall, the Met's point of contact for Tomlinson's death. Hall said he passed it to the City of London police before the first autopsy was conducted that day by Freddy Patel, which according to The Guardian began at 5 pm.
## Post-mortem examinations
An inquest was opened on 9 April 2009 by Paul Matthews, the City of London coroner. Three post-mortems were conducted: on 3 April by Mohmed Saeed Sulema "Freddy" Patel for Paul Matthews; on 9 April by Nathaniel Cary for the IPCC and Tomlinson's family; and on 22 April jointly by Kenneth Shorrock for the Metropolitan police and Ben Swift for Simon Harwood. The coroner was criticized for reportedly having failed to allow IPCC investigators to attend the first, and for failing to tell Tomlinson's family that they had a legal right to attend or send a representative. The family also said he had not told them where and when it was taking place.
### First post-mortem
According to Detective Sergeant Chandler of the City of London police, he was not told until the first post-mortem was over, or at an advanced stage, that three police officers had seen another officer hit and push Tomlinson. Neither Patel nor the IPCC, it appears, was told about the three witnesses. Patel said he was told only that the case was a "suspicious death"; the police had asked that he "rule out any assault or crush injuries associated with public order".
Patel concluded that Tomlinson had died of coronary artery disease. His report noted "intraabdominal fluid blood about 3l with small blood clot", which was interpreted by medical experts to mean that he had found three litres of blood in Tomlinson's abdomen. This would have been around 60 per cent of Tomlinson's total blood volume, a "highly significant indicator of the cause of death", according to the Crown Prosecution Service (CPS). In a report for the CPS a year later, on 5 April 2010, Patel wrote that he had meant "intraabdominal fluid with blood". He did not retain samples of the fluid for testing. This issue became pivotal to the decision not to prosecute Harwood. The City of London police issued a statement on 4 April: "A post-mortem examination found he died of natural causes. [He] suffered a sudden heart attack while on his way home from work."
The IPCC told reporters that the post-mortem showed no bruising or scratches on Tomlinson's head and shoulders. When the family asked the City of London police, after the post-mortem, whether there had been marks on Tomlinson's body, they were told no; according to The Guardian, Detective Superintendent Anthony Crampton, who was leading the investigation, wrote in his log that he did not tell the family about a bruise and puncture marks on Tomlinson's leg to avoid causing "unnecessary stress or alarm". On 5 April The Observer published the first photograph of Tomlinson lying on the ground next to riot police. After it was published, Freddy Patel was asked to return to the mortuary, where he made a note of bruising on Tomlinson's head that he had not noticed when he first examined him. On 24 April Sky News obtained an image of Tomlinson after he collapsed, which showed bruising on the right side of his forehead.
### Second and third post-mortem
The IPCC removed the Tomlinson inquiry from the City of London police on 8 April. A second post-mortem, ordered jointly by the IPCC and Tomlinson's family, was carried out that day by Nathaniel Cary, known for his work on high-profile cases. Cary found that Tomlinson had died because of internal bleeding from blunt force trauma to the abdomen, in association with cirrhosis of the liver. He concluded that Tomlinson had fallen on his elbow, which he said "impacted in the area of his liver causing an internal bleed which led to his death a few minutes later".
Because of the conflicting conclusions of the first two, a third post-mortem was conducted on 22 April by Kenneth Shorrock on behalf of the Metropolitan police, and Ben Swift on behalf of Simon Harwood. Shorrock and Swift agreed with the results of the second autopsy. The Met's point of contact for Tomlinson's death, Detective Inspector Eddie Hall, told the pathologists before the final post-mortem that Tomlinson had fallen to the ground in front of a police van earlier in the evening, although there was no evidence that this had happened. The IPCC ruled in May 2011 that Hall had been reckless in making this claim, but had not intended to mislead.
### Freddy Patel
At the time of Tomlinson's death, Patel was on the Home Office's register of accredited forensic pathologists. He qualified as a doctor at the University of Zambia in 1974, and registered to practice in the UK in 1988. The Metropolitan Police had written to the Home Office in 2005 raising concerns about his work. At the time of Tomlinson's death he did not have a contract with the police to conduct post-mortems in cases of suspicious death.
In 1999, Patel was reprimanded by the General Medical Council (GMC) for having released medical details about Roger Sylvester, a man who had died in police custody. In 2002, the police dropped a criminal inquiry because Patel said the victim, Sally White, had died of a heart attack with no signs of violence, although she was reportedly found naked with bruising to her body, an injury to her head and a bite mark on her thigh. Anthony Hardy, a mentally ill alcoholic who lived in the flat in which her body was found locked in a bedroom, later murdered two women and placed their body parts in bin bags. The police investigated Patel in relation to that postmortem, but the investigation was dropped. In response to the criticism, Patel said the GMC reprimand was a long time ago, and that his findings in the Sally White case had not been contested.
Patel was suspended from the government's register of pathologists in July 2009, pending a GMC inquiry. The inquiry concerned 26 charges related to postmortems in four other cases. In one case Patel was accused of having failed to spot signs of abuse on the body of a five-year-old girl who had died after a fall at home, and of having failed to check with the hospital about its investigation into her injuries. The child's body was exhumed for a second postmortem, and her mother was convicted. The hearings concluded in August 2010; Patel was suspended for three months for "deficient professional performance". In May 2011, the GMC opened an investigation into his handling of the Tomlinson post-mortem. He was struck off the medical register in August 2012.
## Images
### Observer photograph
On 5 April The Observer (The Guardian's sister paper) published the first photograph of Tomlinson lying on the ground next to riot police. Over the next few days the IPCC told reporters that Tomlinson's family were not surprised that he had had a heart attack. When journalists asked whether he had been in contact with police officers before his death, they were told the speculation would upset the family.
### Guardian video
The first Guardian video was shot on a digital camera by an investment fund manager from New York who was in London on business, and who attended the protests out of curiosity. On his way to Heathrow airport, he realized that the man he had filmed being assaulted was the man who had reportedly died of a heart attack. At that point, 2 am on 7 April, he passed his footage to The Guardian, which published it on its website that afternoon. The newspaper passed a copy to the IPCC, which opened a criminal inquiry.
### Channel 4 video
A video by Ken McCallum, a cameraman for Channel 4 News, was broadcast on 8 April. Shot from a different angle, the footage shows Harwood draw his arm back to head height before bringing the baton down on Tomlinson's legs. McCallum was filming another incident at the time; the Tomlinson incident was unfolding in the background, unseen by the journalists but recorded by the camera. Half an hour later Alex Thomson, chief correspondent of Channel 4 News, was doing a live broadcast when the camera was damaged. It took engineers days to recover the tape; only then did they see that the assault on Tomlinson had been recorded.
### Nabeela Zahir video
On 9 April The Guardian published footage from Nabeela Zahir, a freelance journalist, showing Tomlinson after his collapse. The police can be seen moving away at least one woman who tried to help him, and a man, Daniel McPhee, who was on the phone to the ambulance services. The footage shows that the Met's initial claim that there had been a barrage of missiles from protesters while police tried to save Tomlinson was inaccurate. Protesters can be heard calling for calm; one shouts "Don't throw anything." According to The Guardian, 56 seconds into the video, three officers can be seen with their face masks pulled halfway up their faces.
### Cornhill video
The Guardian obtained a four-minute video on 21 April from an anonymous bystander who had been filming on Cornhill between 7:10 and 7:30 pm. The footage shows Tomlinson standing behind a bicycle rack in Royal Exchange Passage with his hands in his pockets, and a group of advancing police officers. When a police dog approaches him, he turns his back. At that point, he is hit on the legs and pushed by the TSG constable, and can be seen scraping along the ground on the right side of his forehead. Eyewitnesses said they heard a noise as his head hit the ground. The IPCC sought an injunction against the broadcast of the video by Channel 4 News, but a judge rejected the application. An image obtained by Sky News on 24 April appears to show bruising on the right side of Tomlinson's forehead. A head injury was recorded by the second and third pathologists.
### CCTV cameras
Nick Hardwick, chair of the IPCC, said on 9 April that there were no CCTV cameras in the area. On 14 April the Evening Standard wrote that it had found at least six CCTV cameras in the area around the assault. After photographs of the cameras were published, the IPCC reversed its position and said its investigators were looking at footage from cameras in Threadneedle Street near the corner of Royal Exchange Passage.[^1]
## Early reaction and analysis
### British policing
Tomlinson's death sparked a discussion about the nature of Britain's policing and the relationship between the police, public, media and IPCC. The mayor of London, Boris Johnson, dismissed the criticism of the police as "an orgy of cop bashing". The death was compared to others that had each acted as a watershed in the public's perception of policing, including that of Blair Peach (1979), Stephen Lawrence (1993) and Jean Charles de Menezes (2005). The IPCC was criticized for having taken seven days from Tomlinson's death, and five days after hearing evidence that police may have been involved, to remove the City of London police from the investigation.
David Gilbertson, a former assistant inspector who had worked for the Home Office formulating policing policy, told The New York Times that the British police used to act with the sanction of the public, but that tactics had changed after a series of violent assaults on officers in the 1990s. Now dressed in military-style uniforms and equipped with anti-stab vests, extendable metal batons and clubs that turn into handcuffs, an entire generation of officers has come to regard the public as the enemy, the newspaper said.
### The Guardian, police and IPCC
Tomlinson's death was confirmed in a statement that accused protesters of having hampered police efforts to save his life. His family were not told he had died until nine hours after his death. The police and IPCC told journalists that his family were not surprised to hear he had had a heart attack. Journalists who asked whether police had had any contact with Tomlinson were asked not to speculate in case it upset the family. Direct contact with the family was refused. The police issued a statement on behalf of the family instead, which said the police were keeping them informed.
The Observer (The Guardian's sister paper) published an image of Tomlinson on the ground on Sunday, 5 April. That morning Tomlinson's family attended the scene of his death, where they met Paul Lewis, a Guardian reporter who had worked on The Observer story. Tomlinson's wife said this meeting was the first the family had heard of police contact with Tomlinson before his death. The family's police liaison officer later approached the newspaper to say he was "extremely unhappy" that Lewis had spoken to the family, and that the newspaper had to stay away from them for 48 hours. The IPCC accused the newspaper of "doorstepping the family at a time of grief". On the same day, the IPCC briefed other journalists that there was nothing in the story that Tomlinson might have been assaulted by police. During this period, according to Tomlinson's family, they were prevented from seeing his body; they were first allowed to see him six days after his death.
On 7 April The Guardian published the American banker's video, and later that evening handed it to an IPCC investigator and a City of London police officer who arrived at the newspaper's offices. The officers asked that the video be removed from the website, arguing that it jeopardized their inquiry and was not helpful to the family. Nick Hardwick, chair of the IPCC, said the IPCC had asked The Guardian to remove the video only because it would have been better had witnesses not seen it before being questioned.
### Metropolitan police response
The Chief Inspector of Constabulary, Denis O'Connor, published a 150-page report in November 2009 that aimed to restore Britain's consent-based model of policing.
O'Connor wrote that there had been a hardening of police attitudes, with officers believing that proportionality meant reciprocity. The deployment of officers in riot gear had become a routine response to lawful protest, largely the result of an ignorance of the law and a lack of leadership from the Home Office and police chiefs. Officers were being trained to use their riot shields as weapons. Police forces across the country differed in their training, the equipment they had access to, and their understanding of the law. The failure to understand the relevant legislation was in part due to its complexity, the report said, with 90 amendments to the Public Order Act passed since 1986.
The report made several recommendations, including the creation of a set of national principles emphasizing the minimum use of force at all times, and making the display of police ID a legal requirement. In February 2010 the Met announced that 8,000 of its officers had been issued with embroidered epaulettes, after several had complained that their numbers were falling off rather than being removed deliberately.
## Legal aftermath
### Decision not to prosecute
In April 2010 The Guardian published an open letter from several public figures asking the Crown Prosecution Service (CPS) to proceed with a prosecution or explain its position. In July that year Keir Starmer, director of the CPS, announced that there would be no prosecution because of the medical disagreement between the three pathologists. Starmer said there was enough evidence for an assault charge, but the six-month deadline for that had expired.
The area of conflict concerned Patel's finding during the first autopsy of "intraabdominal fluid blood about 3l with small blood clot". This was interpreted by other medical experts to mean that Patel had found three litres of blood in Tomlinson's abdomen. Starmer said this would have been around 60 percent of Tomlinson's blood volume, a "highly significant indicator of the cause of death".
In April 2010 Patel introduced an ambiguity in a second report for the CPS, saying he had found "intraabdominal fluid with blood about 3l with small blood clot" [emphasis added]. The ambiguity had to be clarified, because the second and third pathologists had relied in part on Patel's original notes to form their views. Patel was interviewed twice by the CPS. According to Starmer, Patel "maintained that the total fluid was somewhat in excess of three litres but that it was mainly ascites (a substance which forms in a damaged liver), which had been stained with blood. He had not retained the fluid nor had he sampled it in order to ascertain the proportion of blood because, he said, he had handled blood all his professional life and he knew that this was not blood but blood-stained ascites." Patel also said he had found no internal rupture that would have led to this degree of blood loss.
Several conclusions were drawn from discussions between Patel and the CPS, Starmer said: (a) because Patel had not retained or sampled the three litres of fluid, no firm conclusions could be drawn about the nature of it; (b) for Tomlinson's death to have resulted so quickly from blood loss, there would have to have been a significant internal rupture; (c) Patel found no such rupture; (d) the later postmortems also found no visible rupture; and (e) because Patel was the only person to have examined Tomlinson's intact body, he was in the best position to judge the nature of the fluid, and whether there was a rupture that could have caused it. This meant that Patel's evidence would significantly undermine the evidence of the second and third pathologists.
Nathaniel Cary, the second pathologist, objected to the CPS's decision. Cary told The Guardian that the push had caused a haemorrhage to Tomlinson's abdomen, and the haemorrhage caused him to collapse. Cary said Tomlinson was vulnerable to this because he had liver disease. The CPS had erred in dismissing a charge of actual bodily harm (ABH), in his view. In a letter to Tomlinson's family, the CPS described Tomlinson's injuries as "relatively minor" and therefore insufficient to support such a charge. But Cary told The Guardian: "The injuries were not relatively minor. He sustained quite a large area of bruising. Such injuries are consistent with a baton strike, which could amount to ABH. It's extraordinary. If that's not ABH I would like to know what is."
### Inquest
The inquest was opened and adjourned in April 2009. The City of London coroner, Paul Matthews, expressed concern about whether he had appropriate expertise, and Peter Thornton QC, who specialises in protest law, was appointed in his place. The inquest opened on 28 March 2011 before a jury. The court heard from Kevin Channer, a cardiologist at Royal Hallamshire Hospital, who analysed electrocardiogram (ECG) data from the defibrillator paramedics had used on Tomlinson. He said the readings were inconsistent with an arrhythmic heart attack, but consistent with death from internal bleeding. Pathologist Nathaniel Cary concurred regarding the cause of death. Graeme Alexander, a hepatologist, said that in his opinion Tomlinson had died of internal bleeding as a result of trauma to the liver after the fall. He told the court that Tomlinson had been suffering from serious liver disease, which would have made him susceptible to collapse from internal bleeding.
Giving evidence over three days, Harwood said that Tomlinson "just looked as if he was going to stay where he was forever and was almost inviting physical confrontation in terms of being moved on". He said he had not warned Tomlinson and had acted because Tomlinson was encroaching on a police line, which amounted to a breach of the peace. The court heard that Tomlinson's last words after collapsing were, "they got me, the fuckers got me"; he died moments later. On 3 May 2011 the jury returned a verdict of unlawful killing, ruling that the officer—Harwood was not named for legal reasons—had used excessive and unreasonable force in hitting Tomlinson, and had acted "illegally, recklessly and dangerously".
### IPCC reports
In May 2011 the IPCC released three reports into Tomlinson's death, written between April 2010 and May 2011. The main report contained material revealed during the inquest. The third report detailed an allegation from Tomlinson's family that the police had offered misleading information to the pathologists before the third autopsy on 22 April 2009. The Met's point of contact for Tomlinson's death, Detective Inspector Eddie Hall, had told the pathologists that Tomlinson had fallen to the ground in front of a police van earlier in the evening, although there was no evidence to support this. The IPCC ruled that Hall had been reckless in making this claim, but had not intended to mislead the pathologists.
### Trial of Simon Harwood
Keir Starmer, director of the CPS, announced on 24 May 2011 that a summons for manslaughter had been issued against Harwood. He said the CPS had reviewed its decision not to prosecute because new medical evidence had emerged during the inquest, and because the various medical accounts, including that of the first pathologist, had been tested during questioning. The trial opened on 18 June 2012. Harwood entered a plea of not guilty, and was acquitted on 19 July.
The court was shown extensive video footage of Tomlinson and Harwood on the day. Harwood was seen trying to arrest a man who had daubed graffiti on a police van, then joining a line of officers who were clearing Royal Exchange Passage. Harwood pushed a man who blew a vuvuzela at him, then appeared to push a BBC cameraman who was filming the arrest of another man. The footage showed Harwood push a third man out of the way, and shortly after this (the passageway now almost empty) the officers reached Tomlinson.
Mark Dennis QC, for the prosecution, argued that Harwood's use of force against Tomlinson had been unnecessary and unreasonable, and had caused Tomlinson's death. He argued that a "clear temporal link" between the incident and Tomlinson's collapse had been provided by the Guardian video, that Tomlinson had posed no threat, and that the use of force had been a "gratuitous act of aggression". The defence argued that Tomlinson's health was relevant. The court heard that he had liver and brain disease caused by alcohol abuse, numbness in his legs and balance problems, and that he had been treated at least 20 times between 2007 and 2009, mostly at A&E departments, related to falling while drunk. On the day he died, The Times reported, he had drunk a bottle of red wine, a small bottle of vodka and several cans of 9-per-cent super-strength lager.
Harwood told the court that Tomlinson had ignored orders to move along. He acknowledged that he had pushed Tomlinson firmly, but said he had not expected him to fall. He also acknowledged that he had "got it wrong", and said he had not realized Tomlinson was in such poor health. The jury found him not guilty after deliberating for four days.
### Dismissal, civil suit
Harwood was dismissed from the Metropolitan Police Service in September 2012 after a disciplinary hearing found that he had acted with "gross misconduct" in his actions towards Tomlinson. Tomlinson's family filed a lawsuit against the Metropolitan Police, which paid the family an undisclosed sum in August 2013. Deputy Assistant Commissioner Maxine de Brunner issued a formal apology for "Simon Harwood's use of excessive and unlawful force, which caused Mr Tomlinson's death, and for the suffering and distress caused to his family as a result."
[^1]: Peter Dominiczak, Lucy Proctor, Kiran Randhawa, "We were wrong over CCTV, says police watchdog", Evening Standard, 14 April 2009.
|
21,049,733 |
Mozart in Italy
| 1,170,017,202 |
Wolfgang Amadeus Mozart's three journeys to Italy
|
[
"18th century in Italy",
"Wolfgang Amadeus Mozart"
] |
Between 1769 and 1773, the young Wolfgang Amadeus Mozart and his father Leopold Mozart made three Italian journeys. The first, an extended tour of 15 months, was financed by performances for the nobility and by public concerts, and took in the most important Italian cities. The second and third journeys were to Milan, for Wolfgang to complete operas that had been commissioned there on the first visit. From the perspective of Wolfgang's musical development the journeys were a considerable success, and his talents were recognised by honours which included a papal knighthood and memberships in leading philharmonic societies.
Leopold Mozart had been employed since 1747 as a musician in the Archbishop of Salzburg's court, becoming deputy Kapellmeister in 1763, but he had also devoted much time to Wolfgang's and sister Nannerl's musical education. He took them on a European tour between 1763 and 1766, and spent some of 1767 and most of 1768 with them in the imperial capital, Vienna. The children's performances had captivated audiences, and the pair had made a considerable impression on European society. By 1769, Nannerl had reached adulthood, but Leopold was anxious to continue 13-year-old Wolfgang's education in Italy, a crucially important destination for any rising composer of the 18th century.
During the first tour, Wolfgang's performances were well received, and his compositional talents recognised by commissions to write three operas for Milan's Teatro Regio Ducale, each of which was a critical and popular triumph. He met many of Italy's leading musicians, including the renowned theorist Giovanni Battista Martini, under whom he studied in Bologna. Leopold also hoped that Wolfgang, and possibly he himself, would obtain a prestigious appointment at one of the Italian Habsburg courts. This objective became more important as Leopold's advancement in Salzburg became less likely; but his persistent efforts to secure employment displeased the imperial court, which precluded any chance of success. The journeys thus ended not with a triumphant return, but on a note of disappointment and frustration.
## Background
In November 1766, the Mozart family returned to Salzburg after a three-and-a-half year "grand tour" of the major Northern European cities, begun when Wolfgang was seven and Nannerl twelve. This tour had largely achieved Leopold's objective to demonstrate his children's talents to the wider world and advance their musical education. A stay in Vienna beginning in 1767 proved less happy: an outbreak of smallpox, which led to the death of the Archduchess Maria Josepha of Austria, prevented the children from performing in the imperial court and forced the family to seek refuge in Bohemia, a move which did not prevent Wolfgang from contracting the disease. They returned to Vienna in January 1768, but by now the children were no longer young enough to cause a sensation in their public concerts. Leopold fell out with the court impresario Giuseppe Affligio, and damaged his relations with the eminent court composer Christoph Willibald Gluck, through an over-eagerness to secure a performance of Wolfgang's first opera, La finta semplice. As a consequence he developed a reputation at court for being importunate and "pushy".
After returning to Salzburg in January 1769, Leopold considered the 18-year-old Nannerl's education to be virtually finished, and focused his efforts on Wolfgang. He decided to take the boy to Italy, which in its pre-unification days was a collection of duchies, republics, and papal states, with the Kingdom of Naples in the south. For more than two centuries Italy had been the source of innovations in musical style, the home of church music, and above all the cradle of opera. In Leopold's view, Wolfgang needed to absorb firsthand the music of Venice, Naples, and Rome, to equip himself for future commissions from Europe's opera houses, "the late eighteenth-century composers' honeypots" according to Mozart's biographer Stanley Sadie. Leopold wanted Wolfgang to immerse himself in the Italian language, to experience church music of the highest quality, and to extend his network of influential acquaintances. There was also the possibility, for both Wolfgang and Leopold, of securing positions in the northern Italian Habsburg courts. With these priorities in mind, Leopold decided that Nannerl and her mother should stay at home, a decision they resented but which made economic and practical sense.
In the months before their departure, Wolfgang composed prolifically, gaining the favour of Archbishop Siegmund Christoph von Schrattenbach, who, as Leopold's employer, had to consent to the journey. Permission to travel, along with a gift of 600 florins, was granted in October. Wolfgang was awarded the honorary title of Konzertmeister (court musician), with a hint that on his return this post would merit a salary.
## First journey, December 1769 – March 1771
### Journey to Milan
On 13 December 1769, Leopold and Wolfgang set out from Salzburg, armed with testimonials and letters that Leopold hoped would smooth their passage. Among the most important was an introduction to Count Karl Joseph Firmian of Milan, described as the "King of Milan", an influential and cultivated patron of the arts. His support would be vital to the success of the entire Italian undertaking.
The pair travelled through Innsbruck, then due south to the Brenner Pass into Italy. They continued through Bolzano and Rovereto to Verona and Mantua, before turning west towards Milan. Leopold's financial plans for the journey were broadly the same as for the family's grand tour—travel and accommodation costs were to be met by concert proceeds. This 350-mile (560 km) winter journey to Milan occupied a difficult and unpleasant six weeks, with the weather forcing extended stops. Leopold complained in his letters home about unheated inn rooms: "freezing like a dog, everything I touch is ice". Early concert receipts were modest; according to Leopold, costs were running at around 50 florins a week. After having unwisely boasted about profits made from the grand tour, Leopold was now more cautious about revealing financial details. He tended to emphasise his expenses and minimise his takings, writing, for example: "On the whole we shall not make much in Italy ... one must generally accept admiration and bravos as payment."
The longest pause was two weeks spent in Verona, where the press reported glowingly on Wolfgang's concert of 5 January 1770. Father and son attended a performance of Guglielmi's Ruggiero, which Wolfgang wrote about dismissively in a letter to Nannerl. The boy also had his portrait painted by a local artist, most probably Giambettino Cignaroli. An alternative attribution to Saverio Dalla Rosa has also been suggested. The portrait had been commissioned by Pietro Lugiati, Receiver-General for the Venetian Republic. This interlude was followed by a shorter stop in Mantua, where Wolfgang gave a concert at the Accademia Filarmonica, with a programme designed to test his abilities in performance, sight reading, and improvisation. According to a press review the audience was "dumbfounded" at this "miracle in music, one of those freaks that Nature causes to be born". In Mantua, they suffered a snub from Prince Michael of Thurn und Taxis, who informed them through a servant that he had no desire to meet them. Historian Robert Gutman surmises that the Prince, aware of the Affligio affair in Vienna, wanted no dealings with musicians who did not know their place. By contrast, Count Arco, whose family were members of the Salzburg court, received them warmly.
The Mozarts arrived in Milan on 23 January and found comfortable lodgings in the monastery of San Marco, not far from Count Firmian's palace. While they waited to see the Count, they attended Niccolò Piccinni's opera Cesare in Egitto. Firmian eventually welcomed them with generous hospitality and friendship, presenting Wolfgang with a complete edition of the works of Metastasio, Italy's leading dramatic writer and librettist. Firmian also hosted a series of concerts attended by many of the city's notables, including Archduke Ferdinand, a possible future patron for the young composer. For the last of these occasions, Wolfgang wrote a set of arias using Metastasio's texts. These were so well received that Firmian commissioned Wolfgang to write the opening opera for the following winter's carnival season in Milan, just as Leopold had hoped he might. Wolfgang would receive a fee of around 500 florins, and free lodgings during the writing and rehearsal. The Mozarts left Milan on 15 March, heading south towards Florence and Rome, committed to return in the autumn and taking with them fresh letters of recommendation from Firmian.
Up to this point in the tour Wolfgang appears to have done little composition. The Accademia Filarmonica concert in Mantua had included much improvisation but little of Wolfgang's own music; the only certain compositions from this phase of the tour are the arias composed for the final Firmian concert, which sealed his contract for the carnival opera. These are Se tutti i mali miei, K. 83/73p, Misero me, K. 77/73e, and Ah più tremar ..., K. 71. The Symphony in G, K. 74, evidently completed in Rome in April, may have been started in Milan.
### Milan to Naples
The first stop on the southward journey was at Lodi, where Wolfgang completed his first string quartet, K. 80/73f. After a few days in Parma, the Mozarts moved on to Bologna, a "centre for masters, artists and scholars", according to Leopold. Their letter from Firmian introduced them to Count Pallavicini-Centurioni, a leading patron of the arts, who immediately arranged a concert for the local nobility in his palace. Among the guests was Giovanni Battista Martini, the leading musical theorist of his day and Europe's most renowned expert in Baroque counterpoint. Martini received the young composer and tested him with exercises in fugue. Always with an eye upon Wolfgang's future prospects in the courts of Europe, Leopold was anxious for engagement with the great master; but time was short, so he arranged a return to Bologna in the summer for extended tuition. The pair left on 29 March, carrying letters from Pallavicini that might clear the way for an audience with Pope Clement XIV in Rome. Before they left, they made the acquaintance of the Czech composer Josef Mysliveček, whose opera La Nitteti was being prepared for performance. Later in 1770, Wolfgang would use the Mysliveček opera as a source of motives for his own opera Mitridate, re di Ponto and various symphonies. More broadly, it marked the beginning of a close association between Mysliveček and the Mozart family that lasted until 1778. Wolfgang used his works repeatedly as models of compositional style.
The next day they arrived in Florence, where Pallavicini's recommendation gained them a meeting at the Palazzo Pitti with the Grand Duke and future emperor Leopold. He remembered the Mozarts from 1768 in Vienna, and asked after Nannerl. In Florence they encountered the violinist Pietro Nardini, whom they had met at the start of their grand tour of Europe; Nardini and Wolfgang performed together in a long evening concert at the Duke's summer palace. Wolfgang also met Thomas Linley, an English violin prodigy and a pupil of Nardini's. The two formed a close friendship, making music and playing together "not as boys but as men", as Leopold remarked. Gutman reports that "a melancholy Thomas followed the Mozarts' coach as they departed for Rome on 6 April". The boys never met again; Linley, after a brief career as a composer and violinist, died in a boating accident in 1778, at the age of 22.
After five days of difficult travel through wind and rain, lodged uncomfortably at inns Leopold described as disgusting, filthy, and bereft of food, they reached Rome. Pallavicini's letters soon had their effect: meetings with the Count's kinsman Lazaro Opizio Cardinal Pallavicino, Prince San Angelo of Naples, and Charles Edward Stuart, known as "Bonnie Prince Charlie", Pretender to the throne of Great Britain. There was much sightseeing, and performances before the nobility. The Mozarts visited the Sistine Chapel, where Wolfgang heard and later wrote down from memory Gregorio Allegri's famous Miserere, a complex nine-part choral work that had not been published. Amid these activities, Wolfgang was busily composing. He wrote the contradanse K. 123/73g and the aria Se ardire, e speranza (K. 82/73o), and finished the G major symphony begun earlier.
After four busy weeks the Mozarts departed for Naples. Travellers on the route through the Pontine Marshes were frequently harassed by brigands, so Leopold arranged a convoy of four coaches. They arrived on 14 May. Armed with their letters of recommendation, the Mozarts were soon calling on the prime minister, marchese Bernardo Tanucci, and William Hamilton, the British ambassador, whom they knew from London. They gave a concert on 28 May, which brought in about 750 florins (Leopold would not reveal the exact amount), and attended the first performance of Niccolò Jommelli's opera Armida abbandonata at the Teatro di San Carlo. Wolfgang was impressed by both the music and the performance, though he felt it "too old-fashioned and serious for the theatre". Invited to write an opera for the next San Carlo season, he declined because of his prior commitment to Milan. When no summons to play at the royal court was forthcoming, Leopold eventually decided to leave Naples, after visits to Vesuvius, Herculaneum, Pompeii, and the Roman baths at Baiae. They departed by post-coach for Rome on 25 June.
### Return from Naples
The party made a rapid 27-hour return trip to Rome; in the process, Leopold sustained a leg injury that troubled him for several months. Wolfgang was granted an audience with the Pope, and was made a knight of the Order of the Golden Spur. From Rome they made their way to the famous Santa Casa pilgrimage site at Loreto, and took the coastal road to Rimini—under military protection, because the road was subject to attacks from marauding pirates. From Rimini they moved inland, and reached Bologna on 20 July.
Leopold's priority was to rest his leg. Wolfgang passed the time by composing a short minuet, K. 122/73t, and a Miserere in A minor, K. 85/73s. Meanwhile, the libretto for the Milan opera arrived; Leopold had been expecting Metastasio's La Nitteti, but it was Mitridate, re di Ponto, by Vittorio Cigna-Santi. According to the correspondence of Leopold, the composer Josef Mysliveček was a frequent visitor to the Mozart household while they were staying in Bologna. The musicologist Daniel E. Freeman believes that Mozart's approach to the composition of arias changed fundamentally at this time, bringing his style into closer alignment with that of Mysliveček.
Leopold and Wolfgang moved into Count Pallavicini's palatial summer residence on 10 August, and stayed for seven weeks while Leopold's leg gradually improved and Wolfgang worked on the Mitridate recitatives. At the beginning of October, with Leopold more or less recovered, they moved back to Bologna, and Wolfgang, it is thought, began his period of study under Martini. On 9 October he underwent examination for membership in Bologna's Accademia Filarmonica, offering as his test piece the antiphon Quaerite primum regnum, K. 86/73v. According to Gutman, under ordinary circumstances Wolfgang's "floundering" attempt at this unfamiliar polyphonic form would not have received serious consideration, but Martini was at hand to offer corrections, and probably also paid the admission fee. Wolfgang's membership was duly approved; and the Mozarts departed for Milan shortly afterwards.
### Milan revisited, October 1770 – February 1771
The journey from Bologna to Milan was delayed by storms and floods, but Leopold and his son arrived on 18 October, ten weeks before the first performance of Mitridate. Wolfgang's fingers ached from writing recitatives, and in any case he could not begin work on the arias until the singers were present, collaboration with the principal performers being the custom for composers of the time. As the singers assembled, problems arose. Quirino Gasparini, composer of an earlier version of Mitridate, tried to persuade the prima donna Antonia Bernasconi to use his settings for her arias, but met with failure. "Thank God", Leopold wrote, "that we have routed the enemy". However, the principal tenor, Guglielmo d'Ettore, made repeated requests for his arias to be rewritten, and sang one of Gasparini's settings in Act 3, an insertion that survives in the published score of the opera.
Rehearsals began on 6 December. Wolfgang's mastery of Italian diction was revealed as the recitatives were practised, and a run-through of the instrumental score displayed his professionalism. Leopold wrote home: "An awful lot of this undertaking, blessed be God, is safely over, and, God be praised, once more with honour!" On 26 December, at the Teatro Regio Ducale (Milan's great opera house at the time), Wolfgang directed the first public performance from the keyboard, dressed for the occasion in a scarlet coat lined with blue satin and edged with gold. The occasion was a triumph: the audience demanded encores and at the conclusion cried "Evviva il maestro!" (Long live the master!). The opera ran for 22 performances, and the Gazzetta di Milano praised the work handsomely: "The young maestro di capella, who is not yet fifteen years of age, studies the beauties of nature, and represents them adorned with the rarest musical graces." The arias sung by Bernasconi "vividly expressed the passions and touched the heart". Subsequent reactions to the opera proved less effusive; there are no records of further performances of Mitridate before its revival at the Salzburg Festival in 1971.
Having fulfilled his major obligation for his first trip to Italy by completing the opera Mitridate, Wolfgang gave a concert at Firmian's palace on 4 January 1771. A few days later, news arrived that Wolfgang had been granted membership in the Accademia Filarmonica of Verona. On 14 January they departed for a two-week sojourn in Turin, where they met many of the leading Italian musicians: the distinguished violinist Gaetano Pugnani, his 15-year-old prodigy pupil Giovanni Battista Viotti, and the composer Giovanni Paisiello, whose opera Annibale in Torino Leopold declared to be magnificent. They returned to Milan for a farewell lunch with Firmian before their departure for Salzburg on 4 February.
### Journey home
On their way back to Salzburg Leopold and Wolfgang stayed for a while at Venice, pausing on their way at Brescia to see an opera buffa. While in Venice, Leopold used his letters of introduction to meet the nobility and to negotiate a contract for Wolfgang to write an opera for the San Benedetto theatre. Wolfgang gave several concerts and perhaps played at Venice's famed ospedali—former orphanages which had become respected music academies. The Mozarts were received generously, but Leopold appeared dissatisfied. "The father seems a shade piqued", wrote a correspondent to the Viennese composer Johann Adolph Hasse, adding: "they probably expected others to seek after them, rather than they after others". Hasse replied: "The father, as I see the man, is equally discontent everywhere".
Leaving Venice on 12 March, the Mozarts journeyed to Padua, where during a day of sightseeing Wolfgang was commissioned by Don Giuseppe Ximenes, Prince of Aragon, to compose an oratorio for the city. The history of La Betulia Liberata ("The Liberation of Bethulia") is obscure—it may not have been performed in Padua, or at all in Wolfgang's lifetime. In Verona, a few days later, he received further commissions. Wolfgang was to compose a serenata (or one-act opera) to be performed in Milan in October 1771 for the wedding of the Archduke Ferdinand and his bride Princess Beatrice of Modena. At the same time the young composer was engaged to undertake another Milan carnival opera, for the 1772–73 season, at an increased fee. This created a conflict of dates which prevented Wolfgang from proceeding with the San Benedetto contract. Thereafter, father and son sped northward, arriving home in Salzburg on 28 March 1771.
In his review of this first Italian journey, Maynard Solomon's analysis of the meagre financial information provided by Leopold indicates that the Mozarts made a substantial profit—perhaps as much as 2,900 florins. The pair had also been accorded wide recognition, moving among the highest Italian nobility. Aside from being honoured by the Pope, Wolfgang had been admitted to the academies of Bologna and Verona, and had studied with Martini. Solomon calls the tour Leopold's "finest hour and ... perhaps his happiest".
## Second journey, August–December 1771
In August 1771 Leopold and Wolfgang set out once more for Milan, to work on the serenata—which had by this time evolved into the full-length opera Ascanio in Alba. On arrival they shared their lodgings with violinists, a singing-master, and an oboist: a ménage that was, as Wolfgang wrote jestingly to Nannerl, "delightful for composing, it gives you plenty of ideas!" Working at great speed, Wolfgang finished Ascanio just in time for the first rehearsal on 23 September.
Ascanio was expected to be the lesser of the works for the wedding celebration, second to Hasse's opera Ruggiero. However, the 72-year-old Hasse was out of touch with current theatrical tastes, and although his opera was praised by the Empress Maria Theresa, its overall reception was lukewarm, especially compared to the triumphant success of Ascanio. Leopold expressed delight at this turn of events: "The archduke has recently ordered two copies", he wrote home. "All the noblemen and other people constantly address us in the street to congratulate Wolfgang. In short! I'm sorry, Wolfgang's Serenata has so crushed Hasse's opera that I can't describe it." Hasse was gracious about his eclipse, and is said to have remarked that the boy would cause all others to be forgotten.
The Mozarts were free to leave Milan early in November, but they stayed another month because Leopold hoped that the success of Ascanio would lead to an appointment for Wolfgang from a royal patron. He apparently solicited Archduke Ferdinand on 30 November, and his request was passed on to the imperial court in Vienna. It is possible that Leopold's pushiness in Vienna over La finta semplice still rankled, or that word of his crowing over Hasse's failure had reached the Empress. For whatever reason, Maria Theresa's reply to the archduke was unequivocal, describing the Mozarts as "useless people" whose appointment would debase the royal service, and adding that "such people go around the world like beggars". Leopold never learned this letter's contents; by the time it reached Milan the Mozarts had left, disappointed but still hopeful. "The matter is not over; I can say that much", Leopold wrote as he and Wolfgang made their way home.
Despite the hectic schedule during this short visit, Wolfgang still found time to write his Symphony in F, K. 112 (No. 13). He contrived a further symphony from the Ascanio overture, by adding a finale to the two existing movements. Another symphony, K. 96/111b, in C major, is sometimes allocated to this visit to Milan, but it is not certain when (or indeed whether) Wolfgang actually wrote it.
## Upheaval in Salzburg
The day after Leopold and Wolfgang arrived back in Salzburg the court was thrown into turmoil by the death of Archbishop Schrattenbach. This created problems for Leopold, who had unresolved issues with the court. Part of his salary during the second Italian visit had been stopped, and Leopold wished to petition for its payment, and to pursue the matter of Wolfgang's salary as a Konzertmeister, which Schrattenbach had indicated might be paid on Wolfgang's return from the first Italian journey. There was also the matter of succession to the post of Salzburg's Kapellmeister, to be available soon on the pending retirement of the incumbent, Giuseppe Lolli, who was over 70 years old; Leopold, who had followed Lolli as Vice-Kapellmeister, might normally have felt confident of succeeding him to the higher post. Decisions on these matters would now be made by the new archbishop, whose policies and attitudes were unknown.
On 14 March 1772, amid various political machinations, Count Hieronymus von Colloredo was elected to the archbishopric as a compromise candidate acceptable to the imperial court in Vienna. Although unpopular among Salzburgers, this appointment appeared at first to be to the Mozarts' advantage: Leopold's withheld salary was paid, and on 31 August Colloredo authorised the payment of Wolfgang's Konzertmeister salary. However, the new archbishop began to look for someone outside the Salzburg court to be his new Kapellmeister. Eventually, he chose the Italian Domenico Fischietti, who was several years younger than Leopold. Realising that his chances of promotion had probably been irrevocably lost, Leopold turned his hopes for a comfortable old age towards Wolfgang, giving new urgency to the third Italian journey which began in October 1772.
## Third journey, October 1772 – March 1773
In October 1772 Leopold and Wolfgang returned to Milan to work on the carnival opera that had been commissioned at the end of the first journey. The text was Lucio Silla, revised by Metastasio from an original by Giovanni de Gamerra. Wolfgang found himself in the familiar routine of composing rapidly while coping with problems such as the late arrival of singers and the withdrawal of the principal tenor due to illness. Leopold reported on 18 December that the tenor had arrived, that Wolfgang was composing his arias at breakneck speed, and that rehearsals were in full swing. The first performance, on 26 December, was chaotic: its start was delayed two hours by the late arrival of Archduke Ferdinand, there were quarrels among the principal performers, and the running time was extended by the insertion of ballets (a common practice of the time), so the performance was not over until two o'clock the following morning. Despite this, subsequent performances were well received. Leopold wrote on 9 January 1773 that the theatre was still full, and that the premiere of the season's second opera, Giovanni Paisiello's Sismano nel Mogul, had been postponed to allow Wolfgang's piece a longer run—26 performances in all. Such success for the new work seems to have been fleeting; but during the next few years the libretto was reset by several different composers, including Wolfgang's London mentor Johann Christian Bach.
Leopold, unaware of the Empress's views, continued to pursue an appointment for Wolfgang by applying to Grand Duke Leopold I of Tuscany, the Empress's third son. The application was strongly supported by Count Firmian, and Leopold, in a coded letter home, said he was quite hopeful. While the Mozarts waited for a reply, Wolfgang composed a series of "Milanese" string quartets (K. 155/134a to K. 160/159a), and the famous motet Exsultate, jubilate, K. 165. Leopold resorted to deception to explain his extended stay in Milan, claiming to be suffering from severe rheumatism that prevented his travelling. His ciphered letters to his wife Anna Maria assure her that he is in fact well, but urge her to spread the story of his indisposition. He waited through most of January and all of February for the Grand Duke's reply. The negative response arrived on 27 February. It is not known whether the Grand Duke was influenced by his mother's opinion of the Mozart family, but his rejection effectively ended Leopold's hope of an Italian appointment for Wolfgang. The Mozarts had no choice now but to return to Salzburg, leaving Milan on 4 March and reaching home nine days later. Neither father nor son visited Italy again.
## Evaluation
Maynard Solomon summarises the Italian journeys as a great triumph, but suggests that from Leopold's standpoint they also incorporated a great failure. The Mozarts had certainly profited financially, and Wolfgang had developed artistically, into a recognised composer. Although the Mozarts' reception had not been uniformly cordial—they had been cold-shouldered by the Neapolitan court and the Prince of Thurn and Taxis had snubbed them—the Italians had generally responded with enthusiasm. Wolfgang had been received and knighted by the Pope; he had been granted membership in leading philharmonic societies and had studied with Italy's greatest music scholar, Giovanni Martini. Above all, he had been accepted as a practitioner of Italian opera by a leading opera house, completing three commissions that resulted in acclaimed performances. Other compositions resulted from the Italian experience, including a full-scale oratorio, several symphonies, string quartets, and numerous minor works.
The failure was Leopold's inability, despite his persistence, to secure a prestigious appointment either for himself or for Wolfgang. Leopold was evidently unaware of the negative light in which he was generally viewed; he did, however, perceive that there was some intangible barrier to his Italian ambitions, and eventually recognised that he could not overcome whatever forces were arrayed against him. In any event, Wolfgang's Italian triumphs proved short-lived; despite the critical and popular successes of his Milan operas, he was not invited to write another, and there were no further commissions from any of the other centres he had visited. With all hopes of an Italian court appointment gone, Leopold sought to secure the family's future by other means: "We shall not go under, for God shall help us. I have already thought out some plans."
Wolfgang was qualified by his skills at the keyboard and violin, and by his compositional experience, for a post as Kapellmeister; but at 17 he was too young. He therefore remained in Colloredo's employ at the Salzburg court, increasingly discontent, until his dismissal from the Archbishop's retinue during its stay in Vienna, in 1781. Leopold, unpromoted from his rank of vice-Kapellmeister, remained with the court until his death in 1787.
## See also
- List of operas by Mozart
- Mozart family grand tour
|
1,196,618 |
Robert Garran
| 1,171,624,629 |
First Solicitor-General of Australia
|
[
"1867 births",
"1957 deaths",
"Australian King's Counsel",
"Australian Knights Bachelor",
"Australian Knights Grand Cross of the Order of St Michael and St George",
"Australian federationists",
"Colony of New South Wales people",
"People educated at Sydney Grammar School",
"Solicitors-General of Australia",
"University of Sydney alumni"
] |
Sir Robert Randolph Garran GCMG KC (10 February 1867 – 11 January 1957) was an Australian lawyer who became "Australia's first public servant" – the first federal government employee after the federation of the Australian colonies. He served as the departmental secretary of the Attorney-General's Department from 1901 to 1932, and after 1916 also held the position of Solicitor-General of Australia.
Garran was born in Sydney, the son of the journalist and politician Andrew Garran. He studied arts and law at the University of Sydney and was called to the bar in 1891. Garran was a keen supporter of the federation movement, and became acquainted with leading federalists like George Reid and Edmund Barton. At the 1897–98 constitutional convention he served as secretary of the drafting committee. On 1 January 1901, Garran was chosen by Barton's caretaker government as its first employee; for a brief period, he was the only member of the Commonwealth Public Service. His first duty was to write the inaugural edition of the Commonwealth Gazette, which contained Queen Victoria's proclamation authorising the creation of a federal government.
Over the following three decades, Garran provided legal advice to ten different prime ministers, from Barton to Joseph Lyons. He was considered an early expert in Australian constitutional law, and with John Quick published an annotated edition of the constitution that became a standard reference work. Garran developed a close relationship with Billy Hughes during World War I, and accompanied him to the Imperial War Cabinet and the Paris Peace Conference. Hughes, who was simultaneously prime minister and attorney-general, appointed him to the new position of solicitor-general and delegated numerous powers and responsibilities to him. He was knighted three times for his service to the Commonwealth, in 1917, in 1920 and in 1937.
In addition to his professional work, Garran was also an important figure in the development of the city of Canberra during its early years. He was one of the first public servants to relocate there after it replaced Melbourne as the capital in 1927. He founded several important cultural associations, organised the creation of the Canberra University College, and later contributed to the establishment of the Australian National University. Garran published at least eight books and many journal articles throughout his lifetime, covering such topics as constitutional law, the history of federalism in Australia, and German-language poetry. He was granted a state funeral upon his death in 1957, the first federal public servant to receive one.
## Early life
Garran was born in Sydney, New South Wales, the only son (among seven children) of journalist and politician Andrew Garran and his wife Mary Isham. His parents were committed to social justice: Mary campaigned for causes such as the promotion of education for women, while Andrew advocated free trade and Federation, first as editor of The Sydney Morning Herald and later as a member of the New South Wales Legislative Council.
The family lived in Phillip Street in central Sydney. Garran's mother "had a deep distrust, well justified in those days, of milkman's milk" and so she kept a cow in the backyard, which would walk on its own to The Domain each day to graze and return twice a day to be milked. The Garrans later lived in the suburb of Darlinghurst, just to the east of the centre of the city.
Garran attended Sydney Grammar School from the age of ten, starting in 1877. He was a successful student, and became School Captain in 1884. He then studied arts and law at the University of Sydney, where he was awarded scholarships for classics, mathematics and general academic ability. Garran graduated with a Bachelor of Arts degree in 1888 and subsequently won the University's Medal in Philosophy when he received his Master of Arts with first-class honours in 1899.
After graduating, Garran began to study for the Bar examination. He was employed for a year with a firm of Sydney solicitors, and in 1890 served as associate to Justice William Charles Windeyer of the Supreme Court of New South Wales. Windeyer had a reputation for being a harsh and inflexible judge, particularly in criminal cases, where he was said to have "a rigorous and unrelenting sense of the retribution that he believed criminal justice demanded, [and] a sympathy verging on the emotional for the victims of crime." Garran however offered a different view, saying that "those who knew him well knew that under a brusque exterior he was the kindest of men", and his reputation had to some degree been created by misrepresentation. In 1891, Garran was admitted to the New South Wales Bar, where he commenced practice as a barrister, primarily working in equity.
## Federation movement
Garran, like his father, was strongly involved in the Australian Federation movement, the movement which sought to unite the British colonies in Australia (and, in early proposals, New Zealand) into one federated country. The first Constitutional Convention was held in 1891 in the chamber of the Legislative Council of New South Wales in Macquarie Street, Sydney, around the corner from Garran's chambers in Phillip Street; Garran regularly attended and sat in the public gallery to see "history... in the making under my very eyes." Garran later recalled with approval that the 1891 convention was the first with the courage to face the "lion in the path", the issue of customs duties and tariffs, which had previously divided states such as Victoria, who were in favour of protectionism, and states such as New South Wales, who were in favour of free trade. In Garran's view a clause proposed at the convention, which allowed for tariffs against international trade while ensuring free trade domestically (the predecessor to the final section 92 of the Constitution of Australia), "expressed the terms on which New South Wales was prepared to face the lion."
On joining the bar, Garran soon became involved with Edmund Barton Q.C., later the first Prime Minister of Australia, and the de facto leader of the federation movement in New South Wales. Garran, along with others such as Atlee Hunt, worked essentially as secretaries to Barton's federation campaign, drafting correspondence and planning meetings. At one late-night meeting, planning a speech Barton was to give in the Sydney suburb of Ashfield, Barton expressed the phrase "For the first time, we have a nation for a continent, and a continent for a nation"; Garran later claimed that the now famous phrase "would have been unrecorded if I had not happened to jot it down."
In June 1893, when Barton's Australasian Federal League was formed at a meeting in the Sydney Town Hall, Garran joined immediately and was made a member of the executive committee. He was one of the League's four delegates to the 1893 Corowa Conference and a League delegate to the 1896 "People's Convention", or Bathurst Conference, a conference attended by Barton, Reid, League members, the Australian Natives' Association (mainly Victorian) and other pro-federation groups. At Corowa he was part of an impromptu group organised by John Quick which drafted a resolution, passed at the Conference, calling for a directly elected Constitutional Convention to be charged with drafting the Bill for the Constitution of Australia. The proposal, which came to be known as the Corowa Plan, was later accepted at the 1895 Premiers' Conference and formed the basis for the federation process over the following five years.
In 1897, Garran published The Coming Commonwealth, an influential book on the history of the Federation movement and the debate over the 1891 draft of the Constitution of Australia. The book was based on material he prepared for a course on federalism and federal systems of government, which he had planned to give at the University of Sydney, but which failed to attract a sufficient number of students. Nevertheless, the book was both unique and popular, as one of the few books on the topic at the time, with the first edition quickly selling out. Soon after its publication the Premier of New South Wales George Reid, who had been elected as a New South Wales delegate to the 1897–1898 Constitutional Convention, invited Garran to be his secretary. At the Convention, Reid appointed him secretary of the Drafting Committee, at Barton's request; he was also a member of the Press Committee.
Garran recorded in a letter to his family during the convention's Melbourne sitting that:
> The committee professes to find me very useful in unravelling the conundrums sent down by the finance committee... The last two nights I have found the drafting committee fagged [tired] and despairing, and now they have pitched the conundrums at me and gone out for a smoke; and then I worked out algebraic formulas to clear the thing up, drafted clauses accordingly, and when the committee returned we had plain sailing.
Garran joked that the long work of the drafting committee breached the Factory Acts, as the group (primarily Barton, Richard O'Connor, John Downer and Garran) often worked late into the night preparing drafts for the convention to consider and debate the next morning. On the evening before the convention's last day, Barton, exhausted, had gone to bed in the small hours, leaving Garran and Charles Gavan Duffy to finish the final schedule of amendments by breakfast time. The convention concluded successfully, approving a final draft which ultimately, aside from a small amendment arranged at the last minute in London, became the Constitution of Australia.
Throughout 1898, following the completion of the proposed Constitution, Garran participated in the campaign promoting Federation leading up to the referendums at which the people of the colonies voted whether or not to approve the Constitution. He contributed a daily column to the Evening News, and had humorous poems critiquing opponents of federation published in The Bulletin. The following year, he began working with Quick on the Annotated Constitution of the Australian Commonwealth, a reference work on the Constitution including a history, and detailed discussion of each section analysing its meaning and its development at the Conventions. Published in 1901, the Annotated Constitution, commonly referred to simply as "Quick & Garran", soon became the standard work on the Constitution and is still regarded as one of the most important works on the subject.
## Public service
On the day that Federation was completed and Australia created, 1 January 1901, Garran, feeling like "a junior barrister suddenly promoted to the final court of appeal", was appointed secretary and Permanent Head of the Attorney-General's Department by the first Attorney-General of Australia, Alfred Deakin. Garran was the first, and for a time the only, public servant employed by the Government of Australia. Garran later said of this time that:
> I was not only the head [of the department], but the tail. I was my own clerk and messenger. My first duty was to write out with my own hand Commonwealth Gazette No. 1 proclaiming the establishment of the Commonwealth and the appointment of ministers of state, and to send myself down with it to the government printer.
In this role, Garran was responsible for organising the first federal election in March 1901, and for organising the transfer of various government departments from the states to the federal government, including the Department of Defence, the postal and telegraphic services (now part of the Department of Communications, Information Technology and the Arts) and the Department of Trade and Customs (now part of the Department of Foreign Affairs and Trade). As parliamentary drafter, Garran also developed legislation to administer those new departments and other important legislation. As head of the Attorney-General's Department, Garran was also responsible for advice on the consistency of legislation with the Constitution, including the Commonwealth Franchise Act 1902, which, by disenfranchising Aboriginal people, appeared to offend the Constitution's Section 41, which guaranteed the federal franchise to anyone entitled to vote in a state. Garran advised that the Section could be neutered by interpreting it as no more than a grandfather clause protecting the right to vote of existing Aboriginal voters. The Act also conferred the federal franchise on women, a cause that Garran, in private, gently mocked.
Garran and his fellow staff aimed for a simple style of legislative drafting, a goal enabled by the fact that there was no pre-existing federal legislation on which their work would have to be based. In Garran's opinion the approach, which was put into practice many years before the similarly principled plain English movement became popular in government in the 1970s, was intended "to set an example of clear, straightforward language, free from technical jargon." Subsequent parliamentary drafters have noted that Garran was unusual in this respect for deliberately setting out to achieve and improve a particular drafting style, and that it was not until the early 1980s that such discipline among drafters re-emerged.
However, Garran himself admitted that his drafting could be overly simplistic, citing the first customs and excise legislation, developed with the Minister for Trade and Customs Charles Kingston and Assistant Parliamentary Draftsman Gordon Castle, as an example of the style taken to excess. The style was also once parodied by foundation High Court Justice Richard O'Connor as follows:
> Every man shall wear –
> (a) Coat
> (b) Vest
> (c) Trousers
> Penalty: £100.
The Attorney-General's Department also managed litigation on behalf of the government. Initially the department contracted private law firms to actually conduct the litigation, but in 1903 the office of the Commonwealth Crown Solicitor was established, with Charles Powers the first to hold the job. The other Crown Solicitors that Garran worked with were Gordon Castle (with whom he had also worked as a drafter) and William Sharwood. In 1912, Garran was considered as a possible appointee to the High Court, following the expansion of the bench from five seats to seven and the death of Richard O'Connor. Billy Hughes, Attorney-General in the Fisher government at the time, later said Garran would have been appointed "but for the fact that he is too valuable a man for us to lose. We cannot spare him."
Garran worked with eleven Attorneys-General as Permanent Head of the Department. Garran regarded the first Attorney-General, Alfred Deakin, as an excellent thinker and a natural lawyer, and on occasion "[spoke] of Deakin as the Balfour of Australian politics." He was also very much impressed with the fifth Attorney-General, Isaac Isaacs, who was an extremely diligent worker, and with two-time Attorney-General Littleton Groom, who was "probably one of the most useful Ministers the Commonwealth has had."
### Solicitor-General
In 1916, Garran was made the first Solicitor-General of Australia by Billy Hughes, who had since become Prime Minister as well as Attorney-General. The creation of the office and Garran's appointment to it represented a formal delegation of many of the powers and functions formerly exercised by the Attorney-General.
Garran developed a strong relationship with Hughes, giving him legal advice on the World War I conscription plebiscites and on the range of regulations which were made under the War Precautions Act 1914. The War Precautions Regulations had a broad scope, and were generally supported by the High Court, which adopted a much more flexible approach to the reach of the Commonwealth's defence power during wartime. A substantial amount of Garran's work during the war involved preparing and carrying out the regulations. Many of them were directed at maximising the economic aspect of the war effort and ensuring supplies of goods to Australian troops; others were directed at controlling citizens or former citizens of the enemy Central Powers living in Australia. On one occasion, when Hughes had been informed that at a party hosted by a German man, the band had played "Das Lied der Deutschen", Hughes asked Garran "By the way, what is this tune?" to which Garran replied that it was Haydn's melody to "Gott erhalte Franz den Kaiser", and as it was used as the tune to several hymns "it was probably sung in half a dozen churches in Sydney last Sunday." Hughes then said "Good Heavens! I have played that thing with one finger hundreds of times."
The partnership between Garran and Hughes is regarded by some as unusual, given that Garran was "tall, gentlemanly, wise and scholarly", and patient with his staff, whereas Hughes was "short of stature [and] renowned for bursts of temper." Nevertheless, the partnership was a successful one, with Hughes recognising the importance of Garran's constitutional expertise, remarking once about the World War I period that "the best way to govern Australia was to have Sir Robert Garran at [my] elbow, with a fountain pen and a blank sheet of paper, and the War Precautions Act." Likewise, Garran respected Hughes' strong leadership style, which had been important in guiding the country through the war, although in describing the Nationalist Party's loss in the 1922 federal election, Garran later said that "Hughes also overestimated his own hold on Parliament [although] his hold on the people was probably undiminished."
Garran accompanied Hughes and Joseph Cook (then the Minister for the Navy) to the 1917 and 1918 meetings of the Imperial War Cabinet in London, United Kingdom, and was also part of the British Empire delegation to the 1919 Paris Peace Conference in Paris, France. There he was on several of the treaty drafting committees, and contributed to many provisions, notably the portions of the League of Nations Covenant relating to League of Nations mandates. Though focusing mainly on League of Nations matters, Garran and John Latham (the head of Australian Naval Intelligence) had the status of technical advisers to Hughes and Cook, and so could attend the main conference and any of the associated councils. Observing the proceedings, Garran admired the "moral and physical courage" of French premier Georges Clemenceau, whom he regarded as determined to protect France from Germany but in a measured and temperate way; in Garran's words, Clemenceau "always withstood the excessive demands of the French chauvinists, of the French army, and of Foch himself". Garran viewed some similarities between British Prime Minister David Lloyd George and United States President Woodrow Wilson where others saw only differences, since Lloyd George "also had a strong vein of idealism in his character", and Wilson could be pragmatic when the situation called for it, such as in discussions relating to American interests. Garran also met other political and military leaders at the conference, including T. E. Lawrence, "an Oxford youth of 29 – he looks 18", who was modest and "without any affectation... in a company of two or three [he] could talk very interestingly, but at a larger gathering he was apt to be dumb."
Following the war, Garran worked with Professor Harrison Moore of the University of Melbourne and South Australian judge Professor Jethro Brown on a report about proposed constitutional amendments which ultimately became the referendum questions put forward in the 1919 referendum. Garran attended two Imperial Conferences: he accompanied Prime Minister Stanley Bruce in 1923, and in 1930 joined Prime Minister James Scullin and Attorney-General Frank Brennan, chairing the Drafting Committee which prepared drafts of agreements on various topics, such as merchant shipping. He also attended the eleventh League of Nations conference that year with them in Geneva, Switzerland. At the Royal Commission on the Constitution in 1927, Garran gave evidence over five days, discussing the history and origins of the Constitution and the evolution of the institutions established under it.
Through the 1920s and early 1930s, Garran prepared annual summaries of legislative developments in Australia, highlighting important individual pieces of legislation for the Journal of Comparative Legislation and International Law.
Towards the end of his time as Solicitor-General, Garran's work included the preparation of the Debt Conversion Agreement between the Government of Australia and the governments of the states, which involved the federal government taking over and managing the debts of the individual states, following the 1928 referendum. In 1930, he was asked by the Scullin government to provide an opinion on whether Norman Lindsay's novel Redheap was indecent and obscene within the terms of section 52(c) of the Customs Act 1901. He concluded that it was, and the Department of Trade and Customs subsequently banned the book from being imported into Australia, the first book by an Australian author to suffer such a ban. It has been suggested that Frank Forde, the Acting Minister for Trade and Customs, had already decided to ban the book, and sought Garran's advice primarily as a buffer against political criticism.
## Personal life and retirement
In 1902, Garran married Hilda Robson. Together they had four sons, Richard (born 1903), John (1905), Andrew (1906) and Isham Peter (1910). At this time the family lived in Melbourne, and the boys all attended Melbourne Grammar School and later studied at the University of Melbourne, attending Trinity College there.
In 1927, Garran moved from his home in Melbourne to the newly established capital Canberra, one of the first public officials to do so (many government departments and their public servants did not move to Canberra until after World War II). He also worked within the Government to facilitate housing in Canberra for officials who needed to move there from other cities, and was involved in establishing cultural organisations in the city. In 1928 he was the inaugural President of the Canberra Rotary Club. In 1929, he formed the Canberra University Association in order to promote the formation of a university in Canberra, and in 1930 organised the establishment of Canberra University College (essentially a campus of the University of Melbourne) which taught undergraduate courses, chairing its council for its first twenty-three years. Throughout the 1920s and 1930s, Garran "consistently advocated the establishment of what he prophetically called 'a National University at Canberra' ", which would be primarily for specialist research and postgraduate study, in areas particularly relating to Australia, such as foreign relations with Asia and the Pacific region. This vision was evidently influential in the establishment of the Australian National University (ANU) in 1946, the only research-only university in the country (although in 1960 it amalgamated with Canberra University College to offer undergraduate courses).
Garran retired from his governmental positions on 9 February 1932, a fixed retirement date on the day before his sixty-fifth birthday. He soon returned to practise as a barrister, and within a month he was made a King's Counsel. However, he occasionally carried out more prominent work. In 1932, he was selected on the advice of the then Attorney-General John Latham to chair the Indian Defence Expenditure Tribunal, to advise on the dispute between India and the United Kingdom regarding the costs of the military defence of India. In 1934, along with John Keating, William Somerville and David John Gilbert, he formed a committee which prepared The Case for Union, the Government of Australia's official reply to the secessionist movement in the state of Western Australia.
Garran served on ANU's council from 1946 until 1951. Garran was also involved with the arts; he was a founding member of the Australian Institute of Arts and Literature and its president from 1922 to 1927. He was the vice-president of the Canberra Musical Society, where he sang and played the clarinet, and in 1946 won a national song competition run by the Australian Broadcasting Corporation. Garran also published translations of Heinrich Heine's 1827 work Buch der Lieder ("Book of Songs") in 1924, and of the works of Franz Schubert and Robert Schumann in 1946.
Garran died in 1957 in Canberra. He was granted a state funeral, the first given to a public servant of the Government of Australia. He was survived by his four sons; his wife Hilda had died in 1936. His memoirs, Prosper the Commonwealth, were published posthumously in 1958, having been completed shortly before his death.
## Legacy
Garran's "personality, like his prose, was devoid of pedantry and pomposity and, though dignified, was laced with a quizzical turn of humour." His death "marked the end of a generation of public men for whom the cultural and the political were natural extensions of each other and who had the skills and talents to make such connections effortlessly."
Garran's friend Charles Studdy Daley, a long-time civic administrator of the Australian Capital Territory, emphasised Garran's contribution to the early development of the city of Canberra, particularly its cultural life, remarking at a celebratory dinner for Garran in 1954 that:
> "There has hardly been a cultural movement in this city with which Sir Robert has not been identified in loyal and inspiring support, as his constant aim has been that Canberra should be not only a great political centre but also a shrine to foster those things that stimulate and enrich our national life... his name will ever be inscribed in the annals, not only of Canberra, but of the Commonwealth as clarum et venerabile nomen gentibus."
However, Garran is perhaps best remembered as an expert on constitutional law, more so than for his other contributions to public service. At his death, Garran was one of the last surviving people involved with the creation of the Constitution of Australia. On his experience of Federation and the Constitution, Garran was always enthusiastic:
> "I'm often asked 'has federation turned out as you expected?' Well yes and no. By and large the sort of thing we expected has happened but with differences. We knew the constitution was not perfect; it had to be a compromise with all the faults of a compromise... But, in spite of the unforeseen [sic] strains and stresses, the constitution has worked, on the whole, much as we thought it would. I think it now needs revision, to meet the needs of a changed world. But no-one could wish the work undone, who tries to imagine, what, in these stormy days, would have been the plight of six disunited Australian colonies."
Former Prime Minister John Howard, in describing Garran, said:
> "I wonder though if we sometimes underestimate the changes, excitements, disruptions and adjustments previous generations have experienced. Sir Robert Garran knew the promise and reality of federation. He was part of the establishment of a public service which, in many ways, is clearly recognisable today."
At one level, Garran's remarkable career epitomises the heyday, or Indian summer, of the meritocratic bourgeois elite born in Australia in the third quarter of the 19th century. At another level, his exceptional influence as an éminence grise bespeaks his fluency in construction, be it in poetry translation or legislative drafts, even if always out of commonplace materials. He lacked the imagination to range beyond the stock assumptions of the day regarding race, sex and Empire, assumptions he fully shared. This, inevitably, only made his influence stronger.
## Honours
Garran was made a Commander of the Order of St Michael and St George (CMG) on the day that Federation was completed and Australia created, 1 January 1901, "in recognition of services in connection with the Federation of Australian Colonies and the establishment of the Commonwealth of Australia".
Garran was first knighted in 1917, and was appointed as a Knight Commander of the Order of St Michael and St George (KCMG) in 1920. He was knighted a third time in 1937 when he was made a Knight Grand Cross of the Order of St Michael and St George (GCMG).
Shortly after the establishment of the ANU in 1946, Garran became its first graduate when he was awarded an honorary doctorate of laws. He had already been awarded such an honorary doctorate by the University of Melbourne in 1937, and later received one from his alma mater, the University of Sydney, in 1952.
### Memorials
Garran's influence on Canberra is remembered by the naming of the suburb of Garran, Australian Capital Territory, established in 1966, after him. Garran's link with ANU is remembered by the naming of a chair in the university's School of Law, by the naming of the hall of residence Burton & Garran Hall, and by the naming of Garran house at Canberra Grammar School for his work with that school. The Garran oration, established to honour his memory, has been given yearly since 1959.
In 1983, the former Patent Office building – then occupied by the Federal Attorney General's Department – was renamed Robert Garran Offices. It was renamed the Robert Marsden Hope Building in 2011.
## Publications
- A problem of federation under the crown; the representation of the crown in commonwealth and states (1895).
- The coming Commonwealth: an Australian handbook of federal government (1897).
- The Annotated Constitution of the Australian Commonwealth by Quick and Garran (1901).
- The government of South Africa (1908).
- The Making and Working of the Constitution (1932).
- The Making and Working of the Constitution (continued) (1932).
- The Case for union : a reply to the case for the secession of the state of Western Australia by Garran and 3 others (1934).
- Prosper the Commonwealth (1958).
- The book of songs Translated by Garran (1924).
- Schubert and Schumann : songs and translations Translated by Garran (1946).
|
67,843,843 |
The Yankee
| 1,153,921,105 |
1820s American literary magazine edited by John Neal
|
[
"Advertising-free magazines",
"Defunct literary magazines published in the United States",
"English-language magazines",
"Magazines disestablished in 1829",
"Magazines established in 1828",
"Magazines published in Maine",
"Monthly magazines published in the United States",
"Weekly magazines published in the United States"
] |
The Yankee (later retitled The Yankee and Boston Literary Gazette) was one of the first cultural publications in the United States, founded and edited by John Neal (1793–1876), and published in Portland, Maine as a weekly periodical and later converted to a longer, monthly format. Its two-year run concluded at the end of 1829. The magazine is considered unique for its independent journalism at the time.
Neal used creative control of the magazine to improve his social status, help establish the American gymnastics movement, cover national politics, and critique American literature, art, theater, and social issues. Essays by Neal on American art and theater anticipated major changes and movements in those fields realized in the following decades. Conflicting opinions published in The Yankee on the cultural identity of Maine and New England presented readers with a complex portrait of the region.
Many new, predominantly female, writers and editors started their careers with contributions and criticism of their work published in The Yankee, including many who are familiar to modern readers. The articles on women's rights and early feminist ideas affirmed intellectual equality between men and women and demanded political and economic rights for women.
## Background
John Neal grew up in Portland, Maine (then the District of Maine), and later lived in Boston, then Baltimore, where he pursued a dual career in law and literature following the bankruptcy of his dry goods business in 1816. After gaining national recognition as a critic, poet, and novelist, he sailed to London, where he wrote for British magazines and served as Jeremy Bentham's secretary.
Upon returning to his native Portland in 1827, Neal was confronted by community members who were offended by his literary work in the preceding years: the unsympathetic depiction of his hometown in his semi-autobiographical novel Errata (1823), the way he depicted New England dialect and customs in his novel Brother Jonathan (1825), and his criticism of American writers in Blackwood's Magazine (1824–1825). Residents posted inflammatory broadsides calling Neal "a panderer for scandal against the country that nourished him" and a "renegado" who "basely traduced his native town and country for hire". Neal experienced verbal taunting and physical violence in the streets and an attempt to block his admission to the local bar association, though he had been a practicing lawyer in Baltimore (1820–1823). In the second half of 1827, he pursued several projects to further his personal goals and to vindicate himself to his local community. He joined the bar despite opposition, founded Maine's first athletic program, and established The Yankee. The first issue was published on January 1, 1828.
The idea came from a local bookseller who urged Neal shortly after his return to Portland to establish a new magazine or newspaper. Neal initially refused, not wanting to be the financial backer of his literary undertaking. The bookseller then offered to publish the periodical if Neal would serve as editor, which Neal accepted. Subscription to the new weekly magazine cost \$3 a year, or \$2.50 paid in advance.
The Yankee was Maine's first literary periodical and one of America's first cultural publications. Controversial at the time for its lack of association with any political party or interest group, it was a precursor for the independent American press that was established later in the century. When asked why he would establish such a magazine outside a major city, Neal said, "We mean to publish in Portland. Whatever the people of New-York, or Boston or Philadelphia or Baltimore might say, Portland is the place for us."
## Content
The Yankee functioned to educate Americans about England, spread Jeremy Bentham-inspired utilitarian philosophy, publish literary contributions, and critique American literature, American art, theater, politics, and social issues. The magazine also aided in establishing the US gymnastics movement, provided a forum for new writers, and promoted Neal's own accomplishments. Because Neal included a high proportion of his own work, self-promotion, and details of feuds with other public figures, "no magazine ever bore more fully the stamp of a personality", according to scholar Irving T. Richards. Other authors published in the magazine included John Greenleaf Whittier, Edgar Allan Poe, Albert Pike (later associate justice of the Arkansas Supreme Court), Grenville Mellen, Isaac Ray, and early published works by John Appleton (later chief justice of Maine).
### Literary criticism
Neal biographer Donald A. Sears felt that The Yankee's greatest impact was encouraging new authors through publication and criticism of their early works. Poe, Whittier, Nathaniel Hawthorne, and Henry Wadsworth Longfellow all received their first impactful encouragement in its pages. Most of the new authors whose careers started in The Yankee were women, including Elizabeth Oakes Smith and others lesser known to history.
The Yankee is credited with having "discovered" Poe, and influenced the young writer's style with the magazine's essays. Poe considered Neal's September 1829 review of the poem "Fairy-Land" to be "the very first words of encouragement I ever remember to have heard". Poe became a contributor to the Ladies' Magazine shortly afterward – a relationship that may have been orchestrated by Neal. Whittier sought Neal's opinion in the magazine at a turning point in the poet's career, saying when he submitted a poem that "if you don't like it, say so privately; and 'I will quit poetry, and everything also of a literary nature, for I am sick at heart of the business'." In what may be the first review of Hawthorne's first novel, The Yankee referred to Fanshawe as "powerful and pathetic" and said that the author "should be encouraged to persevering efforts by a fair prospect of future success". An 1828 review of Longfellow noted "a fine genius and a pure and safe taste" but also cited the need for "a little more energy, and a little more stoutness".
### Art criticism
Neal was the first American art critic. Scholars find his work in the novel Randolph (1823), Blackwood's Magazine (1824), and The Yankee to be the most historically important, in which he discussed leading American artists and their work "with unprecedented acumen and enthusiasm". The essay "Landscape and Portrait-Painting" (September 1829) anticipated John Ruskin's groundbreaking Modern Painters (1843) by distinguishing between "things seen by the artist" and "things as they are", as Ruskin put it more famously fourteen years later. In Neal's words in 1829, "There is not a landscape nor a portrait painter alive who dares to paint what he sees as he sees it; nor probably a dozen with power to see things as they are."
Neal's essays in The Yankee about landscape painting and its potential role in America's artistic renaissance anticipate the rise of the Hudson River School and provide early coverage (1828) of its founders, Thomas Doughty, Asher Brown Durand, and Thomas Cole. These essays also offer unprecedented coverage of reproduction technology like engraving and lithography and American portrait painters trained in the "humbler contingencies" of sign painting and applied arts. According to art scholar Harold E. Dickson, Neal's opinions in The Yankee "to a remarkable degree ... have stood the trying test of time."
### Theatrical criticism
At the time The Yankee was in circulation, Neal was one of the most important critics of American drama. His serial essay "The Drama" (July–December 1829) elaborates upon opinions on theater originally published in the prefaces to his first play, Otho (1819) and his second poetry collection, The Battle of Niagara: Second Edition (1819). The essay dismissed well-accepted Shakespearean standards and outlined a prophecy for the future American drama that largely played out by the end of the century. Neal predicted that characters would become more relatable by expressing feelings "in common language" because "when a person talks beautifully in his sorrow, it shows both great preparation and insincerity." Instead of relying on highly cultivated circumstances in the plot, "The incidents will be such as every man may hope or dread to see ...; for it is there, and there only, that we can judge of a hero, or of a nation, or sympathize with either." This "thorough revolution in plays and players, authors and actors" called for in "The Drama" was still in process 60 years later when William Dean Howells was considered innovative for issuing the same criticism.
### Political, social, and civic issues
The Yankee documented and offered commentary upon the period's nationally relevant social and political topics, such as the nullification crisis, the Tariff of Abominations, Andrew Jackson's spoils system, lotteries, temperance, women's rights, and the Maine-New Brunswick border issues that led to the 1838–1839 Aroostook War between the United States and the United Kingdom. Neal published a "vigorous campaign" of seventeen articles against lotteries over the course of 1828, claiming they encourage idle and reckless behavior among patrons, an argument he first conveyed in his novel Logan (1822). On a local level, Neal's advocacy in The Yankee contributed toward municipal funding being designated for the construction of Portland's first sidewalks.
In March 1828, Neal advertised his gymnasium in The Yankee as "accessible here to every body, without distinction of age or color", but when he sponsored six Black men to join, only two other members of three hundred voted to accept them. In May, Neal used his magazine to call out his fellow gymnasts' racial prejudice. He ended his involvement with the gym shortly thereafter.
### Feminism
Neal's writing on gender and women's rights in The Yankee shows his focus moving beyond inter-gender social manners and female educational opportunities and toward women's economic and political rights. In the first issue of the second volume, he asserted that unmarried women are treated unfairly "as if it were better for a woman to marry anybody than not to marry at all; or even to marry one that was not her selected and preferred of all than to go unmarried to her grave." The article "Rights of Women" (March 5, 1829) includes some of the "angriest and most assertive feminist claims" of his career, saying of coverture and suffrage that:
> The truth is, that women are not citizens here; they pay taxes without being represented ...; if they are represented, it is by those whose interest, instead of being included in theirs, is directly opposed to theirs ...; they are not eligible to office; and they are not, nor is their property protected at law. So much for the equality of the sexes here ....
The solution, which he offered in "Woman" (March 26, 1828), was female solidarity and organizing to secure economic and political rights: "If woman would act with woman, there would be a stop to our tyranny". The Yankee also promoted female editors like Sarah Josepha Hale and Frances Harriet Whipple, and proclaimed the example of economic freedom these women provided: "We hope to see the day when she-editors will be as common as he-editors; and when our women of all ages ... will be able to maintain herself, without being obliged to marry for bread."
In other articles, The Yankee affirmed intellectual equality between men and women, opining that "When minds meet, all distinctions of sex are abolished" and "women are not inferior to men; they are unlike men. They cannot do all that men may do – any more than men may do all that women may do."
### New England
The magazine's title word, Yankee, is a demonym used to refer to people from Maine and the other New England states. Holding his native state in high regard, Neal in the third issue of The Yankee claimed: "Her magnitude, her resources, and her character, we believe, are neither appreciated nor understood by the chief men" and the "great mass of the American people." To correct this, he published articles written by himself and others detailing the region's customs, traditions, and speech, particularly the series "Live Yankees" (March–June 1828), "New England As It Was" (March–November 1828), and "New England As It Is" (March–November 1828). He juxtaposed articles by separate authors with conflicting views and inserted his own editorial footnotes into others' essays to encourage discourse over the region's identity. Nineteenth-century American regionalists are known for sentimentally posing rural traditions in conflict with America's urbanization. In contrast, The Yankee presented the country's regions in a state of constant cultural evolution that beckons but thwarts characterization.
### Feuds
The first volume of The Yankee (January 1 – December 24, 1828) documents literary feuds between Neal and other New England journalists like William Lloyd Garrison, Francis Ormand Jonathan Smith, and Joseph T. Buckingham. Tensions between Neal and Garrison started with Garrison's denunciation of Neal's literary criticism in Blackwood's Magazine (1824–1825) as a "renegade's base attempt to assassinate the reputation of this country" and continued with Neal's claim in The Yankee that Garrison was fired from his editorial position for attacking Neal in the paper. Journalist and historian Edward H. Elwell characterized Neal's willingness to publish these inflammatory back-and-forth letters and essays as the embodiment of "impulsive honesty and fair play". Neal stopped after receiving complaints from subscribers, which he also published in the magazine.
## Run of publication
The Yankee published regularly from the beginning of 1828 through the end of 1829, during which time the magazine changed its name, printing format, frequency, and volume numbering system. Volumes 1 and 2 (January 1, 1828, through July 3, 1829) are composed of eight-page weekly issues in quarto. New series volume 1 (July through December 1829) is composed of six 56-page monthly issues in octavo. For financial reasons, Neal merged The Yankee with a Boston periodical and changed the name to The Yankee and Boston Literary Gazette, starting August 20, 1828 (volume 1, number 34).
When The Yankee ceased publication at the end of 1829, it merged with the Ladies' Magazine. The common misconception that it merged with the New England Galaxy is based on a misinterpretation of a passage in Neal's autobiography.
|
32,788 |
Vivien Leigh
| 1,172,302,765 |
British actress (1913–1967)
|
[
"1913 births",
"1967 deaths",
"20th-century British actresses",
"20th-century deaths from tuberculosis",
"Alumni of RADA",
"Analysands of Ralph Greenson",
"Anglo-Indian people",
"Best Actress Academy Award winners",
"Best British Actress BAFTA Award winners",
"British Roman Catholics",
"British Shakespearean actresses",
"British film actresses",
"British people in colonial India",
"British people of Anglo-Indian descent",
"British people of Armenian descent",
"British people of Irish descent",
"British stage actresses",
"Golders Green Crematorium",
"Olivier family",
"People educated at Woldingham School",
"People from Darjeeling",
"People with bipolar disorder",
"Spouses of life peers",
"Tony Award winners",
"Tuberculosis deaths in England",
"Volpi Cup for Best Actress winners",
"Wives of knights",
"Women who experienced pregnancy loss"
] |
Vivien Leigh (/liː/ LEE; born Vivian Mary Hartley; 5 November 1913 – 8 July 1967), styled as Lady Olivier after 1947, was a British actress. She won the Academy Award for Best Actress twice, for her performances as Scarlett O'Hara in Gone with the Wind (1939) and Blanche DuBois in the film version of A Streetcar Named Desire (1951), a role she had also played on stage in London's West End in 1949. She also won a Tony Award for her work in the Broadway musical version of Tovarich (1963). Although her career had periods of inactivity, in 1999 the American Film Institute ranked Leigh as the 16th-greatest female movie star of classic Hollywood cinema.
After completing her drama school education, Leigh appeared in small roles in four films in 1935 and progressed to the role of heroine in Fire Over England (1937). Lauded for her beauty, Leigh felt that her physical attributes sometimes prevented her from being taken seriously as an actress. Despite her fame as a screen actress, Leigh was primarily a stage performer. During her 30-year career, she played roles ranging from the heroines of Noël Coward and George Bernard Shaw comedies to classic Shakespearean characters such as Ophelia, Cleopatra, Juliet and Lady Macbeth. Later in life, she performed as a character actress in a few films.
At the time, the public strongly identified Leigh with her second husband, Laurence Olivier, who was her spouse from 1940 to 1960. Leigh and Olivier starred together in many stage productions, with Olivier often directing, and in three films. She earned a reputation for being difficult to work with and for much of her life, she had bipolar disorder, as well as recurrent bouts of chronic tuberculosis, which was first diagnosed in the mid-1940s and ultimately led to her death at age 53.
## Life and career
### 1913–1934: Early life and acting debut
Leigh was born Vivian Mary Hartley on 5 November 1913 in British India on the campus of St. Paul's School in Darjeeling, Bengal Presidency. She was the only child of Ernest Richard Hartley, a British broker, and his wife, Gertrude Mary Frances (née Yackjee; she also used her mother's maiden name of Robinson). Her father was born in Scotland in 1882, while her mother, a devout Catholic, was born in Darjeeling in 1888 and might have been of Irish, Parsi Indian and Armenian ancestry. Gertrude's parents, who lived in India, were Michael John Yackjee (born 1840), an Anglo-Indian man of independent means, and Mary Teresa Robinson (born 1856), who was born to an Irish family killed during the Indian Rebellion of 1857 and grew up in an orphanage, where she met Yackjee; they married in 1872 and had five children, of whom Gertrude was the youngest. Ernest and Gertrude Hartley were married in 1912 in Kensington, London.
In 1917, Ernest Hartley was transferred to Bangalore as an officer in the Indian Cavalry, while Gertrude and Vivian stayed in Ootacamund. At the age of three, Vivian made her first stage appearance for her mother's amateur theatre group, reciting "Little Bo Peep". Gertrude Hartley tried to instill an appreciation of literature in her daughter and introduced her to the works of Hans Christian Andersen, Lewis Carroll and Rudyard Kipling, as well as stories of Greek mythology and Indian folklore. At the age of six, Vivian was sent by her mother from Loreto Convent, Darjeeling, to the Convent of the Sacred Heart (now Woldingham School) then situated in Roehampton, south-west London. One of her friends there was future actress Maureen O'Sullivan, two years her senior, to whom Vivian expressed her desire to become "a great actress". She was removed from the school by her father, and travelling with her parents for four years, she attended schools in Europe, notably in Dinard (Brittany, France), Biarritz (France), the Sacred Heart in San Remo on the Italian Riviera, and in Paris, becoming fluent in both French and Italian. The family returned to Britain in 1931. She attended A Connecticut Yankee, one of O'Sullivan's films playing in London's West End, and told her parents of her ambitions to become an actress. Shortly after, her father enrolled Vivian at the Royal Academy of Dramatic Art (RADA) in London.
Vivian met Herbert Leigh Holman, known as Leigh Holman, a barrister 13 years her senior, in 1931. Despite his disapproval of "theatrical people", they married on 20 December 1932 and she terminated her studies at RADA, her attendance and interest in acting having already waned after meeting Holman. On 12 October 1933 in London, she gave birth to a daughter, Suzanne, later Suzanne Farrington.
### 1935–1939: Early career and Laurence Olivier
Leigh's friends suggested she take a minor role as a schoolgirl in the film Things Are Looking Up, which was her film debut, albeit uncredited as an extra. She engaged an agent, John Gliddon, who believed that "Vivian Holman" was not a suitable name for an actress. After rejecting his many suggestions, she took "Vivian Leigh" as her professional name. Gliddon recommended her to Alexander Korda as a possible film actress, but Korda rejected her as lacking potential. She was cast in the play The Mask of Virtue, directed by Sidney Carroll in 1935, and received excellent reviews, followed by interviews and newspaper articles. One such article was from the Daily Express, in which the interviewer noted that "a lightning change came over her face", which was the first public mention of the rapid changes in mood that had become characteristic of her. John Betjeman, future poet laureate, described her as "the essence of English girlhood". Korda attended her opening night performance, admitted his error, and signed her to a film contract. She continued with the play but, when Korda moved it to a larger theatre, Leigh was found to be unable to project her voice adequately or to hold the attention of so large an audience, and the play closed soon after. In the playbill, Carroll had revised the spelling of her first name to "Vivien".
In 1960, Leigh recalled her ambivalence towards her first experience of critical acclaim and sudden fame, commenting that "some critics saw fit to be as foolish as to say that I was a great actress. And I thought, that was a foolish, wicked thing to say, because it put such an onus and such a responsibility onto me, which I simply wasn't able to carry. And it took me years to learn enough to live up to what they said for those first notices. I find it so stupid. I remember the critic very well and have never forgiven him."
In the autumn of 1935 and at Leigh's insistence, John Buckmaster introduced her to Laurence Olivier at the Savoy Grill, where he and his first wife Jill Esmond dined regularly after his performance in Romeo and Juliet. Olivier had seen Leigh in The Mask of Virtue earlier in May and congratulated her on her performance. Olivier and Leigh began an affair while acting as lovers in Fire Over England (1937), while Olivier was still married to Esmond and Leigh to Holman. During this period, Leigh read the Margaret Mitchell novel Gone with the Wind and instructed her American agent to recommend her to David O. Selznick, who was planning a film version. She remarked to a journalist, "I've cast myself as Scarlett O'Hara", and The Observer film critic C. A. Lejeune recalled a conversation of the same period in which Leigh "stunned us all" with the assertion that Olivier "won't play Rhett Butler, but I shall play Scarlett O'Hara. Wait and see."
Despite her relative inexperience, Leigh was chosen to play Ophelia to Olivier's Hamlet in an Old Vic Theatre production staged at Elsinore, Denmark. Olivier later recalled an incident when her mood rapidly changed as she was preparing to go onstage. Without apparent provocation, she began screaming at him before suddenly becoming silent and staring into space. She was able to perform without mishap, and by the following day she had returned to normal with no recollection of the event. It was the first time Olivier witnessed such behaviour from her. They began living together, as their respective spouses had each refused to grant either of them a divorce. Under the moral standards then enforced by the film industry, their relationship had to be kept from public view.
Leigh appeared with Robert Taylor, Lionel Barrymore and Maureen O'Sullivan in A Yank at Oxford (1938), which was the first of her films to receive attention in the United States. During production, she developed a reputation for being difficult and unreasonable, partly because she disliked her secondary role but mainly because her petulant antics seemed to be paying dividends. After dealing with the threat of a lawsuit brought over a frivolous incident, Korda instructed her agent to warn her that her option would not be renewed if her behaviour did not improve. Her next role was in Sidewalks of London, also known as St. Martin's Lane (1938), with Charles Laughton.
Olivier had been attempting to broaden his film career. He was not well known in the United States despite his success in Britain, and earlier attempts to introduce him to American audiences had failed. Offered the role of Heathcliff in Samuel Goldwyn's production of Wuthering Heights (1939), he travelled to Hollywood, leaving Leigh in London. Goldwyn and the film's director, William Wyler, offered Leigh the secondary role of Isabella, but she refused, preferring the role of Cathy, which went to Merle Oberon.
### 1939: Gone with the Wind
Hollywood was in the midst of a widely publicised search to find an actress to portray Scarlett O'Hara in David O. Selznick's production of Gone with the Wind (1939). At the time, Myron Selznick—David's brother and Leigh's American theatrical agent—was the London representative of the Myron Selznick Agency. In February 1938, Leigh asked Myron to put her forward for the part of Scarlett O'Hara.
David O. Selznick watched her performances that month in Fire Over England and A Yank at Oxford and thought that she was excellent but in no way a possible Scarlett because she was "too British". Leigh travelled to Los Angeles, however, to be with Olivier and to try to convince David Selznick that she was the right person for the part. Myron Selznick also represented Olivier and when he met Leigh, he felt that she possessed the qualities that his brother was searching for. According to legend, Myron Selznick took Leigh and Olivier to the set where the burning of the Atlanta Depot scene was being filmed and stage-managed an encounter, where he introduced Leigh, derisively addressing his younger brother, "Hey, genius, meet your Scarlett O'Hara." The following day, Leigh read a scene for Selznick, who organized a screen test with director George Cukor and wrote to his wife, "She's the Scarlett dark horse and looks damn good. Not for anyone's ear but your own: it's narrowed down to Paulette Goddard, Jean Arthur, Joan Bennett and Vivien Leigh". The director, George Cukor, concurred and praised Leigh's "incredible wildness". She secured the role of Scarlett soon after.
Filming proved difficult for Leigh. Cukor was dismissed and replaced by Victor Fleming, with whom Leigh frequently quarrelled. She and Olivia de Havilland secretly met with Cukor at night and on weekends for his advice about how they should play their parts. Leigh befriended Clark Gable, his wife Carole Lombard and Olivia de Havilland, but she clashed with Leslie Howard, with whom she was required to play several emotional scenes. Leigh was sometimes required to work seven days a week, often late into the night, which added to her distress, and she missed Olivier, who was working in New York City. On a long-distance telephone call to Olivier, she declared: "Puss, my puss, how I hate film acting! Hate, hate, and never want to do another film again!"
Quoted in a 2006 biography of Olivier, Olivia de Havilland defended Leigh against claims of her manic behaviour during the filming of Gone with the Wind: "Vivien was impeccably professional, impeccably disciplined on Gone with the Wind. She had two great concerns: doing her best work in an extremely difficult role and being separated from Larry [Olivier], who was in New York."
Gone with the Wind brought Leigh immediate attention and fame, but she was quoted as saying, "I'm not a film star—I'm an actress. Being a film star—just a film star—is such a false life, lived for fake values and for publicity. Actresses go on for a long time and there are always marvellous parts to play." The film won 10 Academy Awards including a Best Actress award for Leigh, who also won a New York Film Critics Circle Award for Best Actress.
### 1940–1949: Marriage and early collaborations with Olivier
In February 1940, Jill Esmond agreed to divorce Laurence Olivier, and Leigh Holman agreed to divorce Vivien, although they maintained a strong friendship for the rest of Leigh's life. Esmond was granted custody of Tarquin, her son with Olivier. Holman was granted custody of Suzanne, his daughter with Leigh. On 31 August 1940, Olivier and Leigh were married at the San Ysidro Ranch in Santa Barbara, California, in a ceremony attended only by their hosts, Ronald and Benita Colman and witnesses, Katharine Hepburn and Garson Kanin. Leigh had made a screen test and hoped to co-star with Olivier in Rebecca, which was to be directed by Alfred Hitchcock with Olivier in the leading role. After viewing Leigh's screen test, David Selznick noted that "she doesn't seem right as to sincerity or age or innocence", a view shared by Hitchcock and Leigh's mentor, George Cukor.
Selznick observed that she had shown no enthusiasm for the part until Olivier had been confirmed as the lead actor, so he cast Joan Fontaine. He refused to allow her to join Olivier in Pride and Prejudice (1940), and Greer Garson played the role Leigh had wanted for herself. Waterloo Bridge (1940) was to have starred Olivier and Leigh; however, Selznick replaced Olivier with Robert Taylor, then at the peak of his success as one of Metro-Goldwyn-Mayer's most popular male stars. Her top billing reflected her status in Hollywood, and the film was popular with audiences and critics.
The Oliviers mounted a stage production of Romeo and Juliet for Broadway. The New York press publicised the adulterous nature of the beginning of Olivier and Leigh's relationship and questioned their ethics in not returning to the UK to help with the war effort. Critics were hostile in their assessment of Romeo and Juliet. Brooks Atkinson for The New York Times wrote: "Although Miss Leigh and Mr. Olivier are handsome young people, they hardly act their parts at all." While most of the blame was attributed to Olivier's acting and direction, Leigh was also criticised, with Bernard Grebanier commenting on the "thin, shopgirl quality of Miss Leigh's voice". The couple had invested almost all of their combined savings of \$40,000 in the project, and the failure was a financial disaster for them.
The Oliviers filmed That Hamilton Woman (1941) with Olivier as Horatio Nelson and Leigh as Emma Hamilton. With the United States not yet having entered the war, it was one of several Hollywood films made with the aim of arousing a pro-British sentiment among American audiences. The film was popular in the United States and an outstanding success in the Soviet Union. Winston Churchill arranged a screening for a party that included Franklin D. Roosevelt and, on its conclusion, addressed the group, saying, "Gentlemen, I thought this film would interest you, showing great events similar to those in which you have just been taking part." The Oliviers remained favourites of Churchill, attending dinners and occasions at his request for the rest of his life; and, of Leigh, he was quoted as saying, "By Jove, she's a clinker."
The Oliviers returned to Britain in March 1943, and Leigh toured through North Africa that same year as part of a revue for the armed forces stationed in the region. She reportedly turned down a studio contract worth \$5,000 a week in order to volunteer as part of the war effort. Leigh performed for troops before falling ill with a persistent cough and fevers. In 1944, she was diagnosed as having tuberculosis in her left lung and spent several weeks in hospital before appearing to have recovered. Leigh was filming Caesar and Cleopatra (1945) when she discovered she was pregnant, then had a miscarriage. Leigh temporarily fell into a deep depression that hit its low point with her falling to the floor, sobbing in an hysterical fit. This was the first of many major bipolar disorder breakdowns. Olivier later came to recognise the symptoms of an impending episode—several days of hyperactivity followed by a period of depression and an explosive breakdown, after which Leigh would have no memory of the event, but would be acutely embarrassed and remorseful.
With her doctor's approval, Leigh was well enough to resume acting in 1946, starring in a successful London production of Thornton Wilder's The Skin of Our Teeth; but her films of this period, Caesar and Cleopatra (1945) and Anna Karenina (1948), were not great commercial successes. All British films in this period were adversely affected by a Hollywood boycott of British films. In 1947, Olivier was knighted and Leigh accompanied him to Buckingham Palace for the investiture. She became Lady Olivier. After their divorce, according to the style granted to the divorced wife of a knight, she became known socially as Vivien, Lady Olivier.
By 1948, Olivier was on the board of directors for the Old Vic Theatre, and he and Leigh embarked on a six-month tour of Australia and New Zealand to raise funds for the theatre. Olivier played the lead in Richard III and also performed with Leigh in The School for Scandal and The Skin of Our Teeth. The tour was an outstanding success and, although Leigh was plagued with insomnia and allowed her understudy to replace her for a week while she was ill, she generally withstood the demands placed upon her, with Olivier noting her ability to "charm the press". Members of the company later recalled several quarrels between the couple as Olivier was increasingly resentful of the demands placed on him during the tour. The most dramatic altercation occurred in Christchurch, New Zealand, when her shoes were not found and Leigh refused to go onstage without them. Olivier screamed an obscenity at her and slapped her face, and a devastated Leigh slapped him in return, dismayed that he would hit her publicly. Subsequently, she made her way to the stage in borrowed pumps, and in seconds, had "dried her tears and smiled brightly onstage". By the end of the tour, both were exhausted and ill. Olivier told a journalist, "You may not know it, but you are talking to a couple of walking corpses." Later, he would observe that he "lost Vivien" in Australia.
The success of the tour encouraged the Oliviers to make their first West End appearance together, performing the same works with one addition, Antigone, included at Leigh's insistence because she wished to play a role in a tragedy.
### 1949–1951: Play and film roles in A Streetcar Named Desire
Leigh next sought the role of Blanche DuBois in the West End stage production of Tennessee Williams's A Streetcar Named Desire and was cast after Williams and the play's producer Irene Mayer Selznick saw her in The School for Scandal and Antigone; Olivier was contracted to direct. The play contained a rape scene and references to promiscuity and homosexuality, and was destined to be controversial; the media discussion about its suitability added to Leigh's anxiety. Nevertheless, she believed strongly in the importance of the work.
When the West End production of Streetcar opened in October 1949, J. B. Priestley denounced the play and Leigh's performance; and the critic Kenneth Tynan, who was to make a habit of dismissing her stage performances, commented that Leigh was badly miscast because British actors were "too well-bred to emote effectively on stage". Olivier and Leigh were chagrined that part of the commercial success of the play lay in audience members attending to see what they believed would be a salacious story, rather than the Greek tragedy that they envisioned. The play also had strong supporters, among them Noël Coward, who described Leigh as "magnificent".
After 326 performances, Leigh finished her run, and she was soon assigned to reprise her role as Blanche DuBois in the film version of the play. Her irreverent and often bawdy sense of humour allowed her to establish a rapport with Marlon Brando, but she initially had difficulty working with director Elia Kazan, who was displeased with the direction that Olivier had taken in shaping the character of Blanche. Kazan had favoured Jessica Tandy and later, Olivia de Havilland over Leigh, but knew she had been a success on the London stage as Blanche. He later commented that he did not hold her in high regard as an actress, believing that "she had a small talent." As work progressed, however, he became "full of admiration" for "the greatest determination to excel of any actress I've known. She'd have crawled over broken glass if she thought it would help her performance." Leigh found the role gruelling and commented to the Los Angeles Times, "I had nine months in the theatre of Blanche DuBois. Now she's in command of me." Olivier accompanied her to Hollywood where he was to co-star with Jennifer Jones in William Wyler's Carrie (1952).
Leigh's performance in A Streetcar Named Desire won glowing reviews, as well as a second Academy Award for Best Actress, a British Academy of Film and Television Arts (BAFTA) Award for Best British Actress, and a New York Film Critics Circle Award for Best Actress. Tennessee Williams commented that Leigh brought to the role "everything that I intended, and much that I had never dreamed of". Leigh herself had mixed feelings about her association with the character; in later years, she said that playing Blanche DuBois "tipped me over into madness".
### 1951–1960: Struggle with mental illness
In 1951, Leigh and Laurence Olivier performed two plays about Cleopatra, William Shakespeare's Antony and Cleopatra and George Bernard Shaw's Caesar and Cleopatra, alternating the play each night and winning good reviews. They took the productions to New York, where they performed a season at the Ziegfeld Theatre into 1952. The reviews there were also mostly positive, but film critic Kenneth Tynan angered them when he suggested that Leigh's was a mediocre talent that forced Olivier to compromise his own. Tynan's diatribe almost precipitated another collapse; Leigh, terrified of failure and intent on achieving greatness, dwelt on his comments and ignored the positive reviews of other critics.
In January 1953, Leigh travelled to Ceylon to film Elephant Walk with Peter Finch. Shortly after filming commenced, she had a nervous breakdown and Paramount Pictures replaced her with Elizabeth Taylor. Olivier returned her to their home in Britain, where, between periods of incoherence, Leigh told him she was in love with Finch and had been having an affair with him. Over a period of several months, she gradually recovered. As a result of this episode, many of the Oliviers' friends learned of her problems. David Niven said she had been "quite, quite mad". Noël Coward expressed surprise in his diary that "things had been bad and getting worse since 1948 or thereabouts". Leigh's romantic relationship with Finch began in 1948, and waxed and waned for several years, ultimately flickering out as her mental condition deteriorated.
Also in 1953, Leigh recovered sufficiently to play The Sleeping Prince with Olivier, and in 1955 they performed a season at Stratford-upon-Avon in Shakespeare's Twelfth Night, Macbeth and Titus Andronicus. They played to capacity houses and attracted generally good reviews, Leigh's health seemingly stable. John Gielgud directed Twelfth Night and wrote, "... perhaps I will still make a good thing of that divine play, especially if he will let me pull her little ladyship (who is brainier than he but not a born actress) out of her timidity and safeness. He dares too confidently ... but she hardly dares at all and is terrified of overreaching her technique and doing anything that she has not killed the spontaneity of by overpractice." In 1955, Leigh starred in Anatole Litvak's film The Deep Blue Sea; co-star Kenneth More felt he had poor chemistry with Leigh during the filming.
In 1956, Leigh took the lead role in the Noël Coward play South Sea Bubble, but withdrew from the production when she became pregnant. Several weeks later, she miscarried and entered a period of depression that lasted for months. She joined Olivier for a European tour of Titus Andronicus, but the tour was marred by Leigh's frequent outbursts against Olivier and other members of the company. After their return to London, her former husband, Leigh Holman, who could still exert a strong influence on her, stayed with the Oliviers and helped calm her.
In 1959, when she achieved a success with the Noël Coward comedy Look After Lulu!, a critic working for The Times described her as "beautiful, delectably cool and matter of fact, she is mistress of every situation".
Considering her marriage to be over, Leigh began a relationship with actor Jack Merivale in 1960, who knew of Leigh's medical condition and assured Olivier that he would care for her. That same year, she and Olivier divorced and Olivier soon married actress Joan Plowright. In his autobiography, Olivier discussed the years of strain they had experienced because of Leigh's illness: "Throughout her possession by that uncannily evil monster, manic depression, with its deadly ever-tightening spirals, she retained her own individual canniness—an ability to disguise her true mental condition from almost all except me, for whom she could hardly be expected to take the trouble."
### 1961–1967: Final years and death
Merivale proved to be a stabilising influence for Leigh, but despite her apparent contentment, she was quoted by Radie Harris as confiding that she "would rather have lived a short life with Larry [Olivier] than face a long one without him". Her first husband Leigh Holman also spent considerable time with her. Merivale joined her for a tour of Australia, New Zealand and Latin America that lasted from July 1961 until May 1962, and Leigh enjoyed positive reviews without sharing the spotlight with Olivier. Though she was still beset by bouts of depression, she continued to work in the theatre and, in 1963, won a Tony Award for Best Actress in a Musical for her role in Tovarich. She also appeared in the films The Roman Spring of Mrs. Stone (1961) and Ship of Fools (1965).
Leigh's last screen appearance, in Ship of Fools, was both a triumph and emblematic of the illnesses that had taken hold of her. Producer and director Stanley Kramer, who took over the film, planned to star Leigh but was initially unaware of her fragile mental and physical state. Later recounting her work, Kramer remembered her courage in taking on the difficult role: "She was ill, and the courage to go ahead, the courage to make the film—was almost unbelievable." Leigh's performance was tinged by paranoia and resulted in outbursts that marred her relationship with other actors, although both Simone Signoret and Lee Marvin were sympathetic and understanding. In one unusual instance during the attempted rape scene, Leigh became distraught and hit Marvin so hard with a spiked shoe that it marked his face. Leigh won the L'Étoile de Cristal for her performance in a leading role in Ship of Fools.
In May 1967, Leigh was rehearsing to appear with Michael Redgrave in Edward Albee's A Delicate Balance when her tuberculosis resurfaced. Following several weeks of rest, she seemed to recover. On the night of 7 July 1967, Merivale left her as usual at their Eaton Square flat to perform in a play, and he returned home just before midnight to find her asleep. About 30 minutes later (by now 8 July), he entered the bedroom and discovered her body on the floor. She had been attempting to walk to the bathroom and, as her lungs filled with fluid, she collapsed and suffocated. Merivale first contacted her family and later was able to reach Olivier, who was receiving treatment for prostate cancer in a nearby hospital. In his autobiography, Olivier described his "grievous anguish" as he immediately travelled to Leigh's residence, to find that Merivale had moved her body onto the bed. Olivier paid his respects, and "stood and prayed for forgiveness for all the evils that had sprung up between us", before helping Merivale make funeral arrangements; Olivier stayed until her body was removed from the flat.
Her death was publicly announced on 8 July, and the lights of every theatre in central London were extinguished for an hour. A Catholic service for Leigh was held at St. Mary's Church, Cadogan Street, London. Her funeral was attended by the luminaries of British stage and screen. According to the provisions of her will, Leigh was cremated at the Golders Green Crematorium and her ashes were scattered on the lake at her summer home, Tickerage Mill, near Blackboys, East Sussex, England. A memorial service was held at St Martin-in-the-Fields, with a final tribute read by John Gielgud. In 1968, Leigh became the first actress honoured in the United States by "The Friends of the Libraries at the University of Southern California". The ceremony was conducted as a memorial service, with selections from her films shown and tributes provided by such associates as George Cukor, who screened the tests that Leigh had made for Gone with the Wind, the first time the screen tests had been seen in 30 years.
## Legacy
Leigh was considered to be one of the most beautiful actresses of her day, and her directors emphasised this in most of her films. When asked if she believed her beauty had been an impediment to being taken seriously as an actress, she said, "People think that if you look fairly reasonable, you can't possibly act, and as I only care about acting, I think beauty can be a great handicap, if you really want to look like the part you're playing, which isn't necessarily like you."
Director George Cukor described Leigh as a "consummate actress, hampered by beauty", and Laurence Olivier said that critics should "give her credit for being an actress and not go on forever letting their judgments be distorted by her great beauty." Garson Kanin shared their viewpoint and described Leigh as "a stunner whose ravishing beauty often tended to obscure her staggering achievements as an actress. Great beauties are infrequently great actresses—simply because they don't need to be. Vivien was different; ambitious, persevering, serious, often inspired."
Leigh explained that she played "as many different parts as possible" in an attempt to learn her craft and to dispel prejudice about her abilities. She believed that comedy was more difficult to play than drama because it required more precise timing and said that more emphasis should be placed upon comedy as part of an actor's training. Nearing the end of her career, which ranged from Noël Coward comedies to Shakespearean tragedies, she observed, "It's much easier to make people cry than to make them laugh."
Her early performances brought her immediate success in Britain, but she remained largely unknown in other parts of the world until the release of Gone with the Wind. In December 1939, film critic Frank Nugent wrote in The New York Times, "Miss Leigh's Scarlett has vindicated the absurd talent quest that indirectly turned her up. She is so perfectly designed for the part by art and nature that any other actress in the role would be inconceivable", and as her fame escalated, she was featured on the cover of Time magazine as Scarlett. In 1969, critic Andrew Sarris commented that the success of the film had been largely due to "the inspired casting" of Leigh, and in 1998, wrote that "she lives in our minds and memories as a dynamic force rather than as a static presence". Film historian and critic Leonard Maltin described the film as one of the all-time greats, writing in 1998 that Leigh "brilliantly played" her role.
Her performance in the West End production of A Streetcar Named Desire, described by the theatre writer Phyllis Hartnoll as "proof of greater powers as an actress than she had hitherto shown", led to a lengthy period during which she was considered one of the finest actresses in British theatre. Discussing the subsequent film version, Pauline Kael wrote that Leigh and Marlon Brando gave "two of the greatest performances ever put on film" and that Leigh's was "one of those rare performances that can truly be said to evoke both fear and pity."
Her greatest critic was Kenneth Tynan who ridiculed Leigh's performance opposite Olivier in the 1955 production of Titus Andronicus, commenting that she "receives the news that she is about to be ravished on her husband's corpse with little more than the mild annoyance of one who would have preferred foam rubber." He was also critical of her reinterpretation of Lady Macbeth in 1955, saying that her performance was insubstantial and lacked the necessary fury demanded of the role. After her death, however, Tynan revised his opinion, describing his earlier criticism as "one of the worst errors of judgment" he had ever made. He came to believe that Leigh's interpretation, in which Lady Macbeth uses her sexual allure to keep Macbeth enthralled, "made more sense ... than the usual battle-axe" portrayal of the character. In a survey of theatre critics conducted shortly after Leigh's death, several named her performance as Lady Macbeth as one of her greatest achievements in theatre.
In 1969, a plaque to Leigh was placed in the Actors' Church, St Paul's, Covent Garden, London. In 1985, a portrait of her was included in a series of United Kingdom postage stamps, along with Sir Alfred Hitchcock, Sir Charlie Chaplin, Peter Sellers and David Niven to commemorate "British Film Year". In April 1996, she appeared in the Centenary of Cinema stamp issue (with Sir Laurence Olivier) and in April 2013 was again included in another series, this time celebrating the 100th anniversary of her birth. The British Library in London purchased the papers of Olivier from his estate in 1999. Known as The Laurence Olivier Archive, the collection includes many of Leigh's personal papers, including numerous letters she wrote to Olivier. The papers of Leigh, including letters, photographs, contracts and diaries, are owned by her daughter, Mrs. Suzanne Farrington. In 1994, the National Library of Australia purchased a photograph album, monogrammed "L & V O" and believed to have belonged to the Oliviers, containing 573 photographs of the couple during their 1948 tour of Australia. It is now held as part of the record of the history of the performing arts in Australia. In 2013, an archive of Leigh's letters, diaries, photographs, annotated film and theatre scripts and her numerous awards was acquired by the Victoria and Albert Museum in London. Also in 2013, Leigh was among the ten people selected by the Royal Mail for their "Great Britons" commemorative postage stamp issue.
### In popular culture
Leigh was portrayed by American actress Morgan Brittany in The Day of the Locust (1975), Gable and Lombard (1976) and The Scarlett O'Hara War (1980). Julia Ormond played Leigh in My Week with Marilyn (2011). Leigh was also portrayed by Katie McGuinness in the Netflix miniseries Hollywood (2020).
## Filmography
## Accolades
# House of Music
House of Music is the fourth and final album by American R&B band Tony! Toni! Toné!, released on November 19, 1996, by Mercury Records. It follows the success of the band's 1993 album Sons of Soul and a hiatus during which each member pursued individual musical projects.
For House of Music, Tony! Toni! Toné! regrouped in 1995 and worked at studios in San Francisco, Los Angeles, Oakland, and Sacramento. Bassist-vocalist Raphael Saadiq, guitarist-vocalist D'wayne Wiggins, and percussionist-keyboardist Timothy Christian Riley worked on songs for the album independently before recording them together as a group. Most of the album was produced by the band; the only song to feature outside production was "Let's Get Down", produced by Saadiq with rapper-producer DJ Quik and G-One.
Tony! Toni! Toné! sought to emphasize musicianship rather than production technique during the sessions for House of Music. Expanding on their previous work's traditional R&B influences with live instrumentation and balladry, the album features both contemporary and older musical sensibilities alongside witty, sensitive lyrics informed by the spirit of romantic love and seduction. Tony! Toni! Toné! named the album after a small record store in the band's native city of Oakland, which Wiggins said they were reminded of after listening to the finished music.
House of Music charted for 31 weeks on the Billboard 200, peaking at number 32, and was certified Platinum by the Recording Industry Association of America (RIAA). Critics widely praised Tony! Toni! Toné!'s musicianship and songwriting, later deeming the album a masterpiece of 1990s R&B. An international tour promoting House of Music was planned but did not materialize amid growing tensions within the group stemming from creative differences and Mercury's management. They disbanded shortly after the album's release to pursue separate music careers.
## Background
Tony! Toni! Toné! took a hiatus as a group after the commercial and critical success of their third album Sons of Soul (1993). According to vocalist and bassist Raphael Wiggins, each member had pursued individual music projects, and "the group was trying to figure out where everybody's time, space and head was at." He, D'wayne Wiggins, and Timothy Christian Riley worked on songwriting and production for other recording artists during the band's hiatus, including D'Angelo, En Vogue, Karyn White, Tevin Campbell, and A Tribe Called Quest. Raphael Wiggins adopted the surname "Saadiq"—meaning "man of his word" in Arabic—as his professional name in 1994, and released his solo single "Ask of You" in 1995. Their work outside the band led to rumors of a break-up during the time between albums, before the members regrouped to record House of Music.
## Recording and production
House of Music was recorded in sessions that began in September 1995 and took place at the following California-based studios: Brilliant Studios and Hyde Street Studios in San Francisco; Coda Studios and Grass Roots Studios in Oakland; Encore Studios, Image Recording, and Westlake Recording Studios in Los Angeles; and Pookie Labs and Woodshed Studios in Sacramento.
Tony! Toni! Toné! used vintage recording equipment and, for certain tracks, a 40-piece orchestra. Some songs also featured guest musicians, including rapper and producer DJ Quik, percussionist Sheila E., and the Tower of Power horn section. Saadiq worked with DJ Quik on the song "Let's Get Down" and said the collaboration proved very "natural" because of the producer's affinity for funk music. Tony! Toni! Toné! wanted to record the album with an emphasis on musicianship rather than production flair. Wiggins felt that the absence of their once prominent synthesizers made the resulting music sound more distinctive. "On a lot of the songs, you can just imagine a five-piece band performing", he later told USA Today.
Unlike the group's previous albums, each member arranged, composed, and produced songs on their own before putting the finished recordings together for House of Music. According to Saadiq, "what I did was write a lot of stuff and rehearse it for about a month, then recorded it live. Then [Wiggins and Riley] would add their parts separately." He worked with his own recording crew for House of Music, featuring guitarist Chalmers "Spanky" Alford, drummer Tommy Branford, and keyboardists Kelvin Wooten and Cedric Draper. Wiggins believed the band's hiatus benefited the recording of House of Music, making them less likely to produce an album derivative of Sons of Soul.
The album's opening track, the Al Green-styled "Thinking of You", was one that the group conceived and recorded together at 3:00 a.m. in Saadiq's Pookie Labs studio. As he remembered it, "I was just playing around and started singing off the top of my head. I never wrote anything down, it was just what came out." "Annie May", one of Wiggins' songs for House of Music, had Saadiq's backing vocals pre-recorded and then overdubbed onto the track's final mix. Tony! Toni! Toné! completed recording House of Music in September 1996. The album was then mastered by Brian Gardner at the Bernie Grundman Mastering studio in Hollywood.
One of Saadiq's songs for the album, "Me and the Blind Man", was excluded from the final mix because, as Saadiq told Yahoo! Music, "they didn't want anybody playing favorites, so one of my songs had to come off." The recording was a moody blues piece with surrealistic lyrics about lust, longing, and a fictitious blind man's secret powers. Saadiq wanted to show "a darker side ... some depth" to listeners with the song. "To me, songs like 'Blind Man' make the whole sound, the House of Music", he remarked. It was featured on an album sampler sent by the group's label to music journalists. Saadiq later recorded a version of "Blind Man" for his 2002 solo album Instant Vintage.
## Music and lyrics
House of Music expands on the traditional R&B influences of Tony! Toni! Toné!'s previous work, emphasizing live instrumentation and ballads. In the opinion of Daily Herald writer Dan Kening, the album is a continuation of the band's mix of contemporary R&B and old-fashioned soul, resulting in "half a tribute to their '60s and '70s soul music roots and half a masterful blend of modern smooth balladeering and danceable funk." Salon critic Jennie Yabroff believed House of Music mostly features ballads in the form of "slow, emotional numbers with muted beats" that accentuate the lyrics. According to Drum magazine, mid-tempo songs such as "Thinking of You" and "Still a Man" rely strongly on 1960s R&B/soul "given a contemporary face", while up-tempo songs such as "Lovin' You", "Don't Fall in Love", and "Let's Get Down" have elements of funk.
The lyrics on House of Music are described by several journalists as witty and sensitive. Michaelangelo Matos of the Chicago Reader characterizes Saadiq's songwriting as playful and quirky, while comparing his tenor singing voice to that of a young Michael Jackson. Of Wiggins, Matos says his melodies and rhythms are subtler than those of Saadiq and observes "burnished obbligatos, hushed burr, and starry-eyed falsetto" in his singing. Saadiq alternates with Wiggins as lead vocalist throughout the album. Richard Torres of Newsday attributes the group's lyrics on the album to their "[belief] in the power of love and the lure of romance".
According to Saadiq, the opening track "Thinking of You" is "a really soul, southern, funky song" inspired by Al Green. It has light guitar strokes and is sung in a Southern twang by Saadiq, while "Top Notch" features jazz elements and the vocalist's playful promise of a trip to Denny's for "the most expensive dinner we can find". On "Still a Man", he sings from the perspective of a man who was left by his wife to raise their children alone. The backing vocalists sing the meditative hook, "Have you ever loved somebody / Who loves you so much it hurts you to hurt them so bad?" On "Holy Smokes & Gee Whiz", Saadiq's older brother Randall Wiggins sings lead. It is described by Washington City Paper journalist Rickey Wright as a modernized version of the Stylistics' 1972 song "Betcha by Golly, Wow", featuring "a dead-on impression of Russell Thompkins' unmistakable falsetto and precise diction".
"Annie May", written by Wiggins, is a story about a "good girl next door" who becomes an exotic dancer, while "Let Me Know" is a love song with Wall of Sound elements. According to Nick Krewen of The Spectator, "Wild Child" is "a ballad in the grand sense" of the 1977 Earth, Wind & Fire song "Be Ever Wonderful". "Party Don't Cry", a meditation on mortality with jazzy, philosophical overtones, is said by Wright to convey "an overt spirituality unheard in the Tonyies' past songs". The album's closing track is a gospel-influenced instrumental and variation of "Lovin' You" composed by Saadiq. Its sole lyric, according to Wright, is a universalist platitude.
## Title and packaging
House of Music was named after a record store in the band's native Oakland, which had closed several years prior to the album's release. Wiggins explained in October 1996 to Billboard: "We title all our albums at the end of the project. We sat back and listened to everything, and it reminded us of this mom-and-pop store around our way in Oakland." "We grew up in a house of music", Wiggins continued, remarking how their father was a blues guitarist and music had a unifying effect on people. According to Billboard's Shawnee Smith, the album's title describes a varied, complete work distinct from a contemporary music market oversaturated by "retro-soul groups".
The album's cover and booklet photos were taken by photographer William Claxton, who captured Tony! Toni! Toné! dressed in casual and formal, retro clothing. This departure from the more outré wardrobe of the band's past was interpreted by journalist Brandon Ousley as an effort to promote "the elegance of 1960s-era Black America and legendary soul acts to a modern generation".
## Marketing and sales
House of Music was released on November 19, 1996, by Mercury Records. The label planned the release date to coincide with the peak holiday shopping period and ran ad campaigns scheduled for network cable, syndicated television shows, and radio stations. Tony! Toni! Toné! inaugurated its release with a satellite press conference and in-store performance at a small retail outlet in the San Francisco Bay Area. They also embarked on a tour of historically black colleges and Black Independent Coalition record shops after "Let's Get Down" had been sent to R&B and crossover radio on October 28 as the album's lead single; its music video was released to outlets such as BET, The Box, and MTV. Tony! Toni! Toné! performed the song on the sketch comedy show All That; on the music variety program Soul Train, they performed "Let's Get Down" and "Annie May".
In its first eight weeks, House of Music sold 318,502 copies in the US. It peaked at number 32 on the Billboard 200 and spent 31 weeks on the chart. "Thinking of You" was released as the second single on March 11, 1997, by which time House of Music had sold 514,000 copies, according to Nielsen SoundScan. On August 6, the album was certified Platinum by the Recording Industry Association of America (RIAA).
During the album's marketing campaign, Tony! Toni! Toné! experienced growing tensions stemming from creative differences, business-related problems, and Saadiq's interest in a solo career. "There's a quiet stress between us that no one really talks about", Saadiq told Vibe in February 1997. "And what's sad about the whole thing is the fact that our friendship is disintegrating. Who knows, House of Music could be the last Tony Toni Toné album." According to Mercury vice president Marty Maidenberg, an international tour for the album had been planned by October 1996, with concert dates in Japan and the United Kingdom, but it never materialized.
The band remained committed to promoting the record into 1997, including a February 28 taped performance for VH1's Hard Rock Live special. Later that year, Mercury released the group's greatest hits album Hits (1997), prompting Saadiq to explain in a November interview for the Philadelphia Daily News that Tony! Toni! Toné! had experienced "a little turmoil" but were still together: "We're family. We all love each other and support each other. If we don't do any more records together, it doesn't matter." When asked about the status of House of Music's campaign, he suggested that it had ended and that Mercury was at fault. The group disbanded shortly afterward, and each member went on to pursue an individual music career.
## Critical reception and legacy
House of Music was met with positive reviews. Writing for Entertainment Weekly in November 1996, Ken Tucker found Tony! Toni! Toné!'s imitations of classic sounds "intelligent, sometimes brilliant", "witty", and "tremendously likable", with "a new recurring theme: what makes a man a man and a woman a woman, explored with both frankness and slyness". Sonia Murray of The Atlanta Journal-Constitution hailed it as the band's most effectual and multifaceted record yet, while Chicago Tribune critic Greg Kot said, "they find rapture that is steeped in reality rather than in the upwardly mobile fantasy concocted by many of today's less tradition-conscious R&B crooners." "The Tonies serve as a sort of stylistic missing link", J. D. Considine wrote in The Baltimore Sun, "suggesting what would have happened had the soul styles of the '70s continued to evolve, instead of being tossed aside by the synth-driven sound of the '80s". Michael A. Gonzales from Vibe said the album "glows a vision of blackness that is superbad, mad smooth, and crazy sexy". He described it as "a wonderland of harmonic delights, softcore jollies, and slow-jam fever floating on the tip of Cupid's arrow", showing the group "exploring the sensuality of black pop without sounding like boulevard bullies stalking their objects of desire".
At the end of 1996, House of Music was voted the 30th best album of the year in The Village Voice's annual Pazz & Jop poll, which polled 236 American critics nationwide. Robert Christgau, the poll's supervisor, ranked it 10th on his own year-end list. In his review for the newspaper, he deemed "Thinking of You" a "hilariously gutsy" and spot-on Al Green homage while writing of the album overall:
> Raphael Saadiq and his henchmen give the r&b revival what for, constructing a generous original style from a varied history they know inside out—Tempts, Sly, Blue Magic, Kurtis Blow. And for almost every sound they provide a sharp song, which is more than Holland–Dozier–Holland and Gamble-Huff could manage when they were compelled to stick to one. Defeating second-half trail-off and a CD-age windiness the band isn't beatwise enough to beat, Saadiq's flexible, sensitive, slightly nasal tenor, spelled by the grain of D'wayne Wiggins's workaday baritone, recasts the tradition in its image.
In retrospect, Christgau attributed the album's success to Saadiq's lead role in Tony! Toni! Toné! He contended that "only with House of Music did they become true sons of the soul revival, the most accomplished r&b act of the '90s. That's still the album to remember them by." AllMusic editor Leo Stanley later remarked that the group "successfully accomplish their fusion of the traditional and contemporary ... within the framework of memorable, catchy songs" indebted to both old and modern R&B songwriting virtues. According to Stanley, the record had an influence on contemporary neo soul artists such as Tony Rich and Maxwell. In Matos' opinion, the album showcased the increasing artistic contrast between Saadiq and Wiggins, which "had grown so pronounced that the tension only enhanced what was already the group's best batch of songs". Rashod Ollison of The Virginian-Pilot regarded the record as "a flawless gem" on which the band's "amalgamation of traditional and contemporary styles coalesced beautifully". In The Rolling Stone Album Guide (2004), Fred Schruers said "House of Music consolidates the triumph of Sons of Soul for a masterpiece of 1990s R&B, an album that is as steeped in soul tradition as anything by Maxwell or D'Angelo, but that mixes the homage with humor and deft contemporary touches, thereby creating a new space all its own."
## Track listing
Information is taken from the album's liner notes.
## Personnel
Credits are adapted from the album's liner notes.
### Tony! Toni! Toné!
- Timothy Christian Riley – acoustic piano, clarinet, drums, electric pianos, Hammond B-3 organ, percussion, production
- Raphael Saadiq – bass, guitar, keyboards, production, vocals
- D'wayne Wiggins – guitar, production, vocals
### Additional musicians
- Greg Adams – trumpet
- Spanky Alford – guitar
- George Archie – musician
- Johnny Bamont – saxophone
- Sue Ann Carwell – background vocals
- Tommy Bradford – drums
- DJ Quik – production, triangle, vocals on "Let's Get Down"
- Pete Escovedo – percussion
- Clare Fischer – string arrangements
- Mic Gillette – trombone
- Elijah Baker Hassan – bass guitar
- Bobette Jamison-Harrison – background vocals
- Vince Lars – saxophone
- Marvin McFadden – trumpet
- Nick Moroch – guitar
- Bill Ortiz – trumpet
- Conesha Owens – background vocals
- Brenda Roy – background vocals
- Sheila E. – percussion
- Jackie Simley – background vocals
- Joel Smith – bass guitar, drums
- Charles Veal – orchestration
- Carl Wheeler – background vocals, engineering, keyboards
- Randall Wiggins – background vocals, vocals
- Kelvin Wooten – keyboards, string arrangements
- Benjamin Wright – string arrangements
### Production
- Danny Alonso – engineering
- Mike Bogus – assistant engineering, engineering
- Gerry Brown – engineering, mixing
- Milton Chan – assistant engineering
- William Claxton – photography
- Jim Danis – assistant engineering
- Tim Donovan – mixing assistance
- Maureen Droney – production coordination
- Steve Durkey – mixing assistance
- Brian Gardner – mastering
- Danny Goldberg – executive production
- Margery Greenspan – art direction
- Darrin Harris – engineering
- Carter Humphrey – mixing assistance
- Richard Huredia – mixing assistance
- Wes Johnson – assistant engineering
- Booker T. Jones III – mixing
- Ken Kessie – engineering, mixing
- Brian Kinkel – engineering
- Marty Main – assistant engineer, engineering
- Bill Malina – editing, engineering, mixing
- Jason Mauza – mixing assistance
- Marty Ogden – engineering
- Chris Puram – mixing
- Tracy Riley – production coordination
- Skip Saylor – engineering
- Joey Swails – engineering, programming
- Raymond Taylor-Smith – mixing assistance
- Tulio Torrinello, Jr. – engineering
- Terri Wong – assistant engineer
- Brian Young – assistant engineer
## Charts
### Weekly charts
### Year-end charts
### Singles
# Uruguayan War
The Uruguayan War (10 August 1864 – 20 February 1865) was fought between Uruguay's governing Blanco Party and an alliance consisting of the Empire of Brazil and the Uruguayan Colorado Party, covertly supported by Argentina. Since its independence, Uruguay had been ravaged by intermittent struggles between the Colorado and Blanco factions, each attempting to seize and maintain power in turn. The Colorado leader Venancio Flores launched the Liberating Crusade in 1863, an insurrection aimed at toppling Bernardo Berro, who presided over a Colorado–Blanco coalition (fusionist) government. Flores was aided by Argentina, whose president Bartolomé Mitre provided him with supplies, Argentine volunteers and river transport for troops.
The fusionism movement collapsed as the Colorados abandoned the coalition to join Flores' ranks. The Uruguayan Civil War quickly escalated, developing into a crisis of international scope that destabilized the entire region. Even before the Colorado rebellion, the Blancos within fusionism had sought an alliance with Paraguayan dictator Francisco Solano López. Berro's now purely Blanco government also received support from Argentine federalists, who opposed Mitre and his Unitarians. The situation deteriorated as the Empire of Brazil was drawn into the conflict. Almost one fifth of the Uruguayan population were considered Brazilian. Some joined Flores' rebellion, spurred by discontent with Blanco government policies that they regarded as harmful to their interests. Brazil eventually decided to intervene in the Uruguayan affair to reestablish the security of its southern frontiers and its regional ascendancy.
In April 1864, Brazil sent Minister Plenipotentiary José Antônio Saraiva to negotiate with Atanasio Aguirre, who had succeeded Berro in Uruguay. Saraiva made an initial attempt to settle the dispute between Blancos and Colorados. Faced with Aguirre's intransigence regarding Flores' demands, the Brazilian diplomat abandoned the effort and sided with the Colorados. On 10 August 1864, after a Brazilian ultimatum was refused, Saraiva declared that Brazil's military would begin exacting reprisals. Brazil declined to acknowledge a formal state of war, and for most of its duration, the Uruguayan–Brazilian armed conflict was an undeclared war.
In a combined offensive against Blanco strongholds, the Brazilian–Colorado troops advanced through Uruguayan territory, taking one town after another. Eventually the Blancos were left isolated in Montevideo, the national capital. Faced with certain defeat, the Blanco government capitulated on 20 February 1865. The short-lived war would have been regarded as an outstanding success for Brazilian and Argentine interests, had Paraguayan intervention in support of the Blancos (with attacks upon Brazilian and Argentine provinces) not led to the long and costly Paraguayan War.
## Uruguayan Civil War
### Blanco–Colorado strife
The Oriental Republic of Uruguay in South America had been, since its independence in 1828, troubled by strife between the Blanco Party and the Colorado Party. They were not political parties in the modern sense, but factions that engaged in internecine rebellion whenever the other dominated the government. The nation was deeply divided into Colorado and Blanco camps. These partisan groups formed in the 1830s and arose out of patron–client relationships fostered by local caudillos in the cities and countryside. Rather than a unity based upon common nationalistic sentiments, each had differing aims and loyalties informed by their respective, insular political frameworks.
Uruguay had a very low population density and a weak government. Ordinary citizens were compelled by circumstances to seek the protection of local caudillos—landlords who were either Colorados or Blancos and who used their workers, mostly gaucho horsemen, as private armies. The civil wars between the two factions were brutal. Harsh tactics produced ever-increasing alienation between the groups, and included seizure of land, confiscation of livestock and executions. The antagonism caused by atrocities, along with family loyalties and political ties, made reconciliation unthinkable. European immigrants, who came in great numbers during the latter half of the nineteenth century, were drawn into one party or the other; both parties had liberal and conservative wings, so the social and political views of newcomers could be reconciled with either. The feuding blocs impeded development of a broadly supported central national administration.
### Liberating Crusade of 1863
In the latter half of the 1850s, leading members of the Colorados and Blancos attempted a reconciliation. With the approval of many from both parties, efforts were made to implement "fusionist" policies, which began to show results in cooperation in government and military spheres. The attempt at healing the schism was dealt a setback in 1858, when reactionaries in the Colorado Party rejected the scheme. The revolt was put down by Gabriel Pereira, a former Colorado and Uruguayan president under the fusionist government. The rebellious leaders were executed at Paso de Quinteros along the Río Negro, sparking renewed conflict. The Colorados suspected fusionism of promoting Blanco aims to their own detriment and called for the "martyrs of Quinteros" to be avenged.
With the internal weaknesses of fusionism now exposed, the Colorados moved to oust its supporters from the government. Their leader, Brigadier General Venancio Flores, a caudillo and an early proponent of fusionism, found himself without sufficient military resources to mount a sustained revolt and resorted to asking for intervention by Argentina.
Argentina was a fragmented nation (since the 1852 downfall of Argentine dictator Juan Manuel de Rosas), with the Argentine Confederation and the State of Buenos Aires each vying for supremacy. Flores approached the Buenos Aires Minister of War, Bartolomé Mitre, agreeing to throw the support of the Colorados behind Buenos Aires in exchange for subsequent Argentine assistance in their fight against the fusionist government in Montevideo (the Uruguayan capital). Flores and his Colorado units served Buenos Aires with fierce determination. They played a decisive role in the Battle of Pavón on 17 September 1861, in which the Confederation was defeated and all Argentina was reunited under the government in Buenos Aires.
In fulfillment of his commitment, Mitre arranged for the Colorado militia, Argentine volunteer units and supplies to be carried aboard Argentine vessels to Uruguay in May and June 1863. Ships of the Argentine navy kept Uruguayan gunships away from the operation. Back on his native soil, Flores called for the ouster of the constitutional government, by that time headed by Bernardo Berro. Flores accused the Montevideo government of Blanco sympathies and framed his "Liberating Crusade" (as he called his rebellion) in the familiar terms of a Colorado vs. Blanco struggle. Colorados from rural areas joined defectors from the military in responding to his call.
## International crisis
### Paraguayan–Blanco ties
Although the Colorados had defected to the Flores insurgency, the national guard continued to support the fusionist government. Blanco partisans filled its depleted ranks. They also replaced army officers who had deserted to Flores. The Blancos received aid from several Argentine Federalists who joined their cause. As in Uruguay, Argentina had long been a battleground of rival parties, and Bartolomé Mitre's victory at Pavón in 1861 had signaled the triumph of his Unitarian Party over the Federal Party led by Justo José de Urquiza. Mitre denied any involvement in the Flores rebellion, even though his complicity was widely known and taken for granted.
Relations between Argentina and Uruguay worsened, and both nations came close to declaring war on each other, although neither could afford a direct military conflict. Argentina had only recently emerged from a long civil war, and was still struggling to suppress a Federalist rebellion in its western province of La Rioja. Uruguay was too weak militarily to engage in a fight unaided.
Since 1862, the Blancos had made repeated overtures to Paraguay, governed by dictator Carlos Antonio López, in an attempt to forge an alliance that might advance both their interests in the Platine region. Upon the death of López, his son, Francisco Solano López, succeeded him as Paraguayan dictator. Unlike the elder López, who strove to avoid encumbering alliances, Solano greeted the Blancos' proposal with enthusiasm. He believed Argentina was working towards the annexation of both Uruguay and Paraguay, with the goal of recreating the Viceroyalty of the Río de la Plata, the former Spanish colony that once encompassed the territories of all three nations. Solano López had, as far back as 1855, expressed this concern, commenting to the Uruguayan Andrés Lamas that "the idea of reconstructing [the old viceroyalty] is in the soul of the Argentines; and as a result, it isn't just Paraguay that needs to stand guard: your country, the Oriental Republic [of Uruguay], needs to get along with my own in order to prepare for any eventualities." In late 1863, Solano López was mobilizing his army and was in talks with Urquiza, the leader of the dissident Argentine Federalists, to convince him to join the proposed Paraguayan–Uruguayan alliance.
### Brazil and the civil war
The developments in Uruguay were closely watched by the Empire of Brazil, which had vital interests in the Río de la Plata Basin. After Rosas fell in 1852, Brazil became the dominant regional power. Its foreign policy included the covert underwriting of opposition parties in Uruguay and Argentina, preventing strong governments that might threaten Brazil's strategic position in the area. Brazilian banking and commercial firms also had ventures in the area, furthering ties within the region. In Uruguay, the bank run by Irineu Evangelista de Sousa (Baron and later Viscount of Mauá) became so heavily involved in commercial enterprises that the economy depended on this source of continued capital flow.
About 18 percent (40,000) of the Uruguayan population (220,000) spoke Portuguese and regarded themselves as Brazilian rather than Uruguayan. Many within Flores' ranks were Brazilians, some hailing from the nearby Brazilian province of Rio Grande do Sul. Life along the frontier between Rio Grande do Sul and Uruguay was often chaotic, with hostilities erupting between partisans of various cattle barons, cattle-rustling and random killings. Large landowners on both sides of the border had long been antagonistic toward Berro's policies. The Uruguayan president attempted to tax the cattle coming from Rio Grande do Sul and to impose curbs on the use of Brazilian slaves within Uruguayan territory; slavery had been outlawed years before in Uruguay.
Among the Brazilian land barons were David Canabarro and Antônio de Sousa Neto, both allies of Flores and former separatist rebels during the Ragamuffin War that had ravaged Rio Grande do Sul from 1835 until 1845. Canabarro, a frontier military commander, misled Brazil's government by denying that Brazilians were crossing the border to join Flores. Sousa Neto went to the Brazilian capital to request immediate government intervention in Uruguay, claiming that Brazilians were being murdered and their ranches robbed. The "fact that Uruguayan citizens had just as valid claims against Brazil as Brazilians had against Uruguay was ignored", said historian Philip Raine. Although Sousa Neto had ties with the governing political party, his claims, including that he could amass a force of 40,000 to invade Uruguay, were not taken seriously by all. The Uruguayan crisis arrived at a difficult moment for Brazil, which was on the verge of a full-blown war with the British Empire for unrelated reasons. Brazil's government decided to intervene in Uruguay, fearful of showing any weakness in the face of an impending conflict with Britain, and believing that it would be better for the central government to take the lead rather than allow the Brazilian ranchers on the frontier to decide the course of events.
## Early engagements
### Brazilian ultimatum
On 1 March 1864, Berro's term of office ended. The ongoing civil war prevented elections; therefore Atanasio Aguirre, president of the Uruguayan senate and a member of the Amapolas (the radical wing of the Blanco Party) replaced Berro, on an interim basis. In April, José Antônio Saraiva was appointed minister plenipotentiary by the Brazilian government and charged with quickly reaching an accord that would settle Brazil's claims and ensure the safety of Brazilian citizens. His focus soon shifted from satisfying Brazil's terms to a more immediate goal of hammering out a deal between the antagonists in the civil war, with the expectation that only a more stable regime would be able to reach a settlement with Brazil.
The government in Montevideo was at first reluctant to consider Saraiva's proposals. With backing from Paraguay, it saw little advantage in negotiating a close to the civil war or in seeking to comply with Brazil's demands. The main factor, as historian Jeffrey D. Needell summarized, was that the "Uruguayan president had been unwilling to resolve these, particularly because the Brazilians whose grievances were at issue were allies of Venancio Flores, a client of the Argentines, and a man who was seeking his overthrow." A mutual enmity between Brazil and its Hispanic-American neighbors compounded the difficulties, the result of a long-standing distrust and rivalry between Spain and Portugal that had been carried over to their former American colonies. Brazil and Uruguay exhibited loathing for one another; as Robert Bontine Cunninghame Graham put it: "the Brazilians holding the Uruguayans as bloodthirsty savages, and the Uruguayans returning their contempt for the unwarlike ways of the Brazilians, whom they called monkeys, and looked down upon, for their mixed blood."
Eventually, in July 1864, Saraiva's persistent diplomacy moved the Uruguayan government to agree to mediated talks including Edward Thornton (the British resident minister in Buenos Aires), Argentine foreign minister Rufino de Elizalde and Saraiva himself. Initially, the negotiations seemed promising, but soon bogged down. On 4 August, convinced that the government in Montevideo was unwilling to work toward a settlement, a frustrated Saraiva delivered an ultimatum, which the Uruguayans rebuffed. On 10 August, Saraiva informed Aguirre that the Brazilian military commanders would receive orders to begin retaliation, marking the beginning of the war.
### Alliance with rebel Colorados
Under the orders of Vice-Admiral Joaquim Marques Lisboa (Baron of Tamandaré), a Brazilian fleet was stationed in Uruguayan territorial waters. The naval force comprised twelve steamships: one frigate, six corvettes and five gunboats. On 11 August 1864, Tamandaré, as the commander-in-chief of Brazilian naval and land forces in the war, received orders from Saraiva to begin retaliatory operations. Brazilian warships were deployed to the Uruguayan towns of Salto, Paysandú and Maldonado, ostensibly to "protect Brazilian subjects", while Uruguay's only warships, the small steamers Villa del Salto and General Artigas, were to be neutralized. When Tamandaré demanded these steamships remain at their docks, only the crew of General Artigas complied.
Tamandaré created a naval command assigned to Captain of Sea and War Francisco Pereira Pinto (later Baron of Ivinhema). Consisting of two corvettes and one gunboat, the division was sent to patrol the Uruguay River, a tributary of the Río de la Plata and part of the Platine region. On 24 August, Pereira Pinto sighted the Villa del Salto, which was conveying troops to fight the Colorados. The Villa del Salto ignored warning shots and a demand to surrender; after a desperate run from the Brazilian warships, it escaped to Argentine waters. This first skirmish of the war prompted the Uruguayan government to sever all diplomatic ties with Brazil on 30 August. On 7 September, Pereira Pinto again encountered the Villa del Salto sailing from Salto to Paysandú. The two Brazilian corvettes attacked the Uruguayan ship as it again tried to escape to Argentina. The battle ended when the Villa del Salto ran aground near Paysandú, where its crew set it on fire to prevent it falling into Brazilian hands. Meanwhile, the General Artigas had been sold to prevent its capture by the Brazilians.
To Flores, Brazil's military operations against the Blanco government represented a priceless opportunity, since he had been unable to achieve any lasting results during the rebellion. He entered talks with Saraiva and won the Brazilian government over by promising to settle the Brazilian claims that the Blanco government had refused. The Brazilian plenipotentiary minister gave instructions to Tamandaré to form a joint offensive with the Colorado leader and overthrow the Blancos. On 20 October, after a swift exchange of letters, Flores and the Brazilian vice-admiral formed a secret alliance.
## Colorado–Brazil joint offensive
### Sieges of Uruguayan towns
The Brazilian naval fleet in Uruguay was supposed to work in conjunction with a Brazilian land force. But months passed, and the "Army of the South" (called the "Division of Observation" until the ultimatum) stationed in Piraí Grande (in Rio Grande do Sul) was still not ready to cross into Uruguayan territory. Its main objectives were to occupy the Uruguayan towns of Paysandú, Salto and Melo; once taken, they were to be handed over to Flores and his Colorados.
On 12 October, a brigade led by Brigadier José Luís Mena Barreto detached from the main army. Two days later, near the Brazilian town of Jaguarão, the force invaded Uruguay's Cerro Largo Department. After skirmishes failed to halt their march, the Blancos abandoned Melo, and the brigade entered the departmental capital unopposed on 16 October. After handing over control of Melo to the Uruguayan Colorados, the Brazilians withdrew on 24 October to rejoin their Army of the South. The next Brazilian target was Salto. Pereira Pinto sent two gunboats under First Lieutenant Joaquim José Pinto to blockade the town. On 24 November, Flores arrived with his troops and began the siege. Colonel José Palomeque, commander of the Uruguayan garrison, surrendered almost without firing a shot on the afternoon of 28 November. Flores' army captured and incorporated four artillery pieces and 250 men; 300 Colorados and 150 Brazilians were left behind to occupy Salto.
Paysandú, the last Brazilian target, was already under blockade by Pereira Pinto. Tamandaré, who had been in Buenos Aires until this point, took charge of the blockade on 3 December. It was enforced by one corvette and four gunboats. Paysandú was garrisoned by 1,274 men and 15 cannons, under the command of Colonel Leandro Gómez. Flores, who had come from Salto, headed a force of 3,000 men, mostly cavalry. He invested Paysandú, deploying 800 infantrymen, 7 cannons (3 of which were rifled), and detachments of an additional 660 Brazilians. Gómez declined the offer to surrender. From 6 December until 8 December, the Brazilians and Colorados made attempts to storm the town, advancing through the streets, but were unable to take it. Tamandaré and Flores opted to wait for the arrival of the Army of the South. Meanwhile, Aguirre had sent General Juan Sáa with 3,000 men and four cannons to relieve the besieged town, forcing the Brazilians and Colorados to briefly lift the siege while dealing with this new threat. Sáa abandoned his advance before encountering the enemy force, and fled north of the Río Negro.
### Army of the South in Paysandú
Rather than the show of force that had been intended by the Brazilian government, the war revealed the empire's lack of military readiness. The Army of the South, stationed in Piraí Grande, was commanded by Field Marshal João Propício Mena Barreto (later Baron of São Gabriel) with two divisions. The 1st Division, under Brigadier Manuel Luís Osório (later Marquis of Erval), was formed by regular army units. The 2nd Division, under Brigadier José Luís Mena Barreto (who had since returned from his attack on Melo), was composed entirely of national guardsmen. Altogether, it numbered only 5,711 men—all (except some officers) native to Rio Grande do Sul. The army was poorly equipped for siege operations: it brought along no engineers (who could direct the construction of trenches); it was under-equipped, lacking even hatchets (necessary to cut fences, break through doors and scale walls); and its 12 cannons (a mix of La Hitte and Paixhans) were of small calibers ill-suited to attacking fortifications.
On 1 December, almost four months after Saraiva presented the ultimatum, the Army of the South invaded Uruguay. Its troops were accompanied by a semi-independent militia unit, consisting of no more than 1,300 Brazilian gaucho cavalrymen, under the former Ragamuffin Antônio de Sousa Neto. The 7,011-strong force (with 200 supply carts) marched through Uruguayan territory unopposed, heading toward Paysandú in the southwest. The disorganized and undisciplined bands of gauchos, who formed the armies of both Blancos and Colorados, were no match for the Brazilian troops. The Uruguayan gauchos "had combat experience but no training and were poorly armed save for the usual muskets, boleadoras, and facón knives", remarked historian Thomas L. Whigham. "Fire arms he [the Uruguayan gaucho] rarely possessed", said Cunninghame Graham, "or if by chance he owned a pair of long brass-mounted pistols or a flintlock blunderbuss, they were in general out of order and unserviceable. Upon the other hand, a little training made him a formidable adversary with the sabre and the lance."
Field Marshal Barreto reached Paysandú on 29 December with two infantry brigades and one artillery regiment under Lieutenant Colonel Émile Louis Mallet (later Baron of Itapevi). The Army of the South's cavalry established its camp a few kilometers away. Meanwhile, Gómez beheaded forty Colorados and fifteen Brazilian prisoners and "hung their still-dripping heads above his trenches in full view of their compatriots". On 31 December, the Brazilians and Colorados recommenced their attack and overran the city's defenses, after a bitter struggle, on 2 January 1865. The Brazilians captured Gómez and handed him over to the Colorados. Colonel Gregorio "Goyo" Suárez shot Gómez and three of his officers. According to Whigham, "Suárez's actions were not really unexpected, as several members of his immediate family had fallen victim to Gómez's wrath against the Colorados."
## Blanco capitulation
### Further operations
On 12 November 1864, before the siege of Paysandú, the Paraguayan dictator Solano López seized the Brazilian steamer Marquês de Olinda, beginning the Paraguayan War. While the Army of the South crossed Uruguay heading toward Paysandú, Brazil's government sent José Maria da Silva Paranhos (later Viscount of Rio Branco) to replace Saraiva. He arrived in the Argentine capital of Buenos Aires on 2 December and a few days later sought a formal alliance with Mitre against the Blancos. The Argentine president refused, insisting that neither he nor his government had any role in Flores' rebellion, and that Argentina would remain neutral. On 26 December, the Paraguayans invaded the Brazilian province of Mato Grosso, laying waste to towns and the countryside.
As the situation deteriorated, the Brazilian government mobilized army units from other regions of the Empire. On 1 January 1865, one brigade (composed of two infantry battalions and one artillery battalion) with 1,700 men from the Brazilian province of Rio de Janeiro disembarked and occupied the Uruguayan town of Fray Bentos. Paranhos, along with Tamandaré, met Flores in Fray Bentos and decided to launch a combined attack against Montevideo. It was apparent that the Paraguayans would take too long to reach Uruguay and no help would come from Urquiza and his Argentine Federalists. Increasingly isolated, Aguirre hoped that the foreign powers could intervene, but when, on 11 January, he asked the diplomatic corps in Montevideo whether they would provide military assistance to him and his government, none responded positively. João Propício Mena Barreto sailed from Fray Bentos on 14 January with the Brazilian infantry, bound for a landing near the mouth of the Santa Lucía River near Montevideo. On the way, he occupied the Uruguayan town of Colonia del Sacramento, garrisoning it with 50 soldiers.
The cavalry and artillery were placed under Osório and went overland. They met João Propício Mena Barreto and the infantry at their landing place. From there, the reunited Army of the South marched on Montevideo. On 31 January, Brazil and the Colorados besieged the Uruguayan capital. In the meantime, on 19 January, Paranhos attempted to clarify the nature of the Brazilian operations against the Blancos. He issued notes to the foreign diplomatic corps in Buenos Aires declaring that a state of war existed between Brazil and Uruguay. Until then, there had been no formal declaration of war, and the Empire's military operations in Uruguay since August 1864 had been mere "reprisals"—the vague term used by Brazilian diplomacy since the ultimatum.
### Armistice
In an attempt to divert the attention of Brazil from the siege of the capital, the Blanco government ordered the "Vanguard Army of the Republic of Uruguay", composed of 1,500 men under General Basilio Muñoz, to invade Brazilian soil. On 27 January 1865, Muñoz crossed the border and exchanged fire with 500 cavalrymen from Brazil's National Guard units. The Brazilians retreated to the town of Jaguarão, where they were joined by 90 infantrymen also from the National Guard, and hurriedly constructed trenches. There were also two small steamers and one other large vessel, each equipped with one artillery piece, to protect Jaguarão. The Blanco army attacked the town in the Battle of Jaguarão, but were repelled. Muñoz established a brief siege and asked Colonel Manuel Pereira Vargas (the commander of the Brazilian garrison) to surrender, but to no effect. In the early hours of 28 January, Muñoz retreated with his men toward Uruguay, ransacking property and taking all the slaves they could find.
On 2 February, Tamandaré declared to foreign diplomats that Montevideo was under siege and blockade. The Uruguayan capital was defended by between 3,500 and 4,000 armed men with little to no combat experience and 40 artillery pieces of various calibers. On 16 February, the Army of the South was further reinforced by 1,228 men from the 8th Battalion of Caçadores (Sharpshooters) arriving from the Brazilian province of Bahia, raising its numbers to 8,116. Sousa Neto and his gauchos had detached from the main force weeks before to pursue Muñoz and his army. British and French nationals were evacuated to Buenos Aires. The "general exodus of foreigners that followed caused those who remained in Montevideo to feel terror for the first time. All agreed that a full-scale assault against the city could not be postponed." However, neither Paranhos nor his government were willing to risk the destruction of Montevideo and face the inevitable outcry from other nations that would follow it.
On 15 February, Aguirre's term of office expired. Against the wishes of the Amapolas, the moderate Tomás Villalba was elected by the Senate to replace Aguirre. French, Italian and Spanish troops landed in Montevideo at Villalba's request to dissuade the radical Blancos from attempting a coup to retake power. Villalba entered into talks with Flores and Paranhos. With the Italian resident minister Raffaele Ulisse Barbolani serving as intermediary, an agreement was reached. Flores and Manuel Herrera y Obes (representing Villalba's government) signed a peace accord on 20 February at the Villa de la Unión. A general amnesty was granted to both Blancos and Colorados, and Villalba handed over the presidency to Flores on an interim basis until elections could be held.
## Aftermath
In early March, Flores assembled a cabinet composed entirely of Colorados, among them a brother of the Blanco Leandro Gómez. The new Uruguayan president purged government departments of employees with Fusionist or Blanco associations. All Blanco officers and enlisted men were eliminated from the army and replaced by those Colorado and Brazilian loyalists who had remained with Flores throughout the conflict. Public commemorations glorified the Colorados, and a monument dedicated to the "Martyrs of Quinteros" was erected. The costs of the Liberating Crusade are unknown. Flores' losses amounted to around 450 dead and wounded; there are no estimates of the number of civilians who died of famine and disease, nor is it known how much damage was sustained by the national economy. The effects of the Uruguayan War have received little attention from historians, who have been drawn to focus on the dramatic devastation suffered by Paraguay in the subsequent Paraguayan War.
News of the war's end was brought by Pereira Pinto and met with joy in Rio de Janeiro. Brazilian Emperor Dom Pedro II found himself waylaid by a crowd of thousands in the streets amid acclamations. But public opinion quickly changed for the worse, when newspapers began running stories painting the accord of 20 February as harmful to Brazilian interests, for which the cabinet was blamed. The newly raised Viscount of Tamandaré and Mena Barreto (now Baron of São Gabriel) had supported the peace accord. Tamandaré changed his mind soon afterward and played along with the allegations. Paranhos (a member of the opposition party) was used as a scapegoat by the Emperor and the government, and was recalled in disgrace to the imperial capital. Subsequent events show the accusation was unfounded. Not only had Paranhos managed to settle all Brazilian claims, but by avoiding the death of thousands, he gained a willing and grateful Uruguayan ally, not a dubious and resentful one—who provided Brazil an important base of operations during the war with Paraguay that followed.
Victory brought mixed results for Brazil and Argentina. As the Brazilian government had expected, the conflict was a short-lived and relatively easy affair that led to the installation of a friendly government in Uruguay. The official estimates included 549 battlefield casualties (109 dead, 439 wounded and 1 missing) from the navy and army and an unknown number who died from disease. Historian José Bernardino Bormann put the total at 616 (204 dead, 411 wounded and 1 missing). The war would have been deemed an outstanding success for Brazil, had it not been for its terrible consequences. Instead of demonstrating strength, Brazil revealed military weakness that an emboldened Paraguay sought to exploit. From the Argentine viewpoint, most of Bartolomé Mitre's expectations were frustrated by the war's outcome. He had succeeded in bringing to power his friend and ally, but the minimal risk and cost to Argentina he had envisioned at the outset proved to be illusory. The resulting attack by Paraguay on Brazilian and Argentine provinces sparked the long and devastating Paraguayan War.
|
12,174,750 |
Oryzomys gorgasi
| 1,107,941,703 |
Rodent from the family Cricetidae from northwestern Colombia and Venezuela
|
[
"Mammals described in 1971",
"Mammals of Colombia",
"Mammals of Venezuela",
"Oryzomys",
"Taxa named by Philip Hershkovitz",
"Taxonomy articles created by Polbot"
] |
Oryzomys gorgasi, also known as Gorgas's oryzomys or Gorgas's rice rat, is a rodent in the genus Oryzomys of family Cricetidae. First recorded in 1967, it is known from only a few localities, including a freshwater swamp in the lowlands of northwestern Colombia and a mangrove islet in northwestern Venezuela. It reportedly formerly occurred on the island of Curaçao off northwestern Venezuela; this extinct population has been described as a separate species, Oryzomys curasoae, but does not differ morphologically from mainland populations.
Oryzomys gorgasi is a medium-sized, brownish species with large, semiaquatically specialized feet. It differs from other Oryzomys species in several features of its skull. Its diet includes crustaceans, insects, and plant material. The species is listed as "Endangered" by the International Union for Conservation of Nature as a result of destruction of its habitat and competition with the introduced black rat (Rattus rattus).
## Taxonomy
Oryzomys gorgasi was first found in Antioquia Department of northwestern Colombia in 1967 during an expedition by the U.S. Army Medical Department and the Gorgas Memorial Laboratory. In 1971, Field Museum zoologist Philip Hershkovitz described a new species, Oryzomys gorgasi, on the basis of the single known specimen, an old male. He named the animal after physician William Crawford Gorgas, the namesake of the Gorgas Memorial Laboratory. Hershkovitz considered the new species most closely related to Oryzomys palustris, which at the time included North and Central American populations now divided into several species, including the marsh rice rat (O. palustris) and O. couesi. The species was not recorded again until 2001, when Venezuelan zoologist J. Sánchez H. and coworkers reported on 11 specimens collected in coastal northwestern Venezuela in 1992, 700 km (430 mi) from the Colombian locality. They confirmed that O. gorgasi is a distinct species related to the O. palustris group.
In 2001, Donald McFarlane and Adolphe Debrot described a new Oryzomys species, O. curasoae, from the Dutch island of Curaçao off northwestern Venezuela. For their description, they used subfossil material from owl pellets, including two partial skulls and several hemimandibles (halves of the lower jaw). They compared the species with Oecomys, a group of arboreal (tree-living), mainly South American rodents related to Oryzomys. O. curasoae has also been known as the "Curaçao Rice Rat" and the "Curaçao Oryzomys".
Marcelo Weksler and colleagues removed most of the species then placed in Oryzomys from the genus in 2006, retaining only the marsh rice rat and related species, including O. gorgasi. They also kept O. curasoae in the genus and suggested that it may not be distinct from O. gorgasi. In a 2009 paper, R.S. Voss and Weksler examined the two and concluded that they represented the same species on the basis of direct comparisons and a phylogenetic analysis. The resultant tree placed O. curasoae and O. gorgasi sister to each other and closer to O. couesi than to the marsh rice rat. Accordingly, they placed O. curasoae as a junior synonym of the earlier described O. gorgasi.
Oryzomys gorgasi is the southeasternmost representative of the genus Oryzomys, which extends north into the eastern United States (marsh rice rat, O. palustris). O. gorgasi is further part of the O. couesi section, which is centered on the widespread Central American O. couesi and also includes six other species with more limited and peripheral distributions. Many aspects of the systematics of the O. couesi section remain unclear and it is likely that the current classification underestimates the true diversity of the group. Oryzomys is classified in the tribe Oryzomyini, a diverse assemblage of American rodents of over a hundred species, and on higher taxonomic levels in the subfamily Sigmodontinae of family Cricetidae, along with hundreds of other species of mainly small rodents.
## Description
Oryzomys gorgasi is a medium-sized oryzomyine with small ears and large feet, and is similar to the marsh rice rat in general appearance. The long and coarse fur is brownish above and ochraceous below. At the base of the tail, the upper and lower sides differ in color and at the end is a short tuft of hairs. The scales on the tail are well-developed. As in other Oryzomys, the hindfeet exhibit specializations for life in the water. The plantar (lower) surface of the metatarsus is naked. Two of the pads are very small. Ungual tufts, tufts of hair at the bases of the claws, are poorly developed. Interdigital webbing is present, but extends along less than half of the first phalanges.
In specimens from El Caimito, total length is 220 to 290 mm (8.7 to 11.4 in), averaging 259 mm (10.2 in) (measured in 6 specimens); tail length is 116 to 138 mm (4.6 to 5.4 in), averaging 130 mm (5.1 in) (measured in 8 specimens); hindfoot length is 30 to 32 mm (1.2 to 1.3 in), averaging 31 mm (1.2 in) (measured in 10 specimens); ear length is 15 to 17 mm (0.59 to 0.67 in), averaging 16 mm (0.63 in) (measured in 7 specimens); and condylo-incisive length (a measure of total skull size) is 26.9 to 31.4 mm (1.06 to 1.24 in), averaging 29.6 mm (1.17 in) (measured in 5 specimens). In the holotype from Colombia, an old male, total length is 240 mm (9.4 in); tail length is 125 mm (4.9 in); ear length is 19 mm (0.75 in); and condylo-incisive length is 32.1 mm (1.26 in). The collector recorded the holotype's hindfoot as being 34 mm (1.3 in) long, but Sánchez and colleagues remeasured it as 33 mm (1.3 in).
The rostrum (front part of the skull) is short. The broad zygomatic plate develops a prominent notch, but not a spine, on its front end, and its back margin is in front of the first molars. The interorbital region, located between the eyes, is narrowest towards the front and is flanked by beadings along its margins. The interparietal bone is relatively long. The incisive foramina, perforations of the palate between the incisors and the molars, are narrow and long and taper towards the end. The palate itself is also long, extending beyond the molars, and includes prominent posterolateral palatal pits near the third molars, which are excavated into deep fossae. The roof of the mesopterygoid fossa, the opening behind the palate, is not perforated by sphenopalatine vacuities. O. gorgasi lacks an alisphenoid strut; in some other oryzomyines, this extension of the alisphenoid bone separates two openings in the skull, the masticatory–buccinator foramen and the foramen ovale accessorium. The squamosal bone lacks a suspensory process that contacts the tegmen tympani, the roof of the tympanic cavity, a defining character of oryzomyines. The subsquamosal fenestra, an opening at the back of the skull determined by the shape of the squamosal, is almost absent.
In the mandible (lower jaw), the upper and lower masseteric ridges come close together below the first molars, but do not fuse. The back end of the lower incisor root is in a capsular process, a raising of the mandibular bone behind the molars. The upper incisors have yellowish enamel and are opisthodont, with the cutting edge inclined backwards. The molars are relatively small and are brachydont (low-crowned) and bunodont (with the cusps higher than the connecting crests). They are similar to those of the marsh rice rat in structural details. The upper and lower first molars have small accessory roots, as in many other oryzomyines, and the second and third lower molar each have two roots only.
Oryzomys gorgasi is distinguished from other Oryzomys species by its short rostrum, the form of its incisive foramina, the absence of sphenopalatine vacuities, and the near absence of a subsquamosal fenestra. Within the species, the Colombian specimen differs from the Venezuelan animals in being larger in some measurements, but having smaller teeth, and in having oddly shaped wear facets of the incisors. The Colombian animal was probably kept in captivity for some time after it was caught, which would explain its large size and odd wear facets. There are no substantial differences between mainland O. gorgasi and material from Curaçao.
## Distribution and ecology
As far as known, Oryzomys gorgasi has a disjunct distribution in northwestern South America, including Colombia, Venezuela, and Curaçao. In a 2009 paper, Carleton and Arroyo-Cabrales speculated that its distribution may extend into Central America. The Colombian population is known from the holotype only, caught at Loma Teguerre (7°54'N, 77°W) in Antioquia Department, northwestern Colombia, near the Río Atrato, at about 1 m above sea level. The location is apparently a freshwater swamp, and Hershkovitz suggested that O. gorgasi probably occurred throughout the swamp forests in the Río Atrato basin. On Curaçao, it is known from cave faunas at Tafelberg Santa Barbara, Noordkant, Ser'i Kura, and Hermanus. At Tafelberg Santa Barbara, it was found in association with introduced black rats (Rattus rattus), indicating that the population persisted at least until the first European contact in 1499.
In Venezuela, it was found on El Caimito, a small (57 ha, 140 acres) islet just east of the outlet of Lake Maracaibo in the state of Zulia, where the only other native non-flying mammal is the opossum Marmosa robinsoni. El Caimito is separated from the mainland by a narrow, brackish channel and contains sand banks with xerophytic vegetation surrounded by marshy lagoons with Rhizophora mangle mangroves. Oryzomys gorgasi was caught in all habitats on the islet, but has not been found in other similar sites in northwestern Venezuela, where the introduced black rat is the only rodent collected. Analysis of stomach contents of El Caimito specimens indicates that the species is an omnivore, with a diet including crustaceans, insects, plant seeds, and other plant material. The crustaceans may include fiddler crabs (Uca) and a mangrove tree crab of the genus Aratus; the insects include flies (Diptera); and the plants include grass seeds. Two parasitic nematodes, Litomosoides sigmodontis (family Onchocercidae) and an undetermined species of Pterygodermatites (family Rictulariidae), are known to infect O. gorgasi. The 2009 IUCN Red List tersely indicates that the species has been found in a second Venezuelan locality.
## Conservation status
On the 2017 IUCN Red List, O. gorgasi is listed as "endangered" and O. curasoae as "data deficient". The species may be threatened by competition with introduced black rats and destruction of its habitat, but does occur in at least one protected area. Displacement by the black rat has caused the species to become locally extinct in parts of its Venezuelan range. Suitable habitats for O. gorgasi exist in inland Venezuela, and further study is needed to determine whether it is present there. The extinction of the Curaçao population may also have been caused by competition with the black rat, which has been found together with Oryzomys in subfossil deposits.
|
8,761,118 |
Free Association of German Trade Unions
| 1,104,282,640 |
Trade union federation in Imperial and early Weimar Germany
|
[
"1897 establishments in Germany",
"1919 disestablishments",
"Breakaway trade unions",
"Defunct trade unions of Germany",
"National trade union centers of Germany",
"Organisations based in Berlin",
"Organizations of the German Revolution of 1918–1919",
"Syndicalist trade unions",
"Trade unions established in 1897"
] |
The Free Association of German Trade Unions (; abbreviated FVdG; sometimes also translated as Free Association of German Unions or Free Alliance of German Trade Unions) was a trade union federation in Imperial and early Weimar Germany. It was founded in 1897 in Halle under the name Representatives' Centralization of Germany as the national umbrella organization of the localist current of the German labor movement. The localists rejected the centralization of the labor movement that followed the lapse of the Anti-Socialist Laws in 1890 and preferred grassroots democratic structures. The lack of a strike code soon led to conflict within the organization. Various ways of providing financial support for strikes were tested before a system of voluntary solidarity was agreed upon in 1903, the same year that the name Free Association of German Trade Unions was adopted.
During the years following its formation, the FVdG began to adopt increasingly radical positions. During the German socialist movement's debate over the use of mass strikes, the FVdG advanced the view that the general strike must be a weapon in the hands of the working class. The federation believed the mass strike was the last step before a socialist revolution and became increasingly critical of parliamentary action. Disputes with the mainstream labor movement finally led to the expulsion of FVdG members from the Social Democratic Party of Germany (SPD) in 1908 and the complete severing of relations between the two organizations. Anarchist and especially syndicalist positions became increasingly popular within the FVdG. During World War I, the FVdG rejected the SPD's and mainstream labor movement's cooperation with the German state—known as the Burgfrieden—but was unable to organize significant resistance to the war or to continue its regular activities. Immediately after the November Revolution, the FVdG very quickly became a mass organization. It was particularly attractive to miners from the Ruhr area opposed to the mainstream unions' reformist policies. In December 1919, the federation merged with several minor left communist unions to become the Free Workers' Union of Germany (FAUD).
## Background
According to Angela Vogel and Hartmut Rübner, Carl Hillmann, a typesetter and prominent trade unionist in the 1870s, was the "intellectual father" of the localist and anarcho-syndicalist movement. Vogel and Rübner's claim is based on the fact that Hillmann was the first in Germany to consider unions' primary role to be the creation of the conditions for a socialist revolution, not simply to improve workers' living conditions. He also advocated a de-centralized trade union federation structure. Many of the later anarcho-syndicalists, including Rudolf Rocker, agreed with this notion. Hans Manfred Bock, on the other hand, sees no evidence for Hillmann's influence on the FVdG.
From 1878 to 1890, the Anti-Socialist Laws forbade all socialist trade unions. Only small local organizations, which communicated via intermediaries such as stewards, who worked illegally or semi-legally, survived. This form of organization was easier to protect against state repression. After the laws lapsed in 1890, the General Commission of the Trade Unions of Germany was founded on November 17 at a conference in Berlin to centralize the socialist labor movement. In 1892, the Trade Union Congress of Halberstadt was held to organize the many local unions under the commission. The localists, 31,000 of whom were represented at the congress, wanted to retain many of the changes that had been adopted during the repressive period. For example, they opposed separate organizations for political and economic matters, such as the party and the trade union. They especially wanted to keep their grassroots democratic structures. They also advocated local trade unions being networked by delegates rather than ruled centrally, and were wary of bureaucratic structures. The localists' proposals were rejected at the Halberstadt congress, so they refused to join the centralized trade unions, which became known as the Free Trade Unions. They did not renounce social democracy, but rather considered themselves to be an avant-garde within the social democratic movement in Germany.
The localists' main stronghold was in Berlin, although localist unions existed in the rest of the Empire as well. Masons, carpenters, and some metal-working professions—especially those requiring a higher degree of qualification like coppersmiths or gold and silver workers—were represented in large numbers. By 1891, there were at least 20,000 metal workers in localist trade unions, just as many as in the centralized German Metal Workers' Union (DMV).
## Founding
At a congress in 1897 in Halle, the localists founded a national organization of their own, the Representatives' Centralization of Germany (Vertrauensmänner-Zentralisation Deutschlands). The congress was originally supposed to take place a year earlier, but a lack of interest forced it to be postponed. There were 37 delegates at the congress representing 6,803 union members. Nearly two-thirds of the delegates came from Berlin or Halle. Almost half the delegates worked in the construction industry, while 14 delegates came from highly specialized professions. The congress decided to establish a five-person Business Commission seated in Berlin to organize political actions, aid in communication between local organizations, and raise financial support for strikes. Fritz Kater became the chairman of the commission. A newspaper, Solidarität (Solidarity), was founded, but the name was changed to Die Einigkeit (Unity) the following year. It initially appeared fortnightly, but was published on a weekly basis beginning in 1898.
The decision to found a national organization was likely the result of several factors. First, the mainstream trade unions were increasingly reformist and centralized. Second, the localists gained confidence from their involvement in the dock workers' strike in Hamburg in late 1896 and early 1897. Third, loss of membership (for example, the Berlin metal workers rejoined the DMV in 1897) convinced the localists of the need for action.
The Representatives' Centralization's relationship to the SPD was ambivalent. The organization was allied with the SPD and supported the Erfurt Program. At the same time, the party mostly opposed the founding of the Representatives' Centralization and called upon its members to rejoin the centralized trade unions. The FVdG remained affiliated with the SPD, which in turn tolerated it because the SPD was afraid a split would lead to a large loss of members. The FVdG stated it would rejoin the centralized trade unions like the SPD leadership desired only if the centralized unions accepted the FVdG's organizational principles.
## Early years
The early years of the Representatives' Centralization of Germany were dominated by a discussion on how to finance strikes by individual local trade unions. The issue was how local unions could retain their autonomy when receiving financial assistance. Originally, all support between local organizations had been voluntary. But this system became more and more impractical, especially after the turn of the 20th century saw numerous large strikes in which employers reacted more aggressively — often by locking out workers. In 1899, the Business Committee felt it had to support a strike in Braunschweig. It took out a loan, which was paid off with dues income and donations from Berlin unions. The following year, the Business Committee incurred 8,000 Marks in debt by supporting strikes. Part of the debt was paid off by the SPD, while the rest was apportioned among the local unions.
This practice was replaced in 1900 by a far more complex system of assessments and donations designed to raise the money to support strikes. This system was replaced in 1901 because it was impractical. The 1901 system required every local union and the central committee to create strike funds. Local unions would receive support for strikes from Berlin under certain circumstances, and the central Business Committee's fund would be replenished by all member organizations in amounts proportional to their membership and the average wage of their members. This system, too, proved problematic because it penalized the larger, wealthier unions — especially the construction workers in Berlin who had higher wages but also higher costs of living. From 1901 to 1903, many small organizations joined the federation, yet the FVdG's membership fell as the punitive strike support system drove some larger unions out. In 1903, the federation not only changed its name to the Free Association of German Trade Unions but also decided to return to the old system of voluntary contributions. This system remained in place until 1914. The Business Committee worked to ensure that unions contributed as much as they could. Often the committee resorted to threatening unions with expulsion in order to raise funds for a strike. Fritz Kater called this a dictatorship necessary for the movement, but local organizations still had far more autonomy than their counterparts in other German labor federations.
## Radicalization and expulsion from the SPD
During the first decade of the 20th century, the FVdG was transformed from a localist union federation into a syndicalist labor organization with anarchist tendencies. The process was initiated by the death of Gustav Keßler, the most important ideologue in the FVdG, in 1903. His role was largely assumed by the physician Raphael Friedeberg.
In 1903, a dispute between the FVdG and the Free Trade Unions in Berlin led the party commission to intervene and to sponsor talks aimed at re-unification of the two wings of the German labor movement. At the meeting, the FVdG made a number of compromises, which led to member protests. Soon, over one-third of the members left the union. The 1903 FVdG congress elected a panel to continue negotiations with the Free Trade Unions. This panel demanded that the Free Trade Unions adopt localist organizational principles as a prerequisite for re-unification. The FVdG panel realized this demand was unrealistic, but hoped the expulsion of revisionists from the SPD during the debate on Eduard Bernstein's theses would strengthen their position. The impossibility of a reconciliation became obvious by March 1904, since what both the SPD leadership and the Free Trade Unions envisioned was less a re-unification than an absorption of the FVdG into the Free Trade Unions.
The FVdG's disillusionment with the social democratic movement deepened during the mass strike debate. The role of the general strike for the socialist movement was first discussed within the FVdG in 1901. At the SPD's 1903 congress in Dresden, Raphael Friedeberg proposed discussing the topic, but his proposal was rejected by the congress. The following year, a proposal by Karl Liebknecht and Eduard Bernstein to initiate debate on the topic was accepted, since they had distanced themselves from Friedeberg's positions.
Liebknecht and Bernstein, like the left wing of the party, felt the general strike should not be used to provoke the state but rather to defend political rights (especially the right to vote) should the state seek to abolish them. The more conservative faction in the party was opposed to this concept. In 1904, Friedeberg, speaking for the FVdG, advanced the view that the general strike must be a weapon in the hands of the proletariat and would be the last step before the socialist revolution. In 1905, his speech on the topic was even more radical. He claimed that historical materialism, a pillar of Marxism, was to blame for social democracy's alleged powerlessness, and introduced the alternative concept of historical psychism—which held that human psychology was more significant for social development than material conditions. He also recommended anarchist literature, especially Kropotkin's writings, rather than Marx's works, which were most influential in the SPD.
The position that the general strike could be used, but only as a last resort, became dominant in the party during the mass strike debate. This caused much concern among the conservatives in the party, especially among many trade unionists. At a meeting in February 1906, the trade unionists were placated by party leaders, who said they would attempt to prevent a general strike at all costs. The FVdG reacted by publishing the secret protocols from the meeting in Die Einigkeit, greatly angering the party leadership.
At the 1905 party convention, August Bebel, who had always favored a stronger role for the SPD-affiliated unions, proposed a resolution requiring all members of the party to join the centralized trade unions for their respective professions. This would have forced all FVdG members to either leave the party or the trade union. The resolution was adopted, and implemented in 1907. An FVdG survey returned a vote of twenty-two to eight opposing rejoining the centralized unions. This led some of the masons, carpenters, and construction workers in the union to leave the FVdG in 1907 to avoid being expelled from the SPD, saying the organization was "taking a path, which would certainly lead to strife with the SPD and to syndicalism and anarchism." In 1908, the SPD's Nuremberg congress finally voted to make SPD and FVdG membership incompatible.
In addition to causing about two-thirds of its members to quit between 1906 and 1910, the radicalization of the FVdG also correlated with a slight change in the milieu, industries, and regions from which the organization drew its members. Many metal and construction workers, who had a localist tradition, left as a result of the syndicalist and anarchist tendencies in the FVdG. Miners, who worked mostly in the Ruhr area, did not have this tradition but developed a certain skepticism of bureaucratic structures. About 450 of them joined the FVdG before World War I, a sign of what was to come after the war.
## Pre-war period
Following the split from the SPD, the FVdG was increasingly influenced by French syndicalism and anarchism. In 1908, Kater called the Charter of Amiens, the platform of the French General Confederation of Labor (CGT), the earliest and largest syndicalist union worldwide, "a new revelation". Although there was no contact between German "intellectual anarchists" (like Gustav Landauer and Erich Mühsam) and the FVdG, it did have influential anarchist members, most notably Andreas Kleinlein and Fritz Köster. Kleinlein and Köster increasingly influenced the federation from 1908 on, and this led to the founding of Der Pionier in 1911. This newspaper, which was edited by Köster, had a much more aggressive tone than Die Einigkeit. Despite these developments, the influence of the anarchists in the pre-World War I FVdG remained quantitatively minute, especially as leading members like Kater were at the time very skeptical of the anarchist ideology.
After both the British Industrial Syndicalist Education League (ISEL), a short-lived syndicalist organization heavily involved in the strike wave in Britain from 1910, and the Dutch syndicalist union National Labor Secretariat (NAS) published proposals for an international syndicalist congress in 1913, the FVdG was the first to express support. There were difficulties in organizing the congress, and the largest syndicalist union worldwide — the CGT — refused to participate because it was already affiliated with the social democratic International Federation of Trade Unions. Despite these challenges, the First International Syndicalist Congress took place at Holborn Town Hall in London from September 27 to October 2. British, Swedish, Danish, Dutch, Belgian, French, Spanish, Italian, Cuban, Brazilian, and Argentine organizations—both labor unions and political groups — had delegates in London in addition to the FVdG, which was represented by Karl Roche, Carl Windhoff, and Fritz Kater. There were also links with Norwegian, Polish, and American groups. Kater was elected co-president of the congress alongside Jack Wills. After Wills was forced to resign, Kater served as co-president with Jack Tanner. The congress had difficulty agreeing on many issues, the main source of conflict being whether further schisms in the European labor movement (as had occurred in Germany and the Netherlands) should be risked. The FVdG generally agreed with their Dutch comrades in calling on other unions to decide between syndicalism and socialism, while their Italian, French, and Spanish counterparts, most notably Alceste De Ambris of the Italian USI, were more intent on preventing further division. Accordingly, the congress was divided on the question of whether its purpose was to simply pave the way for deeper relations between the syndicalist unions or whether a Syndicalist International was to be founded. 
The opponents of a new organization prevailed, but the congress agreed to establish an Information Bureau. The Information Bureau was based in Amsterdam and published the Bulletin international du mouvement syndicaliste. The congress was considered a success by most who attended, with the notable exception of De Ambris. A second congress was scheduled to take place in two years' time in Amsterdam. Due to the outbreak of World War I, that congress never took place. The Bulletin published only eighteen issues before the war forced it to cease publication.
## World War I
During the buildup to World War I, the FVdG denounced the SPD's anti-war rhetoric as "complete humbug". With the start of war, the SPD and the mainstream labor movement entered into the Burgfrieden (or civil truce) with the German state. Under this agreement, the unions' structures remained intact and the government did not cut wages during the war. For their part, the unions did not support new strikes, ended current ones, and mobilized support for the war effort. The 1916 Auxiliary War Service Law established further cooperation between employers, unions, and the state by creating workers' committees in the factories and joint management-union arbitration courts.
The FVdG, on the other hand, was the only labor organization in the country which refused to participate in the Burgfrieden. The union held that war-time patriotism was incompatible with proletarian internationalism and that war could only bring greater exploitation of labor. (Indeed, the average real wage fell by 55 percent during the war.) While the mainstream labor movement was quick to agree with the state that Russia and the United Kingdom were to blame for igniting the war, the FVdG held that the cause for the war was imperialism and that no blame could be assigned until after the conflict ended. The federation strongly criticized hostility towards foreigners working in Germany, especially Poles and Italians. It also rejected the concepts of the "nation" and national identity invoked in support of the war, claiming that common language, origin and culture (the foundations of a nation) did not exist in Germany. The FVdG's newspapers also declared that the war refuted historical materialism, since the masses had gone to war against their own material interests.
After Fritz Kater and Max Winkler reaffirmed syndicalist antimilitarism in the August 5, 1914 Der Pionier edition, the newspaper was banned. Three days later, Die Einigkeit criticized the SPD's stance on the war. It was then suppressed as well. The FVdG promptly responded by founding the weekly Mitteilungsblatt. After it was banned in June 1915, the federation founded the bi-weekly Rundschreiben, which survived until May 1917. Social Democratic publications, on the other hand, were allowed by Prussian War Minister Erich von Falkenhayn to be distributed even in the army. In the first days of the war, about 30 FVdG activists in Cologne, Elberfeld, Düsseldorf, Krefeld and other cities were arrested—some remaining under house arrest for two years. Government repression of the FVdG was heavy. While bans were often placed on the union's regular meetings, authorities in Düsseldorf even banned meetings of the syndicalist choir. Another problem for the union was that many of its members were conscripted. Half of the Berlin construction workers, the federation's largest union, were forced to serve in the army. In some places, all FVdG members were called into service.
Although the FVdG insisted that the "goal is everything and ... must be everything" (a play on Bernstein's formula that "the final goal, whatever it may be, is nothing to me: the movement is everything"), it was unable to do much more than keep its own structures alive during World War I. Immediately after the declaration of war, the FVdG tried to continue its antiwar demonstrations, to no avail. Although it constantly criticized the Burgfrieden and militarism in general, industrial action was not possible except for a few minor cases (most notably resistance by the carpenters' union to Sunday work). The FVdG also received support from abroad. The faction in the Italian USI led by Armando Borghi, an antimilitarist minority in the French CGT, the Dutch NAS, as well as Spanish, Swedish, and Danish syndicalists were all united with the FVdG in their opposition to the war.
As the Great War progressed, war exhaustion in Germany grew. The first strikes in the country since the start of the war broke out in 1915, steadily increasing in frequency and magnitude. The unions' role as troubleshooter between the employers and the workers soon led to conflict between the membership and union officials, and the Free Trade Unions steadily lost members. Correspondingly, the Reichstag faction of the SPD split over continued support for the war. The 1917 February Revolution in Russia was seen by the FVdG as an expression of the people's desire for peace. The syndicalists paid special attention to the role the general strike (which they had been advocating for years) played in the revolution. They were unable to comment on the October Revolution as the Rundschreiben had been banned by the time it broke out.
## November Revolution and re-founding as FAUD
Some claim that the FVdG influenced strikes in the arms industry as early as February or March 1918, but the organization was not re-established on a national level until December 1918. On December 14, Fritz Kater started publishing Der Syndikalist (The Syndicalist) in Berlin as a replacement for Die Einigkeit. On December 26 and 27, a conference organized by Kater and attended by 33 delegates from 43 local unions took place in Berlin. The delegates reflected upon the difficult times during the war and proudly noted that the FVdG was the only trade union which did not have to adjust its program to the new political conditions because it had remained loyal to its anti-state and internationalist principles. The delegates reaffirmed their rejection of parliamentarianism and refused to participate in the National Assembly.
In Spring 1919, Karl Roche wrote a new platform for the FVdG entitled "Was wollen die Syndikalisten? Programm, Ziele und Wege der 'Freien Vereinigung deutscher Gewerkschaften'" ("What Do the Syndicalists Want? The Program, Goals, and Means of the 'Free Association of German Trade Unions'"). In addition to reiterating pre-war ideas and slogans, it went further by criticizing participation in electoral democracy, claiming that this handicapped and confused proletarian class struggle. The platform also called for the establishment of the dictatorship of the proletariat, a position which was designed to reach out to the newly formed Communist Party (KPD) and International Communists of Germany. In late 1918 and early 1919, the FVdG became an important player in the strike movement in the Ruhr region (which mostly involved miners). Its organizers, most notably Carl Windhoff, became regular speakers at workers' demonstrations. On April 1, a general strike supported by the FVdG, the KPD and the Independent Social Democratic Party (USPD) began. The strike eventually involved up to 75 percent of the region's miners until it was violently suppressed in late April by the SPD-led government. After the strike and the ensuing collapse of the General Miners' Union, the FVdG expanded its unions rapidly and independently of the aforementioned political parties, especially in the Ruhr region. This led to a massive expansion in FVdG membership. The FVdG's criticism of the bureaucratic centralized trade unions, its advocacy of direct action, and its low membership dues were received well by the workers in the Ruhr region. By August 1919, the federation had around 60,000 members throughout Germany. However, its Ruhr miners' unions abandoned the craft-union scheme by which the FVdG had traditionally been organized, preferring simpler industrial structures.
The end of cooperation between the FVdG and the political parties in the Ruhr region was part of a nationwide trend after Paul Levi, an anti-syndicalist, became chairman of the KPD in March. Moreover, Rudolf Rocker, a communist anarchist and follower of Kropotkin, joined the FVdG in March 1919. He had returned via the Netherlands in November 1918 after living in exile in London, where he had been active in the Jewish anarchist scene. Augustin Souchy, more of a Landauer-esque anarchist, also joined the federation in 1919. Both rapidly gained influence in the organization and—as anti-Marxists—were opposed to close collaboration with communists.
Nevertheless, the FVdG's Rhineland and Westphalia section merged with left communist unions to form the Free Workers' Union (FAU) in September 1919. Syndicalists from the FVdG were the biggest and most dominant faction in the FAU. The FAU's statutes mostly reflected compromises by the federation's member unions, but also reflected the FVdG's significant influence.
Soon it was decided to complete the merger in Rhineland and Westphalia on a national level. The FVdG's 12th congress, held December 27 to 30, became the Free Workers' Union of Germany's (FAUD) founding congress. Most left communists (including the influential veteran member Karl Roche) had already quit or were in the process of leaving the FAU in Rhineland and Westphalia by this point. The majority of them would join the General Workers' Union of Germany (AAUD), which was founded in February 1920. Without the left communists to oppose its adoption, Rocker's thoroughly anarchist "Prinzipienerklärung des Syndikalismus" ("Declaration of Syndicalist Principles"), which the Business Commission had charged him with drafting, became the FAUD's platform without much controversy. The FAUD also rejected the dictatorship of the proletariat and other Marxist terms and ideas. According to the Business Commission, the congress was attended by 109 delegates representing 111,675 workers, twice as many as were claimed just four and a half months earlier.
|
86,700 |
Terry Fox
| 1,169,372,396 |
Canadian athlete (1958–1981)
|
[
"1958 births",
"1981 deaths",
"Athletes from Winnipeg",
"Canadian amputees",
"Canadian disabled sportspeople",
"Canadian humanitarians",
"Canadian people of Métis descent",
"Companions of the Order of Canada",
"Deaths from bone cancer",
"Deaths from cancer in British Columbia",
"Deaths from lung cancer in Canada",
"Lou Marsh Trophy winners",
"People from Port Coquitlam",
"Persons of National Historic Significance (Canada)",
"Simon Fraser University alumni",
"Sportspeople from British Columbia",
"Sportspeople with limb difference",
"Terry Fox"
] |
Terrance Stanley Fox CC OD (July 28, 1958 – June 28, 1981) was a Canadian athlete, humanitarian, and cancer research activist. In 1980, with one leg having been amputated due to cancer, he embarked on an east-to-west cross-Canada run to raise money and awareness for cancer research. Although the spread of his cancer eventually forced him to end his quest after 143 days and 5,373 kilometres (3,339 mi), and ultimately cost him his life, his efforts resulted in a lasting, worldwide legacy. The annual Terry Fox Run, first held in 1981, has grown to involve millions of participants in over 60 countries and is now the world's largest one-day fundraiser for cancer research; over C\$850 million has been raised in his name as of September 2022.
Fox was a distance runner and basketball player for his Port Coquitlam high school, now named after him, and Simon Fraser University. His right leg was amputated in 1977 after he was diagnosed with osteosarcoma, though he continued to run using an artificial leg. He also played wheelchair basketball in Vancouver, winning three national championships.
In 1980, he began the Marathon of Hope, a cross-country run to raise money for cancer research. He hoped to raise one dollar from each of Canada's 24 million people. He began with little fanfare from St. John's, Newfoundland and Labrador, in April and ran the equivalent of a full marathon every day. Fox had become a national star by the time he reached Ontario; he made numerous public appearances with businessmen, athletes, and politicians in his efforts to raise money. He was forced to end his run outside Thunder Bay when the cancer spread to his lungs. His hopes of overcoming the disease and completing his run ended when he died nine months later.
Fox was the youngest person named a Companion of the Order of Canada and won the 1980 Lou Marsh Award as the nation's top sportsman. He was named Canada's Newsmaker of the Year in both 1980 and 1981 by The Canadian Press. Considered a national hero, he has had many buildings, statues, roads, and parks named in his honour across the country.
## Early life and cancer
Terry Fox was born on July 28, 1958, in Winnipeg, Manitoba, to Rolland and Betty Fox. Rolland was a switchman for the Canadian National Railway. Fox had an elder brother, Fred, a younger brother, Darrell, and a younger sister, Judith. Fox's maternal grandmother is Métis and Fox's younger brother Darrell has official Métis status.
His family moved to Surrey, British Columbia, in 1966, then settled in Port Coquitlam in 1968. His parents were dedicated to their family, and his mother was especially protective of her children; it was through her that Fox developed his stubborn dedication to whatever task he committed to do. His father recalled that Fox was extremely competitive, noting that he hated to lose so much that he would continue at any activity until he succeeded. Fox attempted to join his school's basketball team, though he struggled because of his height. His coach suggested that Fox try cross-country running, which Fox did as he wanted to impress his coach. Fox continued to improve on his basketball skills, and in grade 12 he won his high school's athlete of the year award. Fox was unsure whether he wanted to go to university, but Fox's mother convinced him to enrol at Simon Fraser University. He studied kinesiology with the intention of becoming a physical education teacher. He was also a member of the junior varsity basketball team.
On November 12, 1976, Fox was driving to the family home in Port Coquitlam when he was distracted by nearby bridge construction and crashed into the back of a pickup truck. He injured his right knee in the crash and felt pain in December, but chose to ignore it until the end of basketball season. By March 1977, the pain had intensified and he went to a hospital, where he was diagnosed with osteosarcoma, a form of cancer that often starts near the knees. Fox believed the car accident had weakened his knee and left it vulnerable to the disease, though his doctors argued there was no connection. He was told that his leg had to be amputated, that he would require chemotherapy treatment, and that recent medical advances meant he had a 50 percent chance of survival. Fox learned that two years earlier the figure would have been only 15 percent; the improvement in survival rates impressed on him the value of cancer research. With the help of an artificial leg, Fox was walking three weeks after the amputation. Doctors were impressed with his positive outlook, stating it contributed to his rapid recovery. Fox endured sixteen months of chemotherapy and found the time he spent in the British Columbia Cancer Control Agency facility difficult as he watched fellow cancer patients suffer and die from the disease.
In the summer of 1977, Rick Hansen, working with the Canadian Wheelchair Sports Association, invited Fox to try out for his wheelchair basketball team. Although he was undergoing chemotherapy treatments at the time, Fox's energy impressed Hansen. Less than two months after learning how to play the sport, Fox was named a member of the team for the national championship in Edmonton. He won three national titles with the team, and was named an all-star by the North American Wheelchair Basketball Association in 1980.
## Marathon of Hope
The night before his cancer surgery, Fox had been given an article about Dick Traum, the first amputee to complete the New York City Marathon. The article inspired him; he embarked on a 14-month training program, telling his family he planned to compete in a marathon himself. In private, he devised a more extensive plan. His hospital experiences had made Fox angry at how little money was dedicated to cancer research. He intended to run the length of Canada in the hope of increasing cancer awareness, a goal he initially divulged only to his friend Douglas Alward.
Fox ran with an unusual gait, as he was required to hop-step on his good leg due to the extra time the springs in his artificial leg required to reset after each step. He found the training painful as the additional pressure he had to place on both his good leg and his stump led to bone bruises, blisters and intense pain. Fox found that after about 20 minutes of each run, he crossed a pain threshold and the run became easier.
On September 2, 1979, Fox competed in a 17-mile (27 km) road race in Prince George. He finished in last place, ten minutes behind his closest competitor, but his effort was met with tears and applause from the other participants. Following the race, he revealed his full plan to his family. His mother discouraged him, angering Fox, though she later came to support the project. She recalled, "He said, 'I thought you'd be one of the first persons to believe in me.' And I wasn't. I was the first person who let him down". Fox initially hoped to raise \$1 million, then \$10 million, but later sought to raise \$1 for each of Canada's 24 million citizens.
### Preparation
On October 15, 1979, Fox sent a letter to the Canadian Cancer Society in which he announced his goal and appealed for funding. He stated that he would "conquer" his disability, and promised to complete his run, even if he had to "crawl every last mile". Explaining why he wanted to raise money for research, Fox described his personal experience of cancer treatment:
> I soon realized that that would only be half my quest, for as I went through the 16 months of the physically and emotionally draining ordeal of chemotherapy, I was rudely awakened by the feelings that surrounded and coursed through the cancer clinic. There were faces with the brave smiles, and the ones who had given up smiling. There were feelings of hopeful denial, and the feelings of despair. My quest would not be a selfish one. I could not leave knowing these faces and feelings would still exist, even though I would be set free from mine. Somewhere the hurting must stop....and I was determined to take myself to the limit for this cause.
The Cancer Society was skeptical of his success but agreed to support Fox once he had acquired sponsors and requested he get a medical certificate from a heart specialist stating that he was fit to attempt the run. Fox was diagnosed with left ventricular hypertrophy – an enlarged heart – a condition commonly associated with athletes. Doctors warned Fox of the potential risks he faced, though they did not consider his condition a significant concern. They endorsed his participation when he promised that he would stop immediately if he began to experience any heart problems.
A second letter was sent to several corporations seeking donations for a vehicle and running shoes, and to cover the other costs of the run. Fox sent other letters asking for grants to buy a running leg. The Ford Motor Company donated a camper van, while Imperial Oil contributed fuel, and Adidas his running shoes. Fox turned away any company that requested he endorse their products and refused any donation that carried conditions, as he insisted that nobody was to profit from his run.
### Start of the marathon
The Marathon began on April 12, 1980, when Fox dipped his right leg in the Atlantic Ocean near St. John's, Newfoundland and Labrador, and filled two large bottles with ocean water. He intended to keep one as a souvenir and pour the other into the Pacific Ocean upon completing his journey at Victoria, British Columbia. Fox was supported on his run by Doug Alward, who drove the van and cooked meals.
Fox was met with gale-force winds, heavy rain, and a snowstorm in the first days of his run. He was initially disappointed with the reception he received but was heartened upon arriving in Channel-Port aux Basques, Newfoundland and Labrador, where the town's 10,000 residents presented him with a donation of over \$10,000. Throughout the trip, Fox frequently expressed his anger and frustration to those he saw as impeding the run, and he fought regularly with Alward. When they reached Nova Scotia, they were barely on speaking terms, and it was arranged for Fox's brother Darrell, then 17, to join them as a buffer.
Fox left the Maritimes on June 10 and faced new challenges upon entering Quebec due to his group's inability to speak French and drivers who continually forced him off the road. Fox arrived in Montreal on June 22, one-third of the way through his 8,000-kilometre (5,000 mi) journey, having collected over \$200,000 in donations. Fox's run caught the attention of Isadore Sharp, the founder and CEO of Four Seasons Hotels and Resorts, who had lost a son to melanoma in 1978, a year after Fox's own diagnosis. Sharp gave food and accommodation at his hotels to Fox's team. When Fox was discouraged because so few people were making donations, Sharp pledged \$2 a mile and persuaded close to 1,000 other corporations to do the same. Fox was convinced by the Canadian Cancer Society that arriving in Ottawa for Canada Day would aid fundraising efforts, so he remained in Montreal for a few extra days.
### Ontario and marathon's end
Fox crossed into Ontario on the last Saturday in June, and he was met by a brass band and thousands of residents who lined the streets to cheer him on, while the Ontario Provincial Police gave him an escort throughout the province. Despite the sweltering heat of summer, he continued to run 26 miles (42 km) per day. On his arrival in Ottawa, Fox met Governor General Ed Schreyer, Prime Minister Pierre Trudeau, and was the guest of honour at numerous sporting events in the city. In front of 16,000 fans, he performed a ceremonial kickoff at a Canadian Football League game and was given a standing ovation. Fox's journal reflected his growing excitement at the reception he had received.
On July 11, Fox arrived in Toronto, where a crowd of 10,000 people gathered to honour him in Nathan Phillips Square. As he ran to the square, he was joined on the road by many people, including National Hockey League star Darryl Sittler, who presented Fox with his 1980 All-Star Game jersey. The Cancer Society estimated it collected \$100,000 in donations that day alone. That evening he threw the ceremonial first pitch at Exhibition Stadium preceding a baseball game between the Toronto Blue Jays and the Cleveland Indians. As he continued through southern Ontario, he was met by Hockey Hall of Famer Bobby Orr, who presented him with a cheque for \$25,000. Fox considered meeting Orr the highlight of his journey.
As Fox's fame grew, the Cancer Society scheduled him to attend more functions and give more speeches. Fox attempted to accommodate any request that he believed would raise money, no matter how far out of his way it took him. He bristled, however, at what he felt were media intrusions into his personal life, for example when the Toronto Star reported that he had gone on a date. Fox was left unsure whom he could trust in the media after negative articles began to emerge, including one by The Globe and Mail that highlighted tensions with his brother Darrell and claimed he was running because he held a grudge against a doctor who had misdiagnosed his condition, allegations he referred to as "trash".
The physical demands of running a marathon every day took their toll on Fox's body. Apart from the rest days in Montreal taken at the request of the Cancer Society, he refused to take a day off, even on his 22nd birthday. He frequently had shin splints and an inflamed knee. He developed cysts on his stump and experienced dizzy spells. At one point, he had a soreness in his ankle that would not go away. Although he feared he had developed a stress fracture, he ran for three more days before seeking medical attention, and was then relieved to learn it was tendonitis and could be treated with painkillers. Fox rejected calls for him to seek regular medical checkups, and dismissed suggestions he was risking his future health. By late August, Fox said he was exhausted before he even began the day's run. On September 1, outside Thunder Bay, he was forced to stop briefly after he had an intense coughing fit and experienced pains in his chest. He resumed running as the crowds along the highway shouted out their encouragement. A few miles later, short of breath and with continued chest pain, he asked Alward to drive him to a hospital. The next day, Fox held a tearful press conference during which he announced that his cancer had returned and spread to his lungs. He was forced to end his run after 143 days and 5,373 kilometres (3,339 mi). Fox refused offers to complete the run in his stead, stating that he wanted to complete his marathon himself.
### National response
Fox had raised \$1.7 million when he was forced to abandon the Marathon. A week after his run ended, the CTV Television Network organized a nationwide telethon in support of Fox and the Canadian Cancer Society. Supported by Canadian and international celebrities, the five-hour event raised \$10.5 million. Among the donations were \$1 million each by the governments of British Columbia and Ontario, the former to create a new research institute to be founded in Fox's name and the latter an endowment given to the Ontario Cancer Treatment and Research Foundation. Donations continued throughout the winter, and by April over \$23 million had been raised.
Supporters and well-wishers from around the world inundated Fox with letters and tokens of support. At one point, he was receiving more mail than the rest of Port Coquitlam combined. Such was his fame that one letter addressed simply to "Terry Fox, Canada" was successfully delivered.
In September 1980, Fox was invested in a special ceremony as a Companion of the Order of Canada; he was the youngest person to be so honoured. The Lieutenant Governor of British Columbia named him to the Order of the Dogwood, the province's highest award. Canada's Sports Hall of Fame commissioned a permanent exhibit, and Fox was named the winner of the Lou Marsh Award for 1980 as the nation's top athlete. He was named Canada's 1980 Newsmaker of the Year. The Ottawa Citizen described the national response to his marathon as "one of the most powerful outpourings of emotion and generosity in Canada's history".
## Illness and death
In the following months, Fox received multiple chemotherapy treatments, but the disease continued to spread. As his condition worsened, Canadians hoped for a miracle and Pope John Paul II sent a telegram saying that he was praying for Fox. Doctors turned to experimental interferon treatments, though their effectiveness against osteogenic sarcoma was unknown. He had an adverse reaction to his first treatment, but continued the program after a period of rest.
Fox was re-admitted to the Royal Columbian Hospital in New Westminster on June 19, 1981, with chest congestion and developed pneumonia. He fell into a coma and died at 4:35 a.m. PDT on June 28, 1981. The Government of Canada ordered flags across the country lowered to half mast, an unprecedented honour that was usually reserved for statesmen. Addressing the House of Commons, Trudeau said, "It occurs very rarely in the life of a nation that the courageous spirit of one person unites all people in the celebration of his life and in the mourning of his death ... We do not think of him as one who was defeated by misfortune but as one who inspired us with the example of the triumph of the human spirit over adversity".
His funeral in Port Coquitlam was attended by 40 relatives and 200 guests, and broadcast on national television. Hundreds of communities across Canada also held memorial services, a public memorial service was held on Parliament Hill in Ottawa, and Canadians again overwhelmed Cancer Society offices with donations. Fox is buried at Port Coquitlam Municipal Cemetery.
## Legacy
Fox remains a prominent figure in Canadian folklore. His determination united the nation; people from all walks of life lent their support to his run and his memory inspires pride in all regions of the country. A 1999 national survey named him as Canada's greatest hero, and he finished second to Tommy Douglas in the 2004 Canadian Broadcasting Corporation program The Greatest Canadian. Fox's heroic status has been attributed to his image as an ordinary person attempting a remarkable and inspirational feat. Others have argued that Fox's greatness derives from his audacious vision, his determined pursuit of his goal, and his ability to overcome challenges such as his lack of experience and the very loneliness of his venture. As Fox's advocate on The Greatest Canadian, media personality Sook-Yin Lee compared him to a classic hero, Phidippides, the runner who delivered the news of the Battle of Marathon before dying, and asserted that Fox "embodies the most cherished Canadian values: compassion, commitment, perseverance". She highlighted the juxtaposition between his celebrity, brought about by the unforgettable image he created, and his rejection of the trappings of that celebrity. As is typical of Canadian icons, Fox is an unconventional hero, admired but not without flaws. An obituary in the Canadian Family Physician emphasized his humanity and noted that his anger – at his diagnosis, at press misrepresentations and at those he saw as encroaching on his independence – spoke against ascribing sainthood to Fox, and thus placed his achievements within the reach of all.
### Views on Fox's disability
Fox refused to regard himself as disabled, and would not allow anyone to pity him, telling a Toronto radio station that he found life more "rewarding and challenging" since he had lost his leg. His feat helped redefine Canadian views of disability and the inclusion of disabled people in society. Fox's actions increased the visibility of people with disabilities, and influenced the attitudes of those with disabilities by showing disability portrayed in a positive light. Rick Hansen commented that the run challenged society to focus on ability rather than disability, writing, "What was perceived as a limitation became a great opportunity. People with disabilities started looking at things differently. They came away with huge pride".
The narrative surrounding Fox has been critiqued as illustrating the media's focus on stereotyped portrayals of the heroic and extraordinary achievements of people with disabilities, rather than more mundane accomplishments. Actor Alan Toy noted "Sure, it raised money for cancer research and sure it showed the human capacity for achievement. But a lot of disabled people are made to feel like failures if they haven't done something extraordinary. They may be bankers or factory workers – proof enough of their usefulness to society. Do we have to be 'supercrips' in order to be valid? And if we're not super, are we invalid?" The media's idealization of Fox has also been critiqued for emphasizing an individualistic approach to illness and disability, in which the body is a machine to be mastered, rather than the social model of disability where societal attitudes and barriers to inclusion play a prominent role in determining who is disabled.
### Terry Fox Run
During Fox's marathon, Sharp proposed an annual fundraising run in Fox's name; Fox agreed, but insisted that the runs be non-competitive and include any who wanted to participate. Sharp faced opposition to the project: the Cancer Society feared that a fall run would detract from its traditional April campaigns, while other charities believed that an additional fundraiser would leave less money for their causes. Sharp persisted, and he, the Four Seasons Hotels and the Fox family organized the first Terry Fox Run on September 13, 1981.
Over 300,000 people took part and raised \$3.5 million in the first Terry Fox Run. Schools across Canada were urged to join the second run, held on September 19, 1982. School participation has continued since, evolving into the National School Run Day. The runs, which raised over \$20 million in their first six years, grew into an international event as over one million people in 60 countries took part in 1999, raising \$15 million that year alone. By the Terry Fox Run's 25th anniversary, more than three million people were taking part annually. Grants from the Terry Fox Foundation, which organizes the runs, have helped Canadian scientists make numerous advances in cancer research. The Terry Fox Run is the world's largest one-day fundraiser for cancer research, and over \$850 million has been raised in his name as of May 2022.
### Honours
The physical memorials in Canada named after Fox include:
- Approximately 32 roads and streets, notably Terry Fox Drive in Ottawa and the Terry Fox Courage Highway near Thunder Bay, close to where Fox ended his run and where a statue of him was erected as a monument, the Terry Fox Memorial and Lookout;
- 14 schools, including a new school in a suburb of Montreal that was renamed Terry Fox Elementary School shortly after he died, and the Port Coquitlam high school from which he had graduated, which was renamed Terry Fox Secondary School on January 18, 1986;
- 14 other buildings, including many athletic centres, among them:
  - Terry Fox Stadium in Ottawa, Ontario;
  - Terry Fox Station, a transitway stop in Ottawa;
  - Terry Fox Theatre in Port Coquitlam, British Columbia;
- the Terry Fox Research Institute and the Terry Fox Laboratory, the major research unit of the British Columbia Cancer Agency;
- Seven statues, including:
  - the Terry Fox Monument in Ottawa, which was the genesis of The Path of Heroes, a federal government initiative that seeks to honour the people who shaped the nation;
  - a series of four bronze sculptures of Fox, designed by Douglas Coupland and depicting Fox running toward the Pacific Ocean, unveiled in 2011 at Terry Fox Plaza outside BC Place in downtown Vancouver;
- Nine fitness trails;
- A previously unnamed mountain in the Selwyn range of the Canadian Rockies, named Mount Terry Fox by the government of British Columbia; the area around it is now known as Mount Terry Fox Provincial Park;
- The Terry Fox Fountain of Hope, installed in 1982 on the grounds of Rideau Hall;
- The Canadian Coast Guard icebreaker CCGS Terry Fox, commissioned in 1983.
Shortly after his death, Fox was named the Newsmaker of the Year for 1981, and Canada Post announced the production of a commemorative stamp in 1981, bypassing its traditionally held position that stamps honouring people should not be created until ten years after their deaths. British rock star Rod Stewart was so moved by the Marathon of Hope that he was inspired to write and dedicate the song "Never Give Up on a Dream" – found on his 1981 album Tonight I'm Yours – to Fox. Stewart also called his 1981–1982 tour of Canada the "Terry Fox Tour". In 1982 the groundwork was laid for the Terry Fox Canadian Youth Centre, a residential hostel in Ottawa for high school students to come from across Canada to spend a week learning about the country. It was set up by the Canadian Unity Council; the programme later became known as Encounters with Canada and the building was renamed the Historica Canada Centre.
In 2012, Fox was inducted into the Canadian Medical Hall of Fame in the Builder category in recognition of his public service in the name of research fundraising.
The Terry Fox Hall of Fame was established in 1994 to recognize individuals who have made contributions that improved the quality of life of disabled people. The Terry Fox Laboratory research centre was established in Vancouver to conduct leading-edge research into the causes and potential cures for cancer.
In 2005, the Royal Canadian Mint issued a special dollar coin designed by Stanley Witten to commemorate the 25th anniversary of the Marathon of Hope. It was their first regular circulation coin to feature a Canadian.
In 2008, Fox was named a National Historic Person of Canada, a recognition given by the Canadian government to those persons who are considered to have played a nationally significant role in the history of the country. Fox's designation was due to his status as an "enduring icon", his personal qualities, and for the manner in which the Marathon of Hope had captivated the country and resonated deeply with Canadians.
Fox's mother, Betty Fox, was one of eight people to carry the Olympic Flag into BC Place Stadium at the opening ceremonies of the 2010 Winter Olympics in Vancouver. The games saw the Terry Fox Award bestowed on Olympic athletes who embodied Fox's characteristics of determination and humility in the face of adversity.
Beginning in 2015, Manitoba designated the first Monday in August, formerly known as Civic Holiday, as Terry Fox Day.
On September 13, 2020, Google celebrated Fox with a Google Doodle.
### Films
Fox's story was dramatized in the 1983 biographical film The Terry Fox Story. Produced by Home Box Office, the film aired as a television movie in the United States and had a theatrical run in Canada. The film starred amputee actor Eric Fryer and Robert Duvall, and was the first film made exclusively for pay television. The movie received mixed but generally positive reviews, but was criticized by Fox's family over how it portrayed his temper. The Terry Fox Story was nominated for eight Genie Awards, and won five, including Best Picture and Best Actor.
Rock musician Ian Thomas had written and recorded a song in response to Fox's story, "Runner", which was included in the film. The song was also covered by Manfred Mann's Earth Band, whose version reached No. 22 on the Billboard Hot 100 in 1984.
A second movie, Terry, focused on the Marathon of Hope and was produced by the CTV Television Network in 2005. Fox was portrayed by Shawn Ashmore, who is not an amputee; digital editing was used to superimpose a prosthesis over his real leg. The film was endorsed by Fox's family, and portrayed his attitude more positively than the first movie. National Basketball Association star Steve Nash, a Canadian who was himself inspired by Fox as a child, directed the 2010 documentary Into the Wind, which aired on ESPN as part of its 30 for 30 series.
### Steve Fonyo and Rick Hansen
Fox was not the first person to attempt to run across Canada. Mark Kent crossed the country in 1974, raising money for the Canadian team at the 1976 Summer Olympics. While he lived, Fox refused to let anyone else complete the Marathon of Hope, having promised to finish it himself once he recovered. Steve Fonyo, an 18-year-old who had the same form of cancer and had also had a leg amputated, sought in 1984 to duplicate Fox's run, calling his effort the "Journey for Lives". After leaving St. John's on March 31, Fonyo reached the point where Fox had been forced to end his marathon by the end of November, and completed the transcontinental run on May 29, 1985. The Journey for Lives raised over \$13 million for cancer research.
Canadian Paralympic athlete Rick Hansen, who had recruited Fox to play on his wheelchair basketball team in 1977, was similarly inspired by the Marathon of Hope. Hansen, who first considered circumnavigating the globe in his wheelchair in 1974, began the Man in Motion World Tour in 1985 with the goal of raising \$10 million towards research into spinal cord injuries. As Fonyo had, Hansen paused at the spot Fox's run ended to honour the late runner. Hansen completed his world tour in May 1987 after 792 days and 40,073 kilometres (24,900 mi); he travelled through 34 countries and raised over \$26 million.
### Currency
Fox is one of eight finalists being considered for a portrait on a future Canadian \$5 polymer banknote.
## See also
- Terry (book)
|
11,084,007 |
Rainilaiarivony
| 1,165,850,741 |
Prime Minister of Madagascar from 1864 to 1895
|
[
"1828 births",
"1896 deaths",
"Commanders of the Legion of Honour",
"Leaders ousted by a coup",
"Malagasy exiles",
"Malagasy people of the Madagascar expeditions",
"Merina people",
"People from Alaotra-Mangoro",
"Prime Ministers of Madagascar",
"Remarried royal consorts"
] |
Rainilaiarivony (30 January 1828 – 17 July 1896) was a Malagasy politician who served as the prime minister of Madagascar from 1864 to 1895, succeeding his older brother Rainivoninahitriniony, who had held the post for thirteen years. His career mirrored that of his father Rainiharo, a renowned military man who became prime minister during the reign of Queen Ranavalona I.
Despite a childhood marked by ostracism from his family, as a young man Rainilaiarivony was elevated to a position of high authority and confidence in the royal court, serving alongside his father and brother. He co-led a critical military expedition with Rainivoninahitriniony at the age of 24 and was promoted to commander-in-chief of the army following the death of the queen in 1861. In that position he oversaw continuing efforts to maintain royal authority in the outlying regions of Madagascar and acted as adviser to his brother, who had been promoted to prime minister in 1852. He also influenced the transformation of the kingdom's government from an absolute monarchy to a constitutional one, in which power was shared between the sovereign and the prime minister. Rainilaiarivony and Queen Rasoherina worked together to depose Rainivoninahitriniony for his abuses of office in 1864. Taking his brother's place as prime minister, Rainilaiarivony remained in power as Madagascar's longest-serving prime minister for the next 31 years by marrying three queens in succession: Rasoherina, Ranavalona II and Ranavalona III.
As prime minister, Rainilaiarivony actively sought to modernize the administration of the state, in order to strengthen and ensure Madagascar remained independent from foreign colonial empires who wished to absorb it. The army was reorganized and professionalized, public schooling was made mandatory, a series of legal codes patterned on English law were enacted and three courts were established in Antananarivo. Rainilaiarivony exercised care not to offend traditional norms, while gradually limiting traditional practices, such as slavery, polygamy, and unilateral repudiation of wives. He legislated the Christianization of the monarchy under Ranavalona II. His diplomatic skills and military acumen assured the defense of Madagascar during the Franco-Hova Wars, successfully preserving his country's sovereignty until a French column captured the royal palace in September 1895. Although holding him in high esteem, the French colonial authority deposed the prime minister and exiled him to French Algeria, where he died less than a year later in 1896.
## Early life
Rainilaiarivony was born on 30 January 1828 in the Merina village of Ilafy, one of the twelve sacred hills of Imerina, into a family of statesmen. His father, Rainiharo, was a high-ranking military officer and a deeply influential conservative political adviser to the reigning monarch, Queen Ranavalona I, at the time that his wife, Rabodomiarana (daughter of Ramamonjy), gave birth to Rainilaiarivony. Five years later Rainiharo was promoted to the position of prime minister, a role he retained from 1833 until his death in 1852. During his tenure as prime minister, Rainiharo was chosen by the queen to become her consort, but he retained Rabodomiarana as his wife according to local customs that allowed polygamy. Rainilaiarivony's paternal grandfather, Andriatsilavo, had likewise been a privileged adviser to the great King Andrianampoinimerina (1787–1810). Rainilaiarivony and his relatives came from the Andafiavaratra family clan of Ilafy who, alongside the Andrefandrova clan of Ambohimanga, constituted the two most influential hova (commoner) families in the 19th-century Kingdom of Imerina. The majority of political positions not assigned to andriana (nobles) were held by members of these two families.
According to oral history, Rainilaiarivony was born on a day of the week traditionally viewed as inauspicious for births. Custom in much of Madagascar dictated that such unlucky children had to be subjected to a trial by ordeal, such as prolonged exposure to the elements, since it was believed the misfortune of their day of birth would ensure a short and cursed life for the child and its family. But rather than leave the child to die, Rainilaiarivony's father reportedly followed the advice of an ombiasy (astrologer) and instead amputated a joint from two fingers on his infant son's left hand to dispel the ill omen. The infant was nonetheless kept outside the house to avert the possibility that evil might still befall the family if the child remained under their roof. Relatives took pity and adopted Rainilaiarivony to raise him within their own home. Meanwhile, Rainilaiarivony's older brother Rainivoninahitriniony enjoyed the double privilege of his status as elder son and freedom from a predestined evil fate. Rainiharo selected and groomed his elder son to follow in his footsteps as commander-in-chief and prime minister, while Rainilaiarivony was left to make his way in the world by his own merits.
At age six, Rainilaiarivony began two years of study at one of the new schools opened by the London Missionary Society (LMS) for the children of the noble class at the royal palace in Antananarivo. Ranavalona shut down the mission schools in 1836, but the boy continued to study privately with an older missionary student. When Rainilaiarivony reached age 11 or 12, the relatives who had raised him decided he was old enough to make his own way in the world. Beginning with the purchase and resale of a few bars of soap, the boy gradually grew his business and expanded into the more profitable resale of fabric. The young Rainilaiarivony's reputation for tenacity and industriousness, as he fought against his predestined misfortunes, eventually reached the palace, where at the age of 14 the boy was invited to meet Queen Ranavalona I. She was favorably impressed, awarding him the official ranking of Sixth Honor title of Officer of the Palace. At 16 he was promoted to Seventh Honor, then promoted twice again to Eighth and Ninth Honor at age 19, an unprecedented ascent through the ranks.
As a regular among the foreigners at the palace, young Rainilaiarivony was engaged by an English merchant as a courier for his confidential business correspondence. The merchant was impressed by the young man's punctuality and integrity and would regularly refer to him as the boy who "deals fair." With the addition of the Malagasy honorific "ra", the expression was transformed into a sobriquet—"Radilifera"—that Rainilaiarivony adopted for himself and transmitted to a son and grandson. The arrival of a doctor from Mauritius in 1848 provided Rainilaiarivony with the opportunity to study medicine over the course of three years. With this knowledge he became indispensable at the palace, where he provided modern medical care to the Queen and other members of the aristocracy. Successfully curing the Queen of a particularly grievous illness earned him a promotion to Tenth Honor in April 1851, thereby qualifying him for more responsible positions within the monarch's closest circle. Rainiharo took advantage of this trust to successfully encourage friendship between his own sons and the only child and heir apparent of the queen, her son Radama II, who was one year Rainilaiarivony's junior.
## Marriage and family
Around 1848—the exact date of his marriage is not recorded—Rainilaiarivony, then around 20 or 21 years old and having adopted the name Radilifera, concluded a marriage with his paternal cousin Rasoanalina. They had sixteen children over the course of their marriage. In addition, a one-year-old son that Rasoanalina had conceived with another man prior to the union, Ratsimatahodriaka (Radriaka), was adopted by Rainilaiarivony as his own. As a young man, Ratsimatahodriaka was groomed by Rainilaiarivony to become his successor, but the youth fell from a balcony while intoxicated and died in his early twenties.
Most of Rainilaiarivony's children failed to achieve their full potential. One son, Rafozehana, died young of delirium tremens, and sons Ratsimandresy and Ralaiarivony both met violent ends while still in their youth. Randravalahy, to whom Rainilaiarivony later ascribed the name Radilifera, was sent to France to study but returned before earning his diploma and faded into obscurity among the upper classes of Imerina. Ramangalahy studied medicine and was on his way to becoming a successful doctor, but died of illness in his twenties. Three brothers turned to crime: Rajoelina, who violated the laws of his country to enrich himself by selling contraband gold to an English company; Penoelina, who studied in England before health issues recalled him to Madagascar, where he and his friends engaged in sexual assault and theft; and Ramariavelo (Mariavelo), who organized a group of bandits to rob the houses of common citizens. One of Rainilaiarivony's daughters died in her twenties following a self-induced abortion, and the rest married and lived quiet lives out of public view.
## Military career
The February 1852 death of Prime Minister Rainiharo left the queen without her consort, long-time political adviser and military Commander-in-Chief. She consequently awarded Rainilaiarivony a double promotion to Twelfth Honor ten days afterward, in preparation for an increase in military and political responsibilities. Shortly thereafter the queen expressed romantic interest in Rainilaiarivony and proposed that he assume the former role of his father as consort and prime minister. The young man refused on the double basis of their age difference, as well as the perceived impropriety of becoming intimate with his father's former lover. Ranavalona continued to harbor feelings for him throughout her lifetime but she did not express resentment over his refusal to reciprocate them and went on to take another high-ranking official as consort: Rainijohary, who was jointly awarded the role of prime minister along with the new Commander-in-Chief, Rainivoninahitriniony. Within a year the queen had assigned the 24-year-old Rainilaiarivony to his first position of responsibility within the military, and promoted him to Royal Secretary, keeper of the Royal Seal, and supervisor to the Royal Treasurer.
Several years prior to his death, former Prime Minister Rainiharo had led military campaigns to bring the peoples of the south under Merina control. Strong military campaigns on both sides of the conflict had concluded in a peace agreement between the Merina armies and those of the Bara people of the central southern highlands, who were accorded semi-autonomous status in exchange for serving as a buffer between the Sakalava to the west and the Tanala, Antemoro, Antefasy and other ethnic groups to the southeast. Upon learning of Rainiharo's death, disgruntled southeastern factions rose up against the Merina military stationed at posts within their territory. Queen Ranavalona responded by sending Rainivoninahitriniony and Rainilaiarivony on their first military expedition to liberate the besieged Merina colonists and quell the uprising.
Under the brothers' joint command were ten thousand soldiers armed with muskets and another thousand carrying swords. An additional 80,000 porters, cooks, servants and other support staff accompanied the army throughout the massive campaign. Over 10,000 were killed by Merina soldiers in the campaign, and according to custom numerous women and children were captured to be sold into slavery in Imerina. Rainilaiarivony took 80 slaves, while his older brother took more than 160. However, the campaign was only partly successful in pacifying the region and the Merina hold over the outlying areas of the island remained tenuous throughout the 19th century.
### First thwarted coup attempt
As the queen's son Radama grew to adulthood, he became increasingly disillusioned by the high death toll of his mother's military campaigns and traditional measures of justice, and was frustrated by her unilateral rejection of European influence. The young prince developed sympathetic relationships with the handful of Europeans permitted by Ranavalona to frequent her court, namely Jean Laborde and Joseph-François Lambert, with whom he privately concluded the lucrative Lambert Charter. The charter, which would come into effect upon Radama's accession to the throne, granted Lambert large tracts of land and exclusive rights to road construction, mineral extraction, timber harvesting and other activities on the island. In May 1857, when Rainilaiarivony was 29 years old, Lambert consequently invited Prince Radama, Rainivoninahitriniony, Rainilaiarivony and a number of other officers to conspire with him in a plot to overthrow Ranavalona.
On the eve of the coup, Rainivoninahitriniony informed Lambert that he could not guarantee the support of the army and that the plot should be aborted. One of the officers believed the brothers had betrayed them and sought to exonerate himself by notifying the queen of the failed conspiracy. She reacted by expelling the foreigners from the island and subjecting all the implicated Merina officers to the tangena ordeal in which they were forced to swallow a poison to determine their guilt or innocence. Rainilaiarivony and his brother were excepted from this and remained, like her son Radama, in the queen's confidence for the few remaining years of her life.
### Second thwarted coup attempt
In the summer of 1861, when Rainilaiarivony was 33 years old, Queen Ranavalona's advanced age and acute illness produced speculation about who would succeed her. Ranavalona had repeatedly stated her intention that her progressive and pro-European son, Radama II, would be her successor, much to the chagrin of the conservative faction at court. The conservatives privately rallied behind the queen's nephew and adoptive son Ramboasalama, whom the queen had initially declared heir apparent some years prior, and who had never abandoned hope to one day reclaim the right that had briefly been accorded to him.
According to custom, pretenders to the throne had historically been put to death upon the naming of a new sovereign. Radama was opposed to this practice and asked the brothers to help ensure his accession to the throne with minimum bloodshed on the day of the queen's death. Rainilaiarivony successfully maintained authority over the palace guards anxiously awaiting the command from either faction to slaughter the other. When the queen's attendant quietly informed him that her final moments were approaching, Rainilaiarivony discreetly summoned Radama and Rainivoninahitriniony from the Prime Minister's Palace to the royal Rova compound and ordered the prince crowned before the gathered soldiers, just as the queen was pronounced dead. Ramboasalama was promptly escorted to the palace where he was obliged to publicly swear allegiance to King Radama.
Rainilaiarivony was made responsible for the tribunal where Ramboasalama's supporters were tried, convicted of subversion and sentenced to banishment and other punishments. Ramboasalama was sent to live with his wife Ramatoa Rasoaray—Rainilaiarivony's sister—in the distant highland village of Ambohimirimo, where he died in April 1862. Rainijohary, the former prime minister and consort of Ranavalona, was relieved of his rank and exiled, leaving his co-minister Rainivoninahitriniony as the sole prime minister. At the same time, Rainilaiarivony was promoted by Radama to the position of Commander-in-Chief of the military.
### Creation of a limited monarchy
As Commander-in-Chief, Rainilaiarivony maintained a distance from politics throughout the reign of the new monarch, Radama II, instead preferring to focus on his military responsibilities. Meanwhile, disputes between Prime Minister Rainivoninahitriniony and King Radama grew frequent as the young sovereign pursued radical reforms that had begun to foment displeasure among the traditional masses. The situation came to a head on 7 May 1863, when Radama insisted on legalizing duels, despite widespread concern among the king's advisers that the innovation would lead to anarchy. The prime minister initiated the arrest of the menamaso, the prince's influential advisers, while Rainilaiarivony carried out his brother's instructions to keep the peace in the capital city. However, the situation deteriorated in dramatic fashion and, by the morning of 12 May, King Radama II was declared dead, having been strangled on the prime minister's orders.
Not having been involved in the coup d'état, Rainilaiarivony provided direction for his brother and the rest of the court as they grappled with the gravity of their acts. He proposed that future monarchs would no longer hold absolute power but would instead rule by the consent of the nobles. Rainilaiarivony drafted a series of terms that the nobles agreed to impose on Radama's widow, Rasoherina. Under Rainilaiarivony's new monarchy, a sovereign required the consent of the nobles to issue a death sentence or promulgate a new law, and was forbidden to disband the army. The new power sharing agreement was concluded by a political marriage between the queen and the prime minister.
Because of the new limitations placed on future Merina monarchs by Rainilaiarivony and the Hova courtiers, Radama's strangling represented more than a simple coup d'état. The ruling conditions imposed on Rasoherina reflected a power shift toward the oligarchs of the Hova commoner class and away from the Andriana sovereigns, who had traditionally drawn their legitimacy from the deeply held cultural belief that the royal line was imbued with hasina, a sacred authority bestowed by the ray aman-dreny (ancestors). In this respect, the new political structure in Imerina embodied the erosion of certain traditional social values among the Merina elite, who had gained exposure to contemporary European political thought and assimilated a number of Western governance principles. It also signalled the expansion of a rift between the pro-European, progressive elite to which Rainilaiarivony and his brother belonged, and the majority of the population in Madagascar, for whom traditional values such as hasina remained integral to determining the legitimacy of a government—a divide that would deepen in the decades to come through Rainilaiarivony's efforts to effect a modernizing political and social transformation on a nationwide scale.
## Tenure as prime minister
### Rise to power
Rainivoninahitriniony's tenure as sole prime minister was short lived. His violent tendencies, irritability and insolence toward Rasoherina, in addition to lingering popular resentment over Rainivoninahitriniony's role in the violent end to Radama's rule, gradually turned the opinion of the nobles against him. As Commander-in-Chief, Rainilaiarivony attempted to counsel his brother, while simultaneously overseeing diplomatic and military efforts to re-pacify the agitated Sakalava and other peoples, who viewed the coup as an indication of weakening Merina control. The prime minister repaid these efforts by repeatedly castigating high-ranking officers and even threatening Rainilaiarivony with his sword.
Two of Rainilaiarivony's cousins urged him to take his elder brother's place in order to end the shame that Rainivoninahitriniony's behavior was bringing upon their family. After weighing the idea, Rainilaiarivony approached Rasoherina with the proposal. The queen readily consented and lent her assistance in rallying the support of the nobles at court. On 14 July 1864, little more than a year after the coup, Rasoherina deposed and divorced Rainivoninahitriniony, then exiled the fallen minister the following year. Rainilaiarivony was promoted to prime minister. The arrangement was sealed when Rainilaiarivony took Rasoherina as his bride and demoted his longtime spouse Rasoanalina to the status of second wife. Rainilaiarivony confided in a friend shortly before his death that he deeply loved his first wife and came to share the same degree of feeling toward Rasoherina as well, but never developed the same affection for the subsequent queens he married. None of his royal spouses bore him any children.
By taking this new role, Rainilaiarivony became the first Hova to concurrently serve as both prime minister and Commander-in-Chief. The sociopolitical transformation that had been triggered by the strangling of Radama II reached its zenith with Rainilaiarivony's consolidation of administrative power. Rasoherina and her successors remained the figureheads of traditional authority, participated in political councils and provided official approval for policies. The prime minister issued new policies and laws in the Queen's name. However, the day-to-day governance, security and diplomatic activities of the kingdom principally originated with, and were managed by, Rainilaiarivony and his counselors. This new level of authority enabled the prime minister to amass a vast personal fortune, whether through inheritance, gifts or purchase, including 57 houses, large plantations and rice paddies, numerous cattle and thousands of slaves. The most prominent of Rainilaiarivony's properties was the Andafiavaratra Palace, constructed for him on the slope just below the royal Rova compound by English architect William Pool in 1873.
### Policies and reforms
Government administration and bureaucracy was strengthened under Rainilaiarivony's leadership. In March 1876, Rainilaiarivony established eight cabinet ministries to manage foreign affairs, the interior, education, war, justice, commerce and industry, finance, and legislation. State envoys were installed throughout the island's provinces to manage administrative affairs, ensure the application of law, collect taxes and provide regular reports back to Antananarivo on the local state of affairs. The traditional method of tax collection through local administrators was expanded in the provinces, bringing in new revenues, most commonly in the form of locally produced goods such as woven mats, fish, or wood. Rainilaiarivony actively encouraged Merina settlement in the coastal provinces, but coastal peoples were not invited to participate in political administration of the territories they inhabited. Approximately one third of the island had no Merina presence and retained de facto independence from the authority of the crown, including parts of the western provinces of Ambongo and Menabe, and areas in the southern Bara, Tanala, Antandroy and Mahafaly lands.
Rainilaiarivony's first royal wife, Queen Rasoherina, died on 1 April 1868, and was succeeded by her cousin Ranavalona II (crowned on 3 September 1868) who, like Rasoherina, was a widow of Radama II. Ranavalona II was a pupil of Protestant missionaries and had converted to Christianity. Rainilaiarivony recognized the growing power of Christianity on the island and identified the need to bring it under his influence in order to avert destabilizing cultural and political power struggles. The prime minister encouraged the new queen to Christianize the court through a public baptism ceremony at Andohalo on 21 February 1869, the day of their marriage. In this ceremony the supernatural royal talismans were ordered to be destroyed and replaced by the Bible. The Christianization of the court and the establishment of the independent royal Protestant chapel on the palace grounds prompted the wide-scale conversion of hundreds of thousands of Malagasy. These conversions were commonly motivated by a desire to express political allegiance to the Crown, and as such were largely nominal, with the majority of converts practicing a syncretic blend of Christian and traditional religions. Rainilaiarivony's biographers conclude that the prime minister's own conversion was also largely a political gesture and most likely did not denote a genuine spiritual shift until late in his life, if ever. Some local officials attempted to force conversions to Protestantism by mandating church attendance and persecuting Catholics, but Rainilaiarivony quickly responded to quell these overzealous practices. The prime minister's criminalization of polygamy and alcohol consumption, as well as the declaration of Sunday as a day of rest, were likewise inspired by the growing British and Protestant influences in the country. The Christianization of the court came at a steep personal price: with the outlawing of polygamy, Rainilaiarivony was forced to repudiate his first wife. 
The prime minister was deeply saddened by this necessity and by the consequent souring of his relationships with Rasoanalina and their children after the divorce.
The prime minister recognized that the modernization of Madagascar and its system of state administration could strengthen the country against invasion by a Western power and directed his energy to this end. In 1877, he outlawed the enslavement of the Makoa community. Rainilaiarivony expanded the public education system, declaring school attendance mandatory in 1881 and forming a cadre of school inspectors the following year to ensure education quality. The island's first pharmacy was established by LMS missionaries in 1862, and the first hospital was inaugurated in Antananarivo three years later, followed by the launching in 1875 of a state medical system staffed by civil servant clinicians. Rainilaiarivony enacted a series of new legal codes over the course of his administration that sought to create a more humane social order. The number of capital offenses was reduced from eighteen to thirteen, and he put an end to the tradition of collective family punishment for the crimes of one individual. Fines were fixed for specific offenses and corporal punishment was limited to being locked in irons. The structure of legal administration was reorganized so that matters that exceeded the authority of the traditional community courts at the level of the fokonolona village collective, administered by local magistrates and village heads, would be referred to the three high courts established in the capital in 1876, although final judicial authority remained with Rainilaiarivony. The Code of 305 Laws established that same year would form the basis of the legal system applied in Madagascar for the remainder of the 19th century and throughout much of the colonial period. To strengthen rule of law, the prime minister introduced a rural police force, modernized the court system and eliminated certain unjust privileges that had disproportionately benefited the noble class.
Beginning in 1872, Rainilaiarivony worked to modernize the army with the assistance of a British military instructor, who was hired to recruit, train and manage its soldiers. Rainilaiarivony purchased new local and imported firearms, reintroduced regular exercises and reorganized the ranking system. He prohibited the purchasing of rank promotions or exemptions from military service and instituted free medical care for soldiers in 1876. The following year Rainilaiarivony introduced the mandatory conscription of 5,000 Malagasy from each of the island's six provinces to serve five years in the royal army, swelling its ranks to over 30,000 soldiers.
### Foreign relations
During his time in power, Rainilaiarivony proved himself a competent and temperate leader, administrator and diplomat. In foreign affairs he exercised acumen and prudent diplomacy, successfully forestalling French colonial designs upon Madagascar for nearly three decades. Rainilaiarivony established embassies in Mauritius, France and Britain, while treaties of friendship and trade were concluded with Britain and France in 1862 and revised in 1865 and 1868 respectively. Upon the arrival of the first American plenipotentiary in Antananarivo, a treaty between the United States and Madagascar was agreed in 1867. A British contemporary observed that his diplomatic communication skills were particularly evident in his political speeches, describing Rainilaiarivony as a "Great orator among a nation of orators".
The early years of Rainilaiarivony's tenure as prime minister saw a reduction in French influence on the island, to the benefit of the British, whose alliance he strongly preferred. Contributing factors to the eclipse of French presence included a military defeat in 1870 and economic constraints that forced an end to French government subsidy of Catholic missions in Madagascar in 1871. He permitted foreigners to lease Malagasy land for 99 years but forbade its sale to non-citizens. The decision not to undertake the construction of roads connecting coastal towns to the capital was adopted as a deliberate strategy to protect Antananarivo from potential invasion by foreign armies.
Despite the strong presence of British missionaries, military advisers and diplomats in Antananarivo in the early part of Rainilaiarivony's administration, the 1869 opening of the Suez Canal led the British to shift their focus to combating French presence in Egypt, at the expense of their own long-standing interests in Madagascar. When Jean Laborde died in 1878 and Rainilaiarivony refused to allow his heirs to inherit Malagasy land accorded him under Radama II's Lambert Charter, France had a pretext for invasion. Rainilaiarivony sent a diplomatic mission to England and France to negotiate release of their claims on Malagasy lands and was successful in brokering a new agreement with the British. Talks with the French conducted between November 1881 and August 1882 broke down without reaching consensus on the status of French land claims. Consequently, France launched the First Franco-Hova War in 1883 and occupied the coastal port towns of Mahajanga, Antsiranana, Toamasina and Vohemar. Queen Ranavalona II died during the height of these hostilities in July 1883. Rainilaiarivony chose her 22-year-old niece, Princess Razafindrahety, to replace her under the throne name Ranavalona III. It was widely rumored that Rainilaiarivony may have ordered the poisoning of Razafindrahety's first husband in order to free the princess to become his spouse and queen. Thirty-three years younger than her new husband, Ranavalona III was relegated to a largely ceremonial role during her reign, while the prime minister continued to manage the critical affairs of state. In December 1885, Rainilaiarivony successfully negotiated the cessation of hostilities in the first Franco-Hova War.
The agreement drafted between the French and Malagasy governments did not clearly establish a French protectorate over the island, partly because recent French military involvement in the Tonkin Campaign had begun to turn popular opinion against French colonial expansion. The Malagasy crown agreed to pay ten million francs to France to settle the dispute, a sum that was partly raised through the unpopular decision to increase fanampoana (forced labor in lieu of cash taxes) to mobilize the populace in panning for gold in the kingdom's rivers. This expense, coupled with Rainilaiarivony's removal of \$50,000 in silver and gold coins from the tomb of Ranavalona I to offset the cost of purchasing arms in the run-up to the First Franco-Hova War, effectively emptied the royal treasury reserves. Capitalizing on Madagascar's weakened position, the French government then occupied the port town of Antsiranana and installed French Resident-General Le Myre de Vilers in Antananarivo, citing vague sections of the treaty as justification. The Resident-General was empowered by the French government to control international trade and foreign affairs on the island, although the monarchy's authority over internal administration was left unchallenged. Refusing to acknowledge the validity of the French interpretation of the treaty, Rainilaiarivony continued managing trade and international relations and unsuccessfully solicited assistance from the United States in maintaining the island's sovereignty. In 1894, the French government pressed Rainilaiarivony to unconditionally accept the status of Madagascar as a French protectorate. In response, Rainilaiarivony broke off all diplomatic relations with France in November 1894.
## Deposition and exile
The cessation of diplomatic relations between France and Madagascar prompted immediate French military action in a campaign that became known as the Second Franco-Hova War. The expedition ended eleven months later in September 1895 when a French military column reached Antananarivo and bombarded the royal palace with heavy artillery, blasting a hole through the roof of the queen's quarters and inflicting heavy casualties among the numerous courtiers gathered in the palace courtyard. Rainilaiarivony sent an interpreter to carry a white flag to the French commander and entreat his clemency. Forty-five minutes later he was joined by Radilifera, the prime minister's son, to request the conditions of surrender; these were immediately accepted. The following day Queen Ranavalona signed a treaty accepting the French protectorate over Madagascar. She and her court were permitted to remain at the palace and administer the country according to French dictates.
Upon the queen's signing of the treaty, the French government deposed Rainilaiarivony from his position as prime minister and commander-in-chief. The minister of foreign affairs, an elderly man named Rainitsimbazafy, was jointly selected by the French and Ranavalona as his replacement. The French ordered Rainilaiarivony to be exiled to French Algeria, although he initially remained in Antananarivo for several months after the treaty was signed. On 15 October 1895 the former prime minister was placed under house arrest and put under the guard of Senegalese soldiers at his home in Amboditsiry. On 6 February 1896, at the age of 68, Rainilaiarivony boarded a ship bound for Algiers and left his island for the first time in his life. He was accompanied by his grandson, Ratelifera, as well as an interpreter and four servants. On 17 March 1896 the ship docked at the port of Algiers, where he would live out the few remaining months of his life.
The French government installed Rainilaiarivony in the Geryville neighborhood of Algiers, one of the derelict parts of town. He was assigned a French attendant and guard named Joseph Vassé, who maintained detailed documentation on the personality and activities of Rainilaiarivony throughout his exile in French Algeria. Vassé described the former prime minister as a man of great spontaneity, sincere friendliness, and openness of heart, but also prone to mood swings, touchiness, and a tendency to be demanding, especially in regard to his particular tastes in clothing. His intelligence, tact and leadership qualities won him the admiration of many who knew him, including Le Myre de Vilers, who referred to him as both an enemy and a friend. Upon learning of Rainilaiarivony's living situation in Algiers, Le Myre de Vilers privately lobbied the French government for better accommodation. Consequently, Vassé found a new home for the former prime minister at the elegant estate called Villa des Fleurs ("Villa of the Flowers") in the upscale Mustapha Supérieur neighborhood, neighboring the residence of the exiled former king of Annam.
The beauty of his Villa des Fleurs home and the warm reception he received in French Algeria pleased Rainilaiarivony and contributed to a positive impression of his new life in Algiers. He quickly developed an excellent reputation among the local high society, who perceived him as a kind, intelligent, generous and charming figure. The Governor-General of French Algeria regularly invited him to diplomatic balls and social events where Rainilaiarivony danced with the enthusiasm and endurance of a much younger man. When not busy with diverse social engagements, Rainilaiarivony avidly read the newspaper and corresponded with contacts in Madagascar. As an insurrection in Madagascar emerged against French rule, the former prime minister wrote a letter published in a Malagasy newspaper on 5 July 1896 that condemned the participants as ungrateful for the benefits that contact with the French would bring to the island. His last outing in Algiers was on 14 July 1896 to watch the Bastille Day fireworks show. As he walked through the streets to join other spectators in his party, he was greeted with cheers and calls of "Vive le Ministre!" ("Long live the Minister!") from admiring onlookers.
## Death
The intense heat at the outdoor Bastille Day event on 14 July exhausted the former prime minister, and that evening Rainilaiarivony developed a fever. He slept poorly, disturbed by a dream in which he saw the former queen Rasoherina stand beside his bed, saying, "In the name of your brother, Rainivoninahitriniony, be ready." One of Rainilaiarivony's servants reported the dream to Vassé, explaining it as a premonition that foretold Rainilaiarivony's impending death. The former prime minister remained in bed and rapidly weakened over the next several days as his fever worsened and he developed a headache. He was constantly attended by his closest friends and loved ones. Rainilaiarivony died in his sleep on 17 July 1896.
Rainilaiarivony's body was initially interred within a stone tomb in Algiers. In 1900, the former prime minister's remains were exhumed and transported to Madagascar, where they were interred in the family tomb constructed by Jean Laborde in the Isotry neighborhood of Antananarivo. French colonial governor General Gallieni and Rainilaiarivony's grandson both spoke at the funeral, which was heavily attended by French and Malagasy dignitaries. In his eulogy, Gallieni expressed esteem for the former prime minister in the following terms: "Rainilaiarivony was worthy of leading you. In the years to come, will there be a monument erected in his memory? This should be an obligation for the Malagasy who will have the freedom to do so. France has now taken Madagascar, come what may, but it's a credit to Rainilaiarivony to have protected it the way he did." Following the funeral a commemorative plaque was installed at Rainilaiarivony's family tomb, engraved with the words "Rainilairivony, ex Premier Ministre et Commandant en chef de Madagascar, Commandeur de la Légion d'honneur" ("former Prime Minister and Commander-in-Chief of Madagascar, Commander of the Legion of Honor").
|
66,657,814 |
Beowulf and Middle-earth
| 1,171,095,454 |
J. R. R. Tolkien's use of the Old English poem Beowulf in his Middle-earth fiction
|
[
"Beowulf",
"Influences on J. R. R. Tolkien",
"Middle-earth themes",
"Themes of The Lord of the Rings"
] |
J. R. R. Tolkien, a fantasy author and professional philologist, drew on the Old English poem Beowulf for multiple aspects of his Middle-earth legendarium, alongside other influences. He used elements such as names, monsters, and the structure of society in a heroic age. He emulated its style, creating an impression of depth and adopting an elegiac tone. Tolkien admired the way that Beowulf, written by a Christian looking back at a pagan past, just as he was, embodied a "large symbolism" without ever becoming allegorical. He worked to echo the symbolism of life's road and individual heroism in The Lord of the Rings.
The names of races, including ents, orcs, and elves, and place names such as Orthanc and Meduseld, derive from Beowulf. The werebear Beorn in The Hobbit has been likened to the hero Beowulf himself; both names mean "bear" and both characters have enormous strength. Scholars have compared some of Tolkien's monsters to those in Beowulf. Both his trolls and Gollum share attributes with Grendel, while Smaug's characteristics closely match those of the Beowulf dragon. Tolkien's Riders of Rohan are distinctively Old English, and he has made use of multiple elements of Beowulf in creating them, including their language, culture, and poetry.
## Context
Beowulf is an epic poem in Old English, telling the story of its eponymous pagan hero. He becomes King of the Geats after ridding Heorot, the hall of the Danish king Hrothgar, of the monster Grendel, who was ravaging the land; he dies saving his people from a dragon. The tale is told in a roundabout way with many digressions into history and legend, and with a constant elegiac tone, ending in a dirge. It was written by a Christian poet, looking back reflectively on a time already in his people's distant past.
J. R. R. Tolkien was an English author and philologist of ancient Germanic languages, specialising in Old English; he spent much of his career as a professor at the University of Oxford. He is best known for his novels about his invented Middle-earth, The Hobbit and The Lord of the Rings. A devout Roman Catholic, he described The Lord of the Rings as "a fundamentally religious and Catholic work", rich in Christian symbolism.
The Tolkien scholar Tom Shippey, like Tolkien a philologist, called Beowulf the single work that most strongly influenced Tolkien, out of the many other sources that he used. He made use of it in his Middle-earth legendarium in multiple ways: in specific story-elements such as monsters; in Old English culture, as seen in the kingdom of Rohan; in the aesthetic style of The Lord of the Rings, with its impression of depth and its elegiac tone; and in its "large symbolism".
## People
### A philologist's races
Tolkien made use of his philological expertise on Beowulf to create some of the races of Middle-earth. The list of supernatural creatures in Beowulf, eotenas ond ylfe ond orcnéas, "ettens and elves and demon-corpses", contributed to his Orcs, and Elves, and to an allusion to Ettens in his "Ettenmoors" placename. His tree-giants or Ents (etymologically close to Ettens) may derive from a phrase in another Old English poem, Maxims II, orþanc enta geweorc, "skilful work of giants". Shippey suggests that Tolkien took the name of the tower of Orthanc (orþanc) from the same phrase, reinterpreted as "Orthanc, the Ents' fortress".
### Characters
The word orþanc occurs again in Beowulf, alongside the term searo in the phrase searonet seowed, smiþes orþancum, "a cunning-net sewn, by a smith's skill", meaning a mail-shirt or byrnie. Tolkien used searo in its Mercian form \*saru for the name of Orthanc's ruler, the wizard Saruman, whose name could thus be translated "cunning man", incorporating the ideas of subtle knowledge and technology into Saruman's character.
An especially Beowulfian character appears in The Hobbit as Beorn; his name originally meant "bear" but came to mean "man, warrior", giving Tolkien the chance to make the character a were-bear, able to shift his shape. A bear-man Bödvar Bjarki exists in Norse myth, while it is Beowulf himself whom Beorn echoes in the Old English poem. The name "Beowulf" can indeed be read as "the Bees' Wolf", that is, "the Honey-Eater". In other words, he is "the Bear", the man who is so strong that he snaps swords and tears off the arms of monsters with his enormous bear-like strength. Shippey notes that Beorn is ferocious, rude, and cheerful, characteristics that reflect his huge inner self-confidence—itself an aspect of northern heroic courage.
## Monsters
Scholars have compared several of Tolkien's monsters, including his Trolls, Gollum, and Smaug, to those in Beowulf.
### Trolls
Beowulf's first fight is with the monster Grendel, who is often taken by scholars as a kind of troll from Norse mythology. Tolkien's trolls share some of Grendel's attributes, such as great size and strength, being impervious to ordinary swords, and favouring the night. The scholar Christina Fawcett suggests that Tolkien's "roaring Troll" in The Return of the King reflects Grendel's "firey [sic] eye and terrible screaming". Noting that Tolkien compares them to beasts as they "came striding up, roaring like beasts ... bellowing", she observes that they "remain wordless warriors, like Grendel".
### Gollum
Gollum, a far smaller monster in Middle-earth, has also been likened to Grendel, with his preference for hunting with his bare hands and his liking for desolate, marshy places. The many parallels between these monsters include their affinity for water, their isolation from society, and their bestial description. The Tolkien scholar Verlyn Flieger suggests that he is Tolkien's central monster-figure, likening him to both Grendel and the dragon; she describes him as "the twisted, broken, outcast hobbit whose manlike shape and dragonlike greed combine both the Beowulf kinds of monster in one figure".
### Smaug
Tolkien made use of the Beowulf dragon to create one of his most distinctive monsters, the dragon in The Hobbit, Smaug. The Beowulf dragon is aroused and enraged by the theft of a golden cup from his pile of treasure; he flies out in the night and destroys Beowulf's hall; he is killed, but the treasure is cursed, and Beowulf too dies. In The Hobbit, the eponymous Hobbit protagonist Bilbo accordingly steals a golden cup from the dragon's huge mound of treasure, awakening Smaug, who flies out and burns Lake-town; the allure of gold is too much of a temptation for the Dwarf Thorin Oakenshield, who is killed soon afterwards. On the other hand, the Beowulf dragon does not speak; Tolkien has made Smaug conversational, and wily with it. Scholars have analysed the parallels between Smaug and the unnamed Beowulf dragon.
## Culture of Rohan
### Names, language, and heroism
Tolkien made use of Beowulf, along with other Old English sources, for many aspects of the Riders of Rohan. Their land was the Mark, its name a version of the Mercia where he lived, in Mercian dialect \*Marc. Their names are straightforwardly Old English: Éomer and Háma (characters in Beowulf), Éowyn ("Horse-joy"), Théoden ("King"). So too is their language, with words like Éothéod ("Horse-people"), Éored ("Troop of cavalry"), and Eorlingas ("people of Eorl", whose name means "[Horse-]lord", cf. Earl), where many words and names begin with the word for "horse", eo[h].
There are even spoken phrases that follow this form. As Alaric Hall notes, "'Westu Théoden hál!' cried Éomer" is a scholarly joke: a dialectal form of Beowulf's Wæs þú, Hróðgár, hál ("Be thou well, Hrothgar!") i.e. Éomer shouts "Long Live King Theoden!" in a Mercian accent. Tolkien used this West Midlands dialect of Old English because he had been brought up in that region.
Théoden's hall, Meduseld, is modelled on Beowulf's Heorot, as is the way it is guarded, with visitors challenged repeatedly but courteously. Heorot's golden thatched roof is described in line 311 of Beowulf which Tolkien directly translates as a description of Meduseld: "The light of it shines far over the land", representing líxte se léoma ofer landa fela.
The war horns of the Riders of Rohan exemplify, in Shippey's view, the "heroic Northern world", as in what he calls the nearest Beowulf has to a moment of Tolkien-like eucatastrophe, when Ongentheow's Geats, trapped all night, hear the horns of Hygelac's men coming to rescue them; the Riders blow their horns wildly as they finally arrive, turning the tide of the Battle of the Pelennor Fields at a climactic moment in The Lord of the Rings.
### Alliterative verse
Among the many poems in The Lord of the Rings are examples of Tolkien's skill in imitating Old English alliterative verse, keeping strictly to the metrical structure, which he described in his essay On Translating Beowulf. The Tolkien scholar Mark Hall compares Aragorn's lament for Boromir to Scyld Scefing's ship-burial in Beowulf.
## Style
### Impression of depth
A quality of literature that Tolkien particularly prized was the impression of depth, of hidden vistas into ancient history. He found this especially in Beowulf, but also in other works that he admired, such as Virgil's Aeneid, Shakespeare's Macbeth, Sir Orfeo, and Grimms' Fairy Tales. Beowulf contains numerous digressions into other stories which have functions other than advancing the plot, in Adrien Bonjour's words rendering "the background of the poem extraordinarily alive", and providing contrasts and examples that repeatedly illuminate the key points of the main story with flashes of the distant past. Tolkien stated in The Monsters and the Critics that Beowulf:
> must have succeeded admirably in creating in the minds of the poet's contemporaries the illusion of surveying a past, pagan but noble and fraught with a deep significance – a past that itself had depth and reached backward into a dark antiquity of sorrow. This impression of depth is an effect and a justification of the use of episodes and allusions to old tales, mostly darker, more pagan, and desperate than the foreground.
In addition, Tolkien valued particularly the "shimmer of suggestion" that never exactly becomes explicit, but that constantly hints at greater depth. That is just as in Beowulf, where Tolkien described the quality as the "glamour of Poesis", though whether this was, Shippey notes, an effect of distance in time, the "elvish hone of antiquity", or a kind of memory or vision of paradise is never distinguished.
### Elegiac tone
The Lord of the Rings, especially its last part, The Return of the King, has a consistent elegiac tone, in this resembling Beowulf. The Tolkien scholar Marjorie Burns describes it as a "sense of inevitable disintegration". The author and scholar Patrice Hannon calls it "a story of loss and longing, punctuated by moments of humor and terror and heroic action but on the whole a lament for a world—albeit a fictional world—that has passed even as we seem to catch a last glimpse of it flickering and fading".
## "Large symbolism"
Shippey notes that Tolkien wrote of Beowulf that the "large symbolism is near the surface, but ... does not break through, nor become allegory", for if it did, that would constrain the story, like that of The Lord of the Rings, to have just one meaning. That sort of constraint was something that Tolkien "contemptuously" dismissed in his foreword to the second edition, stating that he preferred applicability, giving readers the freedom to read into the novel what they could see in it. Messages could instead be hinted at, repeatedly, and the hints would work, Shippey writes, "only if they were true both in fact and in fiction"; Tolkien set out to make The Lord of the Rings work the same way.
### A learned Christian's heroic world
Another theme, in both Beowulf and The Lord of the Rings, is that of the good pagan pre-Catholics such as Aragorn, who would on a strict interpretation of Christianity be damned as they had no knowledge of Christ. Tolkien stated in a letter to his friend the Jesuit priest Robert Murray that he had cut religion out of the work because it "is absorbed into the story and the symbolism". George Clark writes that Tolkien saw the Beowulf poet as
> a learned Christian who re-created a heroic world and story in an implicitly Christian universe governed by a God whose existence and nature the poem's wiser characters intuit without the benefit of revelation. Tolkien's Beowulf poet was a version of himself, and his authorial persona in creating [The Lord of the Rings] was a version of that Beowulf poet.
### Contrasted heroes
Flieger contrasts the warrior-hero Aragorn with the suffering hero Frodo. Aragorn is, like Beowulf, an epic/romance hero, a bold leader and a healer-king. Frodo is "the little man of fairy tale", the little brother who unexpectedly turns out to be brave. But the fairy tale happy ending comes to Aragorn, marrying the beautiful princess (Arwen) and winning the kingdom (Gondor and Arnor); while Frodo gets "defeat and disillusionment—the stark, bitter ending typical of the Iliad, Beowulf, the Morte D'Arthur". In other words, the two types of hero are not only contrasted, but combined, halves of their legends swapped over.
### The road of life
The symbolism of the road of life can be glimpsed in many places, illuminating different aspects. Tolkien's poem The Old Walking Song is repeated, with variations, three times in The Lord of the Rings. The last version contains the words "The Road goes ever on and on / Out from the door where it began. ... But I at last with weary feet / Will turn towards the lighted inn". Shippey writes that "if 'the lighted inn' on the road means death, then 'the Road' must mean life", and the poem and the novel could be speaking of the process of psychological individuation. Beowulf, too, concerns the life and death of its hero. Flieger writes that Tolkien saw Beowulf as "a poem of balance, the opposition of ends and beginnings": the young Beowulf rises, sails to Denmark, kills Grendel, becomes King; many years later, the old Beowulf falls, killing the dragon but going to his own death. In Flieger's view, Tolkien has built the same values, balance, and opposition into The Lord of the Rings, but at the same time rather than one after the other.
|
184,826 |
Common blackbird
| 1,169,360,042 |
Thrush native to Europe, Asia and North Africa
|
[
"Birds described in 1758",
"Birds of Central Asia",
"Birds of Europe",
"Birds of Oceania",
"Taxa named by Carl Linnaeus",
"Turdus"
] |
The common blackbird (Turdus merula) is a species of true thrush. It is also called the Eurasian blackbird (especially in North America, to distinguish it from the unrelated New World blackbirds), or simply the blackbird where this does not lead to confusion with a similar-looking local species. It breeds in Europe, Asiatic Russia, and North Africa, and has been introduced to Australia and New Zealand. It has a number of subspecies across its large range; a few of the Asian subspecies are sometimes considered to be full species. Depending on latitude, the common blackbird may be resident, partially migratory, or fully migratory.
The adult male of the common blackbird (Turdus merula merula, the nominate subspecies), which is found throughout most of Europe, is all black except for a yellow eye-ring and bill and has a rich, melodious song; the adult female and juvenile have mainly dark brown plumage. This species breeds in woods and gardens, building a neat, cup-shaped nest, bound together with mud. It is omnivorous, eating a wide range of insects, earthworms, berries, and fruits.
Both sexes are territorial on the breeding grounds, with distinctive threat displays, but are more gregarious during migration and in wintering areas. Pairs stay in their territory throughout the year where the climate is sufficiently temperate. This common and conspicuous species has given rise to a number of literary and cultural references, frequently related to its song.
## Taxonomy and systematics
The common blackbird was described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae as Turdus merula (characterised as T. ater, rostro palpebrisque fulvis). The binomial name derives from two Latin words, turdus, "thrush", and merula, "blackbird", the latter giving rise to its French name, merle, and its Scots name, merl.
About 65 species of medium to large thrushes are in the genus Turdus, characterised by rounded heads, longish, pointed wings, and usually melodious songs. Although two European thrushes, the song thrush and mistle thrush, are early offshoots from the Eurasian lineage of Turdus thrushes after they spread north from Africa, the blackbird is descended from ancestors that had colonised the Canary Islands from Africa and subsequently reached Europe from there. It is close in evolutionary terms to the island thrush (T. poliocephalus) of Southeast Asia and islands in the southwest Pacific, which probably diverged from T. merula stock fairly recently.
It may not immediately be clear why the name "blackbird", first recorded in 1486, was applied to this species, but not to one of the various other common black English birds, such as the carrion crow, raven, rook, or jackdaw. However, in Old English, and in modern English up to about the 18th century, "bird" was used only for smaller or young birds, and larger ones such as crows were called "fowl". At that time, the blackbird was therefore the only widespread and conspicuous "black bird" in the British Isles. Until about the 17th century, another name for the species was ouzel, ousel or wosel (from Old English osle, cf. German Amsel). Another variant occurs in Act 3 of Shakespeare's A Midsummer Night's Dream, where Bottom refers to "The Woosell cocke, so blacke of hew, With Orenge-tawny bill". The ouzel usage survived later in poetry, and still occurs as the name of the closely related ring ouzel (Turdus torquatus), and in water ouzel, an alternative name for the unrelated but superficially similar white-throated dipper (Cinclus cinclus).
Two related Asian Turdus thrushes, the white-collared blackbird (T. albocinctus) and the grey-winged blackbird (T. boulboul), are also named blackbirds, and the Somali thrush (T. (olivaceus) ludoviciae) is alternatively known as the Somali blackbird.
The icterid family of the New World is sometimes called the blackbird family because of some species' superficial resemblance to the common blackbird and other Old World thrushes, but they are not evolutionarily close, being related to the New World warblers and tanagers. The term is often limited to smaller species with mostly or entirely black plumage, at least in the breeding male, notably the cowbirds, the grackles, and for around 20 species with "blackbird" in the name, such as the red-winged blackbird and the melodious blackbird.
### Subspecies
As would be expected for a widespread passerine bird species, several geographical subspecies are recognised. The treatment of subspecies in this article follows Clement et al. (2000).
- T. m. merula, the nominate subspecies, breeds commonly throughout much of Europe from Iceland, the Faroes and the British Isles east to the Ural Mountains and north to about 70°N, where it is fairly scarce. A small population breeds in the Nile Valley. Birds from the north of the range winter throughout Europe and around the Mediterranean, including Cyprus and North Africa. The introduced birds in Australia and New Zealand are of the nominate race.
- T. m. azorensis is a small race which breeds in the Azores. The male is darker and glossier than merula.
- T. m. cabrerae, named for Ángel Cabrera, the Spanish zoologist, resembles azorensis and breeds in Madeira and the western Canary Islands.
- T. m. mauritanicus, another small dark subspecies with a glossy black male plumage, breeds in central and northern Morocco, coastal Algeria and northern Tunisia.
- T. m. aterrimus breeds in Hungary, south and east to southern Greece, Crete, northern Turkey and northern Iran. It winters in southern Turkey, northern Egypt, Iraq and southern Iran. It is smaller than merula with a duller male and paler female plumage.
- T. m. syriacus breeds on the Mediterranean coast of southern Turkey south to Jordan, Israel and the northern Sinai. It is mostly resident, but part of the population moves southwest or west to winter in the Jordan Valley and in the Nile Delta of northern Egypt south to about Cairo. Both sexes of this subspecies are darker and greyer than the equivalent merula plumages.
- T. m. intermedius is an Asian race breeding from Central Russia to Tajikistan, western and northeastern Afghanistan, and eastern China. Many birds are resident, but some are altitudinal migrants and occur in southern Afghanistan and southern Iraq in winter. This is a large subspecies, with a sooty-black male and a blackish-brown female.
The Central Asian subspecies, the relatively large intermedius, also differs in structure and voice, and may represent a distinct species. Alternatively, it has been suggested that it should be considered a subspecies of T. maximus, but it differs in structure, voice and the appearance of the eye-ring.
### Similar species
In Europe, the common blackbird can be confused with the paler-winged first-winter ring ouzel (Turdus torquatus) or the superficially similar common starling (Sturnus vulgaris). A number of similar Turdus thrushes exist far outside the range of the common blackbird, for example the South American Chiguanco thrush (Turdus chiguanco). The Indian blackbird (Turdus simillimus), the Tibetan blackbird (Turdus maximus), and the Chinese blackbird (Turdus mandarinus) were formerly treated as subspecies of the common blackbird.
## Description
The common blackbird of the nominate subspecies T. m. merula is 23.5–29 cm (9.3–11.4 in) in length, has a long tail, and weighs 80–125 g (2.8–4.4 oz). The adult male has glossy black plumage, blackish-brown legs, a yellow eye-ring and an orange-yellow bill. The bill darkens somewhat in winter. The adult female is sooty-brown with a dull yellowish-brownish bill, a brownish-white throat and some weak mottling on the breast. The juvenile is similar to the female, but has pale spots on the upperparts, and the very young juvenile also has a speckled breast. Young birds vary in the shade of brown, with darker birds presumably males. The first year male resembles the adult male, but has a dark bill and weaker eye ring, and its folded wing is brown, rather than black like the body plumage.
## Distribution and habitat
The common blackbird breeds in temperate Eurasia, North Africa, the Canary Islands, and South Asia. It has been introduced to Australia and New Zealand. Populations are sedentary in the south and west of the range, although northern birds migrate south as far as northern Africa and tropical Asia in winter. Urban males are more likely to overwinter in cooler climes than rural males, an adaptation made feasible by the warmer microclimate and relatively abundant food that allow the birds to establish territories and start reproducing earlier in the year. Recoveries of blackbirds ringed on the Isle of May show that these birds commonly migrate from southern Norway (or from as far north as Trondheim) to Scotland, and some onwards to Ireland. Scottish-ringed birds have also been recovered in England, Belgium, Holland, Denmark, and Sweden. Female blackbirds in Scotland and the north of England migrate more (to Ireland) in winter than do the males.
Common over most of its range in woodland, the common blackbird has a preference for deciduous trees with dense undergrowth. However, gardens provide the best breeding habitat with up to 7.3 pairs per hectare (nearly three pairs per acre), with woodland typically holding about a tenth of that density, and open and very built-up habitats even less. They are often replaced by the related ring ouzel in areas of higher altitude. The common blackbird also lives in parks, gardens and hedgerows.
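The quoted breeding densities can be cross-checked with a short calculation. This is a sketch of the unit conversion only; the acres-per-hectare factor is a standard constant, not a figure from the article.

```python
# Cross-check the breeding-density figures quoted in the text:
# up to 7.3 pairs per hectare in gardens ("nearly three pairs per acre"),
# with woodland holding about a tenth of that density.
ACRES_PER_HECTARE = 2.47105  # standard conversion: 1 ha = 2.47105 acres

garden_pairs_per_ha = 7.3
garden_pairs_per_acre = garden_pairs_per_ha / ACRES_PER_HECTARE
print(round(garden_pairs_per_acre, 2))  # 2.95 -- i.e. nearly three pairs per acre

woodland_pairs_per_ha = garden_pairs_per_ha / 10
print(round(woodland_pairs_per_ha, 2))  # 0.73 pairs per hectare
```

The metric and imperial figures in the text agree to within rounding.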
The common blackbird occurs at elevations of up to 1,000 m (3,300 ft) in Europe, 2,300 m (7,500 ft) in North Africa, and at 900–1,820 m (2,950–5,970 ft) in peninsular India and Sri Lanka, but the large Himalayan subspecies range much higher, with T. m. maximus breeding at 3,200–4,800 m (10,500–15,700 ft) and remaining above 2,100 m (6,900 ft) even in winter.
This widespread species has occurred as a vagrant in many locations in Eurasia outside its normal range, but records from North America are normally considered to involve escapees, including, for example, the 1971 bird in Quebec. However, a 1994 record from Bonavista, Newfoundland, has been accepted as a genuine wild bird, and the species is therefore on the North American list.
## Behaviour and ecology
The male common blackbird defends its breeding territory, chasing away other males or utilising a "bow and run" threat display. This consists of a short run, the head first being raised and then bowed with the tail dipped simultaneously. If a fight between male blackbirds does occur, it is usually short and the intruder is soon chased away. The female blackbird is also aggressive in the spring when it competes with other females for a good nesting territory, and although fights are less frequent, they tend to be more violent.
The bill's appearance is important in the interactions of the common blackbird. The territory-holding male responds more aggressively towards models with orange bills than to those with yellow bills, and reacts least to the brown bill colour typical of the first-year male. The female is, however, relatively indifferent to bill colour, but responds instead to shinier bills.
As long as winter food is available, both the male and female will remain in the territory throughout the year, although occupying different areas. Migrants are more gregarious, travelling in small flocks and feeding in loose groups in the wintering grounds. The flight of migrating birds comprises bursts of rapid wing beats interspersed with level or diving movement, and differs from both the normal fast agile flight of this species and the more dipping action of larger thrushes.
### Breeding
The male common blackbird attracts the female with a courtship display which consists of oblique runs combined with head-bowing movements, an open beak, and a "strangled" low song. The female remains motionless until she raises her head and tail to permit copulation. This species is monogamous, and the established pair will usually stay together as long as they both survive. Pair separation rates of up to 20% have been noted following poor breeding. Although the species is socially monogamous, there have been studies showing as much as 17% extra-pair paternity.
The nominate T. merula may commence breeding in March, but eastern and Indian races are a month or more later, and the introduced New Zealand birds start nesting in August (late winter). The breeding pair prospect for a suitable nest site in a creeper or bush, favouring evergreen or thorny species such as ivy, holly, hawthorn, honeysuckle or pyracantha. Sometimes the birds will nest in sheds or outbuildings where a ledge or cavity is used. The cup-shaped nest is made with grasses, leaves and other vegetation, bound together with mud. It is built by the female alone. She lays three to five (usually four) bluish-green eggs marked with reddish-brown blotches, heaviest at the larger end; the eggs of nominate T. merula are 2.9 cm × 2.1 cm (1.14 in × 0.83 in) in size and weigh 7.2 g (0.25 oz), of which 6% is shell. Eggs of birds of the southern Indian races are paler than those from the northern subcontinent and Europe.
The female incubates for 12–14 days before the altricial chicks are hatched naked and blind. Fledging takes another 10–19 (average 13.6) days, with both parents feeding the young and removing faecal sacs. The nest is often ill-concealed compared with those of other species, and many breeding attempts fail due to predation. The young are fed by the parents for up to three weeks after leaving the nest, and will follow the adults begging for food. If the female starts another nest, the male alone will feed the fledged young. Second broods are common, with the female reusing the same nest if the brood was successful, and three broods may be raised in the south of the common blackbird's range.
A common blackbird has an average life expectancy of 2.4 years, and, based on data from bird ringing, the oldest recorded age is 21 years and 10 months.
### Songs and calls
In its native Northern Hemisphere range, the first-year male common blackbird of the nominate race may start singing as early as late January in fine weather in order to establish a territory, followed in late March by the adult male. The male's song is a varied and melodious low-pitched fluted warble, given from trees, rooftops or other elevated perches mainly in the period from March to June, sometimes into the beginning of July. It has a number of other calls, including an aggressive seee, a pook-pook-pook alarm for terrestrial predators like cats, and various chink and chook, chook vocalisations. The territorial male invariably gives chink-chink calls in the evening in an attempt (usually unsuccessful) to deter other blackbirds from roosting in its territory overnight. During the northern winter, blackbirds can be heard quietly singing to themselves, so much so that September and October are the only months in which the song cannot be heard. Like other passerine birds, it has a thin high seee alarm call for threats from birds of prey since the sound is rapidly attenuated in vegetation, making the source difficult to locate.
At least two subspecies, T. m. merula and T. m. nigropileus, will mimic other species of birds, cats, humans or alarms, but this is usually quiet and hard to detect.
### Feeding
The common blackbird is omnivorous, eating a wide range of insects, earthworms, seeds and berries. It feeds mainly on the ground, running and hopping with a start-stop-start progress. It pulls earthworms from the soil, usually finding them by sight, but sometimes by hearing, and roots through leaf litter for other invertebrates. Small amphibians, lizards and (on rare occasions) small mammals are occasionally hunted. This species will also perch in bushes to take berries and collect caterpillars and other active insects. Animal prey predominates, and is particularly important during the breeding season, with windfall apples and berries taken more in the autumn and winter. The nature of the fruit taken depends on what is locally available, and frequently includes exotics in gardens.
### Natural threats
Near human habitation the main predator of the common blackbird is the domestic cat, with newly fledged young especially vulnerable. Foxes and predatory birds, such as the sparrowhawk and other accipiters, also take this species when the opportunity arises. However, there is little direct evidence to show that either predation of the adult blackbirds or loss of the eggs and chicks to corvids, such as the European magpie or Eurasian jay, decrease population numbers.
This species is occasionally a host of parasitic cuckoos, such as the common cuckoo (Cuculus canorus), but this is minimal because the common blackbird recognizes the adult of the parasitic species and its non-mimetic eggs. In the UK, only three nests of 59,770 examined (0.005%) contained cuckoo eggs. The introduced merula blackbird in New Zealand, where the cuckoo does not occur, has, over the past 130 years, lost the ability to recognize the adult common cuckoo but still rejects non-mimetic eggs.
As with other passerine birds, parasites are common. Intestinal parasites were found in 88% of common blackbirds, most frequently Isospora and Capillaria species, and more than 80% had haematozoan parasites (Leucocytozoon, Plasmodium, Haemoproteus and Trypanosoma species).
Common blackbirds spend much of their time looking for food on the ground, where they can become infested with ticks, external parasites that most commonly attach to the head of a blackbird. In France, 74% of rural blackbirds were found to be infested with Ixodes ticks, whereas only 2% of blackbirds living in urban habitats were infested. This is partly because it is more difficult for ticks to find another host on lawns and gardens in urban areas than in uncultivated rural areas, and partly because ticks are likely to be commoner in rural areas, where a variety of tick hosts, such as foxes, deer and boar, are more numerous. Although ixodid ticks can transmit pathogenic viruses and bacteria, and are known to transmit Borrelia bacteria to birds, there is no evidence that this affects the fitness of blackbirds except when they are exhausted and run down after migration.
The common blackbird is one of a number of species which has unihemispheric slow-wave sleep. One hemisphere of the brain is effectively asleep, while a low-voltage EEG, characteristic of wakefulness, is present in the other. The benefit of this is that the bird can rest in areas of high predation or during long migratory flights, but still retain a degree of alertness.
## Status and conservation
The common blackbird has an extensive range, estimated at 32.4 million square kilometres (12.5 million square miles), and a large population, including an estimated 79 to 160 million individuals in Europe alone. The species is not believed to approach the thresholds for the population decline criterion of the IUCN Red List (i.e., declining more than 30% in ten years or three generations), and is therefore evaluated as least concern. In the western Palearctic, populations are generally stable or increasing, but there have been local declines, especially on farmland, which may be due to agricultural policies that encouraged farmers to remove hedgerows (which provide nesting places), and to drain damp grassland and increase the use of pesticides, both of which could have reduced the availability of invertebrate food.
The common blackbird was introduced to Australia by a bird dealer visiting Melbourne in early 1857, and its range has expanded from its initial foothold in Melbourne and Adelaide to include all of southeastern Australia, including Tasmania and the Bass Strait islands. The introduced population in Australia is considered a pest because it damages a variety of soft fruits in orchards, parks and gardens, including berries, cherries, stone fruit and grapes. It is thought to spread weeds, such as blackberry, and may compete with native birds for food and nesting sites.
The introduced common blackbird is, together with the native silvereye (Zosterops lateralis), the most widely distributed avian seed disperser in New Zealand. Introduced there along with the song thrush (Turdus philomelos) in 1862, it has spread throughout the country up to an elevation of 1,500 metres (4,921 ft), as well as outlying islands such as the Campbell and Kermadecs. It eats a wide range of native and exotic fruit, and makes a major contribution to the development of communities of naturalised woody weeds. These communities provide fruit more suited to non-endemic native birds and naturalised birds than to endemic birds.
## In popular culture
The common blackbird was seen as a sacred though destructive bird in Classical Greek folklore, and was said to die if it consumed pomegranates. Like many other small birds, it has in the past been trapped in rural areas at its night roosts as an easily available addition to the diet, and in medieval times the practice of placing live birds under a pie crust just before serving may have been the origin of the familiar nursery rhyme:
> Sing a song of sixpence,
> A pocket full of rye;
> Four and twenty blackbirds baked in a pie!
> When the pie was opened the birds began to sing,
> Oh, wasn't that a dainty dish to set before the king?
The common blackbird's melodious, distinctive song is mentioned in the poem Adlestrop by Edward Thomas:
> And for that minute a blackbird sang
> Close by, and round him, mistier,
> Farther and farther, all the birds
> Of Oxfordshire and Gloucestershire.
In the English Christmas carol "The Twelve Days of Christmas", the line commonly sung today as "four calling birds" is believed to have originally been written in the 18th century as "four colly birds", an archaism meaning "black as coal" that was a popular English nickname for the common blackbird.
The common blackbird, unlike many black creatures, is not normally seen as a symbol of bad luck, but R. S. Thomas wrote that there is "a suggestion of dark Places about it", and it symbolised resignation in the 17th century tragic play The Duchess of Malfi; an alternate connotation is vigilance, the bird's clear cry warning of danger.
The common blackbird is the national bird of Sweden, which has a breeding population of 1–2 million pairs, and was featured on a 30 öre Christmas postage stamp in 1970; it has also featured on a number of other stamps issued by European and Asian countries, including a 1966 4d British stamp and a 1998 Irish 30p stamp. This bird—arguably—also gives rise to the Serbian name for Kosovo (and Metohija), which is the possessive adjectival form of Serbian kos ("blackbird") as in Kosovo Polje ("Blackbird Field").
A common blackbird can be heard singing on the Beatles song "Blackbird".
|
19,679 |
Mary Rose
| 1,173,726,872 |
Carrack-type warship of the English Tudor navy
|
[
"1510 in England",
"1545 in England",
"16th-century maritime incidents",
"16th-century ships",
"1971 archaeological discoveries",
"1982 in England",
"Henry VIII",
"History of archery",
"Individual sailing vessels",
"Italian War of 1542–1546",
"Museum ships in the United Kingdom",
"Protected Wrecks of England",
"Ships and vessels of the National Historic Fleet",
"Ships built in Portsmouth",
"Ships of the English navy",
"Ships preserved in museums",
"Shipwrecks in the Solent"
] |
The Mary Rose was a carrack in the English Tudor navy of King Henry VIII. She was launched in 1511 and served for 33 years in several wars against France, Scotland, and Brittany. After being substantially rebuilt in 1536, she saw her last action on 19 July 1545. She led the attack on the galleys of a French invasion fleet, but sank in the Solent, the strait north of the Isle of Wight.
The wreck of the Mary Rose was located in 1971 and was raised on 11 October 1982 by the Mary Rose Trust in one of the most complex and expensive maritime salvage projects in history. The surviving section of the ship and thousands of recovered artefacts are of great value as a Tudor period time capsule. The excavation and raising of the Mary Rose was a milestone in the field of maritime archaeology, comparable in complexity and cost to the raising of the 17th-century Swedish warship Vasa in 1961. The Mary Rose site is designated under the Protection of Wrecks Act 1973 by statutory instrument 1974/55. The wreck is a Protected Wreck managed by Historic England.
The finds include weapons, sailing equipment, naval supplies, and a wide array of objects used by the crew. Many of the artefacts are unique to the Mary Rose and have provided insights into topics ranging from naval warfare to the history of musical instruments. The remains of the hull have been on display at the Portsmouth Historic Dockyard since the mid-1980s while undergoing restoration. An extensive collection of well-preserved artefacts is on display at the Mary Rose Museum, built to display the remains of the ship and its artefacts.
Mary Rose was one of the largest ships in the English navy through more than three decades of intermittent war, and she was one of the earliest examples of a purpose-built sailing warship. She was armed with new types of heavy guns that could fire through the recently invented gun-ports. She was substantially rebuilt in 1536 and was also one of the earliest ships that could fire a broadside, although the line of battle tactics had not yet been developed. Several theories have sought to explain the demise of the Mary Rose, based on historical records, knowledge of 16th-century shipbuilding, and modern experiments. The precise cause of her sinking is subject to conflicting testimonies and a lack of conclusive evidence.
## Historical context
In the late 15th century, England was still reeling from its dynastic wars first with France and then among its ruling families back on home soil. The great victories against France in the Hundred Years' War were in the past; only the small enclave of Calais in northern France remained of the vast continental holdings of the English kings. The Wars of the Roses – the civil war between the houses of York and Lancaster – had ended with Henry VII's establishment of the House of Tudor, the new ruling dynasty of England. The ambitious naval policies of Henry V were not continued by his successors, and from 1422 to 1509 only six ships were built for the crown. The marriage alliance between Anne of Brittany and Charles VIII of France in 1491, and his successor Louis XII in 1499, left England with a weakened strategic position on its southern flank. Despite this, Henry VII managed to maintain a comparatively long period of peace and a small but powerful core of a navy.
At the onset of the early modern period, the great European powers were France, the Holy Roman Empire and Spain. All three became involved in the War of the League of Cambrai in 1508. The conflict was initially aimed at the Republic of Venice but eventually turned against France. Through the Spanish possessions in the Low Countries, England had close economic ties with the Spanish Habsburgs, and it was the young Henry VIII's ambition to repeat the glorious martial endeavours of his predecessors. In 1509, six weeks into his reign, Henry married the Spanish princess Catherine of Aragon and joined the League, intent on certifying his historical claim as king of both England and France. By 1511 Henry was part of an anti-French alliance that included Ferdinand II of Aragon, Pope Julius II and Holy Roman emperor Maximilian.
The small navy that Henry VIII inherited from his father had only two sizeable ships, the carracks Regent and Sovereign. Just months after his accession, two large ships were ordered: the Mary Rose and the Peter Pomegranate (later known as Peter after being rebuilt in 1536) of about 500 and 450 tons respectively. Which king ordered the building of the Mary Rose is unclear; although construction began during Henry VIII's reign, the plans for naval expansion could have been in the making earlier. Henry VIII oversaw the project and he ordered additional large ships to be built, most notably the Henry Grace à Dieu ("Henry by the Grace of God"), or Great Harry, at more than 1000 tons burthen. By the 1520s the English state had established a de facto permanent "Navy Royal", the organizational ancestor of the modern Royal Navy.
## Construction
Construction of the Mary Rose began on 29 January 1510 in Portsmouth and she was launched in July 1511. She was then towed to London, fitted with rigging and decking, and supplied with armaments. In addition to the structural details needed to sail, stock and arm her, the Mary Rose was equipped with flags, banners and streamers (extremely elongated flags flown from the tops of the masts) that were either painted or gilded.
Constructing a warship of the size of the Mary Rose was a major undertaking, requiring vast quantities of high-quality material. For a state-of-the-art warship, these materials were primarily oak. The total amount of timber needed for the construction can only be roughly calculated since only about one third of the ship still exists. One estimate for the number of trees is around 600 mostly large oaks, representing about 16 hectares (40 acres) of woodland.
The huge trees that had been common in Europe and the British Isles in previous centuries were by the 16th century quite rare, which meant that timbers were brought in from all over southern England. The largest timbers used in the construction were of roughly the same size as those used in the roofs of the largest cathedrals in the High Middle Ages. An unworked hull plank would have weighed over 300 kg (660 lb), and one of the main deck beams would have weighed close to three-quarters of a tonne.
### Naming
The common explanation for the ship's name is that it was inspired by Henry VIII's favourite sister, Mary Tudor, Queen of France, and the rose as the emblem of the Tudors. According to the historians David Childs, David Loades and Peter Marsden, no direct evidence exists that the ship was named after the King's sister. It was far more common at the time to give ships pious Christian names, a long-standing tradition in Western Europe, or to associate them with their royal patrons. Names like Grace Dieu (Thanks be to God) and Holighost (Holy Spirit) had been common since the 15th century, and other Tudor navy ships had names like the Regent and Three Ostrich Feathers (referring to the crest of the Prince of Wales).
The Virgin Mary is a more likely candidate for a namesake, and she was also associated with the Rosa Mystica (mystic rose). The sister ship of the Mary Rose, the Peter Pomegranate, is believed to have been named in honour of Saint Peter and the badge of Queen Catherine of Aragon, a pomegranate. According to Childs, Loades and Marsden, the two ships, which were built around the same time, were named in honour of the king and queen, respectively.
## Design
The Mary Rose was substantially rebuilt in 1536. The rebuilding turned a ship of 500 tons into one of 700 tons and added an entire extra tier of broadside guns to the old carrack-style structure. As a consequence, modern research is based mostly on interpretations of the concrete physical evidence of this version of the Mary Rose; less is known about the construction of the original 1509 design.
The Mary Rose was built according to the carrack-style with high "castles" fore and aft with a low waist of open decking in the middle. The hull has what is called a tumblehome shape, which reflects the ship's use as a platform for heavy guns: above the waterline, the hull gradually narrows to centre the weight of the higher guns, and to make boarding more difficult. Since only part of the hull has survived, it is not possible to determine many of the basic dimensions with any great accuracy. The moulded breadth, the widest point of the ship roughly above the waterline, was about 12 metres (39 feet) and the keel about 32 metres (105 feet), although the ship's overall length is uncertain.
The hull had four levels separated by three decks. Because the terminology for these was not yet standardised in the 16th century, the terms used here are those that were applied by the Mary Rose Trust. The hold lay furthest down in the ship, right above the bottom planking and below the waterline. This is where the galley was situated and the food was cooked. Directly aft of the galley was the mast step, a rebate in the centre-most timber of the keelson, right above the keel, which supported the main mast, and next to it the main bilge pump. To increase the stability of the ship, the hold was where the ballast was placed and much of the supplies were kept. Right above the hold was the orlop, the lowest deck. Like the hold, it was partitioned and was also used as a storage area for everything from food to spare sails.
Above the orlop lay the main deck, which housed the heaviest guns. The side of the hull on the main deck level had seven gunports on each side fitted with heavy lids that would have been watertight when closed. This was also the highest deck that was caulked and waterproof. Along the sides of the main deck there were cabins under the forecastle and aftercastle which have been identified as belonging to the carpenter, barber-surgeon, pilot and possibly also the master gunner and some of the officers.
The top deck in the hull structure was the upper deck (or weather deck) which was exposed to the elements in the waist. It was a dedicated fighting deck without any known partitions and a mix of heavy and light guns. Over the open waist, the upper deck was entirely covered with a boarding net, a coarse netting that served as a defence measure against boarding. Though very little of the upper deck has survived, it has been suggested that it housed the main living quarters of the crew underneath the aftercastle. A drain located in this area has been identified as a possible "piss-dale", a general urinal to complement the regular toilets which would probably have been located in the bow.
The castles of the Mary Rose had additional decks, but since almost nothing of them survives, their design has had to be reconstructed from historical records. Contemporary ships of equal size were consistently listed as having three decks in both castles. Although speculative, this layout is supported by the illustration in the Anthony Roll and the gun inventories.
During the early stages of excavation of the wreck, it was erroneously believed that the ship had originally been built with clinker (or clench) planking, a technique in which the hull consisted of overlapping planks that bore the structural strength of the ship. Cutting gunports into a clinker-built hull would have meant weakening the ship's structural integrity, and it was assumed that she was later rebuilt to accommodate a hull with carvel edge-to-edge planking with a skeletal structure to support a hull perforated with gunports. Later examination indicates that the clinker planking is not present throughout the ship; only the outer structure of the sterncastle is built with overlapping planking, though not with a true clinker technique.
### Construction method
The hull of the Mary Rose is carvel built, and the ship is an early example of this method of construction in England. Her hull shape is now known to have been set out using the three arc method – a geometric method similar to that used some two hundred years later, giving a much earlier date for this technique. This, together with studies of other ships specified in the 15th century, suggests that the three arc methodology probably already existed at the time the Mary Rose was built.
The construction sequence began with laying the keel and setting up the stem and sternpost. Master frames were set up at key stations along the length of the keel. Planking started from the keel up, with the floors being inserted in the spaces between the master frames. As planking reached the level of the ends of the floors, the first futtocks were installed, continuing the line of each frame up the hull. These frame components (floors and futtocks) were generally not fastened to each other as construction continued, demonstrating that the hull was not made by first building a complete framework and then adding the planking. Instead, planking and framing were carried out largely simultaneously, with later futtocks added as planking continued up to the weather deck level. This is in sharp contrast to the usual way of building a carvel hull today. The construction sequence used for the Mary Rose was common during the lengthy transition period in which carvel became the main method of shipbuilding.
### Sails and rigging
Although only the lower fittings of the rigging survive, a 1514 inventory and the only known contemporary depiction of the ship from the Anthony Roll have been used to determine how the propulsion system of the Mary Rose was designed. Nine, or possibly ten, sails were flown from four masts and a bowsprit: the foremast had two square sails and the mainmast three; the mizzen mast had a lateen sail and a small square sail; the bonaventure mizzen had at least one lateen sail and possibly also a square sail; and the bowsprit flew a small square spritsail. According to the Anthony Roll illustration (see top of this section), the yards (the spars from which the sails were set) on the foremast and mainmast were also equipped with sheerhooks – twin curved blades sharpened on the inside – that were intended to cut an enemy ship's rigging during boarding actions.
The sailing capabilities of the Mary Rose were commented on by her contemporaries and were once even put to the test. In March 1513 a contest was arranged off The Downs, near the east coast of Kent, in which she raced against nine other ships. She won the contest, and Admiral Edward Howard described her enthusiastically as "the noblest ship of sayle [of any] gret ship, at this howr, that I trow [believe] be in Cristendom". Several years later, while sailing between Dover and The Downs, Vice-Admiral William Fitzwilliam noted that both the Henry Grace à Dieu and the Mary Rose performed very well, riding steadily in rough seas, and that it would have been a "hard chose" between the two. Modern experts have been more sceptical of her sailing qualities, believing that ships at this time were almost incapable of sailing close to the wind, and describing the handling of the Mary Rose as being like "a wet haystack".
### Armament
The Mary Rose represented a transitional ship design in naval warfare. Since ancient times, war at sea had been fought much as on land: with melee weapons and bows and arrows, only on floating wooden platforms rather than battlefields. Though the introduction of guns was a significant change, it only slowly changed the dynamics of ship-to-ship combat. As guns became heavier and able to take more powerful gunpowder charges, they needed to be placed lower in the ship, closer to the water line. Gunports cut in the hull of ships had been introduced as early as 1501, only about a decade before the Mary Rose was built.
This made broadsides – coordinated volleys from all the guns on one side of a ship – possible, at least in theory, for the first time in history. Naval tactics throughout the 16th century and well into the 17th century focused on countering the oar-powered galleys that were armed with heavy guns in the bow, facing forwards, which were aimed by turning the entire ship against its target. Combined with inefficient gunpowder and the difficulties inherent in firing accurately from moving platforms, this meant that boarding remained the primary tactic for decisive victory throughout the 16th century.
#### Bronze and iron guns
As the Mary Rose was built and served during a period of rapid development of heavy artillery, her armament was a mix of old designs and innovations. The heavy armament combined older-type wrought iron guns and cast bronze guns, which differed considerably in size, range and design. The large iron guns were made up of staves or bars welded into cylinders and reinforced by shrinking iron hoops; they were breech-loaded and equipped with simpler gun-carriages made from hollowed-out elm logs with only one pair of wheels, or without wheels entirely.
The bronze guns were cast in one piece and rested on four-wheel carriages which were essentially the same as those used until the 19th century. The breech-loaders were cheaper to produce and both easier and faster to reload, but could take less powerful charges than cast bronze guns. Generally, the bronze guns used cast iron shot and were more suited to penetrate hull sides while the iron guns used stone shot that would shatter on impact and leave large, jagged holes, but both could also fire a variety of ammunition intended to destroy rigging and light structure or injure enemy personnel.
The majority of the guns were small iron guns with short range that could be aimed and fired by a single person. The two most common types were the bases, breech-loading swivel guns most likely placed in the castles, and hailshot pieces, small muzzle-loaders with rectangular bores and fin-like protrusions that were used to support the guns against the railing and let the ship's structure take the force of the recoil. Though their design is unknown, two top pieces appear in a 1546 inventory (finished after the sinking); they were probably similar to a base, but placed in one or more of the fighting tops.
The ship went through several changes in her armament throughout her career, most significantly accompanying her "rebuilding" in 1536 (see below), when the number of anti-personnel guns was reduced and a second tier of carriage-mounted long guns fitted. There are three inventories that list her guns, dating to 1514, 1540 and 1546. Together with records from the armoury at the Tower of London, these show how the configuration of guns changed as gun-making technology evolved and new classifications were invented. In 1514, the armament consisted mostly of anti-personnel guns like the larger breech-loading iron murderers and the small serpentines, demi-slings and stone guns.
Only a handful of guns in the first inventory were powerful enough to hole enemy ships, and most would have been supported by the ship's structure rather than resting on carriages. The inventories of both the Mary Rose and the Tower had changed radically by 1540. There were now the new cast bronze cannons, demi-cannons, culverins and sakers and the wrought iron port pieces (a name that indicated they fired through ports), all of which required carriages, had longer range and were capable of doing serious damage to other ships. The analysis of the 1514 inventory combined with hints of structural changes in the ship both indicate that the gunports on the main deck were indeed a later addition.
Various types of ammunition could be used for different purposes: plain spherical shot of stone or iron smashed hulls, spiked bar shot and shot linked with chains would tear sails or damage rigging, and canister shot packed with sharp flints produced a devastating shotgun effect. Trials made with replicas of culverins and port pieces showed that they could penetrate wood the same thickness as the Mary Rose's hull planking, indicating an effective stand-off range of at least 90 m (300 ft). The port pieces proved particularly efficient at smashing large holes in wood when firing stone shot and were a devastating anti-personnel weapon when loaded with flakes or pebbles.
#### Hand-held weapons
To defend against being boarded, Mary Rose carried large stocks of melee weapons, including pikes and bills; 150 of each kind were stocked on the ship according to the Anthony Roll, a figure confirmed roughly by the excavations. Swords and daggers were personal possessions and not listed in the inventories, but the remains of both have been found in great quantities, including the earliest dated example of a British basket-hilted sword.
A total of 250 longbows were carried on board, and 172 of these have so far been found, as well as almost 4,000 arrows, bracers (arm guards) and other archery-related equipment. Longbow archery in Tudor England was mandatory for all able adult men, and despite the introduction of field artillery and handguns, they were used alongside new missile weapons in great quantities. On the Mary Rose, the longbows could only have been drawn and shot properly from behind protective panels in the open waist or from the top of the castles as the lower decks lacked sufficient headroom. There were several types of bows of various size and range. Lighter bows would have been used as "sniper" bows, while the heavier design could possibly have been used to shoot fire arrows.
The inventories of both 1514 and 1546 also list several hundred heavy darts and lime pots that were designed to be thrown onto the deck of enemy ships from the fighting tops, although no physical evidence of either of these weapon types has been identified. Of the 50 handguns listed in the Anthony Roll, the complete stocks of five matchlock muskets and fragments of another eleven have been found. They had been manufactured mainly in Italy, with some originating from Germany. Found in storage were several gunshields, a rare type of firearm consisting of a wooden shield with a small gun fixed in the middle.
### Crew
Throughout her 33-year career, the crew of the Mary Rose changed several times and varied considerably in size. She would have had a minimal skeleton crew of 17 men or fewer in peacetime and when she was "laid up in ordinary" (in reserve). The average wartime manning would have been about 185 soldiers, 200 sailors, 20–30 gunners and an assortment of other specialists such as surgeons, trumpeters and members of the admiral's staff, for a total of 400–450 men. When taking part in land invasions or raids, such as in the summer of 1512, the number of soldiers could have swelled to just over 400 for a combined total of more than 700. Even with the normal crew size of around 400, the ship was quite crowded, and with additional soldiers would have been extremely cramped.
Little is known of the identities of the men who served on the Mary Rose, even when it comes to the names of the officers, who would have belonged to the gentry. Two admirals and four captains (including Edward and Thomas Howard, who served both positions) are known through records, as well as a few ship masters, pursers, master gunners and other specialists. Forensic science has been used by artists to create reconstructions of faces of eight crew members, and the results were publicised in May 2013. In addition, researchers have extracted DNA from remains in the hopes of identifying origins of crew, and potentially living descendants.
Of the vast majority of the crewmen, soldiers, sailors and gunners alike, nothing has been recorded. The only source of information for these men has been through osteological analysis of the human bones found at the wrecksite. An approximate composition of some of the crew has been conjectured based on contemporary records. The Mary Rose would have carried a captain, a master responsible for navigation, and deck crew. There would also have been a purser responsible for handling payments, a boatswain, the captain's second in command, at least one carpenter, a pilot in charge of navigation, and a cook, all of whom had one or more assistants (mates). The ship was also staffed by a barber-surgeon who tended to the sick and wounded, along with an apprentice or mate and possibly also a junior surgeon. The only positively identified person who went down with the ship was Vice-Admiral George Carew. McKee, Stirland and several other authors have also named Roger Grenville, father of Richard Grenville of the Elizabethan-era Revenge, captain during the final battle, although the accuracy of the sourcing for this has been disputed by maritime archaeologist Peter Marsden.
The bones of a total of 179 people were found during the excavations of the Mary Rose, including 92 "fairly complete skeletons", more or less complete collections of bones associated with specific individuals. Analysis of these has shown that crew members were all male, most of them young adults. Some were no more than 11–13 years old, and the majority (81%) under 30. They were mainly of English origin and, according to archaeologist Julie Gardiner, they most likely came from the West Country; many following their aristocratic masters into maritime service. There were also a few people from continental Europe. An eyewitness testimony right after the sinking refers to a survivor who was a Fleming, and the pilot may very well have been French. Analysis of oxygen isotopes in teeth indicates that some were also of southern European origin. At least one crewmember was of African ancestry. In general they were strong, well-fed men, but many of the bones also reveal tell-tale signs of childhood diseases and a life of grinding toil. The bones also showed traces of numerous healed fractures, probably the result of on-board accidents.
There are no extant written records of the make-up of the broader categories of soldiers and sailors, but since the Mary Rose carried some 300 longbows and several thousand arrows there had to be a considerable proportion of longbow archers. Examination of the skeletal remains has found that there was a disproportionate number of men with a condition known as os acromiale, affecting their shoulder blades. This condition is known among modern elite archery athletes and is caused by placing considerable stress on the arm and shoulder muscles, particularly of the left arm that is used to hold the bow to brace against the pull on the bowstring. Among the men who died on the ship it was likely that some had practised using the longbow since childhood, and served on board as specialist archers.
A group of six skeletons was found close together near one of the 2-tonne bronze culverins on the main deck near the bow. Fusing of parts of the spine and ossification (the growth of new bone) on several vertebrae showed that all but one of these crewmen had been strong, well-muscled men engaged in heavy pulling and pushing; the exception was possibly a "powder monkey" not involved in heavy work. They have been tentatively identified as a complete gun crew, all of whom died at their battle station.
## Military career
### First French war
The Mary Rose first saw battle in 1512, in a joint naval operation with the Spanish against the French. The English were to engage the French and Breton fleets in the English Channel while the Spanish attacked them in the Bay of Biscay and then struck at Gascony. The 35-year-old Sir Edward Howard was appointed Lord High Admiral in April and chose the Mary Rose as his flagship. His first mission was to clear the seas of French naval forces between England and the northern coast of Spain to allow for the landing of supporting troops near the French border at Fuenterrabia. The fleet consisted of 18 ships, among them the great ships Regent and Peter Pomegranate, carrying over 5,000 men. Howard's expedition led to the capture of twelve Breton ships and a four-day raiding tour of Brittany, where English forces successfully fought local forces and burned numerous settlements.
The fleet returned to Southampton in June, where it was visited by King Henry. In August the fleet sailed for Brest, where it encountered a joint, but ill-coordinated, French-Breton fleet at the battle of St. Mathieu. The English, with one of the great ships in the lead (according to Marsden, the Mary Rose), battered the French ships with heavy gunfire and forced them to retreat. The Breton flagship Cordelière put up a fight and was boarded by the 1,000-ton Regent. By accident, or through the unwillingness of the Breton crew to surrender, the powder magazine of the Cordelière caught fire and blew up in a violent explosion, setting fire to the Regent and eventually sinking her. About 180 English crew members escaped by jumping into the sea; a handful of Bretons survived, only to be captured. The captain of the Regent, 600 soldiers and sailors, the High Admiral of France and the steward of the town of Morlaix were killed in the incident, making it the focal point of several contemporary chronicles and reports. On 11 August, the English burnt 27 French ships, captured another five and landed forces near Brest to raid and take prisoners, but storms forced the fleet back to Dartmouth in Devon and then to Southampton for repairs.
In early 1513, the Mary Rose was once more chosen by Howard as the flagship for an expedition against the French. Before seeing action, she took part in a race against other ships in which she was deemed one of the most nimble, and the fastest of the great ships in the fleet (see details under "Sails and rigging"). On 11 April, Howard's force arrived off Brest, only to see a small enemy force join the larger force in the safety of Brest harbour and its fortifications. The French had recently been reinforced by a force of galleys from the Mediterranean, which sank one English ship and seriously damaged another. Howard landed forces near Brest, but made no headway against the town and was by now running low on supplies. Attempting to force a victory, he led a force of small oared vessels in a daring frontal attack on the French galleys on 25 April. Howard himself managed to reach the ship of the French admiral, Prégent de Bidoux, and led a small party to board it. The French fought back fiercely and cut the cables that attached the two ships, separating Howard from his men. This left him at the mercy of the soldiers aboard the galley, who promptly killed him.
Demoralised by the loss of its admiral and seriously short of food, the fleet returned to Plymouth. Thomas Howard, elder brother of Edward, was appointed the new Lord Admiral and was set to the task of arranging another attack on Brittany. The fleet was unable to mount the planned attack because of adverse winds and great difficulties in supplying the ships adequately, and the Mary Rose took up winter quarters in Southampton. In August the Scots joined France in the war against England, but were dealt a crushing defeat at the Battle of Flodden on 9 September 1513. A follow-up attack in early 1514 was supported by a naval force that included the Mary Rose, but without any known engagements. The French and English mounted raids on each other throughout that summer, but achieved little, and both sides were by then exhausted. By autumn the war was over, and a peace treaty was sealed by the marriage of Henry's sister, Mary, to the French king Louis XII.
After the peace, the Mary Rose was placed in reserve, "in ordinary". She was laid up for maintenance along with her sister ship the Peter Pomegranate in July 1514. In 1518 she received a routine repair and caulking (waterproofing with tar and oakum, old rope fibres) and was then assigned a small skeleton crew who lived on board until 1522. She served briefly on a mission with other warships to "scour the seas" in preparation for Henry VIII's journey across the Channel to the summit with the French king Francis I at the Field of the Cloth of Gold in June 1520.
### Second French war
In 1522, England was once again at war with France because of a treaty with the Holy Roman Emperor Charles V. The plan was for an attack on two fronts with an English thrust in northern France. The Mary Rose participated in the escort transport of troops in June 1522, and by 1 July the Breton port of Morlaix was captured. The fleet sailed home and the Mary Rose berthed for the winter in Dartmouth. The war raged on until 1525 and saw the Scots join the French side. Though Charles Brandon came close to capturing Paris in 1523, there was little gained either against France or Scotland throughout the war. With the defeat of the French army and capture of Francis I by Charles V's forces at the Battle of Pavia on 24 February 1525, the war was effectively over without any major gains or major victories for the English side.
### Maintenance and "in ordinary"
The Mary Rose was kept in reserve from 1522 to 1545. She was once more caulked and repaired in 1527 in a newly dug dock at Portsmouth, and her longboat was repaired and trimmed. Little documentation of the Mary Rose between 1528 and 1539 survives. A document written by Thomas Cromwell in 1536 specifies that the Mary Rose and six other ships were "made new" during his service under the king, though it is unclear which years he was referring to and what "made new" actually meant. Another document, from January 1536 and by an anonymous author, states that the Mary Rose and other ships were "new made", and dating of timbers from the ship confirms that some type of repair was done in 1535 or 1536. This would have coincided with the controversial dissolution of the monasteries, which resulted in a major influx of funds into the royal treasury. The nature and extent of this repair is unknown. Many experts, including Margaret Rule, the project leader for the raising of the Mary Rose, have assumed that it meant a complete rebuilding from clinker planking to carvel planking, and that it was only after 1536 that the ship took on the form that it had when it sank and that was eventually recovered in the 20th century. Marsden has speculated that it could even mean that the Mary Rose was originally built in a style closer to that of 15th-century ships, with a rounded, rather than square, stern and without the main deck gunports.
### Third French war
Henry's complicated marital situation and his high-handed dissolution of the monasteries angered the Pope and Catholic rulers throughout Europe, which increased England's diplomatic isolation. In 1544 Henry had agreed to attack France together with Emperor Charles V, and English forces captured Boulogne at great cost in September, but soon England was left in the lurch after Charles had achieved his objectives and brokered a separate peace.
In May 1545, the French had assembled a large fleet in the estuary of the Seine with the intent of landing troops on English soil. Estimates of the size of the fleet vary considerably: between 123 and 300 vessels according to French sources, and up to 226 sailing ships and galleys according to the chronicler Edward Hall. In addition to the massive fleet, 50,000 troops were assembled at Havre de Grâce (modern-day Le Havre). An English force of 160 ships and 12,000 troops under Viscount Lisle was ready at Portsmouth by early June, before the French were ready to set sail, and an ineffective pre-emptive strike was made in the middle of the month. In early July the huge French force under the command of Admiral Claude d'Annebault set sail for England and entered the Solent unopposed with 128 ships on 16 July. The English had around 80 ships with which to oppose the French, including the flagship Mary Rose. But since they had virtually no heavy galleys, the vessels that were at their best in sheltered waters like the Solent, the English fleet promptly retreated into Portsmouth harbour.
### Battle of the Solent
The English were becalmed in port and unable to manoeuvre. On 19 July 1545, the French galleys advanced on the immobilised English fleet, and initially threatened to destroy a force of 13 small galleys, or "rowbarges", the only ships that were able to move against them without a wind. The wind picked up and the sailing ships were able to go on the offensive before the oared vessels were overwhelmed. Two of the largest ships, the Henry Grace à Dieu and the Mary Rose, led the attack on the French galleys in the Solent.
Early in the battle something went wrong. While engaging the French galleys the Mary Rose suddenly heeled (leaned) heavily over to her starboard (right) side and water rushed in through the open gunports. The crew was powerless to correct the sudden imbalance, and could only scramble for the safety of the upper deck as the ship began to sink rapidly. As she leaned over, equipment, ammunition, supplies and storage containers shifted and came loose, adding to the general chaos. The massive port side brick oven in the galley collapsed completely and the huge 360-litre (90 gallon) copper cauldron was thrown onto the orlop deck above. Heavy guns came free and slammed into the opposite side, impeding escape or crushing men beneath them.
For those who were not injured or killed outright by moving objects, there was little time to reach safety, especially for the men who were manning the guns on the main deck or fetching ammunition and supplies in the hold. The companionways that connected the decks with one another would have become bottlenecks for fleeing men, something indicated by the positioning of many of the skeletons recovered from the wreck. What turned the sinking into a major tragedy was the anti-boarding netting that covered the upper decks in the waist (the midsection of the ship) and the sterncastle. With the exception of the men who were stationed in the tops in the masts, most of those who managed to get up from below deck were trapped under the netting; they would have been in view of the surface, and their colleagues above, but with little or no chance to break through, and were dragged down with the ship. Out of a crew of at least 400, fewer than 35 escaped, a casualty rate of over 90%.
## Causes of sinking
### Contemporary accounts
Many accounts of the sinking have been preserved, but the only confirmed eyewitness account is the testimony of a surviving Flemish crewman, written down by the Holy Roman Emperor's ambassador François van der Delft in a letter dated 24 July. According to the unnamed Fleming, the ship had fired all of its guns on one side and was turning to present the guns on the other side to the enemy when she was caught in a strong gust of wind, heeled and took in water through the open gunports. In a letter to William Paget dated 23 July, former Lord High Admiral John Russell claimed that the ship had been lost because of "rechenes and great negligence". Three years after the sinking, Hall's Chronicle attributed the sinking to "to[o] much foly ... for she was laden with much ordinaunce, and the portes left open, which were low, & the great ordinaunce unbreached, so that when the ship should turne, the water entered, and sodainly she sanke."
Later accounts repeat the explanation that the ship heeled over while going about and that the ship was brought down because of the open gunports. A biography of Peter Carew, brother of George Carew, written by John Hooker sometime after 1575, gives the same reason for the sinking, but adds that insubordination among the crew was to blame. The biography claims that George Carew noted that the Mary Rose showed signs of instability as soon as her sails were raised. George's uncle Gawen Carew had passed by with his own ship the Matthew Gonson during the battle to inquire about the situation of his nephew's ship. In reply he was told "that he had a sorte of knaves whom he could not rule". Contrary to all other accounts, Martin du Bellay, a French cavalry officer who was present at the battle, stated that the Mary Rose had been sunk by French guns.
### Modern theories
The most common explanation for the sinking among modern historians is that the ship was unstable for a number of reasons; when a strong gust of wind hit the sails at a critical moment, the open gunports proved fatal, and the ship flooded and quickly foundered. Coates has offered a variant of this hypothesis that explains why a ship which had served for several decades without sinking, and which had even fought in actions in the rough seas off Brittany, unexpectedly foundered: the ship had accumulated additional weight over her years in service and had finally become unseaworthy. The claim that the ship was turning after firing all the cannons on one side has been questioned by Marsden after examination of guns recovered in both the 19th and 20th centuries; guns from both sides were found still loaded. This has been interpreted to mean that something else could have gone wrong, since it is assumed that an experienced crew would not have failed to secure the gunports before making a potentially risky turn.
The most recent surveys of the ship indicate that the ship was modified late in her career and have lent support to the idea that the Mary Rose was altered too much to be properly seaworthy. Marsden has suggested that the weight of additional heavy guns would have increased her draught so much that the waterline was less than one metre (c. 3 feet) from the gunports on the main deck.
Peter Carew's claim of insubordination has been given support by James Watt, former Medical Director-General of the Royal Navy, based on records of an epidemic of dysentery in Portsmouth which could have rendered the crew incapable of handling the ship properly, while historian Richard Barker has suggested that the crew knew the ship was an accident waiting to happen and therefore balked and refused to follow orders. Marsden has noted that the Carew biography is in some details inconsistent with the sequence of events reported by both French and English eyewitnesses. It also reports that there were 700 men on board, an unusually high number. The length of time between the sinking and the writing of the biography may mean that it was embellished to add a dramatic touch. The report of French galleys sinking the Mary Rose, as stated by Martin du Bellay, has been described as "the account of a courtesan" by naval historian Maurice de Brossard. Du Bellay and his two brothers were close to King Francis I, and du Bellay had much to gain from portraying the sinking as a French victory. English sources, even if biased, would have had nothing to gain from portraying the sinking as the result of crew incompetence rather than conceding a victory to the much-feared gun galleys.
Dominic Fontana, a geographer at the University of Portsmouth, has voiced support for du Bellay's version of the sinking based on the battle as it is depicted in the Cowdray Engraving, and modern GIS analysis of the modern scene of the battle. By plotting the fleets and calculating the conjectured final manoeuvres of the Mary Rose, Fontana reached the conclusion that the ship had been hit low in the hull by the galleys and was destabilised after taking in water. He has interpreted the final heading of the ship straight due north as a failed attempt to reach the shallows at Spitbank only a few hundred metres away. This theory has been given partial support by Alexzandra Hildred, one of the experts who has worked with the Mary Rose, though she has suggested that the close proximity to Spitbank could also indicate that the sinking occurred while trying to make a hard turn to avoid running aground.
### Experiments
In 2000, the Channel 4 television programme What Sank the Mary Rose? attempted to investigate the suggested causes of her sinking by means of experiments with scale models of the ship, using metal weights to simulate the presence of troops on the upper decks. Initial tests showed that the ship was able to make the turn described by eyewitnesses without capsizing. In later tests, a fan was used to create a breeze similar to the one reported to have suddenly sprung up on the day of the sinking as the Mary Rose made the turn. As the model turned, the breeze on the upper works forced it to heel more than in calm conditions, pushing the main deck gunports below the waterline and sinking the model within a few seconds. The sequence of events closely followed what eyewitnesses had reported, particularly the suddenness with which the ship sank.
## History as a shipwreck
A salvage attempt was ordered by Secretary of State William Paget only days after the sinking, and Charles Brandon, the king's brother-in-law, took charge of practical details. The operation followed the standard procedure for raising ships in shallow waters: strong cables were attached to the sunken ship and fastened to two empty ships, or hulks. At low tide, the ropes were pulled taut with capstans. When the high tide came in, the hulks rose and with them the wreck. It would then be towed into shallower water and the procedure repeated until the whole ship could be raised completely.
A list of necessary equipment was compiled by 1 August and included, among other things, massive cables, capstans, pulleys, and 40 pounds of tallow for lubrication. The proposed salvage team comprised 30 Venetian mariners and a Venetian carpenter with 60 English sailors to serve them. The two ships to be used as hulks were Jesus of Lübeck and Samson, each of 700 tons burthen and similar in size to the Mary Rose. Brandon was so confident of success that he reassured the king that it would only be a matter of days before they could raise the Mary Rose. The optimism proved unfounded. Since the ship had settled at a 60-degree angle to starboard, much of it was stuck deep in the clay of the seabed. This made it virtually impossible to pass cables under the hull and required far more lifting power than if the ship had settled on a hard seabed. An attempt to secure cables to the main mast appears only to have resulted in its being snapped off.
The project was successful only in raising rigging, some guns and other items. At least two other salvage teams in 1547 and 1549 received payment for raising more guns from the wreck. Despite the failure of the first salvage operation, there was still lingering belief in the possibility of retrieving the Mary Rose at least until 1546, when she was presented as part of the illustrated list of English warships called the Anthony Roll. When all hope of raising the complete ship was finally abandoned is not known. It could have been after Henry VIII's death in January 1547 or even as late as 1549, when the last guns were brought up. The Mary Rose was remembered well into the reign of Elizabeth I, and according to one of the queen's admirals, William Monson (1569–1643), the wreck was visible from the surface at low tide in the late 16th century.
### Deterioration
After the sinking, the partially buried wreck created a barrier at a right angle to the currents of the Solent. Two scour pits, large underwater ditches, formed on either side of the wreck while silt and seaweed were deposited inside the ship. A deep but narrow pit formed on the upward-tilting port side, while a shallower, broader pit formed on the starboard side, which had been mostly buried by the force of the impact. The abrasive action of sand and silt carried by the currents and the activity of fungi, bacteria and wood-boring crustaceans and molluscs, such as the teredo "shipworm", began to break down the structure of the ship. Eventually the exposed wooden structure was weakened and gradually collapsed. The timbers and contents of the port side were either deposited in the scour pits and remaining ship structure or carried off by the currents. Following the collapse of the exposed parts of the ship, the site was levelled with the seabed and gradually covered by layers of sediment, concealing most of the remaining structure. During the 16th century, a hard layer of compacted clay and crushed shells formed over the ship, stabilising the site and sealing the Tudor-era deposits. Further layers of soft silt covered the site during the 18th and 19th centuries, but frequent changes in the tidal patterns and currents in the Solent occasionally exposed some of the timbers, leading to the wreck's accidental rediscovery in 1836 and aiding in locating it in 1971. After the ship had been raised, it was determined that about 40% of the original structure had survived.
### Rediscovery in 19th century
In mid-1836, a group of five fishermen caught their nets on timbers protruding from the bottom of the Solent. They contacted a diver to help them remove the hindrance, and on 10 June, Henry Abbinett became the first person to see the Mary Rose in almost 300 years. Later, two other professional divers, John Deane and William Edwards, were employed. Using a recently invented rubber suit and metal diving helmet, Deane and Edwards began to examine the wreck and salvage items from it. Along with an assortment of timbers and wooden objects, including several longbows, they brought up several bronze and iron guns, which were sold to the Board of Ordnance for over £220. Initially, this caused a dispute between Deane (who had also brought in his brother Charles into the project), Abbinett and the fishermen who had hired them. The matter was eventually settled by allowing the fishermen a share of the proceeds from the sale of the first salvaged guns, while Deane received exclusive salvage rights at the expense of Abbinett. The wreck was soon identified as the Mary Rose from the inscriptions of one of the bronze guns manufactured in 1537.
The identification of the ship led to high public interest in the salvage operation and caused a great demand for the objects that were brought up. Though many of the objects could not be properly conserved at the time and subsequently deteriorated, many were documented with pencil sketches and watercolour drawings, which survive to this day. John Deane ceased working on the wreck in 1836, but returned in 1840 with new, more destructive methods. With the help of condemned bomb shells filled with gunpowder acquired from the Ordnance Board, he blasted his way into parts of the wreck. Fragments of bombs and traces of blasting craters were found during the modern excavations, but there was no evidence that Deane managed to penetrate the hard layer that had sealed off the Tudor levels. Deane reported retrieving a bilge pump and the lower part of the main mast, both of which would have been located inside the ship. The recovery of small wooden objects like longbows suggests that Deane did manage to penetrate the Tudor levels at some point, though this has been disputed by the excavation project leader Margaret Rule. Newspaper reports on Deane's diving operations in October 1840 report that the ship was clinker built, but since the sterncastle is the only part of the ship with this feature, an alternative explanation has been suggested: Deane did not penetrate the hard shelly layer that covered most of the ship, but managed only to get into remains of the sterncastle that today no longer exist. Despite the rough handling by Deane, the Mary Rose escaped the wholesale destruction by giant rakes and explosives that was the fate of other wrecks in the Solent (such as HMS Royal George).
### Modern rediscovery
The modern search for the Mary Rose was initiated by the Southsea branch of the British Sub-Aqua Club in 1965 as part of a project to locate shipwrecks in the Solent. The project was under the leadership of historian, journalist and amateur diver Alexander McKee. Another group, led by Lieutenant-Commander Alan Bax of the Royal Navy and sponsored by the Committee for Nautical Archaeology in London, also formed a search team. Initially the two teams had differing views on where to find the wreck, but they eventually joined forces. In February 1966 a chart from 1841 was found that marked the positions of the Mary Rose and several other wrecks. The charted position coincided with a trench (one of the scour pits) that had already been located by McKee's team, and a definite location was finally established 3 km (1.9 mi) south of the entrance to Portsmouth Harbour, in water with a depth of 11 m (36 ft) at low tide. Diving on the site began in 1966, and a sonar scan by Harold Edgerton in 1967–68 revealed some type of buried feature. In 1970 a loose timber was located, and on 5 May 1971, the first structural details of the buried hull were identified after they were partially uncovered by winter storms.
A major problem for the team from the start was that wreck sites in the UK lacked any legal protection from plunderers and treasure hunters. Sunken ships, having once been moving objects, were legally treated as chattels and were awarded to those who could first raise them. The Merchant Shipping Act of 1894 also stipulated that any objects raised from a wreck should be auctioned off to finance the salvage operations, and there was nothing to prevent anyone from "stealing" the wreck and making a profit. The problem was handled by forming an organisation, the Mary Rose Committee, whose aim was "to find, excavate, raise and preserve for all time such remains of the ship Mary Rose as may be of historical or archaeological interest".
To keep intruders at bay, the Committee arranged a lease of the seabed where the wreck lay from the Portsmouth authorities, thereby discouraging anyone from trespassing on the underwater property. In hindsight this was only a legalistic charade with little chance of holding up in a court of law, but in combination with secrecy as to the exact location of the wreck, it saved the project from interference. It was not until the passing of the Protection of Wrecks Act on 5 February 1973 that the Mary Rose was declared a wreck of national historic interest with full legal protection from any disturbance by commercial salvage teams. Despite this, for years after the passing of the 1973 act and the excavation of the ship, lingering conflicts with salvage legislation remained a threat to the Mary Rose project, as "personal" finds such as chests, clothing and cooking utensils risked being confiscated and auctioned off.
#### Survey and excavation
Following the discovery of the wreck in 1971, the project became known to the general public and received increasing media attention. This helped bring in more donations and equipment, primarily from private sources. By 1974 the committee had representatives from the National Maritime Museum, the Royal Navy, the BBC and local organisations. In 1974 the project received royal patronage from Prince Charles, who participated in dives on the site. This attracted yet more publicity, and also more funding and assistance. The initial aims of the Mary Rose Committee were now more officially and definitely confirmed. The committee had become a registered charity in 1974, which made it easier to raise funds, and the application for excavation and raising of the ship had been officially approved by the UK government.
By 1978 the initial excavation work had uncovered a complete and coherent site with an intact ship structure and the orientation of the hull had been positively identified as being on an almost straight northerly heading with a 60-degree heel to starboard and a slight downward tilt towards the bow. As no records of English shipbuilding techniques used in vessels like the Mary Rose survive, excavation of the ship would allow for a detailed survey of her design and shed new light on the construction of ships of the era. A full excavation also meant removing the protective layers of silt that prevented the remaining ship structure from being destroyed through biological decay and the scouring of the currents; the operation had to be completed within a predetermined timespan of a few years or it risked irreversible damage. It was also considered desirable to recover and preserve the remains of the hull if possible. For the first time, the project was faced with the practical difficulties of actually raising, conserving and preparing the hull for public display.
To handle this new, considerably more complex and expensive task, it was decided that a new organisation was needed. The Mary Rose Trust, a limited charitable trust with representatives from many organisations, would handle the need for a larger operation and a large infusion of funds. In 1979 a new diving vessel was purchased to replace the 12 m (39 ft) catamaran Roger Grenville, which had been used since 1971. The choice fell on the salvage vessel Sleipner, the same craft that had been used as a platform for diving operations on the Vasa. The project went from a team of only twelve volunteers working four months a year to over 50 individuals working almost around the clock nine months a year. In addition, there were over 500 volunteer divers and a laboratory staff of about 70 that ran the shore base and conservation facilities. During the four diving seasons from 1979 to 1982, over 22,000 diving hours were spent on the site, an effort that amounted to 11.8 man-years.
#### Raising the ship
Raising the Mary Rose meant overcoming delicate problems that had never been encountered before. The raising of the Swedish warship Vasa during 1959–61 was the only comparable precedent, but it had been a relatively straightforward operation since the hull was completely intact and rested upright on the seabed. It had been raised with basically the same methods as were in use in Tudor England: cables were slung under the hull and attached to two pontoons on either side of the ship which was then gradually raised and towed into shallower waters. Only one-third of the Mary Rose was intact and she lay deeply embedded in mud. If the hull were raised in the conventional way, there was no guarantee that it would have enough structural strength to hold together out of water. Many suggestions for raising the ship were discarded, including the construction of a cofferdam around the wreck site, filling the ship with small buoyant objects (such as ping-pong balls) or even pumping brine into the seabed and freezing it so that it would float and take the hull with it. After lengthy discussions it was decided in February 1980 that the hull would first be emptied of all its contents and strengthened with steel braces and frames. It would then be lifted to the surface with floating sheerlegs attached to nylon strops passing under the hull and transferred to a cradle. It was also decided that the ship would be recovered before the end of the diving season in 1982. If the wreck stayed uncovered any longer it risked irreversible damage from biological decay and tidal scouring.
During the last year of the operation, the massive scope of full excavation and raising was beginning to take its toll on those closely involved in the project. In May 1981, Alexander McKee voiced concerns about the method chosen for raising the timbers and openly questioned Margaret Rule's position as excavation leader. McKee felt ignored in what he viewed as a project where he had always played a central role, both as the initiator of the search for the Mary Rose and other ships in the Solent, and as an active member throughout the diving operations. He had several supporters who all pointed to the risk of the project's turning into an embarrassing failure if the ship were damaged during raising operations. To address these concerns it was suggested that the hull should be placed on top of a supporting steel cradle underwater. This would avoid the inherent risks of damaging the wooden structure if it were lifted out of the water without appropriate support. The idea of using nylon strops was also discarded in favour of drilling holes through the hull at 170 points and passing iron bolts through them to allow the attachment of wires connected to a lifting frame.
In the spring of 1982, after three intense seasons of archaeological underwater work, preparations began for raising the ship. The operation soon ran into problems: early on there were difficulties with the custom-made lifting equipment; the method of lifting the hull had to be considerably altered as late as June. After the frame was properly attached to the hull, it was slowly jacked up on four legs to pull the ship off the seabed. The massive crane of the barge Tog Mor then moved the frame and hull, transferring them underwater to the specially designed cradle, which was padded with water-filled bags. On the morning of 11 October 1982, the final lift of the entire package of cradle, hull and lifting frame began. It was watched by the team, Prince Charles and other spectators in boats around the site. At 9:03 am, the first timbers of the Mary Rose broke the surface. A second set of bags under the hull was inflated with air, to cushion the waterlogged wood. Finally, the whole package was placed on a barge and taken to the shore. Though eventually successful, the operation was close to foundering on two occasions; first when one of the supporting legs of the lifting frame was bent and had to be removed and later when a corner of the frame, with "an unforgettable crunch", slipped more than a metre (3 feet) and came close to crushing part of the hull.
## Archaeology
As one of the most ambitious and expensive projects in the history of maritime archaeology, the Mary Rose project broke new ground within this field in the UK. Besides becoming one of the first wrecks to be protected under the new Protection of Wrecks Act in 1973, it also created several new precedents. It was the first time that a British privately funded project was able to apply modern scientific standards fully and without having to auction off part of the findings to finance its activities; where previous projects often had to settle for just a partial recovery of finds, everything found in connection with the Mary Rose was recovered and recorded. The raising of the vessel made it possible to establish the first historic shipwreck museum in the UK to receive government accreditation and funding. The excavation of the Mary Rose wreck site proved that it was possible to achieve a level of exactness in underwater excavations comparable to those on dry land.
Throughout the 1970s, the Mary Rose was meticulously surveyed, excavated and recorded with the latest methods within the field of maritime archaeology. Working in an underwater environment meant that principles of land-based archaeology did not always apply. Mechanical excavators, airlifts and suction dredges were used in the process of locating the wreck, but as soon as it began to be uncovered in earnest, more delicate techniques were employed. Many objects from the Mary Rose had been well preserved in form and shape, but many were quite delicate, requiring careful handling. Artefacts of all sizes were supported with soft packing material, such as old plastic ice cream containers, and some of the arrows that were "soft like cream cheese" had to be brought up in special styrofoam containers. The airlifts that carried clay, sand and dirt off-site or to the surface were still used, but with much greater precision, since they could potentially disrupt the site. The many layers of sediment that had accumulated on the site could be used to date the artefacts found within them, and had to be recorded properly. The various types of accretions and chemical remnants found with artefacts were essential clues to objects that had long since broken down and disappeared, and needed to be treated with considerable care.
The excavation and raising of the ship in the 1970s and early 1980s meant that diving operations ceased, even though modern scaffolding and part of the bow were left on the seabed. The pressure on conservators to treat tens of thousands of artefacts and the high costs of conserving, storing and displaying the finds and the ship meant that there were no funds available for diving. In 2002, the UK Ministry of Defence announced plans to build two new aircraft carriers. Because of the great size of the new vessels, the outlet from Portsmouth needed to be surveyed to make sure that they could sail no matter the tide. The planned route for the underwater channel ran close to the Mary Rose wrecksite, which meant that funding was supplied to survey and excavate the site once more. Even though the planned carriers were downsized enough not to require alteration of the Portsmouth outlet, the excavations had already exposed timbers and were completed in 2005. Among the most important finds was the ten-metre (32 ft) stem, the forward continuation of the keel, which provided more exact details about the original profile of the ship.
### Finds
Over 26,000 artefacts and pieces of timber were raised along with remains of about half the crew members. The faces of some crew members have been reconstructed. Analysis of the crew skeletons shows many had suffered malnutrition, and had evidence of rickets, scurvy, and other deficiency diseases. Crew members also developed arthritis through the stresses on their joints from heavy lifting and maritime life generally, and suffered bone fractures. As the ship was intended to function as a floating, self-contained community, it was stocked with victuals (food and drink) that could sustain its inhabitants for extended periods of time. The casks used for storage on the Mary Rose have been compared with those from a wreck of a trade vessel from the 1560s and have revealed that they were of better quality, more robust and reliable, an indication that supplies for the Tudor navy were given high priority, and their requirements set a high standard for cask manufacturing at the time.
As a miniature society at sea, the wreck of the Mary Rose held personal objects belonging to individual crew members. This included clothing, games, various items for spiritual or recreational use, and objects related to mundane everyday tasks such as personal hygiene, fishing, and sewing. The master carpenter's chest, for example, contained an early backgammon set, a book, three plates, a sundial, and a tankard, goods suggesting he was relatively wealthy.
The ship carried several skilled craftsmen and was equipped for handling both routine maintenance and repairing extensive battle damage. In and around one of the cabins on the main deck under the sterncastle, archaeologists found a "collection of woodworking tools ... unprecedented in its range and size", consisting of eight chests of carpentry tools. Along with loose mallets and tar pots used for caulking, this variety of tools belonged to one or several of the carpenters employed on the Mary Rose.
Many of the cannons and other weapons from the Mary Rose have provided invaluable physical evidence about 16th-century weapon technology. The surviving gunshields are almost all from the Mary Rose, and the four small cast iron hailshot pieces are the only known examples of this type of weapon.
Animal remains have been found in the wreck of the Mary Rose. These include the skeletons of a rat, a frog and a dog. The dog, an English Toy Terrier (Black & Tan), was between eighteen months and two years in age, was found near the hatch to the ship's carpenter's cabin and is presumed to have been brought aboard as a ratter. Nine barrels have been found to contain bones of cattle, indicating that they contained pieces of beef butchered and stored as ship's rations. The bones of pigs and fish, stored in baskets, have also been found.
#### Musical instruments
Two fiddles, a bow, a still shawm or douçaine, three three-hole pipes, and a tabor drum with a drumstick were found throughout the wreck. These would have been used for the personal enjoyment of the crew and to provide a rhythm to work on the rigging and turning the capstans on the upper decks. The tabor drum is the earliest known example of its kind and the drumstick is of a previously unknown design. The tabor pipes are considerably longer than any known examples from the period. Their discovery proved that contemporary illustrations, previously viewed with some suspicion, were accurate depictions of the instruments. Before the discovery of the Mary Rose shawm, an early predecessor to the oboe, instrument historians had been puzzled by references to "still shawms", or "soft" shawms, that were said to have a sound that was less shrill than earlier shawms. The still shawm disappeared from the musical scene in the 16th century, and the instrument found on the Mary Rose is the only surviving example. A reproduction has been made and played. Combined with a pipe and tabor, it provides a "very effective bass part" that would have produced "rich and full sound, which would have provided excellent music for dancing on board ship". Only a few other fiddle-type instruments from the 16th century exist, but none of them of the type found on the Mary Rose. Reproductions of both fiddles have been made, though less is known of their design than the shawm since the neck and strings were missing.
#### Navigation tools
In the remains of a small cabin in the bow of the ship and in a few other locations around the wreck was found the earliest dated set of navigation instruments in Europe found so far: compasses, divider calipers, a stick used for charting, protractors, sounding leads, tide calculators and a logreel, an instrument for calculating speed. Several of these objects are not only unique in having such an early, definite dating, but also because they pre-date written records of their use; protractors would have reasonably been used to measure bearings and courses on maps, but sea charts are not known to have been used by English navigators during the first half of the 16th century, compasses were not depicted on English ships until the 1560s, and the first mention of a logreel is from 1574.
#### Barber-surgeon's cabin
The cabin located on the main deck underneath the sterncastle is thought to have belonged to the barber-surgeon. He was a trained professional who saw to the health and welfare of the crew and acted as the medical expert on board. The most important finds came from an intact wooden chest which contained over 60 objects relating to the barber-surgeon's medical practice: the wooden handles of a complete set of surgical tools and several shaving razors (although none of the steel blades had survived), a copper syringe for wound irrigation and treatment of gonorrhoea, and even a skilfully crafted feeding bottle for feeding incapacitated patients. More objects were found around the cabin, such as earscoops, shaving bowls and combs. With this wide selection of tools and medicaments the barber-surgeon, along with one or more assistants, could set bone fractures, perform amputations and deal with other acute injuries, treat a number of diseases and provide crew members with a minimal standard of personal hygiene.
#### Hatch
One of the first scientifically confirmed ratters was a terrier and whippet dog crossbreed who spent his short life on the Mary Rose. The dog, named Hatch by researchers, was discovered in 1981 during the underwater excavation of the ship. Hatch's main duty was to kill rats on board the ship. Based on the DNA work performed on Hatch's teeth, he was a young adult male, 18–24 months old, with a brown coat. Hatch's skeleton is on display in the Mary Rose Museum in Portsmouth Historic Dockyard.
### Conservation
Preservation of the Mary Rose and her contents was an essential part of the project from the start. Though many artefacts, especially those that were buried in silt, had been preserved, the long exposure to an underwater environment had rendered most of them sensitive to exposure to air after recovery. Archaeologists and conservators had to work in tandem from the start to prevent deterioration of the artefacts. After recovery, finds were placed in so-called passive storage, which would prevent any immediate deterioration before the active conservation which would allow them to be stored in an open-air environment. Passive storage depended on the type of material that the object was made of, and could vary considerably. Smaller objects from the most common material, wood, were sealed in polyethylene bags to preserve moisture. Timbers and other objects that were too large to be wrapped were stored in unsealed water tanks. Growth of fungi and microbes that could degrade wood were controlled by various techniques, including low-temperature storage, chemicals, and in the case of large objects, common pond snails that consumed wood-degrading organisms but not the wood itself.
Other organic materials such as leather, skin and textiles were treated similarly, by keeping them moist in tanks or sealed plastic containers. Bone and ivory were desalinated to prevent damage from salt crystallisation, as were glass, ceramic and stone. Iron, copper and copper alloy objects were kept moist in a sodium sesquicarbonate solution to prevent oxidisation and reaction with the chlorides that had penetrated the surface. Alloys of lead and pewter are inherently stable in the atmosphere and generally require no special treatment. Silver and gold were the only materials that required no special passive storage.
Conserving the hull of the Mary Rose was the most complicated and expensive task for the project. In 2002 a donation of £4.8 million from the Heritage Lottery Fund and equivalent monetary support from the Portsmouth City and Hampshire County Councils was needed to keep the conservation work on schedule. During passive conservation, the ship structure could for practical reasons not be completely sealed, so instead it was regularly sprayed with filtered, recycled water that was kept at a temperature of 2 to 5 °C (36 to 41 °F) to keep it from drying out. Drying waterlogged wood that has been submerged for several centuries without appropriate conservation causes considerable shrinkage (20–50%) and leads to severe warping and cracking as water evaporates from the cellular structure of the wood. The substance polyethylene glycol (PEG) had been used before on archaeological wood, and during the 1980s was being used to conserve the Vasa. After almost ten years of small-scale trials on timbers, an active three-phase conservation programme of the hull of the Mary Rose began in 1994. During the first phase, which lasted from 1994 to 2003, the wood was sprayed with low-molecular-weight PEG to replace the water in the cellular structure of the wood. From 2003 to 2010, a higher-molecular-weight PEG was used to strengthen the mechanical properties of the outer surface layers. The third phase consisted of a controlled air drying ending in 2016. Researchers plan to use magnetic nanoparticles to remove iron from the ship's wood, reducing the production of the harmful sulfuric acid that is causing deterioration.
The wreck site is legally protected. Under the "Protection of Wrecks Act 1973" (1973 c. 33) any interference with the site requires a licence. The site is listed as being of "historical, archaeological or artistic importance" by Historic England.
## Display
After the decision to raise the Mary Rose, discussions ensued as to where she would eventually go on permanent display. The east end of Portsea Island at Eastney emerged as an early alternative, but was rejected because of parking problems and the distance from the dockyard where she was originally built. Placing the ship next to the famous flagship of Horatio Nelson, HMS Victory, at Portsmouth Historic Dockyard was proposed in July 1981. A group called the Maritime Preservation Society even suggested Southsea Castle, where Henry VIII had witnessed the sinking, as a final resting place, and there was widespread scepticism about the dockyard location. At one point a county councillor even threatened to withdraw promised funds if the dockyard site became more than an interim solution. As costs for the project mounted, there was a debate in the Council chamber and in the local paper The News as to whether the money could be spent more appropriately. Although author David Childs writes that in the early 1980s "the debate was a fiery one", the project was never seriously threatened because of the great symbolic importance of the Mary Rose to the naval history of both Portsmouth and England.
Since the mid-1980s, the hull of the Mary Rose has been kept in a covered dry dock while undergoing conservation. Although the hull has been open to the public for viewing, the need for keeping the ship saturated first with water and later a polyethylene glycol (PEG) solution meant that, before 2013, visitors were separated from the hull by a glass barrier. By 2007, the specially built ship hall had been visited by over seven million visitors since it first opened on 4 October 1983, just under a year after it was successfully raised.
A separate Mary Rose Museum was housed in a structure called No. 5 Boathouse near the ship hall and was opened to the public on 9 July 1984, containing displays explaining the history of the ship and a small number of conserved artefacts, from entire bronze cannons to household items. In September 2009 the temporary Mary Rose display hall was closed to visitors to facilitate construction of the new £35 million museum building, which opened to the public on 31 May 2013.
The new Mary Rose Museum was designed by architects Wilkinson Eyre, Perkins+Will and built by construction firm Warings. The construction was challenging because the museum was built over the ship in the dry dock, which is a listed monument. During construction of the museum, conservation of the hull continued inside a sealed "hotbox". In April 2013 the polyethylene glycol sprays were turned off and the process of controlled air drying began. In 2016 the "hotbox" was removed and, for the first time since 1545, the ship was revealed dry. This new museum displays most of the artefacts recovered from within the ship in context with the conserved hull. As of 2018, the new museum had been visited by over 1.8 million people, and it saw 189,702 visitors in 2019.
## See also
|
979,237 |
Rings of Neptune
| 1,169,975,255 |
Rings of the planet Neptune
|
[
"Astronomical objects discovered in 1989",
"Neptune",
"Planetary rings"
] |
The rings of Neptune consist primarily of five principal rings. They were first discovered (as "arcs") by simultaneous observations of a stellar occultation on 22 July 1984 by André Brahic's and William B. Hubbard's teams at La Silla Observatory (ESO) and at Cerro Tololo Interamerican Observatory in Chile. They were eventually imaged in 1989 by the Voyager 2 spacecraft. At their densest, they are comparable to the less dense portions of Saturn's main rings such as the C ring and the Cassini Division, but much of Neptune's ring system is quite tenuous, faint and dusty, more closely resembling the rings of Jupiter. Neptune's rings are named after astronomers who contributed important work on the planet: Galle, Le Verrier, Lassell, Arago, and Adams. Neptune also has a faint unnamed ring coincident with the orbit of the moon Galatea. Three other moons orbit between the rings: Naiad, Thalassa and Despina.
The rings of Neptune are made of extremely dark material, likely organic compounds processed by radiation, similar to those found in the rings of Uranus. The proportion of dust in the rings (between 20% and 70%) is high, while their optical depth is low to moderate, at less than 0.1. Uniquely, the Adams ring includes five distinct arcs, named Fraternité, Égalité 1 and 2, Liberté, and Courage. The arcs occupy a narrow range of orbital longitudes and are remarkably stable, having changed only slightly since their initial detection in 1980. How the arcs are stabilized is still under debate. However, their stability is probably related to the resonant interaction between the Adams ring and its inner shepherd moon, Galatea.
## Discovery and observations
The first mention of rings around Neptune dates back to 1846 when William Lassell, the discoverer of Neptune's largest moon, Triton, thought he had seen a ring around the planet. However, his claim was never confirmed and it is likely that it was an observational artifact. The first reliable detection of a ring was made in 1968 by stellar occultation, although that result would go unnoticed until 1977 when the rings of Uranus were discovered. Soon after the Uranus discovery, a team from Villanova University led by Harold J. Reitsema began searching for rings around Neptune. On 24 May 1981, they detected a dip in a star's brightness during one occultation; however, the manner in which the star dimmed did not suggest a ring. Later, after the Voyager fly-by, it was found that the occultation was due to the small Neptunian moon Larissa, a highly unusual event.
In the 1980s, significant occultations were much rarer for Neptune than for Uranus, which lay near the Milky Way at the time and was thus moving against a denser field of stars. Neptune's next occultation, on 12 September 1983, resulted in a possible detection of a ring. However, ground-based results were inconclusive. Over the next six years, approximately 50 other occultations were observed, with only about one-third of them yielding positive results. Something (probably incomplete arcs) definitely existed around Neptune, but the features of the ring system remained a mystery. The Voyager 2 spacecraft made the definitive discovery of the Neptunian rings during its fly-by of Neptune in 1989, passing as close as 4,950 km (3,080 mi) above the planet's atmosphere on 25 August. It confirmed that the occasional occultation events observed before were indeed caused by the arcs within the Adams ring (see below). After the Voyager fly-by, the previous terrestrial occultation observations were reanalyzed, yielding features of the ring's arcs as they were in the 1980s, which matched those found by Voyager 2 almost perfectly.
Since Voyager 2's fly-by, the brightest rings (Adams and Le Verrier) have been imaged with the Hubble Space Telescope and Earth-based telescopes, owing to advances in resolution and light-gathering power. They are visible, slightly above background noise levels, at methane-absorbed wavelengths in which the glare from Neptune is significantly reduced. The fainter rings are still far below the visibility threshold for these instruments. In 2022 the rings were imaged by the James Webb Space Telescope, which made the first observation of the fainter rings since the Voyager 2's fly-by.
## General properties
Neptune possesses five distinct rings named, in order of increasing distance from the planet, Galle, Le Verrier, Lassell, Arago and Adams. In addition to these well-defined rings, Neptune may also possess an extremely faint sheet of material stretching inward from the Le Verrier to the Galle ring, and possibly farther in toward the planet. Three of the Neptunian rings are narrow, with widths of about 100 km or less; in contrast, the Galle and Lassell rings are broad—their widths are between 2,000 and 5,000 km. The Adams ring consists of five bright arcs embedded in a fainter continuous ring. Proceeding counterclockwise, the arcs are: Fraternité, Égalité 1 and 2, Liberté, and Courage. The first four names come from "liberty, equality, fraternity", the motto of the French Revolution and Republic. The terminology was suggested by their original discoverers, who had found them during stellar occultations in 1984 and 1985. Four small Neptunian moons have orbits inside the ring system: Naiad and Thalassa orbit in the gap between the Galle and Le Verrier rings; Despina is just inward of the Le Verrier ring; and Galatea lies slightly inward of the Adams ring, embedded in an unnamed faint, narrow ringlet.
The Neptunian rings contain a large quantity of micrometer-sized dust: the dust fraction by cross-section area is between 20% and 70%. In this respect they are similar to the rings of Jupiter, in which the dust fraction is 50%–100%, and are very different from the rings of Saturn and Uranus, which contain little dust (less than 0.1%). The particles in Neptune's rings are made from a dark material; probably a mixture of ice with radiation-processed organics. The rings are reddish in color, and their geometrical (0.05) and Bond (0.01–0.02) albedos are similar to those of the Uranian rings' particles and the inner Neptunian moons. The rings are generally optically thin (transparent); their normal optical depths do not exceed 0.1. As a whole, the Neptunian rings resemble those of Jupiter; both systems consist of faint, narrow, dusty ringlets and even fainter broad dusty rings.
The rings of Neptune, like those of Uranus, are thought to be relatively young; their age is probably significantly less than that of the Solar System. Also, like those of Uranus, Neptune's rings probably resulted from the collisional fragmentation of onetime inner moons. Such events create moonlet belts, which act as the sources of dust for the rings. In this respect the rings of Neptune are similar to faint dusty bands observed by Voyager 2 between the main rings of Uranus.
## Inner rings
### Galle ring
The innermost ring of Neptune is called the Galle ring after Johann Gottfried Galle, the first person to see Neptune through a telescope (1846). It is about 2,000 km wide and orbits 41,000–43,000 km from the planet. It is a faint ring with an average normal optical depth of around 10<sup>−4</sup>, and with an equivalent depth of 0.15 km. The fraction of dust in this ring is estimated at 40% to 70%.
### Le Verrier ring
The next ring is named the Le Verrier ring after Urbain Le Verrier, who predicted Neptune's position in 1846. With an orbital radius of about 53,200 km, it is narrow, with a width of about 113 km. Its normal optical depth is 0.0062 ± 0.0015, which corresponds to an equivalent depth of 0.7 ± 0.2 km. The dust fraction in the Le Verrier ring ranges from 40% to 70%. The small moon Despina, which orbits just inside of it at 52,526 km, may play a role in the ring's confinement by acting as a shepherd.
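The "equivalent depth" quoted for each ring is the normal optical depth integrated across the ring's radial width; for a roughly uniform ring this reduces to τ × width. A minimal illustrative check against the Le Verrier figures above (a sketch, not from the source):

```python
# Equivalent depth ~ normal optical depth integrated over the radial
# width; for a roughly uniform ring this is simply tau * width.
tau = 0.0062          # normal optical depth of the Le Verrier ring
width_km = 113        # radial width of the ring in km
equiv_depth_km = tau * width_km
print(f"{equiv_depth_km:.2f} km")  # ~0.70 km, matching the quoted 0.7 +/- 0.2 km
```

The same relation roughly reproduces the other rings' figures, e.g. the broad Lassell ring's 10<sup>−4</sup> depth over ~4,000 km gives its quoted 0.4 km.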
### Lassell ring
The Lassell ring, also known as the plateau, is the broadest ring in the Neptunian system. Its namesake is William Lassell, the English astronomer who discovered Neptune's largest moon, Triton. This ring is a faint sheet of material occupying the space between the Le Verrier ring at about 53,200 km and the Arago ring at 57,200 km. Its average normal optical depth is around 10<sup>−4</sup>, which corresponds to an equivalent depth of 0.4 km. The ring's dust fraction is in the range from 20% to 40%.
#### Potential ring
There is a small peak of brightness near the outer edge of the Lassell ring, located at 57,200 km from Neptune and less than 100 km wide, which some planetary scientists call the Arago ring after François Arago, a French mathematician, physicist, astronomer and politician. However, many publications do not mention the Arago ring at all.
## Adams ring
The outer Adams ring, with an orbital radius of about 62,930 km, is the best studied of Neptune's rings. It is named after John Couch Adams, who predicted the position of Neptune independently of Le Verrier. This ring is narrow, slightly eccentric and inclined, with total width of about 35 km (15–50 km), and its normal optical depth is around 0.011 ± 0.003 outside the arcs, which corresponds to the equivalent depth of about 0.4 km. The fraction of dust in this ring is from 20% to 40%—lower than in other narrow rings. Neptune's small moon Galatea, which orbits just inside of the Adams ring at 61,953 km, acts like a shepherd, keeping ring particles inside a narrow range of orbital radii through a 42:43 outer Lindblad resonance. Galatea's gravitational influence creates 42 radial wiggles in the Adams ring with an amplitude of about 30 km, which have been used to infer Galatea's mass.
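The 42:43 resonance fixes where the ring must sit relative to Galatea: ring particles complete 42 orbits for every 43 of Galatea's, and Kepler's third law (T² ∝ a³) then scales the orbital radius by (43/42)^(2/3). A quick sketch of that calculation, using Galatea's radius from the text:

```python
# Locate Galatea's 42:43 outer Lindblad resonance via Kepler's third
# law: a_ring = a_galatea * (T_ring / T_galatea) ** (2/3).
a_galatea_km = 61_953
period_ratio = 43 / 42            # ring particles orbit slower, outside Galatea
a_ring_km = a_galatea_km * period_ratio ** (2 / 3)
print(round(a_ring_km))           # ~62,930 km, the Adams ring's observed radius
```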
### Arcs
The brightest parts of the Adams ring, the ring arcs, were the first elements of Neptune's ring system to be discovered. The arcs are discrete regions within the ring in which the ring's constituent particles are mysteriously clustered together. The Adams ring is known to comprise five short arcs, which occupy a relatively narrow range of longitudes from 247° to 294°. In 1986 they were located between longitudes of:
- 247–257° (Fraternité),
- 261–264° (Égalité 1),
- 265–266° (Égalité 2),
- 276–280° (Liberté),
- 284.5–285.5° (Courage).
The brightest and longest arc was Fraternité; the faintest was Courage. The normal optical depths of the arcs are estimated to lie in the range 0.03–0.09 (0.034 ± 0.005 for the leading edge of Liberté arc as measured by stellar occultation); the radial widths are approximately the same as those of the continuous ring—about 30 km. The equivalent depths of arcs vary in the range 1.25–2.15 km (0.77 ± 0.13 km for the leading edge of Liberté arc). The fraction of dust in the arcs is from 40% to 70%. The arcs in the Adams ring are somewhat similar to the arc in Saturn's G ring.
The highest resolution Voyager 2 images revealed a pronounced clumpiness in the arcs, with a typical separation between visible clumps of 0.1° to 0.2°, which corresponds to 100–200 km along the ring. Because the clumps were not resolved, they may or may not include larger bodies, but are certainly associated with concentrations of microscopic dust as evidenced by their enhanced brightness when backlit by the Sun.
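The angular-to-linear conversion here is just arc length = radius × angle in radians; at the Adams ring's radius of roughly 63,000 km (a rounded value assumed for illustration), 0.1°–0.2° indeed spans on the order of 100–200 km:

```python
import math

# Convert the observed clump separations from degrees of ring longitude
# to arc length along the ring (arc = r * theta, theta in radians).
r_km = 63_000                       # approximate Adams ring orbital radius
for sep_deg in (0.1, 0.2):
    arc_km = r_km * math.radians(sep_deg)
    print(f"{sep_deg} deg -> {arc_km:.0f} km")
```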
The arcs are quite stable structures. They were detected by ground-based stellar occultations in the 1980s, by Voyager 2 in 1989 and by the Hubble Space Telescope and ground-based telescopes in 1997–2005, and remained at approximately the same orbital longitudes. However, some changes have been noticed. The overall brightness of the arcs has decreased since 1986. The Courage arc jumped forward by 8° to 294° (it probably jumped over to the next stable co-rotation resonance position) while the Liberté arc had almost disappeared by 2003. The Fraternité and Égalité (1 and 2) arcs have demonstrated irregular variations in their relative brightness. Their observed dynamics are probably related to the exchange of dust between them. Courage, a very faint arc found during the Voyager flyby, was seen to flare in brightness in 1998; it was back to its usual dimness by June 2005. Visible light observations show that the total amount of material in the arcs has remained approximately constant, but they are dimmer in the infrared light wavelengths where previous observations were taken.
### Confinement
The arcs in the Adams ring remain unexplained. Their existence is a puzzle because basic orbital dynamics imply that they should spread out into a uniform ring over a matter of years. Several theories about the arcs' confinement have been suggested, the most widely publicized of which holds that Galatea confines the arcs via its 42:43 co-rotational inclination resonance (CIR). The resonance creates 84 stable sites along the ring's orbit, each 4° long, with arcs residing in the adjacent sites. However measurements of the rings' mean motion with Hubble and Keck telescopes in 1998 led to the conclusion that the rings are not in CIR with Galatea.
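The geometry of the CIR model is simple to check: 84 equally spaced corotation sites divide the ring's 360° of longitude into cells of just over 4° each, consistent with the quoted site length (an illustrative check, not from the source):

```python
# 84 corotation sites share the ring's full 360 degrees of longitude,
# so each stable site spans 360 / 84 degrees.
n_sites = 84
site_length_deg = 360 / n_sites
print(f"{site_length_deg:.2f} deg per site")  # ~4.29 deg, close to the quoted ~4 deg
```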
A later model suggested that confinement resulted from a co-rotational eccentricity resonance (CER). The model takes into account the finite mass of the Adams ring, which is necessary to move the resonance closer to the ring. A byproduct of this theory is a mass estimate for the Adams ring—about 0.002 of the mass of Galatea. A third theory, proposed in 1986, requires an additional moon orbiting inside the ring; the arcs in this case are trapped in its stable Lagrangian points. However, Voyager 2's observations placed strict constraints on the size and mass of any undiscovered moons, making such a theory unlikely. Some other more complicated theories hold that a number of moonlets are trapped in co-rotational resonances with Galatea, providing confinement of the arcs and simultaneously serving as sources of the dust.
## Exploration
The rings were investigated in detail during the Voyager 2 spacecraft's flyby of Neptune in August 1989. They were studied with optical imaging and through observations of occultations in ultraviolet and visible light. The space probe observed the rings in different geometries relative to the Sun, producing images of back-scattered, forward-scattered and side-scattered light. Analysis of these images allowed derivation of the phase function (the dependence of the ring's reflectivity on the angle between the observer and the Sun) and the geometric and Bond albedos of the ring particles. Analysis of Voyager's images also led to the discovery of six inner moons of Neptune, including the Adams ring shepherd Galatea.
## Properties
- A question mark means that the parameter is not known.
|
21,871,243 |
Cortinarius violaceus
| 1,172,326,088 |
Species of fungus in the family Cortinariaceae native to the Northern Hemisphere
|
[
"Cortinarius",
"Fungi described in 1753",
"Fungi of Asia",
"Fungi of Europe",
"Fungi of North America",
"Inedible fungi",
"Taxa named by Carl Linnaeus"
] |
Cortinarius violaceus, commonly known as the violet webcap or violet cort, is a fungus in the webcap genus Cortinarius native across the Northern Hemisphere. The fruit bodies are dark purple mushrooms with caps up to 15 cm (6 in) across, sporting gills underneath. The stalk measures 6 to 12 centimetres (2+1⁄3 to 4+2⁄3 in) by 1 to 2 cm (3⁄8 to 3⁄4 in), sometimes with a thicker base. The dark flesh has a smell reminiscent of cedar wood. Forming symbiotic (ectomycorrhizal) relationships with the roots of various plant species, C. violaceus is found predominantly in conifer forests in North America and deciduous forests in Europe.
Though they are sometimes described as edible, the appearance of these mushrooms is more distinctive than their taste. The species was first described by Carl Linnaeus in 1753, and has undergone several name changes. It is the type species of the genus Cortinarius, but is readily distinguished from other species in the genus by its dark colouration and distinct cystidia. There are some populations that seem to prefer deciduous trees and others that prefer pines, but no genetic divergence between the two has been found. When identified as taxonomically separate from the deciduous-preferring populations, the pine-preferring populations have been referred to either as a separate species, C. hercynicus, or as a subspecies, C. violaceus ssp. hercynicus. Other populations once identified as C. violaceus or close to that species have now been described as new and separate species, such as C. palatinus, C. neotropicus, C. altissimus, C. kioloensis and C. hallowellensis.
## Taxonomy
Agaricus violaceus was one of the few fungal species named by Carl Linnaeus in his 1753 work Species Plantarum. The specific epithet violaceus refers to the deep violet colour of its cap. In English, it is commonly known as the violet webcap, or violet cort. French naturalist Jean-Baptiste Lamarck viewed it as a variety (violaceus) of a variable species he described as Amanita araneosa in 1783, and Christiaan Hendrik Persoon placed it in the Section Cortinaria of Agaricus in his 1801 work Synopsis Methodica Fungorum. Cortinarius was established as a genus by English botanist Samuel Frederick Gray in the first volume of his 1821 work A Natural Arrangement of British Plants, where the species was recorded as Cortinaria violacea, "the violet curtain-stool".
The starting date of fungal taxonomy had been set as 1 January 1821, to coincide with the date of the works of the "father of mycology", the Swedish naturalist Elias Magnus Fries, which meant the name Cortinarius violaceus required sanction by Fries (indicated in the name by a colon) to be considered valid. Thus, the species was written as Cortinarius violaceus (L.: Fr.) Gray. However, a 1987 revision of the International Code of Botanical Nomenclature set the starting date at 1 May 1753, the date of publication of Linnaeus's Species Plantarum. Hence, the name no longer requires the ratification of Fries's authority, and is thus written as Cortinarius violaceus (L.) Gray.
German botanist Friedrich Otto Wünsche described the species as Inoloma violaceum in 1877. In 1891, his countryman Otto Kuntze published Revisio Generum Plantarum, his response to what he perceived as poor methodology in existing nomenclatural practice. He called the violet webcap Gomphos violaceus in 1898. However, Kuntze's revisionary programme was not accepted by the majority of biologists.
Cortinarius violaceus was designated as the type species for the genus Cortinarius by Frederic Clements and Cornelius Lott Shear in their 1931 work The Genera of Fungi. Mycologist David Arora considers this odd, due to the mushroom's unusual colour and cystidia. Because of this designation, if C. violaceus were to be split from the rest of the current genus, then, according to the rules of the International Code of Botanical Nomenclature, it would retain the name Cortinarius, while the other species would have to be reclassified. The species was one of only two placed in the Cortinarius subgenus Cortinarius by the Austrian mycologist Meinhard Moser. Molecular investigation of webcaps worldwide has increased this number to at least twelve.
A 2015 genetic study by evolutionary biologist Emma Harrower and colleagues of C. violaceus and its closest relatives suggests that the group (section Cortinarius) originated in Australasia and began diverging from a common ancestor around twelve million years ago in the Miocene, with C. violaceus itself diverging from its closest relative around 3.9 million years ago. The fact that these species diverged relatively recently indicates that some form of dispersal must have taken place across large bodies of water. The original plant hosts were flowering plants (angiosperms), and C. violaceus—or its direct ancestor—developed a symbiotic relationship with pines, as well as multiple flowering plants; this may have facilitated its expansion across the Northern Hemisphere.
Some mycologists classify C. violaceus as two distinct species—Cortinarius violaceus and Cortinarius hercynicus, with hercynicus relating to the Hercynian Forest region of southern Germany. These species are differentiated morphologically by the latter population's rounder spores. Persoon had described C. hercynicus as a separate species in 1794, though Fries regarded it as conspecific with C. violaceus. Moser separated them once again as species in 1967, and Norwegian biologist Tor Erik Brandrud classified C. hercynicus as a subspecies of C. violaceus in 1983. However, based on limited molecular testing, Harrower and colleagues found no genetic or ecological difference between the two taxa.
Some fungal populations around the world that have been classified as C. violaceus have been found to belong to separate lineages and hence reclassified as new species within section Cortinarius. Two separate lineages discovered in populations from Costa Rica have been renamed Cortinarius palatinus and C. neotropicus, one from Guyana—described as sp. aff. violaceus—has become C. altissimus, and another from Western Australia and Tasmania described as both C. violaceus and sp. aff. violaceus has become C. hallowellensis. Yet another from Eastern Australia has been named C. kioloensis. The poorly known species Cortinarius subcalyptrosporus and Cortinarius atroviolaceus from Borneo are almost indistinguishable from C. violaceus outside of hard-to-observe spore detail—the former has smaller spores with a detached perisporium (outer layer) and the latter has smaller spores and fruiting bodies. Another population, known from Borneo, New Guinea and New Zealand, was ascribed to C. violaceus by Moser. It was noted as very similar to the original species concept of C. violaceus, and awaits description as a new species after a phylogenetic study revealed it to represent a distinct taxon.
## Description
Cortinarius violaceus has a convex (becoming broadly convex, umbonate or flat) cap of 3.5–15 centimetres (1+3⁄8–6 in) in diameter with an incurved margin. It is dark violet to blue-black in colour, and is covered in fine, downy scales. This layer on the cap is known as the pileipellis, which is either classified as a trichoderm—parallel hyphae running perpendicular to the surface and forming a layer 6–22 μm wide—or rarely an ixocutis, a layer of gelatinized hyphae 2–11 μm wide. The cap surface, unlike that of many other Cortinarius species, is neither sticky nor slimy, though it is occasionally greasy. The stipe, or stalk, is 6 to 18 cm (2+1⁄3 to 7 in) tall, and 1 to 2 cm (3⁄8 to 3⁄4 in) thick. Due to its swollen, bulbous nature, the base of the stipe can sometimes be as wide as 4 cm (1+1⁄2 in). The stipe is a similar colour to the cap, and covered in wool-like fibrils; purple mycelium can be present at the base. Younger specimens feature a veil, but this vanishes quickly. The flesh is violet, but darker below the pileipellis and in the stipe. The flesh has a mild taste, indistinctly reminiscent of cedar wood, with a slight, pleasant smell, also reminiscent of cedar wood. The gills are dark violet, changing to a purplish-brown with age. They have an adnate connection to the stipe, and can be very dark in older specimens. The mushroom stains red when in contact with potassium hydroxide (KOH). Fruit bodies identified as C. v. hercynicus are less robust than those of the nominate subspecies.
The spore print is rust-coloured, while the spores themselves measure 12 to 15 μm by 7 to 8.5 μm. They are rough, from elliptical to almond-shaped, and covered in medium-sized warts. The spores are wider in C. v. hercynicus. The species is the only one in the genus to have cystidia on both the faces and the edges of the gills. A large number of cystidia are present, and, individually, they measure between 60 and 100 μm by between 12 and 25 μm. They are flask-shaped, with somewhat purple contents.
### Similar species
Although there are many Cortinarius species with some degree of violet colour, C. violaceus and its close relatives are easily distinguished by their much darker purple colour. Cortinarius iodes of the southeastern United States has a slimy purple cap and paler violet stipe. The other species in the section Cortinarius are dark purple and superficially similar, but can be differentiated based on host and geography as they do not occur in the same locations as C. violaceus. Certain Leptonia species in northwestern North America, including L. carnea and L. nigroviolacea, have a similar colour, but are easily differentiated by their pink spore print.
C. cotoneus, Entoloma bloxamii, and E. parvum are also similar.
## Distribution and habitat
Cortinarius violaceus is found across North America, Europe and Asia. Although widespread, it is not common anywhere in Europe, and it is listed as endangered in the British Isles. Cortinarius violaceus is a rare component of subarctic areas of western Greenland. It has not been recorded from Iceland.
In Europe, it grows in deciduous woodland during autumn, especially among oak, birch and beech, but is also found on occasion with conifers. It is also occasionally known from treeless heathland, where it is associated with bracken. The species favours acidic soil. Cortinarius violaceus forms mycorrhizal associations with several species of tree. In this symbiotic relationship, the fungus gains carbon from the plant and supplies it with beneficial minerals. In Nordic countries, its hosts include white birch (Betula pubescens), silver birch (B. pendula), European aspen (Populus tremula) and rarely European beech (Fagus sylvatica). No records of association with oak (Quercus) are known from this region. Brandrud reported that what he described as ssp. hercynicus grew with Picea abies, generally in more alkaline soils and along with mosses of the genera Hylocomium and Pleurozium, and, in moister areas, big shaggy-moss (Rhytidiadelphus triquetrus), as well as the buttercup-family shrub Hepatica nobilis. The species grows with Betula pubescens in Greenland, and is also associated with hazelnut (Corylus avellana) in Central and Southern Europe.
In North America, C. violaceus favours conifers, and, though rare over much of the continent, is relatively common in certain areas, including Mount Rainier National Park and Olympic National Park. It is more common in old growth forest in the Pacific Northwest, though it has sprung up in regrowth areas populated with fir, pine, aspen and alder in the Great Lakes region. Fruit bodies occur singly or in small groups, often near rotting wood, and can grow in fairy rings. Closely related species that look like C. violaceus can be found in Central and South America, Australia, New Zealand, Papua New Guinea, and Malaysia.
## Edibility and biochemistry
Cortinarius violaceus is sometimes considered inedible and sometimes considered edible, but not choice. Instead, the primary appeal of the species to mushroom hunters, according to Arora, is its beauty. Its similarity to some other (inedible or toxic) webcaps renders it risky to eat. The taste after cooking is reportedly bitter.
The colour of C. violaceus cannot be converted to a dye, unlike that of some other Cortinarius species, such as C. sanguineus and C. semisanguineus. The colour is caused by an elusive pigment that has been difficult to isolate; its identity was not known until 1998. This is an iron(III) complex of (R)-3′,4′-dihydroxy-β-phenylalanine [(R)-β-dopa]. It dissolves in water, turning the liquid dark purple before fading to blackish-grey. C. violaceus fruiting bodies contain around 100 times more iron than those of most other fungi. Cortinarius violaceus extract demonstrates an inhibitory activity against cysteine protease.
## See also
- List of Cortinarius species
|
39,336,181 |
Weather Machine
| 1,161,245,230 |
Lumino kinetic bronze sculpture and weather beacon in Portland, Oregon
|
[
"1988 establishments in Oregon",
"1988 sculptures",
"Bronze sculptures in Oregon",
"Interactive art",
"Kinetic sculptures in the United States",
"Outdoor sculptures in Portland, Oregon",
"Sculptures of birds in Oregon",
"Sculptures of dragons",
"Sound sculptures",
"Southwest Portland, Oregon",
"Stainless steel sculptures in Oregon",
"Weather prediction"
] |
Weather Machine is a lumino kinetic bronze sculpture and columnar machine that serves as a weather beacon, displaying a weather prediction each day at noon. Designed and constructed by Omen Design Group Inc., the approximately 30-foot-tall (9 m) sculpture was installed in 1988 in a corner of Pioneer Courthouse Square in Portland, Oregon, United States. Two thousand people attended its dedication, which was broadcast live nationally from the square by Today weatherman Willard Scott. The machine cost \$60,000.
During its daily two-minute sequence, which includes a trumpet fanfare, mist, and flashing lights, the machine displays one of three metal symbols as a prediction of the weather for the following 24-hour period: a sun for clear and sunny weather, a blue heron for drizzle and transitional weather, or a dragon and mist for rainy or stormy weather. The sculpture includes two bronze wind scoops and displays the temperature via colored lights along its stem. The air quality index is also displayed by a light system below the stainless steel globe. Weather predictions are made based on information obtained by employees of Pioneer Courthouse Square from the National Weather Service and the Department of Environmental Quality. Considered a tourist attraction, Weather Machine has been praised for its quirkiness, and has been compared to a giant scepter.
## Description and history
Weather Machine is a lumino kinetic bronze sculpture that serves as a weather beacon, designed and constructed by Omen Design Group Inc. Contributors included Jere and Ray Grimm, Dick Ponzi, who won a 40-entry international competition to design the machine for Pioneer Courthouse Square (1984), and Roger Patrick Sheppard. The group described their efforts as "collaborative", but Sheppard considered Ponzi the "maestro" of the project. Ponzi did the engineering and hydraulics, and the machine was assembled at his vineyard near Beaverton. The sculpture was inspired by Portland-born-and-based writer Terence O'Donnell, who suffered from osteomyelitis during his childhood, and his "funny Irish jig". Weather Machine, which took five years to plan and build and cost \$60,000, was installed in the square in August 1988. Today weatherman Willard Scott broadcast live from the square to dedicate the sculpture on its August 24 opening. Two thousand people were present as early as 4 a.m. for the dedication. Financial contributors included Pete and Mary Mark, the AT&T Foundation, Alyce R. Cheatham, Alexandra MacColl, E. Kimbark MacColl, Meier & Frank, the Oregon Department of Environmental Quality, David Pugh and Standard Insurance Company. Information about the donors was included on a plaque added to the sculpture's stem in the weeks following the dedication.
Each day at noon, the columnar machine performs a two-minute sequence that begins with a trumpet fanfare of the opening bars of Aaron Copland's Fanfare for the Common Man, and produces mist and flashing lights. It eventually reveals one of three metal symbols: a stylized golden sun ("helia") for clear and sunny weather, a blue heron (Portland's official bird) for drizzle and transitional weather, or mist and a "fierce, open-mouthed" dragon for heavy rain or stormy weather. The fanciful symbols change at the same time every day, representing weather predictions for the following 24-hour period. "Helia", described as "gleaming", was designed by Jere Grimm; her design would later be applied to one of her husband's pots, exhibited in 1989. The trumpets are allowed to play at noon due to a waiver of Portland's noise ordinance for that time period. Ray Grimm constructed the blue heron symbol, and the group collaborated on the dragon symbol based on his drawings. In order for the machine to display an accurate weather prediction, as reported by The Oregonian in 1988, employees of Pioneer Courthouse Square contact the National Weather Service each morning at 10:30 a.m. for the forecast, and then enter information into the machine's computer, located behind a nearby door.
The machine, whose height is reported to be between 25 and 33 feet (7.6 and 10.1 m), includes two bronze wind scoops that turn in opposite directions. It also indicates the temperature (when 20 °F (−7 °C) or above) via vertical colored lights along the sculpture's stem. Measured by an internal gauge, the machine displays blue lights for temperatures below freezing, white lights for above freezing and red lights to mark every ten degrees (°F). Referring to an additional light system (below the stainless steel globe) that indicates air quality, The Oregonian reported in 1988 that a green light indicates good air quality, amber reflects "semismoggy" air and a red light indicates poor air quality. However, in 1998, one writer for The Oregonian warned: "you don't want to breathe so much when the white light is on". Pioneer Courthouse Square employees enter air quality information into the machine's computer following routine checks with the Department of Environmental Quality.
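The published display rules can be paraphrased as a small lookup function. This is purely illustrative: the machine's actual control logic is not documented, and the one-light-per-degree granularity is an assumption made for the sketch.

```python
def stem_light_colours(temp_f: int) -> list[str]:
    """One colour per degree from 20 °F up to the current temperature
    (an assumed granularity), following the rules described above."""
    if temp_f < 20:
        return []  # the display only operates at 20 °F or above
    colours = []
    for t in range(20, temp_f + 1):
        if t % 10 == 0:
            colours.append("red")    # red marks every ten degrees
        elif t < 32:
            colours.append("blue")   # below freezing
        else:
            colours.append("white")  # above freezing
    return colours

# On dedication day the machine indicated 82 °F:
print(stem_light_colours(82).count("red"))  # 7 red markers (20, 30, ..., 80)
```

A reading of 82 °F would thus light seven red ten-degree markers with white lights between them, and blue lights only in the 20–31 °F band.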
In addition to its pre-dawn dedication on national television, Weather Machine had a public dedication at noon on August 24, attended by Mayor Bud Clark and other city officials. On that day, the machine displayed the sun symbol and a green light for good air quality, and indicated a temperature of 82 °F (28 °C). Following the fanfare, known officially as "Fanfare for Weather Machine with Four Trumpets", jazz singer Shirley Nanette led the crowd in a rendition of "You Are My Sunshine". Portland had good weather in the days following its dedication, preventing visitors from seeing all three symbols for an extended length of time (though all three symbols are displayed briefly during the daily two-minute sequence). This prompted the executive director of Pioneer Courthouse Square to consider altering the machine's schedule so that the public would have a chance to see all three symbols. The sculpture maintained good operation until winter 1995, when its mechanical performance temporarily began deviating from noon and the temperature gauge had difficulties working properly. In 2012, the machine malfunctioned and stopped operating for about a week.
## Reception
In the weeks following Weather Machine's dedication, an estimated 300 to 400 people gathered at the square daily to witness the noon sequence. Following the dedication, The Oregonian wrote: "It takes nothing from its fascination to know that a human on the staff of the square will be making the daily phone calls to the Weather Service and the Department of Environmental Quality, and pushing the necessary buttons to cue the pillar's performance ... They have given Portland an attraction no other city has. We're going to like it."
Ponzi described the machine as "light-hearted ... active, distinctive—and fun". O'Donnell, who inspired the sculpture, called it a "gentle spectacle" and described the work as "a cartoon contraption, an odd little thingamajig. It has bells and whistles and other mechanized wonders that confirm rain sometime after the downpour and proudly announce sunshine in the bright light of day." In 1994, The Oregonian reported that O'Donnell regarded Weather Machine with a "mixture of wonder and embarrassment" and stated that he "[didn't] think it [was] all that attractive". The publication's Vivian McInerny said of O'Donnell and the machine: "Practical people may wonder why the square needs such a silly weather machine when a glance out the window works as well .... And these practical people may be the very ones who make the world go 'round. But it is the less practical people, the dreamers like O'Donnell, who make it worth going 'round."
In 1995, The Oregonian's Jonathan Nicholas wrote, "To this day, nobody is exactly sure what happens when the thing sounds off each day at noon. It's like having a governor in blue jeans. We can't really explain it: It just happens." Grant Butler of The Oregonian gave the machine's trumpet fanfare as one of three examples of ways in which people could be certain it was noon in Portland.
The machine is considered a tourist attraction, recommended in visitor guides for Portland and included in walking tours. One travel contributor recommended a visit to the sculpture for people with children seeking a "perfect family day". Weather Machine has been compared to a giant scepter and has been called "bizarre", "eccentric", "playful", "unique", "wacky", "whimsical", "zany", and a "piece of wizardry".
## See also
- 1988 in art
- Allow Me (Portland, Oregon), a bronze sculpture also located in Pioneer Courthouse Square
|
1,353,717 |
Castlevania: Dawn of Sorrow
| 1,161,156,520 |
2005 action-adventure game
|
[
"2005 video games",
"Castlevania games",
"Fiction about reincarnation",
"Fiction about sacrifices",
"Fiction set in 2036",
"Metroidvania games",
"Mobile games",
"Multiplayer and single-player video games",
"Nintendo DS games",
"Side-scrolling role-playing video games",
"Video games about cults",
"Video games developed in Japan",
"Video games scored by Michiru Yamane"
] |
Castlevania: Dawn of Sorrow is a 2005 action-adventure game developed and published by Konami. It is part of Konami's Castlevania video game series and the first Castlevania game released on the Nintendo DS. The game is the sequel to Castlevania: Aria of Sorrow and incorporates many elements from its predecessor. Dawn of Sorrow was commercially successful. It sold more than 15,000 units in its first week in Japan and 164,000 units in the United States during the three months after its initial release.
Dawn of Sorrow continues the story of Aria of Sorrow: Dracula has been defeated, with his powers assumed by his reincarnation, Soma Cruz. With the help of his allies, Soma avoids becoming the new dark lord. A cult forms to bring forth a new one by killing Soma. Soma and his allies move to ensure that does not happen.
Dawn of Sorrow incorporates many features from earlier Castlevania games: the combination of elements from platform games and role-playing video games, the "Tactical Soul" system featured in Aria of Sorrow and a dark, gothic atmosphere. It also introduces new gameplay elements: the "Magic Seal" system, which requires the use of the DS stylus to draw a pattern to defeat powerful enemies; a distinctive anime character design; and a multiplayer mode, in which two players compete for the fastest times on a prerendered level. The game received high scores from many video game publications, and was considered one of the best games on the Nintendo DS for 2005. The game was re-released in Japan in June 2006, and later in North America during 2007 as part of the "Konami the Best" line.
## Gameplay
The player controls the onscreen character from a third-person perspective to interact with people, objects, and enemies. Like previous games in the series, and most role-playing video games, characters level up each time they earn a set number of experience points from defeating enemies; each level gained increases the character's statistics, thus improving their performance in battle. Statistics include hit points, the amount of damage a character can receive; magic points, which determine the number of times a character can use magical attacks; strength, the power of a character's physical attacks; and intelligence, the power of a character's magical spells. Upon encountering an enemy, the player can use a variety of weapons to attack and defeat the enemy. The weapon choices are largely medieval, including swords, axes, and spears, although handguns and a rocket-propelled grenade are available. These weapons differ in their damage output, their range, and the speed of the attack.
Dawn of Sorrow, like most games in the Castlevania series, is set in a castle, which is divided into various areas. Areas of the castle differ in their composition, including monsters and terrain features. In addition, each area has its own unique piece of theme music which plays while the player is in that area. The character moves around the environment based on the player's choices; however, the items the player has restricts the areas the character can move into, like most platform games. Progression, however, is not linear, as players are free to explore the parts of the castle they have access to, and can backtrack or move forward as they see fit.
### Tactical Soul
The primary method for the player to gain additional abilities in the game is the absorption of souls via the Tactical Soul system originally featured in Aria of Sorrow. Except for human enemies and the game's final opponent, the player can absorb all enemies' souls. The chance of absorbing a soul varies by enemy, as certain enemies release souls more regularly than others. The player can absorb multiple copies of the same soul; many of these souls will increase in effectiveness depending on the number of the same soul a player possesses. Souls provide a variety of effects and are separated into four categories: Bullet, Guardian, Enchant, and Ability souls. The player can have only one type of Bullet, Guardian, and Enchant soul equipped at any given time. However, when the player acquires the "Dopplegänger" soul, they can have two different weapon and soul setups, and switch between them at will. Players can trade souls wirelessly using two Dawn of Sorrow game cards.
Bullet souls are often projectiles and consume a set number of magic points upon use. Guardian souls provide continuous effects including transforming into mythical creatures, defensive abilities, and the summoning of familiars. The movement and attacking of familiars can be directly controlled with the stylus. Guardian souls continually drain magic points so long as they are activated. Several Guardian souls can be used in conjunction with Bullet souls to execute special attacks called Tactical Soul combos. Enchant souls offer statistical bonuses and resistance against several forms of attack. They are passive, and require no magic points to remain active. Ability souls give the player new abilities and are required to move into certain areas of the castle. They are always active, and therefore not equipped – nor do they consume magic points. Some examples include the abilities to break ice blocks with the stylus and to double-jump.
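The equip rules described above (at most one Bullet, Guardian, and Enchant soul at a time, with Ability souls always active) can be modelled as a small data structure. This is a toy sketch with invented names, not the game's actual code:

```python
class SoulLoadout:
    """Toy model of the Tactical Soul equipment rules."""

    EQUIP_SLOTS = {"bullet", "guardian", "enchant"}

    def __init__(self) -> None:
        self.slots = {}         # at most one soul per equippable category
        self.abilities = set()  # Ability souls are always active, never equipped

    def equip(self, category: str, soul: str) -> None:
        if category == "ability":
            self.abilities.add(soul)         # accumulates, never swapped out
        elif category in self.EQUIP_SLOTS:
            self.slots[category] = soul      # replaces whatever was equipped
        else:
            raise ValueError(f"unknown soul category: {category}")

loadout = SoulLoadout()
loadout.equip("bullet", "Skeleton")
loadout.equip("bullet", "Axe Armor")   # only the newest Bullet soul stays
loadout.equip("ability", "Double Jump")
print(loadout.slots["bullet"])          # Axe Armor
```

Equipping a second Bullet soul displaces the first, while the double-jump Ability soul simply joins the always-active set.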
Souls can be spent to permanently transform a character's weapon. At Yoko Belnades' shop, the player can remove certain souls from their inventory to change their weapon into a stronger form. Certain weapons can be acquired only by using souls to strengthen a lesser form of the weapon. Souls are used in the "Enemy Set" mode, where a player builds a custom scenario. The player can place monsters inside rooms if they have acquired the monster's soul in the main game, but boss enemies cannot be added to any scenario, even if the player has the boss' soul. Two players using two Nintendo DS consoles can compete in these scenarios, with the winner being the one with the fastest time completing the course.
### Magic Seal
The Magic Seal system is a new feature introduced in Dawn of Sorrow that uses the DS touchscreen. Once the player reduces the hit points of a "boss" enemy to zero, a circle will appear, and the game will automatically draw a pattern connecting any number of smaller circles on the circumference of the larger circle. After this, the player is prompted to draw the same pattern on the touchscreen in a set amount of time. If the player fails to draw the pattern accurately within the time limit, the boss will regain health and the battle will resume. If successful, the boss will be defeated. More powerful boss enemies require higher level Magic Seals, which have more intricate and complex patterns as the level increases and are found over the course of the game.
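As a rough illustration of the mechanic (not the game's actual implementation), the check can be thought of as comparing the traced sequence of circumference points against the target pattern within a time limit. All names, the point numbering, and the time limit below are illustrative assumptions:

```python
def seal_succeeds(target: list[int], traced: list[int],
                  elapsed_s: float, limit_s: float = 5.0) -> bool:
    """The boss is sealed only if the exact pattern is traced in time.

    Points on the larger circle are numbered; a pattern is the ordered
    list of points the stylus must connect.
    """
    return traced == target and elapsed_s <= limit_s

pattern = [0, 3, 1, 4]                             # a hypothetical low-level seal
print(seal_succeeds(pattern, [0, 3, 1, 4], 2.8))   # True: correct and in time
print(seal_succeeds(pattern, [0, 3, 1, 4], 9.0))   # False: too slow, boss recovers
```

Higher-level seals would correspond to longer point sequences, making an accurate trace harder within the same window.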
### Julius Mode
After the player completes the game with either the bad ending or the best ending, Julius Mode (similar to that in Aria of Sorrow) is unlocked. In storyline terms, Julius Mode follows the assumption that Soma succumbed to his dark power and became the new dark lord. A new game can be started from the main menu in Julius Mode. In Julius Mode, the playable characters include Julius Belmont, Yoko Belnades, and Alucard. Each character has a weapon and assorted unique abilities. Although these abilities remain static throughout the entire game, the characters' statistics can improve by acquiring enough experience points to level up. The castle layout and enemies are the same except for the final battle, which is against Soma.
## Plot
### Setting
Dawn of Sorrow is set in the fictional universe of the Castlevania series. The primary premise of the series is the struggle of the vampire hunters of the Belmont clan against the vampire Dracula and his legacy. Before the events of Castlevania: Aria of Sorrow, Dracula was defeated and his castle sealed within a solar eclipse. With Dracula dead, a prophecy relating to who would inherit his powers drove the events of Aria of Sorrow, with the protagonist, Soma Cruz, realizing that he was Dracula's reincarnation. Soma manages to escape his fate of becoming the new dark lord with the help of his allies. Dawn of Sorrow takes place one year after the events of Aria of Sorrow, when Soma believes his inherited powers have been lost. Most of the game is played inside a copy of Dracula's castle, which is subdivided into several areas through which the player must venture over the course of the game. The future setting of both Aria of Sorrow and Dawn of Sorrow, as well as a storyline beginning after Dracula's defeat, reflects Koji Igarashi's desire to take a "different route" with Aria of Sorrow.
### Characters
The primary playable character in Dawn of Sorrow is Soma Cruz, the reincarnation of Dracula, the longtime antagonist of the Castlevania series. Supporting him in his quest are Mina Hakuba, the daughter of the priest of the Hakuba shrine; Genya Arikado, a mysterious government agent dealing primarily with the supernatural; Julius Belmont, the latest member of the Belmont clan of vampire hunters featured in the series; Yoko Belnades, a witch in the service of the Roman Catholic Church; and Hammer, a vendor of military material who maintains a large information network.
A cult dedicated to the resurrection of Dracula serves as the game's antagonists. Celia Fortner, a shadow priestess, heads the cult; she seeks to revive Dracula to prevent the loss of her magical powers. Dmitrii Blinov, a ruthless manipulator, and Dario Bossi, a vicious firebrand, are Celia's primary lieutenants. They are the "dark lord's candidates": born on the day Dracula was slain, they can assume his mantle by destroying his soul, which resides in Soma Cruz.
### Story
One year after the events in Aria of Sorrow, Soma is living peacefully, and believes his powers have been lost. A woman who identifies herself as Celia Fortner appears and summons several monsters. Arikado arrives to help Soma defeat the monsters, after which Soma absorbs their souls. Celia retreats, proclaiming that she will destroy Soma. Soma expresses disbelief at the return of his powers, but Arikado reveals that his powers were never lost, only dormant. He informs Soma that Celia is the head of a cult that seeks the resurrection of the dark lord. He leaves, instructing Soma not to pursue Celia.
Soma, however, uses information acquired from Hammer to locate the cult's base, a facsimile of Dracula's castle. Hammer arrives and, having left the military, agrees to help Soma by opening up a shop in the village outside the castle. After entering the castle, Soma encounters Yoko and Julius Belmont. As Julius leaves, Soma escorts Yoko to a safe location. During this time, she instructs him in the use of a Magic Seal, which is necessary to defeat certain monsters in the castle. As Soma travels farther into the castle, he meets Celia, who is flanked by two men, Dmitrii Blinov and Dario Bossi. Celia explains their nature as the "dark lord's candidates", who can become the dark lord by destroying Soma. He later encounters Dmitrii and is able to defeat him. Soma gains dominance over his soul, although he acquires no abilities. As Soma travels further, he comes upon Dario. Soma bests him, and Celia teleports Dario away from harm.
Soma meets Arikado, who is initially angered by Soma's presence, but accepts the situation. He gives Soma a letter and a talisman from Mina. Soma briefs Arikado on the current situation, and Arikado leaves to locate Dario. Soma comes upon Dario and Julius, the latter of whom has been defeated because he is unable to use the Magic Seals. Dario retreats, instructing Soma to fight him in the castle's throne room. Soma does so, lambasting Dario for desiring only power, and promising to defeat him. Before the battle begins, Soma uses one of his souls to transport himself into the mirror in the room, revealing Aguni, the flame demon sealed within Dario's soul. Soma defeats Aguni, leaving Dario powerless. As Dario flees, Celia arrives, and instructs Soma to come to the castle's center.
Upon arriving, Soma is forced to watch Celia kill Mina. Furious, he begins to succumb to his dark power. The talisman Mina gave Soma is able to slow the transformation, enabling Arikado to arrive in time to inform Soma that the "Mina" whom Celia killed was a doppelgänger. This aborts the transformation, but a soul leaves Soma and enters the doppelgänger, which takes on the appearance of Dmitrii. Dmitrii says that when Soma defeated him he allowed himself to be absorbed, wishing to use his powers to copy Soma's ability to dominate the souls of Dracula's minions. He then leaves with Celia to absorb the souls of many powerful demons and monsters in an attempt to increase his power. Soma and Arikado chase after the pair, and find them in the castle's basement. Dmitrii, using Celia as a sacrifice, seals Arikado's powers, and engages Soma. However, his soul is unable to bear the strain of controlling the demons he has absorbed, and they erupt out of him, combining into one gargantuan creature called Menace. Soma manages to defeat it, but the souls composing the demon begin to fall under Soma's dominance. He becomes overwhelmed and rejects them, fleeing from the castle with Arikado. Soma is conflicted over the present situation. He believes it was his responsibility to become the dark lord, and that the events of the game were the result of his not accepting this responsibility. Arikado convinces him his fate is not fixed. Soma then shares a tender moment with Mina, much to the amusement of his onlooking friends.
If Soma does not have Mina's talisman equipped when he witnesses Celia slay the Mina doppelgänger, he will not realize the deception and will fully accept his dark powers, ending the game and unlocking a new mode in which Julius, Yoko, and Arikado, who assumes his true form as Alucard, must venture into the castle to kill Soma. The game may also end early if Dario is confronted directly from the start: he loses control of Aguni and dies by immolation, enabling Celia to escape and Dmitrii to secretly possess Soma through his absorbed soul.
## Development
The production of Dawn of Sorrow was announced on January 6, 2005, as the first Castlevania game to be released for the Nintendo DS. Longtime Castlevania producer Koji Igarashi was in charge of the production. The choice to develop the game for the Nintendo DS instead of the Sony PlayStation Portable was due to Aria of Sorrow's success on Nintendo's Game Boy Advance, and Igarashi's observations of both consoles during the 2004 Electronic Entertainment Expo (E3). He felt it was a waste to use the storyline with Soma Cruz and the Tactical Soul system in only one game, contributing to his desire to make a sequel. The original design team from Aria of Sorrow, as well as personnel from Konami Tokyo, were involved in the production of Dawn of Sorrow. Igarashi intended to include a white-collar Japanese worker in the game. This worker would be a manager in a Japanese firm and have a family. However, the development team's opposition to this idea forced him to drop it.
The use of the technical features of the Nintendo DS was one of the production team's principal concerns during development. The DS touch screen was a primary point of interest. Several functions such as picking up items on the screen and moving them were originally intended to be incorporated into the game. However, scheduling problems forced the development team to abandon many of these ideas. Igarashi's primary concern with using the touch screen was that it would detract from "the Castlevania pure action gameplay", since the player would have to slow down play to use the stylus. The DS microphone was looked at during development, but Igarashi noted that although he found humorous uses for it, it was never seriously considered for inclusion in the game.
For the graphical representations of the game's enemies, Igarashi had sprites from earlier Castlevania games such as Castlevania: Symphony of the Night reused, and the development team redesigned them for use on the Nintendo DS. Unlike most recent Castlevania games, Ayami Kojima did not participate in designing the characters for Dawn of Sorrow. Instead, the characters were drawn in a distinctive anime style, influenced by producer Koji Igarashi, who wanted to market the game to a younger audience. Aria of Sorrow's sales figures did not meet expectations, and as a result, Igarashi consulted Konami's sales department. The staff concluded that the demographics of the Game Boy Advance did not line up with the series' target age group. Igarashi believed the Nintendo DS inherently attracted a younger audience, and he was working to court them with the anime style. Furthermore, Igarashi considered the anime style a litmus test for whether future Castlevania games would incorporate it. Kojima's hiatus from Dawn of Sorrow allowed her to concentrate on her character designs for Castlevania: Curse of Darkness.
### Audio
Michiru Yamane and Masahiko Kimura composed the game's music. Yamane, a longtime composer of music for the Castlevania series, had worked earlier on the music of Castlevania games such as Symphony of the Night and Aria of Sorrow. Kimura had developed the music for Castlevania on the Nintendo 64. In an interview, Yamane noted that she made the music "simple" and "easy to recognize", similar to her work on previous Castlevania games. She drew a parallel between her work on Castlevania games for the Game Boy Advance and her music for Dawn of Sorrow. In the same interview Igarashi said making music for handheld game consoles, regardless of the type, is largely the same, although he accepted that the DS's sound capabilities were better than those of the Game Boy Advance.
## Release
In Japan, the game sold over 15,000 units in its first week, acquiring the number ten slot in software sales. The game sold over 164,000 copies in the first three months after its release in the United States. The game was re-released in Japan in June 2006, and later in North America during 2007 as part of the "Konami the Best" line.
## Reception
Dawn of Sorrow has received critical acclaim from many video game publications, with several hailing it as the best Nintendo DS game of 2005.
Japanese gaming publication Famitsu gave it a 33 out of 40 score. Many reviewers noted that despite being highly similar to Aria of Sorrow, it managed to define itself as a standalone title. GameSpot commented that Dawn of Sorrow showed 2D games remained a viable genre, and that it "keeps that flame burning as bright as ever". GameSpot also considered it for the accolade of best Nintendo DS game of 2005, with the prize ultimately going to Mario Kart DS. Editors at IGN awarded Dawn of Sorrow the prize of best adventure game on the DS for 2005.
The gameplay, and the Tactical Soul system in particular, received praise from reviewers. The sheer depth of the abilities of the numerous souls found in the game was lauded, and IGN believed the ability to have two customizable "profiles" of different abilities was "an extremely handy idea". The relative difficulty of the game and its length were also brought into question, with GameSpot noting that the game could be finished in five hours and "is fairly easy as far as Castlevania games go".
GameSpot extolled the game's animation and graphics, describing the backgrounds as "intricate and gorgeous" and the individual animation, especially of enemies, as one of the game's "highlights". IGN echoed this assessment, calling the animation "stunning and fluid", and noted the differences in graphics between Aria of Sorrow and Dawn of Sorrow, saying the latter was on a "broader and more impressive scale". Reviewers lambasted the use of an anime style of drawing the characters, as opposed to the traditional gothic presentation of illustrator Ayami Kojima in earlier Castlevania games. GameSpy deplored the "shallow, lifeless anime images" used for the characters and Kojima's absence from the production. IGN believed the new images were "down to the level of 'generic Saturday morning Anime' quality". The audio by Michiru Yamane and Masahiko Kimura was highly regarded, with GameSpot saying it was "head and shoulders above [Aria of Sorrow]". IGN noted that the DS dual speaker system presented the audio "extraordinarily well". In the review from 1UP.com, the game's score was compared to the soundtrack of Symphony of the Night, with "excellent" sound quality and "exceptional" compositions.
The functionality associated with the Nintendo DS, namely the touch screen and the Magic Seal system, was subject to criticism from reviewers. GameSpot noted that it was difficult to use the stylus immediately after the game prompted the player to draw the Magic Seal, thus forcing the player to use their fingernail on the touch screen. Other functions using the touch screen, including clearing ice blocks, were viewed as trivial, with GameSpy calling it a "gimmick". However, IGN dismissed the lack of DS functionality as a major issue, claiming it "doesn't hurt the product in the slightest".
In 2010, the game was included as one of the titles in the book 1001 Video Games You Must Play Before You Die.
|
44,755 |
William of Tyre
| 1,164,874,414 |
12th-century clergyman, writer, and Archbishop of Tyre
|
[
"1130s births",
"1186 deaths",
"12th-century Latin writers",
"12th-century Roman Catholic archbishops in the Kingdom of Jerusalem",
"12th-century historians",
"12th-century jurists",
"Ambassadors to the Byzantine Empire",
"Canon law jurists",
"Medieval writers about the Crusades"
] |
William of Tyre (Latin: Willelmus Tyrensis; c. 1130 – 29 September 1186) was a medieval prelate and chronicler. As archbishop of Tyre, he is sometimes known as William II to distinguish him from his predecessor, William I, the Englishman, a former Prior of the Church of the Holy Sepulchre, who was Archbishop of Tyre from 1127 to 1135. He grew up in Jerusalem at the height of the Kingdom of Jerusalem, which had been established in 1099 after the First Crusade, and he spent twenty years studying the liberal arts and canon law in the universities of Europe.
Following William's return to Jerusalem in 1165, King Amalric made him an ambassador to the Byzantine Empire. William became tutor to the king's son, the future King Baldwin IV, whom William discovered to be a leper. After Amalric's death, William became chancellor and archbishop of Tyre, two of the highest offices in the kingdom, and in 1179 William led the eastern delegation to the Third Council of the Lateran. As he was involved in the dynastic struggle that developed during Baldwin IV's reign, his importance waned when a rival faction gained control of royal affairs. He was passed over for the prestigious Patriarchate of Jerusalem, and died in obscurity, probably in 1186.
William wrote an account of the Lateran Council and a history of the Islamic states from the time of Muhammad, neither of which survives. He is famous today as the author of a history of the Kingdom of Jerusalem. William composed his chronicle in excellent Latin for his time, with numerous quotations from classical literature. The chronicle is sometimes given the title Historia rerum in partibus transmarinis gestarum ("History of Deeds Done Beyond the Sea") or Historia Ierosolimitana ("History of Jerusalem"), or the Historia for short. It was translated into French soon after his death, and thereafter into numerous other languages. Because it is the only source for the history of twelfth-century Jerusalem written by a native, historians have often assumed that William's statements could be taken at face value. However, more recent historians have shown that William's involvement in the kingdom's political disputes resulted in detectable biases in his account. Despite this, he is considered the greatest chronicler of the crusades, and one of the best authors of the Middle Ages.
## Early life
The Kingdom of Jerusalem was founded in 1099 at the end of the First Crusade. It was the third of four Christian territories to be established by the crusaders, following the County of Edessa and the Principality of Antioch, and followed by the County of Tripoli. Jerusalem's first three rulers, Godfrey of Bouillon (1099–1100), his brother Baldwin I (1100–1118), and their cousin Baldwin II (1118–1131), expanded and secured the kingdom's borders, which encompassed roughly the same territory as modern-day Israel, Palestine, and Lebanon. During the kingdom's early decades, the population was swelled by pilgrims visiting the holiest sites of Christendom. Merchants from the Mediterranean city-states of Italy and France were eager to exploit the rich trade markets of the east.
William's family probably originated in either France or Italy, since he was very familiar with both countries. His parents were likely merchants who had settled in the kingdom and were "apparently well-to-do", although it is unknown whether they participated in the First Crusade or arrived later. William was born in Jerusalem around 1130. He had at least one brother, Ralph, who was one of the city's burgesses, a non-noble leader of the merchant community. Nothing more is known about his family, except that his mother died before 1165.
As a child William was educated in Jerusalem, at the cathedral school in the Church of the Holy Sepulchre. The scholaster, or school-master, John the Pisan, taught William to read and write, and first introduced him to Latin. From the Historia it is clear that he also knew French and possibly Italian, but there is not enough evidence to determine whether he learned Greek, Persian, and Arabic, as is sometimes claimed.
Around 1145 William went to Europe to continue his education in the schools of France and Italy, especially in those of Paris and Bologna, "the two most important intellectual centers of twelfth-century Christendom." These schools were not yet the official universities that they would become in the 13th century, but by the end of the 11th century both had numerous schools for the arts and sciences. They were separate from the cathedral schools, and were established by independent professors who were masters of their field of study. Students from all over Europe gathered there to hear lectures from these masters. William studied liberal arts and theology in Paris and Orléans for about ten years, with professors who had been students of Thierry of Chartres and Gilbert de la Porrée. He also spent time studying under Robert of Melun and Adam de Parvo Ponte, among others. In Orléans, one of the pre-eminent centres of classical studies, he read ancient Roman literature (known simply as "the Authors") with Hilary of Orléans, and learned mathematics ("especially Euclid") with William of Soissons. For six years, he studied theology with Peter Lombard and Maurice de Sully. Afterwards, he studied civil law and canon law in Bologna, with the "Four Doctors", Hugo de Porta Ravennate, Bulgarus, Martinus Gosia, and Jacobus de Boragine. William's list of professors "gives us almost a who's who of the grammarians, philosophers, theologians, and law teachers of the so-called Twelfth-Century Renaissance", and shows that he was as well-educated as any European cleric. His contemporary John of Salisbury had many of the same teachers.
## Religious and political life in Jerusalem
The highest religious and political offices in Jerusalem were usually held by Europeans who had arrived on pilgrimage or crusade. William was one of the few natives with a European education, and he quickly rose through the ranks. After his return to the Holy Land in 1165, he became canon of the cathedral at Acre. In 1167 he was appointed archdeacon of the cathedral of Tyre by Frederick de la Roche, archbishop of Tyre, with the support of King Amalric.
Amalric had come to power in 1164 and had made it his goal to conquer Egypt. Egypt had been invaded by King Baldwin I fifty years earlier, and the weak Fatimid Caliphate was forced to pay yearly tribute to Jerusalem. Amalric turned towards Egypt because Muslim territory to the east of Jerusalem had fallen under the control of the powerful Zengid sultan Nur ad-Din. Nur ad-Din had taken control of Damascus in 1154, six years after the disastrous siege of Damascus by the Second Crusade in 1148. Jerusalem could now expand only to the southwest, towards Egypt, and in 1153 Ascalon, the last Fatimid outpost in Palestine, fell to the crusaders. Nur ad-Din, however, also wished to acquire Egypt, and sent his army to hinder Amalric's plans. This was the situation in the east when William returned from Europe. In 1167 Amalric married Maria Comnena, grand-niece of Byzantine emperor Manuel I Comnenus, and in 1168 the king sent William to finalize a treaty for a joint Byzantine-crusader campaign against Egypt. The expedition, Amalric's fourth, was the first with support from the Byzantine navy. Amalric, however, did not wait for the fleet to arrive. He managed to capture Damietta, but within a few years he was expelled from Egypt by one of Nur ad-Din's generals, Saladin, who would later become Jerusalem's greatest threat.
Meanwhile, William continued his advancement in the kingdom. In 1169 he visited Rome, possibly to answer accusations made against him by Archbishop Frederick, although if so, the charge is unknown. It is also possible that while Frederick was away on a diplomatic mission in Europe, a problem within the diocese forced William to seek the archbishop's assistance.
On his return from Rome in 1170 he may have been commissioned by Amalric to write a history of the kingdom. He also became the tutor of Amalric's son and heir, Baldwin IV. When Baldwin was thirteen years old, he was playing with some children, who were trying to cause each other pain by scratching each other's arms. "The other boys gave evidence of pain by their outcries," wrote William, "but Baldwin, although his comrades did not spare him, endured it altogether too patiently, as if he felt nothing ... It is impossible to refrain from tears while speaking of this great misfortune." William inspected Baldwin's arms and recognized the possible symptoms of leprosy, which was confirmed as Baldwin grew older.
Amalric died in 1174, and Baldwin IV succeeded him as king. Nur ad-Din also died in 1174, and his general Saladin spent the rest of the decade consolidating his hold on both Egypt and Nur ad-Din's possessions in Syria, which allowed him to completely encircle Jerusalem. The subsequent events have often been interpreted as a struggle between two opposing factions, a "court party" and a "noble party." The "court party" was led by Baldwin's mother, Amalric's first wife Agnes of Courtenay, and her immediate family, as well as recent arrivals from Europe who were inexperienced in the affairs of the kingdom and were in favour of war with Saladin. The "noble party" was led by Raymond III of Tripoli and the native nobility of the kingdom, who favoured peaceful co-existence with the Muslims. This is the interpretation offered by William himself in the Historia, and it was taken as fact by later historians. Peter W. Edbury, however, has more recently argued that William must be considered extremely partisan as he was naturally allied with Raymond, who was responsible for his later advancement in political and religious offices. The accounts of the 13th-century authors who continued the Historia in French must also be considered suspect, as they were allied to Raymond's supporters in the Ibelin family. The general consensus among recent historians is that although there was a dynastic struggle, "the division was not between native barons and newcomers from the West, but between the king's maternal and paternal kin."
Miles of Plancy briefly held the regency for the underaged Baldwin IV. Miles was assassinated in October 1174, and Raymond III was soon appointed to replace him. Raymond named William chancellor of Jerusalem, as well as archdeacon of Nazareth, and on 6 June 1175, William was elected archbishop of Tyre to replace Frederick de la Roche, who had died in October 1174. William's duties as chancellor probably did not take up too much of his time; the scribes and officials in the chancery drafted documents and it may not have even been necessary for him to be present to sign them. Instead he focused on his duties as archbishop. In 1177 he performed the funeral services for William of Montferrat, husband of Baldwin IV's sister Sibylla, when the patriarch of Jerusalem, Amalric of Nesle, was too sick to attend.
In 1179, William was one of the delegates from Jerusalem and the other crusader states at the Third Lateran Council; among the others were Heraclius, archbishop of Caesarea, Joscius, bishop of Acre and William's future successor in Tyre, the bishops of Sebastea, Bethlehem, Tripoli, and Jabala, and the abbot of Mount Sion. Patriarch Amalric and Patriarch of Antioch Aimery of Limoges were unable to attend, and William and the other bishops did not have sufficient weight to persuade Pope Alexander III of the need for a new crusade. William was, however, sent by Alexander as an ambassador to Emperor Manuel, and Manuel then sent him on a mission to the Principality of Antioch. William does not mention exactly what happened during these embassies, but he probably discussed the Byzantine alliance with Jerusalem, and Manuel's protectorate over Antioch, where, due to pressure from Rome and Jerusalem, the emperor was forced to give up his attempts to restore a Greek patriarch. William was absent from Jerusalem for two years, returning home in 1180.
## Patriarchal election of 1180
During William's absence a crisis had developed in Jerusalem. King Baldwin had reached the age of majority in 1176 and Raymond III had been removed from the regency, but as a leper Baldwin could have no children and could not be expected to rule much longer. After the death of William of Montferrat in 1177, King Baldwin's widowed sister Sibylla required a new husband. At Easter in 1180, the two factions were divided even further when Raymond and his cousin Bohemond III of Antioch attempted to force Sibylla to marry Baldwin of Ibelin. Raymond and Bohemond were King Baldwin's nearest male relatives in the paternal line, and could have claimed the throne if the king died without an heir or a suitable replacement. Before Raymond and Bohemond arrived, however, Agnes and King Baldwin arranged for Sibylla to be married to a Poitevin newcomer, Guy of Lusignan, whose older brother Aimery of Lusignan was already an established figure at court.
The dispute affected William, since he had been appointed chancellor by Raymond and may have fallen out of favour after Raymond was removed from the regency. When Patriarch Amalric died on 6 October 1180, the two most obvious choices for his successor were William and Heraclius of Caesarea. They were fairly evenly matched in background and education, but politically they were allied with opposite parties, as Heraclius was one of Agnes of Courtenay's supporters. It seems that the canons of the Holy Sepulchre were unable to decide, and asked the king for advice; due to Agnes' influence, Heraclius was elected. There were rumours that Agnes and Heraclius were lovers, but this information comes from the partisan 13th-century continuations of the Historia, and there is no other evidence to substantiate such a claim. William himself says almost nothing about the election and Heraclius' character or his subsequent patriarchate, probably reflecting his disappointment at the outcome.
## Death
William remained archbishop of Tyre and chancellor of the kingdom, but the details of his life at this time are obscure. The 13th-century continuators claim that Heraclius excommunicated William in 1183, but it is unknown why Heraclius would have done this. They also claim that William went to Rome to appeal to the Pope, where Heraclius had him poisoned. According to Peter Edbury and John Rowe, the obscurity of William's life during these years shows that he did not play a large political role, but concentrated on ecclesiastical affairs and the writing of his history. The story of his excommunication, and the unlikely detail that he was poisoned, were probably an invention of the Old French continuators. William remained in the kingdom and continued to write up until 1184, but by then Jerusalem was internally divided by political factions and externally surrounded by the forces of Saladin, and "the only subjects that present themselves are the disasters of a sorrowing country and its manifold misfortunes, themes which can serve only to draw forth lamentations and tears."
His importance had dwindled with the victory of Agnes and her supporters, and with the accession of Baldwin V, infant son of Sibylla and William of Montferrat. Baldwin was a sickly child and he died the next year. In 1186 he was succeeded by his mother Sibylla and her second husband Guy of Lusignan, ruling jointly. William was probably in failing health by this point. Rudolf Hiestand discovered that the date of William's death was 29 September, but the year was not recorded; whatever the year, there was a new chancellor in May 1185 and a new archbishop of Tyre by 21 October 1186. Hans E. Mayer concluded that William died in 1186, and this is the year generally accepted by scholars.
William's foresight about the misfortunes of his country was proven correct less than a year later. Saladin defeated King Guy at the Battle of Hattin in 1187, and went on to capture Jerusalem and almost every other city of the kingdom, except the seat of William's archdiocese, Tyre. News of the fall of Jerusalem shocked Europe and plans were made to send assistance. According to Roger of Wendover, William was present at Gisors in France in 1188 when Henry II of England and Philip II of France agreed to go on crusade: "Thereupon the king of the English first took the sign of the cross at the hands of the Archbishop of Rheims and William of Tyre, the latter of whom had been entrusted by our lord the pope with the office of legate in the affairs of the crusade in the western part of Europe." Roger was however mistaken; he knew that an unnamed archbishop of Tyre was present and assumed it must have been the William whose chronicle he possessed, although the archbishop in question was actually William's successor Joscius.
## Works
### Historia
#### Latin chronicle
> In the present work we seem to have fallen into manifold dangers and perplexities. For, as the series of events seemed to require, we have included in this study on which we are now engaged many details about the characters, lives, and personal traits of kings, regardless of whether these facts were commendable or open to criticism. Possibly descendants of these monarchs, while perusing this work, may find this treatment difficult to brook and be angry with the chronicler beyond his just deserts. They will regard him as either mendacious or jealous—both of which charges, as God lives, we have endeavored to avoid as we would a pestilence.
William's great work is a Latin chronicle, written between 1170 and 1184. It contains twenty-three books; the final book, which deals with the events of 1183 and the beginning of 1184, has only a prologue and one chapter, so it is either unfinished or the rest of the pages were lost before the whole chronicle began to be copied. The first book begins with the conquest of Syria by Umar in the seventh century, but otherwise the work deals with the advent of the First Crusade and the subsequent political history of the Kingdom of Jerusalem. It is arranged, but was not written, chronologically; the first sections to be written were probably the chapters about the invasion of Egypt in 1167, which are extremely detailed and were likely composed before the Fatimid dynasty was overthrown in 1171. Much of the Historia was finished before William left to attend the Lateran Council, but new additions and corrections were made after his return in 1180, perhaps because he now realized that European readers would also be interested in the history of the kingdom. In 1184 he wrote the Prologue and the beginning of the twenty-third book.
August C. Krey thought William's Arabic sources may have come from the library of the Damascene diplomat Usama ibn Munqidh, whose library was looted by Baldwin III from a shipwreck in 1154. Alan V. Murray, however, has argued that, at least for the accounts of Persia and the Turks in his chronicle, William relied on Biblical and earlier medieval legends rather than actual history, and his knowledge "may be less indicative of eastern ethnography than of western mythography." William had access to the chronicles of the First Crusade, including Fulcher of Chartres, Albert of Aix, Raymond of Aguilers, Baldric of Dol, and the Gesta Francorum, as well as other documents located in the kingdom's archives. He used Walter the Chancellor and other now-lost works for the history of the Principality of Antioch. From the end of Fulcher's chronicle in 1127, William is the only source of information from an author living in Jerusalem. For events that happened in William's own lifetime, he interviewed older people who had witnessed the events about which he was writing, and drew on his own memory.
William's classical education allowed him to compose Latin superior to that of many medieval writers. He used numerous ancient Roman and early Christian authors, either for quotations or as inspiration for the framework and organization of the Historia. His vocabulary is almost entirely classical, with only a few medieval constructions such as "loricator" (someone who makes armour, a calque of the Arabic "zarra") and "assellare" (to empty one's bowels). He was capable of clever word-play and advanced rhetorical devices, but he was prone to repetition of a number of words and phrases. His writing also shows phrasing and spelling which is unusual or unknown in purely classical Latin but not uncommon in medieval Latin, such as:
- confusion between reflexive and possessive pronouns;
- confusion over the use of the accusative and ablative cases, especially after the preposition in;
- collapsed diphthongs (i.e. the Latin diphthongs ae and oe are spelled simply e);
- the dative "mihi" ("to me") is spelled "michi";
- a single "s" is often doubled, for example in the adjectival place-name ending which he often spells "-enssis"; this spelling is also used to represent the Arabic "sh", a sound which Latin lacks, for example in the name Shawar which he spells "Ssauar".
#### Literary themes and biases
Despite his quotations from Christian authors and from the Bible, William did not place much emphasis on the intervention of God in human affairs, resulting in a somewhat "secular" history. Nevertheless, he included much information that is clearly legendary, especially when referring to the First Crusade, which even in his own day was already considered an age of great Christian heroes. Expanding on the accounts of Albert of Aix, Peter the Hermit is given prominence in the preaching of the First Crusade, to the point that it was he, not Pope Urban II, who originally conceived the crusade. Godfrey of Bouillon, the first ruler of crusader Jerusalem, was also depicted as the leader of the crusade from the beginning, and William attributed to him legendary strength and virtue. This reflected the almost mythological status that Godfrey and the other first crusaders held for the inhabitants of Jerusalem in the late twelfth century.
William gave a more nuanced picture of the kings of his own day. He claimed to have been commissioned to write by King Amalric himself, but William did not allow himself to praise the king excessively; for example, Amalric did not respect the rights of the church, and although he was a good military commander, he could not stop the increasing threat from the neighbouring Muslim states. On a personal level, William admired the king's education and his interest in history and law, but also noted that Amalric had "breasts like those of a woman hanging down to his waist" and was shocked when the king questioned the resurrection of the dead.
About Amalric's son Baldwin IV, however, "there was no ambiguity". Baldwin was nothing but heroic in the face of his debilitating leprosy, and he led military campaigns against Saladin even while still underaged; William tends to gloss over campaigns where Baldwin was not actually in charge, preferring to direct his praise towards the afflicted king rather than subordinate commanders. William's history can be seen as an apologia, a literary defense, for the kingdom, and more specifically for Baldwin's rule. By the 1170s and 1180s, western Europeans were reluctant to support the kingdom, partly because it was far away and there were more pressing concerns in Europe, but also because leprosy was usually considered divine punishment.
William was famously biased against the Knights Templar, whom he believed to be arrogant and disrespectful of both secular and ecclesiastical hierarchies, as they were not required to pay tithes and were legally accountable only to the Pope. Although he was writing decades later, he is the earliest author to describe the actual foundation of the Templar order. He was generally favourable towards them when discussing their early days, but resented the power and influence they held in his own time. William accused them of hindering the siege of Ascalon in 1153; of poorly defending a cave-fortress in 1165, for which twelve Templars were hanged by King Amalric; of sabotaging the invasion of Egypt in 1168; and of murdering Assassin ambassadors in 1173.
Compared to other Latin authors of the twelfth century, William is surprisingly favourable to the Byzantine Empire. He had visited the Byzantine court as an official ambassador and probably knew more about Byzantine affairs than any other Latin chronicler. He shared the poor opinion of Alexius I Comnenus that had developed during the First Crusade, although he was also critical of some of the crusaders' dealings with Alexius. He was more impressed by Alexius' son John II Comnenus; he did not approve of John's attempts to bring the crusader Principality of Antioch under Byzantine control, but John's military expeditions against the Muslim states, the common enemy of both Greeks and Latins, were considered admirable. Emperor Manuel, whom William met during his visits to Constantinople, was portrayed more ambivalently, much like King Amalric. William admired him personally, but recognized that the Empire was powerless to help Jerusalem against the Muslim forces of Nur ad-Din and Saladin. William was especially disappointed in the failure of the joint campaign against Egypt in 1169. The end of the Historia coincides with the massacre of the Latins in Constantinople and the chaos that followed the coup of Andronicus I Comnenus, and in his description of those events, William was certainly not immune to the extreme anti-Greek rhetoric that was often found in Western European sources.
As a medieval Christian author William could hardly avoid hostility towards the kingdom's Muslim neighbours, but as an educated man who lived among Muslims in the east, he was rarely polemical or completely dismissive of Islam. He did not think Muslims were pagans, but rather that they belonged to a heretical sect of Christianity and followed the teachings of a false prophet. He often praised the Muslim leaders of his own day, even if he lamented their power over the Christian kingdom; thus Muslim rulers such as Mu'in ad-Din Unur, Nur ad-Din, Shirkuh, and even Jerusalem's ultimate conqueror Saladin are presented as honourable and pious men, characteristics that William did not bestow on many of his own Christian contemporaries.
#### Circulation of the chronicle
After William's death the Historia was copied and circulated in the crusader states and was eventually brought to Europe. In the 13th century, James of Vitry had access to a copy while he was bishop of Acre, and it was used by Guy of Bazoches, Matthew Paris, and Roger of Wendover in their own chronicles. However, there are only ten known manuscripts that contain the Latin chronicle, all of which come from France and England, so William's work may not have been very widely read in its original form. In England, however, the Historia was expanded in Latin, with additional information from the Itinerarium Regis Ricardi, and the chronicle of Roger Hoveden; this version was written around 1220.
It is unknown what title William himself gave his chronicle, although one group of manuscripts uses Historia rerum in partibus transmarinis gestarum and another uses Historia Ierosolimitana. The Latin text was printed for the first time in Basel in 1549 by Nicholas Brylinger; it was also published in the Gesta Dei per Francos by Jacques Bongars in 1611 and the Recueil des historiens des croisades (RHC) by Auguste-Arthur Beugnot and Auguste Le Prévost in 1844, and Bongars' text was reprinted in the Patrologia Latina by Jacques Paul Migne in 1855. The now-standard Latin critical edition, based on six of the surviving manuscripts, was published as Willelmi Tyrensis Archiepiscopi Chronicon in the Corpus Christianorum in 1986, by R. B. C. Huygens, with notes by Hans E. Mayer and Gerhard Rösch. The RHC edition was translated into English by Emily A. Babcock and August C. Krey in 1943 as "A History of Deeds Done Beyond the Sea," although the translation is sometimes incomplete or inexact.
#### Old French translation
A translation of the Historia into Old French, made around 1223, was particularly well-circulated and had many anonymous additions made to it in the 13th century. In contrast to the surviving Latin manuscripts, there are "at least fifty-nine manuscripts or fragments of manuscripts" containing the Old French translation. There are also independent French continuations attributed to Ernoul and Bernard le Trésorier. The translation was sometimes called the Livre dou conqueste; it was known by this name throughout Europe as well as in the crusader Kingdom of Cyprus and in Cilician Armenia, and 14th-century Venetian geographer Marino Sanuto the Elder had a copy of it. The French was further translated into Spanish, as the Gran conquista de Ultramar, during the reign of Alfonso the Wise of Castile in the late 13th century. The French version was so widespread that the Renaissance author Francesco Pipino [it] translated it back into Latin, unaware that a Latin original already existed. A Middle English translation of the French was made by William Caxton in the 15th century.
### Other works
William reports that he wrote an account of the Third Council of the Lateran, which does not survive. He also wrote a history of the Holy Land from the time of Muhammad up to 1184, for which he used Eutychius of Alexandria as his main source. This work seems to have been known in Europe in the 13th century but it also does not survive.
## Modern assessment
William's neutrality as an historian was often taken for granted until the late twentieth century. August C. Krey, for example, believed that "his impartiality ... is scarcely less impressive than his critical skill." Despite this excellent reputation, D. W. T. C. Vessey has shown that William was certainly not an impartial observer, especially when dealing with the events of the 1170s and 1180s. Vessey believes that William's claim to have been commissioned by Amalric is a typical ancient and medieval topos, or literary theme, in which a wise ruler, a lover of history and literature, wishes to preserve for posterity the grand deeds of his reign. William's claims of impartiality are also a typical topos in ancient and medieval historical writing.
His depiction of Baldwin IV as a hero is an attempt "to vindicate the politics of his own party and to blacken those of its opponents." As mentioned above, William was opposed to Baldwin's mother Agnes of Courtenay, Patriarch Heraclius, and their supporters; his interpretation of events during Baldwin's reign was previously taken as fact almost without question. In the mid twentieth century, Marshall W. Baldwin, Steven Runciman, and Hans Eberhard Mayer were influential in perpetuating this point of view, although the more recent re-evaluations of this period by Vessey, Peter Edbury and Bernard Hamilton have undone much of William's influence.
An often-noted flaw in the Historia is William's poor memory for dates. "Chronology is sometimes confused, and dates are given wrongly", even for basic information such as the regnal dates of the kings of Jerusalem. For example, William gives the date of Amalric's death as 11 July 1173, when it actually occurred in 1174.
Despite his biases and errors, William "has always been considered one of the greatest medieval writers." Runciman wrote that "he had a broad vision; he understood the significance of the great events of his time and the sequence of cause and effect in history." Christopher Tyerman calls him "the historian's historian", and "the greatest crusade historian of all," and Bernard Hamilton says he "is justly considered one of the finest historians of the Middle Ages". As the Dictionary of the Middle Ages says, "William's achievements in assembling and evaluating sources, and in writing in excellent and original Latin a critical and judicious (if chronologically faulty) narrative, make him an outstanding historian, superior by medieval, and not inferior by modern, standards of scholarship."
|
405,421 |
Ariel (moon)
| 1,170,116,538 |
Fourth-largest moon of Uranus
|
[
"Ariel (moon)",
"Astronomical objects discovered in 1851",
"Moons with a prograde orbit",
"Things named after Shakespearean works"
] |
Ariel is the fourth-largest of the 27 known moons of Uranus. Ariel orbits and rotates in the equatorial plane of Uranus, which is almost perpendicular to the planet's orbit, so the moon has an extreme seasonal cycle.
It was discovered in October 1851 by William Lassell and named for a character in two different pieces of literature. As of 2019, much of the detailed knowledge of Ariel derives from a single flyby of Uranus performed by the space probe Voyager 2 in 1986, which managed to image around 35% of the moon's surface. There are no active plans at present to return to study the moon in more detail, although various concepts such as a Uranus Orbiter and Probe have been proposed.
After Miranda, Ariel is the second-smallest of Uranus's five major rounded satellites and the second-closest to its planet. Among the smallest of the Solar System's 20 known spherical moons (it ranks 14th among them in diameter), it is believed to be composed of roughly equal parts ice and rocky material. Its mass is approximately equal to that of Earth's hydrosphere.
Like all of Uranus's moons, Ariel probably formed from an accretion disc that surrounded the planet shortly after its formation, and, like other large moons, it is likely differentiated, with an inner core of rock surrounded by a mantle of ice. Ariel has a complex surface consisting of extensive cratered terrain cross-cut by a system of scarps, canyons, and ridges. The surface shows signs of more recent geological activity than other Uranian moons, most likely due to tidal heating.
## Discovery and name
Discovered on 24 October 1851 by William Lassell, it is named for a sky spirit in Alexander Pope's 1712 poem The Rape of the Lock and Shakespeare's The Tempest.
Both Ariel and the slightly larger Uranian satellite Umbriel were discovered by William Lassell on 24 October 1851. Although William Herschel, who discovered Uranus's two largest moons Titania and Oberon in 1787, claimed to have observed four additional moons, this was never confirmed and those four objects are now thought to be spurious.
All of Uranus's moons are named after characters from the works of William Shakespeare or Alexander Pope's The Rape of the Lock. The names of all four satellites of Uranus then known were suggested by John Herschel in 1852 at the request of Lassell. Ariel is named after the leading sylph in The Rape of the Lock. It is also the name of the spirit who serves Prospero in Shakespeare's The Tempest. The moon is also designated Uranus I.
## Orbit
Among Uranus's five major moons, Ariel is the second closest to the planet, orbiting at a distance of about 190,000 km. Its orbit has a small eccentricity and is inclined very little relative to the equator of Uranus. Its orbital period is around 2.5 Earth days, coincident with its rotational period. This means that one side of the moon always faces the planet, a condition known as tidal locking. Ariel's orbit lies completely inside the Uranian magnetosphere. The trailing hemispheres (those facing away from their directions of orbit) of airless satellites orbiting inside a magnetosphere like Ariel are struck by magnetospheric plasma co-rotating with the planet. This bombardment may lead to the darkening of the trailing hemispheres observed for all Uranian moons except Oberon (see below). Ariel also captures magnetospheric charged particles, producing a pronounced dip in energetic particle count near the moon's orbit observed by Voyager 2 in 1986.
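The quoted orbital period follows directly from Kepler's third law. As a sketch (the gravitational parameter of Uranus below is a standard reference value, not taken from this article):

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM)
GM_uranus = 5.794e6   # km^3/s^2, standard gravitational parameter (assumed value)
a = 190_000.0         # Ariel's orbital distance from the article, km

T = 2 * math.pi * math.sqrt(a**3 / GM_uranus)   # orbital period in seconds
print(round(T / 86_400, 2))   # -> about 2.5 Earth days, matching the article
```

With the article's round-number distance the result lands within a percent of the stated 2.5-day period.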
Because Ariel, like Uranus, orbits the Sun almost on its side relative to its rotation, its northern and southern hemispheres face either directly towards or directly away from the Sun at the solstices. This means it is subject to an extreme seasonal cycle; just as Earth's poles see permanent night or daylight around the solstices, Ariel's poles see permanent night or daylight for half a Uranian year (42 Earth years), with the Sun rising close to the zenith over one of the poles at each solstice. The Voyager 2 flyby coincided with the 1986 southern summer solstice, when nearly the entire northern hemisphere was dark. Once every 42 years, when Uranus has an equinox and its equatorial plane intersects the Earth, mutual occultations of Uranus's moons become possible. A number of such events occurred in 2007–2008, including an occultation of Ariel by Umbriel on 19 August 2007.
Currently Ariel is not involved in any orbital resonance with other Uranian satellites. In the past, however, it may have been in a 5:3 resonance with Miranda, which could have been partially responsible for the heating of that moon (although the maximum heating attributable to a former 1:3 resonance of Umbriel with Miranda was likely about three times greater). Ariel may have once been locked in the 4:1 resonance with Titania, from which it later escaped. Escape from a mean motion resonance is much easier for the moons of Uranus than for those of Jupiter or Saturn, due to Uranus's lesser degree of oblateness. This resonance, which was likely encountered about 3.8 billion years ago, would have increased Ariel's orbital eccentricity, resulting in tidal friction due to time-varying tidal forces from Uranus. This would have caused warming of the moon's interior by as much as 20 K.
## Composition and internal structure
Ariel is the fourth-largest of the Uranian moons, and may have the third-greatest mass. It is also the 14th-largest moon in the Solar System. The moon's density is 1.66 g/cm<sup>3</sup>, which indicates that it consists of roughly equal parts water ice and a dense non-ice component. The latter could consist of rock and carbonaceous material including heavy organic compounds known as tholins. The presence of water ice is supported by infrared spectroscopic observations, which have revealed crystalline water ice on the surface of the moon, which is porous and thus transmits little solar heat to layers below. Water ice absorption bands are stronger on Ariel's leading hemisphere than on its trailing hemisphere. The cause of this asymmetry is not known, but it may be related to bombardment by charged particles from Uranus's magnetosphere, which is stronger on the trailing hemisphere (due to the plasma's co-rotation). The energetic particles tend to sputter water ice, decompose methane trapped in ice as clathrate hydrate and darken other organics, leaving a dark, carbon-rich residue behind.
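The inference from bulk density to an ice/rock mix can be sketched with a two-component mixture model. The component densities below are illustrative assumptions (not from the article), and the answer is sensitive to the density assumed for the non-ice component:

```python
# Invert a two-component density mixture for the ice mass fraction x:
#   1/rho_bulk = x/rho_ice + (1 - x)/rho_rock
rho_bulk = 1.66   # Ariel's measured bulk density, g/cm^3
rho_ice = 0.93    # assumed density of water ice, g/cm^3
rho_rock = 3.5    # assumed density of the non-ice component, g/cm^3

x_ice = (1 / rho_bulk - 1 / rho_rock) / (1 / rho_ice - 1 / rho_rock)
print(round(x_ice, 2))   # ice mass fraction under these assumptions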
Except for water, the only other compound identified on the surface of Ariel by infrared spectroscopy is carbon dioxide (CO<sub>2</sub>), which is concentrated mainly on its trailing hemisphere. Ariel shows the strongest spectroscopic evidence for CO<sub>2</sub> of any Uranian satellite, and was the first Uranian satellite on which this compound was discovered. The origin of the carbon dioxide is not completely clear. It might be produced locally from carbonates or organic materials under the influence of the energetic charged particles coming from Uranus's magnetosphere or solar ultraviolet radiation. This hypothesis would explain the asymmetry in its distribution, as the trailing hemisphere is subject to a more intense magnetospheric influence than the leading hemisphere. Another possible source is the outgassing of primordial CO<sub>2</sub> trapped by water ice in Ariel's interior. The escape of CO<sub>2</sub> from the interior may be related to past geological activity on this moon.
Given its size, rock/ice composition and the possible presence of salt or ammonia in solution to lower the freezing point of water, Ariel's interior may be differentiated into a rocky core surrounded by an icy mantle. If this is the case, the radius of the core (372 km) is about 64% of the radius of the moon, and its mass is around 56% of the moon's mass—the parameters are dictated by the moon's composition. The pressure in the center of Ariel is about 0.3 GPa (3 kbar). The current state of the icy mantle is unclear. The existence of a subsurface ocean is currently considered possible, though a 2006 study suggests that radiogenic heating alone would not be enough to allow for one. More recent research has concluded that an active subsurface ocean is possible for the four largest moons of Uranus.
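The quoted core fractions can be sanity-checked with a simple two-layer model. The mean radius of Ariel used below (578.9 km) and the ice density are assumed values not stated in this article; the core density is back-solved so the layered model reproduces the measured bulk density:

```python
# Two-layer model: rocky core of radius r_c inside an icy mantle of outer radius R.
r_c, R = 372.0, 578.9     # core radius (article) and assumed mean radius, km
rho_bulk = 1.66           # measured bulk density, g/cm^3
rho_ice = 0.93            # assumed mantle (water ice) density, g/cm^3

v_core = (r_c / R) ** 3                                    # core volume fraction
rho_core = (rho_bulk - (1 - v_core) * rho_ice) / v_core    # implied core density
m_core = v_core * rho_core / rho_bulk                      # core mass fraction

print(round(r_c / R, 2))   # radius ratio, ~0.64 as quoted
print(round(m_core, 2))    # mass fraction, in the neighbourhood of the quoted ~56%
```

Under these assumptions the radius ratio reproduces the article's 64%, and the core mass fraction comes out near 60%, reasonably close to the quoted 56% given the uncertain densities.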
## Surface
### Albedo and color
Ariel is the most reflective of Uranus's moons. Its surface shows an opposition surge: the reflectivity decreases from 53% at a phase angle of 0° (geometrical albedo) to 35% at an angle of about 1°. The Bond albedo of Ariel is about 23%—the highest among Uranian satellites. The surface of Ariel is generally neutral in color. There may be an asymmetry between the leading and trailing hemispheres; the latter appears to be redder than the former by 2%. Ariel's surface generally does not demonstrate any correlation between albedo and geology on one hand and color on the other hand. For instance, canyons have the same color as the cratered terrain. However, bright impact deposits around some fresh craters are slightly bluer in color. There are also some slightly blue spots, which do not correspond to any known surface features.
### Surface features
The observed surface of Ariel can be divided into three terrain types: cratered terrain, ridged terrain, and plains. The main surface features are impact craters, canyons, fault scarps, ridges, and troughs.
The cratered terrain, a rolling surface covered by numerous impact craters and centered on Ariel's south pole, is the moon's oldest and most geographically extensive geological unit. It is intersected by a network of scarps, canyons (graben), and narrow ridges mainly occurring in Ariel's mid-southern latitudes. The canyons, known as chasmata, probably represent graben formed by extensional faulting, which resulted from global tensional stresses caused by the freezing of water (or aqueous ammonia) in the moon's interior (see below). They are 15–50 km wide and trend mainly in an east- or northeasterly direction. The floors of many canyons are convex, rising up by 1–2 km. Sometimes the floors are separated from the walls of canyons by grooves (troughs) about 1 km wide. The widest graben have grooves running along the crests of their convex floors, which are called valles. The longest canyon is Kachina Chasma, at over 620 km in length (the feature extends into the hemisphere of Ariel that Voyager 2 did not see illuminated).
The second main terrain type—ridged terrain—comprises bands of ridges and troughs hundreds of kilometers in extent. It bounds the cratered terrain and cuts it into polygons. Within each band, which can be up to 25 to 70 km wide, are individual ridges and troughs up to 200 km long and between 10 and 35 km apart. The bands of ridged terrain often form continuations of canyons, suggesting that they may be a modified form of the graben or the result of a different reaction of the crust to the same extensional stresses, such as brittle failure.
The youngest terrains observed on Ariel are the plains: relatively low-lying smooth areas that must have formed over a long period of time, judging by their varying levels of cratering. The plains are found on the floors of canyons and in a few irregular depressions in the middle of the cratered terrain. In the latter case they are separated from the cratered terrain by sharp boundaries, which in some cases have a lobate pattern. The most likely origin for the plains is through volcanic processes; their linear vent geometry, resembling terrestrial shield volcanoes, and distinct topographic margins suggest that the erupted liquid was very viscous, possibly a supercooled water/ammonia solution, with solid ice volcanism also a possibility. The thickness of these hypothetical cryolava flows is estimated at 1–3 km. The canyons must therefore have formed at a time when endogenic resurfacing was still taking place on Ariel. A few of these areas appear to be less than 100 million years old, suggesting that Ariel may still be geologically active in spite of its relatively small size and lack of current tidal heating.
Ariel appears to be fairly evenly cratered compared to other moons of Uranus; the relative paucity of large craters suggests that its surface does not date to the Solar System's formation, which means that Ariel must have been completely resurfaced at some point of its history. Ariel's past geologic activity is believed to have been driven by tidal heating at a time when its orbit was more eccentric than currently. The largest crater observed on Ariel, Yangoor, is only 78 km across, and shows signs of subsequent deformation. All large craters on Ariel have flat floors and central peaks, and few of the craters are surrounded by bright ejecta deposits. Many craters are polygonal, indicating that their appearance was influenced by the preexisting crustal structure. In the cratered plains there are a few large (about 100 km in diameter) light patches that may be degraded impact craters. If this is the case they would be similar to palimpsests on Jupiter's moon Ganymede. It has been suggested that a circular depression 245 km in diameter located at 10°S 30°E is a large, highly degraded impact structure.
## Origin and evolution
Ariel is thought to have formed from an accretion disc or subnebula; a disc of gas and dust that either existed around Uranus for some time after its formation or was created by the giant impact that most likely gave Uranus its large obliquity. The precise composition of the subnebula is not known; however, the higher density of Uranian moons compared to the moons of Saturn indicates that it may have been relatively water-poor. Significant amounts of carbon and nitrogen may have been present in the form of carbon monoxide (CO) and molecular nitrogen (N<sub>2</sub>), instead of methane and ammonia. The moons that formed in such a subnebula would contain less water ice (with CO and N<sub>2</sub> trapped as clathrate) and more rock, explaining the higher density.
The accretion process probably lasted for several thousand years before the moon was fully formed. Models suggest that impacts accompanying accretion caused heating of Ariel's outer layer, reaching a maximum temperature of around 195 K at a depth of about 31 km. After the end of formation, the subsurface layer cooled, while the interior of Ariel heated due to decay of radioactive elements present in its rocks. The cooling near-surface layer contracted, while the interior expanded. This caused strong extensional stresses in the moon's crust, estimated at up to 30 MPa, which may have led to cracking. Some present-day scarps and canyons may be a result of this process, which lasted for about 200 million years.
The initial accretional heating together with continued decay of radioactive elements and likely tidal heating may have led to melting of the ice if an antifreeze like ammonia (in the form of ammonia hydrate) or some salt was present. The melting may have led to the separation of ice from rocks and formation of a rocky core surrounded by an icy mantle. A layer of liquid water (ocean) rich in dissolved ammonia may have formed at the core–mantle boundary. The eutectic temperature of this mixture is 176 K. The ocean, however, is likely to have frozen long ago. The freezing of the water likely led to the expansion of the interior, which may have been responsible for the formation of the canyons and obliteration of the ancient surface. The liquids from the ocean may have been able to erupt to the surface, flooding floors of canyons in the process known as cryovolcanism. More recent analysis has concluded that an active ocean is probable for the four largest moons of Uranus, including Ariel.
Thermal modeling of Saturn's moon Dione, which is similar to Ariel in size, density, and surface temperature, suggests that solid state convection could have lasted in Ariel's interior for billions of years, and that temperatures in excess of 173 K (the melting point of aqueous ammonia) may have persisted near its surface for several hundred million years after formation, and near a billion years closer to the core.
## Observation and exploration
The apparent magnitude of Ariel is 14.8, similar to that of Pluto near perihelion. However, while Pluto can be seen through a telescope of 30 cm aperture, Ariel, due to its proximity to Uranus's glare, is often not visible to telescopes of 40 cm aperture.
The only close-up images of Ariel were obtained by the Voyager 2 probe, which photographed the moon during its flyby of Uranus in January 1986. The closest approach of Voyager 2 to Ariel was 127,000 km (79,000 mi)—significantly less than the distances to all other Uranian moons except Miranda. The best images of Ariel have a spatial resolution of about 2 km. They cover about 40% of the surface, but only 35% was photographed with the quality required for geological mapping and crater counting. At the time of the flyby the southern hemisphere of Ariel (like those of the other moons) was pointed towards the Sun, so the northern (dark) hemisphere could not be studied. No other spacecraft has ever visited the Uranian system. The possibility of sending the Cassini spacecraft to Uranus was evaluated during its mission extension planning phase. It would have taken about twenty years to get to the Uranian system after departing Saturn, and these plans were scrapped in favour of remaining at Saturn and eventually destroying the spacecraft in Saturn's atmosphere.
### Transits
On 26 July 2006, the Hubble Space Telescope captured a rare transit made by Ariel on Uranus, which cast a shadow that could be seen on the Uranian cloud tops. Such events are rare and only occur around equinoxes, as the moon's orbital plane about Uranus is tilted 98° to Uranus's orbital plane about the Sun. Another transit, in 2008, was recorded by the European Southern Observatory.
## See also
- List of natural satellites
- Planetary Science Decadal Survey
|
3,316,158 |
Frank Pick
| 1,172,209,900 |
British transport administrator
|
[
"1878 births",
"1941 deaths",
"Alumni of the University of London",
"English chief executives",
"English solicitors",
"History of the London Underground",
"People associated with transport in London",
"People educated at St Peter's School, York",
"People from Spalding, Lincolnshire",
"Transport design in London"
] |
Frank Pick Hon. RIBA (23 November 1878 – 7 November 1941) was a British transport administrator. After qualifying as a solicitor in 1902, he worked at the North Eastern Railway, before moving to the Underground Electric Railways Company of London (UERL) in 1906. He was chief executive officer and vice-chairman of the London Passenger Transport Board from its creation in 1933 until 1940.
Pick had a strong interest in design and its use in public life. He steered the development of the London Underground's corporate identity by commissioning eye-catching commercial art, graphic design and modern architecture, establishing a highly recognisable brand, including the first versions of the roundel and typeface still used today.
Under his direction, the UERL's Underground network and associated bus services expanded considerably, reaching out into new areas and stimulating the growth of London's suburbs. His impact on the growth of London between the world wars led to his being likened to Baron Haussmann and Robert Moses.
Pick's interest extended beyond his own organisation. He was a founding member and later served as president of the Design and Industries Association. He was also the first chairman of the Council for Art and Industry and regularly wrote and lectured on design and urban planning subjects. For the government, Pick prepared the transport plan for the mass evacuation of civilians from London at the outbreak of war and produced reports on the wartime use of canals and ports.
## Early life
Frank Pick was born on 23 November 1878 at Spalding, Lincolnshire. He was the first child of five born to draper Francis Pick and his wife Fanny Pick (née Clarke). Pick's paternal grandfather, Charles Pick, was a farmer in Spalding who died in his forties, leaving eight children. His maternal grandfather, Thomas Clarke, was a blacksmith and Wesleyan lay preacher. As a child, Pick was bookish, preferring to read and build collections of moths and butterflies and objects found on the beach rather than take part in sports.
Before becoming a draper, Pick's father had had an ambition to become a lawyer, and he encouraged his son to follow this career. Pick attended St Peter's School in York on a scholarship, and was articled to a York solicitor, George Crombie, in March 1897. He qualified in January 1902 and completed a law degree at the University of London in the same year, but did not go on to practise.
In 1902, Pick began working for the North Eastern Railway. He worked first in the company's traffic statistics department before becoming assistant to the company's general manager, Sir George Gibb, in 1904. That same year, Pick married Mabel Mary Caroline Woodhouse. The couple had no children.
## London's transport
In 1906, Gibb was appointed managing director of the UERL. At Gibb's invitation, Pick also moved to the UERL to continue working as his assistant. The UERL controlled the District Railway and, during 1906 and 1907, opened three deep-level tube lines – the Baker Street and Waterloo Railway (Bakerloo tube), the Charing Cross, Euston and Hampstead Railway (Hampstead tube) and the Great Northern, Piccadilly and Brompton Railway (Piccadilly tube).
The UERL had financial problems. Ticket prices were low and passenger numbers were significantly below the pre-opening estimates. The lower than expected passenger numbers were partly the result of competition between the UERL's lines and those of the other tube and sub-surface railway companies. The spread of street-level electric trams and motor buses, replacing slower, horse-drawn road transport, also took a large number of passengers away from the trains.
### Branding
By 1908, Pick had become publicity officer responsible for marketing and it was at this time that, working with the company's general manager Albert Stanley, he began developing the strong corporate identity and visual style for which the London Underground later became famous, including the introduction of the "UNDERGROUND" brand. Pick's philosophy on design was that "the test of the goodness of a thing is its fitness for use. If it fails on this first test, no amount of ornamentation or finish will make it any better; it will only make it more expensive, more foolish."
Pick became traffic development officer in 1909 and commercial manager in 1912. Albert Stanley replaced Gibb as managing director in 1910. During 1912 and 1913, the UERL increased its control over transport services in London by purchasing two tube railways, the City & South London Railway (C&SLR) and Central London Railway (CLR), and a number of bus and tram companies. One of Pick's responsibilities was to increase passenger numbers, and he believed that the best way to do so was by encouraging increased patronage of the company's services outside peak hours. He commissioned posters which promoted the Underground's trains and London General Omnibus Company's (LGOC's) buses as a means of reaching the countryside around London and attractions within the city. Realising that variety was important to maintain travellers' interest, he commissioned designs from artists working in many different styles.
At the same time, he rationalised bus routes to ensure that they complemented and acted as feeder services for the company's railway lines, tripling the number of LGOC-operated routes during 1912 and extending the area covered to five times its previous size. Sunday excursion services to leisure destinations were implemented to fully utilise otherwise idle buses and agreements were established with rural bus operators to coordinate services rather than compete with them.
Pick introduced a common advertising policy, improving the appearance of stations by standardising poster sizes, limiting the number used and controlling their positioning. Before he took control of advertising, posters had been stuck up on any available surface on station buildings and platform walls in a crowded jumble of shapes and sizes that led to complaints from passengers that it was difficult to find the station name. Pick standardised commercial poster sizes on printers' double crown sheets, arranging these in organised groups to enable the station name to be easily seen. The Underground's own promotional posters were smaller, using single or paired double royal sheets, and were arranged separately from the commercial advertising. Pick described the process: "after many fumbling experiments I arrived at some notion of how poster advertising ought to be. Everyone seemed quite pleased and I got a reputation that really sprang out of nothing."
To make the Underground Group's posters and signage more distinctive, he commissioned calligrapher and typographer Edward Johnston to design a clear new typeface. Pick specified to Johnston in 1913 that he wanted a typeface that would ensure that the Underground Group's posters would not be mistaken for advertisements; it should have "the bold simplicity of the authentic lettering of the finest periods" and belong "unmistakably to the twentieth century". Johnston's sans serif "Underground" typeface, (now known as Johnston) was first used in 1916 and was so successful that, with minor modifications in recent years, it is still in use today.
In conjunction with his changes to poster display arrangements, Pick experimented with the positioning and sizing of station name signs on platforms, which were often inadequate in number or poorly placed. In 1908, he settled on an arrangement where the sign was backed by a red disc to make it stand out clearly, creating the "bulls-eye" device – the earliest form of what is today known as the roundel. In 1909, Pick started to combine the "bulls-eye" and the "UNDERGROUND" brand on posters and station buildings, but was not satisfied with the arrangement.
By 1916, he had decided to adapt the logo used by the LGOC, the Underground Group's bus company, which was in the form of a ring with a bar bearing the name "GENERAL" across the centre. Pick commissioned Johnston to redesign the "bulls-eye" and the form used today is based on that developed by Johnston and first used in 1919.
### Expansion
In 1919, with a return to normality after the First World War, Pick began developing plans to extend the Underground network out into suburbs that lacked adequate transport services. The only major extensions made to the Underground network since the three tube lines had opened were the extension of the District Railway to Uxbridge in 1910, and the extension of the Bakerloo tube to Watford Junction between 1913 and 1917. Approved schemes put on hold during the war were revived: the CLR was extended to Ealing Broadway in 1920, the Hampstead tube was extended to Edgware between 1923 and 1924 and the C&SLR was reconstructed and extended to Camden Town between 1922 and 1924. Finance for the latter two extensions was obtained through the government's Trade Facilities Acts which underwrote loans for public works as a means of alleviating unemployment.
For new lines, Pick first considered extending Underground services to the northeast of London where the mainline suburban services of the Great Northern Railway (GNR) and Great Eastern Railway (GER) were poor and unreliable. Studies were carried out for an extension of the Piccadilly tube on GNR tracks to New Barnet and Enfield or on a new route to Wood Green and plans were developed for an extension of the CLR along GER tracks to Chingford and Ongar, but both mainline companies strongly opposed the Underground's encroachment into their territories.
Wanting to make maximum use of the government's financial backing, which was only available for a limited period, Pick did not have time to press the Underground's case for these extensions. Instead he developed a plan for an extension of the C&SLR southwest from Clapham Common to Sutton in Surrey. Pick still faced strong opposition from the London, Brighton and South Coast Railway and the London and South Western Railway which operated in the area, but the Underground had the advantage of already having an approval for the last few miles of the route as part of an unused prewar permission for a new line from Wimbledon to Sutton. The railway companies challenged the need for a new service, claiming it would simply drain passengers from their own trains and that any extension should only run as far as Tooting, but Pick was able to counter their arguments and negotiated a compromise settlement to extend the C&SLR as far as Morden.
Even before the C&SLR extension had been completed in 1926, possibilities for the northward extension of the Piccadilly tube began to reappear. From 1922, a series of press campaigns called for the improvement of services at the GNR's Finsbury Park station where interchanges between tube lines, mainline trains, buses and trams were notoriously bad. In June 1923, a petition from 30,000 local residents was submitted to Parliament, and in 1925, the government called a public inquiry to review options. Pick presented plans to relieve the congestion at Finsbury Park by extending the Piccadilly tube north to Southgate.
Opposition from the London and North Eastern Railway (successor to the GNR following the 1923 grouping of railway companies) was again considerable and based on claims that the new Underground line would take passengers from the mainline services. Using data from the Bakerloo tube, Hampstead tube and C&SLR extensions, Pick demonstrated that the route planned for the new line would stimulate new residential development and increase passenger numbers for all rail operators in the area, increasing those on the Piccadilly tube by 50 million per year.
Parliamentary approval was granted in 1930 to extend the Piccadilly tube north beyond Southgate to a terminus at Cockfosters. The approval also included complementary extensions of the Piccadilly tube from its western terminus at Hammersmith to supplement District Railway services to Hounslow and South Harrow. The development was again financed with government backed loans, this time through the 1929 Development (Loan Guarantees and Grants) Act. To ensure the most efficient integration between the new tube line and the UERL's bus and tram operations, the stations were located further apart than in central areas and where road transport services could be arranged to deliver and collect the most passengers. At Manor House, the station was designed with subway exits directly on to pedestrian islands in the road served by the local trams.
### Design
In 1924, with plans for the C&SLR extension under development, Pick commissioned Charles Holden to design the station buildings in a new style. The designs replaced a set by the Underground's own architect, Stanley Heaps, which Pick had found unsatisfactory. Pick had first met Holden at the Design and Industries Association (DIA) in 1915, and he saw the modernist architect as one he could work with to define what Pick called "a new architectural idiom".
Pick wanted to streamline and simplify the design of the stations to make them welcoming, brightly lit and efficient with large, uncluttered ticket halls for the rapid sale of tickets and quick access to the trains via escalators. At these new stations, tickets were issued from a number of "passimeters", glazed booths in the centre of the ticket hall, rather than the traditional ticket office windows set to one side.
Pick was pleased with the results and at a DIA dinner in 1926 proclaimed "that a new style of architectural decoration will arise" leading to a "Modern London – modern not garbled classic or Renaissance." Amongst Pick's next commissions for Holden were the redesign of Piccadilly Circus station (1925–28), where a wide subterranean concourse and ticket hall were built beneath the road junction, and the Underground Group's new headquarters building at 55 Broadway, St James's (1925–1929).
The new headquarters building was on an awkwardly shaped site, partly over the platforms and tracks of St James's Park station. Although Holden's practice had not designed such a large office building, it did have experience of large hospital design, which Pick saw as complementary to the design of a modern office building. When completed, the twelve-storey, 176-foot (54 m) high cruciform building was the tallest in London and the tower dominated the skyline. The building was well received by architectural critics and won Holden the RIBA's London Architecture Medal for 1929.
Two sculptures commissioned for the building were less well received, generating considerable controversy in the media. The nudity and primitive carving of Day and Night by Jacob Epstein led to calls for them to be removed from the building and the board of the Underground Group considered replacing them with new sculptures by another artist. Although he privately admitted later that the sculptures were not to his taste, Pick publicly supported Holden's selection of Epstein as sculptor and offered to resign over the matter. The crisis was averted when Epstein was persuaded to reduce the length of the penis of one of the figures and the sculptures remained in place.
Pick wanted a new type of building for the more open sites of the stations on the Piccadilly line's extensions. To decide what this new type should look like, he and Holden made a short tour of Germany, Denmark, Sweden and the Netherlands in July and August 1930 to see the latest developments in modern architecture.
Pick was disappointed with much of the new architecture that he saw in Germany and Sweden, considering it either too extreme or unsatisfactorily experimental. The architecture in the Netherlands was much more to his liking, particularly buildings by Willem Marinus Dudok in Hilversum. Although the architecture in Denmark was not considered remarkable, Pick was impressed with the way in which designers there were often responsible for all elements of a building including the interior fixtures and fittings.
The designs Pick commissioned from Holden (1931–33) established a new standard for the Underground, with the prototype station at Sudbury Town being described by architectural historian Nikolaus Pevsner as a "landmark" and the start of "the 'classic' phase of Underground architecture". To ensure that the new stations achieved the complete and coherent design that he wanted, Pick instructed the engineering departments to provide Holden with full details of all equipment needed for the stations. After late equipment changes by the engineers at the first few new stations compromised the integrated design, Pick took personal charge of the coordination of the architectural and engineering elements.
In the mid-1930s when the introduction of trolleybuses to replace trams required the installation of new street poles to support overhead wiring, Pick was keenly interested that the design of the poles was coordinated to accommodate all of the possible equipment and signage that might be needed. He also oversaw the designs of the new bus stops and bus shelters that were installed when specified stopping points were introduced for bus services.
### London Passenger Transport Board
At the beginning of the 1920s, with vehicle numbers depleted by wartime service in France and Belgium, the Underground Group's bus operations began to experience a surge in competition from a large number of new independent bus operators. These small operators were unregulated and preyed on the group's most profitable routes taking away a large number of its passengers and a large amount of its income. Albert Stanley (ennobled as Lord Ashfield in 1920) and Pick fought back by calling on parliament to regulate bus operations in the capital. The London Traffic Act 1924 granted their request by establishing the London Traffic Area to regulate road passenger traffic within London and the surrounding districts.
Throughout the 1920s, Pick led the Underground Group's efforts to coordinate its services with the municipal tram operators, the Metropolitan Railway and the suburban mainline rail services. The aim was to achieve a pooling of income between all of the operators and remove wasteful competition. At the end of 1930, a solution was announced in a bill for the formation of the London Passenger Transport Board (LPTB), a public corporation which was to take control of the Underground Group, the Metropolitan Railway and the majority of the bus and tram operators within an area designated as the London Passenger Transport Area covering the County of London and Middlesex and parts of Buckinghamshire, Essex, Hertfordshire, Kent, Surrey and Sussex.
Pick had become joint managing director of the Underground Group in 1928, and when, on 1 July 1933, the group was taken over by the LPTB, he became chief executive officer and vice-chairman, on an annual salary of £10,000. Ashfield was chairman. Pick led the board's negotiations on the compensation to be paid to the owners and shareholders of each of the transport operations being taken over.
With the majority of London's transport operations now under the control of a single organisation, Pick was able to commence the next round of improvements. On the Metropolitan Railway (renamed the Metropolitan line), Pick and Ashfield began to rationalise services. The barely used and loss-making Brill and Verney Junction branches beyond Aylesbury were closed in 1935 and 1936. Freight services were reduced and electrification of the remaining steam operated sections of the line was planned.
In 1935, the availability of government-backed loans to stimulate the flagging economy allowed Pick to promote system-wide improvements under the New Works Programme for 1935–1940, including the transfer of the Metropolitan line's Stanmore services to the Bakerloo line in 1939, the Northern line's Northern Heights project and extension of the Central line to Ongar in Essex and Denham in Buckinghamshire.
During 1938 and 1939, with war anticipated, an increasing part of Pick's time was spent in planning for the approaching conflict. The Railway Executive Committee was reconstituted in 1938 to act as a central coordinating body for the country's railways with Pick as the LPTB's representative. This role absorbed most of his time after the committee took over control of the railways on 1 September 1939. Following a disagreement with other members of the LPTB board over the government's proposals to limit the dividend that it could pay to its shareholders, Pick stated his intention to retire from the board at the end of his seven-year appointment in May 1940.
Pick had previously suggested a reorganisation of the LPTB's senior management structure and hoped to be able to continue with the organisation in some sort of joint general manager position. Ashfield chose not to find such a continuing role for Pick and, on 18 May 1940, to the surprise of many within the organisation, Pick retired from the LPTB board, officially due to failing health. Pick's post of chief executive was abolished and replaced with a group of six heads of department.
## Other activities
Pick's interest in design led to his involvement in the founding, in 1915, of the Design and Industries Association. The organisation aimed to bring manufacturers and designers together to improve the quality of industrial design. Through his improvements in the UERL's advertising and branding, Pick was considered by many of its members to be taking a practical lead in achieving the organisation's aims and he was soon lecturing on the subject, giving talks during 1916 and 1917 at the Art Workers Guild in London, at the Royal Scottish Academy in Edinburgh and elsewhere.
After the First World War, Pick continued to give talks regularly and published articles on design. He also began to set out his ideas on reconstruction and town planning, an area of design he became interested in through its connection to transport planning. He wrote and lectured extensively on this subject during the 1920s and 1930s, including presenting a 14,000-word paper to the Institute of Transport in 1927 and addressing the International Housing and Town Planning Congress in 1939. Concerned about the unchecked growth of London, partly facilitated by the new lines that London Underground was building, Pick was a strong supporter of the need for a green belt around the capital to maintain open space within reach of urban areas.
In 1922, he wrote and published privately a pamphlet This is the World that Man Made, or The New Creation that was influenced by the rationalist writing of Ray Lankester. In it Pick was pessimistic that mankind was not achieving its creative potential. He returned to the subject in lectures he gave in the 1930s when he outlined his concern that at some not too distant point progress in civilisation would come to a natural end and a stable condition would arise where, he believed, it would be hard to maintain creativity and an entropic decline would follow.
Later, in the last year of his life and with the Second World War under way, he published two booklets on postwar reconstruction, Britain Must Rebuild and Paths to Peace. Pick wrote the introduction to the English translation of Walter Gropius's The New Architecture and the Bauhaus published in 1935.
Beside his positions at the UERL and LPTB, Pick held a number of industrial administrative and advisory positions. In 1917, during the First World War, Pick was appointed to be head of the Mines Directorate's Household Fuel and Lighting Department at the Board of Trade where Albert Stanley was the President. Pick was responsible for the control of the rationing and distribution of domestic fuel supplies. He remained in this position until June 1919. Also in 1917, Pick was appointed as a member of the Special Committee advising the Civil Aerial Transport Committee on technical and practical questions of aerial transport. From 12 December 1917 he was a member of the main committee.
In 1928, he was appointed as a member of the Royal Commission on Police Powers and Procedure. He also served as a member of the London and Home Counties Traffic Advisory Committee and as a member of the Crown Lands Advisory Committee.
Pick was President of the Institute of Transport for 1931/32. He was President of the Design and Industries Association from 1932 to 1934 and the chairman of the Board of Trade's Council for Art and Industry from 1934 to 1939.
During 1938, the government appointed Pick to plan the transport operations for the evacuation of civilians from London. Initially scheduled for 30 September 1938, the plans were cancelled when Neville Chamberlain's Munich conference with Adolf Hitler averted war that year, but were activated a year later at the beginning of September 1939 on the declaration of war with Germany. After leaving the LPTB, Pick visited British ports for the Ministry of Transport to prepare a report on methods of improving port operations and cargo handling. In August 1940, he reluctantly accepted the position of director-general of the Ministry of Information.
His time at the Ministry of Information was short and unhappy and he left after four months and returned to the Ministry of Transport, where he carried out studies on improvements in the use of Britain's canals and rivers.
## Personality
Biographers have characterised Pick as being "very shy", and "brilliant but lonely". Christian Barman described him as a person who inspired conflicting opinions about his personality and his actions: "a man about whom so many people held so many different views". Pick acknowledged that he could be difficult to work with: "I have always kept in mind my own frailties – a short temper. Impatience with fools, quickness rather than thoroughness. I am a bad hand at the gracious word or casual congratulation." His moralistic character led to friends giving him the nickname "Jonah".
Pick valued criticism and savoured challenging debate, though he complained that he found it difficult to get people to stand up to him. UERL board member Sir Ernest Clark considered Pick to be perhaps too efficient and unable to fully delegate and relinquish responsibility: "his own efficiency has a bad effect on the efficiency of others... How can the housemaid take pride in a job to which the mistress will insist on putting the finishing touch?" Pick's friend Noel Carrington thought that his attention to detail made him the "ideal inspector general."
Pick ran his office on a fortnightly cycle and his workload was prodigious. Barman described Pick's office as a training school for future managers, with a regular turnover of staff who would go on to management positions when Pick thought them ready.
Ashfield considered that Pick possessed "a sterling character and steadfast loyalty", and "an administrative ability which was outstanding", with "a keen analytical mind which was able to seize upon essentials and then drive his way through to his goal, always strengthened by a sure knowledge of the problem and confidence in himself." Charles Holden described Pick's management of meetings: "Here his decisions were those of a benevolent dictator, and the members left the meeting with a clear sense of a task to be performed, difficult, perhaps, and sometimes impossible, as might subsequently prove to be, but usually well worth exploring if only in producing convincing proof of obstacles. Out of these exploratory methods there often emerged new and most interesting solutions, which Pick was quick to appreciate, and to adopt in substitution for his own proposals."
Disliking honours, Pick declined offers of a knighthood and a peerage. He did accept, in 1932, the Soviet Union's Honorary Badge of Merit for his advice on the construction of the Moscow Metro. He was an honorary member of the Royal Institute of British Architects.
## Influences
Pick was widely read and was influenced by many writers on scientific, sociological and social matters including works by Alfred North Whitehead, Leonard Hobhouse, Edwin Lankester, Arthur Eddington and John Ruskin. On design, he was influenced by D'Arcy Wentworth Thompson's description of design in nature in On Growth and Form and by architect William Lethaby. His admiration for William Morris led him to adopt Morris's favourite colour of green as his own, using green ink for the majority of his correspondence.
## Legacy
Pick had not been well for some years. The stresses of his war work took a further toll on his health and he lost two stone during his travels around the country to research his report on the canal industry. Although exhausted at the end of the tour, he wrote to friends that he was struggling with the idleness and was hoping for something new to do. He died at his home, 15 Wildwood Road, Hampstead Garden Suburb, on 7 November 1941 from a cerebral haemorrhage. His funeral was held at Golders Green Crematorium on 11 November 1941 and a memorial service was held at St Peter's Church, Eaton Square on 13 November 1941.
Working with Ashfield, Pick's impact on London's transport system was considerable. Transport historian Christian Wolmar considers it "almost impossible to exaggerate the high regard in which [London Transport] was held during its all too brief heyday, attracting official visitors from around the world eager to learn the lessons of its success and apply them in their own countries" and that "it represented the apogee of a type of confident public administration ... with a reputation that any state organisation today would envy ... only made possible by the brilliance of its two famous leaders, Ashfield and Pick." In his obituary of Pick, Charles Holden described him as "the Maecenas of our time."
Writing in 1968, Nikolaus Pevsner described Pick as "the greatest patron of the arts whom this century has so far produced in England, and indeed the ideal patron of our age." Considering Pick's public statements on art and life, art historian Kenneth Clark suggested that "in a different age he might have become a sort of Thomas Aquinas". Historian Michael Saler compared Pick's influence on London Transport to that of Lord Reith on the BBC's development during the same interwar period. Urban planner Sir Peter Hall suggested that Pick "had as much influence on London's development in the twentieth century as Haussmann had on that of Paris in the nineteenth", and historian Anthony Sutcliffe compared him to Robert Moses, the city planner responsible for many urban infrastructure projects in New York.
Pick's will was probated at £36,000. In his will he bequeathed a Francis Dodd painting, Ely, to the Tate Gallery. Transport for London and the London Transport Museum hold archives of Pick's business and personal papers.
As part of the Transported by Design programme of activities, on 15 October 2015, after two months of public voting, Londoners voted the work of Frank Pick one of their 10 favourite transport design icons.
Pick is commemorated with a memorial plaque at St Peter's School, York, unveiled in 1953 by Lord Latham, and a blue plaque was erected at his Golders Green home in 1981.
Pick is commemorated with a permanent memorial at Piccadilly Circus station by the BAFTA-winning and Turner Prize-nominated artists Langlands & Bell. The work, entitled Beauty \< Immortality, was commissioned by London Transport Museum and installed by Art on the Underground (Transport for London's official art programme). It was unveiled on 7 November 2016, the 75th anniversary of Pick's death.
## See also
- London Transport (brand)
|
60,827 |
Cleopatra
| 1,172,613,579 |
Queen of Egypt from 51 to 30 BC
|
[
"1st-century BC Egyptian people",
"1st-century BC Pharaohs",
"1st-century BC queens regnant",
"1st-century BC women writers",
"30 BC deaths",
"69 BC births",
"Ancient suicides",
"Cleopatra",
"Deaths due to snake bites",
"Female Shakespearean characters",
"Female pharaohs",
"Hellenistic Cyprus",
"Hellenistic-era people",
"Mistresses of Julius Caesar",
"People of Caesar's civil war",
"Pharaohs of the Ptolemaic dynasty",
"Wives of Mark Antony"
] |
Cleopatra VII Thea Philopator (Koinē Greek: Κλεοπάτρα Θεά Φιλοπάτωρ, lit. Cleopatra "father-loving goddess"; 70/69 BC – 10 August 30 BC) was Queen of the Ptolemaic Kingdom of Egypt from 51 to 30 BC, and its last active ruler. A member of the Ptolemaic dynasty, she was a descendant of its founder Ptolemy I Soter, a Macedonian Greek general and companion of Alexander the Great. After the death of Cleopatra, Egypt became a province of the Roman Empire, marking the end of the last Hellenistic period state in the Mediterranean and of the age that had lasted since the reign of Alexander (336–323 BC). Her first language was Koine Greek and she is the only known Ptolemaic ruler to learn the Egyptian language.
In 58 BC, Cleopatra presumably accompanied her father, Ptolemy XII Auletes, during his exile to Rome after a revolt in Egypt (a Roman client state) allowed his rival daughter Berenice IV to claim his throne. Berenice was killed in 55 BC when Ptolemy returned to Egypt with Roman military assistance. When he died in 51 BC, the joint reign of Cleopatra and her brother Ptolemy XIII began, but a falling-out between them led to open civil war. After losing the 48 BC Battle of Pharsalus in Greece against his rival Julius Caesar (a Roman dictator and consul) in Caesar's civil war, the Roman statesman Pompey fled to Egypt. Pompey had been a political ally of Ptolemy XII, but Ptolemy XIII, at the urging of his court eunuchs, had Pompey ambushed and killed before Caesar arrived and occupied Alexandria. Caesar then attempted to reconcile the rival Ptolemaic siblings, but Ptolemy's chief adviser, Potheinos, viewed Caesar's terms as favoring Cleopatra, so his forces besieged her and Caesar at the palace. Shortly after the siege was lifted by reinforcements, Ptolemy XIII died in the Battle of the Nile; Cleopatra's half-sister Arsinoe IV was eventually exiled to Ephesus for her role in carrying out the siege. Caesar declared Cleopatra and her brother Ptolemy XIV joint rulers but maintained a private affair with Cleopatra that produced a son, Caesarion. Cleopatra traveled to Rome as a client queen in 46 and 44 BC, where she stayed at Caesar's villa. After the assassination of Caesar and (on her orders) Ptolemy XIV in 44 BC, she named Caesarion co-ruler as Ptolemy XV.
In the Liberators' civil war of 43–42 BC, Cleopatra sided with the Roman Second Triumvirate formed by Caesar's grandnephew and heir Octavian, Mark Antony, and Marcus Aemilius Lepidus. After their meeting at Tarsos in 41 BC, the queen had an affair with Antony. He carried out the execution of Arsinoe at her request, and became increasingly reliant on Cleopatra for both funding and military aid during his invasions of the Parthian Empire and the Kingdom of Armenia. The Donations of Alexandria declared their children Alexander Helios, Cleopatra Selene II, and Ptolemy Philadelphus rulers over various erstwhile territories under Antony's triumviral authority. This event, their marriage, and Antony's divorce of Octavian's sister Octavia Minor led to the final war of the Roman Republic. Octavian engaged in a war of propaganda, forced Antony's allies in the Roman Senate to flee Rome in 32 BC, and declared war on Cleopatra. After defeating Antony and Cleopatra's naval fleet at the 31 BC Battle of Actium, Octavian's forces invaded Egypt in 30 BC and defeated Antony, leading to Antony's suicide. When Cleopatra learned that Octavian planned to bring her to his Roman triumphal procession, she killed herself by poisoning, contrary to the popular belief that she was bitten by an asp.
Cleopatra's legacy survives in ancient and modern works of art. Roman historiography and Latin poetry produced a generally critical view of the queen that pervaded later Medieval and Renaissance literature. In the visual arts, her ancient depictions include Roman busts, paintings, and sculptures, cameo carvings and glass, Ptolemaic and Roman coinage, and reliefs. In Renaissance and Baroque art, she was the subject of many works including operas, paintings, poetry, sculptures, and theatrical dramas. She has become a pop culture icon of Egyptomania since the Victorian era, and in modern times, Cleopatra has appeared in the applied and fine arts, burlesque satire, Hollywood films, and brand images for commercial products.
## Etymology
The Latinized form Cleopatra comes from the Ancient Greek Kleopátra (Κλεοπάτρα), meaning "glory of her father", from κλέος (kléos, "glory") and πατήρ (patḗr, "father"). The masculine form would have been written either as Kleópatros (Κλεόπατρος) or Pátroklos (Πάτροκλος). Cleopatra was the name of Alexander the Great's sister, as well as Cleopatra Alcyone, wife of Meleager in Greek mythology. Through the marriage of Ptolemy V Epiphanes and Cleopatra I Syra (a Seleucid princess), the name entered the Ptolemaic dynasty. Cleopatra's adopted title Theā́ Philopátōra (Θεᾱ́ Φιλοπάτωρα) means "goddess who loves her father".
## Biography
### Background
Ptolemaic pharaohs were crowned by the Egyptian high priest of Ptah at Memphis, but resided in the multicultural and largely Greek city of Alexandria, established by Alexander the Great of Macedon. They spoke Greek and governed Egypt as Hellenistic Greek monarchs, refusing to learn the native Egyptian language. In contrast, Cleopatra could speak multiple languages by adulthood and was the first Ptolemaic ruler known to learn the Egyptian language. Plutarch implies that she also spoke Ethiopian, the language of the "Troglodytes", Hebrew (or Aramaic), Arabic, the Syrian language (perhaps Syriac), Median, and Parthian, and she could apparently also speak Latin, although her Roman contemporaries would have preferred to speak with her in her native Koine Greek. Aside from Greek, Egyptian, and Latin, these languages reflected Cleopatra's desire to restore North African and West Asian territories that once belonged to the Ptolemaic Kingdom.
Roman interventionism in Egypt predated the reign of Cleopatra. When Ptolemy IX Lathyros died in late 81 BC, he was succeeded by his daughter Berenice III. With opposition building at the royal court against the idea of a sole reigning female monarch, Berenice III accepted joint rule and marriage with her cousin and stepson Ptolemy XI Alexander II, an arrangement made by the Roman dictator Sulla. Ptolemy XI had his wife killed shortly after their marriage in 80 BC, and was lynched soon after in the resulting riot over the assassination. Ptolemy XI, and perhaps his uncle Ptolemy IX or father Ptolemy X Alexander I, willed the Ptolemaic Kingdom to Rome as collateral for loans, so that the Romans had legal grounds to take over Egypt, their client state, after the assassination of Ptolemy XI. The Romans chose instead to divide the Ptolemaic realm among the illegitimate sons of Ptolemy IX, bestowing Cyprus on Ptolemy of Cyprus and Egypt on Ptolemy XII Auletes.
### Early childhood
Cleopatra VII was born in early 69 BC to the ruling Ptolemaic pharaoh Ptolemy XII and an uncertain mother, presumably Ptolemy XII's wife Cleopatra V Tryphaena (who may have been the same person as Cleopatra VI Tryphaena), the mother of Cleopatra's older sister, Berenice IV Epiphaneia. Cleopatra Tryphaena disappears from official records a few months after the birth of Cleopatra in 69 BC. The three younger children of Ptolemy XII, Cleopatra's sister Arsinoe IV and brothers Ptolemy XIII Theos Philopator and Ptolemy XIV, were born in the absence of his wife. Cleopatra's childhood tutor was Philostratos, from whom she learned the Greek arts of oration and philosophy. During her youth Cleopatra presumably studied at the Musaeum, including the Library of Alexandria.
### Reign and exile of Ptolemy XII
In 65 BC the Roman censor Marcus Licinius Crassus argued before the Roman Senate that Rome should annex Ptolemaic Egypt, but his proposed bill and the similar bill of tribune Servilius Rullus in 63 BC were rejected. Ptolemy XII responded to the threat of possible annexation by offering remuneration and lavish gifts to powerful Roman statesmen, such as Pompey during his campaign against Mithridates VI of Pontus, and eventually Julius Caesar after he became Roman consul in 59 BC. However, Ptolemy XII's profligate behavior bankrupted him, and he was forced to acquire loans from the Roman banker Gaius Rabirius Postumus.
In 58 BC the Romans annexed Cyprus and on accusations of piracy drove Ptolemy of Cyprus, Ptolemy XII's brother, to commit suicide instead of enduring exile to Paphos. Ptolemy XII remained publicly silent on the death of his brother, a decision which, along with ceding traditional Ptolemaic territory to the Romans, damaged his credibility among subjects already enraged by his economic policies. Ptolemy XII was then exiled from Egypt by force, traveling first to Rhodes, then Athens, and finally the villa of triumvir Pompey in the Alban Hills, near Praeneste, Italy.
Ptolemy XII spent nearly a year there on the outskirts of Rome, ostensibly accompanied by his daughter Cleopatra, then about 11. Berenice IV sent an embassy to Rome to advocate for her rule and oppose the reinstatement of her father Ptolemy XII. Ptolemy had assassins kill the leaders of the embassy, an incident that was covered up by his powerful Roman supporters. When the Roman Senate denied Ptolemy XII the offer of an armed escort and provisions for a return to Egypt, he decided to leave Rome in late 57 BC and reside at the Temple of Artemis in Ephesus.
The Roman financiers of Ptolemy XII remained determined to restore him to power. Pompey persuaded Aulus Gabinius, the Roman governor of Syria, to invade Egypt and restore Ptolemy XII, offering him 10,000 talents for the proposed mission. Although it put him at odds with Roman law, Gabinius invaded Egypt in the spring of 55 BC by way of Hasmonean Judea, where Hyrcanus II had Antipater the Idumaean, father of Herod the Great, furnish the Roman-led army with supplies. As a young cavalry officer, Mark Antony was under Gabinius's command. He distinguished himself by preventing Ptolemy XII from massacring the inhabitants of Pelousion, and for rescuing the body of Archelaos, the husband of Berenice IV, after he was killed in battle, ensuring him a proper royal burial. Cleopatra, then 14 years of age, would have traveled with the Roman expedition into Egypt; years later, Antony would profess that he had fallen in love with her at this time.
Gabinius was put on trial in Rome for abusing his authority, for which he was acquitted, but his second trial for accepting bribes led to his exile, from which he was recalled seven years later in 48 BC by Caesar. Crassus replaced him as governor of Syria and extended his provincial command to Egypt, but he was killed by the Parthians at the Battle of Carrhae in 53 BC. Ptolemy XII had Berenice IV and her wealthy supporters executed, seizing their properties. He allowed Gabinius's largely Germanic and Gallic Roman garrison, the Gabiniani, to harass people in the streets of Alexandria and installed his longtime Roman financier Rabirius as his chief financial officer.
Within a year Rabirius was placed under protective custody and sent back to Rome after his life was endangered for draining Egypt of its resources. Despite these problems, Ptolemy XII created a will designating Cleopatra and Ptolemy XIII as his joint heirs, oversaw major construction projects such as the Temple of Edfu and a temple at Dendera, and stabilized the economy. On 31 May 52 BC, Cleopatra was made a regent of Ptolemy XII, as indicated by an inscription in the Temple of Hathor at Dendera. Rabirius was unable to collect the entirety of Ptolemy XII's debt by the time of the latter's death, and so it was passed on to his successors Cleopatra and Ptolemy XIII.
### Accession to the throne
Ptolemy XII died sometime before 22 March 51 BC, when Cleopatra, in her first act as queen, began her voyage to Hermonthis, near Thebes, to install a new sacred Buchis bull, worshiped as an intermediary for the god Montu in the Ancient Egyptian religion. Cleopatra faced several pressing issues and emergencies shortly after taking the throne. These included famine caused by drought and a low level of the annual flooding of the Nile, and lawless behavior instigated by the Gabiniani, the now unemployed and assimilated Roman soldiers left by Gabinius to garrison Egypt. Inheriting her father's debts, Cleopatra also owed the Roman Republic 17.5 million drachmas.
In 50 BC Marcus Calpurnius Bibulus, proconsul of Syria, sent his two eldest sons to Egypt, most likely to negotiate with the Gabiniani and recruit them as soldiers in the desperate defense of Syria against the Parthians. The Gabiniani tortured and murdered these two, perhaps with secret encouragement by rogue senior administrators in Cleopatra's court. Cleopatra sent the Gabiniani culprits to Bibulus as prisoners awaiting his judgment, but he sent them back to Cleopatra and chastised her for interfering in their adjudication, which was the prerogative of the Roman Senate. Bibulus, siding with Pompey in Caesar's Civil War, failed to prevent Caesar from landing a naval fleet in Greece, which ultimately allowed Caesar to reach Egypt in pursuit of Pompey.
By 29 August 51 BC, official documents started listing Cleopatra as the sole ruler, evidence that she had rejected her brother Ptolemy XIII as a co-ruler. She had probably married him, but there is no record of this. The Ptolemaic practice of sibling marriage was introduced by Ptolemy II and his sister Arsinoe II. A long-held royal Egyptian practice, it was loathed by contemporary Greeks. By the reign of Cleopatra, however, it was considered a normal arrangement for Ptolemaic rulers.
Despite Cleopatra's rejection of him, Ptolemy XIII still retained powerful allies, notably the eunuch Potheinos, his childhood tutor, regent, and administrator of his properties. Others involved in the cabal against Cleopatra included Achillas, a prominent military commander, and Theodotus of Chios, another tutor of Ptolemy XIII. Cleopatra seems to have attempted a short-lived alliance with her brother Ptolemy XIV, but by the autumn of 50 BC Ptolemy XIII had the upper hand in their conflict and began signing documents with his name before that of his sister, followed by the establishment of his first regnal date in 49 BC.
### Assassination of Pompey
In the summer of 49 BC, Cleopatra and her forces were still fighting against Ptolemy XIII within Alexandria when Pompey's son Gnaeus Pompeius arrived, seeking military aid on behalf of his father. After returning to Italy from the wars in Gaul and crossing the Rubicon in January of 49 BC, Caesar had forced Pompey and his supporters to flee to Greece. In perhaps their last joint decree, both Cleopatra and Ptolemy XIII agreed to Gnaeus Pompeius's request and sent his father 60 ships and 500 troops, including the Gabiniani, a move that helped erase some of the debt owed to Rome. Losing the fight against her brother, Cleopatra was then forced to flee Alexandria and withdraw to the region of Thebes. By the spring of 48 BC Cleopatra had traveled to Roman Syria with her younger sister, Arsinoe IV, to gather an invasion force that would head to Egypt. She returned with an army, but her advance to Alexandria was blocked by her brother's forces, including some Gabiniani mobilized to fight against her, so she camped outside Pelousion in the eastern Nile Delta.
In Greece, Caesar and Pompey's forces engaged each other at the decisive Battle of Pharsalus on 9 August 48 BC, leading to the destruction of most of Pompey's army and his forced flight to Tyre, Lebanon. Given his close relationship with the Ptolemies, Pompey ultimately decided that Egypt would be his place of refuge, where he could replenish his forces. Ptolemy XIII's advisers, however, feared the idea of Pompey using Egypt as his base in a protracted Roman civil war. In a scheme devised by Theodotus, Pompey arrived by ship near Pelousion after being invited by a written message, only to be ambushed and stabbed to death on 28 September 48 BC. Ptolemy XIII believed he had demonstrated his power and simultaneously defused the situation by having Pompey's head, severed and embalmed, sent to Caesar, who arrived in Alexandria by early October and took up residence at the royal palace. Caesar expressed grief and outrage over the killing of Pompey and called on both Ptolemy XIII and Cleopatra to disband their forces and reconcile with each other.
### Relationship with Julius Caesar
Ptolemy XIII arrived at Alexandria at the head of his army, in clear defiance of Caesar's demand that he disband and leave his army before his arrival. Cleopatra initially sent emissaries to Caesar, but upon allegedly hearing that Caesar was inclined to having affairs with royal women, she came to Alexandria to see him personally. Historian Cassius Dio records that she did so without informing her brother, dressed in an attractive manner, and charmed Caesar with her wit. Plutarch provides an entirely different account that alleges she was bound inside a bed sack to be smuggled into the palace to meet Caesar.
When Ptolemy XIII realized that his sister was in the palace consorting directly with Caesar, he attempted to rouse the populace of Alexandria into a riot, but he was arrested by Caesar, who used his oratorical skills to calm the frenzied crowd. Caesar then brought Cleopatra and Ptolemy XIII before the assembly of Alexandria, where Caesar revealed the written will of Ptolemy XII—previously possessed by Pompey—naming Cleopatra and Ptolemy XIII as his joint heirs. Caesar then attempted to arrange for the other two siblings, Arsinoe IV and Ptolemy XIV, to rule together over Cyprus, thus removing potential rival claimants to the Egyptian throne while also appeasing the Ptolemaic subjects still bitter over the loss of Cyprus to the Romans in 58 BC.
Judging that this agreement favored Cleopatra over Ptolemy XIII and that the latter's army of 20,000, including the Gabiniani, could most likely defeat Caesar's army of 4,000 unsupported troops, Potheinos decided to have Achillas lead their forces to Alexandria to attack both Caesar and Cleopatra. After Caesar managed to execute Potheinos, Arsinoe IV joined forces with Achillas and was declared queen, but soon afterward had her tutor Ganymedes kill Achillas and take his position as commander of her army. Ganymedes then tricked Caesar into requesting the presence of the erstwhile captive Ptolemy XIII as a negotiator, only to have him join the army of Arsinoe IV. The resulting siege of the palace, with Caesar and Cleopatra trapped together inside, lasted into the following year of 47 BC.
Sometime between January and March of 47 BC, Caesar's reinforcements arrived, including those led by Mithridates of Pergamon and Antipater the Idumaean. Ptolemy XIII and Arsinoe IV withdrew their forces to the Nile, where Caesar attacked them. Ptolemy XIII tried to flee by boat, but it capsized, and he drowned. Ganymedes may have been killed in the battle. Theodotus was found years later in Asia, by Marcus Junius Brutus, and executed. Arsinoe IV was forcefully paraded in Caesar's triumph in Rome before being exiled to the Temple of Artemis at Ephesus. Cleopatra was conspicuously absent from these events and resided in the palace, most likely because she had been pregnant with Caesar's child since September 48 BC.
Caesar's term as consul had expired at the end of 48 BC. However, Antony, an officer of his, helped to secure Caesar's appointment as dictator for a year, until October 47 BC, providing Caesar with the legal authority to settle the dynastic dispute in Egypt. Wary of repeating the mistake of Cleopatra's sister Berenice IV in having a female monarch as sole ruler, Caesar appointed Cleopatra's 12-year-old brother, Ptolemy XIV, as joint ruler with the 22-year-old Cleopatra in a nominal sibling marriage, but Cleopatra continued living privately with Caesar. The exact date at which Cyprus was returned to her control is not known, although she had a governor there by 42 BC.
Caesar is alleged to have joined Cleopatra for a cruise of the Nile and sightseeing of Egyptian monuments, although this may be a romantic tale reflecting later well-to-do Roman proclivities and not a real historical event. The historian Suetonius provided considerable details about the voyage, including use of Thalamegos, the pleasure barge constructed by Ptolemy IV, which during his reign measured 90 metres (300 ft) in length and 24 metres (80 ft) in height and was complete with dining rooms, state rooms, holy shrines, and promenades along its two decks, resembling a floating villa. Caesar could have had an interest in the Nile cruise owing to his fascination with geography; he was well-read in the works of Eratosthenes and Pytheas, and perhaps wanted to discover the source of the river, but turned back before reaching Ethiopia.
Caesar departed from Egypt around April 47 BC, allegedly to confront Pharnaces II of Pontus, the son of Mithridates VI of Pontus, who was stirring up trouble for Rome in Anatolia. It is possible that Caesar, married to the prominent Roman woman Calpurnia, also wanted to avoid being seen together with Cleopatra when she had their son. He left three legions in Egypt, later increased to four, under the command of the freedman Rufio, to secure Cleopatra's tenuous position, but also perhaps to keep her activities in check.
Caesarion, Cleopatra's alleged child with Caesar, was born 23 June 47 BC and was originally named "Pharaoh Caesar", as preserved on a stele at the Serapeum of Saqqara. Perhaps owing to his still childless marriage with Calpurnia, Caesar remained publicly silent about Caesarion (but perhaps accepted his parentage in private). Cleopatra, on the other hand, made repeated official declarations about Caesarion's parentage, naming Caesar as the father.
Cleopatra and her nominal joint ruler Ptolemy XIV visited Rome sometime in late 46 BC, presumably without Caesarion, and were given lodging in Caesar's villa within the Horti Caesaris. As with their father Ptolemy XII, Caesar awarded both Cleopatra and Ptolemy XIV the legal status of "friend and ally of the Roman people", in effect client rulers loyal to Rome. Cleopatra's visitors at Caesar's villa across the Tiber included the senator Cicero, who found her arrogant. Sosigenes of Alexandria, one of the members of Cleopatra's court, aided Caesar in the calculations for the new Julian calendar, put into effect 1 January 45 BC. The Temple of Venus Genetrix, established in the Forum of Caesar on 25 September 46 BC, contained a golden statue of Cleopatra (which stood there at least until the 3rd century AD), associating the mother of Caesar's child directly with the goddess Venus, mother of the Romans. The statue also subtly linked the Egyptian goddess Isis with the Roman religion.
Cleopatra's presence in Rome most likely had an effect on the events at the Lupercalia festival a month before Caesar's assassination. Antony attempted to place a royal diadem on Caesar's head, but the latter refused in what was most likely a staged performance, perhaps to gauge the Roman public's mood about accepting Hellenistic-style kingship. Cicero, who was present at the festival, mockingly asked where the diadem came from, an obvious reference to the Ptolemaic queen whom he abhorred. Caesar was assassinated on the Ides of March (15 March 44 BC), but Cleopatra stayed in Rome until about mid-April, in the vain hope of having Caesarion recognized as Caesar's heir. However, Caesar's will named his grandnephew Octavian as the primary heir, and Octavian arrived in Italy around the same time Cleopatra decided to depart for Egypt. A few months later, Cleopatra had Ptolemy XIV killed by poisoning, elevating her son Caesarion as her co-ruler.
### Cleopatra in the Liberators' civil war
Octavian, Antony, and Marcus Aemilius Lepidus formed the Second Triumvirate in 43 BC, in which they were each elected for five-year terms to restore order in the Republic and bring Caesar's assassins to justice. Cleopatra received messages from both Gaius Cassius Longinus, one of Caesar's assassins, and Publius Cornelius Dolabella, proconsul of Syria and Caesarian loyalist, requesting military aid. She decided to write Cassius an excuse that her kingdom faced too many internal problems, while sending the four legions left by Caesar in Egypt to Dolabella. These troops were captured by Cassius in Palestine.
While Serapion, Cleopatra's governor of Cyprus, defected to Cassius and provided him with ships, Cleopatra took her own fleet to Greece to personally assist Octavian and Antony. Her ships were heavily damaged in a Mediterranean storm and she arrived too late to aid in the fighting. By the autumn of 42 BC, Antony had defeated the forces of Caesar's assassins at the Battle of Philippi in Greece, leading to the suicide of Cassius and Brutus.
By the end of 42 BC, Octavian had gained control over much of the western half of the Roman Republic and Antony the eastern half, with Lepidus largely marginalized. In the summer of 41 BC, Antony established his headquarters at Tarsos in Anatolia and summoned Cleopatra there in several letters, which she rebuffed until Antony's envoy Quintus Dellius convinced her to come. The meeting would allow Cleopatra to clear up the misconception that she had supported Cassius during the civil war and address territorial exchanges in the Levant, but Antony also undoubtedly desired to form a personal, romantic relationship with the queen. Cleopatra sailed up the Kydnos River to Tarsos in Thalamegos, hosting Antony and his officers for two nights of lavish banquets on board the ship. Cleopatra managed to clear her name as a supposed supporter of Cassius, arguing she had really attempted to help Dolabella in Syria, and convinced Antony to have her exiled sister, Arsinoe IV, executed at Ephesus. Cleopatra's former rebellious governor of Cyprus was also handed over to her for execution.
### Relationship with Mark Antony
Cleopatra invited Antony to come to Egypt before departing from Tarsos, which led Antony to visit Alexandria by November 41 BC. Antony was well received by the populace of Alexandria, both for his heroic actions in restoring Ptolemy XII to power and coming to Egypt without an occupation force like Caesar had done. In Egypt, Antony continued to enjoy the lavish royal lifestyle he had witnessed aboard Cleopatra's ship docked at Tarsos. He also had his subordinates, such as Publius Ventidius Bassus, drive the Parthians out of Anatolia and Syria.
Cleopatra carefully chose Antony as her partner for producing further heirs, as he was deemed to be the most powerful Roman figure following Caesar's demise. With his powers as a triumvir, Antony also had the broad authority to restore former Ptolemaic lands, which were currently in Roman hands, to Cleopatra. While it is clear that both Cilicia and Cyprus were under Cleopatra's control by 19 November 38 BC, the transfer probably occurred earlier in the winter of 41–40 BC, during her time spent with Antony.
By the spring of 40 BC, Antony left Egypt due to troubles in Syria, where his governor Lucius Decidius Saxa was killed and his army taken by Quintus Labienus, a former officer under Cassius who now served the Parthian Empire. Cleopatra provided Antony with 200 ships for his campaign and as payment for her newly acquired territories. She would not see Antony again until 37 BC, but she maintained correspondence, and evidence suggests she kept a spy in his camp. By the end of 40 BC, Cleopatra had given birth to twins, a boy named Alexander Helios and a girl named Cleopatra Selene II, both of whom Antony acknowledged as his children. Helios (the Sun) and Selene (the Moon) were symbolic of a new era of societal rejuvenation, as well as an indication that Cleopatra hoped Antony would repeat the exploits of Alexander the Great by conquering the Parthians.
Mark Antony's Parthian campaign in the east was disrupted by the events of the Perusine War (41–40 BC), initiated by his ambitious wife Fulvia against Octavian in the hopes of making her husband the undisputed leader of Rome. It has been suggested that Fulvia wanted to cleave Antony away from Cleopatra, but the conflict emerged in Italy even before Cleopatra's meeting with Antony at Tarsos. Fulvia and Antony's brother Lucius Antonius were eventually besieged by Octavian at Perusia (modern Perugia, Italy) and then exiled from Italy, after which Fulvia died at Sicyon in Greece while attempting to reach Antony. Her sudden death led to a reconciliation of Octavian and Antony at Brundisium in Italy in September 40 BC. Although the agreement struck at Brundisium solidified Antony's control of the Roman Republic's territories east of the Ionian Sea, it also stipulated that he concede Italia, Hispania, and Gaul, and marry Octavian's sister Octavia the Younger, a potential rival for Cleopatra.
In December 40 BC Cleopatra received Herod in Alexandria as an unexpected guest and refugee who fled a turbulent situation in Judea. Herod had been installed as a tetrarch there by Antony, but he was soon at odds with Antigonus II Mattathias of the long-established Hasmonean dynasty. The latter had imprisoned Herod's brother and fellow tetrarch Phasael, who was executed while Herod was fleeing toward Cleopatra's court. Cleopatra attempted to provide him with a military assignment, but Herod declined and traveled to Rome, where the triumvirs Octavian and Antony named him king of Judea. This act put Herod on a collision course with Cleopatra, who would desire to reclaim the former Ptolemaic territories that comprised his new Herodian kingdom.
Relations between Antony and Cleopatra perhaps soured when he not only married Octavia, but also sired her two children, Antonia the Elder in 39 BC and Antonia Minor in 36 BC, and moved his headquarters to Athens. However, Cleopatra's position in Egypt was secure. Her rival Herod was occupied with civil war in Judea that required heavy Roman military assistance, but received none from Cleopatra. Since the authority of Antony and Octavian as triumvirs had expired on 1 January 37 BC, Octavia arranged for a meeting at Tarentum, where the triumvirate was officially extended to 33 BC. With two legions granted by Octavian and a thousand soldiers lent by Octavia, Antony traveled to Antioch, where he made preparations for war against the Parthians.
Antony summoned Cleopatra to Antioch to discuss pressing issues, such as Herod's kingdom and financial support for his Parthian campaign. Cleopatra brought her now three-year-old twins to Antioch, where Antony saw them for the first time and where they probably first received their surnames Helios and Selene as part of Antony and Cleopatra's ambitious plans for the future. In order to stabilize the east, Antony not only enlarged Cleopatra's domain, he also established new ruling dynasties and client rulers who would be loyal to him, yet would ultimately outlast him.
In this arrangement Cleopatra gained significant former Ptolemaic territories in the Levant, including nearly all of Phoenicia (Lebanon) minus Tyre and Sidon, which remained in Roman hands. She also received Ptolemais Akko (modern Acre, Israel), a city that was established by Ptolemy II. Given her ancestral relations with the Seleucids, she was granted the region of Coele-Syria along the upper Orontes River. She was even given the region surrounding Jericho in Palestine, but she leased this territory back to Herod. At the expense of the Nabataean king Malichus I (a cousin of Herod), Cleopatra was also given a portion of the Nabataean Kingdom around the Gulf of Aqaba on the Red Sea, including Ailana (modern Aqaba, Jordan). To the west Cleopatra was handed Cyrene along the Libyan coast, as well as Itanos and Olous in Roman Crete. Although still administered by Roman officials, these territories nevertheless enriched her kingdom and led her to declare the inauguration of a new era by double-dating her coinage in 36 BC.
Antony's enlargement of the Ptolemaic realm by relinquishing directly controlled Roman territory was exploited by his rival Octavian, who tapped into the public sentiment in Rome against the empowerment of a foreign queen at the expense of their Republic. Octavian, fostering the narrative that Antony was neglecting his virtuous Roman wife Octavia, granted both her and Livia, his own wife, extraordinary privileges of sacrosanctity. Some 50 years before, Cornelia Africana, daughter of Scipio Africanus, had been the first living Roman woman to have a statue dedicated to her. She was now followed by Octavia and Livia, whose statues were most likely erected in the Forum of Caesar to rival that of Cleopatra, erected by Caesar.
In 36 BC, Cleopatra accompanied Antony to the Euphrates in his journey toward invading the Parthian Empire. She then returned to Egypt, perhaps due to her advanced state of pregnancy. By the summer of 36 BC, she had given birth to Ptolemy Philadelphus, her second son with Antony.
Antony's Parthian campaign in 36 BC turned into a complete debacle for a number of reasons, in particular the betrayal of Artavasdes II of Armenia, who defected to the Parthian side. After losing some 30,000 men, more than Crassus at Carrhae (an indignity he had hoped to avenge), Antony finally arrived at Leukokome near Berytus (modern Beirut, Lebanon) in December, where he engaged in heavy drinking before Cleopatra arrived to provide funds and clothing for his battered troops. Antony desired to avoid the risks involved in returning to Rome, and so he traveled with Cleopatra back to Alexandria to see his newborn son.
### Donations of Alexandria
As Antony prepared for another Parthian expedition in 35 BC, this time aimed at their ally Armenia, Octavia traveled to Athens with 2,000 troops in alleged support of Antony, but most likely in a scheme devised by Octavian to embarrass him for his military losses. Antony received these troops but told Octavia not to stray east of Athens as he and Cleopatra traveled together to Antioch, only to suddenly and inexplicably abandon the military campaign and head back to Alexandria. When Octavia returned to Rome Octavian portrayed his sister as a victim wronged by Antony, although she refused to leave Antony's household. Octavian's confidence grew as he eliminated his rivals in the west, including Sextus Pompeius and even Lepidus, the third member of the triumvirate, who was placed under house arrest after revolting against Octavian in Sicily.
Dellius was sent as Antony's envoy to Artavasdes II in 34 BC to negotiate a potential marriage alliance that would wed the Armenian king's daughter to Alexander Helios, the son of Antony and Cleopatra. When this was declined, Antony marched his army into Armenia, defeated their forces and captured the king and Armenian royal family. Antony then held a military parade in Alexandria as an imitation of a Roman triumph, dressed as Dionysus and riding into the city on a chariot to present the royal prisoners to Cleopatra, who was seated on a golden throne above a silver dais. News of this event was heavily criticized in Rome as a perversion of time-honored Roman rites and rituals to be enjoyed instead by an Egyptian queen.
In an event held at the gymnasium soon after the triumph, Cleopatra dressed as Isis and declared that she was the Queen of Kings with her son Caesarion, King of Kings, while Alexander Helios was declared king of Armenia, Media, and Parthia, and two-year-old Ptolemy Philadelphus was declared king of Syria and Cilicia. Cleopatra Selene II was bestowed with Crete and Cyrene. Antony and Cleopatra may have been wed during this ceremony. Antony sent a report to Rome requesting ratification of these territorial claims, now known as the Donations of Alexandria. Octavian wanted to publicize it for propaganda purposes, but the two consuls, both supporters of Antony, had it censored from public view.
In late 34 BC, Antony and Octavian engaged in a heated war of propaganda that would last for years. Antony claimed that his rival had illegally deposed Lepidus from their triumvirate and barred him from raising troops in Italy, while Octavian accused Antony of unlawfully detaining the king of Armenia, marrying Cleopatra despite still being married to his sister Octavia, and wrongfully claiming Caesarion as the heir of Caesar instead of Octavian. The litany of accusations and gossip associated with this propaganda war have shaped the popular perceptions about Cleopatra from Augustan-period literature through to various media in modern times. Cleopatra was said to have brainwashed Mark Antony with witchcraft and sorcery and was as dangerous as Homer's Helen of Troy in destroying civilization. Pliny the Elder claims in his Natural History that Cleopatra once dissolved a pearl worth tens of millions of sesterces in vinegar just to win a dinner-party bet. The accusation that Antony had stolen books from the Library of Pergamum to restock the Library of Alexandria later turned out to be an admitted fabrication by Gaius Calvisius Sabinus.
A papyrus document dated to February 33 BC, later used to wrap a mummy, contains the signature of Cleopatra, probably written by an official authorized to sign for her. It concerns certain tax exemptions in Egypt granted to either Quintus Caecillius or Publius Canidius Crassus, a former Roman consul and Antony's confidant who would command his land forces at Actium. A subscript in a different handwriting at the bottom of the papyrus reads "make it happen" or "so be it" (Ancient Greek: γινέσθωι, romanized: ginésthōi); this is likely the autograph of the queen, as it was Ptolemaic practice to countersign documents to avoid forgery.
### Battle of Actium
In a speech to the Roman Senate on the first day of his consulship, 1 January 33 BC, Octavian accused Antony of attempting to subvert Roman freedoms and territorial integrity as a slave to his Oriental queen. Before Antony and Octavian's joint imperium expired on 31 December 33 BC, Antony declared Caesarion as the true heir of Caesar in an attempt to undermine Octavian. In 32 BC, the Antonian loyalists Gaius Sosius and Gnaeus Domitius Ahenobarbus became consuls. The former gave a fiery speech condemning Octavian, now a private citizen without public office, and introduced pieces of legislation against him. During the next senatorial session, Octavian entered the Senate house with armed guards and levied his own accusations against the consuls. Intimidated by this act, the consuls and over 200 senators still in support of Antony fled Rome the next day to join the side of Antony.
Antony and Cleopatra traveled together to Ephesus in 32 BC, where she provided him with 200 of the 800 naval ships he was able to acquire. Ahenobarbus, wary of having Octavian's propaganda confirmed to the public, attempted to persuade Antony to have Cleopatra excluded from the campaign against Octavian. Publius Canidius Crassus made the counterargument that Cleopatra was funding the war effort and was a competent monarch. Cleopatra refused Antony's requests that she return to Egypt, judging that by blocking Octavian in Greece she could more easily defend Egypt. Cleopatra's insistence that she be involved in the battle for Greece led to the defections of prominent Romans, such as Ahenobarbus and Lucius Munatius Plancus.
During the spring of 32 BC Antony and Cleopatra traveled to Athens, where she persuaded Antony to send Octavia an official declaration of divorce. This encouraged Plancus to advise Octavian that he should seize Antony's will, entrusted to the Vestal Virgins. Although this was a violation of sacred and legal rights, Octavian forcefully acquired the document from the Temple of Vesta, and it became a useful tool in the propaganda war against Antony and Cleopatra. Octavian highlighted parts of the will: that Caesarion was named heir to Caesar, that the Donations of Alexandria were legal, that Antony should be buried alongside Cleopatra in Egypt instead of Rome, and that Alexandria would be made the new capital of the Roman Republic. In a show of loyalty to Rome, Octavian decided to begin construction of his own mausoleum at the Campus Martius. Octavian's legal standing was also improved by being elected consul in 31 BC. With Antony's will made public, Octavian had his casus belli, and Rome declared war on Cleopatra, not Antony. The legal argument for war was based less on Cleopatra's territorial acquisitions, with former Roman territories ruled by her children with Antony, and more on the fact that she was providing military support to a private citizen now that Antony's triumviral authority had expired.
Antony and Cleopatra had a larger fleet than Octavian, but the crews of Antony and Cleopatra's navy were not all well-trained, some of them perhaps from merchant vessels, whereas Octavian had a fully professional force. Antony wanted to cross the Adriatic Sea and blockade Octavian at either Tarentum or Brundisium, but Cleopatra, concerned primarily with defending Egypt, overrode the decision to attack Italy directly. Antony and Cleopatra set up their winter headquarters at Patrai in Greece, and by the spring of 31 BC they had moved to Actium, on the southern side of the Ambracian Gulf.
Cleopatra and Antony had the support of various allied kings, but Cleopatra had already been in conflict with Herod, and an earthquake in Judea provided him with an excuse to be absent from the campaign. They also lost the support of Malichus I, which would prove to have strategic consequences. Antony and Cleopatra lost several skirmishes against Octavian around Actium during the summer of 31 BC, while defections to Octavian's camp continued, including Antony's long-time companion Dellius and the allied kings Amyntas of Galatia and Deiotaros of Paphlagonia. While some in Antony's camp suggested abandoning the naval conflict to retreat inland, Cleopatra urged for a naval confrontation, to keep Octavian's fleet away from Egypt.
On 2 September 31 BC the naval forces of Octavian, led by Marcus Vipsanius Agrippa, met those of Antony and Cleopatra at the Battle of Actium. Cleopatra, aboard her flagship, the Antonias, commanded 60 ships at the mouth of the Ambracian Gulf, at the rear of the fleet, in what was likely a move by Antony's officers to marginalize her during the battle. Antony had ordered that their ships should have sails on board for a better chance to pursue or flee from the enemy, which Cleopatra, ever concerned about defending Egypt, used to swiftly move through the area of major combat in a strategic withdrawal to the Peloponnese.
Burstein writes that partisan Roman writers would later accuse Cleopatra of cowardly deserting Antony, but their original intention of keeping their sails on board may have been to break the blockade and salvage as much of their fleet as possible. Antony followed Cleopatra and boarded her ship, identified by its distinctive purple sails, as the two escaped the battle and headed for Tainaron. Antony reportedly avoided Cleopatra during this three-day voyage, until her ladies in waiting at Tainaron urged him to speak with her. The Battle of Actium raged on without Cleopatra and Antony until the morning of 3 September, and was followed by massive defections of officers, troops, and allied kings to Octavian's side.
### Downfall and death
While Octavian occupied Athens, Antony and Cleopatra landed at Paraitonion in Egypt. The couple then went their separate ways, Antony to Cyrene to raise more troops and Cleopatra to the harbor at Alexandria in an attempt to mislead the oppositional party and portray the activities in Greece as a victory. She was afraid that news about the outcome of the battle of Actium would lead to a rebellion. It is uncertain whether or not, at this time, she actually executed Artavasdes II and sent his head to his rival, Artavasdes I of Media Atropatene, in an attempt to strike an alliance with him.
Lucius Pinarius, Mark Antony's appointed governor of Cyrene, received word that Octavian had won the Battle of Actium before Antony's messengers could arrive at his court. Pinarius had these messengers executed and then defected to Octavian's side, surrendering to him the four legions under his command that Antony desired to obtain. Antony nearly committed suicide after hearing news of this but was stopped by his staff officers. In Alexandria he built a reclusive cottage on the island of Pharos that he nicknamed the Timoneion, after the philosopher Timon of Athens, who was famous for his cynicism and misanthropy. Herod, who had personally advised Antony after the Battle of Actium that he should betray Cleopatra, traveled to Rhodes to meet Octavian and resign his kingship out of loyalty to Antony. Octavian was impressed by his speech and sense of loyalty, so he allowed him to maintain his position in Judea, further isolating Antony and Cleopatra.
Cleopatra perhaps started to view Antony as a liability by the late summer of 31 BC, when she prepared to leave Egypt to her son Caesarion. Cleopatra planned to relinquish her throne to him, take her fleet from the Mediterranean into the Red Sea, and then set sail to a foreign port, perhaps in India, where she could spend time recuperating. However, these plans were ultimately abandoned when Malichus I, as advised by Octavian's governor of Syria, Quintus Didius, managed to burn Cleopatra's fleet in revenge for his losses in a war with Herod that Cleopatra had largely initiated. Cleopatra had no other option but to stay in Egypt and negotiate with Octavian. Although most likely later pro-Octavian propaganda, it was reported that at this time Cleopatra started testing the strengths of various poisons on prisoners and even her own servants.
Cleopatra had Caesarion enter into the ranks of the ephebi, which, along with reliefs on a stele from Koptos dated 21 September 31 BC, demonstrated that Cleopatra was now grooming her son to become the sole ruler of Egypt. In a show of solidarity, Antony also had Marcus Antonius Antyllus, his son with Fulvia, enter the ephebi at the same time. Separate messages and envoys from Antony and Cleopatra were then sent to Octavian, still stationed at Rhodes, although Octavian seems to have replied only to Cleopatra. Cleopatra requested that her children should inherit Egypt and that Antony should be allowed to live in exile in Egypt, offered Octavian money in the future, and immediately sent him lavish gifts. Octavian sent his diplomat Thyrsos to Cleopatra after she threatened to burn herself and vast amounts of her treasure within a tomb already under construction. Thyrsos advised her to kill Antony so that her life would be spared, but when Antony suspected foul intent, he had this diplomat flogged and sent back to Octavian without a deal.
After lengthy negotiations that ultimately produced no results, Octavian set out to invade Egypt in the spring of 30 BC, stopping at Ptolemais in Phoenicia, where his new ally Herod provided his army with fresh supplies. Octavian moved south and swiftly took Pelousion, while Cornelius Gallus, marching eastward from Cyrene, defeated Antony's forces near Paraitonion. Octavian advanced quickly to Alexandria, but Antony returned and won a small victory over Octavian's tired troops outside the city's hippodrome. However, on 1 August 30 BC, Antony's naval fleet surrendered to Octavian, followed by Antony's cavalry.
Cleopatra hid herself in her tomb with her close attendants and sent a message to Antony that she had committed suicide. In despair, Antony responded to this by stabbing himself in the stomach and taking his own life at age 53. According to Plutarch, he was still dying when brought to Cleopatra at her tomb, telling her he had died honorably and that she could trust Octavian's companion Gaius Proculeius over anyone else in his entourage. It was Proculeius, however, who infiltrated her tomb using a ladder and detained the queen, denying her the ability to burn herself with her treasures. Cleopatra was then allowed to embalm and bury Antony within her tomb before she was escorted to the palace.
Octavian entered Alexandria, occupied the palace, and seized Cleopatra's three youngest children. When she met with Octavian, Cleopatra told him bluntly, "I will not be led in a triumph" (Ancient Greek: οὐ θριαμβεύσομαι, romanized: ou thriambéusomai), according to Livy, a rare recording of her exact words. Octavian promised that he would keep her alive but offered no explanation about his future plans for her kingdom. When a spy informed her that Octavian planned to move her and her children to Rome in three days, she prepared for suicide as she had no intentions of being paraded in a Roman triumph like her sister Arsinoe IV. It is unclear if Cleopatra's suicide on 10 August 30 BC, at age 39, took place within the palace or her tomb. It is said she was accompanied by her servants Eiras and Charmion, who also took their own lives.
Octavian was said to have been angered by this outcome but had Cleopatra buried in royal fashion next to Antony in her tomb. Cleopatra's physician Olympos did not explain her cause of death, although the popular belief is that she allowed an asp or Egyptian cobra to bite and poison her. Plutarch relates this tale, but then suggests an implement (κνῆστις, knêstis, lit. 'spine, cheese-grater') was used to introduce the toxin by scratching, while Dio says that she injected the poison with a needle (βελόνη, belónē), and Strabo argued for an ointment of some kind. No venomous snake was found with her body, but she did have tiny puncture wounds on her arm that could have been caused by a needle.
Cleopatra decided in her last moments to send Caesarion away to Upper Egypt, perhaps with plans to flee to Kushite Nubia, Ethiopia, or India. Caesarion, now Ptolemy XV, would reign for a mere 18 days until executed on the orders of Octavian on 29 August 30 BC, after returning to Alexandria under the false pretense that Octavian would allow him to be king. Octavian was convinced by the advice of the philosopher Arius Didymus that there was room for only one Caesar in the world. With the fall of the Ptolemaic Kingdom, the Roman province of Egypt was established, marking the end of the Hellenistic period. In January of 27 BC Octavian was renamed Augustus ("the revered") and amassed constitutional powers that established him as the first Roman emperor, inaugurating the Principate era of the Roman Empire.
## Cleopatra's kingdom and role as a monarch
Following the tradition of Macedonian rulers, Cleopatra ruled Egypt and other territories such as Cyprus as an absolute monarch, serving as the sole lawgiver of her kingdom. She was the chief religious authority in her realm, presiding over religious ceremonies dedicated to the deities of both the Egyptian and Greek polytheistic faiths. She oversaw the construction of various temples to Egyptian and Greek gods, a synagogue for the Jews in Egypt, and even built the Caesareum of Alexandria, dedicated to the cult worship of her patron and lover Julius Caesar.
Cleopatra was directly involved in the administrative affairs of her domain, tackling crises such as famine by ordering royal granaries to distribute food to the starving populace during a drought at the beginning of her reign. Although the command economy that she managed was more of an ideal than a reality, the government attempted to impose price controls, tariffs, and state monopolies for certain goods, fixed exchange rates for foreign currencies, and rigid laws forcing peasant farmers to stay in their villages during planting and harvesting seasons. Apparent financial troubles led Cleopatra to debase her coinage, which included silver and bronze currencies but no gold coins like those of some of her distant Ptolemaic predecessors.
## Legacy
### Children and successors
After her suicide, Cleopatra's three surviving children, Cleopatra Selene II, Alexander Helios, and Ptolemy Philadelphus, were sent to Rome with Octavian's sister Octavia the Younger, a former wife of their father, as their guardian. Cleopatra Selene II and Alexander Helios were present in the Roman triumph of Octavian in 29 BC. The fates of Alexander Helios and Ptolemy Philadelphus are unknown after this point. Octavia arranged the betrothal of Cleopatra Selene II to Juba II, son of Juba I, whose North African kingdom of Numidia had been turned into a Roman province in 46 BC by Julius Caesar due to Juba I's support of Pompey.
The emperor Augustus installed Juba II and Cleopatra Selene II, after their wedding in 25 BC, as the new rulers of Mauretania, where they transformed the old Carthaginian city of Iol into their new capital, renamed Caesarea Mauretaniae (modern Cherchell, Algeria). Cleopatra Selene II imported many important scholars, artists, and advisers from her mother's royal court in Alexandria to serve her in Caesarea, now permeated with Hellenistic Greek culture. She also named her son Ptolemy of Mauretania, in honor of their Ptolemaic dynastic heritage.
Cleopatra Selene II died around 5 BC, and when Juba II died in 23/24 AD he was succeeded by his son Ptolemy. However, Ptolemy was eventually executed by the Roman emperor Caligula in 40 AD, perhaps under the pretense that Ptolemy had unlawfully minted his own royal coinage and utilized regalia reserved for the Roman emperor. Ptolemy of Mauretania was the last known monarch of the Ptolemaic dynasty, although Queen Zenobia, of the short-lived Palmyrene Empire during the Crisis of the Third Century, would claim descent from Cleopatra. A cult dedicated to Cleopatra still existed as late as 373 AD when Petesenufe, an Egyptian scribe of the book of Isis, explained that he "overlaid the figure of Cleopatra with gold."
### Roman literature and historiography
Although almost 50 ancient works of Roman historiography mention Cleopatra, these often include only terse accounts of the Battle of Actium, her suicide, and Augustan propaganda about her personal deficiencies. Despite not being a biography of Cleopatra, the Life of Antonius written by Plutarch in the 1st century AD provides the most thorough surviving account of Cleopatra's life. Plutarch lived a century after Cleopatra but relied on primary sources, such as Philotas of Amphissa, who had access to the Ptolemaic royal palace, Cleopatra's personal physician named Olympos, and Quintus Dellius, a close confidant of Mark Antony and Cleopatra. Plutarch's work included both the Augustan view of Cleopatra—which became canonical for his period—as well as sources outside of this tradition, such as eyewitness reports.
The Jewish Roman historian Josephus, writing in the 1st century AD, provides valuable information on the life of Cleopatra via her diplomatic relationship with Herod the Great. However, this work relies largely on Herod's memoirs and the biased account of Nicolaus of Damascus, the tutor of Cleopatra's children in Alexandria before he moved to Judea to serve as an adviser and chronicler at Herod's court. The Roman History published by the official and historian Cassius Dio in the early 3rd century AD, while failing to fully comprehend the complexities of the late Hellenistic world, nevertheless provides a continuous history of the era of Cleopatra's reign.
Cleopatra is barely mentioned in De Bello Alexandrino, the memoirs of an unknown staff officer who served under Caesar. The writings of Cicero, who knew her personally, provide an unflattering portrait of Cleopatra. The Augustan-period authors Virgil, Horace, Propertius, and Ovid perpetuated the negative views of Cleopatra approved by the ruling Roman regime, although Virgil established the idea of Cleopatra as a figure of romance and epic melodrama. Horace also viewed Cleopatra's suicide as a positive choice, an idea that found acceptance by the Late Middle Ages with Geoffrey Chaucer.
The historians Strabo, Velleius, Valerius Maximus, Pliny the Elder, and Appian, while not offering accounts as full as Plutarch, Josephus, or Dio, provided some details of her life that had not survived in other historical records. Inscriptions on contemporary Ptolemaic coinage and some Egyptian papyrus documents demonstrate Cleopatra's point of view, but this material is very limited in comparison to Roman literary works. The fragmentary Libyka commissioned by Cleopatra's son-in-law Juba II provides a glimpse at a possible body of historiographic material that supported Cleopatra's perspective.
Cleopatra's gender has perhaps led to her depiction as a minor if not insignificant figure in ancient, medieval, and even modern historiography about ancient Egypt and the Greco-Roman world. For instance, the historian Ronald Syme asserted that she was of little importance to Caesar and that the propaganda of Octavian magnified her importance to an excessive degree. Although the common view of Cleopatra was one of a prolific seductress, she had only two known sexual partners, Caesar and Antony, the two most prominent Romans of the time period, who were most likely to ensure the survival of her dynasty. Plutarch described Cleopatra as having had a stronger personality and charming wit than physical beauty.
### Cultural depictions
#### Depictions in ancient art
##### Statues
Cleopatra was depicted in various ancient works of art, in the Egyptian as well as Hellenistic-Greek and Roman styles. Surviving works include statues, busts, reliefs, and minted coins, as well as ancient carved cameos, such as one depicting Cleopatra and Antony in Hellenistic style, now in the Altes Museum, Berlin. Contemporary images of Cleopatra were produced both in and outside of Ptolemaic Egypt. For instance, there was once a large gilded bronze statue of Cleopatra inside the Temple of Venus Genetrix in Rome, the first time that a living person had their statue placed next to that of a deity in a Roman temple. It was erected there by Caesar and remained in the temple at least until the 3rd century AD, its preservation perhaps owing to Caesar's patronage, although Augustus did not remove or destroy artworks in Alexandria depicting Cleopatra.
A life-sized Roman-style statue of Cleopatra was found near the Tomba di Nerone [it], Rome, along the Via Cassia, and is now housed in the Museo Pio-Clementino, part of the Vatican Museums. Plutarch, in his Life of Antonius, said that the public statues of Antony were torn down by Augustus, but those of Cleopatra were preserved following her death thanks to her friend Archibius paying the emperor 2,000 talents to dissuade him from destroying hers.
Since the 1950s scholars have debated whether or not the Esquiline Venus—discovered in 1874 on the Esquiline Hill in Rome and housed in the Palazzo dei Conservatori of the Capitoline Museums—is a depiction of Cleopatra, based on the statue's hairstyle and facial features, apparent royal diadem worn over the head, and the uraeus Egyptian cobra wrapped around the base. Detractors of this theory argue that the face in this statue is thinner than the face on the Berlin portrait and assert that it was unlikely she would be depicted as the naked goddess Venus (or the Greek Aphrodite). However, she was depicted in an Egyptian statue as the goddess Isis, while some of her coinage depicts her as Venus-Aphrodite. She also dressed as Aphrodite when meeting Antony at Tarsos. The Esquiline Venus is generally thought to be a mid-1st-century AD Roman copy of a 1st-century BC Greek original from the school of Pasiteles.
##### Coinage portraits
Surviving coinage of Cleopatra's reign includes specimens from every regnal year, from 51 to 30 BC. Cleopatra, the only Ptolemaic queen to issue coins on her own behalf, almost certainly inspired her partner Caesar to become the first living Roman to present his portrait on his own coins. Cleopatra was the first foreign queen to have her image appear on Roman currency. Coins dated to the period of her marriage to Antony, which also bear his image, portray the queen as having a very similar aquiline nose and prominent chin as that of her husband. These similar facial features followed an artistic convention that represented the mutually observed harmony of a royal couple.
Her strong, almost masculine facial features in these particular coins are strikingly different from the smoother, softer, and perhaps idealized sculpted images of her in either the Egyptian or Hellenistic styles. Her masculine facial features on minted currency are similar to those of her father, Ptolemy XII Auletes, and perhaps also to those of her Ptolemaic ancestor Arsinoe II (316–260 BC) and even depictions of earlier queens such as Hatshepsut and Nefertiti. It is likely, due to political expediency, that Antony's visage was made to conform not only to hers but also to those of her Macedonian Greek ancestors who founded the Ptolemaic dynasty, to present himself to her subjects as a legitimate member of the royal house.
The inscriptions on the coins are written in Greek, but also in the nominative case of Roman coins rather than the genitive case of Greek coins, in addition to having the letters placed in a circular fashion along the edges of the coin instead of across it horizontally or vertically as was customary for Greek ones. These facets of their coinage represent the synthesis of Roman and Hellenistic culture, and perhaps also a statement to their subjects, however ambiguous to modern scholars, about the superiority of either Antony or Cleopatra over the other. Diana Kleiner argues that Cleopatra, in one of her coins minted with the dual image of her husband Antony, made herself more masculine-looking than other portraits and more like an acceptable Roman client queen than a Hellenistic ruler. Cleopatra had actually achieved this masculine look in coinage predating her affair with Antony, such as the coins struck at the Ascalon mint during her brief period of exile to Syria and the Levant, which Joann Fletcher explains as her attempt to appear like her father and as a legitimate successor to a male Ptolemaic ruler.
Various coins, such as a silver tetradrachm minted sometime after Cleopatra's marriage with Antony in 37 BC, depict her wearing a royal diadem and a 'melon' hairstyle. The combination of this hairstyle with a diadem is also featured in two surviving sculpted marble heads. This hairstyle, with hair braided back into a bun, is the same as that worn by her Ptolemaic ancestors Arsinoe II and Berenice II in their own coinage. After her visit to Rome in 46–44 BC it became fashionable for Roman women to adopt it as one of their hairstyles, but it was abandoned for a more modest, austere look during the conservative rule of Augustus.
##### Greco-Roman busts and heads
Of the surviving Greco-Roman-style busts and heads of Cleopatra, the sculpture known as the "Berlin Cleopatra", located in the Antikensammlung Berlin collection at the Altes Museum, possesses her full nose, whereas the head known as the "Vatican Cleopatra", located in the Vatican Museums, is damaged with a missing nose. Both the Berlin Cleopatra and Vatican Cleopatra have royal diadems, similar facial features, and perhaps once resembled the face of her bronze statue housed in the Temple of Venus Genetrix.
Both heads are dated to the mid-1st century BC and were found in Roman villas along the Via Appia in Italy, the Vatican Cleopatra having been unearthed in the Villa of the Quintilii. Francisco Pina Polo writes that Cleopatra's coinage presents her image with certainty and asserts that the sculpted portrait of the Berlin head is confirmed as having a similar profile, with her hair pulled back into a bun, a diadem, and a hooked nose.
A third sculpted portrait of Cleopatra accepted by scholars as being authentic survives at the Archaeological Museum of Cherchell, Algeria. This portrait features the royal diadem and similar facial features as the Berlin and Vatican heads, but has a more unique hairstyle and may actually depict Cleopatra Selene II, daughter of Cleopatra. A possible Parian-marble sculpture of Cleopatra wearing a vulture headdress in Egyptian style is located at the Capitoline Museums. Discovered near a sanctuary of Isis in Rome and dated to the 1st century BC, it is either Roman or Hellenistic-Egyptian in origin.
Other possible sculpted depictions of Cleopatra include one in the British Museum, London, made of limestone, which perhaps only depicts a woman in her entourage during her trip to Rome. The woman in this portrait has facial features similar to others (including the pronounced aquiline nose), but lacks a royal diadem and sports a different hairstyle. However, the British Museum head, once belonging to a full statue, could potentially represent Cleopatra at a different stage in her life and may also betray an effort by Cleopatra to discard the use of royal insignia (i.e. the diadem) to make herself more appealing to the citizens of Republican Rome. Duane W. Roller speculates that the British Museum head, along with those in the Egyptian Museum, Cairo, the Capitoline Museums, and in the private collection of Maurice Nahmen, while having similar facial features and hairstyles as the Berlin portrait but lacking a royal diadem, most likely represent members of the royal court or even Roman women imitating Cleopatra's popular hairstyle.
##### Paintings
In the House of Marcus Fabius Rufus at Pompeii, Italy, a mid-1st century BC Second Style wall painting of the goddess Venus holding a cupid near massive temple doors is most likely a depiction of Cleopatra as Venus Genetrix with her son Caesarion. The commission of the painting most likely coincides with the erection of the Temple of Venus Genetrix in the Forum of Caesar in September 46 BC, where Caesar had a gilded statue erected depicting Cleopatra. This statue likely formed the basis of her depictions in both sculpted art as well as this painting at Pompeii.
The woman in the painting wears a royal diadem over her head and is strikingly similar in appearance to the Vatican Cleopatra, which bears possible marks on the marble of its left cheek where a cupid's arm may have been torn off. The room with the painting was walled off by its owner, perhaps in reaction to the execution of Caesarion in 30 BC by order of Octavian, when public depictions of Cleopatra's son would have been unfavorable with the new Roman regime.
Behind her golden diadem, crowned with a red jewel, is a translucent veil with crinkles that suggest the "melon" hairstyle favored by the queen. Her ivory-white skin, round face, long aquiline nose, and large round eyes were features common in both Roman and Ptolemaic depictions of deities. Roller affirms that "there seems little doubt that this is a depiction of Cleopatra and Caesarion before the doors of the Temple of Venus in the Forum Julium and, as such, it becomes the only extant contemporary painting of the queen."
Another painting from Pompeii, dated to the early 1st century AD and located in the House of Giuseppe II, contains a possible depiction of Cleopatra with her son Caesarion, both wearing royal diadems while she reclines and consumes poison in an act of suicide. The painting was originally thought to depict the Carthaginian noblewoman Sophonisba, who toward the end of the Second Punic War (218–201 BC) drank poison and committed suicide at the behest of her lover Masinissa, King of Numidia. Arguments in favor of it depicting Cleopatra include the strong connection of her house with that of the Numidian royal family, Masinissa and Ptolemy VIII Physcon having been associates, and Cleopatra's own daughter marrying the Numidian prince Juba II.
Sophonisba was also a more obscure figure when the painting was made, while Cleopatra's suicide was far more famous. An asp is absent from the painting, but many Romans held the view that she received poison by means other than a venomous snakebite. A set of double doors on the rear wall of the painting, positioned very high above the people in it, suggests the described layout of Cleopatra's tomb in Alexandria. A male servant holds the mouth of an artificial Egyptian crocodile (possibly an elaborate tray handle), while another man standing by is dressed as a Roman.
In 1818 a now lost encaustic painting was discovered in the Temple of Serapis at Hadrian's Villa, near Tivoli, Lazio, Italy, that depicted Cleopatra committing suicide with an asp biting her bare chest. A chemical analysis performed in 1822 confirmed that the medium for the painting was composed of one-third wax and two-thirds resin. The thickness of the painting over Cleopatra's bare flesh and her drapery were reportedly similar to the paintings of the Fayum mummy portraits. A steel engraving published by John Sartain in 1885 depicting the painting as described in the archaeological report shows Cleopatra wearing authentic clothing and jewelry of Egypt in the late Hellenistic period, as well as the radiant crown of the Ptolemaic rulers, as seen in their portraits on various coins minted during their respective reigns. After Cleopatra's suicide, Octavian commissioned a painting to be made depicting her being bitten by a snake, parading this image in her stead during his triumphal procession in Rome. The portrait painting of Cleopatra's death was perhaps among the great number of artworks and treasures taken from Rome by Emperor Hadrian to decorate his private villa, where it was found in an Egyptian temple.
A Roman panel painting from Herculaneum, Italy, dated to the 1st century AD possibly depicts Cleopatra. In it she wears a royal diadem, red or reddish-brown hair pulled back into a bun, pearl-studded hairpins, and earrings with ball-shaped pendants, the white skin of her face and neck set against a stark black background. Her hair and facial features are similar to those in the sculpted Berlin and Vatican portraits as well as her coinage. A highly similar painted bust of a woman with a blue headband in the House of the Orchard at Pompeii features Egyptian-style imagery, such as a Greek-style sphinx, and may have been created by the same artist.
##### Portland Vase
The Portland Vase, a Roman cameo glass vase dated to the Augustan period and now in the British Museum, includes a possible depiction of Cleopatra with Antony. In this interpretation, Cleopatra can be seen grasping Antony and drawing him toward her while a serpent (i.e. the asp) rises between her legs, Eros floats above, and Anton, the alleged ancestor of the Antonian family, looks on in despair as his descendant Antony is led to his doom. The other side of the vase perhaps contains a scene of Octavia, abandoned by her husband Antony but watched over by her brother, the emperor Augustus. The vase would thus have been created no earlier than 35 BC, when Antony sent his wife Octavia back to Italy and stayed with Cleopatra in Alexandria.
##### Native Egyptian art
The Bust of Cleopatra in the Royal Ontario Museum represents a bust of Cleopatra in the Egyptian style. Dated to the mid-1st century BC, it is perhaps the earliest depiction of Cleopatra as both a goddess and ruling pharaoh of Egypt. The sculpture also has pronounced eyes that share similarities with Roman copies of Ptolemaic sculpted works of art. The Dendera Temple complex, near Dendera, Egypt, contains Egyptian-style carved relief images along the exterior walls of the Temple of Hathor depicting Cleopatra and her young son Caesarion as a grown adult and ruling pharaoh making offerings to the gods. Augustus had his name inscribed there following the death of Cleopatra.
A large Ptolemaic black basalt statue measuring 104 centimetres (41 in) in height, now in the Hermitage Museum, Saint Petersburg, is thought to represent Arsinoe II, wife of Ptolemy II, but recent analysis has indicated that it could depict her descendant Cleopatra due to the three uraei adorning her headdress, an increase from the two used by Arsinoe II to symbolize her rule over Lower and Upper Egypt. The woman in the basalt statue also holds a divided, double cornucopia (dikeras), which can be seen on coins of both Arsinoe II and Cleopatra. In his Kleopatra und die Caesaren (2006), Bernard Andreae contends that this basalt statue, like other idealized Egyptian portraits of the queen, does not contain realistic facial features and hence adds little to the knowledge of her appearance. Adrian Goldsworthy writes that, despite these representations in the traditional Egyptian style, Cleopatra would have dressed as a native only "perhaps for certain rites" and instead would usually dress as a Greek monarch, which would include the Greek headband seen in her Greco-Roman busts.
#### Medieval and Early Modern reception
In modern times Cleopatra has become an icon of popular culture, a reputation shaped by theatrical representations dating back to the Renaissance as well as paintings and films. This material largely surpasses the scope and size of existent historiographic literature about her from classical antiquity and has made a greater impact on the general public's view of Cleopatra than the latter. The 14th-century English poet Geoffrey Chaucer, in The Legend of Good Women, contextualized Cleopatra for the Christian world of the Middle Ages. His depiction of Cleopatra and Antony, her shining knight engaged in courtly love, has been interpreted in modern times as being either playful or misogynistic satire.
Chaucer highlighted Cleopatra's relationships with only two men as hardly the life of a seductress and wrote his works partly in reaction to the negative depiction of Cleopatra in De Mulieribus Claris and De Casibus Virorum Illustrium, Latin works by the 14th-century Italian poet Giovanni Boccaccio. The Renaissance humanist Bernardino Cacciante, in his 1504 Libretto apologetico delle donne, was the first Italian to defend the reputation of Cleopatra and criticize the perceived moralizing and misogyny in Boccaccio's works. Works of Islamic historiography written in Arabic covered the reign of Cleopatra, such as the 10th-century Meadows of Gold by Al-Masudi, although his work erroneously claimed that Octavian died soon after Cleopatra's suicide.
Cleopatra appeared in miniatures for illuminated manuscripts, such as a depiction of her and Antony lying in a Gothic-style tomb by the Boucicaut Master in 1409. In the visual arts, the sculpted depiction of Cleopatra as a free-standing nude figure committing suicide began with the 16th-century sculptors Bartolommeo Bandinelli and Alessandro Vittoria. Early prints depicting Cleopatra include designs by the Renaissance artists Raphael and Michelangelo, as well as 15th-century woodcuts in illustrated editions of Boccaccio's works.
In the performing arts, the death of Elizabeth I of England in 1603, and the German publication in 1606 of alleged letters of Cleopatra, inspired Samuel Daniel to alter and republish his 1594 play Cleopatra in 1607. He was followed by William Shakespeare, whose Antony and Cleopatra, largely based on Plutarch, was first performed in 1608 and provided a somewhat salacious view of Cleopatra in stark contrast to England's own Virgin Queen. Cleopatra was also featured in operas, such as George Frideric Handel's 1724 Giulio Cesare in Egitto, which portrayed the love affair of Caesar and Cleopatra; Domenico Cimarosa wrote Cleopatra on a similar subject in 1789.
#### Modern depictions and brand imaging
In Victorian Britain, Cleopatra was highly associated with many aspects of ancient Egyptian culture and her image was used to market various household products, including oil lamps, lithographs, postcards and cigarettes. Fictional novels such as H. Rider Haggard's Cleopatra (1889) and Théophile Gautier's One of Cleopatra's Nights (1838) depicted the queen as a sensual and mystic Easterner, while the Egyptologist Georg Ebers's Cleopatra (1894) was more grounded in historical accuracy. The French dramatist Victorien Sardou and Irish playwright George Bernard Shaw produced plays about Cleopatra, while burlesque shows such as F. C. Burnand's Antony and Cleopatra offered satirical depictions of the queen connecting her and the environment she lived in with the modern age.
Shakespeare's Antony and Cleopatra was considered canonical by the Victorian era. Its popularity led to the perception that the 1885 painting by Lawrence Alma-Tadema depicted the meeting of Antony and Cleopatra on her pleasure barge in Tarsus, although Alma-Tadema revealed in a private letter that it depicts a subsequent meeting of theirs in Alexandria. Also based on Shakespeare's play was Samuel Barber's opera Antony and Cleopatra (1966), commissioned for the opening of the Metropolitan Opera House. In his unfinished 1825 short story The Egyptian Nights, Alexander Pushkin popularized the claims of the 4th-century Roman historian Aurelius Victor, previously largely ignored, that Cleopatra had prostituted herself to men who paid for sex with their lives. Cleopatra also became appreciated outside the Western world and Middle East, as the Qing-dynasty Chinese scholar Yan Fu wrote an extensive biography of her.
Georges Méliès's Robbing Cleopatra's Tomb (French: Cléopâtre), an 1899 French silent horror film, was the first film to depict the character of Cleopatra. Hollywood films of the 20th century were influenced by earlier Victorian media, which helped to shape the character of Cleopatra played by Theda Bara in Cleopatra (1917), Claudette Colbert in Cleopatra (1934), and Elizabeth Taylor in Cleopatra (1963). In addition to her portrayal as a "vampire" queen, Bara's Cleopatra also incorporated tropes familiar from 19th-century Orientalist painting, such as despotic behavior, mixed with dangerous and overt female sexuality. Colbert's character of Cleopatra served as a glamour model for selling Egyptian-themed products in department stores in the 1930s, targeting female moviegoers. In preparation for the film starring Taylor as Cleopatra, women's magazines of the early 1960s advertised how to use makeup, clothes, jewelry, and hairstyles to achieve the "Egyptian" look similar to the queens Cleopatra and Nefertiti. By the end of the 20th century there were forty-three films, two hundred plays and novels, forty-five operas, and five ballets associated with Cleopatra.
### Written works
Whereas myths about Cleopatra persist in popular media, important aspects of her career go largely unnoticed, such as her command of naval forces and administrative acts. Publications on ancient Greek medicine attributed to her are likely to be the work of a physician of the same name writing in the late first century AD. Ingrid D. Rowland highlights that the "Berenice called Cleopatra" cited by the 3rd- or 4th-century female Roman physician Metrodora was likely conflated by medieval scholars as referring to Cleopatra. Only fragments exist of these medical and cosmetic writings, such as those preserved by Galen, including remedies for hair disease, baldness, and dandruff, along with a list of weights and measures for pharmacological purposes. Aëtius of Amida attributed a recipe for perfumed soap to Cleopatra, while Paul of Aegina preserved alleged instructions of hers for dyeing and curling hair.
## Ancestry
Cleopatra belonged to the Macedonian Greek dynasty of the Ptolemies, their European origins tracing back to northern Greece. Through her father, Ptolemy XII Auletes, she was a descendant of two prominent companions of Alexander the Great of Macedon: the general Ptolemy I Soter, founder of the Ptolemaic Kingdom of Egypt, and Seleucus I Nicator, the Macedonian Greek founder of the Seleucid Empire of West Asia. While Cleopatra's paternal line can be traced, the identity of her mother is uncertain. She was presumably the daughter of Cleopatra V Tryphaena, the sister-wife of Ptolemy XII who had previously given birth to their daughter Berenice IV.
Cleopatra I Syra was the only member of the Ptolemaic dynasty known for certain to have introduced some non-Greek ancestry. Her mother Laodice III was a daughter born to King Mithridates II of Pontus, a Persian of the Mithridatic dynasty, and his wife Laodice who had a mixed Greek-Persian heritage. Cleopatra I Syra's father Antiochus III the Great was a descendant of Queen Apama, the Sogdian Iranian wife of Seleucus I Nicator. It is generally believed that the Ptolemies did not intermarry with native Egyptians. Michael Grant asserts that there is only one known Egyptian mistress of a Ptolemy and no known Egyptian wife of a Ptolemy, further arguing that Cleopatra probably did not have any Egyptian ancestry and "would have described herself as Greek."
Stacy Schiff writes that Cleopatra was a Macedonian Greek with some Persian ancestry, arguing that it was rare for the Ptolemies to have an Egyptian mistress. Duane W. Roller speculates that Cleopatra could have been the daughter of a theoretical half-Macedonian-Greek, half-Egyptian woman from Memphis in northern Egypt belonging to a family of priests dedicated to Ptah (a hypothesis not generally accepted in scholarship), but contends that whatever Cleopatra's ancestry, she valued her Greek Ptolemaic heritage the most. Ernle Bradford writes that Cleopatra challenged Rome not as an Egyptian woman "but as a civilized Greek."
Claims that Cleopatra was an illegitimate child never appeared in Roman propaganda against her. Strabo was the only ancient historian who claimed that Ptolemy XII's children born after Berenice IV, including Cleopatra, were illegitimate. Cleopatra V (or VI) was expelled from the court of Ptolemy XII in late 69 BC, a few months after the birth of Cleopatra, while Ptolemy XII's three younger children were all born during the absence of his wife. The high degree of inbreeding among the Ptolemies is also illustrated by Cleopatra's immediate ancestry, of which a reconstruction is shown below.
The family tree given below also lists Cleopatra V as a daughter of Ptolemy X Alexander I and Berenice III. This would make her a cousin of her husband, Ptolemy XII, but she could have been a daughter of Ptolemy IX Lathyros, which would have made her a sister-wife of Ptolemy XII instead. The confused accounts in ancient primary sources have also led scholars to number Ptolemy XII's wife as either Cleopatra V or Cleopatra VI; the latter may have actually been a daughter of Ptolemy XII. Fletcher and John Whitehorne assert that this is a possible indication Cleopatra V had died in 69 BC rather than reappearing as a co-ruler with Berenice IV in 58 BC (during Ptolemy XII's exile in Rome).
## See also
- List of female hereditary monarchs
|
43,418 |
Julian of Norwich
| 1,173,466,425 |
English theologian and anchoress (1343 – after 1416)
|
[
"1340s births",
"14th-century Christian mystics",
"14th-century English women writers",
"15th-century English women writers",
"15th-century English writers",
"15th-century deaths",
"English Catholic mystics",
"English Roman Catholic writers",
"English religious writers",
"Medieval English theologians",
"Middle English literature",
"Women mystics",
"Women religious writers",
"Writers from Norwich",
"Year of birth uncertain",
"Year of death unknown"
] |
Julian of Norwich (c. 1343 – after 1416), also known as Juliana of Norwich, the Lady Julian, Dame Julian or Mother Julian, was an English anchoress of the Middle Ages. Her writings, now known as Revelations of Divine Love, are the earliest surviving English language works by a woman, although it is possible that some anonymous works may have had female authors. They are also the only surviving English language works by an anchoress.
Julian lived in the English city of Norwich, an important centre for commerce that also had a vibrant religious life. During her lifetime, the city suffered the devastating effects of the Black Death of 1348–1350, the Peasants' Revolt (which affected large parts of England in 1381), and the suppression of the Lollards. In 1373, aged 30 and so seriously ill she thought she was on her deathbed, Julian received a series of visions or shewings of the Passion of Christ. She recovered from her illness and wrote two versions of her experiences, the earlier one being completed soon after her recovery—a much longer version, today known as the Long Text, was written many years later.
Julian lived in permanent seclusion as an anchoress in her cell, which was attached to St Julian's Church, Norwich. Four wills are known in which sums were bequeathed to a Norwich anchoress named Julian, and an account by the celebrated mystic Margery Kempe exists which provides evidence of counsel Kempe was given by the anchoress.
Details of Julian's family, education, or of her life before becoming an anchoress are not known; it is unclear whether her actual name was Julian. Preferring to write anonymously, and seeking isolation from the world, she was nevertheless influential in her lifetime. While her writings were carefully preserved, the Reformation prevented their publication in print. The Long Text was first published in 1670 by the Benedictine monk Serenus de Cressy, reissued by George Hargreaves Parker in 1843, and published in a modernised version in 1864. Julian's writings emerged from obscurity in 1901 when a manuscript in the British Museum was transcribed and published with notes by Grace Warrack; many translations have been made since. Julian is today considered to be an important Christian mystic and theologian.
## Background
The English city of Norwich, where Julian probably lived all her life, was second in importance to London during the 13th and 14th centuries, and the centre of the country's primary region for agriculture and trade. During her lifetime, the Black Death reached Norwich; the disease may have killed over half the population of the city, and returned in subsequent outbreaks up to 1387. Julian was alive during the Peasants' Revolt of 1381, when the city was overwhelmed by rebel forces led by Geoffrey Litster. Henry le Despenser, the Bishop of Norwich, executed Litster after the peasant army was defeated at the Battle of North Walsham. Despenser zealously opposed the Lollards, who advocated reform of the Church, and some of them were burnt at the stake at Lollards Pit, just outside the city.
Norwich may have been one of the most religious cities in Europe at that time, with its cathedral, friaries, churches and recluses' cells dominating both the landscape and the lives of its citizens. On the eastern side of the city was the cathedral priory (founded in 1096), the Benedictine Hospital of St Paul, the Carmelite friary, St Giles's Hospital, and the Greyfriars monastery. To the south, the priory at Carrow was located just beyond the city walls. Its income was mainly generated from "livings" acquired from the renting of its assets, which included the Norwich churches of St Julian, All Saints Timberhill, St Edward Conisford and St Catherine Newgate, all now lost apart from St Julian's. The churches with anchorite cells enhanced the reputation of the priory, as they attracted endowments from across society.
## Life
### Sources for Julian's life
Little of Julian's life is known. The few scant comments she provided about herself are contained in her writings, later published in a book commonly known as Revelations of Divine Love, a title first used in 1670. The earliest surviving copy of a manuscript of Julian's, made by a scribe in the 1470s, acknowledges her as the author of the work.
The earliest known references to Julian come from four wills, in which she is described as being an anchoress. The wills were all made by individuals who lived in Norwich. Roger Reed, the rector of St Michael Coslany, Norwich, whose will of 20 March 1394 provides the earliest record of Julian's existence, made a bequest of 12 shillings to be paid to "Julian anakorite". Thomas Edmund, a Chantry priest from Aylsham, stipulated in his will of 19 May 1404 that 12 pennies be given to "Julian, anchoress of the church of St Julian, Conisford" and 8 pennies to "Sarah, living with her". John Plumpton from Norwich gave 40 pennies to "the anchoress in the church of St Julian's, Conisford, and a shilling each to her maid and her former maid Alice" in his will dated 24 November 1415. The fourth person to mention Julian was Isabelle, Countess of Suffolk (the second wife of William de Ufford, 2nd Earl of Suffolk), who made a bequest of 20 shillings to "Julian reclus a Norwich" in her will dated 26 September 1416. As a bequest to an unnamed anchorite at St Julian's was made in 1429, there is a possibility Julian was alive at this time.
Julian was known as a spiritual authority within her community, where she also served as an adviser. In around 1414, when she was in her seventies, she was visited by the English mystic Margery Kempe. The Book of Margery Kempe, which is possibly the first autobiography to be written in English, mentions that Kempe travelled to Norwich to obtain spiritual advice from Julian, saying she was "bidden by Our Lord" to go to "Dame Jelyan ... for the anchoress was expert in" divine revelations, "and good counsel could give". Kempe never referred to Julian as an author, although she was familiar with the works of other spiritual writers, and mentioned them.
### Visions
Julian wrote in Revelations of Divine Love that she became seriously ill at the age of 30. She could have been an anchoress when she fell ill, although it is possible she was a lay person living at home, as she was visited by her mother and other people, and the rules of enclosure for an anchoress would not normally have allowed outsiders such access. On 8 May 1373 a curate administered the last rites of the Church to her, in anticipation of her death. As he held a crucifix above the foot of her bed, she began to lose her sight and feel physically numb, but gazing on the crucifix she saw the figure of Jesus begin to bleed. Over the next several hours, she had a series of 15 visions of Jesus, and a 16th the following night.
Julian completely recovered from her illness on 13 May; there is general agreement that she wrote about her shewings shortly after she experienced them. Her original manuscript no longer exists, but a copy, now known as the Short Text, survived. Decades later, perhaps in the early 1390s, she began a theological exploration of the meaning of her visions, and produced writings now known as The Long Text. This second work seems to have gone through many revisions before it was finished, perhaps in the 1410s or 1420s. Julian's revelations seem to be the first important example of a vision by an Englishwoman for 200 years, in contrast with the Continent, where "a golden age of women's mysticism" occurred during the 13th and 14th centuries.
### Personal life
The few autobiographical details Julian included in the Short Text, including her gender, were suppressed when she wrote her longer text later in life. Historians are not even sure of her actual name. It is generally thought to be taken from the church in Norwich to which her cell was attached, but Julian was also used in its own right as a girl's name in the Middle Ages, and so could have been her Christian name.
Julian's writings indicate that she was born in 1343 or late 1342, and died after 1416. She was six when the Black Death arrived in Norwich. It has been speculated that she was educated as a young girl by the Benedictine nuns of Carrow Abbey, as a school for girls existed there during her childhood. There is no written evidence that she was ever a nun at Carrow.
According to several commentators, including Santha Bhattacharji in the Oxford Dictionary of National Biography, Julian's discussion of the maternal nature of God suggests that she knew of motherhood from her own experience of bringing up children. As plague epidemics were rampant during the 14th century, it has been suggested that Julian may have lost her own family as a result. By becoming an anchoress she would have been kept in quarantine away from the rest of the population of Norwich. However, nothing in Julian's writings provides any indication of the plagues, religious conflict, or civil insurrection that occurred in the city during her lifetime. Kenneth Leech and Sister Benedicta Ward, the joint authors of Julian Reconsidered (1988), concluded that she was a young widowed mother and never a nun. They based their opinion on a dearth of references about her occupation in life and a lack of evidence to connect her with Carrow Abbey, which would have honoured her and buried her in the grounds had she been strongly connected with the priory.
### Life as an anchoress
Julian was an anchoress from at least the 1390s. Living in her cell, she would have played an important part within her community, devoting herself to a life of prayer to complement the clergy in their primary function as protectors of souls. Her solitary life would have begun after the completion of an onerous selection process. An important church ceremony would have taken place at St Julian's Church, in the presence of the bishop. During the ceremony, psalms from the Office of the Dead would have been sung for Julian (as if it were her funeral), and at some point she would have been led to her cell door and into the room beyond. The door would afterwards have been sealed up, and she would have remained in her cell for the rest of her life.
Once her life of seclusion had begun, Julian would have had to follow the strict rules laid down for anchoresses. Two important sources of information about the life of such women have survived. De institutione inclusarum was written in Latin by Ælred of Rievaulx in around 1162, and the Ancrene Riwle was written in Middle English in around 1200. Originally made for three sisters, the Ancrene Riwle became in time a manual for all female recluses. The work regained its former popularity during the mystical movement of the 14th century. It may have been available to Julian to read and become familiar with, being a book written in a language she could read. The book stipulated that anchoresses should live in confined isolation, in poverty, and under a vow of chastity. The popular image of Julian living with her cat for company stems from the regulations set out in the Ancrene Riwle.
As an anchoress living in the heart of an urban environment, Julian would not have been entirely secluded. She would have enjoyed the financial support of the more prosperous members of the local community, as well as the general affection of the population. She would have in turn provided prayers and given advice to visitors, serving as an example of devout holiness. According to one edition of the Cambridge Medieval History, it is possible that she met the English mystic Walter Hilton, who died when Julian was in her fifties, and who may have influenced her writings in a small way.
## Revelations of Divine Love
Both the Long Text and Short Text of Julian's Revelations of Divine Love contain an account of each of her revelations. Her writings are unique, as they are the earliest surviving English language works by a woman, although it is possible that some anonymous works may have had female authors. They are also the only surviving writings by an English anchoress. The Long Text consists of 86 chapters and about 63,500 words, and is about six times longer than the Short Text.
In 14th-century England, when women were generally barred from high status positions, their knowledge of Latin would have been limited, and it is more likely that they read and wrote in English. The historian Janina Ramirez has suggested that by choosing to write in her vernacular language, a precedent set by other medieval writers, Julian was "attempting to express the inexpressible" in the best way possible. Nothing Julian wrote was mentioned in any bequests, written for a specific readership, or known to have influenced other medieval authors, and almost no references were made to her writings from the time they were written until the beginning of the 20th century.
Julian's writings were largely unknown until 1670, when they were published under the title XVI Revelations of Divine Love, shewed to a devout servant of Our Lord, called Mother Juliana, an Anchorete of Norwich: Who lived in the Dayes of King Edward the Third by Serenus de Cressy, a confessor for the English nuns at Cambrai. Cressy based his book on the Long Text, probably written by Julian in the 1410s or 1420s. Three manuscript copies of the Long Text have survived. One copy of the complete Long Text, known as the Paris Manuscript, resides in the Bibliothèque nationale de France in Paris, and two other manuscripts are now in the British Library in London. One of the manuscripts was perhaps copied out by Dame Clementina Cary, who founded the English Benedictine monastery in Paris. Cressy's edition was reprinted in 1843 and 1864, and again in 1902.
A new version of the book was produced by Henry Collins in 1877. It became still better known after the publication of Grace Warrack's 1901 edition, which included modernised language, as well as, according to the author Georgia Ronan Crampton, a "sympathetic informed introduction". The book introduced most early 20th century readers to Julian's writings; according to the historian Henrietta Leyser, Julian was "beloved in the 20th century by theologians and poets alike".
Julian's shorter work, now known as the Short Text, was probably written not long after her visions in May 1373. As with the Long Text, the original manuscript was lost, but not before at least one copy was made by a scribe. It was in the possession of an English Catholic family at one point. The copy was seen and described by the antiquarian Francis Blomefield in 1745. After disappearing from view for 150 years, it was found in 1910, in a collection of contemplative medieval texts bought by the British Museum, and was published by the Reverend Dundas Harford in 1911. Now part of British Library Add MS 37790, the manuscript—with Julian's Short Text of 33 pages (folios 97r to 115r)—is held in the British Library.
## Theology
Julian of Norwich is now recognised as one of England's most important mystics; according to Leyser, she was the greatest English anchoress. For the theologian Denys Turner the core issue Julian addresses in Revelations of Divine Love is "the problem of sin". Julian says that sin is behovely, which is often translated as "necessary", "appropriate", or "fitting".
Julian lived in a time of turmoil, but her theology was optimistic and spoke of God's omnibenevolence and love in terms of joy and compassion. Revelations of Divine Love "contains a message of optimism based on the certainty of being loved by God and of being protected by his Providence".
A characteristic element of Julian's mystical theology was her equating divine love with motherly love, a theme found in the Biblical prophets, as in Isaiah 49:15. According to Julian, God is both our mother and our father. As the medievalist Caroline Walker Bynum shows, this idea was also developed by Bernard of Clairvaux and others from the 12th century onward. Bynum regards the medieval notion of Jesus as a mother as being a metaphor rather than a literal belief. In her fourteenth revelation, Julian writes of the Trinity in domestic terms, comparing Jesus to a mother who is wise, loving and merciful. Author Frances Beer asserted that Julian believed that the maternal aspect of Christ was literal and not metaphoric: Christ is not like a mother, he is literally the mother. Julian emphasised this by explaining how the bond between mother and child is the only earthly relationship that comes close to the relationship a person can have with Jesus. She used metaphors when writing about Jesus in relation to ideas about conceiving, giving birth, weaning and upbringing.
Julian wrote, "For I saw no wrath except on man's side, and He forgives that in us, for wrath is nothing else but a perversity and an opposition to peace and to love." She wrote that God sees us as perfect and waits for the day when human souls mature so that evil and sin will no longer hinder us, and that "God is nearer to us than our own soul". This theme is repeated throughout her work: "Jesus answered with these words, saying: 'All shall be well, and all shall be well, and all manner of thing shall be well.' ... This was said so tenderly, without blame of any kind toward me or anybody else."
Her status as an anchoress may have prevented contemporary monastic and university authorities from challenging her theology. A lack of references to her work during her own time may indicate that she kept her writings with her in her cell, so that religious authorities were unaware of them. The 14th-century English cardinal Adam Easton's Defensorium sanctae birgittae, Alfonso of Jaen's Epistola Solitarii, and the English mystic William Flete's Remedies against Temptations, are all referenced in Julian's text.
## Commemoration
Julian is remembered in the Church of England with a Lesser Festival on 8 May. The Episcopal Church and the Evangelical Lutheran Church in the United States also commemorate her on 8 May.
Although not canonised in the Catholic Church (as of 2021) or listed in the Roman Martyrology, Julian is quoted in the Catechism of the Catholic Church. In 1997, Father Giandomenico Mucci listed Julian among 18 individuals who are considered potential Doctors of the Church, describing her as a beata.
Pope Benedict XVI discussed the life and teaching of Julian at a General Audience on 1 December 2010, stating: "Julian of Norwich understood the central message for spiritual life: God is love and it is only if one opens oneself to this love, totally and with total trust, and lets it become one's sole guide in life, that all things are transfigured, true peace and true joy found and one is able to radiate it." He concluded: "'And all will be well,' 'all manner of things shall be well': this is the final message that Julian of Norwich transmits to us and that I am also proposing to you today."
## Legacy
The 20th- and 21st-century revival of interest in Julian has been associated with a renewed interest in Christian contemplation in the English-speaking world. The Julian Meetings, an association of contemplative prayer groups, takes its name from her, but is unaffiliated with any faith doctrine and unconnected with Julian's theology, although her writings are sometimes used in meetings.
### St Julian's Church
There were no hermits or anchorites in Norwich from 1312 until the emergence of Julian in the 1370s. St Julian's Church, located off King Street in the south of Norwich city centre, holds regular services. The building, which has a round tower, is one of the 31 surviving parish churches of the 58 that existed in Norwich during the Middle Ages, 36 of which had an anchorite cell.
The cell did not remain empty after Julian's death. In 1428 Julian(a) Lampett (or Lampit) moved in when Edith Wilton was the prioress responsible for the church, and remained in the cell until 1478 when Margaret Pygot was prioress. The cell continued to be used by anchorites until the dissolution of the monasteries in the 1530s, when it was demolished and the church stripped of its rood screen and statues. No rector was appointed from then until 1581.
By 1845 St Julian's was in a poor state of repair and the east wall collapsed that year. After an appeal for funds, the church was restored. The church underwent further restoration during the first half of the 20th century, but was destroyed during the Norwich Blitz of June 1942 when the tower received a direct hit. After the war, the church was rebuilt. It now appears largely as it was before its destruction, although its tower is much reduced in height and a chapel has been built in place of the long-lost anchorite cell.
### Literature
The Catechism of the Catholic Church quotes from Revelations of Divine Love in its explanation of how God can draw a greater good, even from evil. The poet T. S. Eliot incorporated "All shall be well, and all shall be well, and all manner of thing shall be well" three times into his poem "Little Gidding", the fourth of his Four Quartets (1943), as well as Julian's "the ground of our beseeching". The poem renewed the English-speaking public's awareness of Julian's texts.
> And all shall be well and
> All manner of thing shall be well
> When the tongues of flames are in-folded
> Into the crowned knot of fire
> And the fire and the rose are one.
Sydney Carter's song "All Shall Be Well" (sometimes called "The Bells of Norwich"), which uses words by Julian, was published in 1982. Julian's writings have been translated into numerous languages.
In 2023 Julian was the subject of the fictional autobiography I, Julian by Dr Claire Gilbert, Visiting Fellow at Jesus College, Cambridge. Gilbert discussed her book on BBC Radio 4's Woman's Hour on 8 May 2023.
### Norfolk and Norwich
In 2013 the University of East Anglia honoured Julian by naming its new study centre the Julian Study Centre. Norwich's first Julian Week was held in May 2013. The celebration included concerts, talks, and free events held throughout the city, with the stated aim of encouraging people "to learn about Julian and her artistic, historical and theological significance".
The Lady Julian Bridge, crossing the River Wensum and linking King Street and the Riverside Walk close to Norwich railway station, was named in honour of the anchoress. An example of a swing bridge, built to allow larger vessels to approach a basin further upstream, it was designed by the Mott MacDonald Group and completed in 2009.
During 2023, the Friends of Julian of Norwich organised a series of events centred on 8 May, the 650th anniversary of Julian's revelations.
## Works: Revelations of Divine Love
### Manuscripts
#### Long Text
- Westminster Cathedral Treasury MS 4 (W), a late 15th- or early 16th-century manuscript. It includes extracts from Julian's Long Text, as well as selections from the writings of the English mystic Walter Hilton. The manuscript is on loan to Westminster Abbey's Muniments Room and Library (as of 1997).
#### Short Text
### Selected editions
- (The second edition (1907) is available online from the Internet Archive.)
## See also
- Order of Julian of Norwich
- Visions of Jesus and Mary
|
33,522 |
William Howard Taft
| 1,172,755,108 |
27th President of the United States
|
[
"1857 births",
"1900s in the United States",
"1910s in the United States",
"1930 deaths",
"19th-century American judges",
"19th-century Unitarians",
"20th-century American judges",
"20th-century Unitarians",
"20th-century presidents of the United States",
"American Unitarians",
"American legal scholars",
"American people of English descent",
"American people of Scotch-Irish descent",
"American prosecutors",
"Articles containing video clips",
"Boston University faculty",
"Burials at Arlington National Cemetery",
"Candidates in the 1908 United States presidential election",
"Candidates in the 1912 United States presidential election",
"Chief justices of the United States",
"Colonial heads of Cuba",
"Fellows of the American Academy of Arts and Sciences",
"Governors-General of the Philippine Islands",
"Judges of the Superior Court of Cincinnati",
"Judges of the United States Court of Appeals for the Sixth Circuit",
"Lawyers from Cincinnati",
"Members of the American Antiquarian Society",
"Members of the American Philosophical Society",
"Members of the Sons of the American Revolution",
"Ohio Republicans",
"Old Right (United States)",
"People from Kalorama (Washington, D.C.)",
"Politicians from Cincinnati",
"Presidents of the American Bar Association",
"Presidents of the United States",
"Progressive Era in the United States",
"Psi Upsilon",
"Republican Party (United States) presidential nominees",
"Republican Party presidents of the United States",
"Skull and Bones Society",
"Taft family",
"Theodore Roosevelt administration cabinet members",
"United States Secretaries of War",
"United States Solicitors General",
"United States federal judges appointed by Benjamin Harrison",
"United States federal judges appointed by Warren G. Harding",
"University of Cincinnati College of Law alumni",
"University of Cincinnati College of Law faculty",
"William Howard Taft",
"Woodward High School (Cincinnati, Ohio) alumni",
"Yale College alumni",
"Yale Law School faculty"
] |
William Howard Taft (September 15, 1857 – March 8, 1930) was the 27th president of the United States (1909–1913) and the tenth chief justice of the United States (1921–1930), the only person to have held both offices. Taft was elected president in 1908, the chosen successor of Theodore Roosevelt, but was defeated for reelection in 1912 by Woodrow Wilson after Roosevelt split the Republican vote by running as a third-party candidate. In 1921, President Warren G. Harding appointed Taft to be chief justice, a position he held until a month before his death.
Taft was born in Cincinnati, Ohio, in 1857. His father, Alphonso Taft, was a U.S. attorney general and secretary of war. Taft attended Yale and joined Skull and Bones, of which his father was a founding member. After becoming a lawyer, Taft was appointed a judge while still in his twenties. He continued a rapid rise, being named solicitor general and a judge of the Sixth Circuit Court of Appeals. In 1901, President William McKinley appointed Taft civilian governor of the Philippines. In 1904, Roosevelt made him Secretary of War, and he became Roosevelt's hand-picked successor. Despite his personal ambition to become chief justice, Taft declined repeated offers of appointment to the Supreme Court of the United States, believing his political work to be more important.
With Roosevelt's help, Taft had little opposition for the Republican nomination for president in 1908 and easily defeated William Jennings Bryan for the presidency in that November's election. In the White House, he focused on East Asia more than European affairs and repeatedly intervened to prop up or remove Latin American governments. Taft sought reductions to trade tariffs, then a major source of governmental income, but the resulting bill was heavily influenced by special interests. His administration was filled with conflict between the Republican Party's conservative wing, with which Taft often sympathized, and its progressive wing, toward which Roosevelt moved more and more. Controversies over conservation and antitrust cases filed by the Taft administration served to further separate the two men. Roosevelt challenged Taft for renomination in 1912. Taft used his control of the party machinery to gain a bare majority of delegates and Roosevelt bolted the party. The split left Taft with little chance of reelection, and he took only Utah and Vermont in Wilson's victory.
After leaving office, Taft returned to Yale as a professor, continuing his political activity and working against war through the League to Enforce Peace. In 1921, Harding appointed Taft chief justice, an office he had long sought. Chief Justice Taft was a conservative on business issues, and under him there were advances in individual rights. In poor health, he resigned in February 1930, and died the following month. He was buried at Arlington National Cemetery, the first president and first Supreme Court justice to be interred there. Taft is generally listed near the middle in historians' rankings of U.S. presidents.
## Early life and education
William Howard Taft was born September 15, 1857, in Cincinnati, Ohio, to Alphonso Taft and Louise Torrey. The Taft family was not wealthy, living in a modest home in the suburb of Mount Auburn. Alphonso served as a judge and an ambassador, and was U.S. Secretary of War and Attorney General under President Ulysses S. Grant.
William Taft was not seen as brilliant as a child, but was a hard worker; his demanding parents pushed him and his four brothers toward success, tolerating nothing less. He attended Woodward High School in Cincinnati. At Yale College, which he entered in 1874, the heavyset, jovial Taft was popular and an intramural heavyweight wrestling champion. One classmate said he succeeded through hard work rather than by being the smartest, and had integrity. He was elected a member of Skull and Bones, the Yale secret society co-founded by his father, one of three future presidents (with George H. W. Bush and George W. Bush) to be a member. In 1878, Taft graduated second in his class of 121. He attended Cincinnati Law School, and graduated with a Bachelor of Laws in 1880. While in law school, he worked on The Cincinnati Commercial newspaper, edited by Murat Halstead. Taft was assigned to cover the local courts, and also spent time reading law in his father's office; both activities gave him practical knowledge of the law that was not taught in class. Shortly before graduating from law school, Taft went to Columbus to take the bar examination and easily passed.
## Rise in government (1880–1908)
### Ohio lawyer and judge
After admission to the Ohio bar, Taft devoted himself to his job at the Commercial full-time. Halstead was willing to take him on permanently at an increased salary if he would give up the law, but Taft declined. In October 1880, Taft was appointed assistant prosecutor for Hamilton County (where Cincinnati is located), and took office the following January. Taft served for a year as assistant prosecutor, trying his share of routine cases. He resigned in January 1882 after President Chester A. Arthur appointed him Collector of Internal Revenue for Ohio's First District, an area centered on Cincinnati. Taft refused to dismiss competent employees who were politically out of favor, and resigned effective in March 1883, writing to Arthur that he wished to begin private practice in Cincinnati. In 1884, Taft campaigned for the Republican candidate for president, Maine Senator James G. Blaine, who lost to New York Governor Grover Cleveland.
In 1887, Taft, then aged 29, was appointed to a vacancy on the Superior Court of Cincinnati by Governor Joseph B. Foraker. The appointment was good for just over a year, after which he would have to face the voters, and in April 1888, he sought election for the first of three times in his lifetime, the other two being for the presidency. He was elected to a full five-year term. Some two dozen of Taft's opinions as a state judge survive, the most significant being Moores & Co. v. Bricklayers' Union No. 1 (1889) if only because it was used against him when he ran for president in 1908. The case involved bricklayers who refused to work for any firm that dealt with a company called Parker Brothers, with which they were in dispute. Taft ruled that the union's action amounted to a secondary boycott, which was illegal.
It is not clear when Taft met Helen Herron (often called Nellie), but it was no later than 1880, when she mentioned in her diary receiving an invitation to a party from him. By 1884, they were meeting regularly, and in 1885, after an initial rejection, she agreed to marry him. The wedding took place at the Herron home on June 19, 1886. William Taft remained devoted to his wife throughout their almost 44 years of marriage. Nellie Taft pushed her husband much as his parents had, and she could be very frank with her criticisms. The couple had three children, of whom the eldest, Robert, became a U.S. senator.
### Solicitor General
There was a seat vacant on the U.S. Supreme Court in 1889, and Governor Foraker suggested President Harrison appoint Taft to fill it. Taft was 32 and his professional goal was always a seat on the Supreme Court. He actively sought the appointment, writing to Foraker to urge the governor to press his case, while stating to others it was unlikely he would get it. Instead, in 1890, Harrison appointed him Solicitor General of the United States. When Taft arrived in Washington in February 1890, the office had been vacant for two months, with the work piling up. He worked to eliminate the backlog, while simultaneously educating himself on federal law and procedure he had not needed as an Ohio state judge.
New York Senator William M. Evarts, a former Secretary of State, had been a classmate of Alphonso Taft at Yale. Evarts called to see his friend's son as soon as Taft took office, and William and Nellie Taft were launched into Washington society. Nellie Taft was ambitious for herself and her husband, and was annoyed when the people he socialized with most were mainly Supreme Court justices, rather than the arbiters of Washington society such as Theodore Roosevelt, John Hay, Henry Cabot Lodge and their wives.
In 1891, Taft introduced a new policy: confession of error, by which the U.S. government would concede a case in the Supreme Court that it had won in the court below but that the solicitor general thought it should have lost. At Taft's request, the Supreme Court reversed a murder conviction that Taft said had been based on inadmissible evidence. The policy continues to this day.
Although Taft was successful as Solicitor General, winning 15 of the 18 cases he argued before the Supreme Court, he was glad when in March 1891, the United States Congress created a new judgeship for each of the United States Courts of Appeals and Harrison appointed him to the Sixth Circuit, based in Cincinnati. In March 1892, Taft resigned as Solicitor General to resume his judicial career.
### Federal judge
Taft's federal judgeship was a lifetime appointment, and one from which promotion to the Supreme Court might come. Taft's older half-brother Charles, successful in business, supplemented Taft's government salary, allowing William and Nellie Taft and their family to live in comfort. Taft's duties involved hearing trials in the circuit, which included Ohio, Michigan, Kentucky, and Tennessee, and participating with Supreme Court Justice John Marshall Harlan, the circuit justice, and judges of the Sixth Circuit in hearing appeals. Taft spent these years, from 1892 to 1900, in personal and professional contentment.
According to historian Louis L. Gould, "while Taft shared the fears about social unrest that dominated the middle classes during the 1890s, he was not as conservative as his critics believed. He supported the right of labor to organize and strike, and he ruled against employers in several negligence cases." Among these was Voight v. Baltimore & Ohio Southwestern Railway Co. Taft's decision for a worker injured in a railway accident violated the contemporary doctrine of liberty of contract, and he was reversed by the Supreme Court. On the other hand, Taft's opinion in United States v. Addyston Pipe and Steel Co. was upheld unanimously by the high court. Taft's opinion, in which he held that a pipe manufacturers' association had violated the Sherman Antitrust Act, was described by Henry Pringle, his biographer, as having "definitely and specifically revived" that legislation.
In 1896, Taft became dean and Professor of Property at his alma mater, the Cincinnati Law School, a post that required him to prepare and give two hour-long lectures each week. He was devoted to his law school, and was deeply committed to legal education, introducing the case method to the curriculum. As a federal judge, Taft could not involve himself with politics, but followed it closely, remaining a Republican supporter. He watched with some disbelief as the campaign of Ohio Governor William McKinley developed in 1894 and 1895, writing "I cannot find anybody in Washington who wants him". By March 1896, Taft realized that McKinley would likely be nominated, and was lukewarm in his support. He landed solidly in McKinley's camp after former Nebraska representative William Jennings Bryan stampeded the 1896 Democratic National Convention in July with his Cross of Gold speech. Bryan, both in that address and in his campaign, strongly advocated free silver, a policy that Taft saw as economic radicalism. Taft feared that people would hoard gold in anticipation of a Bryan victory, but he could do nothing but worry. McKinley was elected; when a place on the Supreme Court opened in 1898, the only one under McKinley, the president named Joseph McKenna.
From the 1890s until his death, Taft played a major role in the international legal community. He was active in many organizations, was a leader in the worldwide arbitration movement, and taught international law at the Yale Law School. Taft advocated the establishment of a world court of arbitration supported by an international police force and is considered a major proponent of "world peace through law" movement. One of the reasons for his bitter break with Roosevelt in 1910–12 was Roosevelt's insistence that arbitration was naïve and that only war could decide major international disputes.
### Philippine years
In January 1900, Taft was called to Washington to meet with McKinley. Taft hoped a Supreme Court appointment was in the works, but instead McKinley wanted to place Taft on the commission to organize a civilian government in the Philippines. The appointment would require Taft's resignation from the bench; the president assured him that if he fulfilled this task, McKinley would appoint him to the next vacancy on the high court. Taft accepted on condition he was made head of the commission, with responsibility for success or failure; McKinley agreed, and Taft sailed for the islands in April 1900.
The American takeover meant the Philippine Revolution bled into the Philippine–American War, as Filipinos fought for their independence, but U.S. forces, led by military governor General Arthur MacArthur Jr., had the upper hand by 1900. MacArthur felt the commission was a nuisance, and its mission a quixotic attempt to impose self-government on a people unready for it. The general was forced to co-operate with Taft, as McKinley had given the commission control over the islands' military budget. The commission took executive power in the Philippines on September 1, 1900; on July 4, 1901, Taft became civilian governor. MacArthur, until then the military governor, was relieved by General Adna Chaffee, who was designated only as commander of American forces. As Governor-General, Taft oversaw the final months of the primary phase of the Philippine–American War. He approved of General James Franklin Bell's use of concentration camps in the provinces of Batangas and Laguna, and accepted the surrender of Filipino general Miguel Malvar on April 16, 1902. In February 1902, Taft testified before the Senate Committee on the Philippines in regard to alleged offenses by the U.S. Marine Corps against Filipino civilians; he admitted that Marines had committed some offenses including waterboarding, but denied the existence of Bell's concentration camps.
Taft sought to make the Filipinos partners in a venture that would lead to their self-government; he saw independence as something decades off. Many Americans in the Philippines viewed the locals as racial inferiors, but Taft wrote soon before his arrival, "we propose to banish this idea from their minds". Taft did not impose racial segregation at official events, and treated the Filipinos as social equals. Nellie Taft recalled that "neither politics nor race should influence our hospitality in any way".
McKinley was assassinated in September 1901, and was succeeded by Theodore Roosevelt. Taft and Roosevelt had first become friends around 1890 while Taft was Solicitor General and Roosevelt a member of the United States Civil Service Commission. Taft had, after McKinley's election, urged the appointment of Roosevelt as Assistant Secretary of the Navy, and watched as Roosevelt became a war hero, Governor of New York, and Vice President of the United States. They met again when Taft went to Washington in January 1902 to recuperate after two operations caused by an infection. There, Taft testified before the Senate Committee on the Philippines. Taft wanted Filipino farmers to have a stake in the new government through land ownership, but much of the arable land was held by Catholic religious orders of mostly Spanish priests, which were often resented by the Filipinos. Roosevelt had Taft go to Rome to negotiate with Pope Leo XIII, to purchase the lands and to arrange the withdrawal of the Spanish priests, with Americans replacing them and training locals as clergy. Taft did not succeed in resolving these issues on his visit to Rome, but an agreement on both points was made in 1903.
In late 1902, Taft had heard from Roosevelt that a seat on the Supreme Court would soon fall vacant on the resignation of Justice George Shiras, and Roosevelt desired that Taft fill it. Although this was Taft's professional goal, he refused as he felt his work as governor was not yet done. The following year, Roosevelt asked Taft to become Secretary of War. As the War Department administered the Philippines, Taft would remain responsible for the islands, and Elihu Root, the incumbent, was willing to postpone his departure until 1904, allowing Taft time to wrap up his work in Manila. After consulting with his family, Taft agreed, and sailed for the United States in December 1903.
### Secretary of War
When Taft took office as Secretary of War in January 1904, he was not called upon to spend much time administering the army, which the president was content to do himself—Roosevelt wanted Taft as a troubleshooter in difficult situations, as a legal adviser, and to be able to give campaign speeches as he sought election in his own right. Taft strongly defended Roosevelt's record in his addresses, and wrote of the president's successful but strenuous efforts to gain election, "I would not run for president if you guaranteed the office. It is awful to be afraid of one's shadow."
Between 1905 and 1907, Taft came to terms with the likelihood he would be the next Republican nominee for president, though he did not plan to actively campaign for it. When Justice Henry Billings Brown resigned in 1906, Taft declined the seat although Roosevelt offered it, a position he maintained when another seat opened later that year. Edith Roosevelt, the First Lady, disliked the growing closeness between the two men, feeling that they were too much alike and that the president did not gain much from the advice of someone who rarely contradicted him.
What Taft wanted was to be chief justice, and he kept a close eye on the health of the aging incumbent, Melville Fuller, who turned 75 in 1908. Taft believed Fuller likely to live many years. Roosevelt had indicated he was likely to appoint Taft if the opportunity came to fill the court's center seat, but some considered Attorney General Philander Knox a better candidate. In any event, Fuller remained chief justice throughout Roosevelt's presidency.
Through the 1903 separation of Panama from Colombia and the Hay–Bunau-Varilla Treaty, the United States had secured rights to build a canal in the Isthmus of Panama. Legislation authorizing construction did not specify which government department would be responsible, and Roosevelt designated the Department of War. Taft journeyed to Panama in 1904, viewing the canal site and meeting with Panamanian officials. The Isthmian Canal Commission had trouble keeping a chief engineer, and when in February 1907 John F. Stevens submitted his resignation, Taft recommended an army engineer, George W. Goethals. Under Goethals, the project moved ahead smoothly.
Another colony lost by Spain in 1898 was Cuba, but as freedom for Cuba had been a major purpose of the war, it was not annexed by the U.S., but was, after a period of occupation, given independence in 1902. Election fraud and corruption followed, as did factional conflict. In September 1906, President Tomás Estrada Palma asked for U.S. intervention. Taft traveled to Cuba with a small American force, and on September 29, 1906, under the terms of the Cuban–American Treaty of Relations of 1903, declared himself Provisional Governor of Cuba, a post he held for two weeks before being succeeded by Charles Edward Magoon. In his time in Cuba, Taft worked to persuade Cubans that the U.S. intended stability, not occupation.
Taft remained involved in Philippine affairs. During Roosevelt's election campaign in 1904, he urged that Philippine agricultural products be admitted to the U.S. without duty. This caused growers of U.S. sugar and tobacco to complain to Roosevelt, who remonstrated with his Secretary of War. Taft expressed unwillingness to change his position, and threatened to resign; Roosevelt hastily dropped the matter. Taft returned to the islands in 1905, leading a delegation of congressmen, and again in 1907, to open the first Philippine Assembly.
On both of his Philippine trips as Secretary of War, Taft went to Japan, and met with officials there. The meeting in July 1905 came a month before the Portsmouth Peace Conference, which would end the Russo-Japanese War with the Treaty of Portsmouth. Taft met with Japanese Prime Minister Katsura Tarō. After that meeting, the two signed a memorandum. It contained nothing new but instead reaffirmed official positions: Japan had no intention of invading the Philippines, and the U.S. did not object to Japanese control of Korea. There were U.S. concerns about the number of Japanese laborers coming to the American West Coast, and during Taft's second visit, in September 1907, Tadasu Hayashi, the foreign minister, informally agreed to issue fewer passports to them.
## Presidential election of 1908
### Gaining the nomination
Roosevelt had served almost three and a half years of McKinley's term. On the night of his own election in 1904, Roosevelt publicly declared that he would not run for reelection in 1908, a pledge he quickly regretted. But he felt bound by his word. Roosevelt believed Taft was his logical successor, although the War Secretary had initially been reluctant to run. Roosevelt used his control of the party machinery to aid his heir apparent. On pain of the loss of their jobs, political appointees were required to support Taft or remain silent.
A number of Republican politicians, such as Treasury Secretary George Cortelyou, tested the waters for a run but chose to stay out. New York Governor Charles Evans Hughes ran, but when he made a major policy speech, Roosevelt the same day sent a special message to Congress warning in strong terms against corporate corruption. The resulting coverage of the presidential message relegated Hughes to the back pages. Roosevelt reluctantly deterred repeated attempts to draft him for another term.
Assistant Postmaster General Frank H. Hitchcock resigned from his office in February 1908 to lead the Taft effort. In April, Taft made a speaking tour, traveling as far west as Omaha before being recalled to straighten out a contested election in Panama. He had no serious opposition at the 1908 Republican National Convention in Chicago in June, and gained a first-ballot victory. Yet Taft did not have things his own way: he had hoped his running mate would be a midwestern progressive like Iowa Senator Jonathan Dolliver, but instead the convention named Congressman James S. Sherman of New York, a conservative. Taft resigned as Secretary of War on June 30 to devote himself full-time to the campaign.
### General election campaign
Taft's opponent in the general election was Bryan, the Democratic nominee for the third time in four presidential elections. As many of Roosevelt's reforms stemmed from proposals by Bryan, the Democrat argued that he was the true heir to Roosevelt's mantle. Corporate contributions to federal political campaigns had been outlawed by the 1907 Tillman Act, and Bryan proposed that contributions by officers and directors of corporations be similarly banned, or at least disclosed when made. Taft was only willing to see the contributions disclosed after the election, and tried to ensure that officers and directors of corporations litigating with the government were not among his contributors.
Taft began the campaign on the wrong foot, fueling the arguments of those who said he was not his own man by traveling to Roosevelt's home at Sagamore Hill for advice on his acceptance speech, saying that he needed "the President's judgment and criticism". Taft supported most of Roosevelt's policies. He argued that labor had a right to organize, but not to boycott, and that corporations and the wealthy must also obey the law. Bryan wanted the railroads to be owned by the government, but Taft preferred that they remain in the private sector, with their maximum rates set by the Interstate Commerce Commission, subject to judicial review. Taft blamed the recent recession, the Panic of 1907, on stock speculation and other abuses. He felt that some reform of the currency (the U.S. was on the gold standard) was needed to give the government flexibility in responding to poor economic times, that specific legislation on trusts was needed to supplement the Sherman Antitrust Act, and that the constitution should be amended to allow for an income tax, thus overruling decisions of the Supreme Court striking such a tax down. Roosevelt's expansive use of executive power had been controversial; Taft proposed to continue his policies, but to place them on more solid legal underpinnings through the passage of legislation.
Taft upset some progressives by choosing Hitchcock as Chairman of the Republican National Committee (RNC), placing him in charge of the presidential campaign. Hitchcock was quick to bring in men closely allied with big business. Taft took an August vacation in Hot Springs, Virginia, where he irritated political advisors by spending more time on golf than strategy. After seeing a newspaper photo of Taft taking a large swing at a golf ball, Roosevelt warned him against candid shots.
Roosevelt, frustrated by his own relative inaction, showered Taft with advice, fearing that the electorate would not appreciate Taft's qualities, and that Bryan would win. Roosevelt's supporters spread rumors that the president was in effect running Taft's campaign. This annoyed Nellie Taft, who never trusted the Roosevelts. Nevertheless, Roosevelt supported the Republican nominee with such enthusiasm that humorists suggested "TAFT" stood for "Take advice from Theodore".
Bryan urged a system of bank guarantees, so that depositors could be repaid if banks failed, but Taft opposed this, offering a postal savings system instead. The issue of prohibition of alcohol entered the campaign in mid-September, when Carrie Nation called on Taft and demanded to know his views. Taft and Roosevelt had agreed that the party platform would take no position on the matter, and Nation left indignant, later alleging that Taft was irreligious and against temperance. Taft, on Roosevelt's advice, ignored the issue.
In the end, Taft won by a comfortable margin. Taft defeated Bryan by 321 electoral votes to 162; however, he garnered just 51.6 percent of the popular vote. Nellie Taft said regarding the campaign, "There was nothing to criticize, except his not knowing or caring about the way the game of politics is played." Longtime White House usher Ike Hoover recalled that Taft came often to see Roosevelt during the campaign, but seldom between the election and Inauguration Day, March 4, 1909.
## Presidency (1909–1913)
### Inauguration and appointments
Taft was sworn in as president on March 4, 1909. Due to a winter storm that coated Washington with ice, Taft was inaugurated within the Senate Chamber rather than outside the Capitol as is customary. The new president stated in his inaugural address that he had been honored to have been "one of the advisers of my distinguished predecessor" and to have had a part "in the reforms he has initiated. I should be untrue to myself, to my promises, and to the declarations of the party platform on which I was elected if I did not make the maintenance and enforcement of those reforms a most important feature of my administration". He pledged to make those reforms long-lasting, ensuring that honest businessmen did not suffer uncertainty through change of policy. He spoke of the need to reduce the 1897 Dingley tariff, of the need for antitrust reform, and for continued advancement of the Philippines toward full self-government. Roosevelt left office with regret that his tenure in the position he enjoyed so much was over and, to keep out of Taft's way, arranged for a year-long hunting trip to Africa.
Soon after the Republican convention, Taft and Roosevelt had discussed which cabinet officers would stay on. Taft kept only Agriculture Secretary James Wilson and Postmaster General George von Lengerke Meyer (who was transferred to the Navy Department). Others appointed to the Taft cabinet included Philander Knox, who had served under McKinley and Roosevelt as Attorney General, as the new Secretary of State, and Franklin MacVeagh as Treasury Secretary.
Taft did not enjoy the easy relationship with the press that Roosevelt had, choosing not to offer himself for interviews or photo opportunities as often as his predecessor had. His administration marked a change in style from the charismatic leadership of Roosevelt to Taft's quieter passion for the rule of law.
### First Lady's illness
Early in Taft's term, in May 1909, his wife Nellie suffered a severe stroke that left her paralyzed in one arm and one leg and deprived her of the power of speech. Taft spent several hours each day looking after her and teaching her to speak again, a process that took a year.
### Foreign policy
#### Organization and principles
Taft made it a priority to restructure the State Department, noting, "it is organized on the basis of the needs of the government in 1800 instead of 1900." The department was for the first time organized into geographical divisions, including desks for the Far East, Latin America and Western Europe. The department's first in-service training program was established, and appointees spent a month in Washington before going to their posts. Taft and Secretary of State Knox had a strong relationship, and the president listened to his counsel on matters foreign and domestic. According to historian Paolo E. Coletta, Knox was not a good diplomat, and had poor relations with the Senate, press, and many foreign leaders, especially those from Latin America.
There was broad agreement between Taft and Knox on major foreign policy goals; the U.S. would not interfere in European affairs, and would use force if necessary to enforce the Monroe Doctrine in the Americas. The defense of the Panama Canal, which was under construction throughout Taft's term (it opened in 1914), guided United States foreign policy in the Caribbean and Central America. Previous administrations had made efforts to promote American business interests overseas, but Taft went a step further and used the web of American diplomats and consuls abroad to further trade. Such ties, Taft hoped, would promote world peace. Taft pushed for arbitration treaties with Great Britain and France, but the Senate was not willing to yield to arbitrators its constitutional prerogative to approve treaties.
#### Tariffs and reciprocity
At the time of Taft's presidency, protectionism through the use of tariffs was a fundamental position of the Republican Party. The Dingley Act tariff had been enacted to protect American industry from foreign competition. The 1908 party platform had supported unspecified revisions to the Dingley Act, and Taft interpreted this to mean reduction. Taft called a special session of Congress to convene on March 15, 1909, to deal with the tariff question.
Sereno E. Payne, chairman of the House Ways and Means Committee, had held hearings in late 1908, and sponsored the resulting draft legislation. On balance, the bill reduced tariffs slightly, but when it passed the House in April 1909 and reached the Senate, the chairman of the Senate Finance Committee, Rhode Island Senator Nelson W. Aldrich, attached many amendments raising rates. This outraged progressives such as Wisconsin's Robert M. La Follette, who urged Taft to say that the bill was not in accord with the party platform. Taft refused, angering them. Taft insisted that most imports from the Philippines be free of duty, and, according to Anderson, showed effective leadership on a subject he knew and cared about.
When opponents sought to modify the tariff bill to allow for an income tax, Taft opposed it on the ground that the Supreme Court would likely strike it down as unconstitutional, as it had before. Instead, they proposed a constitutional amendment, which passed both houses in early July, was sent to the states, and by 1913 was ratified as the Sixteenth Amendment. In the conference committee, Taft won some victories, such as limiting the tax on lumber. The conference report passed both houses, and Taft signed it on August 6, 1909. The Payne–Aldrich tariff was immediately controversial. According to Coletta, "Taft had lost the initiative, and the wounds inflicted in the acrid tariff debate never healed".
In his annual message sent to Congress in December 1910, Taft urged a free trade accord with Canada. Britain at that time still handled Canada's foreign relations, and Taft found the British and Canadian governments willing. Many in Canada opposed an accord, fearing the U.S. would dump it when it became inconvenient, as it had the 1854 Elgin–Marcy Treaty in 1866, and farm and fisheries interests in the United States were also opposed. After talks with Canadian officials in January 1911, Taft had the agreement, which was not a treaty, introduced into Congress. It passed in late July. The Parliament of Canada, led by Prime Minister Sir Wilfrid Laurier, had deadlocked over the issue. Canadians turned Laurier out of office in the September 1911 election, and Robert Borden became the new prime minister. No cross-border agreement was concluded, and the debate deepened divisions within the Republican Party.
#### Latin America
Taft and his Secretary of State, Philander Knox, instituted a policy of Dollar Diplomacy towards Latin America, believing U.S. investment would benefit all involved, while diminishing European influence in regions where the Monroe Doctrine applied. The policy was unpopular among Latin American states that did not wish to become financial protectorates of the United States, as well as in the U.S. Senate, many of whose members believed the U.S. should not interfere abroad. No foreign affairs controversy tested Taft's policy more than the collapse of the Mexican regime and subsequent turmoil of the Mexican Revolution.
When Taft entered office, Mexico was increasingly restless under the grip of longtime dictator Porfirio Díaz. Many Mexicans backed his opponent, Francisco I. Madero. There were a number of incidents in which Mexican rebels crossed the U.S. border to obtain horses and weapons; Taft sought to prevent this by ordering the U.S. Army to the border areas for maneuvers. Taft told his military aide, Archibald Butt, that "I am going to sit on the lid and it will take a great deal to pry me off". He showed his support for Díaz by meeting with him at El Paso, Texas, and Ciudad Juárez, Chihuahua, the first meeting between a U.S. and a Mexican president and also the first time an American president visited Mexico. The day of the summit, Frederick Russell Burnham and a Texas Ranger captured and disarmed an assassin holding a palm pistol only a few feet from the two presidents. Before the Mexican election, Díaz jailed Madero, whose supporters took up arms. This resulted in both the ousting of Díaz and a revolution that would continue for another ten years. In the U.S.'s Arizona Territory, two citizens were killed and almost a dozen injured, some by gunfire across the border. Taft was against an aggressive response and so instructed the territorial governor.
Nicaragua's president, José Santos Zelaya, wanted to revoke commercial concessions granted to American companies, and American diplomats quietly favored rebel forces under Juan Estrada. Nicaragua was in debt to foreign powers, and the U.S. was unwilling to let an alternate canal route fall into the hands of Europeans. Zelaya's elected successor, José Madriz, could not put down the rebellion as U.S. forces interfered, and in August 1910, the Estrada forces took Managua, the capital. The U.S. compelled Nicaragua to accept a loan, and sent officials to ensure it was repaid from government revenues. The country remained unstable, and after another coup in 1911 and more disturbances in 1912, Taft sent troops to begin the United States occupation of Nicaragua, which lasted until 1933.
Treaties among Panama, Colombia, and the United States to resolve disputes arising from the Panamanian Revolution of 1903 had been signed by the lame-duck Roosevelt administration in early 1909, and were approved by the Senate and also ratified by Panama. Colombia, however, declined to ratify the treaties, and after the 1912 elections, Knox offered \$10 million to the Colombians (later raised to \$25 million). The Colombians felt the amount inadequate, and requested arbitration; the matter was not settled under the Taft administration.
#### East Asia
Due to his years in the Philippines, Taft was keenly interested as president in East Asian affairs. He considered relations with Europe relatively unimportant, but because of the potential for trade and investment, he ranked the post of minister to China as the most important in the Foreign Service. Knox did not agree, and declined a suggestion that he go to Peking to view the facts on the ground. Taft considered Roosevelt's minister there, William W. Rockhill, uninterested in the China trade, and replaced him with William J. Calhoun, whom McKinley and Roosevelt had sent on several foreign missions. Knox did not listen to Calhoun on policy, and there were often conflicts. Taft and Knox tried unsuccessfully to extend John Hay's Open Door Policy to Manchuria.
In 1898, an American company had gained a concession for a railroad between Hankou and Sichuan, but the Chinese revoked the agreement in 1904 after the company (which was indemnified for the revocation) breached the agreement by selling a majority stake outside the United States. The Chinese imperial government got the money for the indemnity from the British Hong Kong government, on condition that British subjects would be favored if foreign capital was needed to build the railroad line, and in 1909, a British-led consortium began negotiations. This came to Knox's attention in May of that year, and he demanded that U.S. banks be allowed to participate. Taft appealed personally to the Prince Regent, Zaifeng, Prince Chun, and was successful in gaining U.S. participation, though agreements were not signed until May 1911. However, the Chinese decree authorizing the agreement also required the nationalization of local railroad companies in the affected provinces. Inadequate compensation was paid to the shareholders, and these grievances were among those which touched off the Chinese Revolution of 1911.
After the revolution broke out, the revolt's leaders chose Sun Yat-sen as provisional president of what became the Republic of China, overthrowing the Manchu dynasty. Taft was reluctant to recognize the new government, although American public opinion favored it. The U.S. House of Representatives in February 1912 passed a resolution supporting a Chinese republic, but Taft and Knox felt recognition should come as a concerted action by Western powers. Taft in his final annual message to Congress in December 1912 indicated that he was moving towards recognition once the republic was fully established, but by then he had been defeated for reelection and he did not follow through. Taft continued the policy against immigration from China and Japan as under Roosevelt. A revised treaty of friendship and navigation entered into by the U.S. and Japan in 1911 granted broad reciprocal rights to Japanese people in America and Americans in Japan, but was premised on the continuation of the Gentlemen's Agreement. There was objection on the West Coast when the treaty was submitted to the Senate, but Taft informed politicians that there was no change in immigration policy.
#### Europe
Taft was opposed to the traditional practice of rewarding wealthy supporters with key ambassadorial posts, preferring that diplomats not live in a lavish lifestyle and selecting men who, as Taft put it, would recognize an American when they saw one. High on his list for dismissal was the ambassador to France, Henry White, whom Taft knew and disliked from his visits to Europe. White's ousting caused other career State Department employees to fear that their jobs might be lost to politics. Taft also wanted to replace the Roosevelt-appointed ambassador in London, Whitelaw Reid, but Reid, owner of the New-York Tribune, had backed Taft during the campaign, and both William and Nellie Taft enjoyed his gossipy reports. Reid remained in place until his 1912 death.
Taft was a supporter of settling international disputes by arbitration, and he negotiated treaties with Great Britain and with France providing that differences be arbitrated. These were signed in August 1911. Neither Taft nor Knox (a former senator) consulted with members of the Senate during the negotiating process. By then, many Republicans were opposed to Taft, and the president felt that lobbying too hard for the treaties might cause their defeat. He made some speeches supporting the treaties in October, but the Senate added amendments Taft could not accept, killing the agreements.
Although no general arbitration treaty was entered into, Taft's administration settled several disputes with Great Britain by peaceful means, often involving arbitration. These included a settlement of the boundary between Maine and New Brunswick, a long-running dispute over seal hunting in the Bering Sea that also involved Japan, and a similar disagreement regarding fishing off Newfoundland. The sealing convention remained in force until abrogated by Japan in 1940.
### Domestic policies and politics
#### Antitrust
Taft continued and expanded Roosevelt's efforts to break up business combinations through lawsuits brought under the Sherman Antitrust Act, bringing 70 cases in four years (Roosevelt had brought 40 in seven years). Suits brought against the Standard Oil Company and the American Tobacco Company, initiated under Roosevelt, were decided in favor of the government by the Supreme Court in 1911. In June 1911, the Democrat-controlled House of Representatives began hearings into United States Steel (U.S. Steel). That company had been expanded under Roosevelt, who had supported its acquisition of the Tennessee Coal, Iron, and Railroad Company as a means of preventing the deepening of the Panic of 1907, a decision the former president defended when testifying at the hearings. Taft, as Secretary of War, had praised the acquisition. Historian Lewis L. Gould suggested that Roosevelt was likely deceived into believing that U.S. Steel did not want to purchase the Tennessee company, but it was in fact a bargain. For Roosevelt, questioning the matter went to his personal honesty.
In October 1911, Taft's Justice Department brought suit against U.S. Steel, demanding that over a hundred of its subsidiaries be granted corporate independence, and naming as defendants many prominent business executives and financiers. The pleadings in the case had not been reviewed by Taft, and alleged that Roosevelt "had fostered monopoly, and had been duped by clever industrialists". Roosevelt was offended by the references to him and his administration in the pleadings, and felt that Taft could not evade command responsibility by saying he did not know of them.
Taft sent a special message to Congress on the need for a revamped antitrust statute when it convened its regular session in December 1911, but it took no action. Another antitrust case that had political repercussions for Taft was that brought against the International Harvester Company, the large manufacturer of farm equipment, in early 1912. As Roosevelt's administration had investigated International Harvester, but had taken no action (a decision Taft had supported), the suit became caught up in Roosevelt's challenge for the Republican presidential nomination. Supporters of Taft alleged that Roosevelt had acted improperly; the former president blasted Taft for waiting three and a half years, and until he was under challenge, to reverse a decision he had supported.
#### Ballinger–Pinchot affair
Roosevelt was an ardent conservationist, assisted in this by like-minded appointees, including Interior Secretary James R. Garfield and Chief Forester Gifford Pinchot. Taft agreed with the need for conservation, but felt it should be accomplished by legislation rather than executive order. He did not retain Garfield, an Ohioan, as secretary, choosing instead a westerner, former Seattle mayor Richard A. Ballinger. Roosevelt was surprised at the replacement, believing that Taft had promised to keep Garfield, and this change was one of the events that caused Roosevelt to realize that Taft would choose different policies.
Roosevelt had withdrawn much land from the public domain, including some in Alaska thought rich in coal. In 1902, Clarence Cunningham, an Idaho entrepreneur, had found coal deposits in Alaska, and made mining claims, and the government investigated their legality. This dragged on for the remainder of the Roosevelt administration, including during the year (1907–1908) when Ballinger served as head of the General Land Office. A special agent for the Land Office, Louis Glavis, investigated the Cunningham claims, and when Secretary Ballinger in 1909 approved them, Glavis broke governmental protocol by going outside the Interior Department to seek help from Pinchot.
In September 1909, Glavis made his allegations public in a magazine article, disclosing that Ballinger had acted as an attorney for Cunningham between his two periods of government service. This violated conflict of interest rules forbidding a former government official from advocacy on a matter he had been responsible for. On September 13, 1909, Taft dismissed Glavis from government service, relying on a report from Attorney General George W. Wickersham dated two days previously. Pinchot was determined to dramatize the issue by forcing his own dismissal, which Taft tried to avoid, fearing that it might cause a break with Roosevelt (still overseas). Taft asked Elihu Root (by then a senator) to look into the matter, and Root urged the firing of Pinchot.
Taft had ordered government officials not to comment on the fracas. In January 1910, Pinchot forced the issue by sending a letter to Iowa Senator Dolliver alleging that but for the actions of the Forestry Service, Taft would have approved a fraudulent claim on public lands. According to Pringle, this "was an utterly improper appeal from an executive subordinate to the legislative branch of the government and an unhappy president prepared to separate Pinchot from public office". Pinchot was dismissed, much to his delight, and he sailed for Europe to lay his case before Roosevelt. A congressional investigation followed, which cleared Ballinger by majority vote, but the administration was embarrassed when Glavis' attorney, Louis D. Brandeis, proved that the Wickersham report had been backdated, which Taft belatedly admitted. The Ballinger–Pinchot affair caused progressives and Roosevelt loyalists to feel that Taft had turned his back on Roosevelt's agenda.
#### Civil rights
Taft announced in his inaugural address that he would not appoint African Americans to federal jobs, such as postmaster, where this would cause racial friction. This differed from Roosevelt, who would not remove or replace black officeholders with whom local whites would not deal. Termed Taft's "Southern Policy", this stance effectively invited white protests against black appointees. Taft followed through, removing most black office holders in the South, and made few appointments of African Americans in the North.
At the time Taft was inaugurated, the way forward for African Americans was debated by their leaders. Booker T. Washington felt that most blacks should be trained for industrial work, with only a few seeking higher education; W. E. B. Du Bois took a more militant stand for equality. Taft tended towards Washington's approach. According to Coletta, Taft let the African-American "be 'kept in his place' ... He thus failed to see or follow the humanitarian mission historically associated with the Republican party, with the result that Negroes both North and South began to drift toward the Democratic party."
Taft, a Unitarian, was a leader in the early 20th century of the favorable reappraisal of Catholicism's historic role. It tended to neutralize anti-Catholic sentiments, especially in the Far West where Protestantism was a weak force. In 1904 Taft gave a speech at the University of Notre Dame. He praised the "enterprise, courage, and fidelity to duty that distinguished those heroes of Spain who braved the then frightful dangers of the deep to carry Christianity and European civilization into" the Philippines. In 1909 he praised Junípero Serra as an "apostle, legislator, [and] builder" who advanced "the beginning of civilization in California."
A supporter of free immigration, Taft vetoed a bill passed by Congress and supported by labor unions that would have restricted unskilled laborers by imposing a literacy test.
### Judicial appointments
Taft made six appointments to the Supreme Court; only George Washington and Franklin D. Roosevelt made more. The death of Justice Rufus Peckham in October 1909 gave Taft his first opportunity. He chose an old friend and colleague from the Sixth Circuit, Horace H. Lurton of Georgia; he had in vain urged Theodore Roosevelt to appoint Lurton to the high court. Attorney General Wickersham objected that Lurton, a former Confederate soldier and a Democrat, was aged 65. Taft named Lurton anyway on December 13, 1909, and the Senate confirmed him by voice vote a week later. Lurton is still the oldest person to be made an associate justice. Lurie suggested that Taft, already beset by the tariff and conservation controversies, desired to perform an official act which gave him pleasure, especially since he thought Lurton deserved it.
Justice David Josiah Brewer's death on March 28, 1910, gave Taft a second opportunity to fill a seat on the high court; he chose New York Governor Charles Evans Hughes. Taft told Hughes that should the chief justiceship fall vacant during his term, Hughes would be his likely choice for the center seat. The Senate quickly confirmed Hughes, but then Chief Justice Fuller died on July 4, 1910. Taft took five months to replace Fuller, and when he did, it was with Justice Edward Douglass White, who became the first associate justice to be promoted to chief justice. According to Lurie, Taft, who still had hopes of being chief justice, may have been more willing to appoint an older man like White than a younger one like Hughes, who might outlive him, as indeed Hughes did. To fill White's seat as associate justice, Taft appointed Willis Van Devanter of Wyoming, a federal appeals judge. By the time Taft nominated White and Van Devanter in December 1910, he had another seat to fill due to William Henry Moody's retirement because of illness; he named a Louisiana Democrat, Joseph R. Lamar, whom he had met while playing golf, and had subsequently learned had a good reputation as a judge.
With the death of Justice Harlan in October 1911, Taft got to fill a sixth seat on the Supreme Court. After Secretary Knox declined appointment, Taft named Chancellor of New Jersey Mahlon Pitney, the last person appointed to the Supreme Court who did not attend law school. Pitney had a stronger anti-labor record than Taft's other appointments, and was the only one to meet opposition, winning confirmation by a Senate vote of 50–26.
Taft appointed 13 judges to the federal courts of appeal and 38 to the United States district courts. Taft also appointed judges to various specialized courts, including the first five appointees each to the United States Commerce Court and the United States Court of Customs Appeals. The Commerce Court, created in 1910, stemmed from a Taft proposal for a specialized court to hear appeals from the Interstate Commerce Commission. There was considerable opposition to its establishment, which only grew when one of its judges, Robert W. Archbald, was in 1912 impeached for corruption and removed by the Senate the following January. Taft vetoed a bill to abolish the court, but the respite was short-lived as Woodrow Wilson signed similar legislation in October 1913.
### 1912 presidential campaign and election
#### Moving apart from Roosevelt
During Roosevelt's fifteen months abroad, from March 1909 to June 1910, neither man wrote much to the other. Taft biographer Lurie suggested that each expected the other to make the first move to re-establish their relationship on a new footing. Upon Roosevelt's triumphant return, Taft invited him to stay at the White House. The former president declined, and in private letters to friends expressed dissatisfaction at Taft's performance. Nevertheless, he wrote that he expected Taft to be renominated by the Republicans in 1912, and did not speak of himself as a candidate.
Stanley Solvick argues that Taft abided by the goals and procedures of the "Square Deal" that Roosevelt promoted in his first term. The deepening dispute came as Roosevelt and the more radical progressives moved on to more aggressive goals, such as curbing the judiciary, which Taft rejected.[^1]
Taft and Roosevelt met twice in 1910; the meetings, though outwardly cordial, did not display their former closeness. Roosevelt gave a series of speeches in the West in the late summer and early fall of 1910. He not only attacked the Supreme Court's 1905 decision in Lochner v. New York, but also accused the federal courts of undermining democracy and called for them to be deprived of the power to rule legislation unconstitutional. This attack horrified Taft, who privately agreed that Lochner had been wrongly decided. Roosevelt called for "elimination of corporate expenditures for political purposes, physical valuation of railroad properties, regulation of industrial combinations, establishment of an export tariff commission, a graduated income tax" as well as "workmen's compensation laws, state and national legislation to regulate the [labor] of women and children, and complete publicity of campaign expenditure". According to John Murphy in his journal article on the breach between the two presidents, "As Roosevelt began to move to the left, Taft veered to the right."
During the 1910 midterm election campaign, Roosevelt involved himself in New York politics, while Taft with donations and influence tried to secure the election of the Republican gubernatorial candidate in Ohio, former lieutenant governor Warren G. Harding. The Republicans suffered losses in the 1910 elections as the Democrats took control of the House and slashed the Republican majority in the Senate. In New Jersey, Democrat Woodrow Wilson was elected governor, and Harding lost his race in Ohio.
After the election, Roosevelt continued to promote progressive ideals, a New Nationalism, much to Taft's dismay. Roosevelt attacked his successor's administration, arguing that its guiding principles were not that of the party of Lincoln, but those of the Gilded Age. The feud continued on and off through 1911, a year in which there were few elections of significance. Wisconsin Senator La Follette announced a presidential run as a Republican, and was backed by a convention of progressives. Roosevelt began to move into a position for a run in late 1911, writing that the tradition that presidents not run for a third term only applied to consecutive terms.
Roosevelt was receiving many letters from supporters urging him to run, and Republican office-holders were organizing on his behalf. Balked on many policies by an unwilling Congress and courts in his full term in the White House, he saw manifestations of public support he believed would sweep him to the White House with a mandate for progressive policies that would brook no opposition. In February, Roosevelt announced he would accept the Republican nomination if it was offered to him. Taft felt that if he lost in November, it would be a repudiation of the party, but if he lost renomination, it would be a rejection of himself. He was reluctant to oppose Roosevelt, who had helped make him president, but having become president, he was determined to remain one, and that meant not standing aside to allow Roosevelt to gain another term.
#### Primaries and convention
As Roosevelt became more radical in his progressivism, Taft was hardened in his resolve to achieve re-nomination, as he was convinced that the progressives threatened the very foundation of the government. One blow to Taft was the loss of Archibald Butt, one of the last links between the previous and present presidents, as Butt had formerly served Roosevelt. Torn between his loyalties, Butt went to Europe on vacation; he died in the sinking of the RMS Titanic.
Roosevelt dominated the primaries, winning 278 of the 362 delegates to the Republican National Convention in Chicago decided in that manner. Taft had control of the party machinery, and it came as no surprise that he gained the bulk of the delegates decided at district or state conventions. Taft did not have a majority, but was likely to have one once southern delegations committed to him. Roosevelt challenged the election of these delegates, but the RNC overruled most objections. Roosevelt's sole remaining chance was with a friendly convention chairman, who might make rulings on the seating of delegates that favored his side. Taft followed custom and remained in Washington, but Roosevelt went to Chicago to run his campaign and told his supporters in a speech, "we stand at Armageddon, and we battle for the Lord".
Taft had won over Root, who agreed to run for temporary chairman of the convention, and the delegates elected Root over Roosevelt's candidate. The Roosevelt forces moved to substitute the delegates they supported for the ones they argued should not be seated. Root made a crucial ruling, that although the contested delegates could not vote on their own seating, they could vote on the other contested delegates, a ruling that assured Taft's nomination, as the motion offered by the Roosevelt forces failed, 567–507. As it became clear Roosevelt would bolt the party if not nominated, some Republicans sought a compromise candidate to avert electoral disaster; they failed. Taft's name was placed in nomination by Warren Harding, whose attempts to praise Taft and unify the party were met with angry interruptions from progressives. Taft was nominated on the first ballot, though most Roosevelt delegates refused to vote.
#### Campaign and defeat
Alleging Taft had stolen the nomination, Roosevelt and his followers formed the Progressive Party. Taft knew he would lose, but concluded that through Roosevelt's loss at Chicago the party had been preserved as "the defender of conservative government and conservative institutions." He made his doomed run to preserve conservative control of the Republican Party. Governor Woodrow Wilson was the Democratic nominee. Seeing Roosevelt as the greater electoral threat, Wilson spent little time attacking Taft, arguing that Roosevelt had been lukewarm in opposing the trusts during his presidency, and that Wilson was the true reformer. Taft contrasted what he called his "progressive conservatism" with Roosevelt's Progressive democracy, which to Taft represented "the establishment of a benevolent despotism."
Reverting to the pre-1888 custom that presidents seeking reelection did not campaign, Taft spoke publicly only once, making his nomination acceptance speech on August 1. He had difficulty in financing the campaign, as many industrialists had concluded he could not win, and would support Wilson to block Roosevelt. The president issued a confident statement in September after the Republicans narrowly won Vermont's state elections in a three-way fight, but had no illusions he would win his race. He had hoped to send his cabinet officers out on the campaign trail, but found them reluctant to go. Senator Root agreed to give a single speech for him.
Vice President Sherman had been renominated at Chicago; seriously ill during the campaign, he died six days before the election, and was replaced on the ticket by the president of Columbia University, Nicholas Murray Butler. But few electors chose Taft and Butler, who won only Utah and Vermont, for a total of eight electoral votes. Roosevelt won 88, and Wilson 435. Wilson won with a plurality—not a majority—of the popular vote. Taft finished with just under 3.5 million, over 600,000 less than the former president. Taft was not on the ballot in California, due to the actions of local Progressives, nor in South Dakota.
## Return to Yale (1913–1921)
With no pension or other compensation to expect from the government after leaving the White House, Taft contemplated a return to the practice of law, from which he had long been absent. Given that Taft had appointed many federal judges, including a majority of the Supreme Court, this would raise questions of conflict of interest at every federal court appearance and he was saved from this by an offer for him to become Kent Professor of Law and Legal History at Yale Law School. He accepted, and after a month's vacation in Georgia, arrived in New Haven on April 1, 1913, to a rapturous reception. As it was too late in the semester for him to give an academic course, he instead prepared eight lectures on "Questions of Modern Government", which he delivered in May. He earned money with paid speeches and with articles for magazines, and would end his eight years out of office having increased his savings. While at Yale, he wrote the treatise, Our Chief Magistrate and His Powers (1916).
Taft had been made president of the Lincoln Memorial Commission while still in office; when Democrats proposed removing him for one of their party, he quipped that unlike losing the presidency, such a removal would hurt. The architect, Henry Bacon, wanted to use Colorado-Yule marble, while southern Democrats urged using Georgia marble. Taft lobbied for the western stone, and the matter was submitted to the Commission of Fine Arts, which supported Taft and Bacon. The project went forward; Taft would dedicate the Lincoln Memorial as chief justice in 1922. In 1913, Taft was elected to a one-year term as president of the American Bar Association (ABA), a trade group of lawyers. He removed opponents, such as Louis Brandeis and University of Pennsylvania Law School dean William Draper Lewis (a supporter of the Progressive Party) from committees.
Taft maintained a cordial relationship with Wilson. The former president privately criticized his successor on a number of issues, but made his views known publicly only on Philippine policy. Taft was appalled when, after Justice Lamar's death in January 1916, Wilson nominated Brandeis, whom the former president had never forgiven for his role in the Ballinger–Pinchot affair. When hearings uncovered nothing discreditable about Brandeis, Taft intervened with a letter signed by himself and other former ABA presidents, stating that Brandeis was not fit to serve on the Supreme Court. Nevertheless, the Democratic-controlled Senate confirmed Brandeis. Taft and Roosevelt remained embittered; they met only once in the first three years of the Wilson presidency, at a funeral at Yale. They spoke only for a moment, politely but formally.
As president of the League to Enforce Peace, Taft hoped to prevent war through an international association of nations. With World War I raging in Europe, Taft sent Wilson a note of support for his foreign policy in 1915. President Wilson accepted Taft's invitation to address the league, and spoke in May 1916 of a postwar international organization that could prevent a repetition of the conflict. Taft supported the effort to get Justice Hughes to resign from the bench and accept the Republican presidential nomination. Once this was done, Hughes tried to get Roosevelt and Taft to reconcile, as a united effort was needed to defeat Wilson. This occurred on October 3 in New York, but Roosevelt allowed only a handshake, and no words were exchanged. This was one of many difficulties for the Republicans in the campaign, and Wilson narrowly won reelection.
In March 1917, Taft demonstrated public support for the war effort by joining the Connecticut State Guard, a state defense force organized to carry out the state duties of the Connecticut National Guard while the National Guard served on active duty. When Wilson asked Congress to declare war on Germany in April 1917, Taft was an enthusiastic supporter; he was chairman of the American Red Cross' executive committee, which occupied much of the former president's time. In August 1917, Wilson conferred military titles on executives of the Red Cross as a way to provide them with additional authority to use in carrying out their wartime responsibilities, and Taft was appointed a major general.
During the war, Taft took leave from Yale in order to serve as co-chairman of the National War Labor Board, tasked with assuring good relations between industry owners and their workers. In February 1918, the new RNC chairman, Will H. Hays, approached Taft seeking his reconciliation with Roosevelt. While at the Palmer House in Chicago, Taft heard that Roosevelt was there having dinner, and after he walked in, the two men embraced to the applause of the room, but the relationship did not progress; Roosevelt died in January 1919. Taft later wrote, "Had he died in a hostile state of mind toward me, I would have mourned the fact all my life. I loved him always and cherish his memory."
When Wilson proposed establishment of a League of Nations, Taft expressed public support. He was the leader of his party's activist wing, and was opposed by a small group of senators who vigorously opposed the League. Taft's flip-flop on whether reservations to the Versailles Treaty were necessary angered both sides, causing some Republicans to call him a Wilson supporter and a traitor to his party. The Senate refused to ratify the Versailles pact.
## Chief Justice (1921–1930)
### Appointment
During the 1920 election campaign, Taft supported the Republican ticket—Harding (by then a senator) and Massachusetts Governor Calvin Coolidge; they were elected. Taft was among those asked to come to the president-elect's home in Marion, Ohio, to advise him on appointments, and the two men conferred there on December 24, 1920. By Taft's later account, after some conversation, Harding casually asked if Taft would accept appointment to the Supreme Court; if Taft would, Harding would appoint him. Taft had a condition for Harding—having served as president, and having appointed two of the present associate justices and opposed Brandeis, he could accept only the chief justice position. Harding made no response, and Taft in a thank-you note reiterated the condition and stated that Chief Justice White had often told him he was keeping the position for Taft until a Republican held the White House. In January 1921, Taft heard through intermediaries that Harding planned to appoint him, if given the chance.
White by then was in failing health, but made no move to resign when Harding was sworn in on March 4, 1921. Taft called on the chief justice on March 26, and found White ill, but still carrying on his work and not talking of retiring. White did not retire, dying in office on May 19, 1921. Taft issued a tribute to the man he had appointed to the center seat, and waited and worried if he would be White's successor. Despite widespread speculation Taft would be the pick, Harding made no quick announcement. Taft was lobbying for himself behind the scenes, especially with the Ohio politicians who formed Harding's inner circle.
It later emerged that Harding had also promised former Utah senator George Sutherland a seat on the Supreme Court, and was waiting in the expectation that another place would become vacant. Harding was also considering a proposal by Justice William R. Day to crown his career by being chief justice for six months before retiring. Taft felt, when he learned of this plan, that a short-term appointment would not serve the office well, and that once Day was confirmed by the Senate, the memory of his promise to retire might grow dim. After Harding rejected Day's plan, Attorney General Harry Daugherty, who supported Taft's candidacy, urged the president to fill the vacancy, and Harding named Taft on June 30, 1921. The Senate confirmed Taft the same day, 61–4, without any committee hearings and after a brief debate in executive session. Taft drew the objections of three progressive Republicans and one southern Democrat. When he was sworn in on July 11, he became the first and to date only person to serve both as president and chief justice.
### Jurisprudence
#### Commerce Clause
The Supreme Court under Taft compiled a conservative record in Commerce Clause jurisprudence. This had the practical effect of making it difficult for the federal government to regulate industry, and the Taft Court also scuttled many state laws. The few liberals on the court—Brandeis, Holmes, and (from 1925) Harlan Fiske Stone—sometimes protested, believing orderly progress essential, but often joined in the majority opinion.
The White Court had, in 1918, struck down an attempt by Congress to regulate child labor in Hammer v. Dagenhart. Congress thereafter attempted to end child labor by imposing a tax on certain corporations making use of it. That law was overturned by the Supreme Court in 1922 in Bailey v. Drexel Furniture Co., with Taft writing the court's opinion for an 8–1 majority. He held that the tax was not intended to raise revenue, but rather was an attempt to regulate matters reserved to the states under the Tenth Amendment, and that allowing such taxation would eliminate the power of the states. One case in which Taft and his court upheld federal regulation was Stafford v. Wallace. Taft ruled for a 7–1 majority that the processing of animals in stockyards was so closely tied to interstate commerce as to bring it within the ambit of Congress's power to regulate.
A case in which the Taft Court struck down regulation that generated a dissent from the chief justice was Adkins v. Children's Hospital. Congress had decreed a minimum wage for women in the District of Columbia. A 5–3 majority of the Supreme Court struck it down. Justice Sutherland wrote for the majority that the recently ratified Nineteenth Amendment, guaranteeing women the vote, meant that the sexes were equal when it came to bargaining power over working conditions; Taft, in dissent, deemed this unrealistic. Taft's dissent in Adkins was rare both because he authored few dissents, and because it was one of the few times he took an expansive view of the police power of the government.
#### Powers of government
In 1922, Taft ruled for a unanimous court in Balzac v. Porto Rico. One of the Insular Cases, Balzac involved a Puerto Rico newspaper publisher who was prosecuted for libel but denied a jury trial, a protection under the Sixth Amendment. Taft held that as Puerto Rico was not a territory designated for statehood, only such constitutional protections as Congress decreed would apply to its residents.
In 1926, Taft wrote for a 6–3 majority in Myers v. United States that Congress could not require the president to get Senate approval before removing an appointee. Taft noted that there is no restriction of the president's power to remove officials in the Constitution. Although Myers involved the removal of a postmaster, Taft in his opinion found invalid the repealed Tenure of Office Act, for violation of which his presidential predecessor, Andrew Johnson, had been impeached, though acquitted by the Senate. Taft valued Myers as his most important opinion.
The following year, the court decided McGrain v. Daugherty. A congressional committee investigating possible complicity of former Attorney General Daugherty in the Teapot Dome scandal subpoenaed records from his brother, Mally, who refused to provide them, alleging Congress had no power to obtain documents from him. Van Devanter ruled for a unanimous court against him, finding that Congress had the authority to conduct investigations as an auxiliary to its legislative function.
#### Individual and civil rights
In 1925, the Taft Court laid the groundwork for the incorporation of many of the guarantees of the Bill of Rights to be applied against the states through the Fourteenth Amendment. In Gitlow v. New York, the Court, by a 6–2 vote with Taft in the majority, upheld Gitlow's conviction on criminal anarchy charges for advocating the overthrow of the government; his defense was freedom of speech. Justice Edward T. Sanford wrote the Court's opinion, and both majority and minority (Holmes, joined by Brandeis) assumed that the First Amendment's Free Speech and Free Press clauses were protected against infringement by the states.
Pierce v. Society of Sisters was a 1925 decision by the Taft Court striking down an Oregon law banning private schools. In a decision written by Justice James C. McReynolds, a unanimous court held that Oregon could regulate private schools, but could not eliminate them. The outcome supported the right of parents to control the education of their children, but also, since the lead plaintiff (the society) ran Catholic schools, struck a blow for religious freedom.
United States v. Lanza was one of a series of cases involving Prohibition. Lanza committed acts allegedly in violation of both state and federal law, and was first convicted in Washington state court, then prosecuted in federal district court. He alleged the second prosecution violated the Double Jeopardy Clause of the Fifth Amendment. Taft, for a unanimous court, allowed the second prosecution, holding that the state and federal governments were dual sovereigns, each empowered to prosecute the conduct in question.
In the 1927 case Lum v. Rice, Taft wrote for a unanimous Court that included liberals Holmes, Brandeis and Stone. The ruling held the exclusion on account of race of a child of Chinese ancestry from a whites-only public school did not violate the Fourteenth Amendment to the United States Constitution. This allowed states to extend segregation in public schools to Chinese students.
### Administration and political influence
Taft exercised the power of his position to influence the decisions of his colleagues, urging unanimity and discouraging dissents. Alpheus Mason, in his article on Chief Justice Taft for the American Bar Association Journal, contrasted Taft's expansive view of the role of the chief justice with the narrow view of presidential power he took while in that office. Taft saw nothing wrong with making his views on possible appointments to the Court known to the White House, and was annoyed to be criticized in the press. He was initially a firm supporter of President Coolidge after Harding's death in 1923, but became disillusioned with Coolidge's appointments to office and to the bench; he had similar misgivings about Coolidge's successor, Herbert Hoover. Taft advised the Republican presidents in office while he was chief justice to avoid "offside" appointments like Brandeis and Holmes. Nevertheless, by 1923, Taft was writing of his liking for Brandeis, whom he deemed a hard worker, and Holmes walked to work with him until age and infirmity required an automobile.
Believing that the Chief Justice should be responsible for the federal courts, Taft felt that he should have an administrative staff to assist him, and the chief justice should be empowered to temporarily reassign judges. He also believed the federal courts had been ill-run. Many of the lower courts had lengthy backlogs, as did the Supreme Court. Immediately on taking office, Taft made it a priority to confer with Attorney General Daugherty as to new legislation, and made his case before congressional hearings, in legal periodicals and in speeches across the country. When Congress convened in December 1921, a bill was introduced for 24 new judges, to empower the Chief Justice to move judges temporarily to eliminate the delays, and to have him chair a body consisting of the senior appellate judge of each circuit. Congress objected to some aspects, requiring Taft to get the agreement of the senior judge of each involved circuit before assigning a judge, but it passed the bill in September 1922, and the Judicial Conference of Senior Circuit Judges held its first meeting that December.
The Supreme Court's docket was congested, swelled by war litigation and laws that allowed a party defeated in the circuit court of appeals to have the case decided by the Supreme Court if a constitutional question was involved. Taft believed an appeal should usually be settled by the circuit court, with only cases of major import decided by the justices. He and other Supreme Court members proposed legislation to make most of the Court's docket discretionary, with a case getting full consideration by the justices only if they granted a writ of certiorari. To Taft's frustration, Congress took three years to consider the matter. Taft and other members of the Court lobbied for the bill in Congress, and the Judges' Bill became law in February 1925. By late the following year, Taft was able to show that the backlog was shrinking.
When Taft became Chief Justice, the Court did not have its own building and met in the Capitol. Its offices were cluttered and overcrowded, but Fuller and White had been opposed to proposals to move the Court to its own building. In 1925, Taft began a fight to get the Court a building, and two years later Congress appropriated money to purchase the land, on the south side of the Capitol. Cass Gilbert had prepared plans for the building, and was hired by the government as architect. Taft had hoped to see the Court move into the new building, but it did not do so until 1935, after Taft's death.
## Declining health and death
Taft is remembered as the heaviest president; he was 5 feet 11 inches (1.80 m) tall and his weight peaked at 335–340 pounds (152–154 kg) toward the end of his presidency, although by 1929 he weighed 244 pounds (111 kg). By the time Taft became chief justice in 1921, his health was starting to decline, and he carefully planned a fitness regimen, walking 3 miles (4.8 km) from his home to the Capitol each day. When he walked back, he would usually go by way of Connecticut Avenue and use a particular crossing over Rock Creek. After his death, the crossing was named the Taft Bridge.
Taft followed a weight loss program and hired the British doctor N. E. Yorke-Davies as a dietary advisor. The two men corresponded regularly for over twenty years, and Taft kept a daily record of his weight, food intake, and physical activity.
At Hoover's inauguration on March 4, 1929, Taft, administering the oath of office, recited part of it incorrectly, later writing, "my memory is not always accurate and one sometimes becomes a little uncertain"; in that letter he misquoted the oath again, differently. His health gradually declined over the near-decade of his chief justiceship. Worried that if he retired his replacement would be chosen by President Herbert Hoover, whom he considered too progressive, he wrote his brother Horace in 1929, "I am older and slower and less acute and more confused. However, as long as things continue as they are, and I am able to answer to my place, I must stay on the court in order to prevent the Bolsheviki from getting control".
Taft insisted on going to Cincinnati to attend the funeral of his brother Charles, who died on December 31, 1929; the strain did not improve his own health. When the court reconvened on January 6, 1930, Taft had not returned to Washington, and two opinions were delivered by Van Devanter that Taft had drafted but had been unable to complete because of his illness. Taft went to Asheville, North Carolina, for a rest, but by the end of January, he could barely speak and was hallucinating. Taft was afraid that Stone would be made chief justice; he did not resign until he had secured assurances from Hoover that Hughes would be chosen. Taft resigned as chief justice on February 3, 1930. Returning to Washington after his resignation, Taft had barely enough physical or emotional strength to sign a reply to a letter of tribute from the eight associate justices. He died at his home in Washington, D.C., on March 8, 1930, at age 72, likely of heart disease, inflammation of the liver, and high blood pressure.
Taft lay in state at the United States Capitol rotunda. On March 11, he became the first president and first member of the Supreme Court to be buried at Arlington National Cemetery. James Earle Fraser sculpted his grave marker out of Stony Creek granite.
## Legacy and historical view
Lurie argued that Taft did not receive the public credit for his policies that he should have. Few trusts had been broken up under Roosevelt (although the lawsuits received much publicity). Taft, more quietly, filed many more cases than Roosevelt had, and rejected his predecessor's contention that there was such a thing as a "good" trust. This lack of flair marred Taft's presidency; according to Lurie, Taft "was boring—honest, likable, but boring". Scott Bomboy for the National Constitution Center wrote that despite being "one of the most interesting, intellectual, and versatile presidents ... a chief justice of the United States, a wrestler at Yale, a reformer, a peace activist, and a baseball fan ... today, Taft is best remembered as the president who was so large that he got stuck in the White House bathtub", a story that is not true. Taft similarly remains known for another physical characteristic—as the last president with facial hair to date.
Mason called Taft's years in the White House "undistinguished". Coletta deemed Taft to have had a solid record of bills passed by Congress, but felt he could have accomplished more with political skill. Anderson noted that Taft's prepresidential federal service was entirely in appointed posts, and that he had never run for an important executive or legislative position, which would have allowed him to develop the skills to manipulate public opinion, as "the presidency is no place for on-the-job training". According to Coletta, "in troubled times in which the people demanded progressive change, he saw the existing order as good."
Inevitably linked with Roosevelt, who made him president and then sought to take the presidency from him, Taft generally falls in the former's shadow. Yet, a portrait of Taft as a victim of betrayal by his best friend is incomplete: as Coletta put it, "Was he a poor politician because he was victimized or because he lacked the foresight and imagination to notice the storm brewing in the political sky until it broke and swamped him?" Adept at using the levers of power in a way his successor could not, Roosevelt generally got what was politically possible out of a situation. Taft was generally slow to act, and when he did, his actions often generated enemies, as in the Ballinger–Pinchot affair. Roosevelt was able to secure positive coverage in the newspapers; Taft was reticent talking to reporters, and, with no comment from the White House, hostile journalists filled the gaps with quotes from Taft opponents. Roosevelt engraved in public memory the image of Taft as a James Buchanan-like figure, with a narrow view of the presidency that made him unwilling to act for the public good. Anderson noted that Roosevelt's Autobiography (which placed this view in enduring form) was published after both men had left the presidency (in 1913), was intended in part to justify Roosevelt's splitting of the Republican Party, and contains not a single positive reference to the man Roosevelt had hand-picked as his successor. While Roosevelt was biased, he was not alone: every major newspaper reporter of that time who left reminiscences of Taft's presidency was critical of him. Taft replied to his predecessor's criticism with his constitutional treatise on the powers of the presidency.
Taft was convinced history would vindicate him. After he left office, he was estimated to be in the middle of U.S. presidents by greatness, and subsequent rankings by historians have largely sustained that verdict. Coletta noted that this places Taft alongside James Madison, John Quincy Adams and McKinley. Lurie catalogued progressive innovations that took place under Taft, and argued that historians have overlooked them because Taft was not an effective political writer or speaker. According to Gould, "the clichés about Taft's weight, his maladroitness in the White House, and his conservatism of thought and doctrine have an element of truth, but they fail to do justice to a shrewd commentator on the political scene, a man of consummate ambition, and a resourceful practitioner of the internal politics of his party." Anderson deemed Taft's success in becoming both president and chief justice "an astounding feat of inside judicial and Republican party politics, played out over years, the likes of which we are not likely to see again in American history".
Taft has been rated among the greatest of the chief justices; later Supreme Court Justice Antonin Scalia noted that this was "not so much on the basis of his opinions, perhaps because many of them ran counter to the ultimate sweep of history". A successor as chief justice, Earl Warren, concurred: "In Taft's case, the symbol, the tag, the label usually attached to him is 'conservative.' It is certainly not of itself a term of opprobrium even when bandied by the critics, but its use is too often confused with 'reactionary.'" Most commentators agree that Taft's most significant contribution as chief justice was his advocacy for reform of the high court, urging and ultimately gaining improvement in the Court's procedures and facilities. Mason cited enactment of the Judges' Bill of 1925 as Taft's major achievement on the Court. According to Anderson, as chief justice, Taft "was as aggressive in the pursuit of his agenda in the judicial realm as Theodore Roosevelt was in the presidential".
The house in Cincinnati in which Taft was born is now the William Howard Taft National Historic Site. Taft was one of the first Gold Medal Honorees of the National Institute of Social Sciences. His son Robert was a significant political figure, becoming Senate Majority Leader and three times a major contender for the Republican nomination for president. A conservative, each time he was defeated by a candidate backed by the more liberal Eastern Establishment wing of the party.
Lurie concluded his account of William Taft's career:
> While the fabled cherry trees in Washington represent a suitable monument for Nellie Taft, there is no memorial to her husband, except perhaps the magnificent home for his Court—one for which he eagerly planned. But he died even before ground was broken for the structure. As he reacted to his overwhelming defeat for reelection in 1912, Taft had written that "I must wait for years if I would be vindicated by the people ... I am content to wait." Perhaps he has waited long enough.
## Media
## See also
- Bibliography of William Howard Taft
- Taft on U.S. postage stamps
# 1995 Pacific hurricane season
The 1995 Pacific hurricane season was the least active Pacific hurricane season since 1979, and marked the beginning of a multi-decade period of low activity in the basin. Of the eleven tropical cyclones that formed during the season, four affected land, with the most notable storm of the season being Hurricane Ismael, which killed at least 116 people in Mexico. The strongest hurricane in the season was Hurricane Juliette, which reached peak winds of 150 mph (240 km/h), but did not significantly affect land. Hurricane Adolph was an early-season Category 4 hurricane. Hurricane Henriette brushed the Baja California Peninsula in early September.
The season officially started on May 15, 1995, in the Eastern Pacific, and on June 1, 1995, in the Central Pacific, and lasted until November 30, 1995. These dates conventionally delimit the period of each year when most tropical cyclones form in the northeastern Pacific Ocean. The season saw eleven tropical cyclones form, of which ten became tropical storms. Seven of these storms attained hurricane status, three of them becoming major hurricanes. There were fewer tropical storms than the average of 16, while the numbers of hurricanes and major hurricanes were slightly below average.
## Season summary
The Accumulated Cyclone Energy (ACE) index for the 1995 Pacific hurricane season totaled 100.2 units. Broadly speaking, ACE combines the power of a tropical or subtropical storm with the length of time it existed, by summing the squares of the storm's maximum sustained winds over its lifetime. It is calculated only for full advisories on specific tropical and subtropical systems with winds reaching or exceeding 39 mph (63 km/h).
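The ACE calculation described above can be sketched in a few lines. This follows the standard convention of summing squared 6-hourly maximum sustained winds in knots, scaled by 10⁻⁴; the wind values in the example are illustrative, not actual 1995 advisory data.

```python
# Minimal sketch of an Accumulated Cyclone Energy (ACE) calculation.
# Convention: sum of squared 6-hourly max sustained winds (knots),
# counting only fixes at or above tropical-storm strength (34 kt / 39 mph),
# scaled by 1e-4.

def ace(wind_knots):
    """Return the ACE contribution of a sequence of 6-hourly wind fixes."""
    return sum(w ** 2 for w in wind_knots if w >= 34) * 1e-4

# A short hypothetical storm: five 6-hour fixes, one below 34 kt.
fixes = [30, 35, 50, 65, 45]
print(round(ace(fixes), 4))  # the 30-kt fix contributes nothing -> 0.9975
```

A full-season total such as 1995's 100.2 units is simply this sum taken over every qualifying advisory of every storm in the season.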
The seasonal activity during 1995 was below normal, and marked the first of several seasons with lower than normal activity. Four tropical cyclones affected Mexico. First, Hurricane Flossie passed within 75 miles (121 km) of the Baja California Peninsula, producing moderate winds and killing seven people. Afterwards, Tropical Storm Gil dropped heavy rainfall in Southern Mexico early in its life, though it caused no damage. Hurricane Henriette later made landfall near Cabo San Lucas with winds of 100 mph (160 km/h), resulting in moderate damage but no deaths. Finally, Ismael struck the state of Sinaloa as a minimal hurricane. Offshore, fishermen were caught off guard by the hurricane, and 57 of them drowned. On land, Ismael destroyed thousands of houses, leaving 30,000 homeless and killing another 59. Both Hurricanes Flossie and Ismael also produced moisture and localized damage in the Southwestern United States.
Activity in the Central Pacific Ocean was below normal as well. For the first time in four years, no tropical storms formed in the basin. Barbara was the only tropical cyclone to exist within the basin, and it had formed in the Eastern Pacific; it entered as a weakening tropical storm and quickly dissipated without affecting land. It was the least active season in the basin since 1979, when the basin was completely quiet and no storms entered it.
## Systems
### Tropical Depression One-E
A westward-moving tropical wave entered the Pacific Ocean in mid-May. Convection within the disturbance became more concentrated and organized on May 19 while the wave was located a short distance south of the Gulf of Tehuantepec. The deep convection concentrated around a low-level circulation with expanding outflow, and the system developed into Tropical Depression One-E on May 21, while located about 400 mi (640 km) south of Manzanillo, Mexico. Initially the depression was forecast to strengthen to reach winds of 55 mph (89 km/h) as it moved westward under the influence of a high-pressure system to its north. Outflow increased as the storm moved through an area of warm waters and a favorable upper-level environment, and two satellite classifications indicated the system was at tropical storm status around nine hours after forming. Despite the favorable environment and satellite classifications of tropical storm status, the depression failed to organize further. The convection and organization continued to decrease, and on May 23 the depression dissipated.
While it was developing, locally moderate to heavy rainfall fell across southern Mexico along the disturbance's northern periphery, with rainfall totals peaking at 5.18 inches (132 mm) at Vallecitos/Petatlan.
### Hurricane Adolph
An area of disturbed weather associated with a tropical wave organized off the southwest coast of Mexico during the middle of June. Banding features developed as a circulation persisted on the northeast side of its deep convection, and the system developed into Tropical Depression Two-E on June 15. Under weak steering currents, the depression moved slowly northward, and with deep convection organizing near its center, the depression intensified to Tropical Storm Adolph on June 16. Located in an area of warm waters, Adolph exhibited a well-defined outflow pattern, and rapidly strengthened to attain hurricane status on June 17 as a banding-type eye developed. Hurricane Adolph turned to the northwest and attained major hurricane status late that same day. The small eye of the hurricane continued to organize, as very deep convection surrounded the eyewall, and Adolph reached its peak intensity of 135 mph (217 km/h) on June 18, making it a Category 4 hurricane on the Saffir-Simpson scale. Shortly thereafter, the storm weakened, as the upper-level environment became more hostile, and the system moved over progressively cooler waters. On June 19, Adolph turned to the west, and degenerated back into a tropical storm later that day. On June 20, the storm weakened to a tropical depression, and on June 21, Adolph began to dissipate as its center became devoid of deep convection.
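The Saffir–Simpson ratings quoted throughout this section (Adolph at Category 4 with 135 mph winds, for example) follow fixed wind thresholds. A minimal sketch of the mapping, using the scale's published 1-minute sustained wind boundaries in mph:

```python
# Sketch of the Saffir-Simpson Hurricane Wind Scale category lookup.
# Thresholds (mph): Cat 1: 74-95, Cat 2: 96-110, Cat 3: 111-129,
# Cat 4: 130-156, Cat 5: 157+. Winds below 74 mph are sub-hurricane.

def saffir_simpson_category(wind_mph):
    """Return hurricane category 1-5, or 0 for sub-hurricane winds."""
    thresholds = [(157, 5), (130, 4), (111, 3), (96, 2), (74, 1)]
    for lower_bound, category in thresholds:
        if wind_mph >= lower_bound:
            return category
    return 0

# Peaks reported in the text: Adolph 135 mph and Juliette 150 mph,
# both Category 4.
print(saffir_simpson_category(135), saffir_simpson_category(150))  # 4 4
```

Categories 3 and above are the "major hurricanes" counted in the season summary.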
As Adolph moved north towards Mexico while about 290 mi (470 km) off the coast, the Mexican government issued a tropical storm warning and a hurricane watch from Punta Tejupan to Cabo Corrientes, Jalisco. When the storm turned to the northwest and later to the west, the government discontinued the warnings as it was determined the storm would not be a threat to land. No damage or casualties were reported.
### Hurricane Barbara
A few days later, on June 24, another weak tropical wave moved off the coast of Africa. It moved steadily westward through the Atlantic Ocean without any development, and entered the eastern Pacific Ocean on July 5. At this point, convection developed along the wave axis, and the system gradually organized. A circulation developed as it passed through an area of warm waters, and the system developed into Tropical Depression Three-E on July 7, while located about 600 miles (970 km) south of Manzanillo, Colima. Although the outer rainbands warmed slightly in the hours after the formation, the convection near the center deepened further with favorable upper-level outflow, and the depression strengthened into Tropical Storm Barbara early on July 8. Barbara steadily intensified, and following the development of a ragged eye that night, Barbara strengthened into a hurricane on July 9, while located about 700 miles (1,100 km) south of the southern tip of Baja California Peninsula.
After moving into an area of light vertical shear and warm water temperatures, Barbara quickly intensified to reach major hurricane status on July 10. The eye continued to become better organized, and Barbara attained winds of 135 mph (217 km/h) later on July 10. After maintaining its intensity for 24 hours, increased wind shear from an upper-tropospheric trough degraded the appearance of the deepest convection, and the eye became obscured from satellite images. After weakening to a 115 mph (185 km/h) hurricane, Barbara maintained its intensity for 30 hours before moving into an area with very warm waters and a favorable upper-level environment. On July 13, the hurricane re-organized, a distinct eye again developed, and Barbara strengthened to reach its peak intensity of 140 mph (230 km/h) later that day. Barbara continued westward under the influence of a subtropical ridge to its north, and began to steadily weaken on July 14 as it moved into an area of cooler water temperatures. The hurricane degraded to a tropical storm on July 16, and a day later it deteriorated to a tropical depression. As a depression with little to no convection near its center, Barbara continued west-northwestward until dissipating on July 18 while located 720 mi (1,160 km) east-southeast of Hilo, Hawaii. Barbara remained away from land for its entire lifetime, and it did not cause any damage or deaths.
### Hurricane Cosme
As Barbara moved away from land, another area of disturbed weather moved off the coast of Central America on July 11. Moving westward, this area slowly organized and developed a low-level circulation. The convection developed into curved rainbands, and based on Dvorak classifications of 35 mph (56 km/h), the National Hurricane Center estimated that the system developed into Tropical Depression Four-E on July 17, while located about 400 mi (640 km) south-southeast of the southern tip of Baja California Peninsula. As the depression was situated in an area with warm waters and moderate upper-level outflow, the system was forecast to slowly intensify to a 50 mph (80 km/h) tropical storm. Initially, the depression followed the forecasts, and it intensified into a tropical storm about 30 hours after developing, receiving the name "Cosme". Cosme was expected to strengthen only slightly due to predicted cooler waters and increased shear.
On July 18, contrary to the predictions, Cosme became much better organized, and well-defined banding features were visible on satellite imagery. The storm continued to steadily intensify, and subsequent to the development of an eye, Cosme strengthened into a hurricane late on July 19, while located 380 miles (610 km) west-southwest of the southern tip of Baja California Peninsula. After maintaining hurricane status for 18 hours, Cosme weakened back to a tropical storm on July 20. Cooler water temperatures deteriorated the convection near the center, and Cosme quickly weakened to a tropical depression on July 21. After turning to the west-southwest, Cosme dissipated on July 22. Cosme never affected land, and as a result caused no damage or fatalities. However, the intensity of the storm remains uncertain; late on July 18, a ship 70 mi (110 km) east of Cosme reported winds of only 17 mph (27 km/h), even though a tropical storm with 50 mph (80 km/h) winds would normally produce tropical storm force winds within at least 70 mi (110 km) of its center.
### Tropical Storm Dalila
A tropical wave moved off the coast of Africa on July 11. It moved westward and quickly developed two areas of convection along the wave axis. One of the areas nearly developed into a tropical depression after moving northwestward, though it failed to organize further and dissipated. The southern area continued westward and ultimately entered the eastern Pacific Ocean on July 21. Thunderstorms along the wave axis became more concentrated a few hundred miles south of the Gulf of Tehuantepec, and the system developed into Tropical Depression Five-E on July 24 while located 500 mi (800 km) southwest of Manzanillo, Mexico.
Located in an area of weak steering currents and easterly wind shear, the tropical depression drifted to the north-northeast while the convection was displaced up to 70 mi (110 km) west of the circulation. Slightly strengthening occurred, and on July 25 the depression intensified into Tropical Storm Dalila. The storm turned to the northwest, and later to the west-northwest, and remained a minimal tropical storm until July 28 when a decrease in wind shear allowed Dalila to strengthen. A strong anticyclone developed to the north of the system, causing Dalila to accelerate to the northwest. Late on July 28, Dalila reached a peak intensity of 65 mph (105 km/h) at a position 570 mi (920 km) southwest of Cabo San Lucas. Tropical Storm Dalila slowly weakened after moving over progressively cooler water temperatures, and on August 1 it degenerated into a tropical depression. Dalila turned to the southwest after much of the convection waned, and the system dissipated on August 2.
### Tropical Storm Erick
On July 17 a tropical wave exited the coast of Africa, and moved westward. An area of convection along the wave organized slightly on July 18, though the next day the convection diminished. After moving through the Windward Islands on July 23, deep convection again increased. The system failed to organize further, though convection continued to develop upon entering the eastern Pacific Ocean on July 27. The cloudiness and thunderstorms became more consolidated off the coast of southern Mexico, and on July 31 Dvorak classifications began on the system. A circulation developed, and the system organized into Tropical Depression Six-E on August 1 while located about 520 mi (840 km) south of the southern tip of Baja California Peninsula.
Initially, the depression was a small system with moderate amounts of easterly wind shear. It organized slowly, and after moving to the southwest for 24 hours it turned to the northwest. Subsequent to an increase in convection over the center, the depression intensified into Tropical Storm Erick on August 4. Erick gradually strengthened as it moved to the west-northwest, and reached peak winds of 65 mph (105 km/h) on August 5 while located about 720 miles (1,160 km) southwest of Cabo San Lucas. Operationally, the storm was forecast to continue to strengthen to reach hurricane status, though this did not occur. The mid-level ridge which had been steering Erick westward weakened, causing Erick to turn to the north over cooler waters. It quickly weakened to a tropical depression on August 6, and after turning to an eastward drift Erick dissipated on August 8 while located 700 mi (1,100 km) west-southwest of the southern tip of Baja California Peninsula. Erick never affected land.
### Hurricane Flossie
A large circulation with an area of low pressure persisted in the tropical eastern Pacific Ocean in early August. The large circulation was well-developed by August 7, and the convection concentrated a few hundred miles southwest of Acapulco. Based on its organization, the National Hurricane Center designated the system Tropical Depression Seven-E. On August 8, the depression intensified into Tropical Storm Flossie, based on ship reports. The storm paralleled the coast of Mexico as it moved northwestward, and after a decrease of wind shear Flossie developed very deep convection over its center. It intensified into a hurricane on August 10, reaching peak winds of 80 mph (130 km/h) as an embedded warm spot appeared in the center of the storm. After maintaining its peak intensity for 18 hours and passing within 75 mi (121 km) of Baja California Peninsula, Flossie weakened over cooler waters and degenerated to a tropical storm on August 12. The storm continued to weaken, and early on August 14 Flossie dissipated.
The government of Mexico issued a tropical storm warning from Punta Tejupan to Cabo Corrientes early in its life, though it was discontinued shortly thereafter. Officials issued a tropical storm watch and later a warning for Baja California Sur south of La Paz, which was later extended from Loreto on the east coast to San Juanico on the west coast. The large circulation of Hurricane Flossie produced gusty winds along the west coast of Mexico and southern Baja California Peninsula. Cabo San Lucas reported a gust of 55 mph (89 km/h), and San José del Cabo recorded a gust of 65 mph (105 km/h). The storm produced heavy rainfall, peaking at 9.72 in (247 mm) at San Felipe/Los Cabos. Seven people died in Mexico from the storm, including two that drowned in Cabo San Lucas. A monsoon surge moving around its eastern periphery produced heavy rainfall in the American Southwest. Flooding from the rainfall killed one person and left eleven motorists stranded. Thunderstorms in Tucson, Arizona, produced hurricane-force wind gusts which caused widespread power outages and damage. Damage from the storm in Arizona totaled \$5 million (1995 USD), although damage in Mexico, if any, is unknown.
### Tropical Storm Gil
An area of disturbed weather, possibly related to a tropical wave, persisted and gradually organized in the Gulf of Tehuantepec. A circulation developed within its deep convection, and the system organized into Tropical Depression Eight-E on August 19 while located about 115 mi (185 km) southeast of Acapulco. Operationally, it was not until 15 hours later that the National Hurricane Center initiated advisories on the system. The depression moved westward and quickly intensified into a tropical storm. A nearby ship confirmed the existence of tropical storm force winds, and Gil reached winds of 50 mph (80 km/h) early on August 21. With well-defined outflow and continually developing convection, forecasters predicted Gil to strengthen more and attain hurricane status within two days of becoming a tropical storm. However, increased northeasterly wind shear initially prevented further strengthening.
On August 22, the cloud pattern of Gil became better organized, though the low-level circulation was located to the northeast of the deep convection due to the wind shear. The shear also limited outflow to the east, preventing further strengthening. Gradually the convection developed nearer to the center. After Gil turned to the northwest, the deep convection organized into a central dense overcast, and it strengthened to reach winds of 60 mph (97 km/h) on August 24. Later that day the storm attained a peak strength of 65 mph (105 km/h) while located 380 mi (610 km) southwest of the southern tip of Baja California Peninsula. After maintaining its peak strength for 30 hours, Gil moved over progressively cooler waters, and weakened to a tropical depression on August 26. The depression drifted westward and later turned to the north, and dissipated on August 27 while located 670 mi (1,080 km) to the west of Cabo San Lucas. While located a short distance off of Mexico, Gil produced heavy rainfall near the coast. However, there were no reports of casualties or damages in association with the storm.
### Hurricane Henriette
A tropical wave moved off the coast of Africa on August 15. It traversed westward and entered the eastern Pacific Ocean on August 29. The system quickly developed deep convection and a low-level circulation, and on September 1 it organized into Tropical Depression Nine-E while located about 170 mi (270 km) off the southwest coast of Mexico. Under favorable conditions, the depression slowly strengthened to become Tropical Storm Henriette on September 2 while located 220 mi (350 km) west of Manzanillo. Henriette quickly organized and intensified into a hurricane on September 3 while located 135 mi (217 km) west-southwest of Puerto Vallarta in Jalisco. Late on September 3, an eye began to form in the center of the deep convection as Henriette turned to the northwest. The eye became better defined the next day, and Henriette attained a peak intensity of 100 mph (160 km/h) as the northern portion of the eyewall moved over southern Baja California Peninsula. The hurricane quickly crossed the southern tip of Baja California Peninsula and re-emerged into the Pacific Ocean. Convection gradually waned as the hurricane moved over progressively colder waters, and on September 6 Henriette weakened to a tropical depression.
On September 2, a few hours after Henriette became a tropical storm, the government of Mexico issued tropical cyclone warnings and watches for Baja California Peninsula. The threat of Hurricane Henriette prompted a Carnival Cruise Line ship to alter its route. Winds of up to 100 mph (160 km/h) in southern Baja California Sur left much of Cabo San Lucas without water or power, and about 2,000 people were directly affected by the hurricane. A strong storm surge produced flooding and heavy road damage in the state; 800 people were forced from their homes, and crop damage was reported. No damage estimates are available, and no deaths were reported.
### Hurricane Ismael
Hurricane Ismael developed from a persistent area of deep convection on September 12, and steadily strengthened as it moved to the north-northwest. Ismael attained hurricane status on September 14 while located 210 mi (340 km) off the coast of Mexico. It continued to the north, and after passing a short distance east of Baja California Peninsula it made landfall at Topolobampo in the state of Sinaloa with winds of 80 mph (130 km/h). Ismael rapidly weakened over land, and dissipated on September 16 over northwestern Mexico. The remnants entered the United States and extended eastward into the mid-Atlantic states.
Offshore, Ismael produced waves of up to 30 ft (9.1 m) in height. Hundreds of fishermen were caught unprepared by the hurricane, which had been expected to move more slowly, and as a result 52 ships were wrecked, killing 57 fishermen. The hurricane destroyed thousands of houses, leaving 30,000 people homeless. On land, Ismael caused 59 casualties in mainland Mexico and resulted in \$26 million in damage (1995 USD). Moisture from the storm extended into the United States, causing heavy rainfall and localized moderate damage in southeastern New Mexico.
### Hurricane Juliette
Hurricane Juliette was the strongest and final hurricane of the season. It formed on September 16 from a tropical wave off the southwest coast of Mexico, and moved west-northwest for the early part of its duration. Juliette was smaller than usual tropical cyclones, and as a result it intensified quickly, reaching hurricane status on September 18 and major hurricane status a day later. On September 20, Juliette reached peak winds of 150 mph (240 km/h), a Category 4 on the Saffir–Simpson Hurricane Scale. It subsequently began a slow weakening trend and turned toward the northeast, briefly threatening the Baja California Peninsula. Instead, strong wind shear overcame the storm, and Juliette dissipated on September 26 without significantly affecting land.
### Other systems
According to the Joint Typhoon Warning Center (JTWC), a tropical depression formed east of the International Date Line on January 4, and three days later it exited the CPHC's area of responsibility. According to the JTWC and the Japan Meteorological Agency, another tropical depression formed east of the International Date Line on November 10, and it soon exited the CPHC's area of responsibility.
## Storm names
The following names were used for named storms that formed in the northeast Pacific in 1995. Names that were not assigned are marked in gray. The names not retired from this list were used again in the 2001 season. This is the same list used for the 1989 season with the exception of Wallis, which switched places with Winnie, the original "W" name on this list. The name Dalila was used for the first time in 1995; in the 1989 season, it was Dalilia, though an error in documents prior to the season changed it. The name change has remained.
For storms that form in the Central Pacific Hurricane Center's area of responsibility, encompassing the area between 140 degrees west and the International Date Line, all names are used in a series of four rotating lists. The next four names that were slated for use in 1995 are shown below; however, none of them were used.
### Retirement
The World Meteorological Organization retired one name in the spring of 1996: Ismael. Originally slated to be replaced by Israel, Ismael was ultimately replaced with Ivo for the 2001 season.
## Season effects
This is a table of all the storms that formed in the 1995 Pacific hurricane season. It includes their duration, names, intensities, areas affected, damages, and death totals. Deaths in parentheses are additional and indirect (an example of an indirect death would be a traffic accident), but were still related to that storm. Damage and deaths include totals while the storm was extratropical, a wave, or a low, and all the damage figures are in 1995 USD.
## See also
- List of Pacific hurricanes
- Pacific hurricane season
- 1995 Atlantic hurricane season
- 1995 Pacific typhoon season
- 1995 North Indian Ocean cyclone season
- South-West Indian Ocean cyclone season: 1994–95, 1995–96
- Australian region cyclone seasons: 1994–95, 1995–96
- South Pacific cyclone seasons: 1994–95, 1995–96
---

# Agrippina (opera)

*1709 opera seria by G. F. Handel*
Agrippina (HWV 6) is an opera seria in three acts by George Frideric Handel with a libretto by Cardinal Vincenzo Grimani. Composed for the 1709–10 Venice Carnevale season, the opera tells the story of Agrippina, the mother of Nero, as she plots the downfall of the Roman Emperor Claudius and the installation of her son as emperor. Grimani's libretto, considered one of the best that Handel set, is an "anti-heroic satirical comedy", full of topical political allusions. Some analysts believe that it reflects Grimani's political and diplomatic rivalry with Pope Clement XI.
Handel composed Agrippina at the end of a three-year sojourn in Italy. It premiered in Venice at the Teatro San Giovanni Grisostomo on 26 December 1709. It proved an immediate success and an unprecedented series of 27 consecutive performances followed. Observers praised the quality of the music—much of which, in keeping with the contemporary custom, had been borrowed and adapted from other works, including the works of other composers. Despite the evident public enthusiasm for the work, Handel did not promote further stagings. There were occasional productions in the years following its premiere but Handel's operas, including Agrippina, fell out of fashion in the mid-18th century.
In the 20th century Agrippina was revived in Germany and premiered in Britain and America. Performances of the work have become ever more common, with innovative stagings at the New York City Opera and the London Coliseum in 2007, and the Metropolitan Opera in 2020. Modern critical opinion is that Agrippina is Handel's first operatic masterpiece, full of freshness and musical invention which have made it one of the most popular operas of the ongoing Handel revival.
## Background
Handel's earliest opera compositions, in the German style, date from his Hamburg years, 1704–06, under the influence of Johann Mattheson. In 1706 he traveled to Italy where he remained for three years, developing his compositional skills. He first settled in Florence where he was introduced to Alessandro and Domenico Scarlatti. His first opera composed in Italy, though still reflecting the influence of Hamburg and Mattheson, was Rodrigo (1707, original title Vincer se stesso è la maggior vittoria), presented in Florence. It was not particularly successful, but was part of Handel's process of learning to compose opera in the Italian style and to set Italian words to music.
Handel then spent time in Rome, where the performance of opera was forbidden by Papal decree, and in Naples. He applied himself to the composition of cantatas and oratorios; at that time there was little difference (apart from increasing length) between cantata, oratorio and opera, all based on the alternation of secco recitative and aria da capo. Works from this period include Dixit Dominus and the dramatic cantata Aci, Galatea e Polifemo, written in Naples. While in Rome, probably through Alessandro Scarlatti, Handel had become acquainted with Cardinal Grimani, a distinguished diplomat who wrote libretti in his spare time, and acted as an unofficial theatrical agent for the Italian royal courts. He was evidently impressed by Handel and asked him to set his new libretto, Agrippina. Grimani intended to present this opera at his family-owned theatre in Venice, the Teatro San Giovanni Grisostomo, as part of the 1709–10 Carnevale season.
## Writing history
### Libretto
Grimani's libretto is based on much the same story used as the subject of Monteverdi's 1642 opera L'incoronazione di Poppea. Grimani's libretto centres on Agrippina, a character who does not appear in Monteverdi's darker version. Grimani avoids the "moralizing" tone of the later opera seria libretti written by such acknowledged masters as Metastasio and Zeno. According to the critic Donald Jay Grout, "irony, deception and intrigue pervade the humorous escapades of its well-defined characters". All the main characters, with the sole exception of Claudius's servant Lesbus, are historical, and the broad outline of the libretto draws heavily upon Tacitus's Annals and Suetonius' Life of Claudius. It has been suggested that the comical, amatory character of the Emperor Claudius is a caricature of Pope Clement XI, to whom Grimani was politically opposed. Certain aspects of this conflict are also reflected in the plot: the rivalry between Nero and Otho mirrors aspects of the debate over the War of the Spanish Succession, in which Grimani supported the Habsburgs while Pope Clement XI supported France and Spain.
### Composition
According to John Mainwaring, Handel's first biographer, Agrippina was composed in the three weeks following Handel's arrival in Venice in November 1709, a theory supported by the autograph manuscript's Venetian paper. In composing the opera Handel borrowed extensively from his earlier oratorios and cantatas, and from other composers including Reinhard Keiser, Arcangelo Corelli and Jean-Baptiste Lully. This practice of adapting and borrowing was common at the time but is carried to greater lengths in Agrippina than in almost all of Handel's other major dramatic works. The overture, which is a French-style two-part work with a "thrilling" allegro, and all but five of the vocal numbers, are based on earlier works, though subject in many cases to significant adaptation and reworking.
Examples of recycled material include Pallas's "Col raggio placido", which is based on Lucifer's aria "O voi dell'Erebo" from La resurrezione (1708), itself adapted from Reinhard Keiser's 1705 opera Octavia. Agrippina's aria "Non ho cor che per amarti" was taken, almost entirely unchanged, from "Se la morte non vorrà" in Handel's earlier dramatic cantata Qual ti reveggio, oh Dio (1707); Narcissus's "Spererò" is an adaptation of "Sai perchè" from another 1707 cantata, Clori, Tirsi e Fileno; and parts of Nero's aria in act 3, "Come nube che fugge dal vento", are borrowed from Handel's oratorio Il trionfo del tempo (all from 1707). Later, some of Agrippina's music was used by Handel in his London operas Rinaldo (1711) and the 1732 version of Acis and Galatea, in each case with little or no change. The first music by Handel presented in London may have been Agrippina's "Non ho cor che per amarti", transposed into Alessandro Scarlatti's opera Pirro e Demetrio, which was performed in London on 6 December 1710. The Agrippina overture and other arias from the opera appeared in pasticcios performed in London between 1710 and 1714, with additional music provided by other composers. Echoes of "Ti vo' giusta" (one of the few arias composed specifically for Agrippina) can be found in the air "He was despised", from Handel's Messiah (1742).
Two of the main male roles, Nero and Narcissus, were written for castrati, the "superstars of their day" in Italian opera. The opera was revised significantly before and possibly during its run. One example is the duet for Otho and Poppaea in act 3, "No, no, ch'io non apprezzo", replaced with two solo arias before the first performance. Another is Poppaea's aria "Ingannata", replaced during the run with another of extreme virtuosity, "Pur punir chi m'ha ingannata", either to emphasise Poppaea's new-found resolution at this juncture of the opera or, as is thought more likely, to flatter Scarabelli by giving her an additional opportunity to show off her vocal abilities.
The instrumentation for Handel's score follows closely that of all his early operas: two recorders, two oboes, two trumpets, three violins, two cellos, viola, timpani, contrabassoon and harpsichord. By the standards of Handel's later London operas this scoring is light, but there are nevertheless what Dean and Knapp describe as "moments of splendour when Handel applies the full concerto grosso treatment". Agrippina, Handel's second Italian opera, was probably his last composition in Italy.
## Roles
## Synopsis
### Act 1
On hearing that her husband, the Emperor Claudius, has died in a storm at sea, Agrippina plots to secure the throne for Nero, her son by a previous marriage. Nero is unenthusiastic about this project, but consents to his mother's wishes ("Con saggio tuo consiglio"). Agrippina obtains the support of her two freedmen, Pallas and Narcissus, who hail Nero as the new Emperor before the Senate.
With the Senate's assent, Agrippina and Nero begin to ascend the throne, but the ceremony is interrupted by the entrance of Claudius's servant Lesbus. He announces that his master is alive ("Allegrezza! Claudio giunge!"), saved from death by Otho, the commander of the army. Otho himself confirms this and reveals that Claudius has promised him the throne as a mark of gratitude. Agrippina is frustrated, until Otho secretly confides to her that he loves the beautiful Poppaea more than he desires the throne. Agrippina, aware that Claudius also loves Poppaea, sees a new opportunity of furthering her ambitions for Nero. She goes to Poppaea and tells her, falsely, that Otho has struck a bargain with Claudius whereby he, Otho, gains the throne but gives Poppaea to Claudius. Agrippina advises Poppaea to turn the tables on Otho by telling the Emperor that Otho has ordered her to refuse Claudius's attentions. This, Agrippina believes, will make Claudius revoke his promise to Otho of the throne.
Poppaea believes Agrippina. When Claudius arrives at Poppaea's house she denounces what she believes is Otho's treachery. Claudius departs in fury, while Agrippina cynically consoles Poppaea by declaring that their friendship will never be broken by deceit ("Non ho cor che per amarti").
### Act 2
Pallas and Narcissus realize that Agrippina has tricked them into supporting Nero and decide to have no more to do with her. Otho arrives, nervous about his forthcoming coronation ("Coronato il crin d'alloro"), followed by Agrippina, Nero and Poppaea, who have come to greet Claudius. All combine in a triumphal chorus ("Di timpani e trombe") as Claudius enters. Each in turn pays tribute to the Emperor, but Otho is coldly rebuffed as Claudius denounces him as a traitor. Otho is devastated and appeals to Agrippina, Poppaea, and Nero for support, but they all reject him, leaving him in bewilderment and despair ("Otton, qual portentoso fulmine" followed by "Voi che udite il mio lamento").
However, Poppaea is touched by her former beloved's grief, and wonders if he might not be guilty ("Bella pur nel mio diletto"). She devises a plan and when Otho approaches her, she pretends to talk in her sleep recounting what Agrippina has told her earlier. Otho, as she intended, overhears her and fiercely protests his innocence. He convinces Poppaea that Agrippina has deceived her. Poppaea swears revenge ("Ingannata una sol volta", alternate aria "Pur punir chi m'ha ingannata") but is distracted when Nero comes forward and declares his love for her. Meanwhile, Agrippina, having lost the support of Pallas and Narcissus, manages to convince Claudius that Otho is still plotting to take the throne. She advises Claudius that he should end Otho's ambitions once and for all by abdicating in favour of Nero. Claudius agrees, believing that this will enable him to win Poppaea.
### Act 3
Poppaea now plans some deceit of her own, in an effort to divert Claudius's wrath from Otho with whom she has now reconciled. She hides Otho in her bedroom with instructions to listen carefully. Soon Nero arrives to press his love on her ("Coll'ardor del tuo bel core"), but she tricks him into hiding as well. Then Claudius enters; Poppaea tells him that he had earlier misunderstood her: it was not Otho but Nero who had ordered her to reject Claudius. To prove her point she asks Claudius to pretend to leave, then she summons Nero who, thinking Claudius has gone, resumes his passionate wooing of Poppaea. Claudius suddenly reappears and angrily dismisses the crestfallen Nero. After Claudius departs, Poppaea brings Otho out of hiding and the two express their everlasting love in separate arias.
At the palace, Nero tells Agrippina of his troubles and decides to renounce love for political ambition ("Come nube che fugge dal vento"). But Pallas and Narcissus have by now revealed Agrippina's original plot to Claudius, so that when Agrippina urges the Emperor to yield the throne to Nero, he accuses her of treachery. She then claims that her efforts to secure the throne for Nero had all along been a ruse to safeguard the throne for Claudius ("Se vuoi pace"). Claudius believes her; nevertheless, when Poppaea, Otho, and Nero arrive, Claudius announces that Nero and Poppaea will marry, and that Otho shall have the throne. No one is satisfied with this arrangement, as their desires have all changed, so Claudius in a spirit of reconciliation reverses his judgement, giving Poppaea to Otho and the throne to Nero. He then summons the goddess Juno, who descends to pronounce a general blessing ("V'accendano le tede i raggi delle stelle").
## Performance history
### Premiere
The date of Agrippina's first performance, about which there was at one time some uncertainty, has been confirmed by a manuscript newsletter as 26 December 1709. The cast consisted of some of Northern Italy's leading singers of the day, including Antonio Carli in the lead bass role; Margherita Durastanti, who had recently sung the role of Mary Magdalene in Handel's La resurrezione; and Diamante Scarabelli, whose great success at Bologna in the 1697 pasticcio Perseo inspired the publication of a volume of eulogistic verse entitled La miniera del Diamante.
Agrippina proved extremely popular and established Handel's international reputation. Its original run of 27 performances was extraordinary for that time. Handel's biographer John Mainwaring wrote of the first performance: "The theatre at almost every pause resounded with shouts of Viva il caro Sassone! ('Long live the beloved Saxon!') They were thunderstruck with the grandeur and sublimity of his style, for they had never known till then all the powers of harmony and modulation so closely arranged and forcibly combined." Many others recorded overwhelmingly positive responses to the work.
### Later performances
Between 1713 and 1724 there were productions of Agrippina in Naples, Hamburg, and Vienna, although Handel himself never revived the opera after its initial run. The Naples production included additional music by Francesco Mancini. In the late 18th century and throughout the 19th, Handel's operas fell into obscurity, and none were staged between 1754 and 1920. However, when interest in Handel's operas awakened in the 20th century, Agrippina received several revivals, beginning with a 1943 production at Handel's birthplace, Halle, under conductor Richard Kraus at the Halle Opera House. In this performance the alto role of Otho, composed for a woman, was changed into a bass accompanied by English horns, "with calamitous effects on the delicate balance and texture of the score", according to Winton Dean. The Radio Audizioni Italiane produced a live radio broadcast of the opera on 25 October 1953, the opera's first presentation other than on stage. The cast included Magda László in the title role and Mario Petri as Claudius, and the performance was conducted by Antonio Pedrotti.
A 1958 performance in Leipzig, and several more stagings in Germany, preceded the British première of the opera at Abingdon, Oxfordshire, in 1963. This was followed in 1982 by the first fully professional production in England. It was performed by Kent Opera with the conductor, Ivan Fischer, making his debut with the company and the orchestra playing on baroque instruments. Felicity Palmer took the title role. In 1983 the opera returned to Venice, for a performance under Christopher Hogwood at the Teatro Malibran. In the United States a concert performance had been given on 16 February 1972 at the Academy of Music in Philadelphia, but the opera's first fully staged American performance was in Fort Worth, Texas, in 1985. That same year it reached New York, with a concert performance at Alice Tully Hall, where the opera was described as a "genuine rarity". The Fort Worth performance was quickly followed by further American stagings in Iowa City and Boston. The historically informed performance movement inspired two period instrument productions of Agrippina, in 1985 and 1991 respectively. Both were in Germany: the first at the Schlosstheater Schwetzingen, the other at the Göttingen International Handel Festival.
### 21st century revivals
There have been numerous productions in the 21st century. There was a fully staged performance at the Glimmerglass Opera in Cooperstown, New York in 2001, conducted by Harry Bicket and directed by Lillian Groag. This production then moved in 2002 to the New York City Opera, where it was revived in 2007; it was described by The New York Times critic as "odd ... presented as broad satire, a Springtime for Hitler version of I, Claudius", although the musical performances were generally praised. In Britain, the English National Opera (ENO) staged an English-language version in February 2007, directed by David McVicar, which received a broadly favourable critical response, although critic Fiona Maddocks identified features of the production that diminished the work: "Music so witty, inventive and humane requires no extra gilding". Some of the later revivals used countertenors in the roles written for castrati. Joyce DiDonato has performed the title role in productions in London at The Royal Opera in 2019 and at the Metropolitan Opera New York in 2020, among other venues.
## Music
Agrippina is considered Handel's first operatic masterpiece; according to Winton Dean it has few rivals for its "sheer freshness of musical invention". Grimani's libretto has also been praised: The New Penguin Opera Guide describes it as one of the best Handel ever set, and praises the "light touch" with which the characters are vividly portrayed. Agrippina as a whole is, in the view of the scholar John E. Sawyer, "among the most convincing of all the composer's dramatic works".
### Style
Stylistically, Agrippina follows the standard pattern of the era by alternating recitative and da capo arias. In accordance with 18th-century opera convention the plot is mainly carried forward in the recitatives, while the musical interest and exploration of character take place in the arias—although on occasion Handel breaks this mould by using arias to advance the action. With one exception the recitative sections are secco ("dry"), where a simple vocal line is accompanied by continuo only. The anomaly is Otho's "Otton, qual portentoso fulmine", where he finds himself robbed of the throne and deserted by his beloved Poppaea; here the recitative is accompanied by the orchestra, as a means of highlighting the drama. Dean and Knapp describe this, and Otho's aria which follows, as "the peak of the opera". The 19th-century musical theorist Ebenezer Prout singles out Agrippina's "Non ho cor che per amarti" for special praise. He points out the range of instruments used for special effects, and writes that "an examination of the score of this air would probably astonish some who think Handel's orchestration is wanting in variety".
Handel made more use than was then usual of orchestral accompaniment in arias, but in other respects Agrippina is broadly typical of an older operatic tradition. For the most part the arias are brief, there are only two short ensembles, and in the quartet and the trio the voices are not heard together. However, Handel's style would change very little in the next 30 years, a point reflected in the reviews of the Tully Hall performance of Agrippina in 1985, which refer to a "string of melodious arias and ensembles, any of which could be mistaken for the work of his mature London years".
### Character
Of the main characters, only Otho is not morally contemptible. Agrippina is an unscrupulous schemer; Nero, while not yet the monster he would become, is pampered and hypocritical; Claudius is pompous, complacent, and something of a buffoon, while Poppaea, the first of Handel's sex kittens, is also a liar and a flirt. The freedmen Pallas and Narcissus are self-serving and salacious. All, however, have some redeeming features, and all have arias that express genuine emotion. The situations in which they find themselves are sometimes comic, but never farcical—like Mozart in the Da Ponte operas, Handel avoids laughing at his characters.
In Agrippina the da capo aria is the musical form used to illustrate character in the context of the opera. The first four arias of the work exemplify this: Nero's "Con raggio", in a minor key and with a descending figure on the key phrase "il trono ascenderò" ("I will ascend the throne") characterises him as weak and irresolute. Pallas's first aria "La mia sorte fortunata", with its "wide-leaping melodic phrasing" introduces him as a bold, heroic figure, contrasting with his rival Narcissus whose introspective nature is displayed in his delicate aria "Volo pronto" which immediately follows. Agrippina's introductory aria "L'alma mia" has a mock-military form which reflects her outward power, while subtle musical phrasing establishes her real emotional state. Poppaea's arias are uniformly light and rhythmic, while Claudius's short love song "Vieni O cara" gives a glimpse of his inner feelings, and is considered one of the gems of the score.
### Irony
Grimani's libretto is full of irony, which Handel reflects in the music. His settings sometimes illustrate both the surface meaning, as characters attempt to deceive each other, and the hidden truth. For instance, in her aria in act 1, "Non ho cor che per amarti", Agrippina promises Poppaea that deceit will never mar their new friendship, while tricking her into ruining Otho's chances for the throne. Handel's music illuminates her deceit in the melody and minor modal key, while a simple, emphasised rhythmic accompaniment hints at clarity and openness. In act 3, Nero's announcement that his passion is ended and that he will no longer be bound by it (in "Come nube che fugge dal vento") is set to bitter-sweet music which suggests that he is deceiving himself. In Otho's "Coronato il crin" the agitated nature of the music is the opposite of what the "euphoric" tone of the libretto suggests. Contrasts between the force of the libretto and the emotional colour of the actual music would develop into a constant feature of Handel's later London operas.
## List of arias and musical numbers
The index of Chrysander's edition (see below) lists the following numbers, excluding the secco recitatives. Variants from the libretto are also noted.
Act 1
Act 2
Act 3
## Recordings
## Editions
Handel's autograph score survives, with the Sinfonia and first recitatives missing, but it shows significant differences from the libretto, due to changes made for the first performances. Handel's performing score is lost. Three early manuscript copies, probably dating from 1710, are held in Vienna; one of these may have been a gift from Grimani to the future Emperor Charles VI. These copies, presumably based on the lost performing score, show further changes from the autograph. A manuscript from the 1740s known as the "Flower score" is described by Dean as "a miscellany in haphazard order".
In about 1795 the British composer Samuel Arnold produced an edition based on early copies; this edition, while it contains errors and inaccuracies, has been called "probably a reasonable reflection of early performances". The Chrysander edition of 1874 has a tendency to "sweep Arnold aside when he is right and follow him when he is wrong". Musicologist Anthony Hicks calls it "an unfortunate attempt to reconcile the autograph text with Arnold and the wordbook, the result being a composite version of no authority".
In 1950 Bärenreiter published Hellmuth Christian Wolff's edition, prepared for the 1943 Halle revival and reflecting the casting of basses for Otho and Narcissus, even when they sing what would otherwise be the alto part in the last chorus. It presents a German adaptation of the recitatives and written-out embellishments for the da capo arias, as well as numerous cuts. The B-flat fugue G 37 appears as an act 2 overture along with other instrumental music.
An edition by John E. Sawyer appeared in 2013 as series II vol. 3 of the Hallische Händelausgabe. It is based on the 1709 version, with ballet music borrowed from Rodrigo, and contains two appendices with added and reconstructed music as well as deleted versions from the autograph.
|
1,220,133 |
James Newland
| 1,169,081,770 |
Australian Army officer and Victoria Cross recipient
|
[
"1881 births",
"1949 deaths",
"Australian Army officers",
"Australian Army personnel of World War II",
"Australian World War I recipients of the Victoria Cross",
"Australian military personnel of World War I",
"Australian military personnel of the Second Boer War",
"Australian police officers",
"Burials at Brighton General Cemetery",
"Military personnel from Victoria (state)",
"Recipients of the Meritorious Service Medal (United Kingdom)"
] |
James Ernest Newland, VC (22 August 1881 – 19 March 1949) was an Australian soldier, policeman and a recipient of the Victoria Cross, the highest decoration for gallantry "in the face of the enemy" that can be awarded to members of the British and Commonwealth armed forces. Newland was awarded the Victoria Cross following three separate actions in April 1917, during attacks against German forces retreating to the Hindenburg Line. While in command of a company, Newland successfully led his men in several assaults on German positions and repulsed subsequent counter-attacks.
Born in the Victorian town of Highton, Newland joined the Australian military in 1899 and saw active service during the Second Boer War. He continued to serve in the Australian Army's permanent forces on his return to Australia, and completed several years' service in the artillery. Transferring to the militia in 1907, Newland became a police officer in Tasmania before re-joining the permanent forces in 1910. After the outbreak of the First World War, he was appointed to the Australian Imperial Force and was among the first wave of men to land at Gallipoli. In the following days, Newland was wounded and evacuated to Egypt where he was commissioned as a second lieutenant.
Transferring to the Western Front in 1916, Newland was mentioned in despatches for his leadership while commanding a company during an attack at Mouquet Farm. He was wounded twice more during the war and medically discharged in March 1918; he returned to service with the permanent army. Newland held several appointments between the two world wars, and retired a lieutenant colonel in 1941. He died of heart failure in 1949.
## Early life
Newland was born in the Geelong suburb of Highton, Victoria, on 22 August 1881 to William Newland, a labourer, and his wife Louisa Jane (née Wall). In 1899, he enlisted in the Commonwealth Military Forces and was assigned to the 4th Battalion, Australian Commonwealth Horse, as a private. The unit later embarked for South Africa, where Newland saw active service in Cape Town during the Second Boer War.
Returning to Australia in 1902, Newland re-settled in Victoria and joined the Royal Australian Artillery in July the following year. He served in the artillery for over four years, before transferring to the militia in September 1907. In 1909, he became a police officer in the Tasmanian Police Force, where he remained until August 1910, when he re-enlisted in the permanent army. He was posted to the Australian Instructional Corps; he served with this unit until the outbreak of the First World War. In a ceremony at Sheffield, Tasmania, on 27 December 1913, Newland married Florence May Mitchell.
## First World War
On 17 August 1914, Newland transferred to the newly raised Australian Imperial Force following the British Empire's declaration of war on Germany and her allies. Assigned to the 12th Battalion, he was made its regimental quartermaster sergeant and embarked from Hobart aboard HMAT Geelong on 20 October, bound for Egypt. After a brief stop in Western Australia, the troopship arrived at its destination seven weeks later. The 12th Battalion spent the next four months training in the Egyptian desert.
At the commencement of the Gallipoli Campaign, the 3rd Australian Brigade—of which the 12th Battalion was part—was designated as the covering force for the ANZAC landing, and as such was the first unit ashore on 25 April 1915, at approximately 04:30. Newland was wounded in the days following the landing, suffering a gunshot wound to his arm, and was evacuated to the 1st General Hospital. While at the hospital, he was commissioned as a second lieutenant on 22 May, before returning to the 12th Battalion four days later.
Newland was engaged in operations on the Gallipoli Peninsula until 9 June, when he was withdrawn from the area and placed in command of the 12th Battalion's transport elements stationed in Egypt. Promoted to lieutenant on 15 October, he was hospitalised for ten days in November due to dengue fever. Following the Allied evacuation of Gallipoli in December, the 12th Battalion returned to Egypt where Newland continued as transport officer. Promoted to captain on 1 March 1916, he was made adjutant of the 12th Battalion fifteen days later. It embarked for France and the Western Front later that month.
Disembarking at Marseilles, the 12th Battalion was initially posted to the Fleurbaix sector of France. After involvement in minor operations, it transferred to the Somme in July, where it participated in the Battle of Pozières, its first major French action. Newland was posted to command A Company from 8 August, and was subsequently moved to Sausage Valley along with the rest of the 12th Battalion in preparation for an attack on Mouquet Farm.
Mouquet Farm was a ruined complex connected to several German strongpoints, and formed part of the Thiepval defences. On 21 August, Newland led his company in an assault on a series of trenches slightly north east of the farm. By 18:30, the company had captured its objectives and several of Newland's men rushed off in pursuit of the retreating Germans. Newland immediately stopped them and organised the company into a defensive position; the trench was consolidated by 05:00 the next morning. Praised for his "... great coolness and courage under heavy fire" during the attack, he was recommended for the Military Cross. The award was downgraded to a mention in despatches, the announcement of which was published in a supplement to the London Gazette on 4 January 1917.
After its involvement at Pozières and Mouquet Farm, the 12th Battalion was briefly transferred to the Ypres sector in Belgium in September, before returning to Bernafay Wood on the Somme late the following month. Newland was admitted to the 38th Casualty Clearing Station with pyrexia on 4 December. He was moved to the 2nd General Hospital at Le Havre, and returned to the 12th Battalion two weeks later following recuperation. On the same day, he was attached to the headquarters of the 2nd Australian Brigade for duty as a staff officer. He was granted leave on 21 January 1917 on completion of this stint.
Re-joining the 12th Battalion, Newland once again assumed command of A Company. On 26 February 1917, he was tasked with leading it during the 12th Battalion's attack on the village of La Barque during the German retreat to the Hindenburg Line. At Bark Trench, a position on the north side of the centre of La Barque, the company encountered a German strongpoint and Newland received a gunshot wound to the face. He was admitted to the 1st Australian Field Ambulance, and returned to the 12th Battalion on 25 March after a period of hospitalisation at the 7th Stationary Hospital in Boulogne.
### Victoria Cross
By early April 1917, there remained three German-held outpost villages—Boursies, Demicourt and Hermies—between the area to the south of the I Anzac Corps position and the Hindenburg Line. An attack by the 1st Australian Division to capture them was planned for 9 April, the same day the British offensive opened at Arras. For his actions on three separate occasions during the assault, Newland was awarded the Victoria Cross.
On the night of 7/8 April, the 12th Battalion was tasked with the capture of Boursies, on the Bapaume–Cambrai road. The attack was a feint to mislead the German forces on the direction from which Hermies was to be assaulted. Leading A Company as well as an attached platoon from B Company, Newland began his advance on the village at 03:00. The company was soon subject to heavy rifle and machine gun fire from a derelict mill approximately 400 metres (440 yd) short of the village, and began to suffer heavy casualties. Rallying his men, Newland charged the position and bombed the Germans with grenades. The attack dislodged the Germans, and the company secured the area and continued its advance.
Throughout 8 April, the Australians were subjected to heavy shellfire from German forces. At approximately 22:00, the Germans launched a fierce counter-attack under the cover of a barrage of bombs and trench mortars against A Company's position at the mill. They had some initial success and entered the forward posts of the mill, which were occupied by a platoon of Newland's men under the command of Sergeant John Whittle. Newland, bringing up a platoon from the battalion's reserve company, charged the attackers and re-established the lost ground with Whittle's assistance. The 12th Battalion was relieved by the 11th Battalion on 10 April, having succeeded in capturing Boursies at the cost of 240 casualties, of which 70 were killed or missing.
After a four-day reprieve from the frontline, the 12th Battalion relieved the 9th Battalion at Lagnicourt on 14 April. Around dawn the next day, the Germans launched a severe counter-attack against the 1st Australian Division's line. Breaking through, they forced back the 12th Battalion's D Company, which was to the right of Newland's A Company. Soon surrounded and under attack on three sides, Newland withdrew the company to a sunken road which had been held by Captain Percy Cherry during the capture of the village three weeks earlier, and lined the depleted company out in a defensive position on each bank.
The German forces attacked Newland's company several times during the battle, but were repulsed each time. During one of the assaults, Newland observed that the German attack was weakening and gathered a party of twenty men. Leading the group, he charged the Germans and seized forty as prisoners. As reinforcements from the 9th Battalion began to arrive, a combined counter-attack was launched and the line recaptured by approximately 11:00. During the engagement, the 12th Battalion suffered 125 casualties, including 66 killed or missing. Newland and Whittle were both awarded the Victoria Cross for their actions at Boursies and Lagnicourt; they were the only two permanent members of the Australian military to receive the decoration during the war. At 35 years and 7 months old, Newland was also the oldest Australian Victoria Cross recipient of the First World War.
The full citation for Newland's Victoria Cross appeared in a supplement to the London Gazette on 8 June 1917:
> War Office, 8th June, 1917.
>
> His Majesty the KING has been graciously pleased to approve of the award of the Victoria Cross to the undermentioned Officers, Non-commissioned Officers and Men: —
>
> Capt. James Ernest Newlands, [sic] Inf. Bn., Aus. Imp. Force.
>
> For most conspicuous bravery and devotion to duty, in the face of heavy odds, on three separate occasions.
>
> On the first occasion he organised the attack by his company on a most important objective, and led personally, under heavy fire, a bombing attack. He then rallied his company, which had suffered heavy casualties, and he was one of the first to reach the objective.
>
> On the following night his company, holding the captured position, was heavily counter-attacked. By personal exertion, utter disregard of fire, and judicious use of reserves, he succeeded in dispersing the enemy and regaining the position.
>
> On a subsequent occasion, when the company on his left was overpowered and his own company attacked from the rear, he drove off a combined attack which had developed from these directions.
>
> These attacks were renewed three or four times, and it was Capt. Newland's tenacity and disregard for his own safety that encouraged the men to hold out.
>
> The stand made by this officer was of the greatest importance, and produced far-reaching results.
### Later war service
In early May 1917, the 12th Battalion was involved in the British and Australian attempt to capture the village of Bullecourt. While engaged in this operation on 6 May, Newland was wounded for the third and final time of the war by a gunshot to his left armpit. Initially admitted to the 5th Field Ambulance, he was transferred to No 1 Red Cross Hospital, Le Touquet, the next day. The injury necessitated treatment in England, and Newland was shipped to a British hospital eight days later.
On recovering from his wounds, Newland attended an investiture ceremony at Buckingham Palace on 21 July, where he was decorated with his Victoria Cross by King George V. Later the same day, Newland boarded a ship to Australia. It arrived in Melbourne on 18 September, and Newland travelled to Tasmania. He was discharged from the Australian Imperial Force as medically unfit on 2 March 1918.
## Later life
Following his discharge, Newland retained the rank of captain and returned to service with the permanent military forces. Between the two world wars, he held several appointments in the army, including adjutant and quartermaster of the 8th, 49th, 52nd, 38th and 12th Battalions, as well as area officer and recruiting officer. In 1924, Newland's wife Florence died of tuberculosis. On 30 April 1925, he married Heather Vivienne Broughton in a ceremony at St Paul's Anglican Church, Bendigo; the couple would later have a daughter. Promoted to major on 1 May 1930, Newland was awarded the Meritorious Service Medal in 1935.
With the outbreak of the Second World War, Newland was seconded for duties as quartermaster instructor at the 4th Division headquarters. On 10 May 1940, he assumed his final army appointment as quartermaster, A Branch, at Army Headquarters in Melbourne. He served in this position until August 1941, when he was placed on the retired list with the honorary rank of lieutenant colonel.
In retirement, Newland served as Assistant Commissioner of the Australian Red Cross Society in the Northern Territory during the later months of 1941. He joined the inspection staff at the Ammunition Factory in Footscray on 2 January 1942. At his home in Caulfield, Victoria, on 19 March 1949, he died suddenly of heart failure at the age of 67. He was accorded a funeral with full military honours, and was buried at Brighton Cemetery. In 1984, Newland's daughter, Dawn, donated her father's medals to the Australian War Memorial in Canberra, where they currently reside.
|
501,402 |
Shannon Lucid
| 1,165,499,272 |
American biochemist and astronaut (born 1943)
|
[
"1943 births",
"American astronauts",
"Living people",
"Mir crew members",
"People from Bethany, Oklahoma",
"People from Oklahoma City",
"Recipients of the Congressional Space Medal of Honor",
"Recipients of the NASA Distinguished Service Medal",
"Recipients of the NASA Exceptional Service Medal",
"Scientists from Shanghai",
"Space Shuttle program astronauts",
"United States Astronaut Hall of Fame inductees",
"University of Oklahoma alumni",
"Women astronauts"
] |
Shannon Wells Lucid (born January 14, 1943) is an American biochemist and retired NASA astronaut. She has flown in space five times, including a prolonged mission aboard the Russian space station Mir in 1996, and is the only American woman to have stayed on Mir. From 1996 to 2007, Lucid held the record for the longest duration spent in space by an American and by a woman. She was awarded the Congressional Space Medal of Honor in December 1996, making her the tenth person and the first woman to be accorded the honor.
Lucid is a graduate of the University of Oklahoma, where she earned a bachelor's degree in chemistry in 1963, a master's degree in biochemistry in 1970, and a PhD in biochemistry in 1973. She was a laboratory technician at the Oklahoma Medical Research Foundation from 1964 to 1966, a research chemist at Kerr-McGee from 1966 to 1968, and a research associate at the Oklahoma Medical Research Foundation from 1973 to 1978.
In 1978, Lucid was recruited by NASA for astronaut training with NASA Astronaut Group 8, the first class of astronauts to include women. She flew in space five times: on STS-51-G, STS-34, STS-43, STS-58, and her mission to Mir, for which Lucid traveled to the space station with STS-76 and returned six months later with STS-79. She was the NASA Chief Scientist from 2002 to 2003 and a capsule communicator (CAPCOM) at Mission Control for numerous Space Shuttle missions, including STS-135, the final mission of the Space Shuttle program. Lucid announced her retirement from NASA in 2012.
## Early life
Shannon Wells was born in Shanghai, Republic of China, on January 14, 1943, the daughter of Joseph Oscar Wells, a Baptist missionary, and his wife Myrtle, a missionary nurse. Because the United States was at war with Japan, the family was detained when Shannon was six weeks old by the Japanese, who occupied Shanghai at the time. The three of them were imprisoned in an internment camp but were released during a prisoner exchange later that year. They returned to the United States aboard a Swedish ocean liner and stayed in the US until the end of the war.
After the war ended, the family returned to China but decided to leave again after the Chinese Communist Revolution in 1949. They moved to Lubbock, Texas, and then settled in Bethany, Oklahoma, the family's original hometown, where Wells graduated from Bethany High School in 1960. She was fascinated by stories of the American frontier and wanted to become an explorer. She concluded that she had been born too late for this, but discovered the works of Robert Goddard, the American rocket scientist, and decided that she could become a space explorer. Wells sold her bicycle to buy a telescope so she could look at the stars, and began building her own rockets. Shortly after graduating from high school, Wells earned her private pilot's license with instrument and multi-engine ratings and bought a preowned Piper PA-16 Clipper that she used to fly her father to revival meetings. She applied for jobs as a commercial pilot, but was rejected, as women were not yet accepted for training as commercial pilots in the United States.
Wells attended Wheaton College in Illinois, where she majored in chemistry. She then transferred to the University of Oklahoma, where she earned her bachelor's degree in chemistry in 1963. She was a teaching assistant in the University of Oklahoma's Department of Chemistry from 1963 to 1964 and a senior laboratory technician at the Oklahoma Medical Research Foundation in Oklahoma City from 1964 to 1966. She then became a research chemist at Kerr-McGee, an oil company there. At Kerr-McGee she met Michael F. Lucid, a fellow research chemist. They married in 1967 and Wells took the name Shannon Wells Lucid. Their first child, Kawai Dawn, was born in 1968.
Afterward, Lucid left Kerr-McGee and returned to the University of Oklahoma as a graduate assistant in the Department of Biochemistry and Molecular Biology, where she pursued a master's degree in biochemistry. She sat for her final examinations two days after the birth of her second daughter, Shandara Michelle, in 1970. She went on to earn her PhD in biochemistry in 1973, writing her thesis on the Effect of Cholera Toxin on Phosphorylation and Kinase Activity of Intestinal Epithelial Cells and Their Brush Borders under the supervision of A. Chadwick Cox. She then returned to the Oklahoma Medical Research Foundation as a research associate. A third child, Michael Kermit, was born in 1975.
## NASA career
### Selection and training
On July 8, 1976, the National Aeronautics and Space Administration (NASA) issued a call for applications for at least 15 pilot candidates and 15 mission specialist candidates. For the first time, new selections would be considered astronaut candidates rather than fully-fledged astronauts until they finished training and evaluation, which was expected to take two years. The enactment of the Equal Employment Opportunity Act of 1972 reinforced the promise of the Civil Rights Act of 1964 to address the persistent and entrenched employment discrimination against women, African Americans and minority groups in American society. While they had never been explicitly precluded from becoming NASA astronauts, none had ever been selected either. This time, minorities and women were encouraged to apply. Lucid's application was among the first of the 8,079 that NASA received.
As one of 208 finalists, Lucid was invited to come to the Johnson Space Center (JSC) in Houston, Texas, for a week of interviews, evaluations and examinations, commencing on August 29, 1977. She was part of the third group of twenty applicants to be interviewed, and the first one that included women. The eight women in the group included Rhea Seddon, Anna Sims, Nitza Cintron and Millie Hughes-Wiley. On January 16, 1978, NASA announced the names of the 35 successful candidates, of whom 20 were mission specialist candidates. Of the six women in this first class with female astronauts, Lucid was the only one who was a mother at the time of being selected. George Abbey, the Director of Flight Crew Operations at JSC and the chairman of the selection panel, later stated that this was not taken into consideration during the selection process.
Group 8's name for itself was "TFNG". The abbreviation was deliberately ambiguous; for public purposes, it stood for "Thirty-Five New Guys", but within the group itself, it was known to stand for the military phrase, "the fucking new guy", used to denote newcomers to a military unit. Much of the first eight months of their training was in the classroom. Because there were so many of them, the TFNGs did not fit easily into the existing classrooms, so for classroom instruction they were split into two groups, red and blue, led by Rick Hauck and John Fabian respectively. Classroom training was given on a wide variety of subjects, including an introduction to the Space Shuttle program, space flight engineering, astronomy, orbital mechanics, ascent and entry aerodynamics and space flight physiology. Those accustomed to military and academic environments were surprised that subjects were taught, but not tested. Training in geology, a feature of the training of earlier classes, was continued, but the locations visited changed because the focus was now on observations of the Earth rather than the Moon.
Astronaut candidates had to complete survival training, be able to swim and scuba dive, and master the basics of aviation safety, as well as the specifics of the spacecraft they would have to fly. Water survival training was conducted with the 3613th Combat Crew Training Squadron at Homestead Air Force Base in Florida and parasail training at Vance Air Force Base in Oklahoma. On August 31, 1979, NASA announced that the 35 astronaut candidates had completed their training and evaluation, and were now officially astronauts, qualified for selection on space flight crews. Their training, which had been expected to last eighteen to twenty-four months, had been completed in fourteen. That of subsequent classes was shortened to twelve months.
Each of the new astronauts specialized in certain aspects of the Space Shuttle program, providing astronaut support and input. Lucid was involved with Spacelab 1 crew training, and the development of the Shuttle Avionics Integration Laboratory (SAIL) at JSC and Rockwell International's Flight Systems Laboratory (FSL) in Downey, California. She also worked on the Hubble Space Telescope and rendezvous proximity operations. She was at Edwards Air Force Base as a member of the exchange crew for the landing of the STS-5 mission in November 1982. The exchange crew took over from the flight crew after they had landed, and handled the post-flight activities. She was an astronaut support person (ASP) at the Kennedy Space Center (KSC) for the STS-8 mission in August 1983. Also known as a "Cape Crusader", an ASP was an astronaut who supported vehicle and payload testing at KSC, and strapped the flight crew into their seats before takeoff. For the STS-41-B mission in February 1984 she was the backup ASP and once again a member of the exchange crew.
### STS-51-G
On November 17, 1983, Lucid was assigned to her first flight, the STS-51-A mission. Tentatively scheduled for October 24, 1984, the mission would be commanded by Daniel Brandenstein, with pilot John O. Creighton and Lucid, Fabian and Steven R. Nagel as mission specialists. She would be the last of the six women in the TFNG group to fly. Due to slippages, the crew was reassigned to the STS-51-D mission in August 1984. This mission had a different payload, and it was scheduled to be launched on March 18, 1985. The mission was scrubbed just three weeks before the launch date. In May 1985 the crew was reassigned to the STS-51-G mission. A French astronaut, Patrick Baudry, and a Saudi Arabian prince, Sultan bin Salman Al Saud, were assigned as payload specialists.
STS-51-G lifted off from Launch Complex 39A at KSC aboard the Space Shuttle Discovery on June 17, 1985. The seven-day mission was to deploy three communications satellites: Morelos I for Mexico, Arabsat-1B for the Arab League, and Telstar 303 for the United States. The satellites were launched on successive days during the first three days of the mission. Lucid and Fabian operated the Remote Manipulator System (RMS) to deploy the satellites, which were boosted into geostationary transfer orbits by Payload Assist Module (PAM-D) booster stages.
Lucid also used the RMS to deploy the Spartan (Shuttle Pointed Autonomous Research Tool for Astronomy) satellite, which performed 17 hours of X-ray astronomy experiments while separated from the Space Shuttle, while Fabian handled its retrieval 45 hours later. In addition to the satellite deployments, the crew activated the Automated Directional Solidification Furnace (ADSF), six Getaway Specials and participated in biomedical experiments. Discovery landed at Edwards Air Force Base in California on June 24. The mission was accomplished in 112 orbits of the Earth, traveling 4.7 million kilometers (2.9 million miles) in 169 hours and 39 minutes (just over one week).
The publicity tour that usually followed a Space Shuttle mission included a trip to Saudi Arabia. Married women were not permitted to travel to Saudi Arabia without their husband, and Michael Lucid was unavailable, so Lucid decided not to go. A devout Christian, she disapproved of the way Saudi Arabia treated women. When the rest of the crew arrived in Riyadh, her absence was noted. This prompted a call from King Fahd of Saudi Arabia to President Ronald Reagan. Lucid went to Saudi Arabia and shook hands with the king, but she stayed for only one day.
### STS-34
After the STS-51-G mission, Lucid was assigned to Capsule Communicator (CAPCOM) duty. She served as the CAPCOM for the STS-51-J mission in October 1985, the STS-61-A mission in November 1985, the STS-61-B mission in November and December 1985, and the STS-61-C mission in January 1986. The Space Shuttle Challenger disaster later that month halted Space Shuttle operations for 32 months while NASA conducted investigations and remediation, and flight crews were stood down. One casualty of the disaster was the Galileo project, an unmanned probe to Jupiter, which lost both its launch window and its ride when the Shuttle-Centaur project was canceled.
On November 30, 1988, NASA announced that Galileo would be deployed by Atlantis on the STS-34 mission, which was scheduled for October 12, 1989. The mission was commanded by Donald E. Williams, with pilot Michael J. McCulley and Lucid, Ellen S. Baker and Franklin Chang-Diaz as mission specialists. The launch was delayed for five days due to a faulty Space Shuttle main engine controller, and then for an additional day due to bad weather. Atlantis lifted off from KSC on October 18.
As the lead mission specialist, Lucid was primarily responsible for the Galileo spacecraft, and initiated its deployment by pressing a button to separate Galileo from Atlantis. Galileo was successfully deployed six and a half hours into the flight using the Inertial Upper Stage (IUS). As this was much less powerful than the Shuttle-Centaur upper stage, Galileo had to employ a gravity assist from Venus and two from Earth, and it took six years instead of two for Galileo to reach Jupiter. "Both Ellen and I sighed a great sigh of relief, because we figured Galileo was not our concern at that point, because we'd gotten rid of it," Lucid reported. "Happiness was an empty payload bay and we got happier and happier as the IUS and Galileo went further away from us."
The mission also conducted a five-day Shuttle Solar Backscatter Ultraviolet (SSBUV) experiment carried in the cargo bay, and experiments related to growth hormone crystal distribution (GHCD) and polymer morphology (PM), a sensor technology experiment (STEX), a mesoscale lightning experiment (MLE), a Shuttle Student Involvement Program (SSIP) experiment that investigated ice crystal formation in zero gravity, and a ground-based Air Force Maui Optical Station (AMOS) experiment. Lucid and Chang-Diaz operated the PM experiment, which used a laptop computer to collect two gigabytes of data from an infrared spectrometer to study the effects of microgravity on minerals. The crew filmed their activities with an IMAX camera. The mission completed 79 orbits of the Earth, traveling 3.2 million kilometers (2 million miles) in 119 hours and 39 minutes before landing at Edwards Air Force Base on October 23.
### STS-43
In May 1990 NASA announced that Lucid was assigned to the crew of the STS-43 mission, which was scheduled to be flown in Discovery in April 1991. The mission was commanded by John E. Blaha, with Michael A. Baker as the pilot and Lucid, G. David Low, and James C. Adamson as the mission specialists. The objective of the mission was to deploy TDRS-E, a communications satellite that would form part of NASA's Tracking and Data Relay Satellite System.
The launch date was postponed to July 23, and the orbiter was changed to Atlantis. The launch was delayed by a day to replace a faulty integrated electronics assembly that controlled the separation of the orbiter and the external tank, and then the countdown was halted with five hours to go due to a faulty main engine controller, and the launch was postponed to August 1. Unfavorable weather prompted yet another 24-hour delay. Atlantis lifted off on August 2.
The crew deployed TDRS-E without incident using the IUS. The crew also conducted 32 physical, material and life science experiments, mostly related to the Extended Duration Orbiter and Space Station Freedom. These included experiments with the Space Station Heat Pipe Advanced Radiator Element II (SHARE II), the Shuttle Solar Backscatter Ultra-Violet (SSBUV) instrument, Tank Pressure Control Equipment (TPCE), and Optical Communications Through Windows (OCTW). There were also an auroral photography experiment (APE-B), a protein crystal growth experiment, testing of the BioServe/Instrumentation Technology Associates Materials Dispersion Apparatus (BIMDA), Investigations into Polymer Membrane Processing (IPMP), the Space Acceleration Measurement System (SAMS), a Solid Surface Combustion Experiment (SSCE), use of the Ultraviolet Plume Imager (UVPI), and the Air Force Maui Optical Site (AMOS) experiment.
Atlantis performed 142 orbits of the Earth, traveling 6.0 million kilometers (3.7 million miles) in 213 hours and 21 minutes. STS-43 was the eighth mission to land at KSC, and the first one scheduled to do so since STS-61-C in January 1986.
### STS-58
On December 6, 1991, Lucid was assigned to STS-58, the Spacelab Life Sciences 2 (SLS-2) mission. This was the second mission dedicated to the study of human and animal physiology on Earth and in spaceflight. The techniques developed for this flight were intended to be precursors of those to be conducted on the Space Station Freedom and subsequent long-duration space flights. Fellow TFNG Rhea Seddon was designated as the mission payload commander, with David Wolf, like Seddon a medical doctor, as the other mission specialist. Originally scheduled as one mission, the number of Spacelab Life Sciences objectives and experiments had grown until it was split into two missions, the first of which, STS-40/SLS-1, was flown in June 1991. The rest of the crew were not named until August 27, 1992. Blaha was designated the mission commander, with pilot Richard A. Searfoss and William S. McArthur Jr. as a fourth mission specialist. A payload specialist, Martin J. Fettman, was assigned to the mission on October 29.
Columbia, with SLS-2 on board, lifted off from KSC on October 18, 1993. During the fourteen-day flight the crew performed neurovestibular, cardiovascular, cardiopulmonary, metabolic and musculoskeletal medical experiments on themselves and 48 rats. The crew investigated the phenomenon of bone density loss. They also studied the effects of microgravity on their sensory perception, and the mechanism of space adaptation syndrome. To study this, on the second day of the mission Lucid and Fettman wore headsets, known as accelerometer recording units, which recorded their head movements during the day. Along with Seddon, Wolf and Fettman, Lucid collected blood and urine samples from the crew for metabolic experiments. They also drew blood from the tails of the rats to measure how weightlessness affected their red blood cell counts. They performed sixteen engineering tests aboard Columbia and twenty Extended Duration Orbiter Medical Project experiments. The mission completed 225 orbits of the Earth, traveling five million miles in 336 hours, 13 minutes and 1 second. Landing was at Edwards Air Force Base, California. On completion of this flight, Lucid had logged 838 hours and 54 minutes in space.
### Shuttle-Mir
In 1992 the United States and Russia reached an agreement on cooperation in space so that Russian cosmonauts could fly on the Space Shuttles, and American astronauts on the Russian Mir space station. The prospect of a long stay on Mir was not one calculated to appeal to most astronauts: they had to learn Russian and train at Star City for a year to spend several months on board Mir carrying out science experiments with Russian cosmonauts. "I was wondering what it would be like to spend a long period of time in space," Lucid later recalled. "I told everybody I wanted to do it, and they couldn't find anybody else who had volunteered. So they said: 'Well OK, go do it.'" In January 1995 Lucid and Blaha joined fellow astronauts Bonnie Dunbar and Norman Thagard for Mir training in Star City. On March 30, 1995, NASA announced that Lucid would be the second astronaut to stay aboard Mir, after Thagard, who arrived on the space station on March 16.
Lucid's mission to Mir commenced on March 22, 1996, with liftoff from KSC aboard Atlantis on the STS-76 mission. Atlantis docked with Mir on March 24, and Lucid became the first American woman to live on the station. She joined cosmonauts Yuri Onufriyenko and Yuri Usachov, neither of whom spoke English. During the course of her stay aboard Mir, Lucid performed numerous life science and physical science experiments. She lit candles to study the behavior of fire in a microgravity environment; studied the way that quail embryos developed in their shells; grew protein crystals; and cultivated wheat in a tiny greenhouse. She injected herself with an immune system stimulant and collected blood and saliva samples to study the effects of microgravity on the immune system.
In her free time, she read books. One novel she enjoyed immensely was The Mirror of Her Dreams, but she reached the end only to find that it ended on a cliffhanger. "I floated there, alone in Spectra, in stunned disbelief, holding only volume one," she later recalled. "I was stranded, the impossibility of running to the local bookstore forefront in my mind ... How could my daughter have done this to me? Who would send only one volume of a two-volume set to her mother in space?" She arranged for the second volume to be sent on the next Progress resupply freighter. She left her books on Mir for later astronaut visitors, but they became inaccessible after the Progress M-34 collision in June 1997. Thagard had warned Lucid about the Russians' fondness for jellied fish and borscht. She brought a supply of M&M's and jello with her, and lived on a combination of Russian and American food.
Lucid's return journey to KSC was made aboard Atlantis. The STS-79 mission docked with Mir on September 18, bringing Blaha as her relief, and landed back at KSC on September 26, 1996. One of the catches that released her helmet from the neck ring became stuck, and technicians had to use pliers and a screwdriver to remove it. During her stay on Mir, Lucid had spent nearly 400 hours exercising on a stationary bicycle and a treadmill, and was able to stand and walk off Atlantis. NASA Administrator Daniel Goldin presented her with a giftwrapped box of M&M's, a gift from President Bill Clinton, since she had told him that she craved them.
In completing this mission Lucid traveled 121.0 million kilometers (75.2 million miles) in 188 days, 4 hours and 0 minutes, including 179 days on Mir. Her stay on Mir was not expected to last so long, but her return was delayed twice, extending it by about six weeks. As a result of her time aboard Mir, she held the records for the most hours in orbit by a non-Russian and by a woman; the latter record stood until June 16, 2007, when Sunita Williams exceeded it aboard the International Space Station.
### CAPCOM
Lucid had a short cameo in the 1998 film Armageddon. From 2002 to 2003, she served as Chief Scientist of NASA. Starting in 2005, she served as lead CAPCOM on the Planning (overnight) shift at the Mission Control for sixteen Space Shuttle missions, including STS-135, the final mission. On January 31, 2012, she announced her retirement from NASA.
## Later life
Lucid retired from NASA to take care of her husband Mike, who had dementia. He died on December 25, 2014. She later wrote about this experience in her book No Sugar Added: One Family's Saga of Dementia and Caretaking (2019). She wrote about her experiences on Mir in Tumbleweed: Six Months Living on Mir (2020).
## Awards and honors
Lucid was awarded the Congressional Space Medal of Honor in December 1996 (for her mission to Mir), making her the tenth person and first woman to be given this honor. She was also awarded the NASA Space Flight Medal in 1985, 1989 (twice), 1991, 1993 and 1996; the NASA Exceptional Service Medal in 1988, 1990, 1992 and 2003 (twice); and the NASA Distinguished Service Medal in 1994 and 1997. She was inducted into the International Space Hall of Fame in 1990, the Oklahoma Women's Hall of Fame in 1993, the National Women's Hall of Fame in 1998, and the United States Astronaut Hall of Fame in 2014. In 2002 Discover magazine recognized her as one of the fifty most important women in science.
|
53,838,014 |
Cyclone Ada
| 1,136,534,699 |
1970 tropical cyclone
|
[
"1970 in Australia",
"Category 3 Australian region cyclones",
"Disasters in Queensland",
"January 1970 events in Oceania",
"Retired Australian region cyclones",
"Tropical cyclones in Queensland"
] |
Severe Tropical Cyclone Ada was a small but intense tropical cyclone that severely impacted the Whitsunday Region of Queensland, Australia, in January 1970. It has been described as a defining event in the history of the Whitsunday Islands, and was the most damaging storm in the mainland town of Proserpine's history at the time. Forming over the far eastern Coral Sea in early January, the weather disturbance that would become Ada remained weak and disorganised for nearly two weeks as it slowly moved in a clockwise loop. Accelerating toward the southwest, the system was named Ada on 15 January. All observations of the fledgling cyclone were made remotely with weather satellite imagery until it passed over an automated weather station on 16 January. The extremely compact cyclone, with a gale radius of just 55 km (35 mi), intensified into a Category 3 severe tropical cyclone just before striking the Whitsunday Islands at 14:00 UTC on 17 January. At 18:30 UTC, Ada's eye crossed the coast at Shute Harbour. The cyclone made little inland progress before stalling northwest of Mackay and dissipating on 19 January.
Ada devastated several resort islands in the Whitsundays, in some cases destroying virtually all facilities and guest cabins. The biggest resort, located on Daydream Island, was obliterated, with similar destruction seen on South Molle, Hayman, and Long islands; since most boats docked on these islands were destroyed, hundreds of tourists in these resorts became stranded and required emergency rescue. Based on the severity of the damage, wind gusts were later estimated at 220 km/h (140 mph). As Ada moved ashore, most homes were damaged or destroyed in communities near the storm's landfall point, including Cannonvale, Airlie Beach, and Shute Harbour. Extreme rainfall totals as high as 1,250 mm (49 in) caused massive river flooding in coastal waterways between Bowen and Mackay. The floodwaters washed out roads and left some locations isolated for days. Offshore, seven people were missing and presumed dead after their fishing trawler encountered the cyclone. Ada killed a total of 14 people, including 11 at sea, and caused A\$12 million in damage. The cyclone revealed inadequacies in the warning broadcast system, and served as the impetus for enhanced cyclone awareness programs that have been credited with saving lives in subsequent cyclones. In January 2020, on the 50th anniversary of the disaster, a memorial to the storm victims was erected along the shoreline at Airlie Beach.
## Meteorological history
Cyclone Ada was first noted by weather satellite imagery as a disorganised area of disturbed weather over the eastern Coral Sea on 5 January. In the early stages of its life, the system was far from ships and only peripherally detected by weather stations. More recent analyses have determined the tropical low originated on 3 January, just west of Vanuatu. For about ten days between 5 and 15 January, observations of the low remained scarce, but infrequent satellite imagery revealed that it slowly completed a cyclonic loop near the Solomon Islands before curving back toward the southwest while remaining weak. On 15 January, the Bureau of Meteorology's (BoM) Tropical Cyclone Warning Centre in Brisbane named the storm Ada and issued the first warning to shipping interests. Ada reached tropical cyclone status on the modern-day Australian cyclone scale the next day. The cyclone continued tracking west-southwest toward Queensland, and at 14:00 UTC on 16 January, it passed over an automated weather station on Marion Reef, about 480 km (300 mi) east of Bowen. The site recorded sustained winds of up to 93 km/h (58 mph).
With the first direct confirmation of the storm's growing strength, the BoM issued its initial public cyclone warning at 19:00 UTC. The cyclone's centre moved within range of the weather radar site in Mackay around 06:00 UTC on 17 January. Over the next several hours, radar revealed the system was moving slower and more erratically than expected, occasionally jogging to the east. Ada was an exceptionally compact cyclone, with a 55 km (35 mi) radius of gale-force winds, compared to the 150 km (100 mi) radius generally considered "small" for tropical cyclones. Between 11:00 and 17:00 UTC on 17 January, the cyclone's eye shrank from 28 km (17 mi) to just 18 km (11 mi) across, as measured by radar. As a result of its small size, the storm's onslaught was much more sudden than normal, with little rain and steady barometric pressures in the hours before landfall. At 12:00 UTC on 17 January, Ada reached its peak intensity, with 10-minute average maximum sustained winds of 150 km/h (90 mph). This made it a Category 3 severe tropical cyclone.
Beginning around 14:00 UTC, the core of Ada crossed the Whitsunday Islands. As the eye passed overhead, pressure fell to 976 hPa (28.82 inHg) on Hayman Island—just under 30 km (20 mi) northeast of Shute Harbour on the mainland—and although peak winds were not measured, gusts on Hayman Island were estimated at over 160 km/h (100 mph); similar estimates were made by a ship in the Whitsunday Passage. Dent Island recorded a pressure of 965 hPa (28.50 inHg) as the centre made its closest approach at 17:30 UTC. At 18:30 UTC, the system made landfall at Shute Harbour on the Whitsunday Coast while still at peak intensity. Air pressure at Airlie Beach, about 5 km (3 mi) away from the centre of circulation, fell to 962 hPa (28.41 inHg), suggesting that the storm's minimum central pressure was slightly lower. Upon moving ashore, the system slowed and curved toward the south, and after reaching a point about 60 km (40 mi) northwest of Mackay on 18 January, it became nearly stationary. Around the same time, the cyclone's structure began to deteriorate, with multiple circulation centres appearing on radar imagery. Just after 06:00 UTC on 19 January, the BoM issued its final advisory on Ada, and the system dissipated shortly after.
## Preparations
As Ada reached North Queensland, the BoM issued cyclone warnings on a three-hour cycle, with more frequent bulletins occasionally released as needed. Flood warnings were issued for watersheds of susceptible rivers like the Pioneer and Connors. The bureau's post-storm assessment of the disaster revealed that local broadcasts of advisories were sometimes delayed by several hours or not made at all, and public awareness was generally inadequate. In a misguided attempt to quell panic, one radio station appended the BoM's warning with an unapproved message that there was no cause for alarm because of the cyclone's small size. Due to the unusual nature of the storm, including its delayed arrival in some areas, many residents criticised or disregarded forecasts. Additionally, many tourists in the region were unfamiliar with the dangers of tropical cyclones. Findings from studies of the public response to Ada were used as the basis for upgraded warning systems and the introduction of more cyclone education campaigns; these initiatives were credited with saving lives and property in later storms such as Cyclone Althea in December 1971.
## Impact
Offshore, the 16.7 m (55 ft) concrete fishing trawler Whakatane went missing while en route from Mackay to Townsville. The search for the vessel and her seven occupants was suspended on 26 January, around the same time that wreckage, believed to be from Whakatane, was identified near Long Island. According to the BoM, maritime tragedies during Ada were likely caused by delayed or insufficient response to warnings. Some boatowners remained aboard their vessels throughout the cyclone and others attempted to move their boats to different locations during the lull at the storm's eye. In one instance, five men were reported missing after they ventured into the storm to secure a boat anchored at Hayman Island. Overall, storm damage was estimated at A\$12 million, the equivalent of over \$1 billion in 2012 values when accounting for growth and inflation. Ada is believed to have killed 14 people, 11 of them at sea.
### Whitsunday Islands
In the Whitsundays, Ada's impact was most severe on Hayman, Long, Daydream, South Molle, and Hook islands. Peak winds in the storm's path were not recorded, but based on the severity of the damage, it is estimated that gusts may have exceeded 220 km/h (140 mph). Many trees were either blown over or debarked and stripped of their foliage, with scraps of roofing material left hanging from their limbs. Throughout the islands, Ada ravaged resorts and boats, forcing hundreds of holidaymakers to await emergency rescue.
Most of the accommodation cabins were destroyed on South Molle Island, where a woman in one of the structures was killed and her partner severely injured. Damage on South Molle amounted to \$500,000. On Hayman Island, the winds unroofed most cabins and other buildings, accounting for an estimated \$1 million in damage. Long Island was subjected to Ada's left-front quadrant—the most intense part of the storm—and the Palm Bay Resort there was devastated, with only a few huts remaining. However, another resort on the western side of the island escaped relatively unscathed. The biggest resort in the Whitsundays at the time, on Daydream Island, was destroyed, requiring \$400,000 to rebuild. About 150 tourists sought shelter in part of a recreation hall, which was the only major portion of a building left intact on Daydream. Nearly every building on Hook Island was lost, and four men remained sheltered there for a week after the storm. Farther south, rough seas broke apart a 90 m (300 ft) stone jetty at Brampton Island in the Cumberland Group.
### Mainland
Torrential rains extended along mostly rural areas of the coast from Bowen to Mackay, while the strongest winds were concentrated in the area from Cannonvale to Shute Harbour and extending inland to Proserpine. Nine hours of damaging winds unroofed or otherwise damaged around 40% of the houses in Proserpine in what was described as the worst storm in the town's history at the time. Trees were uprooted, crops were flattened, and residential outhouses were blown apart. Elsewhere, in Shute Harbour, a motel and the few houses there were demolished, along with 85% of the homes in Airlie Beach and nearly all of Cannonvale's 200 houses. According to Minister for Mines and Main Roads Ron Camm, the cyclone forced 750 people from their homes. Around 200 storm victims sought refuge in a school in Cannonvale that was converted into an emergency shelter.
As the winds subsided, the weakening cyclone dropped as much as 1.25 m (49 in) of rain, resulting in massive river flooding near the coast. Some locations received up to 860 mm (34 in) of precipitation in just 24 hours. The Pioneer River in Mackay and the Don River in Bowen both experienced severe flooding; the latter overtopped a bridge by 3 m (10 ft), while at one point the former was well above flood stage and rising by 1 m (3.3 ft) per hour. A shopping centre in Mackay was flooded to a depth of 1 m (3.3 ft). Some waterways approached all-time record levels, with one creek north of Proserpine swelling to 11 km (7 mi) across. Many farms were inundated by floodwaters, losing livestock, machinery, and crops. The torrents washed out bridges and roads and severed communications, isolating communities such as Proserpine and Airlie Beach for several days. As a result of the widespread flooding, hundreds of motorists became stranded on a long stretch of the Bruce Highway. Two people died in the flood-ravaged area, including one soldier who drowned near Proserpine. From Bowen north to Townsville, more modest rainfall associated with the upper-level remnants of Ada proved beneficial, helping to alleviate persistent drought conditions.
## Aftermath
Following the storm, looters travelled to Proserpine to pick through ruined homes and boats. The nine-officer police force was unable to manage the outbreak of crime, and a supplemental anti-looting squad soon arrived in the town. Australian Army soldiers and Air Force planes dispatched to the Whitsunday Islands evacuated around 500 people from the devastated resort islands. Meanwhile, Navy boats retrieved injured individuals requiring urgent medical treatment. Residents of flood-stricken communities required vaccination against typhoid fever as a preventative measure. Private citizens also rushed to the aid of stranded resort guests; in January 2014, a local boat captain was formally honoured by MP George Christensen and Premier Campbell Newman for his role in evacuating 180 people from Daydream Island.
With Queensland's resources already strained by an ongoing severe drought, the Commonwealth Government of Australia agreed to evenly split the cost of restoring government assets damaged by Ada; this expenditure would normally fall to the state alone. By August 1970, the state and federal governments had issued a combined \$708,000 in grants for repairing flood damage in Bowen. The name Ada was later retired from the Australian tropical cyclone naming list due to the cyclone's severe impact.
In the islands, about 400 workers rushed to repair the resorts before peak tourism season; by mid-May, about 100 holiday cabins had been rebuilt and 20 boats restored to service. Hayman and Daydream islands reopened to guests in June and August 1970, respectively. South Molle Island changed ownership multiple times during the 1970s as it struggled to reattain its pre-Ada success, and many of the other resort islands were also sold as their owners were unable to meet the cost of renovations. The destruction of resorts in the Whitsundays triggered a sharp decline in Australian tourism revenue. Decades later, Ada is still regarded as a "defining" event in the development of the Whitsunday region. In 2016, Whitsunday MP Jason Costigan advocated for erecting a memorial to Ada's victims, and community members formed a small committee exploring this possibility in early 2017. In April 2019, Whitsunday Regional Council voted unanimously to approve \$15,000 in funding for a memorial at Airlie Beach to be completed in time for the 50th anniversary of the disaster. Finally, on 18 January 2020, a stone monument, 1.7 m (5.6 ft) tall and inscribed with the names of the 14 cyclone victims, was unveiled at a ceremony attended by 200 people.
## See also
- Climate of Australia
- Cyclone Debbie
- Cyclone Joy
- Cyclone Yasi
|
44,206,946 |
Title (album)
| 1,169,943,935 | null |
[
"2015 debut albums",
"Albums produced by Chris Gelbuda",
"Albums produced by J. R. Rotem",
"Albums produced by Kevin Kadish",
"Doo-wop albums",
"Epic Records albums",
"Meghan Trainor albums"
] |
Title is the debut major-label studio album by American singer-songwriter Meghan Trainor. It was released on January 9, 2015, by Epic Records. Initially a songwriter for other artists in 2013, Trainor signed with the label the following year and began recording material she co-wrote with Kevin Kadish. They were dissatisfied with the electronic dance music predominant in contemporary hit radio and drew influence from retro-styled 1950s and 1960s music.
Title is a doo-wop, pop, blue-eyed soul, and R&B record, with elements of Caribbean, hip hop, reggae, and soca music. Inspired by past relationships and her insecurities about body image, Trainor wrote songs she wished existed before she attended high school. The songs on the album explore themes such as female empowerment, self-respect, and self-awareness. Trainor promoted it with several public appearances and televised performances.
After Title's release, Trainor embarked on the 2015 concert tours That Bass Tour and MTrain Tour. The album was supported by four singles, including "All About That Bass" which reached number one in 58 countries and became the best-selling song by a female artist during the 2010s in the US. It also produced the Billboard Hot 100 top-15 singles "Lips Are Movin", "Dear Future Husband", and "Like I'm Gonna Lose You", the last of which features John Legend and peaked at number one in Australia, New Zealand, and Poland. Reviewers criticized Title's repetitiveness and did not foresee a long-lasting career for Trainor, though some appreciated her wit and audacious attitude.
Title debuted at number one on charts in the US, Canada, Scotland, and the UK, and spent multiple weeks at the summit in Australia and New Zealand. It was Epic's first number-one album in the US since 2010, and in Australia since Michael Jackson's The Essential Michael Jackson in 2005. Title made Trainor the fifth female artist in history to send her debut single and album to number one and follow-up single to the top five in the US. It was the ninth-best-selling album of 2015 worldwide, and earned multi-platinum certifications in the US, Australia, Canada, and Poland.
## Background
Meghan Trainor developed an early interest in music and started singing at age six. She began performing her compositions and soca music as part of the cover band Island Fusion, which included her aunt, younger brother, and father. Trainor temporarily relocated to Orleans, Massachusetts, with her family when she was in eighth grade, before moving to North Eastham, Massachusetts. She attended Nauset Regional High School, where she studied guitar, played trumpet, and sang in a jazz band for three years. When Trainor was a teenager, her parents nudged her to attend songwriting conventions and took her to venues at which production companies were searching for new artists and songwriters. She used Logic Studio to record and produce her compositions and later worked independently in a home studio built by her parents.
Trainor independently released three albums of material she had written, recorded, performed, and produced between the ages of 15 and 17. These included her eponymous 2009 release, and the 2011 albums I'll Sing with You and Only 17. Trainor introduced herself to former NRBQ member Al Anderson at a music conference in Nashville. Impressed by her songwriting, he referred her to his publisher Carla Wallace of music publishing firm Big Yellow Dog Music. Though Trainor had been offered a scholarship at the Berklee College of Music, she decided to pursue her songwriting career and signed with Big Yellow Dog in 2012. Her ability to compose in a variety of genres influenced this decision. Trainor was unsure about becoming a recording artist herself; her father recalled: "She thought she was one of the chubby girls who would never be an artist."
## Recording and production
Trainor found songwriting affinity with American songwriter Kevin Kadish, whom she met in June 2013, due to their mutual love of pop music from the 1950s and 1960s. Kadish had wished to create a "'50s-sounding record of doo-wop-inspired pop" for three years, but could not find any artist who was interested. He shared the idea with Trainor after the two bonded over Jimmy Soul's 1963 single "If You Wanna Be Happy", and they decided to create the extended play (EP) Title (2014) with the same sound, "just for fun". They wrote the song "All About That Bass" (2014) in July 2013, and had completed three songs before Kadish started producing a rock album for the rest of the year. Trainor and Kadish pitched it to several record labels, who said it would not be successful because of its retro-styled composition and wanted to rerecord it using synthesizers, which they refused. Trainor performed the song on a ukulele for L.A. Reid, the chairman of Epic Records, who signed her with the label 20 minutes later.
Trainor immediately began working on more songs with Kadish as Epic wanted her to record an entire album. The label briefly suggested that Trainor work with other producers, such as Pharrell Williams or Timbaland, but she insisted on continuing with Kadish. Her artists and repertoire (A&R) representative called Kadish and said, "whatever you did on 'Bass,' do it 10 more times. Don't bring in any more writers. Don't bring in any other producers. Whoever you used on that song." While recording Title, Trainor took a two-month break because polyps were developing on her vocal cords. She recounted that Kadish would "calm [her] down, [they would] dim the lights, so [she] wouldn't get frustrated", and that he had to use demo vocal takes Trainor had recorded as guides. Some of the album's material was recorded while Trainor lay on a bed Kadish made in the studio. In September 2014, she told Billboard that it was "pretty much done" and she only had one more song left to realize. Following the initial completion of Title, Trainor and Kadish had an additional day to work together and went into the studio. They wrote the song "Lips Are Movin" (2014) within eight minutes. Trainor told USA Today in mid-August, "It was done until we wrote this smash in eight minutes, literally. I calculated it: eight minutes. We were like, we have to add this now."
Trainor wrote "Like I'm Gonna Lose You" (2015) with fellow songwriters Justin Weaver and Caitlyn Smith while working in Music Row, and intended to pitch it to Kelly Clarkson. Though she was initially reluctant to include it on the album, her manager and uncle convinced her otherwise. Title's sound was inspired by Trainor's love of throwback-style records, and music from the 1950s and 1960s. She wanted to continue the doo-wop vibe of the album's preceding singles, and simultaneously showcase influences of Caribbean music, rapping, and Fugees. Trainor considered it distinctive and disparate from popular music at the time: "it's got the throwback in there, but I snuck some reggae in there and clever fun lyrics and catchy melodies". According to her, the writing on Title reflects on the changes in her life and artistic process. Trainor intended the album to be a source of empowerment for young people; she wished some of its songs existed before she attended high school. She gravitated towards discussing unemployed men she had dated in the past, who made her pay for them and only texted her instead of taking her out. When Time's Nolan Feeney asked Trainor what she wanted listeners to hear on it, she said, "I want to help myself. I want to make sure guys take me on a date and treat me right because I didn't do that in the past. I want to love my body more. I just hope younger girls love themselves more, and younger people in general."
## Composition
### Overview
The standard edition of Title includes 11 tracks; the deluxe edition contains four additional original songs. The album predominantly has a doo-wop, pop, blue-eyed soul, and R&B sound. Kadish and Trainor drew from their mutual interest in retro-styled music, as they were tired of penning hackneyed electronic dance music catered to contemporary hit radio's tastes. AllMusic's Stephen Thomas Erlewine thought it balanced old-fashioned girl-group pop and old-school hip hop. Title comprises three-part harmonies, handclaps, finger-clicks, acoustic bass guitar, bubblegum pop melodies, and reggae and soca riddims. According to Jim Farber of the New York Daily News, the album's Caribbean music tracks were inspired by Trainor's Tobago-born uncle, and Millie Small's song "My Boy Lollipop" (1964). He wrote that it roots itself in the same style as "All About That Bass" and "Lips Are Movin", and recalls "girl groups in all their glory".
Trainor performed Title in a style reminiscent of musical theatre, combining rap verses with cabaret choruses. Chuck Arnold of Rolling Stone described her vocals as "torch-y" and "tangy", reminiscent of Amy Winehouse. The album has lyrics about contemporary female empowerment, self-respect, and self-awareness. It uses themes of contradiction, such as individual versus society, modernity versus tradition, and dependence versus independence. Writing for The Seattle Times, Paul de Barros noted that Title focuses on adult themes, and Trainor occasionally employs profanity on it. According to Boston's Bryanna Cappadona, she portrays a "bossy, egocentric and sexually candid" personality on the album. Helen Brown of The Daily Telegraph remarked that "Trainor tackles 'complicated' relationships and drunken one-night stands with perma-perkiness" on it, while Tshepo Mokoena of The Guardian wrote it proved that Trainor is not a feminist.
### Songs
Trainor's love of songwriting inspired the 24-second interlude "The Best Part", which Billboard's Carl Wilson compared to the 1954 song "Mr. Sandman". "All About That Bass", a bubblegum pop, doo-wop, hip hop, Italo-Latin soul, and retro-R&B pop song, encourages embracing inner beauty, and promotes positive body image and self-acceptance. Trainor stated that "Dear Future Husband" was inspired by doo-wop standards like Dion's "Runaround Sue" (1961), and Beach Boys songs that possess big choruses with intentionally low-pitched melodies; its lyrics are about chivalry and dating, and list the things a man needs to do to be Trainor's life partner. "Close Your Eyes" is a contemporary ballad on which Trainor gives a soulful and "nuanced, fluttery vocal performance" over an acoustic guitar and pitch-shifted background vocals. The song features lyrics about Trainor's body image insecurities. "3am" is a "quieter and more vulnerable" song, on which Trainor succumbs to an ex-boyfriend and drunk dials him.
The soul ballad "Like I'm Gonna Lose You" features guest vocals from John Legend. In the song's lyrics, Trainor parlays her fear of losing a loved one into determination to relish and savor every moment spent with them. "Bang Dem Sticks" is a raucous, suggestive, and thematically ribald song, about her attraction to drummers. It has a simple percussion rhythm, horn and drum instrumentation, and a patois-inflected rap verse from Trainor. On "Walkashame", she details a hangover, and expresses embarrassment while defending someone returning home nonchalantly after an unintended one-night stand. Trainor wrote the title track and "Dear Future Husband" as a reaction to issues with contemporary dating and hookup culture, like women basing their self-worth on social media likes and whether their partner replied to their texts. The former is a doo-wop song with Caribbean music influences and a ska-inflected bridge, on which she refuses to be friends with benefits and pushes her partner to define their relationship more clearly.
The penultimate track of the standard edition is "What If I", a 1950s-style ballad with string instrumentation, which contemplates the dangers of first-date sex and is lyrically reminiscent of the Shirelles' 1960 single "Will You Still Love Me Tomorrow". The final track, "Lips Are Movin", is a bubblegum pop, doo-wop, and Motown bounce song, with lyrics inspired by Trainor's frustrations with her record label. Reviewers including The Tennessean's Dave Paulson and MTV News' Christina Garibaldi deemed it a song about leaving a significant other after being cheated on, an interpretation Kadish is open to. It received widespread comparisons to "All About That Bass" from critics; Trainor admitted they "followed the [same] formula". "No Good for You", the first of the four bonus songs on the deluxe edition, contains elements of ska, with Trainor offering her opinion about a troublesome man in its lyrics. "Mr. Almost" and "My Selfish Heart" are about being in an unhealthy romantic relationship. In "Credit", Trainor demands credit from her ex-boyfriend's new partner, for the positive qualities and habits he developed during his time with her. "I'll Be Home", a seasonal ballad, appears on the Japanese edition of the album.
## Release and promotion
Trainor marketed Title as her debut studio album. She pulled her independent albums from circulation in the build-up to its release. Upon his first meeting with Trainor, Reid thought she had "lightning in a bottle" with "All About That Bass" and "was going to explode", but was unsure about what her next step should be: "All I knew is that I had one in my hand, I didn't even think about what would come behind it." Trainor felt pressured to retain her look from the song's music video after it gained popularity. In March 2015, she partnered with plus-size retailer FullBeauty Brands as a consultant for the creation of clothing for women with varying body types. Trainor's 2014 EP of the same name, which included "All About That Bass", "Dear Future Husband", "Close Your Eyes", and the title track, was released on September 9, 2014. Epic announced Title's release date in October 2014, and replaced the EP with its pre-order. The album's standard and deluxe editions were released digitally on January 9, 2015. Its special edition, consisting of music videos and behind-the-scenes footage, was released on November 20, 2015.
Title was supported by several singles. The lead single, "All About That Bass" reached number one in 58 countries and sold 11 million units worldwide. According to the 2019 Nielsen Music year-end report, it was the best-selling song by a female artist during the 2010s, with 5.8 million digital downloads sold in the US. The lyrics caused controversy; some critics called the song anti-feminist and accused Trainor of body shaming thin women. It was nominated for Record of the Year and Song of the Year at the 57th Annual Grammy Awards. The follow-up singles "Lips Are Movin" and "Dear Future Husband" reached the top 15 on the US Billboard Hot 100. The latter's music video was criticized over allegations of antifeminism, sexism and perpetuation of gender stereotypes. "Like I'm Gonna Lose You" was released as the fourth single, and reached number one in Australia, New Zealand, and Poland.
Trainor promoted Title with a series of public appearances and televised live performances. She performed at award shows, including the Country Music Association Awards, iHeartRadio Music Awards, Billboard Music Awards, and the American Music Awards. Trainor's appearances on television talk shows included The Tonight Show Starring Jimmy Fallon, The Ellen DeGeneres Show, Today, and Jimmy Kimmel Live!. She was part of the line-up for the Jingle Ball Tour and Today's Toyota Concert Series. The album was supported by two concert tours, That Bass Tour and MTrain Tour. The former began in Vancouver, British Columbia, in February 2015, and concluded in Milan in June 2015. Sheppard served as the opening act. The MTrain Tour commenced in St. Louis the following month, supported by Charlie Puth and British band Life of Dillon. Its remainder was canceled on August 11, 2015, after Trainor suffered a vocal cord hemorrhage.
## Critical reception
Title received mixed reviews from music critics. At Metacritic, which assigns a weighted mean rating out of 100 to reviews from mainstream critics, the album received an average score of 59, based on 13 reviews. Entertainment Weekly's Melissa Maerz characterized it as "real-girl pop with massive charm" and said it will help Trainor project multi-generational appeal. Arnold thought Title is "charmingly old-fashioned" and commended Trainor for co-writing each of its tracks. Farber complimented her vocals and wit-laden style of songwriting but thought the album "crosses the line from confident to smug", and noted her self-harmonizing as emblematic of its "[emphasis on] the image of self-containment". Brown described it as "relentlessly cute" and a showcase of "plenty of wit and watertight tunes", but advised Trainor to "read more self-help than she spouts".
Title's repetitiveness drew criticism. Marc Hirsh of The Boston Globe considered the album "more of the same" as "All About That Bass" and censured Trainor for pillaging herself, but was positive about its sassy attitude and catchiness. Writing for the Los Angeles Times, Mikael Wood opined that it "offers a dozen variations" of her debut single and derided its opposing themes as "unexamined", accusing her of appropriating the vocal patterns of black artists. Wilson stated that though Title sends the right message to Trainor's young audience, it gets dreary.
Some reviewers thought Title signaled Trainor's unsustainable commercial success. Slant Magazine's Alexa Camp believed that her retro style is untenable and anticipated a commercial decline reminiscent of Duffy, as she lacked Winehouse's "raw emotive talent" and ability to infuse a retro sound with "distinctly 21st-century sonic and lyrical sophistication". Dan Weiss of Spin stated he would be pleased if the album became "a gateway for body-conscious adolescents", but thought it was indicative that Trainor lacks endurance: "If she was actually as clever as her press release and titled the album It Girl With Staying Power, she might actually have staying power". Wilson noted that aside from her "understandable naïveté", her foibles are "stylistic cherry-picking" and a "compulsion to appear adorably relatable and socially correct", which she would be wise to eschew for a long-lasting career. Mokoena said it is "full of lyrical contradictions" and warned listeners not to expect "insightful and intimate songwriting". Erlewine opined that though Title was marred by "echoes" of "All About That Bass", it proved Trainor is smart enough to channel "a big hit into a real career".
## Commercial performance
In the US, Title debuted at number one on the Billboard 200 issued for January 31, 2015, with 238,000 album-equivalent units during its first week, replacing Taylor Swift's 1989 at the top of the chart. Trainor became the first female artist to top the chart with her debut album since Ariana Grande's 2013 release Yours Truly. Keith Caufield of Billboard wrote that its debut-week tally included 195,000 in pure sales and that it was "an impressive figure, considering January is traditionally a sleepy month for big new releases". Title made her the fifth female artist in history to send her debut single and album to number one and follow-up single to the top five in the country. The album also entered at number one on the Canadian Albums Chart.
Title opened atop the Australian Albums Chart issued for January 25, Epic's first album to do so since Michael Jackson's The Essential Michael Jackson (2005). The album spent two weeks at the summit. It debuted atop the New Zealand Albums Chart on January 19, spending two consecutive weeks there. Title entered at number one on the Scottish Albums Chart and UK Albums Chart. It achieved success in Europe, where it peaked within the top 10 in Denmark, Norway, Spain, Sweden, and Switzerland. Several songs from it entered charts worldwide despite not being released as singles. The title track reached the 100th position on the Billboard Hot 100, and number nine in New Zealand. It was certified Gold in both countries. "No Good for You" debuted and peaked at number 91 on the Swedish Singles Chart, where it charted for two weeks.
Title received certifications, including 3× Platinum in the US, Australia, and Canada; 2× Platinum in Poland; Platinum+Gold in Mexico; Platinum in Denmark, New Zealand, Sweden, and the UK; and Gold in the Netherlands. According to the International Federation of the Phonographic Industry, it was the ninth-best-selling album of 2015, with 1.8 million copies sold worldwide.
## Track listing
Notes
- signifies a vocal producer
## Personnel
Credits are adapted from the liner notes of Title.
Recording locations
- Recorded and engineered at The Carriage House (Nolensville, Tennessee) (tracks 1–4, 8–11, and 13–15), The Green Room (East Nashville, Tennessee) (tracks 5 and 6), Germano Studios (New York City) (track 6), Meghan Trainor's home studio (Nashville) (track 7), and Beluga Heights Studio (Los Angeles) (track 12)
- Mixed at The Carriage House (Nolensville, Tennessee) (tracks 1–4, 8–11, and 13–15), Larrabee North Studios (Universal City, California) (tracks 5–7), and Beluga Heights Studio (Los Angeles) (track 12)
- Mastered at The Mastering Palace (New York City)
- Management – Atom Factory, a division of Coalition Media Group (Los Angeles)
- Legal – Myman Greenspan Fineman/Fox Rosenberg & Light LLP
Personnel
- Meghan Trainor – vocals, additional drum programming, background vocals, executive producer, guitar, handclaps, piano, production, programming, recording, ukulele, vocal production
- Kevin Kadish – vocals, acoustic guitar, background vocals, bass, bass vocals, classical guitar, drum programming, drums, electric guitar, electric upright bass, engineering, mixing, organ, piano, production, sound design, synthesizer, ukulele, vibraslap (tracks 1–4, 8–11, and 13–15)
- David Baron – baritone saxophone, bass, celesta, clavinet, electric piano, French horn, Hammond organ, piano, strings, synthesizer, tenor saxophone (tracks 2–4, 8–11, and 13–15)
- Jim Hoke – baritone saxophone, flute, tenor saxophone (tracks 3, 8, 10, and 13)
- Jeremy Lister – background vocals (track 4)
- Eleonore Denig – violin (track 10)
- Shannon Forrest – drums (track 10)
- Shy Carter – vocals
- Chris Gelbuda – additional background vocals, instruments, recording, production, programming
- John Legend – vocals (track 6)
- Jason Agel – recording (track 6)
- Kenta Yonesaka – recording assistant (track 6)
- The Elev3n – production
- Manny Marroquin – mixing (tracks 5–7)
- J.R. Rotem – bass, drums, horns, organ, piano, production, strings (track 12)
- Samuel Kalandjian – engineering, mixing, recording (track 12)
- Dave Kutch – mastering
- Anita Marisa Boriboon – art director, design
- Lana Jay Lackey – styling
- Danilo – hair
- Mylah Morales – make-up
- Brooke Nipar – photography
## Charts
### Weekly charts
### Year-end charts
### Decade-end charts
## Certifications
## Release history
## See also
- List of Billboard 200 number-one albums of 2015
- List of number-one albums from the 2010s (New Zealand)
- List of number-one albums of 2015 (Australia)
- List of number-one albums of 2015 (Canada)
- List of UK Albums Chart number ones of the 2010s
|
55,760,582 |
History of aluminium
| 1,173,266,871 |
History of the chemical element aluminium
|
[
"Aluminium",
"History of chemistry",
"History of technology"
] |
Aluminium (or aluminum) metal is very rare in native form, and the process to refine it from ores is complex, so for most of human history it was unknown. However, the compound alum has been known since the 5th century BCE and was used extensively by the ancients for dyeing. During the Middle Ages, its use for dyeing made it a commodity of international commerce. Renaissance scientists believed that alum was a salt of a new earth; during the Age of Enlightenment, it was established that this earth, alumina, was an oxide of a new metal. Discovery of this metal was announced in 1825 by Danish physicist Hans Christian Ørsted, whose work was extended by German chemist Friedrich Wöhler.
Aluminium was difficult to refine and thus uncommon in actual usage. Soon after its discovery, the price of aluminium exceeded that of gold. The price fell only after the initiation of the first industrial production by French chemist Henri Étienne Sainte-Claire Deville in 1856. Aluminium became much more available to the public with the Hall–Héroult process developed independently by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886, and the Bayer process developed by Austrian chemist Carl Joseph Bayer in 1889. These processes have been used for aluminium production up to the present.
The introduction of these methods for the mass production of aluminium led to extensive use of the light, corrosion-resistant metal in industry and everyday life. Aluminium began to be used in engineering and construction. In World Wars I and II, aluminium was a crucial strategic resource for aviation. World production of the metal grew from 6,800 metric tons in 1900 to 2,810,000 metric tons in 1954, when aluminium became the most produced non-ferrous metal, surpassing copper.
In the second half of the 20th century, aluminium gained usage in transportation and packaging. Aluminium production became a source of concern due to its effect on the environment, and aluminium recycling gained ground. The metal became an exchange commodity in the 1970s. Production began to shift from developed countries to developing ones; by 2010, China had accumulated an especially large share in both production and consumption of aluminium. World production continued to rise, reaching 58,500,000 metric tons in 2015. Aluminium production exceeds those of all other non-ferrous metals combined.
## Early history
> Today, I bring you the victory over the Turk. Every year they wring from the Christians more than three hundred thousand ducats for the alum with which we dye wool. For this is not found among the Latins except a very small quantity. [...] But I have found seven mountains so rich in this material that they could supply seven worlds. If you will give orders to engage workmen, build furnaces, and smelt the ore, you will provide all Europe with alum and the Turk will lose all his profits. Instead they will accrue to you ...
The history of aluminium was shaped by the usage of its compound alum. The first written record of alum, made by Greek historian Herodotus, dates to the 5th century BCE. The ancients used it as a dyeing mordant, in medicine, in chemical milling, and as a fire-resistant coating for wood to protect fortresses from enemy arson. Aluminium metal was unknown. Roman writer Petronius mentioned in his novel Satyricon that an unusual glass had been presented to the emperor: after it was thrown on the pavement, it did not break but only deformed. It was returned to its former shape using a hammer. After learning from the inventor that nobody else knew how to produce this material, the emperor had the inventor executed so that the invention would not diminish the value of gold. Variations of this story were mentioned briefly in Natural History by Roman historian Pliny the Elder (who noted the story had "been current through frequent repetition rather than authentic") and Roman History by Roman historian Cassius Dio. Some sources suggest this glass could be aluminium. It is possible aluminium-containing alloys were produced in China during the reign of the first Jin dynasty (266–420).
After the Crusades, alum was a commodity of international commerce; it was indispensable in the European fabric industry. Small alum mines were worked in Catholic Europe but most alum came from the Middle East. Alum continued to be traded through the Mediterranean Sea until the mid-15th century, when the Ottomans greatly increased export taxes. In a few years, alum was discovered in great abundance in Italy. Pope Pius II forbade all imports from the east, using the profits from the alum trade to start a war with the Ottomans. This newly found alum long played an important role in European pharmacy, but the high prices set by the papal government eventually made other states start their own production; large-scale alum mining came to other regions of Europe in the 16th century.
## Establishing the nature of alum
> I think it not too venturesome to predict that a day will come when the metallic nature of the base of alum will be incontestably proven.
At the start of the Renaissance, the nature of alum remained unknown. Around 1530, Swiss physician Paracelsus recognized alum as separate from vitriols (sulfates) and suggested it was a salt of an earth. In 1595, German doctor and chemist Andreas Libavius demonstrated that alum and green and blue vitriols were formed by the same acid but different earths; for the undiscovered earth that formed alum, he proposed the name "alumina". German chemist Georg Ernst Stahl stated that the unknown base of alum was akin to lime or chalk in 1702; this mistaken view was shared by many scientists for half a century. In 1722, German chemist Friedrich Hoffmann suggested that the base of alum was a distinct earth. In 1728, French chemist Étienne Geoffroy Saint-Hilaire claimed alum was formed by an unknown earth and sulfuric acid; he mistakenly believed burning that earth yielded silica. (Geoffroy's mistake was corrected only in 1785 by German chemist and pharmacist Johann Christian Wiegleb. He determined that earth of alum could not be synthesized from silica and alkalis, contrary to contemporary belief.) French chemist Jean Gello proved the earth in clay and the earth resulting from the reaction of an alkali on alum were identical in 1739. German chemist Johann Heinrich Pott showed the precipitate obtained from pouring an alkali into a solution of alum was different from lime and chalk in 1746.
German chemist Andreas Sigismund Marggraf synthesized the earth of alum by boiling clay in sulfuric acid and adding potash in 1754. He realized that adding soda, potash, or an alkali to a solution of the new earth in sulfuric acid yielded alum. He described the earth as alkaline, as he had discovered it dissolved in acids when dried. Marggraf also described salts of this earth: chloride, nitrate and acetate. In 1758, French chemist Pierre Macquer wrote that alumina resembled a metallic earth. In 1760, French chemist Théodore Baron d'Hénouville expressed his confidence that alumina was a metallic earth.
In 1767, Swedish chemist Torbern Bergman synthesized alum by boiling alunite in sulfuric acid and adding potash to the solution. He also synthesized alum as a reaction product between sulfates of potassium and earth of alum, demonstrating that alum was a double salt. Swedish-German pharmaceutical chemist Carl Wilhelm Scheele demonstrated that both alum and silica originated from clay and that alum did not contain silicon in 1776. Writing in 1782, French chemist Antoine Lavoisier considered alumina an oxide of a metal with an affinity for oxygen so strong that no known reducing agents could overcome it.
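Bergman's demonstration that alum is a double salt can be summarized in modern notation (the formula of potash alum is added here for illustration; it was not known in this form at the time):

```latex
\mathrm{K_2SO_4} + \mathrm{Al_2(SO_4)_3} + 24\,\mathrm{H_2O}
\longrightarrow 2\,\mathrm{KAl(SO_4)_2 \cdot 12\,H_2O}
```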
Swedish chemist Jöns Jacob Berzelius suggested the formula AlO<sub>3</sub> for alumina in 1815. The correct formula, Al<sub>2</sub>O<sub>3</sub>, was established by German chemist Eilhard Mitscherlich in 1821; this helped Berzelius determine the correct atomic weight of the metal, 27.
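As an illustrative cross-check (using modern values, not from the period sources), the corrected formula fixes the metal's atomic weight once the measured aluminium content of alumina is known:

```latex
M(\mathrm{Al_2O_3}) = 2(27) + 3(16) = 102,
\qquad
w_{\mathrm{Al}} = \frac{2 \times 27}{102} \approx 52.9\%
```

With the earlier formula AlO<sub>3</sub>, the same measured mass fraction would have implied a different, incorrect atomic weight, which is why Mitscherlich's correction mattered.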
## Isolation of metal
> This amalgam quickly separates in air, and by distillation, in an inert atmosphere, gives a lump of metal which in color and luster somewhat resembles tin.
In 1760, Baron de Hénouville unsuccessfully attempted to reduce alumina to its metal. He claimed he had tried every method of reduction known at the time, though his methods were unpublished. It is probable he mixed alum with carbon or some organic substance, with salt or soda for flux, and heated it in a charcoal fire. Austrian chemists Anton Leopold Ruprecht and Matteo Tondi repeated Baron's experiments in 1790, significantly increasing the temperatures. They found small metallic particles they believed were the sought-after metal; but later experiments by other chemists showed these were iron phosphide from impurities in the charcoal and bone ash. German chemist Martin Heinrich Klaproth commented in the aftermath, "if there exists an earth which has been put in conditions where its metallic nature should be disclosed, if it had such, an earth exposed to experiments suitable for reducing it, tested in the hottest fires by all sorts of methods, on a large as well as on a small scale, that earth is certainly alumina, yet no one has yet perceived its metallization." Lavoisier in 1794 and French chemist Louis-Bernard Guyton de Morveau in 1795 melted alumina to a white enamel in a charcoal fire fed by pure oxygen but found no metal. American chemist Robert Hare melted alumina with an oxyhydrogen blowpipe in 1802, also obtaining the enamel, but still found no metal.
In 1807, British chemist Humphry Davy successfully electrolyzed alumina with alkaline batteries, but the resulting alloy contained potassium and sodium, and Davy had no means to separate the desired metal from these. He then heated alumina with potassium, forming potassium oxide, but was unable to produce the sought-after metal. In 1808, Davy set up a different experiment on electrolysis of alumina, establishing that alumina decomposed in the electric arc but formed metal alloyed with iron; he was unable to separate the two. Finally, he tried yet another electrolysis experiment, seeking to collect the metal on iron, but was again unable to separate the coveted metal from it. Davy suggested the metal be named alumium in 1808 and aluminum in 1812, thus producing the modern name. Other scientists used the spelling aluminium; the spelling aluminum regained usage in the United States in the following decades.
American chemist Benjamin Silliman repeated Hare's experiment in 1813 and obtained small granules of the sought-after metal, which almost immediately burned.
In 1824, Danish physicist Hans Christian Ørsted attempted to produce the metal. He reacted anhydrous aluminium chloride with potassium amalgam, yielding a lump of metal that looked similar to tin. He presented his results and demonstrated a sample of the new metal in 1825. In 1826, he wrote, "aluminium has a metallic luster and somewhat grayish color and breaks down water very slowly"; this suggests he had obtained an aluminium–potassium alloy, rather than pure aluminium. Ørsted placed little importance on his discovery. He did not notify either Davy or Berzelius, both of whom he knew, and published his work in a Danish magazine unknown to the European public. As a result, he is often not credited as the discoverer of the element; some earlier sources claimed Ørsted had not isolated aluminium.
Berzelius tried isolating the metal in 1825 by carefully washing the potassium analog of the base salt in cryolite in a crucible. Prior to the experiment, he had correctly identified the formula of this salt as K<sub>3</sub>AlF<sub>6</sub>. He found no metal, but his experiment came very close to succeeding and was successfully reproduced many times later. Berzelius's mistake was in using an excess of potassium, which made the solution too alkaline and dissolved all the newly formed aluminium.
German chemist Friedrich Wöhler visited Ørsted in 1827 and received explicit permission to continue the aluminium research, which Ørsted "did not have time" for. Wöhler repeated Ørsted's experiments but did not identify any aluminium. (Wöhler later wrote to Berzelius, "what Oersted assumed to be a lump of aluminium was certainly nothing but aluminium-containing potassium".) He conducted a similar experiment, mixing anhydrous aluminium chloride with potassium, and produced a powder of aluminium. After hearing about this, Ørsted suggested that his own aluminium might have contained potassium. Wöhler continued his research and in 1845 was able to produce small pieces of the metal and described some of its physical properties. Wöhler's description of the properties indicates that he had obtained impure aluminium. Other scientists also failed to reproduce Ørsted's experiment, and Wöhler was credited as the discoverer for many years. While Ørsted was not concerned with the priority of the discovery, some Danes tried to demonstrate he had obtained aluminium. In 1921, the reason for the inconsistency between Ørsted's and Wöhler's experiments was discovered by Danish chemist Johan Fogh, who demonstrated that Ørsted's experiment was successful thanks to use of a large amount of excess aluminium chloride and an amalgam with low potassium content. In 1936, scientists from American aluminium producing company Alcoa successfully recreated that experiment. However, many later sources still credit Wöhler with the discovery of aluminium, as well as its successful isolation in a relatively pure form.
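In modern notation (added here for illustration; the chemists of the period did not write it this way), the reduction Wöhler performed amounts to:

```latex
\mathrm{AlCl_3} + 3\,\mathrm{K} \longrightarrow \mathrm{Al} + 3\,\mathrm{KCl}
```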
## Early industrial production
> My first thought was I had laid my hands on this intermediate metal which would find its place in man's uses and needs when we would find the way of taking it out of the chemists' laboratory and putting it in the industry.
Since Wöhler's method could not yield large amounts of aluminium, the metal remained uncommon; its cost had exceeded that of gold before a new method was devised. In 1852, aluminium was sold at US\$34 per ounce. In comparison, the price of gold at the time was \$19 per ounce.
French chemist Henri Étienne Sainte-Claire Deville announced an industrial method of aluminium production in 1854 at the Paris Academy of Sciences. Aluminium chloride could be reduced by sodium, a metal more convenient and less expensive than potassium used by Wöhler. Deville was able to produce an ingot of the metal. Napoleon III of France promised Deville an unlimited subsidy for aluminium research; in total, Deville used 36,000 French francs—20 times the annual income of an ordinary family. Napoleon's interest in aluminium lay in its potential military use: he wished weapons, helmets, armor, and other equipment for the French army could be made of the new light, shiny metal. While the metal was still not displayed to the public, Napoleon is reputed to have held a banquet where the most honored guests were given aluminium utensils while others made do with gold.
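Deville's substitution of sodium for the potassium Wöhler had used leaves the stoichiometry unchanged; in modern notation (added here for illustration):

```latex
\mathrm{AlCl_3} + 3\,\mathrm{Na} \longrightarrow \mathrm{Al} + 3\,\mathrm{NaCl}
```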
Twelve small ingots of aluminium were later exhibited for the first time to the public at the Exposition Universelle of 1855. The metal was presented as "the silver from clay" (aluminium is visually very similar to silver), and this name was soon widely used. It attracted widespread attention; it was suggested aluminium be used in arts, music, medicine, cooking, and tableware. The metal was noticed by the avant-garde writers of the time—Charles Dickens, Nikolay Chernyshevsky, and Jules Verne—who envisioned its use in the future. However, not all attention was favorable. Newspapers wrote, "The Parisian expo put an end to the fairy tale of the silver from clay", saying that much of what had been said about the metal was exaggerated if not untrue and that the amount of the presented metal—about a kilogram—contrasted with what had been expected and was "not a lot for a discovery that was said to turn the world upside down". Overall, the fair led to the eventual commercialization of the metal. That year, aluminium was put on the market at a price of 300 F per kilogram. At the next fair in Paris in 1867, visitors were presented with aluminium wire and foil as well as a new alloy—aluminium bronze, notable for its low cost of production, high resistance to corrosion, and desirable mechanical properties.
Manufacturers did not wish to divert resources from producing well-known (and marketable) metals, such as iron and bronze, to experiment with a new one; moreover, produced aluminium was still not of great purity and differed in properties by sample. This led to an initial general reluctance to produce the new metal. Deville and partners established the world's first industrial production of aluminium at a smelter in Rouen in 1856. Deville's smelter moved that year to La Glacière and then Nanterre, and in 1857 to Salindres. For the factory in Nanterre, an output of 2 kilograms of aluminium per day was recorded, with a purity of 98%. Originally, production started with synthesis of pure alumina, which was obtained from calcination of ammonium alum. In 1858, Deville was introduced to bauxite and soon developed what became known as the Deville process, employing the mineral as a source for alumina production. In 1860, Deville sold his aluminium interests to Henri Merle, a founder of Compagnie d'Alais et de la Camargue; this company dominated the aluminium market in France decades later.
Some chemists, including Deville, sought to use cryolite as the source ore, but with little success. British engineer William Gerhard set up a plant with cryolite as the primary raw material in Battersea, London, in 1856, but technical and financial difficulties forced the closure of the plant in three years. British ironmaster Isaac Lowthian Bell produced aluminium from 1860 to 1874. During the opening of his factory, he waved to the crowd with a unique and costly aluminium top hat. No statistics about this production can be recovered, but it "cannot be very high". Deville's output grew to 1 metric ton per year in 1860; 1.7 metric tons in 1867; and 1.8 metric tons in 1872. At the time, demand for aluminium was low: for example, sales of Deville's aluminium by his British agents equaled 15 kilograms in 1872. Aluminium at the time was often compared with silver; like silver, it was found to be suitable for making jewelry and objets d'art. The price of aluminium steadily declined to 240 F in 1859; 200 F in 1862; and 120 F in 1867.
Other production sites began to appear in the 1880s. British engineer James Fern Webster launched the industrial production of aluminium by reduction with sodium in 1882; his aluminium was much purer than Deville's (it contained 0.8% impurities whereas Deville's typically contained 2%). World production of aluminium in 1884 equaled 3.6 metric tons. In 1884, American architect William Frishmuth combined production of sodium, alumina, and aluminium into a single technological process; this contrasted with the previous need to collect sodium, which combusts in water and sometimes air; his aluminium production cost was about \$16 per pound (compare to silver's cost of \$19 per pound, or the French price, an equivalent of \$12 per pound). In 1885, Aluminium- und Magnesiumfabrik started production in Hemelingen. Its production figures strongly exceeded those of the factory in Salindres, but the factory stopped production in 1888. In 1886, American engineer Hamilton Castner devised a method of cheaper production of sodium, which decreased the cost of aluminium production to \$8 per pound, but he did not have enough capital to construct a large factory like Deville's. In 1887, he constructed a factory in Oldbury; Webster constructed a plant nearby and bought Castner's sodium to use in his own production of aluminium. In 1889, German metallurgist Curt Netto introduced a method of reduction of cryolite with sodium that produced aluminium containing 0.5–1.0% impurities.
## Electrolytic production and commercialization
> I'm going for that metal.
Aluminium was first produced via electrolysis in 1854, independently by the German chemist Robert Wilhelm Bunsen and by Deville. Their methods did not become the basis for industrial production of aluminium because electrical supplies were inefficient at the time. This changed only with Belgian engineer Zénobe-Théophile Gramme's invention of the dynamo in 1870, which made creation of large amounts of electricity possible. The invention of three-phase current by Russian engineer Mikhail Dolivo-Dobrovolsky in 1889 made transmission of this electricity over long distances achievable. Soon after his discovery, Bunsen moved on to other areas of interest, while Deville's work was noticed by Napoleon III; this was the reason Deville's Napoleon-funded research on aluminium production had been started. Deville quickly realized electrolytic production was impractical at the time and moved on to chemical methods, presenting results later that year.
Electrolytic mass production remained difficult because electrolytic baths could not withstand prolonged contact with molten salts, succumbing to corrosion. The first attempt to overcome this for aluminium production was made by American engineer Charles Bradley in 1883. Bradley heated aluminium salts internally: the highest temperature was inside the bath and the lowest was on its walls, where the salts would solidify and protect the bath. Bradley then sold his patent claim to brothers Alfred and Eugene Cowles, who used it at a smelter in Lockport and later in Stoke-upon-Trent, but the method was modified to yield alloys rather than pure aluminium. Bradley applied for a patent in 1883; due to its broad wording, it was rejected as covering prior art. After a necessary two-year break, he re-applied. This process lasted for six years, as the patent office questioned whether Bradley's ideas were original. By the time Bradley was granted a patent, electrolytic aluminium production had already been in place for several years.
The first large-scale production method was independently developed by French engineer Paul Héroult and American engineer Charles Martin Hall in 1886; it is now known as the Hall–Héroult process. Electrolysis of pure alumina is impractical, given its very high melting point; both Héroult and Hall realized it could be greatly lowered by the presence of molten cryolite. Héroult was granted a patent in France in April and subsequently in several other European countries; he also applied for a U.S. patent in May. After securing a patent, Héroult could not find interest in his invention. When asking professionals for advice, he was told there was no demand for aluminium but some for aluminium bronze. The factory in Salindres did not wish to improve its process. In 1888, Héroult and his companions founded Aluminium Industrie Aktiengesellschaft and started industrial production of aluminium bronze in Neuhausen am Rheinfall. Then, Société électrométallurgique française was founded in Paris. They convinced Héroult to return to France, purchased his patents, and appointed him as the director of a smelter in Isère, which produced aluminium bronze on a large scale at first and pure aluminium in a few months.
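The net cell reaction of the Hall–Héroult process, with alumina dissolved in molten cryolite and the carbon anodes consumed by the liberated oxygen, can be written in modern notation (added here for illustration):

```latex
2\,\mathrm{Al_2O_3} + 3\,\mathrm{C} \longrightarrow 4\,\mathrm{Al} + 3\,\mathrm{CO_2}
```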
At the same time, Hall produced aluminium by the same process in his home at Oberlin. He applied for a patent in July, and the patent office notified Hall of an "interference" with Héroult's application. The Cowles brothers offered legal support. By then, Hall had failed to develop a commercial process for his first investors, and he turned to experimenting at Cowles' smelter in Lockport. He experimented for a year without much success but gained the attention of investors. Hall co-founded the Pittsburgh Reduction Company in 1888 and initiated production of aluminium. Hall's patent was granted in 1889. In 1889, Hall's production began to use the principle of internal heating. By September 1889, Hall's production grew to 385 pounds (175 kilograms) at a cost of \$0.65 per pound. By 1890, Hall's company still lacked capital and did not pay dividends; Hall had to sell some of his shares to attract investments. During that year, a new factory in Patricroft was constructed. The smelter in Lockport was unable to withstand the competition and shut down by 1892.
The Hall–Héroult process converts alumina into the metal. Austrian chemist Carl Joseph Bayer discovered a way of purifying bauxite to yield alumina in 1888 at a textile factory in Saint Petersburg and was issued a patent later that year; this is now known as the Bayer process. Bayer sintered bauxite with alkali and leached it with water; after stirring the solution and introducing a seeding agent to it, he found a precipitate of pure aluminium hydroxide, which decomposed to alumina on heating. In 1892, while working at a chemical plant in Yelabuga, he discovered that the aluminium content of bauxite dissolved in the alkaline leftover from isolation of alumina solids; this was crucial for the industrial employment of this method. He was issued a patent later that year.
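The stages Bayer combined can be summarized in modern notation (added here for illustration): caustic digestion of the bauxite's aluminium hydroxide, seeded precipitation of pure hydroxide, and calcination to alumina.

```latex
\mathrm{Al(OH)_3} + \mathrm{NaOH} \longrightarrow \mathrm{NaAlO_2} + 2\,\mathrm{H_2O}
\quad \text{(digestion)}

\mathrm{NaAlO_2} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{Al(OH)_3} + \mathrm{NaOH}
\quad \text{(precipitation)}

2\,\mathrm{Al(OH)_3} \longrightarrow \mathrm{Al_2O_3} + 3\,\mathrm{H_2O}
\quad \text{(calcination)}
```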
The total amount of unalloyed aluminium produced using Deville's chemical method from 1856 to 1889 equaled 200 metric tons. Production in 1890 alone was 175 metric tons. It grew to 715 metric tons in 1893 and to 4,034 metric tons in 1898. The price fell to \$2 per pound in 1889 and to \$0.5 per pound in 1894.
By the end of 1889, a consistently high purity of aluminium produced via electrolysis had been achieved. In 1890, Webster's factory went obsolete after an electrolysis factory was opened in England. Netto's main advantage, the high purity of the resulting aluminium, was outmatched by electrolytic aluminium and his company closed the following year. Compagnie d'Alais et de la Camargue also decided to switch to electrolytic production, and their first plant using this method was opened in 1895.
Modern production of aluminium metal is based on the Bayer and Hall–Héroult processes. It was further improved in 1920 by a team led by Norwegian engineer Carl Wilhelm Söderberg. Previously, anode electrodes had been made from pre-baked coal blocks, which quickly deteriorated and required replacement; the team introduced continuous electrodes made from a coke and tar paste in a reduction chamber. This advance greatly increased the world output of aluminium.
## Mass usage
> Give us aluminum in the right quantity, and we will be able to fight for another four years.
Prices for aluminium declined, and by the early 1890s, the metal had become widely used in jewelry, eyeglass frames, optical instruments, and many everyday items. Aluminium cookware began to be produced in the late 19th century and gradually supplanted copper and cast iron cookware in the first decades of the 20th century. Aluminium foil was popularized at that time. Aluminium is soft and light, but it was soon discovered that alloying it with other metals could increase its hardness while preserving its low density. Aluminium alloys found many uses in the late 19th and early 20th centuries. For instance, aluminium bronze was used to make flexible bands, sheets, and wire, and was widely employed in the shipbuilding and aviation industries. Aviation used a new aluminium alloy, duralumin, invented in 1903. Aluminium recycling began in the early 1900s and has been practiced extensively since, as aluminium is not impaired by recycling and thus can be recycled repeatedly. At this point, only pre-consumer scrap (metal that had never reached end-consumers) was recycled. During World War I, major governments demanded large shipments of aluminium for light, strong airframes. They often subsidized factories and the necessary electrical supply systems. Overall production of aluminium peaked during the war: world production of aluminium in 1900 was 6,800 metric tons; in 1916, annual production exceeded 100,000 metric tons. The war created a greater demand for aluminium, which the growing primary production was unable to fully satisfy, and recycling grew intensely as well. The peak in production was followed by a decline, then swift growth.
During the first half of the 20th century, the real price for aluminium fell continuously from \$14,000 per metric ton in 1900 to \$2,340 in 1948 (in 1998 United States dollars). There were some exceptions, such as the sharp price rise during World War I. Aluminium was plentiful, and in 1919 Germany began to replace its silver coins with aluminium ones; more and more denominations were switched to aluminium coins as hyperinflation progressed in the country. By the mid-20th century, aluminium had become a part of everyday life and an essential component of housewares. Aluminium freight cars first appeared in 1931. Their lower mass allowed them to carry more cargo. During the 1930s, aluminium emerged as a civil engineering material used in both basic construction and building interiors. Its use in military engineering for both airplanes and tank engines advanced.
Aluminium obtained from recycling was considered inferior to primary aluminium because of poorer chemistry control as well as poor removal of dross and slags. Recycling grew overall but depended largely on the output of primary production: for instance, as electric energy prices declined in the United States in the late 1930s, more primary aluminium could be produced using the energy-expensive Hall–Héroult process. This rendered recycling less necessary, and thus aluminium recycling rates went down. By 1940, mass recycling of post-consumer aluminium had begun.
During World War II, production peaked again, exceeding 1,000,000 metric tons for the first time in 1941. Aluminium was used heavily in aircraft production and was a strategic material of extreme importance; so much so that when Alcoa (successor of Hall's Pittsburgh Reduction Company and the aluminium production monopolist in the United States at the time) did not expand its production, the United States Secretary of the Interior proclaimed in 1941, "If America loses the war, it can thank the Aluminum Corporation of America". In 1939, Germany was the world's leading producer of aluminium; the Germans thus saw aluminium as their edge in the war. Aluminium coins continued to be used, but while they symbolized a decline on their introduction, by 1939, they had come to represent power. (In 1941, they began to be withdrawn from circulation to save the metal for military needs.) After the United Kingdom was attacked in 1940, it started an ambitious program of aluminium recycling; the newly appointed Minister of Aircraft Production appealed to the public to donate any household aluminium for airplane building. The Soviet Union received 328,100 metric tons of aluminium from its co-combatants from 1941 to 1945; this aluminium was used in aircraft and tank engines. Without these shipments, the output of the Soviet aircraft industry would have fallen by over half.
After the wartime peak, world production fell for three late-war and post-war years but then regained its rapid growth. In 1954, the world output equaled 2,810,000 metric tons; this production surpassed that of copper, historically second in production only to iron, making it the most produced non-ferrous metal.
## Aluminium Age
> Nothing stops time. One epoch follows another, and sometimes we don't even notice it. The Stone Age... The Bronze Age... The Iron Age... [...] However one may assert that it is now that we stand on the threshold of the Aluminium Age.
Earth's first artificial satellite, launched in 1957, consisted of two joined aluminium hemispheres. All subsequent spacecraft have used aluminium to some extent. The aluminium can was first manufactured in 1956 and employed as a container for drinks in 1958. In the 1960s, aluminium was employed for the production of wires and cables. Since the 1970s, high-speed trains have commonly used aluminium for its high strength-to-weight ratio. For the same reason, the aluminium content of cars is growing.
By 1955, the world market had been dominated by the Six Majors: Alcoa, Alcan (originated as a part of Alcoa), Reynolds, Kaiser, Pechiney (merger of Compagnie d'Alais et de la Camargue, which had bought Deville's smelter, and Société électrométallurgique française, which had hired Héroult), and Alusuisse (successor of Héroult's Aluminium Industrie Aktiengesellschaft); their combined share of the market equaled 86%. From 1945, aluminium consumption grew by almost 10% each year for nearly three decades, gaining ground in building applications, electric cables, basic foils, and the aircraft industry. In the early 1970s, an additional boost came from the development of aluminium beverage cans. The real price declined until the early 1970s; in 1973, the real price equaled \$2,130 per metric ton (in 1998 United States dollars). The main drivers of the drop in price were the decline of extraction and processing costs, technological progress, and the increase in aluminium production, which first exceeded 10,000,000 metric tons in 1971.
In the late 1960s, governments became aware of waste from the industrial production; they enforced a series of regulations favoring recycling and waste disposal. Söderberg anodes, which save capital and labor to bake the anodes but are more harmful to the environment (because of a greater difficulty in collecting and disposing of the baking fumes), fell into disfavor, and production began to shift back to the pre-baked anodes. The aluminium industry began promoting the recycling of aluminium cans in an attempt to avoid restrictions on them. This sparked recycling of aluminium previously used by end-consumers: for example, in the United States, levels of recycling of such aluminium increased 3.5 times from 1970 to 1980 and 7.5 times to 1990. Production costs for primary aluminium grew in the 1970s and 1980s, and this also contributed to the rise of aluminium recycling. Closer composition control and improved refining technology diminished the quality difference between primary and secondary aluminium.
In the 1970s, the increased demand for aluminium made it an exchange commodity; it entered the London Metal Exchange, the world's oldest industrial metal exchange, in 1978. Since then, aluminium has been traded for United States dollars and its price has fluctuated along with the currency's exchange rate. The need to exploit lower-grade deposits, fast-increasing input costs of energy and bauxite, and changes in exchange rates and greenhouse gas regulation increased the net cost of aluminium; the real price grew in the 1970s.
The increase in the real price, together with changes in tariffs and taxes, began a redistribution of world producers' shares: the United States, the Soviet Union, and Japan accounted for nearly 60% of the world's primary production in 1972 (and their combined share of consumption of primary aluminium was also close to 60%), but their combined share only slightly exceeded 10% in 2012. The production shift began in the 1970s with production moving from the United States, Japan, and Western Europe to Australia, Canada, the Middle East, Russia, and China, where it was cheaper due to lower electricity prices and favorable state regulation, such as low taxes or subsidies. Production costs in the 1980s and 1990s declined because of advances in technology, lower energy and alumina prices, and high exchange rates of the United States dollar.
In the 2000s, the BRIC countries' (Brazil, Russia, India and China) combined share grew from 32.6% to 56.5% in primary production and 21.4% to 47.8% in primary consumption. China has accumulated an especially large share of world production, thanks to an abundance of resources, cheap energy, and governmental stimuli; it also increased its share of consumption from 2% in 1972 to 40% in 2010. The only other country with a two-digit percentage was the United States with 11%; no other country exceeded 5%. In the United States, Western Europe and Japan, most aluminium was consumed in transportation, engineering, construction, and packaging.
In the mid-2000s, increasing energy, alumina and carbon (used in anodes) prices caused an increase in production costs. This was amplified by a shift in currency exchange rates: not only a weakening of the United States dollar, but also a strengthening of the Chinese yuan. The latter became important as most Chinese aluminium was relatively cheap.
World output continued growing: in 2018, it was a record 63,600,000 metric tons before falling slightly in 2019. Aluminium is produced in greater quantities than all other non-ferrous metals combined. Its real price (in 1998 United States dollars) in 2019 was \$1,400 per metric ton (\$2,190 per ton in contemporary dollars).
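The real-versus-contemporary dollar figures above are related by simple deflation with a price index. A minimal sketch of the conversion (the CPI levels below are approximate illustrative assumptions, not figures from this article):

```python
def to_real_price(nominal_price: float, cpi_current: float, cpi_base: float) -> float:
    """Deflate a nominal (contemporary) price into base-year dollars."""
    return nominal_price * cpi_base / cpi_current

# Approximate US CPI index levels (illustrative assumption, not from the article).
CPI_1998 = 163.0
CPI_2019 = 255.7

# $2,190/t in 2019 dollars comes out at roughly $1,400/t in 1998 dollars,
# consistent with the figures quoted above.
real_2019 = to_real_price(2190, CPI_2019, CPI_1998)
print(round(real_2019))
```

The same relation, run in reverse, recovers the contemporary price from the real one by multiplying instead of dividing by the index ratio.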
## See also
- List of countries by primary aluminium production
|
58,036,591 |
1959–60 Burnley F.C. season
| 1,170,319,343 | null |
[
"Burnley F.C. seasons",
"English football championship-winning seasons",
"English football clubs 1959–60 season"
] |
The 1959–60 season was Burnley Football Club's 61st season in the Football League, and their 13th consecutive campaign in the First Division, the top tier of English football. The team, and their manager Harry Potts, endured a tense season in which Tottenham Hotspur and Wolverhampton Wanderers were the other contenders for the league title. Burnley won their second First Division championship, and their first since 1920–21, on the last matchday with a 2–1 victory at Manchester City; they had not topped the table until the last match was played out. Only two players—Alex Elder and Jimmy McIlroy—had cost a transfer fee, while the others were recruited from Burnley's youth academy. With 80,000 inhabitants, the town of Burnley became one of the smallest to have hosted an English first-tier champion. In the FA Cup, Burnley reached the sixth round before being defeated by local rivals Blackburn Rovers after a replay. Burnley won the local Lancashire Cup for the fifth time in their history after defeating Manchester United in the final. After the regular season ended, the Burnley squad travelled to the United States to participate in the first edition of the International Soccer League.
During the season, 18 players made at least one appearance for the club, with Jimmy Adamson, Brian Miller and Ray Pointer present in all 50 competitive matches. The team's top goalscorer was John Connelly with 24 goals, including 20 in the league. The highest attendance recorded at home ground Turf Moor was 52,850 for the FA Cup fifth-round replay match against Bradford City; the lowest was 17,398 for a league game against Leeds United. The average league attendance at Turf Moor was 26,869, around one-third of the town's population.
## Background and pre-season
The 1959–60 campaign was Burnley's 61st season in the Football League, and their 13th consecutive season in the First Division, since promotion from the Second Division in 1946–47. The team had finished the 1958–59 season in seventh place and had reached the sixth round of the FA Cup. Burnley ended that campaign with eight wins in their final 13 league matches, and approached the new season with confidence. The club's chairman, Bob Lord, had been elected to the position in 1955. Lord only appointed managers with a previous playing career at the club; he selected Harry Potts for the post in February 1958. Burnley had become one of the most progressive clubs under Lord, who was described by the scriptwriter Arthur Hopcraft as "the Khrushchev of Burnley" as a result of his authoritarian attitude. Burnley were one of the first clubs to set up a purpose-built training ground (at Gawthorpe in 1955), which included a medical room, a gymnasium, three full-size pitches and an all-weather surface. The club also became renowned for its youth policy and scouting system. Burnley's scouts—including Jack Hixon—focused particularly on North East England, Scotland and Northern Ireland.
During matches, Potts often employed the then unfashionable 4–4–2 formation and he implemented a Total Football playing style. Billy Wright of Wolverhampton Wanderers described Burnley's playing style as "progressing [from defense to attack] by nicely controlled patterns with every man searching hungrily for space". Jimmy Greaves labelled the team's style of play as "smooth, skilled football that was a warming advertisement for all that was best about British football". Most Burnley players had been recruited from the club's youth academy—only Alex Elder and Jimmy McIlroy had cost a transfer fee. Both players were bought from Northern Irish club Glentoran; McIlroy transferred to Burnley for £8,000 in 1950, while Elder cost the club £5,000 in January 1959.
Potts made no major additions to his squad during pre-season, while Ken Bracewell (to Tranmere Rovers), Albert Cheesebrough (to Leicester City for £20,000), Doug Newlands (to Stoke City for £12,000) and Les Shannon (retired) left the club. On 17 August 1959, the team played a pre-season friendly against Glentoran, which was organised as part of Elder's transfer. Burnley defeated their opponents 8–1, with Jimmy Robson scoring four times. Burnley's kit remained unchanged from the previous seasons: a claret jersey with light blue sleeves, a light blue stripe around the collar, and white shorts along with claret and light blue socks.
## First Division
### August to December
Burnley's First Division campaign began with a 3–2 win over Leeds United at Elland Road on 22 August, with goals from Brian Pilkington, John Connelly, and Ray Pointer. Ahead of Burnley's first home match of the season at Turf Moor against Everton, Potts wrote in the club's matchday programme: "We pride ourselves on being a footballing team and no club can be more eager to meet the demand for better play". The team defeated Everton 5–2, but then lost 3–1 at home to West Ham United, despite taking the lead through Connelly. Burnley's form remained inconsistent: a 4–1 away loss against Chelsea was followed by 2–1 wins against local rivals Preston North End and West Bromwich Albion, after coming from behind on both occasions. Potts had selected the same starting line-up for the first seven matches, but he made several changes to his side for the reverse fixture against Preston on 15 September. Billy White replaced McIlroy, who was injured, while Bobby Seith had contracted giant urticaria and was replaced by Tommy Cummings, leaving the left-back position open for the 18-year-old Elder to make his debut. Burnley lost the fixture 1–0 but Elder played well against Preston's England international Tom Finney and remained in the starting line-up.
Burnley ended September by defeating Newcastle United and Birmingham City, both by a scoreline of 3–1. The team were a point behind league leaders Tottenham Hotspur, who were their next opponents. Both sides were missing key players—Spurs were without Dave Mackay and Danny Blanchflower, while McIlroy, Burnley's playmaker, was still absent. The match ended in a 1–1 draw, after defender Brian Miller equalised for Burnley in the 87th minute. Burnley then faced Lancashire rivals Blackpool at home; Burnley took the lead through Robson but the visitors scored four goals to win 4–1. Before the East Lancashire derby at Blackburn Rovers on 17 October, Potts received criticism from the Burnley supporters who objected to his "confusing playing style", such as the defenders switching positions during matches. Against Blackburn, Burnley equalised twice, but the hosts scored a third goal to win 3–2. McIlroy was back to full fitness for the match against Manchester City a week later and led the team to a 4–3 victory. Burnley ended October with a 1–1 draw at Luton Town after being 1–0 down. On 7 November, Burnley defeated Wolverhampton Wanderers 4–1 at Turf Moor; Wolves were the First Division champions of the previous two seasons. Two weeks later, Burnley recorded their largest post-war league win when they beat Nottingham Forest, the previous season's FA Cup winners, 8–0 at home. The team kept their first clean sheet of the season, and Robson became the club's first player in over 30 years to score five goals in one match. It was followed by a 1–0 loss against newly promoted Fulham on 28 November.
After beating Bolton Wanderers 4–0 on 5 December, Burnley defeated Arsenal 4–2 at Highbury a week later. Arsenal led 2–0 after the first half; during half-time Potts pushed McIlroy and Miller forward. The team turned the match around: Jimmy Adamson, Burnley's captain, scored a penalty kick halfway through the second half and Connelly completed a hat-trick inside 16 minutes. McIlroy received many plaudits for his performance, even though he had picked up a groin strain injury early during the game. With McIlroy absent, Burnley hosted last-place Leeds United on 19 December in front of a season-lowest crowd of 17,398. Leeds won 1–0 and Burnley slipped down from third to fourth place in the table, three points behind leaders Tottenham. On Boxing Day, Burnley defeated Manchester United 2–1 at Old Trafford; forward Ian Lawson came back into the team after three years and scored the winner. In the return fixture against United two days later, Burnley lost 4–1 in front of 47,696 spectators—the highest home league crowd of the season.
### January to May
The team's first match of 1960 resulted in a 5–2 victory away at title contenders West Ham. The Sunday Pictorial concluded: "If they go on playing like this they'll soon have nobody above them". Burnley then defeated Chelsea 2–1 on a snowy Turf Moor pitch and drew 0–0 with West Brom to end January in second place in the table. Burnley beat Newcastle United 2–1 on 6 February, the scorers being Robson with a shot from 30 yards (27 m) and Pointer with a lob. The match against Birmingham City a week later was postponed due to poor weather. On 1 March, the team recorded a 2–0 home win over league leaders Tottenham to close the gap to three points, but with two games in hand on Spurs. Burnley also defeated Blackburn Rovers 1–0 and Arsenal 3–2 to win three league matches in a row. On 30 March, Burnley played second-placed Wolverhampton Wanderers but were overwhelmed by "Wolves' fast, direct power play" and were defeated 6–1. The following game, at home against Sheffield Wednesday, ended in a 3–3 draw after Miller equalised for Burnley in the 88th minute. It was Seith's last match for the club; he read in the Burnley Express that he would not play the next game against Nottingham Forest and was aggrieved at not being told directly by Potts. A dispute followed, after which Seith was put on the transfer list. Potts moved Adamson to the centre-half position to partner Cummings, while Miller was placed in midfield. The team recorded three consecutive wins: Forest and Leicester City were both defeated 1–0, while Burnley beat Luton 3–0 despite only having 10 men for most of the game, after Pointer came off injured. Connelly scored the winning goal against Leicester, his 20th league goal of the season, but picked up a cartilage injury during the match and was out for the remainder of the season; he was replaced by Trevor Meredith. 
On 18 April, Meredith scored his first goal in a 2–1 defeat in the return game at Leicester, with former Burnley player Cheesebrough netting the winner for the home team. Five days later, Burnley drew 1–1 at Blackpool after the hosts equalised with six minutes remaining; Jim Furnell made his Burnley debut in goal as Adam Blacklaw was out injured. The team's main rivals for the league title, Tottenham and Wolverhampton Wanderers, met on the same day at Wolves' Molineux Stadium; Spurs won 3–1 to leave the title race open. Blacklaw returned in goal for the match against Birmingham City on 27 April, while Cummings, McIlroy and Miller also played despite carrying minor injuries; Burnley won 1–0 after a late goal from Pilkington. With the victory, the team moved up to second place behind Wolves on goal average and one point ahead of Tottenham. Burnley's last home match of the season ended in a goalless draw with Fulham, while Wolves and Tottenham were both victorious in their final games; Burnley needed to win their last match at Manchester City to claim the league title. On 2 May, in front of almost 66,000 spectators at Maine Road—including Wolves manager Stan Cullis and several of his players—Burnley went ahead after four minutes when Pilkington's shot deflected off City's Bert Trautmann into the net. The hosts soon equalised through Joe Hayes but Meredith's volley put Burnley back in front after half an hour. Blacklaw made several saves and the team held on to the lead. Burnley were crowned First Division champions for the second time, winning their first top-flight title in 39 years; they had not led the table until the final match was played out. The Daily Mirror noted: "Burnley, the team of quiet men—five of them are part-timers and the whole outfit cost less than £15,000—snatched the First Division Championship from the teeth of the famous Wolves".
Burnley's population had fallen by around 20 per cent since the club won the First Division in 1921; with 80,000 inhabitants in 1960, the town became one of the smallest to have hosted an English first-tier champion. During the season, Burnley attracted an average crowd of 26,869, around one-third of the town's population and the highest such ratio in the top flight. The team won the title with one of the lowest post-war point tallies (55), one of the smallest goal averages (1.39), and one of the highest numbers of goals conceded (61).
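Goal average, the First Division tie-breaker of the era and the source of the 1.39 figure quoted above, is simply goals scored divided by goals conceded. A minimal sketch using Burnley's league totals from this article (85 scored, 61 conceded):

```python
def goal_average(goals_for: int, goals_against: int) -> float:
    """Tie-breaker used in the First Division at the time: goals for / goals against."""
    return goals_for / goals_against

# Burnley's 1959-60 league totals: 85 scored, 61 conceded.
print(round(goal_average(85, 61), 2))  # 1.39
```

Unlike the later goal-difference rule, a side conceding heavily could still post a respectable goal average by outscoring opponents proportionally, which is why Burnley's 61 goals against did not cost them here.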
### Match results
Key
- In result column, Burnley's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal
Results
Source:
### Partial league table
Source:
## FA Cup
Burnley entered the season's FA Cup in the third round where they were drawn away against Second Division side Lincoln City; the game finished in a 1–1 draw, necessitating a replay at Turf Moor. Although not fully fit, McIlroy returned to the starting line-up and opened the scoring from the penalty spot. He then provided the assist for Pilkington's headed goal to lead Burnley to a 2–0 victory and qualification for the fourth round. The team faced mid-table Second Division side Swansea Town at Vetch Field and drew 0–0; the Swansea manager Trevor Morris was confident and stated: "We'll win the replay". At Turf Moor, Robson scored twice to put Burnley 2–0 ahead, before Swansea's Mel Nurse halved Burnley's lead in the 83rd minute. The team held on to set up a fifth-round fixture with Third Division side Bradford City, who were undefeated in 18 matches.
City's Valley Parade pitch was very muddy, which hindered Burnley in their passing game, and City took a 2–0 lead. With ten minutes remaining, Connelly dribbled through the Bradford defence and put the ball past their goalkeeper. He scored his second goal in injury time to salvage a replay for Burnley, following a scramble in City's penalty area. The replay took place three days later at Turf Moor, in front of an official attendance of 52,850. Some of the gates were broken down, however, and many uncounted fans poured into the ground. The road from Bradford was closed due to the traffic; numerous Bradford City and Burnley supporters were denied entry by the local police. On an icy Turf Moor pitch, Burnley ran out 5–0 winners and advanced to the sixth round.
Burnley were drawn at home for the first time in the FA Cup campaign. On 12 March, they faced arch-rivals Blackburn Rovers in front of 51,501 spectators at Turf Moor. Burnley quickly went 2–0 up in the second half: McIlroy set up both goals, with Pilkington and then Pointer finding the net. Connelly added a third before Blackburn scored three times in the final 15 minutes to draw the match 3–3. After a goalless 90 minutes in the replay, Rovers scored twice in extra time to eliminate Burnley from the competition. Robson had played while being ill; McIlroy was again not fully fit and concluded: "I had probably my poorest ever game in a Burnley shirt". Blackburn advanced to the final where they lost 3–0 to Wolves.
### Match results
Key
- In result column, Burnley's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal
Results
Source:
## Player details
Potts used only 18 different players in the First Division during the season, the lowest number in the division; ten players scored at least one goal and one opposition player scored an own goal. The team usually played in a 4–4–2 formation throughout the season, with four defenders, four midfielders and two forwards. Miller, Adamson and Pointer featured in all 50 league and cup games; Blacklaw, John Angus and Pilkington each missed one First Division match and made 49 appearances for the club. Angus would go on to set a club record for appearances for outfield players with 521. Gordon Harris made just two appearances for Burnley in the First Division, while Billy Marshall and Furnell featured in only one league match during the campaign. Connelly was the top goalscorer for Burnley with 24 goals, including 20 in the league. With a tally of 23 goals, Pointer was the second-highest scorer, followed by Robson with 22; both players also scored four goals in the FA Cup. Connelly, Pointer and Robson scored two-thirds of the club's 85 league goals.
Source:
## Minor competitions
### Lancashire Cup
Burnley also participated in the local Lancashire Cup, although their starting line-ups consisted primarily of reserve and youth players. Their first game, against Manchester City on 23 November, ended in a 5–1 victory with five different goalscorers for the team—White, Ron Fenton, Ian Towers, Andy Lochhead and Harris. Burnley's next match, at Chester, ended in a 3–1 win and qualification for the semi-final. Burnley were drawn away against Preston North End and won by a scoreline of 3–0. Burnley secured their fifth Lancashire Cup title after winning the final 4–2 against Manchester United at Turf Moor. Walter Joyce and Harris each scored one goal, while Lochhead netted twice.
#### Match results
Key
- In result column, Burnley's score shown first
- H = Home match
- A = Away match
- pen. = Penalty kick
- o.g. = Own goal
Results
Source:
### International Soccer League
After the regular season ended, Burnley travelled to the United States to represent England in the initial edition of the International Soccer League, which was the first modern attempt to create an American soccer league. The team entered the first group, together with five other sides from Europe and North America. Burnley beat their first opponents Bayern Munich 3–0 through goals from Pointer, Pilkington and Miller. In the second game, Burnley faced Kilmarnock, the 1959–60 Scottish Football League runners-up, and lost 2–0. The team then drew 3–3 with home side New York Americans before defeating Northern Irish club Glenavon 6–2, with Pilkington scoring a hat-trick. In their final group match, Burnley faced French club Nice, who had won four Ligue 1 titles during the 1950s, most recently in 1958–59. Burnley defeated Nice 4–0 and finished as runners-up in the group behind Kilmarnock, who advanced to the final but lost against Brazilian side Bangu. Although Burnley faced strong opponents, the players found it hard to take the tournament seriously. The stadium announcer often misinterpreted the referee's decisions, the crowd showed little interest in the games, and according to McIlroy, every match would end with a countdown "worthy of a space-rocket launching".
#### Match results
Key
- In result column, Burnley's score shown first
- H = Home match
- A = Away match
- N = Neutral match
- pen. = Penalty kick
- o.g. = Own goal
Results
Source:
## Aftermath
Bobby Seith, who had been put on the transfer list by the club, was sold to Dundee in August 1960 for a fee of £7,500. Although he had made 27 First Division appearances during the season, Seith was not awarded a championship medal at the time; he finally received one in 1999. Burnley's championship-winning team remained intact going into the 1960–61 season and was strengthened with reserve and youth players such as Joyce and Lochhead. Burnley went on to compete in six different competitions the following season—the First Division, the FA Cup, the newly created Football League Cup, the FA Charity Shield, the European Cup and the Lancashire Cup.
|
1,001,596 |
Richard Williams (RAAF officer)
| 1,148,305,799 |
Royal Australian Air Force chief
|
[
"1890 births",
"1980 deaths",
"Australian Army officers",
"Australian Companions of the Distinguished Service Order",
"Australian Companions of the Order of the Bath",
"Australian Flying Corps officers",
"Australian Knights Commander of the Order of the British Empire",
"Australian aviators",
"Australian military personnel of World War I",
"Australian people of Cornish descent",
"Australian public servants",
"Graduates of the Royal College of Defence Studies",
"Military personnel from South Australia",
"People from Moonta, South Australia",
"Royal Australian Air Force air marshals of World War II"
] |
Air Marshal Sir Richard Williams (3 August 1890 – 7 February 1980) is widely regarded as the "father" of the Royal Australian Air Force (RAAF). He was the first military pilot trained in Australia, and went on to command Australian and British fighter units in World War I. A proponent of air power independent of the other branches of the armed services, Williams played a leading role in the establishment of the RAAF and became its first Chief of the Air Staff (CAS) in 1922. He served as CAS for thirteen years over three terms, longer than any other officer.
Williams came from a working-class background in South Australia. He was a lieutenant in the Army when he learned to fly at Point Cook, Victoria, in 1914. As a pilot with the Australian Flying Corps (AFC) in World War I, Williams rose to command No. 1 Squadron AFC, and later 40th Wing RAF. He was awarded the Distinguished Service Order and finished the war a lieutenant colonel. Afterwards he campaigned for an Australian Air Force run separately to the Army and Navy, which came into being on 31 March 1921.
The fledgling RAAF faced several challenges to its continued existence in the 1920s and early 1930s, and Williams received much of the credit for maintaining its independence, but an adverse report on flying safety standards saw him dismissed from the position of CAS and seconded to the RAF prior to World War II. Despite some support for his reinstatement as Air Force chief, and promotion to air marshal in 1940, he never again led the RAAF. After the war he was forcibly retired along with other World War I veteran officers. He took up the position of Director-General of Civil Aviation in Australia, and was knighted the year before his retirement in 1955.
## Early life and career
Williams was born on 3 August 1890 into a working-class family in Moonta Mines, South Australia. He was the eldest son of Richard Williams, a copper miner who had emigrated from Cornwall, England, and his wife Emily. Leaving Moonta Public School at junior secondary level, Williams worked as a telegraph messenger and later as a bank clerk. He enlisted in a militia unit, the South Australian Infantry Regiment, in 1909 at the age of nineteen. Commissioned a second lieutenant on 8 March 1911, he joined the Permanent Military Forces the following year.
In August 1914, Lieutenant Williams took part in Australia's inaugural military flying course at Central Flying School, run by Lieutenants Henry Petre and Eric Harrison. After soloing in a Bristol Boxkite around the airfield at Point Cook, Victoria, Williams became the first student to graduate as a pilot, on 12 November 1914. He recalled the school as a "ragtime show" consisting of a paddock, tents, and one large structure: a shed for the Boxkite. Following an administrative and instructional posting, Williams underwent advanced flying training at Point Cook in July 1915. The next month he married Constance Esther Griffiths, who was thirteen years his senior. The couple had no children.
## World War I
Williams was promoted captain on 5 January 1916. He was appointed a flight commander in No. 1 Squadron Australian Flying Corps (AFC), which was initially numbered 67 Squadron Royal Flying Corps by the British. The unit departed Australia in March 1916 without any aircraft; after arriving in Egypt it received B.E.2 aircraft, a type deficient in speed and manoeuvrability that also lacked forward-firing machine guns. Williams wrote that in combat with the German Fokkers, "our fighting in the air was of short duration but could mean a quick end", and that when it came to bombing, he and his fellow pilots "depended mainly on luck". He further quoted a truism in the Flying Corps that "if a new pilot got through his first three days without being shot down he was lucky; if he got through three weeks he was doing well and if he got through three months he was set". Williams and the other Australians were initially involved in isolated tasks around the Suez Canal, attached to Royal Flying Corps (RFC) units. No. 1 Squadron began to operate concertedly in December 1916, supporting the Allied advance on Palestine. Williams completed his RFC attachment in February 1917.
On 5 March 1917, shortly after commencing operations with No. 1 Squadron, Williams narrowly avoided crash-landing when his engine stopped while he was bombing the railway terminus at Tel el Sheria. At first believing that he had been struck by enemy fire, he found that the engine switch outside his cockpit had turned off. Within 500 feet of the ground he was able to switch the engine back on and return to base. On 21 April, Williams landed behind enemy lines to rescue downed comrade Lieutenant Adrian Cole, having the day before pressed home an attack on Turkish cavalry while under "intense anti-aircraft fire"; these two actions earned him the Distinguished Service Order, the citation for which reads:
> For conspicuous gallantry and devotion to duty. Flying at a low altitude under intense anti-aircraft fire, he attacked and dispersed enemy troops who were concentrating on our flank. On another occasion, whilst on a reconnaissance, he landed in the enemy's lines, and rescued a pilot of a machine which had been brought down by hostile fire.
He was promoted major in May and given command of No. 1 Squadron, which was re-equipped with Bristol Fighters later that year. "Now for the first time," wrote Williams, "after 17 months in the field we had aircraft with which we could deal with our enemy in the air." His men knew him as a teetotaller and non-smoker, whose idea of swearing was an occasional "Darn me!".
In June 1918, Williams was made a brevet lieutenant colonel and commander of the RAF's 40th (Army) Wing, which was operating in Palestine. It comprised his former No. 1 Squadron and three British units. As a Dominion officer, Williams found that he was not permitted to "exercise powers of punishment over British personnel", leading to him being temporarily "granted a supplementary commission in the Royal Air Force". Augmented by a giant Handley Page bomber, his forces took part in the Battle of Armageddon, the final offensive in Palestine, where they inflicted "wholesale destruction" on Turkish columns. Of 40th Wing's actions at Wadi Fara on 21 September 1918, Williams wrote: "The Turkish Seventh Army ceased to exist and it must be noted that this was entirely the result of attack from the air." He also sent Captain Ross Smith in the Handley Page, accompanied by two Bristol Fighters, to aid Major T. E. Lawrence's Arab army north of Amman when it was harassed by German aircraft operating from Deraa. In November, Williams was appointed temporary commander of the Palestine Brigade, which comprised his previous command, the 40th (Army) Wing, and 5th (Corps) Wing. His service in the theatre later saw him awarded the Order of the Nahda by the King of the Hejaz. Twice mentioned in despatches, by the end of the war Williams had established himself, according to Air Force historian Alan Stephens, as "the AFC's rising star".
## Inter-war years
### Birth of the Royal Australian Air Force
Appointed an Officer of the Order of the British Empire in the 1919 New Year Honours, Williams served as Staff Officer, Aviation, at Australian Imperial Force (AIF) headquarters in London, before returning to Australia and taking up the position of Director of Air Services at Army Headquarters, Melbourne. The Australian Flying Corps had meanwhile been disbanded and replaced by the Australian Air Corps (AAC) which was, like the AFC, a branch of the Army.
Upon establishment of the Australian Air Board on 9 November 1920, Williams and his fellow AAC officers dropped their army ranks in favour of those based on the Royal Air Force. Williams, now a wing commander, personally compiled and tabled the Air Board's submissions to create the Australian Air Force (AAF), a service independent of both the Army and the Royal Australian Navy. Though the heads of the Army and Navy opposed the creation of an independent air arm for fear that they would be unable to find air cover for their operations, support from Prime Minister Billy Hughes, as well as prominent parliamentary figures including Treasurer Joseph Cook and Defence Minister George Pearce allowed the proposal to succeed. The AAF was duly formed on 31 March 1921; Williams deliberately chose this day rather than 1 April, the founding date of the RAF three years earlier, "to prevent nasty people referring to us as 'April Fools'". The "Royal" prefix was added five months later. Williams proposed an ensign for the AAF in July 1921, based on the Royal Air Force flag but featuring the five stars of the Southern Cross within the RAF roundel and the Commonwealth Star in the lower hoist quarter. This design was not adopted for the RAAF, the government employing instead a direct copy of the RAF ensign until 1949, when a new design using the stars of the Australian Flag was chosen.
As the senior officer of the Air Board, Williams held the title of First Air Member, the nascent Air Force initially not being deemed suitable for a "Chief of Staff" appointment equivalent to the Army and Navy. He moved to consolidate the new service's position by expanding its assets and training. Shortly after the AAF's establishment, land was purchased for an air base at Laverton, eight kilometres (five miles) inland of Point Cook, and in July 1921 Williams made the initial proposal to develop a base at Richmond, New South Wales, the first outside Victoria. He also started a program to second students from the Army and Navy, including graduates of the Royal Military College, Duntroon, to bolster officer numbers; candidates secured through this scheme included future Air Force chiefs John McCauley, Frederick Scherger, Valston Hancock and Alister Murdoch, along with other senior identities such as Joe Hewitt and Frank Bladin. As a leader, Williams would gain a reputation for strong will, absorption in administrative minutiae and, in Alan Stephens' words, a "somewhat puritanical" nature. He became known throughout the service as "Dicky".
### Chief of the Air Staff
The position of First Air Member was replaced by Chief of the Air Staff (CAS) in October 1922. Williams would serve as CAS three times over seventeen years in the 1920s and '30s, alternating with Wing Commander (later Air Vice Marshal) Stanley Goble. One motive suggested for the rotation was a ploy by Army and Navy interests to "curb Williams' independence". Instead the arrangement "almost inevitably fostered an unproductive rivalry" between the two officers. Although in a legal sense the Air Board was responsible for the RAAF rather than the Chief of Staff alone, Williams dominated the board to such an extent that Goble would later complain that his colleague appeared to consider the Air Force his personal command.
Williams spent much of 1923 in England, attending the British Army Staff College in Camberley and RAF Staff College, Andover, followed by further study in Canada and the United States the following year. Goble served as Chief of the Air Staff in his absence. Shortly after his return in February 1925, Williams scuppered a plan by Goble to establish a small seaplane base at Rushcutters Bay in Sydney, instead organising purchase of Supermarine Seagulls, the RAAF's first amphibious aircraft, to be based at Richmond. He was promoted to group captain in July and later that year drafted a major air warfare study, "Memorandum Regarding the Air Defence of Australia". Considered prescient in many ways, it treated World War I ally Japan as Australia's main military threat, and advocated inter-service co-operation while maintaining that none of the armed forces was "purely auxiliary to another". Its concepts continue to influence RAAF strategy.
In 1926, Williams mandated the use of parachutes for all RAAF aircrew. He had visited the Irvin Air Chute Company while in the US during 1924 and recommended purchase at the time, but a backlog of orders for the RAF meant that the Australian equipment took almost two years to arrive. Flying Officer Ellis Wackett was assigned to instruct volunteers at RAAF Richmond, and made the country's first freefall descent from a military aircraft, an Airco DH.9, on 26 May. Williams himself jumped over Point Cook on 5 August, having decided that it would set "a good example if, before issuing an order for the compulsory wearing of parachutes, I showed my own confidence in them ..." Though his descent took him perilously close to the base water tank ("I thought it would be a poor ending to drown there, or even to be pulled out dripping wet") and "too close to be comfortable to a 30,000 volt electric transmission line", he completed the exercise unscathed.
The young Air Force was a small organisation with the atmosphere of a flying club, although several pioneering flights were made by its members. Goble had commanded the first circumnavigation of Australia by air in 1924 while he was CAS. On 25 September 1926, with two crew members including Goble's pilot, Ivor McIntyre, Williams commenced a 10,000-mile (16,000 km) round trip from Point Cook to the Solomon Islands in a de Havilland DH.50A floatplane, to study the South Pacific region as a possible theatre of operations. The trio returned on 7 December to a 12-plane RAAF escort and a 300-man honour guard. Though seen partly as a "matter of prestige" brought on by contemporary newspaper reports that claimed "'certain Foreign Powers'" were planning such a journey, and also as a "reaction" by Williams to Goble's 1924 expedition, it was notable as the first international flight undertaken by an RAAF plane and crew. Williams was appointed a Commander of the Order of the British Empire (CBE) in the 1927 King's Birthday Honours in recognition of the achievement, and promoted to air commodore on 1 July the same year.
As CAS, Williams had to contend with serious challenges to the RAAF's continued existence from the Army and Navy in 1929 and 1932, arising from the competing demands for defence funding during the Great Depression. According to Williams, only after 1932 was the independence of the Air Force assured. Williams again handed over the reins of CAS to Goble in 1933 to attend the Imperial Defence College in London, resuming his position in June 1934. His promotion to air vice marshal on 1 January 1935 belatedly raised him to the equivalent rank of his fellow Chiefs of Staff in the Army and Navy. He was appointed a Companion of the Order of the Bath in June that year.
Williams encouraged the local aircraft industry as a means to further the self-sufficiency of the Air Force and Australian aviation in general. He played a personal part in the creation of the Commonwealth Aircraft Corporation in November 1936, headed up by former Squadron Leader Lawrence Wackett, late of the RAAF's Experimental Section. Williams made the first overseas flight in an aeroplane designed and built in Australia when he accompanied Squadron Leader Allan Walters and two aircrew aboard a Tugan Gannet to Singapore in February 1938.
A series of mishaps with Hawker Demons at the end of 1937, which left one pilot dead and four injured, subjected the Air Force to harsh public criticism. In 1939 Williams was dismissed from his post as CAS and "effectively banished overseas", following publication of the Ellington Report that January. Its author, Marshal of the Royal Air Force Sir Edward Ellington, criticised the level of air safety observed in the RAAF, though his interpretation of statistics has been called into question. The Federal government praised Williams for strengthening the Air Force but blamed him for Ellington's findings, and he was criticised in the press. Beyond the adverse report, Williams was thought to have "made enemies" through his strident championing of the RAAF's independence. A later CAS, George Jones, contended that Ellington had been "invited to Australia in order to inspect Williams rather than the air force and to recommend his removal from the post of Chief of the Air Staff if necessary". The government announced that it was seconding him to the RAF for two years.
## World War II
When war broke out in September 1939, Williams was Air Officer in charge of Administration at RAF Coastal Command, a position he had held since February that year, following a brief posting to the British Air Ministry. Goble had succeeded Williams as Chief of the Air Staff for the last time but clashed with the Federal government over implementation of the Empire Air Training Scheme and stepped down in early 1940. Williams was recalled from Britain with the expectation of again taking up the RAAF's senior position but Prime Minister Robert Menzies insisted on a British officer commanding the service, over the protest of his Minister for Air, James Fairbairn, and the RAF's Air Chief Marshal Sir Charles Burnett became CAS. In his volume in the official history of the Air Force in World War II, Douglas Gillison observed that considering Williams' intimate knowledge of the RAAF and its problems, and his long experience commanding the service, "it is difficult to see what contribution Burnett was expected to make that was beyond Williams' capacity". Williams was appointed Air Member for Organisation and Equipment and promoted to air marshal, the first man in the RAAF to achieve this rank.
Williams returned to England in October 1941 to set up RAAF Overseas Headquarters, co-ordinating services for the many Australians posted there. He maintained that Australian airmen in Europe and the Mediterranean should serve in RAAF units to preserve their national identity, as per Article XV of the Empire Air Training Scheme, rather than be integrated into RAF squadrons, but in practice most served in British units. Even nominally "RAAF" squadrons formed under the Scheme were rarely composed primarily of Australians, and Williams' efforts to establish a distinct RAAF Group within Bomber Command, similar to the Royal Canadian Air Force's No. 6 Group, did not come to fruition. He was able to negotiate improved conditions for RAAF personnel in Europe, including full Australian pay scales as opposed to the lower RAF rates that were offered initially.
When Air Chief Marshal Burnett completed his term in 1942, Williams was once more considered for the role of CAS. This was vetoed by Prime Minister John Curtin and the appointment unexpectedly went to acting Air Commodore George Jones. A mooted Inspector Generalship of the Air Force, which would have seen Williams reporting directly to the Minister for Air, also failed to materialise. Instead Williams was posted to Washington, D.C. as the RAAF's representative to the Combined Chiefs of Staff in the United States, and remained there until the end of the war.
## Later career
In 1946, Williams was forced into retirement despite being four years below the mandatory age of 60. All other senior RAAF commanders who were veteran pilots of World War I, with the exception of the then Chief of the Air Staff, Air Vice Marshal Jones, were also dismissed, ostensibly to make way for the advancement of younger officers. Williams regarded the grounds for his removal as "specious", calling it "the meanest piece of service administration in my experience".
Following his departure from the Air Force, Williams was appointed Australia's Director-General of Civil Aviation, serving in the position for almost 10 years. His department was responsible for the expansion of communications and infrastructure to support domestic and international aviation, establishing "an enviable safety record". Williams' tenure coincided with the beginnings of the government carrier Trans Australia Airlines (TAA) and introduction of the Two Airlines Policy, as well as the construction of Adelaide Airport and redevelopment of Sydney Airport as an international facility.
Williams' wife Constance died in 1948 and he married Lois Victoria Cross on 7 February 1950. He was appointed Knight Commander of the Order of the British Empire (KBE) in the 1954 New Year Honours, the year before he retired from the Director-Generalship of Civil Aviation. He then took up a place on the board of Tasman Empire Airways Limited (TEAL), forerunner of Air New Zealand. In 1977, Williams published his memoirs, These Are Facts, described in 2001 as "immensely important if idiosyncratic ... the only substantial, worthwhile record of service ever written by an RAAF chief of staff".
Sir Richard Williams died in Melbourne on 7 February 1980. He was accorded an Air Force funeral, with a flypast by seventeen aircraft.
## Legacy
For his stewardship of the Air Force before World War II, as well as his part in its establishment in 1921, Williams is considered the "father" of the RAAF. The epithet had earlier been applied to Eric Harrison, who had sole charge of Central Flying School after Henry Petre was posted to the Middle East in 1915, and was also a founding member of the RAAF. By the 1970s, the mantle had settled on Williams. Between the wars he had continually striven for his service's status as a separate branch of the Australian armed forces, seeing off several challenges to its independence from Army and Navy interests. He remains the RAAF's longest-serving Chief, totalling thirteen years over three terms: October to December 1922; February 1925 to December 1932; and June 1934 to February 1939.
In his 1925 paper "Memorandum Regarding the Air Defence of Australia", Williams defined "the fundamental nature of Australia's defence challenge" and "the enduring characteristics of the RAAF's strategic thinking". Ignored by the government of the day, the study's operational precepts became the basis for Australia's defence strategy in the 1980s, which remains in place in the 21st century. His input to debate in the 1930s around the "Singapore strategy" of dependence on the Royal Navy for the defence of the Pacific region has been criticised as limited, and as having "failed to demonstrate the validity of his claims for the central role of air power".
Williams' legacy extends to the very look of the RAAF. He personally chose the colour of the Air Force's winter uniform, a shade "somewhere between royal and navy blue", designed to distinguish it from the lighter Royal Air Force shade. Unique at the time among Commonwealth forces, the uniform was changed to an all-purpose middle blue suit in 1972 but following many complaints in the ensuing years reverted to Williams' original colour and style in 2000.
Memorials to Williams include Sir Richard Williams Avenue at Adelaide Airport, and RAAF Williams in Victoria, established in 1989 after the merger of Point Cook and Laverton bases. The Sir Richard Williams Trophy, inaugurated in 1974, is presented to the RAAF's "Fighter Pilot of the Year". In 2005, Williams' Australian Flying Corps wings, usually on display at the RAAF Museum in Point Cook, were carried into space and back on a shuttle flight by Australian-born astronaut Dr Andy Thomas. The Williams Foundation, named in his honour, was launched in February 2009 "to broaden public debate on issues relating to Australian defence and security".
|
7,483,495 |
Gemini (2002 film)
| 1,166,916,922 |
2002 Tamil film directed by Saran
|
[
"2000s Tamil-language films",
"2000s crime action films",
"2000s masala films",
"2002 films",
"AVM Productions films",
"Films directed by Saran",
"Films scored by Bharadwaj (composer)",
"Films shot in Switzerland",
"Indian crime action films",
"Indian films based on actual events",
"Tamil films remade in other languages"
] |
Gemini is a 2002 Indian Tamil-language crime action film written and directed by Saran and produced by AVM Productions. The film stars Vikram in the title role of a small-time criminal and aspiring don who, after falling in love, decides to refrain from crime; Kiran Rathod plays his love interest. Murali stars as Singaperumal, a police officer who inspires and guides Gemini in his attempts to reform. The cast includes Kalabhavan Mani as the antagonist while Vinu Chakravarthy, Manorama and Thennavan portray significant roles. Based on gang wars in Chennai, the film delves into the lives of outlaws and the roles the police and society play in their rehabilitation and acceptance.
In early 2001, rival gangsters "Vellai" Ravi and Chera reformed themselves with the patronage of a police officer. Saran was inspired by this incident and scripted a story based on it. Production began shortly afterwards in December the same year and was completed by March 2002. The film was shot mainly at the AVM Studios in Chennai, while two song sequences were filmed in Switzerland. The film had cinematography by A. Venkatesh and editing by Suresh Urs while the soundtrack was scored by Bharadwaj.
The soundtrack was well received, with the song "O Podu" becoming a sensation in Tamil Nadu. Gemini was released two days ahead of the Tamil New Year on 12 April 2002 and received mixed reviews, with praise for the performances of Vikram and Mani but criticism of Saran's script. Made at an estimated cost of ₹40 million (US\$500,000), the film earned more than ₹200 million (US\$2.5 million) at the box office and became one of the highest-grossing Tamil films of the year. Its success, largely attributed to the popularity of "O Podu", resurrected the Tamil film industry, which was experiencing difficulties after a series of box office failures. The film won three Filmfare Awards, three ITFA Awards and four Cinema Express Awards. Later that same year, Saran remade the film in Telugu as Gemeni.
## Plot
Teja is a high-profile gangster in North Madras who often imitates the behaviour of different animals for sarcastic effect. Accompanied by his gang, he arrives at a magistrates' court for a hearing. His animal antics are mocked by "Chintai" Jeeva, another accused. Teja and his gang retaliate and kill Jeeva within the court premises. Jeeva was a member of a rival gang headed by Gemini, an aspiring goon from Chintadripet who wants to dethrone Teja and take his place. To avenge Jeeva's death, Gemini hunts down the murderer Pandian, and Isaac, one of Gemini's men, kills him. This incident leads to a feud between Gemini and Teja, and a fight for supremacy ensues. Pandian's mother Annamma, a destitute woman, locates the whereabouts of her son's murderers. She approaches Gemini, becomes the gang's cook, and awaits a chance to poison them.
Gemini meets and falls in love with Manisha Natwarlal, a free-spirited North Indian college student. To pursue her, he joins an evening class at her college and she falls in love with him, unaware of his true identity. Two businessmen approach Gemini to evict traders from a market so that a shopping complex can be built in its place. As the market is in his control, Gemini refuses the offer, and the businessmen hire Teja to execute the job. Feigning an altercation with Gemini, his sidekick Kai joins Teja's gang, acts as the inside man, and foils the plan. Teja becomes enraged at being outsmarted by Gemini.
Singaperumal, an astute police officer, is promoted to the position of Director General of Police (DGP). Keen on eradicating crime, he arrests both Gemini and Teja, and the arrests are made "off the record" owing to their political influence. Aware of the rivalry between them, Singaperumal puts them in a private cell so they can beat each other to death. While Teja tries to exact revenge for the market issue, Gemini does not fight back but persuades Teja to trick Singaperumal by pleading guilty and requesting a chance to reform. Gemini's trick works, and they are released.
Since Gemini was arrested at the college, Manisha discovers his identity and resents him. To regain her attention, Gemini reforms his ways. Though his gang initially disapproves of it, they relent. As Gemini and his gang regret their actions, Annamma reveals her true identity and forgives them. Singaperumal helps Gemini get back into college and reunite with Manisha. Teja returns to his gang and continues his illegal activities. He pesters Gemini to help him in his business. Gemini informs Singaperumal of Teja's activities; Teja is caught smuggling narcotics, is prosecuted, and serves a term in prison.
A few months later, Singaperumal is transferred, and a corrupt officer takes his place. The new DGP releases Teja, and together, they urge Gemini to work for them and repay the losses they incurred, but Gemini refuses. To force him to return to his old ways, Teja persuades Isaac to conspire against Gemini. With Isaac's help, Teja plots and kills Kai. Gemini is infuriated and confronts Teja to settle the issue. During the fight, Gemini beats up Teja and swaps their clothes, leaving Teja bound and gagged. The new DGP arrives and shoots Gemini dead; he later realises that he had actually shot Teja, who was in Gemini's clothes. While the DGP grieves over Teja's death, he receives news that he has been transferred to the Sewage Control Board.
Gemini lives happily ever after with Manisha.
## Cast
- Vikram as Gemini
- Kiran Rathod as Manisha Natwarlal
- Kalabhavan Mani as Teja
- Manorama as Annamma
- Vinu Chakravarthy as a power-obsessed, corrupt police officer
- Murali as DGP Singaperumal
- Charle as Chinna Salem
- Ramesh Khanna as Gopal MA
- Dhamu as Ram
- Vaiyapuri as Oberoi
- Rani as Kamini
- Thennavan as Kai
- Isaac Varghese as Isaac
- Thyagu as Sammandham
- Madhan Bob as R. Anilwal IPS
- Ilavarasu as Police Commissioner
- Sridhar as Sridhar
- Omakuchi Narasimhan as Bombay Dawood
- Gemini Ganesan in a cameo as himself
## Production
### Development
In February 2001, underworld dons "Vellai" Ravi and Chera, who terrorised the city of Madras (now Chennai) in the 1990s, abandoned a life of crime and took up social work. The then-DCP of Flower Bazaar, Shakeel Akhter, presided over the oath-taking ceremony and welcomed the rehabilitation programme. During the making of his film Alli Arjuna (2002), director Saran came across a newspaper article carrying this piece of news and was fascinated. Shortly afterwards in March 2001, Saran announced his next directorial venture would be inspired by the incident. Titled Erumugam (meaning "upward mobility"), the project was scheduled to enter production after the completion of Alli Arjuna.
The director disclosed that it was "a modern day rags to riches story" where the protagonist rises from humble origins to an enviable position. The venture was to be funded by A. Purnachandra Rao of Lakshmi Productions. The film would mark the director's third collaboration with Ajith Kumar in the lead after the success of Kaadhal Mannan (1998) and Amarkalam (1999). Laila and Richa Pallod, who played the heroine in Saran's Parthen Rasithen (2000) and Alli Arjuna respectively, were to play the female lead roles. While the recording for the film's audio reportedly began on 16 March 2001, the filming was to start in mid-June and continue until August that year, followed by post-production work in September. It was planned to release the film on 14 November 2001 coinciding with Diwali. However, after finding a more engaging script in Red (2002), Ajith lost interest; he left the project after a week's shoot and the production was shelved. Following this incident, Saran stated that he would never do another film with Ajith. The pair would, however, reconcile their differences later, and collaborate on Attahasam (2004) and Aasal (2010).
Saran rewrote the script based on gang wars in Chennai and began the project again. The film, then untitled, was announced on 24 August 2001 with Vikram to star in the lead role. The production was taken over by M. Saravanan, M. Balasubramaniam, M. S. Guhan and B. Gurunath of AVM Productions. The film was AVM's 162nd production and their first film after a five-year hiatus, their last production being Minsara Kanavu (1997), the release of which marked fifty years since their debut Naam Iruvar (1947). By producing Gemini, AVM became one of the four film studios that had been producing films for over fifty years. While titling the film, producer M. Saravanan chose Gemini among the many titles suggested to him, but because Gemini Studios was the name of a major production house, Saravanan wrote to S. S. Balan, editor of Ananda Vikatan and son of Gemini Studios founder S. S. Vasan, requesting permission to use the title. In response, Balan gave his consent.
### Cast and crew
With Vikram cast in the title role, Saran was searching for a newcomer to play the female lead role of a Marwari woman. Kiran Rathod is a native of Rajasthan, the region from which the Marwaris originate. She is a relative of actress Raveena Tandon, whose manager brought Rathod the offer to act in Gemini. Saran was convinced after seeing a photograph of Rathod and cast her; Gemini thus became her debut Tamil film. Malayalam actors Kalabhavan Mani and Murali were approached to play significant roles.
Gemini is widely believed to be Mani's first Tamil film, though he had already starred in Vaanchinathan (2001). There have been varying accounts of how he was cast: while Vikram claims to have suggested Mani for the role of Teja, Saran said in one interview that casting Mani was his idea, and contradicted this in another interview, saying that Mani was chosen on his wife's recommendation. While searching for an unfamiliar actor for the DGP's role, Saran saw Murali in Dumm Dumm Dumm (2001) and found him "very dignified". He chose Murali as he wanted that dignity for the role. Though he had planned to make Murali a villain at the end of the film, Saran decided against it because he was "amazed to see awe in everyone's eyes when Murali entered the sets and performed".
Thennavan, Vinu Chakravarthy, Ilavarasu, Charle, Dhamu, Ramesh Khanna, Vaiyapuri, Madhan Bob, Thyagu and Manorama form the supporting cast. Gemini Ganesan made a cameo appearance at the request of Saravanan. The technical departments were handled by Saran's regular crew, which consisted of cinematographer A. Venkatesh, editor Suresh Urs, production designer Thota Tharani and costume designers Sai and Nalini Sriram. The choreography was by Super Subbarayan (action) and Suchitra, Brinda and Ashok Raja (dance). The music was composed by Bharadwaj and the lyrics were written by Vairamuthu. Kanmani, who went on to direct films such as Aahaa Ethanai Azhagu (2003) and Chinnodu (2006), worked as an assistant director.
### Filming
Gemini was formally launched on 21 November 2001 at the Hotel Connemara, Chennai in the presence of celebrities including Rajinikanth (through video conferencing) and Kamal Haasan. The launch function was marked by the submission of the script, songs and lyrics. Principal photography was scheduled to begin in mid-December that year, but commenced slightly earlier. Vikram shot for the film simultaneously with Samurai (2002). When Kalabhavan Mani was hesitant to accept the film due to other commitments in Malayalam, shooting was rescheduled to film his scenes first. Saran persuaded Mani to allot dates for twelve days to complete his scenes. Since Mani was a mimicry artist, Saran asked him to exhibit his talents; Mani aped the behaviour of several animals, and Saran chose the imitations that were added to the film.
Gemini, with the exception of two songs which were filmed in Switzerland, was shot at AVM Studios. One of the songs, "Penn Oruthi", was shot at Jungfraujoch, the highest railway station in Europe. Part of the song sequence was filmed on a sledge in Switzerland, making it the second Indian film to have done so after the Bollywood film Sangam (1964). Though there were problems in acquiring permission, executive producer M. S. Guhan persisted. The overseas shoots were arranged by Travel Masters, a Chennai-based company owned by former actor N. Ramji. Gemini was completed on schedule and M. Saravanan praised the director, saying, "... we felt like we were working with S. P. Muthuraman himself, such was Saran's efficiency".
## Themes and influences
The characters of Gemini and Teja were modelled on "Vellai" Ravi and Chera respectively—Tamil-Burma repatriates who settled in Bhaktavatsalam colony (B.V. Colony) in Vyasarpadi, North Madras. They were members of rival gangs headed by Benjamin and Subbaiah respectively. Their rivalry began when Benjamin, a DYFI member, questioned the illegal activities of Subbaiah who, apart from running a plastics and iron ore business, held kangaroo courts. When their feud developed into a Christian-Hindu conflict, they recruited jobless men and formed gangs to wage wars against each other. While Subbaiah's nephew Chera became his right-hand man, "Vellai" Ravi became Benjamin's aide. Benjamin and Ravi's gang killed Subbaiah in 1991. A year later, Chera's gang retaliated by killing Benjamin with the help of another gang member, Asaithambi. Another gangster, Kabilan, joined Chera's gang and they killed more than fourteen people to avenge Subbaiah's murder. One of the murders took place inside the Egmore court in early 2000 when Chera's gang killed Ravi's aide Vijayakumar, leading to a police crackdown on the gangsters. Fearing an encounter, both "Vellai" Ravi and Chera decided to give up their lives of crime and reform. The then-DCP of Flower Bazaar, Shakeel Akhter, held the "transition ceremony" in February 2001. "Vellai" Ravi and Chera were re-arrested under the Goondas Act during the film's pre-production.
When asked about his fascination for "rowdy themes", Saran said:
> I come from a lower middle class background and have lived all my life in 'Singara Chennai'. I used to go to college from my house in Aminjikkarai by bus and many of the incidents that you see in my films are inspired by those days. Chennai city and its newspapers have been my source material.
The characters of DGP Singaperumal and "Chintai" Jeeva were based on Shakeel Akhter and Vijayakumar respectively. Since the criminals were re-arrested after being given a chance, the initial script had Singaperumal turning villainous during the climax. When Saran felt that the audience would not be kind to him and that it would damage the film, he added another corrupt police officer to do the job while maintaining Singaperumal as a "very strong, good police officer".
## Music
The soundtrack album and background score were composed by Bharadwaj. Since making his entry into Tamil films with Saran's directorial debut Kaadhal Mannan, he has scored the music for most films directed by Saran. The lyrics were written by poet-lyricist Vairamuthu.
The songs were well received by the audience, especially "O Podu". The music received positive reviews from critics. Sify wrote that Bharadwaj's music was the film's only saving grace. Writing for Rediff, Pearl stated that the music director was "impressive". Malathi Rangarajan of The Hindu said that the song by Anuradha Sriram has given the term "O! Podu!", which has been part of the "local lingo" for years, a "new, crazy dimension". The song enjoyed anthem-like popularity and, according to V. Paramesh, a dealer of film music for 23 years, sold like "hot cakes". The album sold more than 100,000 cassettes even before the film was released, despite rampant piracy. It was one of the biggest hits in Bharadwaj's career and earned him his first Filmfare Award.
In 2009, Mid-Day wrote, "O podu is still considered the cornerstone of the rambunctious koothu dance". In 2011, The Times of India labelled the song an "evergreen hit number". Following the internet phenomenon of "Why This Kolaveri Di" in 2011, "O Podu" was featured alongside "Appadi Podu", "Naaka Mukka" and "Ringa Ringa" in a small collection of South Indian songs that are considered a "national rage" in India.
## Release
The film, which was supposed to be released on 14 April 2002 coinciding with the Tamil New Year, was released two days early on 12 April, apparently to capitalise on weekend collections. Gemini was released alongside Vijay's Thamizhan, Prashanth's Thamizh and Vijayakanth's Raajjiyam. The film was released across Tamil Nadu with 104 prints, the most for a Vikram film at the time of release. On the day of release, the film premiered in Singapore with the hero, heroine, director and producer in attendance. AVM sold the film to distributors for a "reasonable profit" and marketed it aggressively. They organised promotional events at Music World in Chennai's Spencer Plaza, Landmark and Sankara Hall, where Vikram publicised the film by signing autographs. Since "O Podu" was such a hit among children, AVM invited young children to write reviews, and gave away prizes. In 2006, Saran revealed in a conversation with director S. P. Jananathan that his nervousness rendered him sleepless for four days until the film was released.
## Reception
### Critical response
Gemini received mixed reviews from critics. Malathi Rangarajan of The Hindu wrote that the "Vikram style" action film was more of a stylised fare as realism was a casualty in many sequences. She added that while the credibility level of the storyline was low, Saran had tried to strike a balance in making a formulaic film. Pearl of Rediff.com lauded Saran's "racy" screenplay but found the plot "hackneyed" and reminiscent of Saran's Amarkalam. The critic declared, "Gemini is your typical masala potboiler. And it works." In contrast, Sify was critical of the film and wrote, "Neither exciting nor absorbing Gemini is as hackneyed as they get. [...] Saran should be blamed for this inept movie, which has no storyline and has scant regard for logic or sense."
The lead performances also drew a mixed response. Malathi Rangarajan analysed, "Be it action or sensitive enactment, Vikram lends a natural touch [...] helps Gemini score. [..] With his comic streak Mani makes himself a likeable villain." Kalabhavan Mani's mimicry and portrayal of a villain with a comic sense received acclaim from critics and audience alike. Rediff said, "The highlight in Gemini is undoubtedly Kalabhavan Mani's performance. [...] As the paan-chewing Gemini, Vikram, too, delivers a convincing performance." However, Sify found the cast to be the film's major drawback and wrote, "Vikram as Gemini is unimpressive [...] Kalabhavan Mani an excellent actor hams as he plays a villain [...] Top character actor Murali is also wasted in the film."
Following the film's success, Vikram was compared with actor Rajinikanth. D. Govardan of The Economic Times wrote, "The film's success has catapulted its hero, Vikram as the most sought after hero after Rajinikanth in the Tamil film industry today". Rajinikanth, who saw the film, met Vikram and praised his performance. Saran said in an interview that Rajinikanth was so impressed with the songs that he predicted the film's success, and that he also considered Rathod for a role in his film Baba (2002). The film's premise of an outlaw reforming his ways was appreciated. D. Ramanaidu of Suresh Productions—the co-producer of the Telugu remake—said, "The story of a rowdy sheeter turning into a good man is a good theme". In August 2014, Gemini was featured in a list of "Top 10 Tamil Gangster Films" compiled by Saraswathi of Rediff.com.
In an article discussing the rise of the gangster-based film becoming a genre in itself, Sreedhar Pillai wrote in a reference to Gemini:
> The hero is an all out villain, who is daringly different but the director makes him dream of those lush Switzerland songs (our hero then is clad in designer wear). The rowdies in the film have very highly educated girls from affluent family lusting after them ... [Amitabh] Bachchan of "Deewar" or Shah Rukh [Khan] of "Baazigar" took to crime because they were wronged. Nowadays the Geminis and Nandas of Tamil cinema indulge in crime for the heck of it.
### Box office
Gemini was a success and became the biggest Tamil hit of the year. Made on an estimated budget of ₹40 million (equivalent to ₹150 million or US\$1.9 million in 2023), the film grossed more than ₹200 million (equivalent to ₹760 million or US\$9.5 million in 2023). The film's success was largely attributed to the popularity of the song "O Podu". D. Govardan of The Economic Times stated, "A neatly made 'masala' (spice) film, with the song O Podu.. as its USP, it took off from day one and has since then not looked back". Vikram, who had struggled as an actor for almost a decade, credited Gemini as his first real blockbuster. Sreedhar Pillai said that a good story, presentation and peppy music made it a "winning formula" and declared "Gemini has been the biggest hit among Tamil films in the last two years". However, sceptics in the industry dismissed the film's success as a fluke.
The film ran successfully in theatres for more than 125 days. Since most Tamil films released on the preceding Diwali and Pongal were not successful, Gemini helped the industry recover. Outlook wrote, "Gemini has single-handedly revived the Tamil film industry." The box office collections revived the fortunes of theatres that were on the verge of closure. AVM received a letter from the owner of New Cinema, a theatre in Cuddalore, who repaid his debts with the revenue the film generated. Abirami Ramanathan, owner of the multiplex Abhirami Mega Mall, said that Gemini's success would slow down the rapid closure of theatres from 2,500 to 2,000. Following the success of the film, Saran named his production house "Gemini Productions", under which he produced films including Aaru (2005), Vattaram (2006) and Muni (2007).
## Accolades
## Remakes
Encouraged by the film's success and wide-reaching popularity, Saran remade it in Telugu as Gemeni. It is the only film Saran made in a language other than Tamil. The remake starred Venkatesh and Namitha, while Kalabhavan Mani and Murali reprised their roles from the Tamil version. Most of the crew members were retained. Posani Krishna Murali translated the dialogues into Telugu. The soundtrack was composed by R. P. Patnaik, who reused most of the tunes from the original film. Released in October 2002, the film received a lukewarm response and failed to repeat the success of the original. In 2013, a Kannada remake was reported to have been planned with Upendra in the lead, but he dismissed the reports as rumours.
## Legacy
In September 2002, Gemini was screened as part of a six-day workshop jointly conducted by the Department of Journalism and Communication and the Mass Communication Alumni Association of the University of Madras; it focussed on the impact of cinema on society.
The success of the film prompted other film producers to capitalise on the growing popularity of the phrase "O Podu" and of Vikram. The Telugu film Bava Nachadu (2001) was dubbed and released in Tamil as O Podu, while Vikram's Telugu film Aadaalla Majaka (1995) and Malayalam film Itha Oru Snehagatha (1997) were swiftly dubbed and released in Tamil as Vahini and Thrill respectively. A game-based reality show for children, aired between 2003 and 2004, was titled "O Podu". AVM was involved in the show, which was produced by Vikatan Televistas and directed by Gerald. The show was broadcast for 26 weeks on Sun TV on Sundays with Raaghav as its anchor. In September 2003, physical trainer Santosh Kumar played "O Podu" among a range of popular music during a dance aerobics session at a fitness camp held for the Indian cricket team in Bangalore.
When Vikram was signed up as the brand ambassador of Coca-Cola in April 2005, the commercials featured him playing characters from different walks of life. One among them was a "rowdy role", the essence for which was taken from the character Gemini. During the run-up to the 2006 assembly election, Chennai-based journalist Gnani Sankaran began a social awareness movement to prevent electoral fraud and named it "O Podu" as a short form of "Oatu Podu" meaning "cast your vote". The movement urged the electorate to exercise the right to reject candidates under Section 49-O of The Conduct of Election Rules, 1961, wherein a voter, who has decided not to vote for anyone, can record the fact. For this purpose, the people behind "O Podu" also urged the election commission to facilitate a separate button on the electronic voting machine.
During the 2010 Asia Cup, a Sri Lankan band performed "O Podu" at the India vs. Pakistan cricket match held in Dambulla. In July 2011, Vikram inaugurated "Liver 4 Life", an initiative launched by MIOT Hospitals to create awareness of the Hepatitis B virus. As the campaign was targeted at school and college students, the organisers tweaked the term "O Podu" into "B Podu" and made it the event's tagline to capitalise on the song's immense appeal. In a comical sequence in the film, Dhamu's character claims to be an expert in a martial art form named "Maan Karate" (Maan means "deer"), which is actually the art of running away like a deer when in danger. The phrase became famous and lent its name to the 2014 comedy film Maan Karate, whose hero (played by Sivakarthikeyan) fails to face the problems in his life and runs for cover instead.
|
26,231,969 |
1876 Scotland v Wales football match
| 1,155,711,348 |
Association football match
|
[
"1870s in Glasgow",
"1875–76 in Scottish football",
"1875–76 in Welsh football",
"1876 in association football",
"1876 in sports",
"International association football matches",
"March 1876 events",
"Scotland national football team matches",
"Wales national football team matches"
] |
The 1876 association football match between the national teams representing Scotland and Wales was the first game played by the latter side. It took place on 25 March 1876 at Hamilton Crescent, Partick, the home ground of the West of Scotland Cricket Club. The match was also the first time that Scotland had played against a side other than England.
The fixture was organised by Llewelyn Kenrick, who had founded the Football Association of Wales (FAW) only a few weeks earlier in response to a letter published in The Field. Advertisements were placed in several sporting journals for Welsh players, or those with more than three years residence in the country, to come forward and the Welsh team was selected after trial matches were held at the Racecourse Ground in Wrexham. The FAW selected the side and Kenrick was appointed captain for the fixture.
As the more experienced team, Scotland dominated the match and had several chances to score in the first half. They had a goal disallowed after scoring directly from a corner kick, before taking the lead after 40 minutes through John Ferguson. In the early stages of the second half, Wales attempted to play more openly to find a goal, but the Scottish side took advantage of their opponent's inexperience and scored two further goals. The first was a rebound off the goalpost which was converted by Billy MacKinnon; the second was headed in by debutant James Lang. Scotland added a fourth through Henry McNeil and claimed a victory in front of a crowd of around 17,000 people, a record for an international fixture at the time.
The two nations have met frequently since this first match, playing against each other every year in friendly matches until 1884 when the British Home Championship was introduced. The competition was an annual tournament, and Scotland and Wales played a fixture against each other every year until 1984, apart from when competitive football was suspended during the First and Second World Wars. In total, the two sides have played more than 100 matches against each other since the first meeting.
## Background
The first officially recognised international association football match was played between Scotland and England on 30 November 1872. This had been preceded by a series of "unofficial" matches between the two sides in the previous two years, played at The Oval, a cricket ground in South London. As a result, the two sides are recognised as the joint oldest international football teams in history. Following the first game, Scotland and England met annually in a series of friendly matches. By the time their fixture against Wales was organised in 1876, Scotland had played England on five occasions in official matches.
Club football was well established in Scotland with the founding of Queen's Park in 1867, although the earliest Scottish club is believed to be the Foot-Ball Club of Edinburgh founded in 1827. The Scottish Football Association (SFA) and the Scottish Cup had been founded in 1873.
In Wales, association football had struggled to gain recognition, rugby union being the preferred sport, especially in the south. Football clubs were, however, being established in North Wales – Druids and Wrexham were both founded in 1872. There was no recognised league or cup competition until the Welsh Cup was introduced in 1877, and the first league, the Welsh Football League, was not founded until the start of the 20th century. Clubs instead arranged friendly matches between themselves on an ad hoc basis. It would take several decades before football became established in the south, Cardiff City becoming the first team from the region to win the Welsh Cup in 1912.
## Preparation
In January 1876, a London-based Welshman, G. A. Clay-Thomas, placed an advertisement in The Field magazine, a sports and country publication, proposing that a team be formed from Welsh men residing in London to play Scotland or Ireland at rugby. Llewelyn Kenrick of the Druids club saw the advertisement but decided that the international match should be association football and the field of players be drawn from all of Wales. Clay-Thomas' proposed rugby match between residents in London went ahead on 15 March. Kenrick told The Field that the footballers of North Wales accepted the challenge and he advertised for players:
> "Test matches will take place at the ground of the Denbighshire County Cricket Club at Wrexham for the purpose of choosing the Cambrian Eleven. Gentlemen desirous of playing are requested to send in their names and addresses."
The Football Association of Wales (FAW) had been formed in February 1876 at the Wynnstay Arms in Wrexham in preparation for the game. It sent out invitations proposing a football match to officials in England, Scotland and Ireland. England rejected the offer, and Ireland only wished to play under rugby rules. Scotland accepted the invitation, meaning Wales became the first side other than England that Scotland had faced in an international fixture. The FAW had hoped for the match to be played in Wales, but Scotland rejected this due to scheduling issues, while agreeing to a second fixture to be played the following year in Wales. Accordingly, the Welsh side travelled to Scotland, where their opposition had yet to lose a match. The venue chosen for the tie was Hamilton Crescent in Partick, which was owned by the West of Scotland Cricket Club and had been the site of the first official international fixture. Concerns within the FAW over financing the team's trip led to an appeal for public donations to raise money.
To qualify for selection, the Welsh players were required to have been born in Wales or taken up residence in the country for at least three years. Although Kenrick corresponded with several Welsh clubs and the nation's universities to raise a team, he was criticised for allegedly overlooking players from the South. One of the main criticisms was the decision to publish most of his notices in English sports journals such as The Field and Bell's Life, which were not widely circulated in Wales. C. C. Chambers, captain of Swansea Rugby Club, wrote a letter to the Western Mail newspaper in which he commented "... there must be some sort of error, and that the team to play Scotland is to be selected from North Wales only. I shall be happy to produce from these parts a team that shall hold their own against any team from North Wales". H. W. Davies, the honorary secretary of the South Wales Football Club also noted that "very few, if any, players (in the south) knew that a match ... had ever been thought of, much less that a date had been fixed". Although Kenrick refused to be drawn into a direct riposte to their letters, he did welcome players of sufficient ability to try out for the team.
Despite the objections, Kenrick and the FAW pushed ahead with their plans. Once applications had been received, the FAW organised trial matches at the Racecourse Ground in Wrexham which took place in February 1876. The first match was played between players from the town's own football club and Druids. The second was held a week later, while a third trial match was organised on 26 February 1876 against a combined Oswestry team, made up of players from the town's football clubs. The game was disrupted when six of the eleven players who were scheduled to appear for the Welsh side failed to turn up. This led to other local players who had travelled to watch the match taking their places. The fourth and final trial match was played in early March. Further matches were cancelled as the ground was being prepared for the upcoming cricket season. Scotland also held trial matches for players who had never previously represented the national side, which were held at Hampden Park in February.
### Team selection
For the final squad, Kenrick appointed himself as captain and selected six players from his own club, Druids, including Daniel Grey, who was born in Scotland but had moved to Wales after obtaining his medical licence to open a practice in Ruabon. Two players from local rivals Wrexham and one from English club Oswestry were also selected. Usk-born William Evans, who played for the Oxford University team, was the only player from South Wales selected; the others were all from North Wales, apart from John Hawley Edwards. Edwards had been born in Shrewsbury and had previously represented the England national football team in 1874; he was a fellow solicitor and a member of the Shropshire Wanderers. The Thomson brothers, goalkeeper David and forward George, were also born in England but resided in Wales; the former was a captain in the Royal Denbighshire Militia. In Scotland, there was considerable interest in the team that would be arriving to play in the match, with newspapers reporting that Beaumont Jarrett and Thomas Bridges Hughes might feature for the Welsh side.
All eleven players selected for Wales were amateurs, comprising "two lawyers, a timber merchant, a student, a soldier, a stonemason, a physician, a miner, a chimney sweep, an office worker and an insurance company employee".
Like the Welsh, the Scots fielded six players from one club (Queen's Park), and three of their players were making their international debuts: James Lang, Robert W. Neill and Moses McNeil. The latter was the brother of three-time capped Henry McNeil, who was also named in the team. Lang had lost the sight in one eye while working at a shipyard. The majority of the squad from the 3–0 victory over England three weeks earlier was retained. Both teams played a 2–2–6 formation: two full-backs, two half-backs and six forwards.
## Match
### Pre-match
The players from both sides travelled together from their hotels in Glasgow to the match in a horsebus and were greeted by a large crowd along the nearby highway. As Wales were an unknown team, the match drew a large crowd with the grandstand at the stadium being nearly full. Spectators were charged half-a-crown (equivalent to 1⁄8 pound sterling) for entry and the crowd at pitchside was described in the Wrexham Guardian as "very thick". In an attempt to see over the crowd, spectators climbed onto the roofs of parked taxis and horse buses and a nearby verge was filled with viewers. The official attendance of the match was recorded at 17,000, a new world record for a full international fixture, but some reports believe the number may have been even higher as between one and two hundred further spectators managed to gain access to the ground during the first half after a fence collapsed, allowing more people to enter.
Wales played in a plain white shirt, with the Prince of Wales's feathers embroidered on the chest, and black shorts. Scotland wore blue shirts and white shorts. Each player wore a different colour of socks so the crowd could recognise each player, and the list of colours was included in the match programme.
### Match summary
Scotland captain Charles Campbell won the coin toss and chose to play "downhill", as the ground featured a slight incline. This was perceived as advantageous to the attacking side, but the choice meant the Scots started the match with the sun in their faces. The Welsh captain Kenrick kicked off the match at 3:40 p.m. The Scots gained possession almost immediately and proceeded to attack the Welsh goal; John Ferguson won the first corner of the game after making a run down the wing, but the resulting set piece was cleared without incident. Wales were forced to defend resolutely; the North Wales Chronicle noted that "the Welsh defended their goal in such a compact and determined way that the ball could not be passed through them." William Evans was called upon early to "save the fortress" by intercepting a pass and sending the ball upfield. The Welsh players were unable to break out of their own half as the game progressed and their forwards' passing game was described as "not much understood", while David Thomson in the Welsh goal made his first save soon after with a comfortable catch. The Welsh side's strongest play of the first half came from a Scottish corner, when the ball fell to Kenrick, who beat several opposition players to break into the Scottish half before being chased down by Sandy Kennedy. Kenrick was able to pass to Edwards, but he was quickly dispossessed by the defenders.
The Scots were eager to take advantage of their early pressure but frequently allowed the ball to go out of play in their haste. They had a goal disallowed after Joseph Taylor scored directly from a corner without another player gaining a touch, while Evans again denied a goalscoring opportunity by blocking a goal-bound shot before David Thomson gathered the Scots' second attempt. In the 40th minute, Lang's cross was caught by David Thomson in the Welsh goal, but Ferguson, "seeing an advantage, jumped forward with remarkable suddenness" according to newspaper reports, forcing Thomson to drop the ball, which was subsequently kicked into the goal to the delight of the home crowd. Henry McNeil nearly added a second goal on the stroke of half-time after making a run at the Welsh goal before shooting over the crossbar just as the half was brought to a close.
After the half-time interval, the Welsh team looked to utilise the "downhill" advantage and mounted early forays into the Scottish half of the pitch. The well-practised Scots took advantage of the openness of the Welsh side, and around eight minutes into the second half they added a second goal as Campbell played a pass to Henry McNeil, who promptly shot against the post. The Welsh goalkeeper David Thomson, believing the ball had gone out of play, stopped defending the goal as the ball rebounded out to Billy MacKinnon, who was able to turn it into the unguarded net. Within five minutes, Scotland extended their lead further as Wales were forced to push forward in an attempt to get back into the game. Scotland regained possession and, after playing several passes around the encamped Welsh defence, crossed the ball towards Lang, who headed in a goal on his debut. Three goals down, Wales were unable to mount a response, and Campbell forced a save when he advanced on the Welsh goal almost immediately after the restart. The Welsh goal survived further scares until Henry McNeil completed the scoring after a combined move upfield by Ferguson and Kennedy won a corner kick. A goalmouth scramble ensued from the resulting cross before Henry McNeil was able to convert. MacKinnon made a final attempt on goal near the end of the game, going on a mazy individual dribble through the Welsh defence before being stopped by Kenrick. The match ended in a 4–0 win for Scotland.
### Details
## Post match
Among the Welsh side, Kenrick was picked out as one of the best performers in match reports, while the team's forwards were criticised as being the weak point of the side. In contrast, the Scottish forward line were praised for their performances, along with Kennedy. After the match, the Welsh visitors were hosted by the SFA with dinner at McRae's Hotel on Bath Street. The SFA chairman toasted the Welsh side and praised their "unflinching determination" during the match despite the defeat and Welsh captain Kenrick also gave a speech.
By playing in the fixture, Wales are recognised as the third oldest international football team. Their next international match came nearly a year later when they played a second fixture against Scotland on 5 March 1877 at the Racecourse Ground in Wrexham, which was the first international match to be played in Wales. Six of Wales' original side kept their places in the team and they gave a much improved performance as Scotland won the match 2–0. For three players, David Thomson, Edwards and John Jones, the 1876 game proved to be their only international appearance for Wales. Scotland had only played one match between the fixtures, losing 3–1 to England two days before the second match and travelling straight to Wales afterwards. The two countries continued to meet each other in friendly matches once each year in February or March until 1884 when the British Home Championship, which also involved England and Ireland, was inaugurated.
Scotland and Wales then met each year, other than when war intervened, until 1984, when the British Home Championship was abandoned. The Scots won the first 13 meetings against Wales, the first draw coming in 1889. It was not until 1905 that the Welsh claimed their first victory, defeating the Scots 3–1 at the Racecourse Ground. The two countries have also met in World Cup qualifying matches for the 1978 and 1986 tournaments, and were placed in the same group for the qualifying tournament for the 2014 World Cup. Since the two World Cup qualifying matches in 1985, the countries have met five times. The most recent was on 12 October 2012, when Wales won 2–1 in a World Cup qualifier.
|
59,502,933 |
J. Havens Richards
| 1,171,137,850 |
American Jesuit educator (1851–1923)
|
[
"1851 births",
"1923 deaths",
"19th-century American Jesuits",
"19th-century American educators",
"20th-century American Jesuits",
"20th-century American educators",
"Boston College alumni",
"Catholics from Ohio",
"Clergy from Columbus, Ohio",
"Converts to Roman Catholicism from Anglicanism",
"Deans and Prefects of Studies of the Georgetown University College of Arts & Sciences",
"Harvard University alumni",
"Pastors of the Church of St. Ignatius Loyola (New York City)",
"Presidents of Georgetown University",
"Presidents of Loyola School (New York City)",
"Presidents of Regis High School (New York City)",
"St. Stanislaus Novitiate (Frederick, Maryland) alumni",
"Woodstock College alumni"
] |
Joseph Havens Richards SJ (born Havens Cowles Richards; November 8, 1851 – June 9, 1923) was an American Catholic priest and Jesuit who became a prominent president of Georgetown University, where he instituted major reforms and significantly enhanced the quality and stature of the university. Richards was born to a prominent Ohio family; his father was an Episcopal priest who controversially converted to Catholicism and had the infant Richards secretly baptized as a Catholic.
Richards became the president of Georgetown University in 1888 and undertook significant construction, such as the completion of Healy Hall, which included work on Gaston Hall and Riggs Library, and the building of Dahlgren Chapel. Richards sought to transform Georgetown into a modern, comprehensive university. To that end, he bolstered the graduate programs, expanded the School of Medicine and Law School, established the Georgetown University Hospital, improved the astronomical observatory, and recruited prominent faculty. He also navigated tensions with the newly established Catholic University of America, which was located in the same city. Richards fought anti-Catholic discrimination by Ivy League universities, resulting in Harvard Law School admitting graduates of some Jesuit universities.
Upon the end of his term in 1898, Richards engaged in pastoral work attached to Jesuit educational institutions throughout the northeastern United States. He became the president of Regis High School and the Loyola School in New York City in 1915, and he was then made superior of the Jesuit retreat center on Manresa Island in Connecticut. Richards died at the College of the Holy Cross in 1923.
## Early life
Richards was born on November 8, 1851, in Columbus, Ohio. His parents were Henry Livingston Richards and Cynthia Cowles, who married on May 1, 1842, in Worthington, Ohio. Havens Cowles was the youngest of eight children, three of whom died in infancy. His surviving siblings were: Laura Isabella (b. 1843), Henry Livingston, Jr. (b. 1846), and William Douglas (b. 1848).
Henry Livingston Richards was an Episcopal priest and the pastor of a church in Columbus. To the surprise of many, he converted to Catholicism on January 25, 1852, two months after Havens Cowles's birth. He was said to have been moved during a visit to New Orleans, where he saw whites and enslaved blacks receiving the Eucharist side by side at the altar rail in a Catholic church. He was baptized by Caspar Henry Borgess at the Holy Cross Church in Columbus. One day, following his conversion, he sneaked out of the house with the infant Havens Cowles and brought him to Holy Cross, where Havens Cowles Richards was also baptized by Borgess. These two conversions disturbed Havens Cowles's mother, Cynthia, who was Episcopalian, and her relatives encouraged her to leave her husband. Likewise, Henry Livingston was ostracized by his family and acquaintances in Ohio. As a result, he abandoned his ministry and moved to New York City to search for work in business, leaving his family in the care of his father in Granville, Ohio. While there, Cynthia Cowles followed her husband in converting to Catholicism. She moved with her children to Jersey City, New Jersey, in September 1855 and was conditionally baptized on May 14, 1856, at St. Peter's Church. All the other children were eventually baptized as well.
### Ancestry
Richards was born into a prominent family that traced its lineage to colonial America on both his paternal and maternal sides. His uncle was Orestes Brownson, a Catholic activist and intellectual. On his mother's side, he was a descendant of James Kilbourne, a colonel in the U.S. Army who led a regiment on the American frontier in the War of 1812, founded the city of Worthington, Ohio, and became a United States Representative from Ohio.
On his father's side, Richards's lineage included combatants in the American Revolutionary War, such as William Richards (his great-grandfather), who led a contingent of troops that took part in the siege at the Battle of Fort Slongo and who also fought in the Battle of Bunker Hill as a colonel. Through William Richards, he traced his ancestry to James Richards, who was documented in 1634 as residing on the Eel River in Plymouth, Massachusetts.
### Education
Richards's father sought to send all his children to Catholic schools but was at times unable to. Therefore, Richards attended both Catholic and public schools in Jersey City. At the age of fourteen, he quit school and took up work as a bookkeeper for his father. Four years later, the two of them moved to Boston, Massachusetts, where they worked in the steel industry.
In September 1869, Richards enrolled at Boston College. The rest of his family joined him and his father in Boston in July of that year. Richards remained at the college for three years, where he was active in school sports, before entering the Society of Jesus and proceeding to the novitiate in Frederick, Maryland, on August 7, 1872. Upon entering the order, he changed his name to Joseph Havens Richards.
At the end of his probationary period, Richards was sent to Woodstock College in 1874, where he studied philosophy for four years. He then went to Georgetown University as a professor of physics and mathematics, doing work in chemistry during his vacations. In the summers of 1879 and 1880, he was sent by the Jesuit provincial superior to study at Harvard University. In July 1883, he returned to Woodstock for four years of theological studies. The provincial superior made an exception for Richards to be ordained after only two years because his father was ill. Therefore, on August 29, 1885, he was ordained a priest by James Gibbons, the Archbishop of Baltimore, in the college's chapel. He completed his theological studies in 1887 and returned to Frederick to complete his tertianship.
## Georgetown University
Immediately after the completion of his Jesuit formation, Richards was made the rector and president of Georgetown University, taking office on August 15, 1888, and succeeding James A. Doonan. He had a plan to transform Georgetown into a modern, comprehensive institution that would be the leading university of both the Catholic Church and the United States. This role would be amplified by the fact that the university was located in the nation's capital.
### Curriculum improvements
Though Richards sought to dispel the perception that Jesuit schools were of inferior quality to their secular counterparts, he maintained that the curriculum of the Ratio Studiorum should be preserved. Therefore, he revitalized the graduate programs of the university, introduced new courses in the law school, and oversaw construction of a new law building in 1892. He also sought to establish an electrical, chemical, and civil engineering program, but this did not come to fruition. For the first time, graduates of the university were authorized to wear a hood as part of their academic regalia. Richards sought to induce prominent scholars to join the faculty of Georgetown; he recruited the Austrian astronomer Johann Georg Hagen and several distinguished scientists from the Smithsonian Institution.
Graduate courses in the arts and sciences were re-established in 1889, and courses in theology and philosophy, which had previously been moved to Boston and then to Woodstock College, returned to the university. Richards criticized the decision to relocate the theological training of Jesuits from Georgetown to the "semi-wilderness" of Woodstock, which was "remote from libraries, from contact with the learned world, and from all the stimulating influences which affect intellectual life".
Richards expanded the School of Medicine by establishing a chair and laboratory of bacteriology; increasing the number of instructors in anatomy, physiology, and surgery; and improving the chemistry curriculum. He also standardized the curriculum and increased its duration from three to four years. The property of the medical school, which theretofore had been owned by its own legal corporation, was transferred to the President and Directors of Georgetown College, giving Richards authority over the appointment of professors. Richards also desired to have a hospital adjoined to the medical school, but there was initially little interest in this among faculty and donors. Eventually, Georgetown University Hospital was completed in 1898, and it was put under the care of the Sisters of Saint Francis.
Richards worked with Bishop John Keane to address tensions with the newly established Catholic University of America, which was located in the same city and run by the American bishops. Many feared that it would interfere with Georgetown University, and it did indeed seek to take control of Georgetown's law and medical schools as its own. This proposal was approved by the Jesuit superior general, Luis Martín, who feared that the Vatican might suppress Georgetown altogether if it did not acquiesce. The faculties of the law and medical schools publicly protested the proposal, and Catholic University dropped its plans. Eventually, an agreement was reached that Catholic University would focus exclusively on the graduate education of secular priests.
### Construction
Richards's most immediate task upon taking office was the completion of Healy Hall, construction of which began in 1877 under a predecessor, Patrick F. Healy, but whose interior remained unfinished. Richards was able to have the bulk of the work completed by February 20, 1889, the date on which the university began its three-day centenary celebration. Within Healy Hall, he made improvements to Gaston Hall and oversaw the start of work on Riggs Library. Richards improved the university's astronomical observatory, placing Hagen in charge of it, which raised the stature of the university in scientific circles.
In 1892, Richards received a donation from the socialite Elizabeth Wharton Drexel for the construction of Dahlgren Chapel of the Sacred Heart. That year, he also procured the library of historian John Gilmary Shea, which extensively documented the history of the Catholic Church in the United States. Richards's presidency came to an end on July 3, 1898, by which time he had experienced worsening health for two years. He was succeeded by John D. Whitney.
### Anti-Catholicism in the Ivy League
Richards also took up the cause of fighting discrimination against Catholics by prominent Protestant universities, especially those of the Ivy League. In 1893, James Jeffrey Roche, the editor of the Catholic Boston newspaper The Pilot, wrote to Charles William Eliot, the president of Harvard University, about the fact that no Catholic universities were included on the list of institutions whose graduates were automatically eligible for admission to Harvard Law School. Eliot's response, which was published in The Pilot, was that the quality of education at Catholic universities was inferior to that offered at their Protestant counterparts. Richards and other Catholic educators had long believed that anti-Catholic discrimination had been at work at Protestant colleges.
Richards sought a retraction from Eliot, writing to him that graduates of reputable Catholic colleges were better prepared to study law than any other college graduates, and he included information on Georgetown's curriculum. Eliot responded by adding Georgetown, the College of the Holy Cross, and Boston College to the list. Upon the provincial superior's instruction, Richards then unsuccessfully lobbied to have all 24 Jesuit colleges in the United States added to the list.
## Pastoral work
Following his retirement from the presidency, Richards became the spiritual father of the novitiate in Frederick. He remained interested in Georgetown's astronomical observatory, and he petitioned to have a station established in South Africa so that the entire sky could be studied. The following year, he became the spiritual father of Boston College, where he established the Boston Alumni Sodality. When not in Boston, he spent time in Philadelphia and Brooklyn, where he worked with the New York Sodality. He also began cataloguing Catholic works in the New York Public Library, but his health soon prevented him from continuing. Upon the recommendation that it would benefit his health, Richards moved to the novitiate in Los Gatos, California, in March 1900, but he was there only briefly before visiting his family in Boston after his mother's death.
Richards returned to Los Gatos in April. In early 1901, he moved back to Frederick, Maryland, where he became minister of the novitiate. Richards then went to St. Andrew-on-Hudson in Hyde Park, New York, as minister in January 1903, when the novitiate relocated there. Several months later, he was made the procurator and was placed in charge of the mission in Pleasant Valley. He transferred again to Boston College in 1906 as spiritual father, remaining there for a year. From 1907 to July 1909, he was prefect of the Church of St. Ignatius Loyola at Boston College.
Richards then proceeded to the Church of St. Ignatius Loyola in New York City as operarius. After four years, he was sent to Canisius College in Buffalo as minister and prefect of studies. He ceased to be minister in July 1914 but remained as prefect. He was appointed the rector and president of both Regis High School and the Loyola School in New York the following year, succeeding David W. Hearn. Concurrently, he became the pastor of the Church of St. Ignatius Loyola. Being advanced in age, he retired from the position on March 25, 1919, and was succeeded by James J. Kilroy as pastor and as president of Regis and Loyola.
## Later years
Following his positions in New York, Richards was made superior of Manresa Island in Norwalk, Connecticut, where he received Jesuit scholastics and priests from the Diocese of Hartford during the summer for their retreats. During the rest of the year, he lived on the island with just one other Jesuit. In December 1921, he was transferred to Weston College as spiritual father and procurator, ceasing to hold the latter role in September 1922.
On March 2, 1923, Richards suffered a stroke, which left his speech impaired and the right side of his body paralyzed. He spent seven weeks in the hospital before going to the College of the Holy Cross in Worcester, Massachusetts. He suffered another stroke on June 8 and died the following day.
|
35,982,613 |
O heilges Geist- und Wasserbad, BWV 165
| 1,140,165,037 |
Church cantata for Trinity Sunday by Johann Sebastian Bach
|
[
"1715 compositions",
"Church cantatas by Johann Sebastian Bach",
"Nicodemus",
"Works based on the New Testament"
] |
O heilges Geist- und Wasserbad (O holy bath of Spirit and water), BWV 165, is a church cantata by Johann Sebastian Bach. He composed it in Weimar for Trinity Sunday and led the first performance on 16 June 1715.
Bach had taken up regular cantata composition a year before when he was promoted to concertmaster at the Weimar court, writing one cantata per month to be performed in the Schlosskirche, the court chapel in the ducal Schloss. O heilges Geist- und Wasserbad was his first cantata for Trinity Sunday, the feast day marking the end of the first half of the liturgical year. The libretto by the court poet Salomo Franck is based on the day's prescribed gospel reading about the meeting of Jesus and Nicodemus. It is close in content to the gospel and connects the concept of the Trinity to baptism.
The music is structured in six movements, alternating arias and recitatives, and scored for a small ensemble of four vocal parts, strings and continuo. The voices are combined only in the closing chorale, the fifth stanza of Ludwig Helmbold's hymn "Nun laßt uns Gott dem Herren", which mentions scripture, baptism and the Eucharist, in a summary of the cantata's topic. Based on the text full of Baroque imagery, Bach composed a sermon in music, especially in the two recitatives for the bass voice, and achieved contrasts in expression. He led the first performance, and probably another on the Trinity Sunday concluding his first year as Thomaskantor in Leipzig on 4 June 1724.
## Background
On 2 March 1714 Bach was appointed Konzertmeister (concert master) of the Weimar Hofkapelle (court chapel) of the co-reigning dukes Wilhelm Ernst and Ernst August of Saxe-Weimar. The position was created for him, possibly on his demand, giving him "a newly defined rank order" according to Christoph Wolff.
From 1695, an arrangement shared the responsibility for church music at the Schlosskirche (court church) between the Kapellmeister Samuel Drese and the Vize-Kapellmeister Georg Christoph Strattner, who took care of one Sunday per month while the Kapellmeister served on three Sundays. The pattern probably continued from 1704, when Strattner was succeeded by Drese's son Johann Wilhelm. When Konzertmeister Bach also assumed the principal responsibility for one cantata a month, the Kapellmeister's workload was further reduced to two Sundays per month.
The performance venue on the third tier of the court church, in German called Himmelsburg (Heaven's Castle), has been described by Wolff as "congenial and intimate", calling for a small ensemble of singers and players. Performers of the cantatas were mainly the core group of the Hofkapelle, formed by seven singers, three leaders and five other instrumentalists. Additional players of the military band were available when needed, and also town musicians and singers of the gymnasium. Bach as the concertmaster probably led the performances as the first violinist, while the organ part was played by Bach's students such as Johann Martin Schubart and Johann Caspar Vogler. Even in settings like chamber music, Bach requested a strong continuo section with cello, bassoon and violone in addition to the keyboard instrument.
### Monthly cantatas from 1714 to 1715
While Bach had composed vocal music only for special occasions until his promotion, the regular chance to compose and perform a new work resulted in a program into which Bach "threw himself wholeheartedly", as Christoph Wolff notes. In his first cantata of the series, Himmelskönig, sei willkommen, BWV 182, for the double feast of Palm Sunday and Annunciation, he showed his skill in an elaborate work in eight movements, for four vocal parts and at times ten-part instrumental writing, and presenting himself as a violin soloist.
The following table of works performed by Bach as concertmaster between 1714 and the end of 1715 is based on tables by Wolff and Alfred Dürr. According to Dürr, O heilges Geist- und Wasserbad is the eleventh cantata composition of this period. The works contain arias and recitatives, as in contemporary opera, while earlier cantatas had concentrated on biblical text and chorale. Some works, such as Widerstehe doch der Sünde, may have been composed earlier.
## Topic and text
### Trinity Sunday
Bach composed O heilges Geist- und Wasserbad for Trinity Sunday, the Sunday concluding the first half of the liturgical year. The prescribed readings for the day were from the Epistle to the Romans, "What depth of the riches of the wisdom and knowledge of God", and from the Gospel of John, the meeting of Jesus and Nicodemus.
In Leipzig, Bach composed three more cantatas for the occasion, which focused on different aspects of the readings: Höchsterwünschtes Freudenfest, BWV 194, first composed for the inauguration of church and organ in Störmthal on 2 November 1723; Es ist ein trotzig und verzagt Ding, BWV 176 (1725); and the chorale cantata Gelobet sei der Herr, mein Gott, BWV 129 (1726). Scholars debate whether Bach performed Höchsterwünschtes Freudenfest, O heilges Geist- und Wasserbad, or both on Trinity Sunday of 1724, which fell on 4 June.
### Cantata text
The libretto was written by the court poet, Salomon Franck, and published in Evangelisches Andachts-Opffer in 1715. The opening refers to Jesus' words in John 3:5: "Except a man be born of water and of the Spirit, he cannot enter into the kingdom of God." The second movement, a recitative, reflects upon birth in the Spirit as baptism through God's grace: "Er wird im Geist und Wasserbade ein Kind der Seligkeit und Gnade" (In the bath of spirit and water he becomes a child of blessedness and grace). Movement 3, an aria for alto, considers that the bond has to be renewed throughout life, because it will be broken by man, reflected in movement 4. The last aria is a prayer for the insight that the death of Jesus brought salvation, termed "Todes Tod" (death's death). The cantata concludes with the fifth stanza of Ludwig Helmbold's hymn of 1575, "Nun laßt uns Gott dem Herren", mentioning scripture, baptism and the Eucharist. Bach used the eighth and final stanza, "Erhalt uns in der Wahrheit" (Keep us in the truth), to conclude his cantata Gott der Herr ist Sonn und Schild, BWV 79.
Franck expresses his thought in a Baroque style rich in imagery. The image of the serpent is used in several meanings: as the serpent which seduced Adam and Eve to sin in paradise, as the symbol which Moses erected in the desert, and related to the gospel's verse 14: "And as Moses lifted up the serpent in the wilderness, even so must the Son of man be lifted up".
## Performance and publication
Bach led the first performance of the cantata on 16 June 1715. The Weimar performance material is lost. Bach performed the work again as Thomaskantor in Leipzig; the extant performance material was prepared by his assistant Johann Christian Köpping. The first possible revival was on Trinity Sunday, 4 June 1724, which also concluded his first year in office and his first Leipzig cantata cycle, as he had assumed the office on the first Sunday after Trinity the previous year. Bach presumably made minor changes.
The cantata was published in the Bach-Ausgabe, the first edition of Bach's complete works by the Bach-Gesellschaft, in 1887 in volume 33, edited by Franz Wüllner. In the second edition of the complete works, the Neue Bach-Ausgabe, it appeared in 1967, edited by Dürr, with a Kritischer Bericht (Critical report) following in 1968.
## Music
### Scoring and structure
The title on the copy by Johann Christian Köpping is: "Concerto a 2 Violi:1 Viola. Fagotto Violoncello S.A.T.e Basso e Continuo / di Joh:Seb:Bach" (Concerto for 2 violins, 1 viola. Bassoon Cello S.A.T and Bass and Continuo / by Joh:Seb:Bach). The cantata in six movements is scored like chamber music for four vocal soloists (soprano, alto, tenor and bass), a four-part choir (SATB) in the closing chorale, two violins (Vl), viola (Va), bassoon (Fg), cello (Vc) and basso continuo (Bc). The bassoon is called for, but has no independent part. The duration is given as about 15 minutes.
In the following table of the movements, the scoring follows the Neue Bach-Ausgabe, and the abbreviations for voices and instruments the list of Bach cantatas. The keys and time signatures are taken from the Bach scholar Alfred Dürr, using the symbol for common time (4/4). The instruments are shown separately for winds and strings, while the continuo, playing throughout, is not shown.
### Movements
The cantata consists of solo movements closed by a four-part chorale. Arias alternate with two recitatives, both sung by the bass. John Eliot Gardiner summarizes: "It is a true sermon in music, based on the Gospel account of Jesus' night-time conversation with Nicodemus on the subject of 'new life', emphasising the spiritual importance of baptism." He points out the many musical images of water.
#### 1
In the first aria, "O heilges Geist- und Wasserbad" (O bath of Holy Spirit and of water), the ritornello is a fugue, whereas in the five vocal sections the soprano and violin I are a duo in imitation on the same material. These sections are composed in symmetry, A B C B' A'. The theme of B involves an inversion of material from A, that of C is derived from measure 2 of the ritornello. Dürr writes:
> The prominent use made of formal schemes based on the principles of symmetry and inversion is in all probability intentional, serving as a symbol of the inner inversion of mankind — his rebirth in baptism.
#### 2
The first recitative, "Die sündige Geburt verdammter Adamserben" (The sinful birth of the cursed heirs of Adam), is secco, but several phrases are close to an arioso. The musicologist Julian Mincham notes that Bach follows the meaning of the text closely, for example by "rhythmic dislocations for death and destruction", a change in harmony on "poisoned", and "the complete change of mood at the mention of the blessed Christian". He summarizes: "Here anger and resentment at Man’s inheritance of suppurating sin is contrasted against the peace and joy of God-given salvation".
#### 3
The second aria, "Jesu, der aus großer Liebe" (Jesus, who out of great love), accompanied by the continuo, is dominated by an expressive motif with several upward leaps of sixths, which is introduced in the ritornello and picked up by the alto voice in four sections. Mincham notes that "the mood is serious and reflective but also purposeful and quietly resolute".
#### 4
The second recitative, "Ich habe ja, mein Seelenbräutigam" (I have indeed, o bridegroom of my soul), is accompanied by the strings (accompagnato), marked by Bach "Rec: con Stroment" (Recitative: with instruments). The German musicologist Klaus Hofmann notes that the text turns to mysticism, reflecting the Bridegroom, Lamb of God and the serpent in its double meaning. The text is intensified by several melismas, a marking "adagio" on the words "hochheiliges Gotteslamm" (most holy Lamb of God), and by melodic parts for the instruments. Gardiner notes that Bach has images for the serpent displayed in the desert by Moses, and has the accompaniment fade away on the last line "wenn alle Kraft vergehet" (when all my strength has faded).
#### 5
The last aria, "Jesu, meines Todes Tod" (Jesus, death of my death), is set for tenor, accompanied by the violins in unison, marked "Aria Violini unisoni e Tenore". The image of the serpent appears again, described by the composer and musicologist William G. Whittaker: "the whole of the obbligato for violins in unison is constructed out of the image of the bending, writhing, twisting reptile, usually a symbol of horror, but in Bach's musical speech a thing of pellucid beauty".
#### 6
The cantata closes with a four-part setting of the chorale stanza, "Sein Wort, sein Tauf, sein Nachtmahl" (His word, His baptism, His communion). The text in four short lines summarizes that Jesus helps any in need by his words, his baptism and his communion, and ends in the prayer that the Holy Spirit may teach to faithfully trust in this.
The hymn tune by Nikolaus Selnecker was first published in Leipzig in 1587 in the hymnal Christliche Psalmen, Lieder vnd Kirchengesenge (Christian psalms, songs and church chants). Bach marked the movement: "Chorale. Stromenti concordant", indicating that the instruments play colla parte with the voices.
## Recordings
The entries are taken from the listing on the Bach Cantatas Website. Instrumental ensembles playing period instruments in historically informed performance are marked by a green background.
|
2,456,705 |
Alpine newt
| 1,168,845,392 |
Species of amphibian
|
[
"Amphibians described in 1768",
"Amphibians of Europe",
"Articles containing video clips",
"Newts",
"Taxa named by Josephus Nicolaus Laurenti"
] |
The alpine newt (Ichthyosaura alpestris) is a species of newt native to continental Europe and introduced to Great Britain and New Zealand. Adults measure 7–12 cm (2.8–4.7 in) and are usually dark grey to blue on the back and sides, with an orange belly and throat. Males are more conspicuously coloured than the drab females, especially during breeding season.
The alpine newt occurs at high altitude as well as in the lowlands. Living mainly in forested land habitats for most of the year, the adults migrate to puddles, ponds, lakes or similar water bodies for breeding. Males court females with a ritualised display and deposit a spermatophore. After fertilisation, females usually fold their eggs into leaves of water plants. The aquatic larvae grow up to 5 cm (2.0 in) in around three months before metamorphosing into terrestrial juvenile efts, which mature into adults at around three years. In the southern range, the newts sometimes do not metamorphose but keep their gills and stay aquatic as paedomorphic adults. Larvae and adults feed mainly on diverse invertebrates and themselves fall prey to dragonfly larvae, large beetles, fish, snakes, birds or mammals.
Populations of the alpine newt started to diverge around 20 million years ago. At least four subspecies are distinguished, and some argue there are several distinct, cryptic species. Although still relatively common and classified as Least Concern on the IUCN Red List, alpine newt populations are decreasing and have locally gone extinct. The main threats are habitat destruction, pollution and the introduction of fish such as trout into breeding sites. Where it has been introduced, the alpine newt can potentially transmit diseases to native amphibians, and it is being eradicated in New Zealand.
## Taxonomy
### Nomenclature
The alpine newt was first described in 1768 by the Austrian zoologist Josephus Nicolaus Laurenti, as Triton alpestris, from the Ötscher mountain in the Austrian Alps (alpestris meaning "alpine" in Latin). He used that name for a female and described the male (Triton salamandroides) and the larva (Proteus tritonius) as different species. Later, the alpine newt was placed in the genus Triturus along with most other European newts. When genetic evidence showed that Triturus as then defined contained several unrelated lineages, García-París and colleagues in 2004 split off the alpine newt as the monotypic genus Mesotriton, which had been erected as a subgenus by Bolkay in 1928.
However, the name Ichthyosaura had been introduced in 1801 by Sonnini de Manoncourt and Latreille for "Proteus tritonius", the larva of the alpine newt. It therefore has priority over Mesotriton and is now the valid genus name. "Ichthyosaura", Greek for "fish lizard", refers to a nymph-like creature in classical mythology.
### Subspecies
Four subspecies (see table below) were recognised for the alpine newt by Roček and colleagues (2003), followed by later authors, while some previously described subspecies were not retained. The four subspecies correspond only in part to the five major lineages identified within the species (see section Evolution below): The western populations of the nominate subspecies I. a. alpestris, together with the Cantabrian I. a. cyreni and the Apennine I. a. apuana form one group, while the eastern populations of I. a. alpestris are genetically closer to the Greek I. a. veluchiensis. Differences in body shape and colour between the subspecies are not consistent.
Several authors argued that the ancient lineages of the alpine newt might represent cryptic species. Four species were therefore distinguished by Raffaëlli in 2018, but Frost considers this premature.
## Evolution
Alpine newt populations have separated since the Early Miocene, around 20 million years ago, according to a molecular clock estimate by Recuero and colleagues. Known fossil remains are much more recent: they were found in the Pliocene of Slovakia and the Pleistocene of Northern Italy. An older, Miocene fossil from Germany, Ichthyosaura randeckensis, may be the sister species of the alpine newt.
Molecular phylogenetic analyses showed that alpine newts split into a western and an eastern group. Each of these again contains two major lineages, which in part correspond to described subspecies (see section Distribution and subspecies above). These ancient genetic differences suggest that the alpine newt may be a complex of several distinct species. Higher temperatures during the Miocene or sea level oscillations may have separated early populations, leading to allopatric speciation, although admixture and introgression between lineages probably took place. Populations from Vlasina Lake in Serbia have mitochondrial DNA that is distinct from and more ancient than that of all other populations; it may have been inherited from a now extinct "ghost" population. The Quaternary glaciation probably led to cycles of retreat into refugia, expansion and range shifts.
## Description
The alpine newt is medium-sized and stocky. It reaches a total length of 7–12 cm (2.8–4.7 in), females measuring roughly 1–2 cm (0.39–0.79 in) longer than males, and a body weight of 1.4–6.4 g. The tail is compressed sideways and is half as long or slightly shorter than the rest of the body. During their life in water, both sexes develop a tail fin, and males a low (up to 2.5 mm), smooth-edged crest on their back. The cloaca of males swells during breeding season. The skin is smooth during the breeding season and granular outside it, and is velvety during the animal's land phase.
The characteristic dark grey to bright blue of the back and sides is strongest during breeding season. This base colour may vary to greenish and is more drab and mottled in females. The belly and throat are orange and only occasionally have dark spots. Males have a white band with black spots and a light blue flash running along the flanks from the cheeks to the tail. During breeding season, their crest is white with regular dark spots. Juvenile efts, just after metamorphosis, resemble adult terrestrial females, but sometimes have a red or yellow line on the back. Very rarely, leucistic individuals have been observed.
While these traits apply to the widespread nominate subspecies, I. a. alpestris, the other subspecies differ slightly. I. a. apuana often has dark spots on the throat and sometimes on the belly. I. a. cyreni has a slightly rounder and larger skull than the nominate subspecies but is otherwise very similar. In I. a. veluchiensis, females have a more greenish colour, spots on the belly, sparse dark spots on the lower tail edge, and a narrower snout, but these differences between subspecies are not consistent.
Larvae are 7–11 mm long after hatching and grow to 3–5 cm (1.2–2.0 in) just before metamorphosis. They initially have only two small filaments (balancers) between the eyes and gills on each side of the head, which later disappear as the forelegs and then the hindlegs develop. The larvae are light brown to yellow and initially have dark longitudinal stripes, which later dissolve into a dark pigmentation that is stronger towards the tail. The tail is pointed and sometimes ends in a short filament. Alpine newt larvae are more robust and have wider heads than those of the smooth newt and palmate newt.
## Distribution
The alpine newt is native to continental Europe. It is relatively common over a large, more or less continuous range from northwestern France to the Carpathians in Romania, and from southern Denmark in the north to the Alps and France just north of the Mediterranean in the south, but absent from the Pannonian basin. Isolated areas of distribution in Spain, Italy and Greece correspond to distinct subspecies (see section Taxonomy: Subspecies above). Alpine newts have been deliberately introduced to parts of continental Europe, including within the boundaries of cities such as Bremen and Berlin. Other introductions have occurred to Great Britain, mainly England but also Scotland, and Coromandel Peninsula in New Zealand.
The alpine newt can occur at high elevation and has been found up to 2,370 m (7,780 ft) above sea level in the Alps. It also occurs in the lowlands down to sea level. Towards the south of its range, most populations are found above 1,000 m (3,300 ft).
## Habitats
Forests, including both deciduous and coniferous forests (pure spruce plantations are avoided), are the main land habitat. Less common are forest edges, brownfield land, or gardens. Populations can be found above tree line in the high mountains, where they prefer south-exposed slopes. The newts use logs, stones, leaf litter, burrows, construction waste or similar structures as hiding places.
Aquatic breeding sites close to adequate land habitat are critical. While small, cool water bodies in forested areas are preferred, alpine newts tolerate a wide range of permanent or non-permanent, natural or human-made water bodies. These can range from shallow puddles and small ponds to larger, fish-free lakes or reservoirs and quiet parts of streams. Damming by beavers creates suitable breeding sites. Overall, the alpine newt is tolerant regarding chemical parameters such as pH, water hardness and eutrophication. Other European newts such as the crested, smooth, palmate or Carpathian newt often use the same breeding sites, but are less common at higher elevation.
## Lifecycle and behaviour
Alpine newts are usually semiaquatic, spending most of the year (9–10 months) on land and only returning to the water for breeding. The efts are probably terrestrial until they reach sexual maturity. At lower altitudes this occurs in males after around three years, and in females after four to five years. Lowland alpine newts can reach the age of ten. At higher altitudes, maturity is reached only after 9–11 years, and the newts can live for up to 30 years.
### Terrestrial phase
On land, alpine newts are mainly nocturnal, hiding for most of the day and moving and feeding during the night or in the twilight. Hibernation also usually takes place in terrestrial hiding places. On wet nights, they have been observed climbing up to 2 metres (6.6 ft) on the vertical walls of basement ducts, where they hibernated. Migration to breeding sites occurs on sufficiently warm (above 5 °C) and humid nights and may be delayed or interrupted for several weeks in unfavourable conditions. The newts can also leave the water in case of a sudden cold snap.
Alpine newts tend to stay close to their breeding sites and only a small proportion, mainly juvenile efts, disperse to new habitats. A dispersal distance of 4 km (2.5 mi) has been observed, but such large distances are uncommon. Over short distances, the newts use mainly their sense of smell for navigation, while over long distances, orientation by the night sky, and potentially through magnetoreception are more important.
### Aquatic phase and breeding
The aquatic phase starts at snowmelt, from February in the lowlands to June at higher altitudes, while egg laying follows a few months later and can continue until August. Some southern populations in Greece and Italy appear to stay aquatic most of the year and hibernate underwater. In the Apennine subspecies, I. a. apuana, two rounds of breeding and egg-laying, in autumn and spring, have been observed.
Breeding behaviour occurs mainly in the morning and at dawn. Males perform a courtship display. The male first places himself in front of the female and remains static for a while, then fans his tail to stimulate the female and wave pheromones towards her. After leaning in and touching her snout, he creeps away, followed by the female. When she touches the base of his tail with her snout, he releases a sperm packet (spermatophore) and blocks the female's path so she picks it up with her cloaca. Several rounds of spermatophore deposition may follow. Males frequently interfere with displays of rivals. Experiments suggest that it is mainly male pheromones that trigger mating behaviour in females, while colour and other visual cues are less relevant. In a breeding season, a male can produce more than 48 spermatophores, and offspring from one female usually have several fathers.
Females wrap their eggs in leaves of water plants for protection, preferring leaves closer to the surface where temperatures are higher. Where no plants are available, they may also use leaf litter, dead wood or stones for egg deposition. They can lay 70–390 eggs in a season, which are light grey-brown and 1.5–1.7 mm in diameter (2.5–3 mm including the jelly capsule). Incubation time is longer under cold conditions, but larvae typically hatch after two to four weeks. The larvae are benthic, staying in general close to the bottom of the water body. Metamorphosis occurs after around three months, again depending on temperature, but some larvae overwinter and metamorphose only in the next year.
### Paedomorphy
Paedomorphy, where adults do not metamorphose and instead retain their gills and stay aquatic, is more common in the alpine newt than in other European newts. It is almost exclusively found in the southern part of the range (but not in the Cantabrian subspecies, I. a. cyreni). Paedomorphic adults are paler in colour than metamorphic ones. Only part of a population is usually paedomorphic, and metamorphosis can follow if the pool dries out. Paedomorphic and metamorphic newts sometimes prefer different prey, but they do interbreed. Overall, paedomorphy appears to be a facultative strategy under particular conditions that are not fully understood.
### Diet, predators and parasites
Alpine newts are diet generalists, preying mainly on a variety of invertebrates. Larvae and adults living in the water eat, for example, plankton, molluscs, larvae of insects such as chironomids, crustaceans such as water fleas, ostracods or amphipods, and terrestrial insects falling onto the surface. Amphibian eggs and larvae, including those of their own species, are also eaten. Prey on land includes insects, worms, spiders and woodlice.
Predators of adult alpine newts are snakes such as the grass snake, fish such as trout, birds such as herons or ducks, and mammals such as hedgehogs, martens or shrews. Under water, large diving beetles (Dytiscus) can prey on newts, while small efts on land may be predated by ground beetles (Carabus). For eggs and larvae, diving beetles, fish, dragonfly larvae, and other newts are the main enemies.

Predator pressure can affect the phenotype of developing alpine newts. In an experiment, alpine newt larvae raised in the presence of caged dragonfly larvae took longer to emerge from the larval stage, growing slower and emerging later in the season than newt larvae that did not experience predator presence. They also exhibited traits such as darker coloration, larger body size, a proportionally larger head and tail, and more wary behavior than their predator-free counterparts.
Threatened adult newts often take on a defensive position, where they expose the warning colour of their belly by bending backwards or raising their tail and secrete a milky substance. Only trace amounts of the poison tetrodotoxin, abundant in the North American Pacific newts (Taricha), have been found in the alpine newt. They also sometimes produce sounds, whose function is unknown. When adult newts are in the presence of a predator, they tend to flee a majority of the time. However, the decision of whether or not to flee can depend on the newt's sex and temperature. In an experiment, female newts fled more often and at a greater speed over a greater range of temperatures than males, who tended to flee at a slower speed and remained immobile while secreting tetrodotoxin when the temperature was outside of the normal range.
Parasites include parasitic worms, leeches, the ciliate Balantidium elongatum, and potentially toadflies. A ranavirus transmitted to alpine newts from midwife toads in Spain caused bleeding and necrosis. The chytridiomycosis-causing fungus Batrachochytrium dendrobatidis has been found in wild populations, and the emerging B. salamandrivorans was lethal for alpine newts in laboratory experiments.
## Captivity
Several subspecies of the alpine newt have been bred in captivity, including a population from Prokoško Lake in Bosnia that is now probably extinct in the wild. Efts often return to the water after only one year. Captive individuals have reached an age of 15–20 years.
## Threats and conservation
Because of its overall large range and populations that are not severely fragmented, the alpine newt was classified as Least Concern on the IUCN Red List in 2009. The population trend, however, is "Decreasing", and the different geographic lineages, which may represent evolutionarily significant units, have not been evaluated separately. Several populations in the Balkans, some of which have been described as subspecies of their own, are highly threatened or have even gone extinct.
Threats are similar to those affecting other newts and mainly include the destruction and pollution of aquatic habitats. Beavers, previously widespread in Europe, were probably important in maintaining breeding sites. The introduction of fish, especially salmonids such as trout, and potentially of crayfish, is a significant threat that can eradicate populations from a breeding site. In the Montenegrin karst region, populations have declined as ponds created for cattle and human use were abandoned over the last decades. Lack of adequate, undisturbed land habitat (see section Habitats above) and of dispersal corridors around and between breeding sites is another problem.
## Effects as introduced species
Introduced alpine newts may pose a threat to native amphibians if they carry disease. A particular concern is chytridiomycosis, which was found in at least one introduced population in the United Kingdom. In New Zealand, the risk of spreading chytridiomycosis to endemic frogs has led to the introduced subspecies I. a. apuana being declared an "unwanted organism" and its eradication being recommended. It has proven challenging to detect and remove the newts, but more than 2,000 individuals had been eradicated by 2015.
|
463,570 |
Fredonian Rebellion
| 1,160,353,604 |
1826–27 secession attempt in Mexican Texas
|
[
"1826 in Texas",
"1827 in Texas",
"Conflicts in 1826",
"Conflicts in 1827",
"Former unrecognized countries",
"Mexican Texas",
"Texas border disputes",
"Wars fought in Texas"
] |
The Fredonian Rebellion (December 21, 1826 – January 31, 1827) was the first attempt by Texians to secede from Mexico. The settlers, led by empresario Haden Edwards, declared independence from Mexican Texas and created the Republic of Fredonia near Nacogdoches. The short-lived republic encompassed the land the Mexican government had granted to Edwards in 1825 and included areas that had been previously settled. Edwards's actions soon alienated the established residents, and the increasing hostilities between them and settlers recruited by Edwards led Víctor Blanco of the Mexican government to revoke Edwards's contract.
In late December 1826, a group of Edwards's supporters took control of the region by arresting and removing from office several municipality officials affiliated with the established residents. The supporters then declared independence from Mexico. Although the nearby Cherokee tribe initially signed a treaty to support the new republic, because a prior agreement with the Mexican government negotiated by Chief Richard Fields had been ignored, overtures from Mexican authorities and the respected empresario Stephen F. Austin convinced tribal leaders to repudiate the rebellion. On January 31, 1827, a force of over 100 Mexican soldiers and 275 Texian Militia marched into Nacogdoches to restore order. Haden Edwards and his brother Benjamin Edwards fled to the United States. Chief Fields was killed by his own tribe. A local merchant was arrested and sentenced to death but was later paroled.
The rebellion led Mexican president Guadalupe Victoria to increase the military presence in the area. As a result, several hostile tribes in the area halted their raids on settlements and agreed to a peace treaty. The Comanche abided by this treaty for many years. Fearing that, through the rebellion, the United States hoped to gain control of Texas, the Mexican government severely curtailed immigration to the region from the US. The new immigration law was bitterly opposed by colonists and caused increasing dissatisfaction with Mexican rule. Some historians consider the Fredonian Rebellion to be the beginning of the Texas Revolution. In the words of one historian, the rebellion was "premature, but it sparked the powder for later success".
## Background
After winning independence in 1821, several of Spain's colonies in the New World joined together to create a new country, Mexico. The country divided itself into several states, and the area known as Mexican Texas became part of the border state Coahuila y Tejas. To assist in governing the large area, the state created several departments; all of Texas was included in the Department of Béxar. The department was further subdivided into municipalities, which were each governed by an alcalde, similar to a modern-day mayor. A large portion of East Texas, ranging from the Sabine to the Trinity rivers and from the Gulf Coast to the Red River, became part of the municipality of Nacogdoches. Most residents of the municipality were Spanish-speaking families who had occupied their land for generations. An increasing number were English-speaking residents who had immigrated illegally during the Mexican War of Independence. Many of the immigrants were adventurers who had arrived as part of various military filibustering groups, which had attempted to create independent republics within Texas during Spanish rule.
For better control of the sparsely populated border region, in 1824 the Mexican federal government passed the General Colonization Law to allow legal immigration into Texas. Under the law, each state would set its own requirements for immigration. After some debate, on March 24, 1825, Coahuila y Tejas authorized a system granting land to empresarios, who would each recruit settlers for their particular colony. In addition, for every 100 families an empresario settled in Texas, they would receive 23,000 acres of land to cultivate and settle on. During the state government's deliberations, many would-be empresarios congregated in Mexico to lobby for land grants. Among them was Haden Edwards, an American land speculator known for his quick temper and aggressiveness. Despite his abrasiveness, Edwards was granted a colonization contract on April 14 allowing him to settle 800 families in East Texas. The contract contained standard language requiring Edwards to recognize all pre-existing Spanish and Mexican land titles in his grant area, to raise a militia to protect the settlers in the area and to allow the state land commissioner to certify all deeds awarded.
Edwards's colony encompassed the land from the Navasota River to 20 leagues west of the Sabine River, and from 20 leagues north of the Gulf of Mexico to 15 leagues north of the town of Nacogdoches. To the west and north of the colony were lands controlled by several Native tribes that had recently been driven out of the United States. The southern boundary was a colony overseen by Stephen F. Austin, the son of the first empresario in Texas. East of Edwards's grant was the former Sabine Free State, a neutral zone, which had been essentially lawless for several decades. The boundaries of the new colony and the municipality of Nacogdoches partially overlapped, leading to uncertainty over who had jurisdiction over which function. The majority of the established settlers lived outside the eastern boundary of the Edwards colony.
## Prelude
Edwards arrived in Nacogdoches in August 1825. Mistakenly believing that he had the authority to determine the validity of existing land claims, Edwards demanded written proof of ownership in September or the land would be forfeited and sold at auction. His action was at least partially driven by prejudice; Edwards scorned those who were poorer or of a different race. By removing less-prosperous settlers, he could assign their lands to wealthy planters, like himself, from the Southern United States.
Very few of the English-speaking residents had valid titles. Those who had not arrived as filibusters had been duped by fraudulent land speculators. Most of the Spanish-speaking landowners had lived on grants made to their families 70 or more years previously and were unable to produce any paperwork. Anticipating the potential conflict between the new empresario and the long-time residents of the area, the acting alcalde of the municipality, Luis Procela, and the municipality clerk, Jose Antonio Sepulveda, began validating old Spanish and Mexican land titles, a function legally assigned to the state land commissioner. In response, Edwards accused the men of forging deeds, further angering the residents.
By December 1825, Edwards had recruited 50 families to emigrate from the United States. As required under his contract, Edwards organized a Texian Militia company open to his colonists and established residents. When militia members elected Sepulveda as their captain, Edwards nullified the results and proclaimed himself head of the militia company. After that debacle, Edwards, acting outside his authority, called for elections for a new alcalde. Two men were nominated for the position—Edwards's son-in-law, Chichester Chaplin, seen as the representative for the newly-arrived immigrants, and Samuel Norris, an American who had married the daughter of a long-time resident and was sympathetic to the more-established landowners. After Chaplin's victory, many settlers alleged vote-stacking in an appeal to Juan Antonio Saucedo, the political chief of the Department of Béxar. In March, Saucedo overturned the election results and proclaimed Norris the winner. Edwards refused to recognize Norris's authority.
Shortly after Saucedo's ruling, Edwards left to recruit more settlers from the United States, leaving his younger brother, Benjamin, in charge of the colony. Benjamin could not maintain stability in the colony, and the situation deteriorated rapidly. A vigilante group of earlier settlers harassed many newcomers, and Benjamin made several complaints to state authorities. Unhappy with his tone and the increasing tension, Mexican authorities revoked the land grant in October and instructed the Edwards brothers to leave Mexico. Rumors that Haden Edwards had returned to the United States to raise an army and not just to recruit settlers likely influenced the government's action. Unwilling to abandon his \$50,000 investment in the colony, Haden Edwards rejoined his brother in Nacogdoches in late October, continuing their business affairs despite the cancellation of his colonization contract.
## Conflict
In October, Norris ruled that Edwards had improperly taken land from an existing settler to give to a new immigrant. Norris evicted the immigrant, angering many of the colonists. Later that month, another new immigrant was arrested and ordered to leave the country after refusing to purchase a merchant license before trading with the Indian tribes. On November 22, 1826, local Texian Militia colonel Martin Parmer and 39 other Edwards colonists entered Nacogdoches and arrested Norris, Sepulveda, and the commander of the small Mexican garrison, charging them with oppression and corruption. Haden Edwards was also arrested for violating his expulsion order but was immediately paroled, possibly as a ploy to disguise his own involvement in the plot. A kangaroo court found the other men guilty, removed them from their positions, and banned them from ever holding another public office. The court disbanded after appointing a temporary alcalde. The actions benefitted Parmer personally; several weeks earlier, after Parmer killed a man in a dispute, Norris had issued a warrant for Parmer's arrest. With Norris removed from office, the arrest warrant was voided.
Throughout the fall, Benjamin Edwards had tried to gather support from the Edwards colonists for a potential armed revolt against Mexican authority. Largely unsuccessful, he approached the nearby Cherokee tribe for assistance. Several years earlier, the tribe had applied for title to the lands they occupied in northern East Texas. They were promised but never given a deed from the Mexican authorities. Benjamin Edwards offered the tribe clear title to all of Texas north of Nacogdoches in exchange for armed support for his plans.
On December 16, the Edwards brothers invaded Nacogdoches with only 30 settlers, seizing one building in town, the Old Stone Fort. On December 21, they declared the former Edwards colony to be a new republic, named Fredonia. Within hours of the announcement, the Fredonians signed a peace treaty with the Cherokee, represented by Chief Richard Fields and John Dunn Hunter. Fields and Hunter claimed to represent 23 other tribes and promised to provide 400 warriors. In recognition of the agreement, above the Old Stone Fort flew a new flag containing two stripes (one red, one white) representing the two races. Inscribed on the banner was the motto, "Independence, Liberty, and Justice." Haden Edwards also sent messengers to Louisiana to request aid from the United States military, which refused to intervene. Another emissary sent to invite Stephen F. Austin and his colonists to join the rebellion garnered the rebuke: "You are deluding yourselves and this delusion will ruin you."
Edwards's actions disturbed many of his colonists because of their loyalty to their adopted country or their fear of his alliance with the Cherokee. Mexican authorities were also concerned with the Cherokee alliance, and both Peter Ellis Bean, the Mexican Indian agent, and Saucedo, the political chief, began negotiations with Fields. They explained to the Cherokee that the tribe had not followed proper procedures to attain a land grant and promised that if they reapplied through official channels, the Mexican government would honor their land request. Such arguments and a planned Mexican military response convinced many Cherokee to repudiate their treaty with Edwards.
On news of the November arrest of the alcalde, the Mexican government began preparing to retaliate. On December 11, Lieutenant Colonel Mateo Ahumada, the military commander in Texas, marched from San Antonio de Béxar with 110 members of the infantry and initially stopped in Austin's colony to assess the loyalty of his settlers. On January 1, Austin announced to his colonists that "infatuated madmen at Nacogdoches have declared independence." Much of his colony immediately volunteered to assist in quelling the rebellion. When the Mexican army left for Nacogdoches on January 22, they were joined by 250 Texian Militia from Austin's colony.
Impatient with the army's response time, Norris led 80 men to retake the Old Stone Fort. Although Parmer had fewer than 20 supporters with him, his men routed Norris's force in less than ten minutes. On January 31, Bean, accompanied by 70 Texian Militia from Austin's colony, rode into Nacogdoches. By now, Parmer and Edwards had learned that the Cherokee had abandoned any intention of waging war against Mexico. When not a single Cherokee warrior had appeared to reinforce the revolt, Edwards and his supporters fled. Bean pursued them to the Sabine River, but most, including both Edwards brothers, safely crossed into the United States. Ahumada and his soldiers, accompanied by political chief Saucedo, entered Nacogdoches on February 8 to restore order.
Although the Cherokee had not raised arms against Mexico, their treaty with the Fredonian revolutionaries caused Mexican authorities to question the tribe's loyalty. To demonstrate loyalty to Mexico, the Cherokee council ordered both Fields and Hunter to be executed. Under tribal law, certain offenses such as aiding an enemy of the tribe were punishable by death. By sentencing Fields and Hunter to death for that reason, the Cherokee affirmed that Edwards and his cohorts were their enemies. Both men fled but were soon captured and executed. When the executions were reported to Mexican authorities on February 28, the commandant general of the Eastern Interior Provinces, Anastasio Bustamante, praised the Cherokee for their prompt action.
Bustamante ultimately offered a general amnesty for all who had participated in the conflict except for Haden and Benjamin Edwards, Parmer, and Adolphus Sterne, a local merchant who had provided supplies to the rebel force. Like the Edwards brothers, Parmer escaped into Louisiana. Sterne remained and was sentenced to death for treason, but was paroled on condition that he swear allegiance to Mexico and never again take up arms against the Mexican government.
## Aftermath
The rebellion changed the dynamic between settlers and local tribes. Although the Cherokee repudiated the rebellion, their initial support caused many settlers to distrust the tribe. The rebellion and subsequent Mexican army response also changed the settlers' relationships with other tribes. In preceding years, the Tawakoni and Waco tribes, allied with various Comanche bands, had regularly raided Texas settlements. Fearing that the tribes, like the Cherokee, could ally with other groups against Mexican control, Bustamante began preparations to attack and weaken all hostile tribes in East Texas. On learning of the imminent invasion, in April 1827 the Tawakoni and Waco sued for peace. In June, the two tribes signed a peace treaty with Mexico, promising to halt all raids against Mexican settlers. The Tawakoni then assisted their allies, the Penateka Comanche, in reaching a treaty with Mexico. When Bustamante's troops left Texas later that year, the Tawakoni and Waco resumed their raiding. The Comanche upheld their treaty for many years and often assisted Mexican soldiers in recovering livestock stolen by the other tribes.
The failed rebellion also affected Mexican relations with the United States. Even before the revolt, many Mexican officials had worried that the United States was plotting to gain control of Texas. Once the rebellion came to light, officials suspected that Edwards had been an agent of the United States. To help protect the region, a new, larger garrison was established in Nacogdoches, to be commanded by Colonel Jose de las Piedras. As a direct result of Edwards's actions, the Mexican government authorized an extensive expedition, conducted by General Manuel de Mier y Terán, to inspect the Texas settlements and to recommend a future course of action. Mier y Terán's reports led to the Law of April 6, 1830, which severely restricted immigration into Texas. Within Texas the laws were widely denounced both by recent immigrants and by native-born Mexicans, and led to further armed conflict between Mexican soldiers and Texas residents.
Some historians regard the Fredonian Rebellion as the beginning of the Texas Revolution. Historian W.B. Bates remarked that the revolt was "premature, but it sparked the powder for later success". The people of Nacogdoches played instrumental roles in other rebellions in Texas over the next few years; in 1832, they expelled Piedras and his troops from Nacogdoches, and many Nacogdoches residents participated in the Texas Revolution.
## Popular culture
- The imaginary country of Freedonia, bordered by Sylvania, features in the Marx Brothers' 1933 movie Duck Soup. Since then, the name Freedonia has been used many times (see Freedonia).
- In the 2018 e-book Hail! Hail! by Harry Turtledove, the Marx Brothers are sent back in time by a lightning storm from 1934 to 1826 and interfere with the rebellion.
- Fredonia is mentioned in the 1985 Cormac McCarthy novel Blood Meridian.
## See also
- List of conflicts involving the Texas Military
|
66,465,126 |
Treaty of Guînes
| 1,143,081,757 |
Unratified treaty of the Hundred Years' War
|
[
"1350s in France",
"1350s treaties",
"1354 in England",
"Edward III of England",
"Hundred Years' War, 1337–1360",
"Military history of the Pas-de-Calais",
"Treaties not entered into force",
"Treaties of the Hundred Years' War"
] |
The Treaty of Guînes (pronounced "gheen") was a draft settlement to end the Hundred Years' War, negotiated between England and France and signed at Guînes on 6 April 1354. The war had broken out in 1337 and was further aggravated in 1340 when the English king, Edward III, claimed the French throne. The war went badly for France: the French army was heavily defeated at the Battle of Crécy, and the French town of Calais was besieged and captured. With both sides exhausted, a truce was agreed that, despite being only fitfully observed, was repeatedly renewed.
When English adventurers seized the strategically located town of Guînes in 1352, full-scale fighting broke out again. This did not go well for the French, as money and enthusiasm for the war ran out and state institutions ceased to function. Encouraged by the new pope, Innocent VI, negotiations for a permanent peace treaty opened at Guînes in early March 1353. These broke down, although a truce was again agreed and again not fully observed by either side. In early 1354 a faction in favour of peace with England gained influence in the French king's council. Negotiations were reopened and the English emissaries suggested that Edward would abandon his claim to the French throne in exchange for French territory. This was rapidly agreed and a draft treaty was formally signed on 6 April.
The treaty was supposed to be ratified by each country and announced by Innocent in October at the papal palace in Avignon. By then the French king, John II, had a new council that turned entirely against the treaty and John had decided that another round of warfare might leave him in a better negotiating position. The draft treaty was acrimoniously repudiated and war broke out again in June 1355. In 1356, the French royal army was defeated at the Battle of Poitiers and John was captured. In 1360, both sides agreed to the Treaty of Brétigny, which largely replicated the Treaty of Guînes, but was slightly less generous towards the English. War again flared up in 1369 and the Hundred Years' War finally ended in 1453, 99 years after the Treaty of Guînes was signed.
## Background
Since 1153 the English Crown had controlled the Duchy of Aquitaine, which extended across a large part of south-west France. By the 1330s this had been reduced to Gascony. A series of disagreements between France and England regarding the status of these lands culminated on 24 May 1337 in the council of the French king, Philip VI, declaring them forfeit. This marked the start of the Hundred Years' War, which was to last 116 years. In 1340 the English king, Edward III, as the closest male relative of Philip's predecessor Charles IV, laid formal claim to the Kingdom of France. This permitted his allies who were also vassals of the French crown to lawfully wage war on it. Edward was not fully committed to this claim and was repeatedly prepared to repudiate it in exchange for his claims to historically English territory in south-west France being satisfied.
In 1346 Edward led an army across northern France, storming and sacking the Norman town of Caen, defeating the French with great loss of life at the Battle of Crécy and laying siege to the port of Calais. With French finances and morale at a low ebb after Crécy, Philip failed to relieve the town and the starving defenders surrendered on 3 August 1347. With both sides financially exhausted, Pope Clement VI dispatched emissaries to negotiate a truce. On 28 September the Truce of Calais was agreed, bringing a temporary halt to the fighting. The agreement strongly favoured the English, confirming them in possession of all of their territorial conquests. It was agreed that it would expire nine months later on 7 July 1348 but was extended repeatedly over the years. The truce did not stop ongoing naval clashes between the two countries, nor small-scale fighting in Gascony and Brittany. In August 1350 John II succeeded his father, Philip, as King of France.
In early January 1352 a band of freelancing English soldiers seized the French-held town of Guînes by a midnight escalade. The French garrison of Guînes was not expecting an attack and the English crossed the moat, scaled the walls, killed the sentries, stormed the keep, released a group of English prisoners being held there and took over the whole castle. The French were furious and their envoys rushed to London to deliver a strong protest to Edward. Edward was thereby put in a difficult position. The English had been strengthening the defences of Calais with the construction of small fortifications at bottlenecks on the roads through the marshes to the town, but these could not compare with the strength of the defences at Guînes. Possessing the town would greatly improve the security of the English enclave around Calais, but retaining it would be a flagrant breach of the truce then in force, costing Edward honour and possibly provoking a resumption of open warfare, for which he was unprepared. He ordered the English occupants to hand Guînes back.
By coincidence, the English parliament was scheduled to meet, its opening session due on 17 January. Several members of the King's Council made fiery, warmongering speeches and the parliament was persuaded to approve three years of war taxes. Reassured that he had adequate financial backing, Edward changed his mind. By the end of January the Captain of Calais had fresh orders: to take over the garrisoning of Guînes in the King's name. The Englishmen who had captured the town were rewarded. Determined to strike back, the French took desperate measures to raise money and set about raising an army. Thus the opportunistic capture of Guînes resulted in the war resuming.
## Prelude
The resumption of hostilities caused fighting to flare up in Brittany and the Saintonge area of south-west France, but the main French effort was against Guînes. The French assembled an army of 4,500 men, including 1,500 men-at-arms and a large force of Italian crossbowmen under the command of Geoffrey of Charny, a senior and well-respected Burgundian knight in French service and the keeper of the Oriflamme, the French royal battle banner. By May 1352 the 115 men of the English garrison, commanded by Thomas Hogshaw, were under siege. The French reoccupied the town, but found it difficult to approach the castle because of the marshy terrain and the strength of its barbican.
By the end of May, the English authorities had raised a force of more than 6,000 which was gradually shipped to Calais. From there they harassed the French in what the modern historian Jonathan Sumption describes as "savage and continual fighting" throughout June and early July. In mid-July a large contingent of troops arrived from England and, reinforced by much of the Calais garrison, were successful in approaching Guînes undetected and launching a night attack on the French camp. Many Frenchmen were killed and a large part of the palisade around their positions was destroyed. Shortly after, Charny abandoned the siege, leaving a garrison to hold the town.
In August the French army in Brittany was defeated by a smaller English force at the Battle of Mauron, with heavy losses, especially among its leadership and men-at-arms. In south-west France there was scattered fighting across the Agenais, Périgord and Quercy, with the English getting the better of it; French morale in the area was poor and they despaired of being able to drive off the English.
## Treaty
### Negotiations
The war was going badly for the French on all fronts, and money and enthusiasm for the war were running out. Sumption describes the French administration as "fall[ing] apart in jealous acrimony and recrimination". The new pope, Innocent VI, a relative of John's, encouraged negotiations for a permanent peace treaty, and discussions opened at Guînes in early March 1353, overseen by Cardinal Guy of Boulogne. The modern historian George Cuttino states that Innocent was acting at John's instigation. The English sent a senior deputation: Henry of Lancaster, one of Edward's most trusted and experienced military lieutenants; Michael Northburgh, keeper of the privy seal; William Bateman, the Bishop of Norwich and the most experienced diplomat in England; and Simon Islip, an ex-keeper of the privy seal and the Archbishop of Canterbury; among others. The French were represented by Pierre de La Forêt, Archbishop of Rouen and John's chancellor; Charles of Spain, the Constable of France and a close confidant of John; Robert de Lorris, John's chamberlain; Guillaume Bertrand, the Bishop of Beauvais; and several other high-ranking figures. Both parties were ill-prepared and ill-briefed, with only two of the French delegation having any previous negotiating experience with the English. After several meetings it was agreed they would adjourn to receive further instructions from their monarchs, reconvening on 19 May. Until then hostilities would be suspended by a formal truce. This temporary agreement was signed and sealed on 10 March.
In early May 1353 the English requested the negotiations not be restarted until June, to allow them to discuss the matter more fully. The French responded on 8 May by cancelling the truce and announcing an arrière-ban for Normandy, a formal call to arms for all able-bodied males. The negotiators met briefly in Paris on 26 July and extended the truce until November, although all concerned understood that much fighting would continue. French central and local government collapsed. French nobles took to violently settling old scores rather than fighting the English. Charles of Navarre, one of the most powerful figures in France, broke into the bedroom of Charles of Spain and murdered him as he knelt naked, pleading for his life. Navarre then boasted of it and made tentative approaches to the English regarding an alliance. Navarre and John formally reconciled in March 1354 and a new balance within the French government was reached; this was more in favour of peace with England, in some quarters at almost any price. Informal talks started again at Guînes in mid-March. The principle whereby Edward abandoned his claim to the French throne in exchange for French territory was agreed; Edward gave his assent to this on 30 March. Formal negotiations recommenced in early April. The French were represented by Forêt, Lorris and Bertrand again, joined by Robert le Coq, Bishop of Laon, Robert, Count of Roucy, and Jean, Count of Châtillon. The makeup of the English delegation is not known. Discussions were rapidly concluded. A formal truce for a year was agreed, as was the broad outline of a permanent peace. On 6 April 1354 these heads of terms were formally signed by the representatives of both countries, witnessed by Guy of Boulogne.
### Agreement
The treaty was very much in the favour of the English. England was to gain the whole of Aquitaine, Poitou, Maine, Anjou, Touraine and Limousin – the large majority of western France – as well as Ponthieu and the Pale of Calais. All were to be held as sovereign English territory, not as a fief of the French crown as English possessions in France had previously been. It was also a treaty of friendship between the two nations and both France's alliance with Scotland – over which Edward claimed suzerainty – and England's with Flanders – which was technically a province of France – were to be abandoned. The truce was to be immediately publicised, while the fact that the outline of a peace treaty had been agreed was to be kept secret until 1 October, when Innocent would announce it at the papal palace in Avignon. In the same ceremony, English representatives would repudiate the English claim to John's throne and the French would formally relinquish sovereignty over the agreed provinces. Edward was overjoyed; the English parliament ratified the treaty sight unseen. The English party for the ceremony departed more than four months before they were due in Avignon. John also endorsed the treaty, but members of his council were less enthusiastic.
### Repudiation
The English adhered to the truce, but John of Armagnac, the French commander in the south-west, ignored his orders to observe the peace; however, his offensive was ineffectual. Details of how much of the treaty was known to the French ruling elite and their debates regarding it are lacking, but sentiment was against its terms. In August it was revealed that several of the men who had negotiated and signed the treaty had been deeply involved in the plot to murder Charles of Spain. At least three of John's closest councillors fled his court or were expelled. By early September the French court had turned against the treaty. The date for the formal ceremony in Avignon was suspended.
In November 1354 John seized all of Navarre's lands, besieging those places which did not surrender. Planned negotiations in Avignon to finalise the details of the treaty did not take place in the absence of French ambassadors. The English emissaries who were to formally announce the agreement arrived amidst much pomp in late December. John had meanwhile decided that another round of warfare might leave him in a better negotiating position and the French planned an ambitious series of offensives for the 1355 campaigning season. The French ambassadors arrived in Avignon in mid-January, repudiated the previous agreement and attempted to reopen negotiations. The English and the Cardinal of Boulogne pressed them to adhere to the existing treaty. The impasse continued for a month. Simultaneously the English delegation plotted an anti-French alliance with Navarre. By the end of February the futility of their official missions was obvious to all and the delegations departed with much acrimony. Their one achievement was a formal extension of the ill-observed truce to 24 June. It was clear that from then both sides would be committed to full-scale war.
## Aftermath
The war resumed in 1355, with both Edward and his son, Edward the Black Prince, fighting in separate campaigns in France. In 1356 the French royal army was defeated by a smaller Anglo-Gascon force at the Battle of Poitiers and John was captured. In 1360, the fighting was brought to a temporary halt by the Treaty of Brétigny, which largely replicated the Treaty of Guînes, with slightly less generous terms for the English. By this treaty vast areas of France were ceded to England, including Guînes and its county which became part of the Pale of Calais. In 1369 large-scale fighting broke out again and the Hundred Years' War did not end until 1453, 99 years after the Treaty of Guînes was signed.
|
244,511 |
Barn owl
| 1,168,276,274 |
Common cosmopolitan owl species
|
[
"Articles containing video clips",
"Birds described in 1769",
"Cosmopolitan birds",
"Extant Quaternary first appearances",
"Falconry",
"Taxa named by Giovanni Antonio Scopoli",
"Tyto"
] |
The barn owl (Tyto alba) is the most widely distributed species of owl in the world and one of the most widespread of all species of birds, being found almost everywhere except for the polar and desert regions, Asia north of the Himalayas, most of Indonesia, and some Pacific Islands. It is also known as the common barn owl, to distinguish it from the other species in its family, Tytonidae, which forms one of the two main lineages of living owls, the other being the typical owls (Strigidae).
There are at least three major lineages of barn owl: the western barn owl of Europe, western Asia, and Africa; the eastern barn owl of southeastern Asia and Australasia; and the American barn owl of the Americas. Some taxonomic authorities classify barn owls differently, recognising up to five separate species; and further research needs to be done to resolve the disparate taxonomies. There is considerable variation of size and colour among the approximately 28 subspecies, but most are between 33 and 39 cm (13 and 15 in) in length, with wingspans ranging from 80 to 95 cm (31 to 37 in). The plumage on the head and back is a mottled shade of grey or brown; that on the underparts varies from white to brown and is sometimes speckled with dark markings. The face is characteristically heart-shaped and is white in most subspecies. This owl does not hoot, but utters an eerie, drawn-out screech.
The barn owl is nocturnal over most of its range; but in Great Britain and some Pacific Islands, it also hunts by day. Barn owls specialise in hunting animals on the ground and nearly all of their food consists of small mammals, which they locate by sound, their hearing being very acute. The owls usually mate for life unless one of the pair is killed, whereupon a new pair bond may be formed. Breeding takes place at varying times of the year, according to the locality, with a clutch of eggs, averaging about four in number, being laid in a nest in a hollow tree, old building, or fissure in a cliff. The female does all the incubation, and she and the young chicks are reliant on the male for food. When large numbers of small prey are readily available, barn owl populations can expand rapidly; and globally the bird is considered to be of least conservation concern. Some subspecies with restricted ranges are more threatened.
## Etymology
The barn owl was one of several species of bird first described in 1769 by the Tyrolean physician and naturalist Giovanni Antonio Scopoli in his Anni Historico-Naturales. He gave it the scientific name Strix alba. As more species of owl were described, the genus Strix (from the Greek στρίξ, strix, "owl") came to refer solely to the wood owls in the typical-owl family Strigidae, and the barn owl became Tyto alba in the barn-owl family Tytonidae. Tyto alba literally means 'white owl', from the onomatopoeic Ancient Greek τυτώ (tytō, 'owl') – compare English "hooter" – and Latin alba, 'white'.
The bird is known by many common names that refer to its appearance, call, habitat, or its eerie, silent flight: white owl, silver owl, demon owl, ghost owl, death owl, night owl, rat owl, church owl, cave owl, stone owl, monkey-faced owl, hissing owl, hobgoblin or hobby owl, dobby owl, white-breasted owl, golden owl, screech owl, straw owl, barnyard owl, and delicate owl. "Golden owl" might also refer to the related golden masked owl (T. aurantia). "Hissing owl" and, particularly in the U.K. and in India, "screech owl" refer to the piercing calls of these birds. The latter name is also applied to a different group of birds, the screech-owls in the genus Megascops.
## Description
The barn owl is a medium-sized, pale-coloured owl with long wings and a short, squarish tail. There is considerable size variation across the subspecies with a typical specimen measuring about 33 to 39 cm (13 to 15 in) in overall length, with a wingspan of some 80 to 95 cm (31 to 37 in). Adult body mass is also variable with male owls from the Galapagos weighing 260 g (9.2 oz) while male Pacific barn owls average 555 g (19.6 oz). In general, owls living on small islands are smaller and lighter, perhaps because they have a higher dependence on insect prey and need to be more manoeuvrable. The shape of the tail is a means of distinguishing the barn owl from typical owls when seen in the air. Other distinguishing features are the undulating flight pattern and the dangling, feathered legs. The pale face with its heart shape and black eyes give the flying bird a distinctive appearance, like a flat mask with oversized, oblique black eyeslits, the ridge of feathers above the bill somewhat resembling a nose.
The bird's head and upper body typically vary between pale brown and some shade of grey (especially on the forehead and back) in most subspecies. Some are purer, richer brown instead, and all have fine black-and-white speckles except on the remiges and rectrices (main wing feathers), which are light brown with darker bands. The heart-shaped face is usually bright white, but in some subspecies it is brown. The underparts, including the tarsometatarsal (lower leg) feathers, vary from white to reddish buff among the subspecies, and are either mostly unpatterned or bear a varying number of tiny blackish-brown speckles. It has been found that, at least in the continental European populations, females with more spotting are healthier than plainer birds. By contrast, this does not hold true for European males, in which the spotting varies according to subspecies. The bill varies from pale horn to dark buff, corresponding to the general plumage hue, and the iris is blackish brown. The toes, like the bill, vary in colour, ranging from pink to dark pinkish-grey, and the talons are black.
Both leucistic and melanistic barn owls have been recorded in the wild and in captivity, with melanistic individuals estimated to occur in about one of every 100,000 birds.
On average within any one population, males tend to have fewer spots on the underside and are paler in colour than females. The latter are also larger with a strong female T. alba of a large subspecies weighing over 550 g (19.4 oz), while males are typically about 10% lighter. Nestlings are covered in white down, but the heart-shaped facial disk becomes visible soon after hatching.
Contrary to popular belief, the barn owl does not hoot (such calls are made by typical owls, like the tawny owl or other members of the genus Strix). It instead produces a characteristic piercing shree scream, ear-shattering at close range, an eerie, long-drawn-out shriek. Males in courtship give a shrill twitter. Both young and old can hiss like a snake to scare away intruders. Other sounds produced include a purring chirrup denoting pleasure, and a "kee-yak", which resembles one of the vocalisations of the tawny owl. When captured or cornered, the barn owl throws itself on its back and flails with sharp-taloned feet, making for an effective defence. In such situations it may emit rasping sounds or clicking snaps, produced probably by the bill but possibly by the tongue.
## Distribution
The barn owl is the most widespread landbird species in the world, occurring on every continent except Antarctica. Its range includes all of Europe (except Fennoscandia and Malta), most of Africa apart from the Sahara, the Indian subcontinent, Southeast Asia, Australia, many Pacific Islands, and North-, Central-, and South America. In general, it is considered to be sedentary; and, indeed, many individuals, having taken up residence in a particular location, remain there even when better nearby foraging areas are available. In the British Isles, the young seem largely to disperse along river corridors; and the distance travelled from their natal site averages about 9 km (5.6 mi).
In continental Europe the dispersal distance is greater, commonly somewhere between 50 and 100 kilometres (31 and 62 mi) but exceptionally 1,500 km (932 mi), with ringed birds from the Netherlands ending up in Spain and in Ukraine. In the United States, dispersal is typically over distances of between 80 and 320 km (50 and 199 mi), with the most travelled individuals ending up some 1,760 km (1,094 mi) from their points of origin. Dispersal movements in the African continent include 1,000 km (621 mi) from Senegambia to Sierra Leone and up to 579 km (360 mi) within South Africa. In Australia there is some migration as the birds move towards the northern coast in the dry season and southward in the wet season, as well as nomadic movements in association with rodent plagues. Occasionally, some of these birds turn up on Norfolk Island, Lord Howe Island, or New Zealand, showing that crossing the ocean is within their capabilities. In 2008, barn owls were recorded for the first time breeding in New Zealand. The barn owl has been successfully introduced into the Hawaiian island of Kauai in an attempt to control rodents, but it has also been found to feed on native birds.
## Taxonomy
The ashy-faced owl (T. glaucops) was for some time included in T. alba. Based on DNA evidence, König, Weick & Becking (2009) recognised the American barn owl (T. furcata) and the Curaçao barn owl (T. bargei) as separate species. They proposed that T. a. delicatula should be split off as a separate species, to be known as the eastern barn owl, which would include the subspecies T. d. delicatula, T. d. sumbaensis, T. d. meeki, T. d. crassirostris, and T. d. interposita. As of 2021, the International Ornithological Committee had not accepted the split of Tyto delicatula from T. alba.
Some island subspecies are occasionally treated as distinct species, a move which should await further research into barn owl phylogeography. According to Murray Bruce in Handbook of Birds of the World Volume 5: Barn-owls to Hummingbirds, "a review of the whole group [is] long overdue". Molecular analysis of mitochondrial DNA shows a separation of the species into two clades, an Old World alba and a New World furcata, but this study did not include T. a. delicatula, which the authors seem to have accepted as a separate species. Extensive genetic variation was found between the Indonesian T. a. stertens and other members of the alba clade, leading to the separation of stertens into Tyto javanica.
Twenty to thirty subspecies are usually recognized, varying mainly in body proportions, size, and colour. Barn owls range in colour from the almost beige-and-white nominate subspecies alba, erlangeri, and niveicauda, to the nearly black-and-brown contempta. Island forms are mostly smaller than mainland ones, and those inhabiting forests have darker plumage and shorter wings than those living in open grasslands. Several subspecies are generally considered to be intergrades between more distinct populations.
In Handbook of Birds of the World Volume 5: Barn-owls to Hummingbirds, the following subspecies are listed:
## Behaviour and ecology
Like most owls, the barn owl is nocturnal, relying on its acute sense of hearing when hunting in complete darkness. It often becomes active shortly before dusk but can sometimes be seen during the day when relocating from one roosting site to another. In Britain, on various Pacific Islands, and perhaps elsewhere, it sometimes hunts by day. The owl's daylight hunting may depend on whether it can avoid being mobbed by other birds during that time. In Britain, some birds continue to hunt by day—even when mobbed by such birds as magpies, rooks, and black-headed gulls—possibly because the previous night has been wet, making night hunting difficult. By contrast, in southern Europe and the tropics, the birds seem to be almost exclusively nocturnal, with the few birds that hunt by day being severely mobbed. In some cases, an owl feeling threatened by the mobbing of a crow may become aggressive enough to decapitate the crow.
Barn owls are not particularly territorial but have a home range inside which they forage. For males in Scotland this home range has a radius of about 1 km (0.6 mi) from the nest site and an average area of about 300 hectares (740 acres). Female home ranges largely coincide with those of their mates. Outside the breeding season, males and females usually roost separately, each one having about three favoured sites in which to conceal themselves by day, and which are also visited for short periods during the night. Roosting sites include holes in trees, fissures in cliffs, disused buildings, chimneys, and hay sheds, and are often small in comparison to nesting sites. As the breeding season approaches, the birds move back to the vicinity of a chosen nest to roost. When another bird, such as a pigeon, intrudes on the nest, the male barn owl tends to be docile and curious, while the female is protective of her chicks and may attack the intruder; the chicks themselves display defensive behaviour.
The barn owl is a bird of open country, such as farmland or grassland with some interspersed woodland, usually at altitudes below 2,000 metres (6,600 ft) but occasionally as high as 3,000 metres (9,800 ft) in the tropics, such as in Ethiopia's Degua Tembien mountain range. This owl prefers to hunt along the edges of woods or in rough grass strips adjoining pasture. It has an effortless wavering flight as it quarters the ground, alert to the sounds made by potential prey. Like most owls, the barn owl flies silently; tiny serrations on the leading edges of its flight feathers and a hairlike fringe on the trailing edges help to break up the flow of air over the wings, thereby reducing turbulence and the noise that accompanies it. Hairlike extensions to the barbules of its feathers, which give the plumage a soft feel, also minimise noise produced during wingbeats. Behavioural and environmental preferences may differ slightly even between neighbouring subspecies, as shown in the case of the European T. a. guttata and T. a. alba, which probably evolved, respectively, in allopatric glacial refugia in southeastern Europe, and in Iberia or southern France.
### Hunting and feeding
Hunting in twilight or at night, the barn owl can target its prey and dive to the ground. Its legs and toes are long and slender, which improves its ability to forage among dense foliage or beneath the snow, and gives it a wide spread of talons when attacking prey. This bird hunts by flying slowly, quartering the ground and hovering over spots that may conceal prey. It has long, broad wings that enable it to manoeuvre and turn abruptly. It has acute hearing, with ears placed asymmetrically, which improves detection of sound position and distance; the bird does not require sight to hunt. The facial disc helps with the bird's hearing, as is shown by the fact that, with the ruff feathers removed, the bird can still determine a sound source's direction, although without the disc it cannot determine the source's height. It may perch on branches, fence posts, or other lookouts to scan its surroundings; and this is the main means of prey location in the oil palm plantations of Malaysia.
Rodents and other small mammals may constitute over ninety percent of the prey caught. Birds are also taken, as well as lizards, amphibians, fish, spiders, and insects. Even when they are plentiful, and other prey scarce, earthworms do not seem to be consumed. In North America and most of Europe, voles predominate in the diet and shrews are the second most common food choice. In Ireland, the accidental introduction of the bank vole in the 1950s led to a major shift in the barn owl's diet: where their ranges overlap, the vole is now by far the largest prey item. Mice and rats are the main foodstuffs in the Mediterranean region, the tropics, subtropics, and Australia. Gophers, muskrats, hares, rabbits, and bats are also preyed upon. Barn owls are usually specialist feeders in productive areas and generalists in areas where prey is scarce.
On the Cape Verde Islands, geckos are the mainstay of the diet, supplemented by birds such as plovers, godwits, turnstones, weavers, and pratincoles. On a rocky islet off the coast of California, a clutch of four young were being reared on a diet of Leach's storm petrel (Oceanodroma leucorhoa). On bird-rich islands, a barn owl might include birds as some fifteen to twenty percent of its diet, while in grassland it will gorge itself on swarming termites, or on Orthoptera such as Copiphorinae katydids, Jerusalem crickets (Stenopelmatidae), or true crickets (Gryllidae). Smaller prey is usually torn into chunks and eaten completely, including bones and fur, while prey larger than about 100 grams (3.5 oz)—such as baby rabbits, Cryptomys blesmols, or Otomys vlei rats—is usually dismembered and the inedible parts discarded.
Compared to other owls of similar size, the barn owl has a much higher metabolic rate, requiring relatively more food. Relative to its size, the barn owl consumes more rodents than other owl species. Studies have shown that an individual barn owl may eat one or more voles (or their equivalent) per night, equivalent to about fourteen percent of the bird's bodyweight. Excess food is often cached at roosting sites and can be used when food is scarce. This makes the barn owl one of the most economically valuable wildlife animals for agriculture. Farmers often find these owls more effective than poison in keeping down rodent pests, and they can encourage barn owl habitation by providing nesting sites.
### Breeding
Barn owls living in tropical regions can breed at any time of year, but some seasonality in nesting is still evident. Where there are distinct wet and dry seasons, egg-laying usually takes place during the dry season, with increased rodent prey becoming available to the birds as the vegetation dies off. In arid regions, such as parts of Australia, breeding may be irregular and may happen in wet periods, with the resultant temporary increase in the populations of small mammals. In temperate climates, nesting seasons become more distinct, and there are some seasons of the year when no egg-laying takes place. In Europe and North America, most nesting takes place between March and June, when temperatures are increasing. The actual dates of egg-laying vary by year and by location, being correlated with the amount of prey-rich foraging habitat around the nest site. An increase in rodent populations will usually stimulate the local barn owls to begin nesting, and, consequently, two broods are often raised in a good year, even in the cooler parts of the owl's range.
Females are ready to breed at ten to eleven months of age. Barn owls are usually monogamous, sticking to one partner for life unless one of a pair dies. During the non-breeding season they may roost separately, but as the breeding season approaches, they return to their established nesting site, showing considerable site fidelity. In colder climates, in harsh weather, and where winter food supplies may be scarce, they may roost in farm buildings and in barns between hay bales, but they then run the risk that their selected nesting hole may be taken over by some other species. Single males may establish feeding territories, patrolling the hunting areas, occasionally stopping to hover, and perching on lofty eminences where they screech to attract a mate. Where a female has lost her mate but maintained her breeding site, she usually seems to attract a new one.
Once a pair-bond has been formed, the male will make short flights at dusk around the nesting and roosting sites and then longer circuits to establish a home range. When he is later joined by the female, there is much chasing, turning, and twisting in flight, and frequent screeches, the male's being high-pitched and tremulous and the female's lower and harsher. In later stages of courtship, the male emerges at dusk, climbs high into the sky, and then swoops back to the vicinity of the female at speed. He then sets off to forage. The female meanwhile sits in an eminent position and preens, returning to the nest a minute or two before the male arrives with food for her. Such feeding behaviour of the female by the male is common, helps build the pair-bond, and increases the female's fitness before egg-laying commences.
Barn owls are cavity nesters. They choose holes in trees, fissures in cliff faces, the large nests of other birds such as the hamerkop (Scopus umbretta), and, particularly in Europe and North America, old buildings such as farm sheds and church towers. Buildings are preferred to trees in wetter climates in the British Isles and provide better protection for fledglings from inclement weather. Tree nests tend to be in open habitats rather than in the middle of woodland, and nest holes tend to be higher in North America than in Europe, because of possible predation by raccoons (Procyon lotor). No nesting material is used as such but, as the female sits incubating the eggs, she draws in the dry furry material of which her regurgitated pellets are composed, so that by the time the chicks are hatched, they are surrounded by a carpet of shredded pellets. Oftentimes other birds such as jackdaws (Corvus monedula) nest in the same hollow tree or building and seem to live harmoniously with the owls.
Before commencing laying, the female spends much time near the nest and is entirely provisioned by the male. Meanwhile, the male roosts nearby and may cache any prey that is surplus to their requirements. When the female has reached peak weight, the male provides a ritual presentation of food and copulation occurs at the nest. The female lays eggs on alternate days and the clutch size averages about five eggs (the range being two to nine). The eggs are chalky white, somewhat elliptical, and about the size of bantam eggs. Incubation begins as soon as the first egg is laid. While the female is sitting on the nest, the male is constantly bringing more provisions, and they may pile up beside the female. The incubation period is about thirty days, hatching takes place over a prolonged period, and the youngest chick may be several weeks younger than its oldest sibling. In years with a plentiful supply of food, there may be a hatching success rate of about 75%. The male continues to copulate with the female when he brings food, which makes the newly hatched chicks vulnerable to injury.
The chicks are at first covered with greyish-white down and develop rapidly. Within a week they can hold their heads up and shuffle around in the nest. The female tears up the food brought by the male and distributes it to the chicks. Initially, the chicks make a "chittering" sound but this soon changes into a food-demanding "snore". By two weeks old they are already half their adult weight and look naked, as the amount of down is insufficient to cover their growing bodies. By three weeks old, quills are starting to push through the skin and the chicks stand, making snoring noises with wings raised and tail stumps waggling, begging for food items which are now given whole. Atypically among birds, barn owl chicks can "negotiate" and allow weaker ones to eat first, possibly in exchange for grooming. The male is the main provider of food until all the chicks are at least four weeks old, at which time the female begins to leave the nest and starts to roost elsewhere. By the sixth week the chicks are as big as the adults, but have slimmed down somewhat by the ninth week when they are fully fledged and start leaving the nest briefly themselves. They are still dependent on the parent birds until about thirteen weeks and receive training from the female in finding, and eventually catching, prey.
### Moulting
Feathers become abraded over time and all birds need to replace them at intervals. Barn owls are particularly dependent on their ability to fly quietly and manoeuvre efficiently. In temperate areas the owls undergo a prolonged moult that lasts through three phases over a period of two years. The female starts to moult while incubating the eggs and brooding the chicks, a time when the male feeds her, so she does not need to fly much. The first primary feather to be shed is a central one, number 6, and it has regrown completely by the time the female resumes hunting. Feathers 4, 5, 7, and 8 are dropped at a similar time the following year and feathers 1, 2, 3, 9 and 10 in the bird's third year of adulthood. The secondary and tail feathers are lost and replaced over a similar timescale, again starting while incubation is taking place. In the case of the tail, the two outermost tail feathers are first shed, followed by the two central ones, the other tail feathers being shed the following year.
The male owl moults rather later in the year than the female, at a time when there is an abundance of food, the female has recommenced hunting, and the demands of the chicks are lessening. Unmated males without family responsibilities often start losing feathers earlier in the year. Their moult follows a similarly prolonged pattern to that of the female. The first sign that the male is moulting is often when a tail feather has been dropped at the roost. A consequence of moulting is the loss of thermal insulation. This is of little importance in the tropics, and barn owls there usually moult a complete complement of flight feathers annually. The hot-climate moult may still take place over a long period but is usually concentrated at a particular time of year outside the breeding season.
### Predators and parasites
Predators of the barn owl include large American opossums (Didelphis), the common raccoon, and similar carnivorous mammals, as well as eagles, larger hawks, and other owls. Among the latter, the great horned owl (Bubo virginianus), in the Americas, and the Eurasian eagle-owl (B. bubo) are noted predators of barn owls. Despite some sources claiming that there is little evidence of predation by great horned owls, one study from Washington found that 10.9% of the local great horned owls' diet was made up of barn owls. In Africa, the principal predators of barn owls are Verreaux's eagle-owls (Bubo lacteus) and Cape eagle-owls (B. capensis). In Europe, although less dangerous than the eagle-owls, the chief diurnal predators are the northern goshawk (Accipiter gentilis) and the common buzzard (Buteo buteo). About 12 other large diurnal raptors and owls have also been reported as predators of barn owls, ranging from the similar-sized Cooper's hawk (Accipiter cooperii) and scarcely larger tawny owl (Strix aluco) to huge bald (Haliaeetus leucocephalus) and golden eagles (Aquila chrysaetos). As a result of improved conservation measures, the populations of the northern goshawk and eagle-owls are increasing, thus increasing predation on barn owls where the species coexist.
When disturbed at its roosting site, an angry barn owl lowers its head and sways it from side to side, or the head may be lowered and stretched forward and the wings outstretched and drooped while the bird emits hisses and makes snapping noises with its beak. Another defensive attitude involves lying flat on the ground or crouching with wings spread out.
Barn owls are hosts to a wide range of parasites. Fleas are present at nesting sites, and externally the birds are attacked by feather lice and feather mites, which chew the barbules of the feathers and are transferred from bird to bird by direct contact. Blood-sucking flies, such as Ornithomyia avicularia, are often present, moving about among the plumage. Internal parasites include the fluke Strigea strigis, the tapeworm Paruternia candelabraria, several species of parasitic roundworm, and spiny-headed worms in the genus Centrorhynchus. These gut parasites are acquired when the birds feed on infected prey. There is some indication that female birds with more and larger spots have a greater resistance to external parasites. This is correlated with a smaller bursa of Fabricius, a gland associated with antibody production, and with a lower fecundity of the blood-sucking fly Carnus hemapterus, which attacks nestlings.
### Lifespan
Unusually for a medium-sized carnivorous animal, the barn owl exhibits r-selection, producing a large number of offspring with a high growth rate, which have a low probability of surviving to adulthood. Its typical lifespan is around four years. In Scotland, the species has been recorded living up to 18 and possibly even 34 years. A significant cause of death in temperate areas is starvation, particularly during winters with significant snow cover.
Collision with road vehicles is another cause of death, and may result when birds forage on mown verges. Some of these birds are in poor condition and may have been less able to evade oncoming vehicles than fit individuals. In some locations, road mortality rates can be particularly high, with collision rates being influenced by higher commercial traffic, roadside verges that are grass rather than shrubs, and where small mammals are abundant. Historically, many deaths were caused by the use of pesticides, and this may still be the case in some parts of the world. Collisions with power-lines kill some birds; and being shot accounts for others, especially in Mediterranean regions.
## Status and conservation
Barn owls are relatively common throughout most of their range and not considered globally threatened. If considered as a single global species, the barn owl is the second most widely distributed of all raptors, behind only the peregrine falcon. It is wider-ranging than the also somewhat cosmopolitan osprey. Furthermore, the barn owl is likely the most numerous of all raptors, with the International Union for Conservation of Nature (IUCN) estimating a global population possibly approaching 10 million individuals (of which the American barn owl species, found throughout the Americas, may comprise nearly 2 million). Severe local declines due to organochlorine (e.g., DDT) poisoning in the mid 20th century and rodenticides in the late 20th century have affected some populations, particularly in Europe and North America. Intensification of agricultural practices often means that the rough grassland that provides the best foraging habitat is lost. While barn owls are prolific breeders and able to recover from short-term population decreases, they are not as common in some areas as they used to be. A 1995–1997 survey put their British population at between 3,000 and 5,000 breeding pairs, out of an average of about 150,000 pairs in the whole of Europe. In the US, barn owls are listed as endangered species in seven Midwestern states (Ohio, Michigan, Indiana, Illinois, Wisconsin, Iowa, and Missouri), and in the European Community they are considered a Species of European Concern.
In Canada, barn owls are no longer common and are most likely to be found in coastal British Columbia south of Vancouver, having become extremely rare in a former habitat, southern Ontario. Despite a recovery strategy pursued in Ontario in 2007–2010, only a handful of wild, breeding barn owls existed in the province in 2018. This is primarily because of the disappearance of the grasslands where the bird once hunted, but according to one study, also because of "harsh winters, predation, road mortality and use of rodenticides". The species is listed as endangered overall in Canada, due to loss of habitat and a lack of nesting sites.
In the Canary Islands, a somewhat larger number of these birds still seems to survive on the island of Lanzarote, but altogether this subspecies (T. a. gracilirostris, the Canary barn owl) is precariously rare: perhaps fewer than two hundred individuals remain. Similarly, the birds on the western Canary Islands, which are usually assigned to this subspecies, have declined severely, and wanton destruction of the birds seems to be a significant factor. On Tenerife they seem relatively numerous, but on the other islands the situation looks about as bleak as on Fuerteventura. Because birds common in mainland Spain are also assigned to this subspecies, the western Canary Islands population is not classified as threatened.
Nest boxes are used primarily when populations suffer declines. Although such declines have many causes, among them are the lack of available natural nesting sites. Early successes among conservationists have led to the widespread provision of nest boxes, which has become the most used form of population management. The barn owl accepts the provided nest boxes and sometimes prefers them to natural sites. The nest boxes are placed under the eaves of buildings and in other locations. The upper bound of the number of barn owl pairs depends on the abundance of food at nesting sites. Conservationists encourage farmers and landowners to install nest boxes by pointing out that the resultant increased barn owl population would provide natural rodent control. In some conservation projects, the use of rodenticides for pest control was replaced by the installation of nest boxes for barn owls, which has been shown to be a less costly method of rodent control.
### Cultural aspects
Common names such as "demon owl", "death owl", "ghost owl", or "lich owl" (from lich, an old term for a corpse) show that rural populations in many places considered barn owls to be birds of evil omen. For example, the Tzeltal people in Mexico regard them as "disease givers". These owls do not "hoot", instead emitting raspy screeches and hissing noises, and their white face and underbelly feathers, visible as they fly overhead, make them look "ghostly". Consequently, they were often killed by farmers who were unaware of the benefits these birds bring. Negative perceptions can also be attributed to the false belief that they could eat large animals, such as chickens and cats. Similar beliefs exist in Thailand, where a barn owl flying over or perching on the roof of a house is thought to portend the death of one of its inhabitants. In South Africa, barn owls are often associated with witchcraft and are persecuted. In some South African cultures, these owls are used in 'muthi' (traditional medicine) and are believed to confer special powers when consumed.
In India, beliefs about the barn owl are very different. Hindus consider the barn owl to be the mount and symbol of Lakshmi, goddess of wealth and fortune.
|
421,853 |
Wii
| 1,173,204,239 |
Home video game console by Nintendo
|
[
"2000s in video gaming",
"2000s toys",
"2006 in video gaming",
"2010s in video gaming",
"2010s toys",
"Backward-compatible video game consoles",
"Computer-related introductions in 2006",
"Discontinued video game consoles",
"Home video game consoles",
"Products and services discontinued in 2017",
"Products introduced in 2006",
"Seventh-generation video game consoles",
"Spike Video Game Award winners",
"Wii",
"Wii hardware"
] |
The Wii (/wiː/ WEE) is a home video game console developed and marketed by Nintendo. It was released on November 19, 2006, in North America and in December 2006 for most other regions of the world. It is Nintendo's fifth major home game console, following the GameCube, and is a seventh-generation console alongside Microsoft's Xbox 360 and Sony's PlayStation 3.
In developing the Wii, Nintendo president Satoru Iwata directed the company to avoid competing with Microsoft and Sony on computational graphics and power and instead to target a broader demographic of players through novel gameplay. Game designers Shigeru Miyamoto and Genyo Takeda led the console's development under the codename Revolution. The primary controller for the Wii is the Wii Remote, a wireless controller with both motion sensing and traditional controls, which can be used as a pointing device towards the television screen or for gesture recognition. The Wii was Nintendo's first home console to directly support Internet connectivity, enabling both online play and the digital distribution of games and media applications through the Wii Shop Channel. The Wii also supports wireless connectivity with the Nintendo DS handheld console for selected games. Initial Wii models included full backward compatibility with GameCube games and most GameCube accessories. Later in its lifecycle, two lower-cost Wii models were produced: a revised model with the same design as the original Wii but with the GameCube compatibility features removed, and the Wii Mini, a compact, budget redesign of the Wii that further removed features, including online connectivity and SD card storage.
Because of Nintendo's reduced focus on computational power, the Wii and its games were less expensive to produce than its competitors. The Wii was extremely popular at launch, causing the system to be in short supply in some markets. A bundled game, Wii Sports, was considered the killer app for the console; other flagship games included entries in the Super Mario, Legend of Zelda, Pokémon, and Metroid series. Within a year of launch, the Wii became the best-selling seventh-generation console, and by 2013 it had surpassed 100 million units sold. Total lifetime sales of the Wii reached over 101 million units, making it Nintendo's best-selling home console until it was surpassed by the Nintendo Switch in 2021. As of 2022, the Wii is the fifth-best-selling home console of all time.
The Wii repositioned Nintendo as a key player in the video game console marketplace. The introduction of motion-controlled games via the Wii Remote led both Microsoft and Sony to develop their own competing products—the Kinect and PlayStation Move, respectively. Nintendo found that, while the Wii had broadened the demographics it reached as intended, the core gamer audience had shunned the console. The Wii's successor, the Wii U, sought to recapture the core gamer market with additional features atop the Wii. The Wii U was released in 2012, and Nintendo continued to sell both units through the following year. The Wii was formally discontinued in October 2013, though Nintendo continued to produce and market the Wii Mini through 2017, and offered a subset of the Wii's online services through 2019.
## History
### 2001–2003: Development
Shortly after the release of the GameCube, Nintendo began conceptualizing their next console. The company's game designer Shigeru Miyamoto said that, in the early stages, they decided they would not aim to compete on hardware power, and would instead prioritize new gameplay concepts. Miyamoto cited Dance Dance Revolution's unique game controllers as inspiration for developing new input devices. On September 24, 2001, Nintendo began working with Gyration Inc., a firm that had developed several patents related to motion detection, to prototype future controllers using their licensed patents.
Over the next two years, sales of the GameCube languished behind its competitors—Sony's PlayStation 2 and Microsoft's Xbox. Satoru Iwata, who had been promoted to Nintendo's president in May 2002 following Hiroshi Yamauchi's retirement, recognized that Nintendo had not been keeping up with trends in the video game industry, such as adapting to online gaming. He also thought that video gaming had become too exclusive and wanted Nintendo to pursue gaming hardware and software that would appeal to all demographics. Nintendo's market analysis found that their focus on novel hardware had created consoles that third-party developers found difficult to create games for, hampering Nintendo's position. One of the first major steps Iwata took based on the company's research was directing the development of the Nintendo DS, a handheld incorporating dual screens including a touchscreen, to revitalize their handheld console line.
In 2003, Iwata met with Miyamoto and Genyo Takeda to discuss their market research. Iwata instructed Takeda "to go off the tech roadmap" for this console, but said it had to be appealing to mothers. Iwata wanted their next console to be capable of playing past Nintendo games, eliminating clutter in houses. Takeda led the team building the console's hardware components, and Miyamoto spearheaded the development of a new type of controller, based on Gyration's motion-sensing technology. Iwata had proposed that this new console use motion sensing to simplify the gaming interface, increasing appeal to all audiences. An initial prototype was completed within six months.
The Nintendo DS was said to have influenced the Wii's design, as the company found that the DS's novel two-screen interface had drawn in non-traditional players and wanted to replicate that on the new console. Designer Ken'ichiro Ashida noted, "We had the DS on our minds as we worked on the Wii. We thought about copying the DS's touch-panel interface and even came up with a prototype." The idea was eventually rejected because of the notion that the two gaming systems would be identical. Miyamoto also stated, "if the DS had flopped, we might have taken the Wii back to the drawing board."
### 2004–2005: Announcements
Prior to E3 2004, Iwata had referred to Nintendo's upcoming console offering as the GameCube Next (GCNext or GCN). Iwata first unveiled some details of Nintendo's new home console at E3 2004 under the codename "Revolution", as Iwata believed the console would revolutionize the gaming industry. BBC News' technology editor Alfred Hermida wrote that Nintendo's struggle to match Sony and Microsoft in the home console market made success crucial.
The console, still named "Revolution", was formally presented to the public at E3 2005. The motion controller interface had not yet been completed and was omitted from the unveiling. Iwata held the console above him with one hand to emphasize its size relative to its rivals. A smaller device meant it would draw less power and be less prone to overheating, and thus appealed to parents willing to have an attractive, small, power-efficient device in the living room. Iwata reportedly used a stack of three DVD cases as a size guide. The prototype held by Iwata was black, but at release the following year, the console was only available in white. In their book on the console, two Loyola University Chicago professors suggested that Nintendo was inviting comparisons with Apple's first iPod line.
Iwata later unveiled and demonstrated their current prototype of the Revolution controller at the Tokyo Game Show in September 2005. At this stage, the controller unit resembled the final Wii Remote device along with the separate Nunchuk attachment. Iwata demonstrated its motion sensing gameplay capabilities, and incorporated commentary from developers, such as Hideo Kojima and Yuji Horii, who had tested the controller and believed people would be drawn in by it.
The console's name was formally announced as the Wii in April 2006, a month prior to E3 2006. Nintendo's spelling of "Wii" (with two lower-case "i" characters) was intended to represent both two people standing side by side, and the Wii Remote and its Nunchuk. In the company's announcement, they stated: "Wii sounds like 'we', which emphasizes that the console is for everyone. Wii can easily be remembered by people around the world, no matter what language they speak. No confusion."
The name resulted in criticism and mockery. Forbes expressed a fear that the console would be seen as juvenile. BBC News reported the day after the name was announced that "a long list of puerile jokes, based on the name," had appeared on the Internet. Some video game developers and members of the press stated that they preferred "Revolution" over "Wii". Nintendo of America's Vice President of Corporate Affairs Perrin Kaplan defended the choice. President of Nintendo of America Reggie Fils-Aimé justified the new name over Revolution by saying that they wanted something short, distinctive, and easily pronounceable for all cultures.
The Wii was made available for a press demonstration at E3 2006. Planned launch titles were announced at a press conference alongside the unveiling. At the same conference, Nintendo confirmed its plans to release the console by the end of 2006.
### 2006–2010: Launch
Nintendo announced the launch plans and prices for the Wii in September 2006. The console first launched in the United States on November 19, 2006. Other regional release dates included Japan on December 2, Australasia on December 7, and the United Kingdom and most of the rest of Europe on December 8. Nintendo planned to have around 30 Wii games available by the end of 2006, and anticipated shipping over 4 million consoles before the end of the year.
As part of its launch campaign, Nintendo promoted the Wii in North America through a series of television advertisements (directed by Academy Award winner Stephen Gaghan); its Internet ads used the slogans "Wii would like to play" and "Experience a new way to play". The ads began in November 2006 with a substantial budget for the year. They targeted a wider demographic compared to ads for other consoles, inviting parents and grandparents to play on the Wii. Nintendo hoped that its console would appeal to a wider demographic than that of others in the seventh generation. In December 2006, Satoru Iwata said that Nintendo did not think of themselves as "fighting Sony", but were focused on how they could expand the gaming demographic.
It took several years for the Wii to launch in other regions. It was released in South Korea on April 26, 2008, Taiwan on July 12, 2008, and Hong Kong on December 12, 2009. Nintendo had planned to work with its localization partner iQue to release the Wii in China in 2008, but failed to meet the requirements to circumvent the ban on foreign-made consoles the Chinese government had put in place.
### 2011–2017: Successor and discontinuation
Nintendo announced the successor to the Wii, the Wii U, at E3 2011. Nintendo had recognized that the Wii had generally been shunned by the core gaming audience, as it was perceived more as a casual gaming experience. The Wii U aimed to draw the core audience back with more advanced features atop the basic Wii technology. The Wii U features the Wii U GamePad, a controller with an embedded touchscreen that serves as a secondary screen alongside the television, and the console outputs 1080p high-definition graphics. The Wii U is fully backward-compatible with Wii games and peripherals for the Wii, including the Wii Remote, Nunchuk controller and Wii Balance Board, and select Wii U games include support for these devices. The Wii U was first released on November 18, 2012, in North America; November 30, 2012, in Europe and Australia; and December 8, 2012, in Japan.
Nintendo continued to sell the revised Wii model and the Wii Mini alongside the Wii U during the Wii U's first release year. During 2013, Nintendo began to sunset certain Wii online functions as it pushed consumers towards the Wii U as a replacement system or towards the offline Wii Mini, though the Wii Shop Channel remained available. Nintendo discontinued production of the Wii in October 2013 after selling over 100 million units worldwide, though the company continued to produce the Wii Mini unit primarily for the North American market. The WiiConnect24 service and several channels based on that service were shuttered in June 2013. Support for online multiplayer games via the Nintendo Wi-Fi Connection was discontinued in May 2014, while the Wii Shop was closed in January 2019, effectively ending all online services for the console. The Wii Mini continued to be manufactured and sold until 2017.
Despite the Wii's discontinuation, some developers continued to produce Wii games well beyond 2013. Ubisoft released Just Dance games for the Wii up to Just Dance 2020 (2019). Vblank Entertainment's Shakedown: Hawaii and Retro City Rampage DX, both released on July 9, 2020 (more than 13 years after the Wii's launch), are the most recent Wii games. On January 27, 2020, Nintendo announced that it would no longer repair faulty Wii consoles in Japan from February 6, owing to a scarcity of spare parts.
## Hardware
### Console
In building the Wii, Nintendo did not aim to outpace the performance of their competitors. Unlike the company's previous consoles, they built the Wii from commercial off-the-shelf hardware rather than seek out customized components. This helped to reduce the cost of the Wii system to consumers. Miyamoto said "Originally, I wanted a machine that would cost \$100. My idea was to spend nothing on the console technology so all the money could be spent on improving the interface and software."
The console's central processing unit is a 32-bit IBM PowerPC-based processor named Broadway, with a clock frequency of 729 MHz. The reduced size of Broadway—based on a 90 nm process compared to the 180 nm process used in the GameCube's CPU—resulted in 20% lower power consumption. The Wii's GPU is a system on a chip produced by ATI and named Hollywood; its core runs at 243 MHz and incorporates 3 MB of texture memory, digital signal processors, and input/output functions. The GPU package also includes 24 MB of 1T-SRAM, and a further 64 MB of 1T-SRAM sits on the motherboard, giving the console 88 MB of memory in total. The Wii's computational power was roughly 1.5 to 2 times that of the GameCube, but it was the least powerful of the major home consoles of its generation.
The Wii's motherboard has a WiFi adapter which supports IEEE 802.11 b/g modes, and a Bluetooth antenna that communicates with its controllers. A USB-based LAN adapter can connect the Wii to a wired Ethernet network.
The Wii reads games from an optical media drive located in the front of the device. The drive is capable of reading Nintendo's proprietary discs, the 12 cm Wii discs and 8 cm GameCube discs, but cannot read other common optical media—namely, DVD-Video, DVD-Audio or compact discs. Although Nintendo had planned to incorporate this capability into later revisions of the Wii, demand for the console delayed the schedule until interest in the feature had waned. The slot of the optical drive is backed by LED lights that show the system's status. For example, it pulses blue when the system is communicating with the WiiConnect24 service or when reading a newly inserted disc.
The Wii includes 512 MB of internal flash memory for storing saved games and downloaded content from the Wii channels. Users could expand their storage for downloaded games and saved games, as well as provide photos and music that could be used with some games and Wii channels, through SD cards (and later SDHC cards) inserted into an external slot on the console located under a front panel. A later system update added the ability to launch Wii channels and play Virtual Console and WiiWare games directly from SD cards.
The rear of the console features the unit's video output and power connections along with two USB ports. The top of the console, when placed vertically, includes a panel that includes four ports for GameCube controllers and two GameCube memory cards.
The Wii was Nintendo's smallest home console at the time (the current smallest is the hybrid home-portable Nintendo Switch in its portable configuration); it measures 44 mm (1.73 in) wide, 157 mm (6.18 in) tall and 215.4 mm (8.48 in) deep in its vertical orientation, slightly larger than three DVD cases stacked together. The included stand measures 55.4 mm (2.18 in) wide, 44 mm (1.73 in) tall and 225.6 mm (8.88 in) deep. The system weighs 1.2 kg (2.7 lb), making it the lightest of the three major seventh-generation consoles. The Wii may stand horizontally or vertically.
### Wii Remote
The Wii Remote is the primary controller for the console. The remote contains a MEMS-based three-dimensional accelerometer, along with infrared detection sensors located at the far end of the controller. The accelerometers allow the Wii Remote to recognize its orientation after being moved from a resting position, translating that motion into gesture recognition for a game. For example, the pack-in game Wii Sports includes a ten-pin bowling game in which the player holds the Wii Remote and performs a delivery of a ball; the Wii Remote accounts for the player's position relative to the Sensor Bar, and for their arm and wrist rotation, to apply speed and spin to the virtual ball's delivery on screen. The infrared detectors track emissions from LEDs in the included Sensor Bar, which is placed just above or below the television display, so as to track the relative orientation of the Wii Remote towards the screen. This gives the Wii Remote the ability to act as a pointing device, like a computer mouse, on the television screen, with an approximate 15 feet (4.6 m) range for accurate detection. In addition, the Wii Remote features traditional controller inputs, including a directional pad (d-pad), three face action buttons, a shoulder trigger, and four system-related buttons including a power button. The Wii Remote connects to the Wii through Bluetooth with an approximate 30 feet (9.1 m) range, communicating the sensor and control information to the console unit. The Wii Remote includes an internal speaker and a rumble pack that can be triggered by a game to provide feedback directly to the player's hand. Up to four Wii Remotes can connect wirelessly to a Wii, with LED lights on each remote indicating which controller number it has connected as. The remote is battery-operated, and when it is not powered on, these LED lights can display the remaining battery power.
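The pointing behaviour described above can be illustrated with a much-simplified sketch. All names, the camera resolution handling, and the axis conventions here are illustrative assumptions rather than Nintendo's actual firmware: the idea is only that the Remote's IR camera reports the pixel positions of the Sensor Bar's two LED clusters, and the inverted, scaled midpoint of those two blobs yields a cursor position on the television screen.

```python
# Hypothetical sketch of Wii Remote-style pointing (not Nintendo's code):
# the IR camera reports pixel coordinates of the Sensor Bar's two LED
# clusters; the mirrored midpoint of the two blobs maps to a screen cursor.

CAM_W, CAM_H = 1024, 768  # IR camera resolution commonly reported for the Remote


def pointer_position(led_a, led_b, screen_w, screen_h):
    """Map two IR blob positions (camera pixels) to screen coordinates."""
    mid_x = (led_a[0] + led_b[0]) / 2
    mid_y = (led_a[1] + led_b[1]) / 2
    # The camera sees the scene mirrored relative to the player's view,
    # so invert the x axis before scaling to the screen dimensions.
    sx = (1 - mid_x / CAM_W) * screen_w
    sy = (mid_y / CAM_H) * screen_h
    return sx, sy
```

With both blobs centred in the camera frame, the cursor lands at the centre of a 1920×1080 screen; as the Remote pans left, the blobs drift right in the camera image and the cursor moves left, which is the mirroring the sketch accounts for.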
A wrist-mounted strap is included with the Wii Remote, with one end affixed to the bottom of the unit. Nintendo strongly encouraged players to use the strap in case the Wii Remote accidentally slipped out of their hands. Nintendo recalled the original straps in December 2006 and provided a free, stronger strap as a replacement, as well as packaging the new strap in future bundles, after the company faced legal challenges from users who reported damage to their homes from the Wii Remote slipping from their hands while playing. In October 2007, Nintendo also added a silicon-based Wii Remote Jacket to shipments of the Wii and Wii Remote, as well as a free offering for existing users. The Jacket wraps around the bulk of the remote but leaves access to the various buttons and connectors, providing a stickier surface in the user's grip to further reduce the chance of the Remote falling out of the player's hand.
Accessories can be connected to a Wii Remote through a proprietary port at the base of the controller. The Wii shipped with the bundled Nunchuk—a handheld unit with an accelerometer, analog stick, and two trigger buttons—which connected to this port on the Wii Remote via a 4 feet (1.2 m) cable. Players hold both the Wii Remote and Nunchuk in separate hands to control supported games.
The Wii MotionPlus accessory plugs into the port at the base of the Wii Remote and augments the existing sensors with gyroscopes to allow for finer motion detection. The MotionPlus accessory was released in June 2009 with a number of new games directly supporting this new functionality, including Wii Sports Resort, which included the accessory as part of a bundle. The MotionPlus functionality was later incorporated into a revision of the controller called the Wii Remote Plus, first released in October 2010.
A number of third-party controller manufacturers developed their own lower-cost versions of the Wii Remote, though these generally were less accurate or lacked the sensitivity that Nintendo's unit had.
### Other controllers and accessories
The Classic Controller is an extension for the Wii Remote, released alongside the Wii in November 2006. Its form factor is similar to classic gamepads such as that for the Nintendo 64, with a d-pad, four face buttons, Start and Select buttons alongside the Wii connection button, and two shoulder buttons. Players can use it with older games from the Virtual Console in addition to games designed for the Wii. In 2009, Nintendo released the Wii Classic Controller Pro, which was modelled after the GameCube's form factor and included two analog sticks.
The Wii Balance Board was released alongside Wii Fit in December 2007. It is a wireless balance board accessory for the Wii, with multiple pressure sensors used to measure the user's center of balance. Wii Fit offers a number of different exercise modes that monitor the player's position on the board, as well as exercise gamification, so as to encourage players to exercise daily. In addition to its use in Nintendo's Wii Fit Plus, which expanded the range of exercises using the Wii Balance Board, the accessory can be used in third-party games that translate the player's balance on the unit into in-game controls, such as Shaun White Snowboarding and Skate It. Namco Bandai produced a mat controller (a simpler, less sophisticated competitor to the Balance Board).
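As a rough illustration of how a board of this kind derives a center of balance, the sketch below computes a center of pressure from four corner load cells as a weighted offset from the board's center. This is a hypothetical reconstruction, not Nintendo's algorithm; the function name, sensor layout, and board dimensions are assumptions for the example.

```python
# Hypothetical center-of-pressure sketch for a four-sensor balance board.
# Sensor readings (e.g. in kilograms): top-left, top-right, bottom-left,
# bottom-right corners of the board's top surface.

BOARD_W, BOARD_L = 43.3, 23.8  # assumed usable surface dimensions in cm


def center_of_pressure(tl, tr, bl, br):
    """Return (x, y) offset in cm from the board's center.

    +x points toward the right edge, +y toward the top edge.
    """
    total = tl + tr + bl + br
    if total == 0:
        return 0.0, 0.0  # nobody on the board
    # Each axis is the difference between opposing sensor pairs,
    # normalized by total load and scaled to half the board size.
    x = (BOARD_W / 2) * ((tr + br) - (tl + bl)) / total
    y = (BOARD_L / 2) * ((tl + tr) - (bl + br)) / total
    return x, y
```

Equal readings on all four cells put the center of pressure at the board's center, while shifting all weight onto the right-hand sensors pushes it to the right edge; a game can then map that offset to leaning in a snowboarding or balance exercise.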
One of Iwata's initiatives at Nintendo focused on "quality of life" products: those that encouraged players to do activities beyond simply sitting and playing video games, so as to promote physical wellbeing. The use of motion controls in the Wii served part of this, but Nintendo developed additional accessories to give awareness of one's health as a lead-in for the company to break into the health care field. At E3 2009, Nintendo presented a "Vitality Sensor" accessory that would measure a player's pulse as a lead-in to a larger quality of life initiative, but this product was never released. In a 2013 Q&A, Satoru Iwata revealed that the Vitality Sensor had been shelved, as internal testing found that the device did not work with all users, and its use cases were too narrow. Despite this, Nintendo has continued Iwata's quality of life program with further products on later consoles and games.
A number of first- and third-party accessories were developed into which the Wii Remote could be slotted, allowing it to be used in a more physical manner that took advantage of the accelerometer and gyroscopic functions of the controller. Some copies of Mario Kart Wii shipped with the Wii Wheel, a plastic steering wheel frame into which the Wii Remote could be inserted so that players could steer more effectively in game. Rhythm games that used plastic instruments, such as Guitar Hero III, shipped with instruments into which the Wii Remote could be slotted; the instrument's buttons registered through the Remote, which relayed the input to the Wii.
### Variants and bundles
The Wii launch bundle included the console; a stand to allow the console to be placed vertically; a plastic stabilizer for the main stand, one Wii Remote, a Nunchuk attachment for the Remote, a Sensor Bar and a removable stand for the bar to mount on a television set, an external power adapter, and two AA batteries for the Wii Remote. The bundle included a composite A/V cable with RCA connectors, and in appropriate regions such as in Europe, a SCART adapter was also included. A copy of the game Wii Sports was included in most regional bundles.
Although Nintendo showed the console and the Wii Remote in white, black, silver, lime-green and red before it was released, it was only available in white for its first two-and-a-half years of sales. Black consoles became available in Japan in August 2009, in Europe in November 2009 and in North America in May 2010. A red Wii system bundle was released in Japan on November 11, 2010, commemorating the 25th anniversary of Super Mario Bros. The European version of the limited-edition red Wii bundle was released on October 29, 2010, and included the original Donkey Kong game pre-installed on the console, New Super Mario Bros. Wii and Wii Sports. The red Wii bundle was released in North America on November 7, 2010, with New Super Mario Bros. Wii and Wii Sports. All of the red Wii system bundles featured the Wii Remote Plus, with integrated Wii MotionPlus technology.
### Revisions
The prefix for the numbering scheme of the Wii system and its parts and accessories is "RVL-" for its codename, "Revolution". The base Wii console had a model number of RVL-001, for example.
#### Redesigned model
A cost-reduced variant of the Wii (model RVL-101) was released late in the platform's lifespan; it removed all GameCube functionality, including the GameCube controller ports and memory card slots found on the original model. This model is sometimes incorrectly referred to as the "Wii Family Edition", the name given to the bundle in which it was first sold in Europe. It also does not include a stand, as it is intended to be positioned horizontally. Nintendo announced the revision in August 2011 as a replacement for the original Wii model, which it was discontinuing in certain regions including Europe and the United States. The new unit in its bundles was priced at , a further reduction from the Wii's MSRP established in September 2009.
The console was first released in North America on October 23, 2011, in a black finish, bundled with a black Wii Remote Plus and Nunchuk, along with New Super Mario Bros. Wii and a limited-edition soundtrack for Super Mario Galaxy. It was released in Europe on November 4, 2011, in a white finish, bundled with a white Wii Remote Plus and Nunchuk, along with Wii Party and Wii Sports. A special bundle featuring a blue version of the revised Wii model and Wii Remote Plus and Nunchuk with the inclusion of Mario & Sonic at the London 2012 Olympic Games was released in Europe on November 18, 2011, in collaboration with Sega. Nintendo later revised the North American bundle by replacing the prior pack-in game and soundtrack with the original Wii Sports duology; the new bundle was released on October 28, 2012.
#### Wii Mini
The Wii Mini (model RVL-201) is a smaller, redesigned Wii with a top-loading disc drive. In addition to the lack of GameCube support, the Wii Mini removes Wi-Fi support and online connectivity, as well as the SD card slot. It also removes support for 480p and component video output. According to Nintendo of Canada's Matt Ryan, these features were stripped to bring the price of the console down further, making it an option for consumers who had not yet bought a Wii or who wanted a second Wii in a different location. Ryan stated that while removing the online functionality would prevent some games from being played, most Wii games could still be played without it. The Wii Mini is styled in matte black with a red border, and includes a red Wii Remote Plus and Nunchuk. According to Ryan, the red coloring reflected the planned exclusive release in Canada. A composite A/V cable, wired Sensor Bar and power adapter are also included.
The Wii Mini was first released on December 7, 2012, exclusively in Canada with a MSRP of . It was later released in Europe on March 22, 2013, and in the United States on November 17, 2013. The Canadian and European releases did not include a game, while Mario Kart Wii was included in all launch bundles in the United States. Nintendo added several best-selling and critically acclaimed Wii games to its Nintendo Selects label and marketed them alongside the Wii Mini's release.
## Software
The console has many internal features made available by its hardware and firmware components. The hardware allows for extensibility via expansion ports, while the firmware (and some software) could receive periodic updates via the WiiConnect24 service.
### Wii Menu
The development of the Wii Menu, the main user interface for the Wii, was led by Takashi Aoyama of Nintendo's Integrated Research & Development Division. The project, named the "Console Feature Realization Project", was to determine what the Wii interface could show while running around the clock in a low-power mode that would interest people even when they were not playing games. Testing showed that continually updated weather and news reports made sense, which led to the idea of presenting them like a row of televisions in an electronics shop, each set to a different channel, creating the "channels" concept. A user can navigate to any channel window to bring it to the forefront, whether to launch the game or application or to get more information about what was being displayed. For example, the Forecast Channel would display a brief summary of the local area's temperature and short-term weather forecast, while clicking on the channel brought up an interactive globe that the user could manipulate with the Wii Remote to explore real-time weather conditions across the Earth.
The Wii launched with six channels: the Disc Channel which was used to launch Wii and GameCube titles from an optical disc; the Mii Channel to create Mii avatars; the Photo Channel which could be used to view and edit photos stored on an SD card; the Wii Shop Channel to purchase new games and applications; the Forecast Channel and the News Channel. In addition to default channels that came with the Wii, new channels could be added through system updates, downloaded applications from the Wii Shop Channel, or added by games themselves. Shortly after launch, other free channels created by Nintendo were made available to users, including the Internet Channel, a modified version of the Opera web browser for the Wii which supports USB keyboard input and Adobe Flash Player.
The Wii Menu channels feature music composed by video game composer Kazumi Totaka.
### Mii
The Wii introduced player-customized avatars called Miis, which Nintendo has continued to use on the Wii U, the Nintendo 3DS family, and, to a lesser extent, the Nintendo Switch. Each player on a Wii console was encouraged to create their own Mii via the Mii Channel; Miis served as in-game avatars in titles like Wii Sports and were used by some of the system software. Miis could be shared with other players through the Mii Channel.
### Nintendo DS connectivity
The Wii system supports wireless connectivity with the Nintendo DS without any additional accessories. This connectivity allows the player to use the Nintendo DS microphone and touchscreen as inputs for Wii games. The first game to use Nintendo DS-Wii connectivity was Pokémon Battle Revolution: players with the Pokémon Diamond or Pearl Nintendo DS games could play battles using the Nintendo DS as a controller. Final Fantasy Crystal Chronicles: Echoes of Time, released on both Nintendo DS and Wii, features connectivity in which both games can advance simultaneously. Nintendo later released the Nintendo Channel, which allows Wii owners to download game demos of popular games such as Mario Kart DS, or additional data, to their Nintendo DS in a process similar to that of a DS Download Station, allowing the console to expand Nintendo DS games.
### Online connectivity
The Wii console connects to the Internet through its built-in 802.11b/g Wi-Fi or through a USB-to-Ethernet adapter; either method allows players to access the Nintendo Wi-Fi Connection service. The service has several features for the console, including Virtual Console, WiiConnect24, the Internet Channel, the Forecast Channel, the Everybody Votes Channel, the News Channel and the Check Mii Out Channel. The Wii can also communicate with other Wii systems through a self-generated wireless LAN, enabling local wireless multiplayer on different television sets. Battalion Wars 2 first demonstrated this feature for non-split-screen multiplayer between two or more televisions.
### Third-party applications
Third-party media apps were added to the Wii's online channels, typically offered as free downloads but requiring subscriber logins for paid services. These included the BBC iPlayer in November 2009, Netflix in November 2010, Hulu in February 2012, YouTube in December 2012, Amazon Prime Video in January 2013, and Crunchyroll in October 2015. In June 2017, YouTube ended support for its Wii channel, and in January 2019 Nintendo ended support for all streaming services on the Wii.
### Parental controls
The console features parental controls, which can be used to prohibit younger users from playing games with content unsuitable for their age level. When one attempts to play a Wii or Virtual Console game, it reads the content rating encoded in the game data; if this rating is greater than the system's set age level, the game will not load without a password. Parental controls may also restrict Internet access, which blocks the Internet Channel and system-update features. Since the console is restricted to GameCube functionality when playing GameCube Game Discs, GameCube software is unaffected by Wii parental-control settings.
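The gate described above amounts to a simple comparison of a game's encoded content rating against the configured age level, with a password override. The sketch below is purely illustrative; the function and parameter names are assumptions for explanation, not Nintendo's actual firmware code.

```python
# Illustrative sketch of the parental-control check described above.
# All names here are hypothetical, not taken from the Wii's firmware.

def can_launch(game_rating, system_age_limit, password_entered=False):
    """Return True if the game may load under the current settings."""
    if system_age_limit is None:          # parental controls disabled
        return True
    if game_rating <= system_age_limit:   # rating within the allowed level
        return True
    return password_entered               # over the limit: password required

# A rating above the configured level blocks the game without a password.
print(can_launch(game_rating=18, system_age_limit=12))                         # False
print(can_launch(game_rating=18, system_age_limit=12, password_entered=True))  # True
```

The same settings object would also gate the Internet Channel and system updates, while GameCube discs bypass the check entirely because the console switches to GameCube-only functionality.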
The Wii also records cumulative playtime for every game or app used on the system. While Nintendo decided against a profile system that would require each user to identify themselves, it kept the playtime log, which cannot be erased or altered, to give parents a means to review their children's use of the Wii.
## Games
Retail copies of games are supplied on proprietary, DVD-type Wii optical discs, which are packaged in keep cases with instructions. In Europe, the boxes have a triangle at the bottom corner of the paper sleeve-insert side. The triangle is color-coded to identify the region for which the title is intended and which manual languages are included. The console supports regional lockout: software available in a region can only be played on that region's hardware.
Twenty-one games were announced for launch day in North and South America, with another twelve announced for release later in 2006. The system's launch titles in all regions included Wii Sports, which was bundled with all Wii packages except in Japan and South Korea, The Legend of Zelda: Twilight Princess, Sega's Super Monkey Ball: Banana Blitz, and Ubisoft's Red Steel. Metroid Prime 3: Corruption had been slated as a Wii launch title, but was pushed into 2007 a few months before the Wii's launch. Nintendo had also planned to release Super Smash Bros. Brawl as a launch title, but its director Masahiro Sakurai stated that difficulties adapting the series' format to the Wii's motion controls required more time for the game's development.
New Wii games included those from Nintendo's flagship franchises such as The Legend of Zelda, Super Mario, Pokémon, and Metroid. Nintendo has received third-party support from companies such as Ubisoft, Sega, Square Enix, Activision Blizzard, Electronic Arts, and Capcom, with more games being developed for Wii than for the PlayStation 3 or Xbox 360. Nintendo also launched the New Play Control! line, a selection of enhanced ports of first-party GameCube games that have been updated to capitalize on the Wii's motion controls.
### Backward compatibility
The original launch Wii consoles are backward-compatible with all GameCube software, memory cards, and controllers, although Korean Wii consoles lack backward compatibility entirely. Software compatibility is achieved by the slot-loading drive's ability to accept GameCube discs. A Wii console running a GameCube disc is restricted to GameCube functionality, and a GameCube controller is required to play GameCube titles. A GameCube memory card is also necessary to save game progress and content, since the Wii internal flash memory will not save GameCube games. Backward compatibility is limited in some areas: online and LAN-enabled features for GameCube titles are unavailable on the Wii, since the console lacks serial ports for the GameCube Broadband Adapter and Modem Adapter. The revised Wii model and the Wii Mini lack the GameCube backward compatibility features.
### Virtual Console
The Virtual Console service allowed Wii owners to play games originally released for Nintendo's older consoles, including the Nintendo Entertainment System, Super Nintendo Entertainment System, and the Nintendo 64. Later updates included games from third-party consoles and computers, including the Sega Genesis/Mega Drive and Sega Mark III/Master System, NEC TurboGrafx-16/PC Engine, SNK Neo Geo, the Commodore 64 computer, the MSX computer (only in Japan), and various arcade games through Virtual Console Arcade. Virtual Console games were distributed over broadband Internet via the Wii Shop Channel and were saved to the Wii internal flash memory or to a removable SD card. Once downloaded, Virtual Console games can be accessed from the Wii Menu as individual channels or from an SD card via the SD Card Menu.
### WiiWare
WiiWare was Nintendo's foray into digital distribution on the Wii, comparable to the existing Xbox Live Arcade and PlayStation Network. The service allowed players to purchase games digitally through the Wii Shop Channel and download them to the console's storage to be run from there. Besides facilitating this form of distribution, WiiWare was also envisioned as support for smaller and independent game developers, offering these teams a less expensive route to producing Wii games without going through retail production and distribution channels. The WiiWare channel launched on March 25, 2008, and remained active through the Wii U's lifetime, until the Wii Shop Channel was discontinued in 2019.
## Reception
### Critical reviews
The system was well received after its exhibition at E3 2006, winning the Game Critics Awards for Best of Show and Best Hardware. Later in December, Popular Science named the console a Grand Award Winner in home entertainment. The console proceeded to win multiple awards: Spike TV's Video Game Award, a Golden Joystick from the Golden Joystick Awards, and an Emmy Award for game controller innovation from the National Academy of Television Arts and Sciences. IGN and The Guardian each ranked the Wii 10th on their lists of the 25 greatest video game consoles of all time, and GameSpot chose the console as having the best hardware in its "Best and Worst 2006" awards.
The Wii was praised for its simple yet responsive controls, as well as a simplicity that appealed to broader audiences. Although Dan Grabham of TechRadar enjoyed its simple mechanics, stating that "even grandparents can pick things up pretty quickly", he also enjoyed the depth of content carried over from the GameCube. CNET praised the "no-brainer" setup and the easy-to-navigate home screen. Will Wright, the creator of The Sims, called the Wii "the only next gen system I've seen", considering the PS3 and the Xbox 360 simply successors offering "incremental improvement". He believed the Wii's appeal went beyond improved graphics, complimenting how it "hits a completely different demographic". Reviewers were fond of the compact design, with Ars Technica comparing it to an Apple product.
By 2008, two years after the Wii's release, Nintendo acknowledged several limitations and challenges with the system (such as the perception that the system catered primarily to a "casual" audience and was unpopular among hardcore gamers). Miyamoto admitted that the lack of support for high-definition video output on the Wii and its limited network infrastructure also contributed to the system being regarded separately from its competitors' systems, the Xbox 360 and PlayStation 3. Miyamoto originally defended Nintendo's decision to not include HD graphics in the Wii, stating that the number of HDTVs in people's homes at the time was "really not that high, yet. Of course I think five years down the road it would be pretty much a given that Nintendo would create an HD system, but right now the predominant television set in the world is a non-HD set." In 2013, Miyamoto said in an interview with Japanese video game website 4Gamer that "Even for the Wii, no matter how much it made the system cost, it would have been great if it were HD in the first place."
At the same time, criticism of the Wii Remote and Wii hardware specifications had surfaced. Former GameSpot editor and Giantbomb.com founder Jeff Gerstmann stated that the controller's speaker produces low-quality sound, while Factor 5 co-founder Julian Eggebrecht stated that the console has inferior audio capabilities and graphics. UK-based developer Free Radical Design stated that the Wii hardware lacks the power necessary to run the software it scheduled for release on other seventh-generation consoles. Online connectivity of the Wii was also criticized; Matt Casamassina of IGN compared it to the "entirely unintuitive" service provided for the Nintendo DS.
Although the Wii Mini was met with praise for its low price, considering it was bundled with a Wii Remote, Nunchuk and a copy of Mario Kart Wii, it was considered inferior to the original console. Critics were disappointed by the lack of online play and of backward compatibility with GameCube games, and believed the hardware was still rather large despite being about half the size of the Wii; Eurogamer's Richard Leadbetter thought the Wii Mini was not any more "living room friendly", as he believed the "bright red plastics make it stand out much more than the more neutral blacks and whites of existing model's casing." He stated that the overall design was rough in texture, and seemed to have been built with emphasis on durability. Nintendo Life reviewer Damien McFerran said that the lightweight design of the Wii Mini makes it feel "a little cheaper and less dependable", with empty space inside the shell. CNET criticized the pop-open disc lid as "cheap-feeling".
### Third-party development
The Wii's success caught third-party developers by surprise; combined with the hardware's distinct limitations, this led to apologies for the quality of their early games. In an interview with Der Spiegel, Ubisoft's Yves Guillemot and Alain Corre admitted that they had made a mistake in rushing out their launch titles, promising to take future projects more seriously. An executive for Frontline Studios stated that major publishers were wary of releasing exclusive titles for the Wii, due to the perception that third-party companies were not strongly supported by consumers. 1UP.com editor Jeremy Parish stated that Nintendo was his biggest disappointment of 2007; commenting on the lack of quality third-party support, he said the content was worse than on the Wii's predecessors, amounting to "bargain-bin trash".
The lack of third-party support also stemmed from the success of Nintendo's own first-party games, which other developers struggled to compete against. Developers such as Rod Cousens, CEO of Codemasters, were concerned by slow sales on the Wii. The Nikkei Business Daily, a Japanese newspaper, claimed that companies were too nervous to start or continue making games for the console, with some considering the Wii a fad whose popularity would eventually die down. Nintendo attributed its own advantage to knowing "the Wii's special characteristics better than anyone" and to having begun developing games for the console long before its release, giving it a head start.
Due to struggling sales during 2010, developers began pursuing alternative options. Capcom took note of the difficulty of making money on the Wii and shifted to making fewer, higher-quality games. According to Sony, many third-party developers originally making games for the Wii began focusing more of their attention on the PlayStation 3 and Xbox 360.
### Sales
Initial consumer reaction to the Wii appeared positive, with commentators judging the launch a success. The launch of the Wii in November 2006 was considered the largest console launch by Nintendo in the Americas, Japan, Europe and Australia. The console outsold the combined sales of the PlayStation 3 and Xbox 360 in several regions in its launch period. The Wii remained in short supply through the first year. The company had already shipped nearly 3.2 million units worldwide by the end of 2006, and worked to raise production to 17.5 million units through 2007, but warned consumers that there would be shortages of the Wii through that year. Wii sales surpassed Xbox 360 sales by September 2007. To meet further demand, Nintendo increased production rates of the Wii from 1.6 million to around 2.4 million units per month in 2008, planning to meet the continued demand for the console.
At the March 2009 Game Developers Conference, Iwata reported that the Wii had reached 50 million sales. Nintendo announced its first price reduction for the console in September 2009, dropping the MSRP from to . The price cut came days after both Sony and Microsoft announced similar cuts for the PlayStation 3 and Xbox 360. Nintendo stated that the reduction was intended to draw in consumers who were still cautious about buying a video game console. The Wii became the best-selling home video game console produced by Nintendo during 2009, with sales exceeding 67 million units.
In 2010, sales of the Wii began to decline, falling by 21 percent from the previous year. The drop in sales was attributed to a combination of the introduction of the PlayStation Move and Kinect motion control systems on the PlayStation 3 and Xbox 360, and the waning fad of the Wii system. Wii sales also weakened into 2011 as third-party support for the console waned; major publishers were passing over the Wii, which was underpowered and used non-standard development tools, and instead focused on games for the PlayStation 3, Xbox 360 and personal computers. Publishers were also drawn away from the Wii by the promise of the more powerful Wii U in the near future. Wii sales continued to decline into 2012, falling by half from the previous year. After its release in Canada on December 7, 2012, the Wii Mini had sold 35,700 units by January 31, 2013.
The Wii surpassed 100 million units sold worldwide during the second quarter of 2013. The Wii had total lifetime sales of 101.63 million consoles worldwide as of March 31, 2016, the last reported data for the console by Nintendo. At least 48 million consoles were sold in North America, 12 million in Japan, and 40 million in all other regions. As of 2020, the Wii is the fifth-best-selling home console of all time, surpassed by the original PlayStation (102.4 million units), the PlayStation 4 (117.2 million units), the Nintendo Switch (125.62 million units), and the PlayStation 2 (159 million units). As of 2023, the Wii is Nintendo's second-best-selling home console, having been outsold by the Nintendo Switch at 125.62 million units.
As of September 30, 2021, nine Wii games had sold over ten million units globally, which included Wii Sports (82.90 million, including pack-in copies), Mario Kart Wii (37.38 million), Wii Sports Resort (33.14 million), New Super Mario Bros. Wii (30.32 million), Wii Play (28.02 million), Wii Fit (22.67 million), Wii Fit Plus (21.13 million), Super Smash Bros. Brawl (13.32 million), and Super Mario Galaxy (12.80 million). A total of 921.85 million titles had been sold for the Wii by June 30, 2022. The popularity of Wii Sports was considered part of the console's success, making it a killer app for the Wii as it drew those that typically did not play video games to the system.
### Legal issues
There were a number of legal challenges stemming from the Wii and Wii Remote, several of them patent claims from companies asserting that the Wii Remote infringed their patents. Most were either dismissed or settled out of court. One challenge came from iLife Technologies Inc., which sued Nintendo in 2013, claiming that the Wii Remote's motion detection technology infringed its patents. iLife initially won a judgment against Nintendo in 2017, but the verdict was overturned in 2020, with the appellate court ruling that iLife's patents were too broad to cover the specific motion detection technologies developed by Nintendo.
There were also lawsuits against Nintendo claiming physical damage caused by ineffective wrist straps on the Wii Remote, when the controller slipped out of players' hands and broke television screens or windows. The first class action suit, filed in December 2006, led Nintendo to recall the existing wrist straps and send out new versions with an improved securing mechanism. A second class action lawsuit was filed by a mother in Colorado in December 2008, claiming the updated wrist straps were still ineffective. This suit was dismissed by September 2010, with the court finding for Nintendo that the wrist straps were not knowingly faulty under Colorado consumer protection law.
## Legacy
### Impact on Nintendo
The Wii has been recognized as Nintendo's "blue ocean" strategy to differentiate itself from its competitors Sony and Microsoft for the next several years, and has since been seen as a prime example of an effective blue ocean approach. While Sony and Microsoft continued to innovate their consoles through hardware improvements providing more computational and graphics power, Nintendo put more effort towards developing hardware that facilitated new ways to play games. This was considered a key part of the console's success, measured by sales over its competitors during that console generation. However, Nintendo did not maintain the same "blue ocean" approach when designing the Wii U, by which point both Sony and Microsoft had caught up with features similar to the Wii's. These factors partially contributed to the weak sales of the Wii U.
Part of the Wii's success was attributed to its lower cost compared to the other consoles. While Microsoft and Sony experienced losses producing their consoles in the hopes of making a long-term profit on software sales, Nintendo reportedly optimized production costs to obtain a significant profit margin on each Wii unit sold. Soichiro Fukuda, a games analyst at Nikko Citigroup, estimated that in 2007, Nintendo's optimized production gave them a profit from each unit sold ranging from in Japan to in the United States and in Europe. The console's final price at launch of made it comparatively cheaper than the Xbox 360 (which had been available in two models priced at and ) and the then-upcoming PlayStation 3 (also to be available in two models priced at and ). Further, Nintendo's first-party games for the Wii were set at a retail price of , about less expensive than average games for Nintendo's competitors. Iwata stated they were able to keep the game price lower since the Wii was not as focused on high-resolution graphics as the other consoles, thus keeping development costs lower, averaging about per game compared to the required for developing on the Xbox 360 or PlayStation 3.
### Health effects
The Wii was marketed to promote a healthy lifestyle via physical activity. It has been used in physical rehabilitation, and its health effects have been studied for several conditions. The most studied uses of Wii for rehabilitation therapy are for stroke, cerebral palsy, Parkinson's disease, and for balance training. The potential for adverse effects from video game rehabilitation therapy (for example, from falls) has not been well studied as of 2016.
A study published in the British Medical Journal stated that Wii players use more energy than they do playing sedentary computer games, but Wii playing was not an adequate replacement for regular exercise. Some Wii players have experienced musculoskeletal injuries known as Wiiitis, Wii knee, Wii elbow (similar to tennis elbow) or nintendinitis from repetitive play; a small number of serious injuries have occurred, but injuries are infrequent and most are mild.
In May 2010, Nintendo gave the American Heart Association (AHA) a \$1.5 million gift; the AHA endorsed the Wii with its Healthy Check icon, covering the console and two of its more active games, Wii Fit Plus and Wii Sports Resort.
### Homebrew, hacking, and emulation
The Wii has become a popular target for homebrewing new functionality and video games since its discontinuation. For example, homebrew projects have been able to add DVD playback to unmodified Wii consoles. The Wii also can be hacked to enable an owner to use the console for activities unintended by the manufacturer. Several brands of modchips are available for the Wii.
The Wii Remote also became a popular unit to hack for other applications. As it connected through standard Bluetooth interfaces, programmers were able to reverse engineer the communications protocol and develop application programming interfaces for the Wii Remote for other operating systems, and subsequently games and applications that used the Wii Remote on alternate platforms. Further hacks at the hardware level, typically taking apart the Wii Remote and Sensor Bar and reconfiguring its components in other configurations, led to other applications such as remote hand and finger tracking, digital whiteboards, and head tracking for virtual reality headsets.
The Wii has been a popular system for emulation; while creating such emulators through a cleanroom-type approach has been determined to be legal, bringing the Wii system software and games to other systems is of questionable legality, and Nintendo has actively pursued legal action against those who distribute copies of its software. The open-source Dolphin project has successfully emulated the Wii and GameCube through several years of cleanroom effort.
### Music
Joe Skrebels of IGN has argued that the Wii's greatest and longest lasting legacy is that of the music composed by Totaka for the console, writing: "Motion controls, Miis, and balance boards have all been removed or diminished as Nintendo moved on, but take a quick look across YouTube, TikTok, or Twitter, and I guarantee it won't take all that long to hear a Wii track. Covers and memes featuring music from the Wii are everywhere. Music written for the Wii has taken on a new life as a cultural touchstone, and inspired people far beyond the confines of the little white wedge it was composed for." The Washington Post's Michael Andor Brodeur described the Mii Channel music as "a cultural touchstone", while Martin Robinson of Eurogamer called the theme of the Wii Shop Channel "a song so infectious it went on to become a meme"; both the Mii Channel theme and Wii Shop Channel theme have inspired jazz covers.
## See also
- History of Nintendo
# Peace dollar

US dollar coin (1921–1928, 1934–1935, 2021–present)
The Peace dollar is a United States dollar coin minted for circulation from 1921 to 1928 and 1934 to 1935, and beginning again for collectors in 2021. Designed by Anthony de Francisci, the coin was the result of a competition to find designs emblematic of peace. Its obverse represents the head and neck of the Goddess of Liberty in profile, and the reverse depicts a bald eagle at rest clutching an olive branch, with the legend "Peace". It was the last United States dollar coin to be struck for circulation in silver.
With the passage of the Pittman Act in 1918, the United States Mint was required to strike millions of silver dollars, and began to do so in 1921, using the Morgan dollar design. Numismatists began to lobby the Mint to issue a coin that memorialized the peace following World War I; although they failed to get Congress to pass a bill requiring the redesign, they were able to persuade government officials to take action. The Peace dollar was approved by Treasury Secretary Andrew Mellon in December 1921, completing the redesign of United States coinage that had begun in 1907.
The public believed the announced design, which included a broken sword, was illustrative of defeat, and the Mint hastily acted to remove the sword. The Peace dollar was first struck on December 28, 1921; just over a million were coined bearing a 1921 date. When the Pittman Act requirements were met in 1928, the Mint ceased production of the coins, but more were struck during 1934 and 1935 as a result of further legislation. In 1965, amid much controversy, the Denver mint struck over 316,000 Peace dollars dated 1964, but these were never issued, and all are believed to have been melted.
In 2021, the U.S. Mint produced a special 2021 issue Peace Dollar to celebrate the design’s 100th anniversary, with minting of the coins to continue from 2023 onwards.
## Background and preparations
### Statutory history
The Bland–Allison Act, passed by Congress on February 28, 1878, required the Treasury to purchase a minimum of \$2 million in domestically mined silver per month and coin it into silver dollars. The Mint used a new design by engraver George T. Morgan, and struck what became known as the Morgan dollar. Many of the pieces quickly vanished into bank vaults for use as backing for paper currency redeemable in silver coin, known as silver certificates. In 1890, the purchases required under the Bland–Allison Act were greatly increased under the terms of the Sherman Silver Purchase Act. Although the Sherman Act was repealed in 1893, it was not until 1904 that the government struck the last of the purchased silver into dollars. Once it did, production of the coin ceased.
During World War I, the German government hoped to destabilize British rule over India by spreading rumors that the British were unable to redeem for silver all of the paper currency they had printed. These rumors, and hoarding of silver, caused the price of silver to rise and risked damaging the British war effort. The British turned to their war ally, the United States, asking to purchase silver to increase the supply and lower the price. In response, Congress passed the Pittman Act of April 23, 1918. This statute authorized the United States to melt up to 350,000,000 silver dollars and sell the metal to the British government at \$1 per ounce of silver, plus the value of the copper in the coins, and handling and transportation fees. Only 270,232,722 coins were melted for sale to the British, but this represented 47% of all Morgan dollars struck to that point. The Treasury was required by the terms of the Act to strike new silver dollars to replace the coins that were melted, and to strike them from silver purchased from American mining companies.
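The "47%" figure above can be sanity-checked with a quick calculation. This is a minimal sketch; the total Morgan dollar mintage it assumes (roughly 570 million coins struck before the melt) is an outside estimate, not a figure given in the text:

```python
# Sanity check of the Pittman Act melt percentage.
melted = 270_232_722            # coins melted for sale to the British
total_struck = 570_000_000      # assumed total Morgan dollars struck to that point
fraction = melted / total_struck
print(f"{fraction:.0%}")        # prints "47%"
```

With that assumed total, the melt works out to about 47% of all Morgan dollars coined, matching the stated figure.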
### Idea and attempted legislation
It is uncertain who originated the idea for a US coin to commemorate the peace following World War I; the genesis is usually traced to an article by Frank Duffield published in the November 1918 issue of The Numismatist. Duffield suggested that a victory coin should be "issued in such quantities it will never become rare". In August 1920, a paper by numismatist Farran Zerbe was read to that year's American Numismatic Association (ANA) convention in Chicago. In the paper, entitled Commemorate the Peace with a Coin for Circulation, Zerbe called for the issuance of a coin to celebrate peace, stating,
> I do not want to be misunderstood as favoring the silver dollar for the Peace Coin, but if coinage of silver dollars is to be resumed in the immediate future, a new design is probable and desirable, bullion for the purpose is being provided, law for the coinage exists and limitation of the quantity is fixed—all factors that help pave the way for Peace Coin advocates. And then—we gave our silver dollars to help win the war, we restore them in commemoration of victory and peace.
Zerbe's proposal led to the appointment of a committee to transmit the proposal to Congress and urge its adoption. According to numismatic historian Walter Breen, "Apparently, this was the first time that a coin collector ever wielded enough political clout to influence not only the Bureau of the Mint, but Congress as well." The committee included noted coin collector and Congressman William A. Ashbrook (Democrat–Ohio), who had chaired the House Committee on Coinage, Weights, and Measures until the Republicans gained control following the 1918 elections.
Ashbrook was defeated for re-election in the 1920 elections; at that time congressional terms did not end until March 4 of the following year. He was friendly with the new committee chairman Albert Henry Vestal (Republican–Indiana), and persuaded him to schedule a hearing on the peace coin proposal for December 14, 1920. Though no bill was put before it, the committee heard from the ANA delegates, discussed the matter, and favored the use of the silver dollar, which as a large coin had the most room for an artistic design. The committee took no immediate action; in March 1921, after the Harding administration took office, Vestal met with the new Secretary of the Treasury, Andrew W. Mellon, and Mint Director Raymond T. Baker about the matter, finding them supportive so long as the redesign involved no expense.
On May 9, 1921, striking of the Morgan dollar resumed at the Philadelphia Mint under the recoinage called for by the Pittman Act. The same day, Congressman Vestal introduced the Peace dollar authorization bill as a joint resolution. Vestal placed his bill on the Unanimous Consent Calendar, but Congress adjourned for a lengthy recess without taking any action. When Congress returned, Vestal asked for unanimous consent that the bill pass on August 1, 1921. However, one representative, former Republican leader James R. Mann (Illinois) objected, and numismatic historian Roger Burdette suggests that Mann's stature in the House ensured that the bill would not pass. Nevertheless, Vestal met with the ANA and told them that he hoped Congress would reconsider when it met again in December 1921.
### Competition
Sometime after the December 1920 hearing requested by the ANA, the chairman of the U.S. Commission of Fine Arts, Charles Moore, became aware of the proposed congressional action, and decided to investigate. Moore, together with Commission member and Buffalo nickel designer James Earle Fraser, met with Mint Director Baker on May 26, 1921, and they agreed that it would be appropriate to hold a design competition for the proposed dollar, under the auspices of the Commission. This was formalized on July 26 with the Commission's written recommendation to the Mint that a competition, open only to invited sculptors, be used to select designs. The winner of the competition was to receive \$1,500 prize money, while all other participants would be given \$100. On July 28, President Harding issued Executive Order 3524, requiring that coin designs be submitted to the Commission before approval by the Treasury Secretary. In early September, following the failure of the bill, Baker contacted Moore, putting the matter aside pending congressional action.
By November, proponents of the peace coin had realized that congressional approval was not necessary—as the Morgan dollar had been struck for more than 25 years, it was eligible for replacement at the discretion of the Secretary of the Treasury under an 1890 act. The Morgan design was then being used for large quantities of silver dollars as the Mint struck replacements for the melted coins under the Pittman Act. Though Congress had not yet convened, Baker contacted Fraser in early November to discuss details of the design competition. According to Burdette, Baker's newfound enthusiasm came from the fact that President Harding was about to formally declare an end to the war with Germany—a declaration needed because the US had not ratified the Treaty of Versailles. In addition, the Washington Conference on disarmament, for which the administration had great hopes, was soon to convene. On November 19, Fraser notified competition participants by personal letter, sending official rules and requirements four days later, with submissions due by December 12. Competition participants included Hermon MacNeil, Victor D. Brenner, and Adolph Weinman, all of whom had designed previous U.S. coins.
The artists were instructed to depict the head of Liberty on the obverse, to be made "as beautiful and full of character as possible". The reverse would depict an eagle, as prescribed by the Coinage Act of 1792, but otherwise was left to the discretion of the artist. The piece also had to bear the denomination, the name of the country, "E pluribus unum", the motto "In God We Trust", and the word "Liberty".
On December 13, the commission assembled to review the submitted designs, as well as a set produced by Mint Chief Engraver Morgan at Baker's request, and a set, unrequested, from a Mr. Folio of New York City. It is not known how the designs were displayed for the Commission. After considerable discussion among Fraser, Moore, and Herbert Adams (a sculptor and former member of the Commission), a design by Anthony de Francisci was unanimously selected.
### Design
At age 34, de Francisci was the youngest of the competitors; he was also among the least experienced in the realm of coin design. While most of the others had designed regular or commemorative coins for the Mint, de Francisci's sole effort had been the conversion of drawings for the 1920 Maine commemorative half dollar to the finished design. De Francisci had had little discretion in that project, and later said of the work, "I do not consider it very favorably."
The sculptor based the obverse design of Liberty on the features of his wife, Teresa de Francisci. Due to the short length of the competition, he lacked the time to hire a model with the features he envisioned. Teresa de Francisci was born Teresa Cafarelli in Naples, Italy. In interviews, she related that when she was five years old and the steamer on which she and her family were immigrating passed the Statue of Liberty, she was fascinated by the statue, called her family over, and struck a pose in imitation. She later wrote to her brother Rocco,
> You remember how I was always posing as Liberty, and how brokenhearted I was when some other little girl was selected to play the role in the patriotic exercises in school? I thought of those days often while sitting as a model for Tony's design, and now seeing myself as Miss Liberty on the new coin, it seems like the realization of my fondest childhood dream.
Breen wrote that the radiate crown that the Liberty head bears is not dissimilar to those on certain Roman coins, but is "more explicitly intended to recall that on the Statue of Liberty". Anthony de Francisci recalled that he opened the window of the studio and let the wind blow on his wife's hair as he worked. However, he did not feel that the design depicted her exclusively. He noted that "the nose, the fullness of the mouth are much like my wife's, although the whole face has been elongated". De Francisci submitted two reverse designs; one showed a warlike eagle, aggressively breaking a sword; the other an eagle at rest, holding an olive branch. The latter design, which would form the basis for the reverse of the Peace dollar, recalled de Francisci's failed entry for the Verdun City medal. The submitted obverse is almost identical to the coin as struck, excepting certain details of the face, and that the submitted design used Roman rather than Arabic numerals for the date.
Baker, de Francisci, and Moore met in Washington on December 15. At that time, Baker, who hoped to start Peace dollar production in 1921, outlined the tight schedule for this to be accomplished, and requested certain design changes. Among these was the inclusion of the broken sword from the sculptor's alternate reverse design, to be placed under the eagle, on the mountaintop on which it stands, in addition to the olive branch. Baker approved the designs, subject to these changes. The revised designs were presented to President Harding on December 19. Harding insisted on the removal of a small feature of Liberty's face, which seemed to him to suggest a dimple, something he did not consider suggestive of peace, and the sculptor then did so.
### Controversy
The Treasury announced the new design on December 19, 1921. Photographs of Baker and de Francisci examining the final plaster model appeared in newspapers, along with written descriptions of the designs, since the Treasury at that time took the position that it was illegal for photographs of a United States coin to be printed in a newspaper. Secretary Mellon gave formal approval to the design on December 20. As it would take the Mint several days to produce working dies, the first strike of the new coins was scheduled for December 29.
The new design was widely reported in newspapers, and was the source of intense public attention. A Mint press release described the reverse as "a large figure of an eagle perched on a broken sword, and clutching an olive branch bearing the word, 'peace'". On December 21, the New York Herald ran a scathing editorial against the new design,
> If the artist had sheathed the blade or blunted it there could be no objection. Sheathing is symbolic of peace, of course; the blunted sword implies mercy. But a broken sword carries with it only unpleasant associations.
>
> A sword is broken when its owner has disgraced himself. It is broken when a battle is lost and breaking is the alternative to surrendering. A sword is broken when the man who wears it can no longer render allegiance to his sovereign. But America has not broken its sword. It has not been cashiered or beaten; it has not lost allegiance to itself. The blade is bright and keen and wholly dependable. It is regrettable that the artist should have made such an error in symbolism. The sword is emblematic of Justice as well as of Strength. Let not the world be deceived by this new dollar. The American effort to limit armament and to prevent war or at least reduce its horror does not mean that our sword is broken.
At the time, according to Burdette, given the traumas of the Great War, Americans were highly sensitive about their national symbols, and unwilling to allow artists any leeway in interpretation. The Mint, the Treasury, and the Fine Arts Commission began to receive large numbers of letters from the public objecting to the design. De Francisci attempted to defend his design, stating, "with the sword there is the olive branch of peace and the combination of the two renders it impossible to conceive of the sword as a symbolization of defeat". Baker had left Washington to visit the San Francisco Mint, a transcontinental journey of three days. Acting Mint Director Mary Margaret O'Reilly sent him a telegram on December 23, urgently seeking his approval to remove the sword from the reverse, as had been recommended by Moore and Fraser at a meeting the previous afternoon. Due to the tight timeline for 1921 strikings of the dollar, it was not possible to await Baker's response, so on the authority of Treasury Undersecretary Seymour Parker Gilbert, who was approached by O'Reilly, the Mint proceeded with the redesign. To satisfy Harding's executive order, the Fine Arts Commission quickly approved the change, and by the time Baker wired his approval on December 24, without being able to see the revisions, Gilbert had already approved the revised design in Secretary Mellon's absence. A press release was issued late on December 24, stating that the broken sword which had appeared on de Francisci's alternate reverse would not appear on the issued coin. In its December 25 edition, the Herald took full credit for the deletion of the broken sword from the coin's design.
Farran Zerbe, whose paper to the ANA convention helped launch the dollar proposal, saw de Francisci's defense and the press release, and suggested that the sculptor had mistakenly thought his alternate design had been approved.
## Production
### Initial release
The removal of the sword from the coinage hub, which had already been produced by reduction from the plaster models, was accomplished by painstaking work by Mint Chief Engraver Morgan, using extremely fine engraving tools under magnification. Morgan did the work on December 23 in the presence of de Francisci, who had been summoned to the Philadelphia Mint to ensure the work met with his approval. It was insufficient merely to remove the sword, as the rest of the design had to be adjusted. Morgan had to hide the excision; he did so by extending the olive branch, previously half-hidden by the sword, but had to remove a small length of stem that showed to the left of the eagle's talons. Morgan also strengthened the rays, and sharpened the appearance of the eagle's leg. The chief engraver did his work with such skill that his alterations went undetected for over 85 years.
On December 28, Philadelphia Mint Superintendent Freas Styer wired Baker in San Francisco, reporting the first striking of the Peace dollar. The Mint later reported that 1,006,473 pieces were struck in 1921, a rate of output for the four days remaining in the year that Burdette calls "amazing"; he speculates that minting of 1921 Peace dollars continued into 1922. The first coin struck was to be sent to President Harding, but what became of it is something of a mystery: O'Reilly indicated that she had the coin sent to Harding, but the inventory of Harding's estate, prepared after the President died in office less than two years later, does not mention it, nor is there any mention of the coin in Harding's papers. Breen, in his earlier book on U.S. coins, stated that the coin was delivered to Harding by messenger on January 3, 1922, but does not state the source of his information. A few proofs of the 1921 production were struck early in the run, in both satin and matte finishes, but it is unknown exactly how many with either finish were created; numismatic historians Leroy Van Allen and A. George Mallis estimate the mintage totals at 24 of the former and five of the latter.
The Peace dollar was released into circulation on January 3, 1922. In common with all silver and copper-nickel dollar coins struck from 1840 to 1978, the Peace dollar had a diameter of 1.5 inches (38 mm), which was larger than the Mint's subsequently struck modern dollar coins. Its issuance completed the redesign of United States coinage that had begun with issues in 1907. Long lines formed at the Sub-Treasury Building in New York the following day when that city's Federal Reserve Bank received a shipment; the 75,000 coins initially sent by the Mint were "practically exhausted" by the end of the day. Rumors that the coins did not stack well were contradicted by bank cashiers, who demonstrated for The New York Times that the coins stacked about as well as the Morgan dollars. De Francisci had paid Morgan for 50 of the new dollars; on January 3, Morgan sent him the pieces. According to his wife, de Francisci had bet several people that he would lose the design competition; he used the pieces to pay off the bets and did not keep any.
According to one Philadelphia newspaper,
> Liberty is getting younger. Take it from the new 'Peace Dollar,' put in circulation yesterday, the young woman who has been adorning silver currency for many years, never looked better than in the 'cart wheel' that the Philadelphia Mint has just started to turn out. The young lady, moreover, has lost her Greek profile. Helenic [sic] beauty seems to have been superseded by the newer 'flapper' type.
### Modification and production
From the start, the Mint found that excessive pressure had to be applied to fully bring out the design of the coin, and the dies broke rapidly. On January 10, 1922, O'Reilly, still serving as Acting Mint Director in Baker's absence, ordered production of the dollar stopped. Dies had been sent to the Denver and San Francisco mints in anticipation of beginning coinage there; they were ordered not to begin work until the difficulties had been resolved. The Commission of Fine Arts was asked to advise what changes might solve the problems. Both Fraser and de Francisci were called to Philadelphia, and after repeated attempts to solve the problem without reducing the relief failed, de Francisci agreed to modify his design to reduce the relief. The plaster models he prepared were reduced to coin size using the Mint's Janvier reducing lathe. However, even after 15 years of possessing the pantograph-like device, the Mint had no expert in its use on its staff, and, according to Burdette, "[h]ad a technician from Tiffany's or Medallic Art [Company] been called in, the 1922 low relief coins might have turned out noticeably better than they did".
Approximately 32,400 coins on which Morgan had tried to keep a higher relief were struck in January 1922. While all were believed to have been melted, one circulated example has surfaced. Also, high relief 1922 proof dollars occasionally appear on the market and it is believed that about six to 10 of them exist. The new low-relief coins, which Fraser accepted on behalf of the Commission, though under protest, were given limited production runs in Philadelphia in early February. When the results proved satisfactory, San Francisco began striking its first Peace dollars using the low-relief design on February 13, with Denver initiating production on February 21, and Philadelphia on February 23. The three mints together struck over 84 million pieces in 1922.
The 1926 Peace dollar, from all mints, has on the obverse the word "God", slightly boldened. The Peace dollar's lettering tended to strike indistinctly, and Burdette suggests that the new chief engraver, John R. Sinnock (who succeeded Morgan after his death in 1925), may have begun work in the middle of the motto "In God We Trust", and for reasons unknown, only the one word was boldened. No Mint records mention the matter, which was not discovered until 1999.
The Peace dollar circulated mainly in the Western United States, where coins were preferred over paper money, and saw little circulation elsewhere. Aside from this use, the coins were retained in vaults as part of bank reserves. They would frequently be obtained from banks as Christmas presents, with most deposited again in January. With the last of the Pittman Act silver struck into coins in 1928, the Mint ceased the production of Peace dollars.
Production of Peace dollars resumed in 1934, due to another congressional act; this one requiring the Mint to purchase large quantities of domestic silver, a commodity whose price was at a historic low. This Act assured producers of a ready market for their product, with the Mint gaining a large profit in seigniorage, through monetizing cheaply purchased silver—the Mint in fact paid for some shipments of silver bullion in silver dollars. Pursuant to this authorization, over seven million silver Peace dollars were struck in 1934 and 1935. Mint officials gave consideration to striking 1936 silver dollars, and in fact prepared working dies, but as there was no commercial demand for them, none was actually struck. With Mint Chief Engraver Sinnock thinking it unlikely that there would be future demand for the denomination, the master dies were ordered destroyed in January 1937.
## Striking of 1964-D Peace dollars
On August 3, 1964, Congress passed legislation providing for the striking of 45,000,000 silver dollars. Silver coins, including the dollar, had become scarce due to hoarding as the price of silver rose past the point at which a silver dollar was worth more as bullion than as currency. The new coins were intended to be used at Nevada casinos and elsewhere in the West where "hard money" was popular. Many in the numismatic press complained that the new silver dollars would only satisfy a small special interest, and would do nothing to alleviate the general coin shortage. Much of the pressure for the coins to be struck was being applied by the Senate Majority Leader, Mike Mansfield (Democrat–Montana), who represented a state that heavily used silver dollars. Preparations for the striking proceeded at a reluctant Mint Bureau. Some working dies had survived Sinnock's 1937 destruction order, but were found to be in poor condition, and Mint Assistant Engraver (later Chief Engraver) Frank Gasparro was authorized to produce new ones. Mint officials had also considered using the Morgan Dollar design; this idea was dropped and Gasparro replicated the Peace dollar dies. The reverse dies all bore Denver mintmarks; as the coins were slated for circulation in the West, it was deemed logical to strike them nearby.
Treasury Secretary C. Douglas Dillon was strongly opposed to restriking the Peace dollar and in early 1965 informed President Lyndon Johnson that such coins would be unlikely to circulate in Montana or anywhere else; they would simply be hoarded. Nevertheless, Dillon concluded that as Senator Mansfield insisted, they would have to be struck. Dillon resigned on April 1; his successor, Henry H. Fowler, was immediately questioned by Mansfield about the dollars, and he assured the senator that things would be worked out to his satisfaction. Mint Director Eva Adams was also against striking the silver dollars, but hoped to keep the \$600,000 appropriated for that expense. Senator Mansfield refused to consider any cancellation or delay and on May 12, 1965, the Denver Mint began trial strikes of the 1964-D Peace dollar—the Mint had obtained congressional authorization to continue striking 1964-dated coins into 1965.
The new pieces were publicly announced on May 15, 1965, and coin dealers immediately offered \$7.50 each for them, ensuring that they would not circulate. The public announcement prompted a storm of objections. Both the public and many congressmen saw the issue as a poor use of Mint resources during a severe coin shortage, which would only benefit coin dealers. On May 24, one day before a hastily called congressional hearing, Adams suddenly announced that the pieces were deemed trial strikes, never intended for circulation. The Mint later stated that 316,076 dollars had been struck; all were reported melted amid heavy security. To ensure that there would be no repetition, Congress inserted a provision in the Coinage Act of 1965 forbidding the coinage of silver dollars for five years. No 1964-D Peace dollars are known to exist in either public or private hands. In 1970, two unknown specimens were discovered in a Treasury vault and were ordered destroyed. Rumors and speculation about others surviving in illegal private possession immediately began and continue to appear from time to time. Pieces appearing to be 1964-D dollars have also been privately restruck using unofficial dies and genuine, earlier-date Peace dollars.
Some Peace dollars using an experimental base metal composition were struck in 1970 in anticipation of the approval of the Eisenhower dollar; they are all presumed destroyed. This new dollar coin was approved by an act signed by President Richard Nixon on December 31, 1970, with the obverse to depict President Dwight D. Eisenhower, who had died in March, 1969. Circulating Eisenhower dollars contained no precious metal, though some for collectors were struck in 40% silver.
## Modern Peace dollars
On January 5, 2021, President Donald Trump signed legislation to issue Morgan and Peace dollars in 2021 to mark the centennial of the transition between the two designs.
The US Mint released order information in the spring, with the coins priced at \$85. The coins were struck in "0.858 troy oz. of .999 fine silver with an uncirculated finish". Peace Dollars were available beginning August 10, 2021, and shipped at the beginning of the next fiscal year, in October 2021. They have no mint mark, as the original coins from Philadelphia had none, and a household order limit of 3.
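The quoted silver content works out as follows. This is a minimal sketch that reads the Mint's phrase as a gross weight of 0.858 troy ounces at .999 fineness (an interpretation, since the wording is ambiguous); the grams-per-troy-ounce conversion factor is the only outside constant:

```python
# Pure silver content of the 2021 Peace dollar, by the stated specification.
TROY_OZ_TO_G = 31.1034768   # grams per troy ounce (exact definition)
gross_troy_oz = 0.858       # silver weight quoted by the Mint
fineness = 0.999            # ".999 fine"
fine_silver_g = gross_troy_oz * fineness * TROY_OZ_TO_G
print(f"{fine_silver_g:.2f} g")   # prints "26.66 g"
```

That is, each coin contains roughly 26.7 grams of pure silver.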
The US Mint originally decided to continue the Morgan and Peace Dollar program for 2022 and beyond minted in San Francisco (S) with a proof finish, but on March 14, 2022, announced that the planned 2022 releases had been scrapped due to "supply chain issues, production capacity and shipping logistics", and the rising price of silver, with plans to resume the program in 2023.
For 2023, the US Mint announced plans to issue Peace Dollars in three finishes:
- Uncirculated, minted by the Philadelphia Mint, no mint mark, 275,000 mintage limit.
- Proof, minted by the San Francisco Mint, S mint mark, 400,000 mintage limit.
- Reverse proof, as part of a 2-coin set including a Morgan Dollar, minted by the San Francisco Mint, S mint mark, 250,000 mintage limit.
The Uncirculated 2023 Peace Dollar was released on July 13, 2023, and sold out on the first day. The Proof 2023 Peace Dollar was released on August 9, 2023.
## Mintage figures
None of the Peace dollar mintages is particularly rare, and A Guide Book of United States Coins (or Red Book) lists low-grade circulated specimens for most years for little more than the coin's bullion value. Two exceptions are the first-year 1921 Peace dollar, minted only at the Philadelphia mint and issued in high relief, and the low-mintage 1928-P Peace dollar. Prices for the 1928-P dollar are much lower than its mintage of 360,649 would suggest, because the U.S. Mint announced that limited quantities would be produced and many were saved. In contrast, the 1934-S dollar was not saved in great numbers, so circulated specimens are fairly inexpensive but mid-grade uncirculated specimens can cost thousands of dollars.
## See also
- VAM (Morgan and Peace dollar die varieties)
|
164,286 |
Battle of Zama
| 1,168,112,688 |
Final battle of the Second Punic War (202 BC)
|
[
"200s BC conflicts",
"202 BC",
"Articles containing video clips",
"Battles involving Numidia",
"Battles of the Second Punic War",
"Kingdom of Numidia",
"Military history of Tunisia"
] |
The Battle of Zama was fought in 202 BC in what is now Tunisia between a Roman army commanded by Scipio Africanus and a Carthaginian army commanded by Hannibal. The battle was part of the Second Punic War and resulted in such a severe defeat for the Carthaginians that they capitulated. The Roman army of approximately 30,000 men was outnumbered by the Carthaginians who fielded either 40,000 or 50,000; the Romans were stronger in cavalry, but the Carthaginians had 80 war elephants.
At the outset of the Second Punic War, in 218 BC, a Carthaginian army led by Hannibal invaded mainland Italy, where it campaigned for the next 16 years. In 210 BC Scipio took command of the faltering Roman war effort in Iberia (modern Spain and Portugal) and cleared the peninsula of Carthaginians in five years. He returned to Rome and was appointed consul in 205 BC. The following year his army landed near the Carthaginian port of Utica. The Carthaginians and their Numidian allies were repeatedly beaten in battle and the Roman ally Masinissa became the leading Numidian ruler. Scipio and Carthage entered into peace negotiations, while Carthage recalled armies from Italy commanded by Hannibal and Mago Barca. The Roman Senate ratified a draft treaty, but when Hannibal arrived from Italy, Carthage repudiated it. Hannibal marched inland to confront the Romans and a battle quickly ensued.
The fighting opened with a charge by the Carthaginian elephants. These were repulsed, some retreating through the Carthaginian cavalry on each wing and disorganising them. The Roman cavalry units on each wing took advantage to charge their counterparts, rout them and pursue them off the battlefield. The two armies' close-order infantry were each deployed in three lines. The first two lines engaged each other and after a hard-fought combat the Carthaginians were routed. The second Carthaginian line then fanatically assaulted the Roman first line, inflicting heavy losses and pushing it back. After the Romans committed their second line the Carthaginians were forced to withdraw. There was a pause, during which the Romans formed a single extended line, to match that of the Carthaginians. These two lines charged each other, according to the near-contemporary historian Polybius "with the greatest fire and fury". The fight continued for some time, neither side gaining the advantage. The Roman cavalry then returned to the battlefield and charged the Carthaginian line in the rear, routing and destroying it. Carthage was left with no army with which to continue the war. The peace treaty dictated by Rome stripped Carthage of its overseas territories and some of its African ones. Thereafter, it was clear that Carthage was politically subordinate to Rome.
## Primary sources
The main source for almost every aspect of the Punic Wars is the historian Polybius (c. 200 – c. 118 BC), a Greek sent to Rome in 167 BC as a hostage. His works include a now largely lost manual on military tactics, but he is best known for The Histories, written sometime after 146 BC. Polybius's work is considered broadly objective and largely neutral as between Carthaginian and Roman points of view. Polybius was an analytical historian and wherever possible interviewed participants, from both sides, in the events he wrote about.
The accuracy of Polybius's account has been much debated over the past 150 years. Modern historians consider Polybius to have treated the relatives of Scipio Aemilianus, his patron and friend, unduly favourably but the consensus is to accept his account largely at face value and the details of the wars in modern sources are largely based on interpretations of Polybius's account. The modern historian Andrew Curry sees Polybius as being "fairly reliable"; Craige Champion describes him as "a remarkably well-informed, industrious, and insightful historian". Much of Polybius's account of the Second Punic War is missing, or only exists in fragmentary form.
The account of the Roman historian Livy, who relied heavily on Polybius, is used by modern historians where Polybius's account is not extant. The classicist Adrian Goldsworthy says Livy's "reliability is often suspect", and the historian Philip Sabin refers to Livy's "military ignorance". Dexter Hoyos describes Livy's account of Zama as "bizarrely at odds with Polybius’ which he seems not to understand fully".
Other, later, ancient histories of the war exist, although often in fragmentary or summary form. Modern historians usually take into account the writings of Appian and Cassius Dio, two Greek authors writing during the Roman era; they are described by John Lazenby as "clearly far inferior" to Livy. Hoyos accuses Appian of bizarre invention in his account of Zama; Michael Taylor states that it is "idiosyncratic". But some fragments of Polybius can be recovered from their texts. The Greek moralist Plutarch wrote several biographies of Roman commanders in his Parallel Lives. Other sources include coins, inscriptions, archaeological evidence and empirical evidence from reconstructions such as the trireme Olympias.
## Background
The First Punic War was fought between the two main powers of the western Mediterranean in the 3rd century BC: Carthage and Rome. The war lasted for 23 years, from 264 to 241 BC, before the Carthaginians were defeated. It took place primarily on the Mediterranean island of Sicily, its surrounding waters and in North Africa.
Carthage expanded its territory in Iberia (modern Spain and Portugal) from 236 BC, in 226 BC agreeing the Ebro Treaty with Rome which established the Ebro River as the northern boundary of the Carthaginian sphere of influence. A little later Rome made a separate treaty of association with the city of Saguntum, well south of the Ebro. Hannibal, the de facto ruler of Carthaginian Iberia, led an army to Saguntum in 219 BC and besieged, captured and sacked it. Early the following year Rome declared war on Carthage, starting the Second Punic War.
Hannibal led a large Carthaginian army from Iberia, through Gaul, over the Alps and invaded mainland Italy in 218 BC. During the next three years Hannibal inflicted heavy defeats on the Romans at the battles of the Trebia, Lake Trasimene and Cannae. At the last of these alone, at least 67,500 Romans were killed or captured. The historian Toni Ñaco del Hoyo describes these as "great military calamities", and Brian Carey writes that they brought Rome to the brink of collapse. Hannibal's army campaigned in Italy for 16 years.
There was also extensive fighting in Iberia from 218 BC. In 210 BC Publius Cornelius Scipio arrived to take command of Roman forces in Iberia. During the following four years Scipio repeatedly defeated the Carthaginians, driving them out of Iberia in 206 BC. One of Carthage's allies in Iberia was the Numidian prince Masinissa, who led a force of light cavalry in several battles.
## Roman preparations
In 206 BC Scipio left Iberia and returned to Italy. There he was elected to the senior position of consul in early 205 BC, despite being aged 31 when the minimum age for the office was 42. Scipio was already anticipating an invasion of North Africa and while still in Iberia had been negotiating with the Numidian leaders Masinissa and Syphax. He failed to win over the latter, but made an ally of the former.
Opinion was divided in Roman political circles as to whether an invasion of North Africa was an excessive risk. Hannibal was still on Italian soil; there was the possibility of further Carthaginian invasions, shortly to be realised when Hannibal's youngest brother Mago Barca landed in Liguria with an army from Iberia; the practical difficulties of an amphibious invasion and its logistical follow-up were considerable; and when the Romans had invaded North Africa in 256 BC during the First Punic War they had been driven out with heavy losses, which had re-energised the Carthaginians. Eventually a compromise was agreed: Scipio was given Sicily as his consular province – the best base from which to launch an invasion of the Carthaginian homeland and then logistically support it – and permission to cross to Africa on his own judgement. But Roman commitment was less than wholehearted: Scipio could not conscript troops for his consular army, as was usual, but only call for volunteers.
In 216 BC the survivors of the Roman defeat at Cannae had been formed into two legions and sent to Sicily. They formed the core of the Roman expeditionary force. Modern historians estimate a combat strength of 25,000–30,000, of whom more than 90 per cent were infantry. With up to half of the complement of his legions being fresh volunteers, and with no fighting having taken place on Sicily for the past five years, Scipio instigated a rigorous training regime. This extended from drills by individual centuries – the basic Roman army manoeuvre unit of 80 men – to exercises by the full army. This lasted for approximately a year. At the same time Scipio assembled a vast quantity of food and materiel, merchant ships to transport it and his troops, and warships to escort the transports.
Also during 205 BC, 30 Roman ships under Scipio's second-in-command, the legate Gaius Laelius, raided North Africa around Hippo Regius, gathering large quantities of loot and many captives. The Carthaginians initially believed this was the anticipated invasion by Scipio and his full invasion force; they hastily strengthened fortifications and raised troops. Reinforcements were sent to Mago in an attempt to distract the Romans in Italy. Meanwhile, a succession war had broken out in Numidia between the Roman-supporting Masinissa and the Carthaginian-inclined Syphax. Laelius re-established contact with Masinissa during his raid. Masinissa expressed dismay at how long the Romans were taking to complete their preparations and land in Africa.
## Opposing forces
### Roman
Most male Roman citizens were liable for military service and would serve as infantry; a better-off minority provided a cavalry component. Traditionally, when at war the Romans would raise two legions, each of 4,200 infantry – this could be increased to 5,000 in some circumstances, or, rarely, even more – and 300 cavalry. Approximately 1,200 of the infantry – poorer or younger men unable to afford the armour and equipment of a standard legionary – served as javelin-armed skirmishers known as velites; they each carried several javelins, which would be thrown from a distance, a short sword and a 90-centimetre (3 ft) shield. The balance were equipped as heavy infantry, with body armour, large shields and short thrusting swords. They were divided into three ranks, of which the hastati in the front rank also carried two javelins each; the principes and triarii, in the second and third ranks, respectively, had thrusting spears instead. A standard-size legion at full strength would have 1,200 hastati, 1,200 principes and 600 triarii.
Both legionary sub-units and individual legionaries fought in relatively open order. It was the long-standing Roman procedure to elect two men each year as senior magistrates, known as consuls, who in time of war would each lead an army. An army was usually formed by combining a Roman legion with a similarly sized and equipped legion provided by their Latin allies; allied legions usually had a larger attached complement of cavalry than Roman ones. By this stage of the war, Roman armies were generally larger, typically consisting of four legions, two Roman and two provided by its allies, for a total of approximately 20,000 men. The Roman army which invaded Africa consisted of four legions, each of the Roman pair reinforced to an unprecedented 6,200 infantry and with a more usual 300 cavalry each. Modern historians estimate the invading army to have totalled 25,000–30,000 men, including perhaps 2,500 cavalry. Goldsworthy describes the army as being "superbly trained" when it left Sicily.
### Carthaginian
Carthaginian citizens only served in their army if there was a direct threat to the city of Carthage. When they did they fought as well-armoured heavy infantry armed with long thrusting spears, although they were notoriously ill-trained and ill-disciplined. In most circumstances Carthage recruited foreigners to make up its army. Many were from North Africa and these were frequently referred to as "Libyans". The region provided several types of fighters, including: close-order infantry equipped with large shields, helmets, short swords and long thrusting spears; javelin-armed light infantry skirmishers; close-order shock cavalry (also known as "heavy cavalry") carrying spears; and light cavalry skirmishers who threw javelins from a distance and avoided close combat – the latter were usually Numidians.
The close-order African infantry and the citizen-militia both fought in a tightly packed formation known as a phalanx. On occasion some of the infantry would wear captured Roman armour, especially those who served with Hannibal. Both Iberia and Gaul also provided experienced but unarmoured infantry who would charge ferociously, but had a reputation for breaking off if combat was protracted. Slingers were frequently recruited from the Balearic Islands. The Carthaginians also employed war elephants; North Africa had indigenous African forest elephants at the time. The sources are not clear as to whether they carried towers containing fighting men.
## Invasion
In 204 BC, probably in June or July, the Roman army left Sicily and disembarked three days later at Cape Farina 20 kilometres (12 mi) north of the large Carthaginian port of Utica. Carthaginian scouting parties were repulsed and the area was pillaged. Masinissa joined the Romans with either 200 or 2,000 men; the sources differ. Masinissa had been recently defeated by his Numidian rival Syphax, who had decided to act in support of Carthage. Wanting a more permanent base and a port more resilient to the bad weather to be expected when winter came, Scipio besieged Utica. Although the Romans were well supplied with siege engines the siege dragged on. A Carthaginian army under the experienced commander Hasdrubal Gisco and a Numidian one under Syphax set up separate fortified camps nearby. The size of both of these armies is uncertain, but it is accepted that the Romans were considerably outnumbered, especially in terms of cavalry. The Romans pulled back from Utica. Both sides were reluctant to commit to a pitched battle.
### Fighting in 203 BC
Scipio sent emissaries to Syphax to attempt to persuade him to defect. Syphax in turn offered to broker peace terms. A series of exchanges of negotiating parties followed, during which Scipio obtained information on the layout and construction of the Numidian camp, as well as the size and composition of the Numidian army and the most frequented routes in and out of the camp. As the weather improved Scipio made conspicuous preparations to assault Utica. Instead, he marched his army out late one evening and divided it in two. One part launched a night attack on the Numidian camp, setting fire to their reed barracks. In the ensuing panic and confusion the Numidians were dispersed with heavy casualties. Not realising what was happening, the Carthaginians were also taken by surprise when Scipio attacked them with the remaining Romans. Again the Romans inflicted heavy casualties in the dark. Hasdrubal fled 40 kilometres (25 mi) to Carthage with 2,500 survivors, pursued by Scipio. Syphax escaped with a few cavalry and regrouped 11 kilometres (7 mi) away.
When word of the defeat reached Carthage there was panic, and some wanted to renew the peace negotiations. The Carthaginian Senate also heard demands for Hannibal's army to be recalled. A decision was reached to fight on with locally available resources. A force of 4,000 Iberian warriors arrived in Carthage, and Hasdrubal raised further local troops with whom to reinforce the survivors of Utica; Syphax remained loyal and joined Hasdrubal with what was left of his army. The combined force is estimated at 30,000 and they established a strong camp in an area by the Bagradas River known as the Great Plains within 30–50 days of the defeat at Utica.
Scipio immediately marched most of his army to the scene. The size of his army is not known, but it was outnumbered by the Carthaginians. After several days of skirmishing both armies committed to a pitched battle. Upon being charged by the Romans and Masinissa's Numidians, those Carthaginians who had been involved in the debacle at Utica turned and fled; morale had not recovered. Only the Iberians stood and fought. They were enveloped by the well-drilled Roman legions and wiped out. Hasdrubal fled to Carthage, where he was demoted and exiled.
Syphax withdrew as far as his capital, Cirta, where he recruited more troops to supplement those survivors who had stayed with him. Masinissa's Numidians pursued their fleeing countrymen accompanied by part of the Roman force, under Laelius. The armies met in the battle of Cirta, where Syphax's army initially gained the upper hand. Laelius fed groups of Roman infantry into the battle line and Syphax's troops broke and fled. Syphax was captured and paraded beneath the city walls in chains, which caused Cirta to surrender to Masinissa, who then took over much of Syphax's kingdom and joined it to his own.
## Hannibal's return
Scipio and Carthage entered into peace negotiations. Carthage built up its naval strength and prepared the city of Carthage for a siege. The Carthaginian Senate recalled both Hannibal and Mago from Italy. After Scipio overran all of Carthaginian Iberia in 206 BC, Mago had left with those forces still loyal and sailed to Liguria in northern Italy where he recruited Gallic and Ligurian reinforcements. In 203 BC Mago marched into Cisalpine Gaul in an attempt to draw Roman attention away from North Africa, but was defeated at the battle of Insubria. His army retreated and sailed for Carthage from Genua. Mago died of wounds on the voyage and some of his ships were intercepted by the Romans, but 12,000 of his troops reached Carthage.
By 207 BC, after 12 years of campaigning in Italy, Hannibal's forces had been compelled to withdraw to Bruttium, the "toe" of Italy, where they remained undefeated but were ineffective. When recalled the limited number of ships available meant that few horses could be taken and that many newer recruits were left in Italy. Hannibal's army sailed from Croton and landed at Leptis Minor, some 140 kilometres (87 mi) south of Carthage, with 15,000–20,000 experienced veterans. Hannibal was appointed to command the new army and consolidated his forces at Hadrumetum.
### Prelude to battle
The Roman Senate ratified a draft treaty, but because of mistrust and a surge in confidence when Hannibal arrived from Italy, Carthage repudiated it. The Romans retaliated by methodically capturing Carthaginian-controlled towns in Carthage's hinterland and selling their inhabitants into slavery, regardless of whether they had surrendered before being attacked or not. Scipio probably anticipated that these attacks would create pressure on the Carthaginians to dispatch an army to face him as soon as possible, rather than wait until it had recruited to maximum strength and was fully trained. Scipio was himself under time pressure, as he was concerned that his political opponents in the Roman Senate might appoint a new consul to replace him. The Carthaginian Senate repeatedly ordered Hannibal to advance from his base at Hadrumetum and deal with Scipio's army, but Hannibal delayed until he had been reinforced by 2,000 Numidian cavalry led by a relative of Syphax – they were reputed to be elite troops.
Hannibal believed, correctly, that the Roman army had not yet been joined by its Numidian auxiliaries under Masinissa and so had the Carthaginian army march inland for five days and camp not far from the town of Zama, just 3 kilometres (1.9 mi) from the Roman army. This proximity all but guaranteed that a battle would result. While the Carthaginians were en route Masinissa arrived at the Roman camp with 10,000 Numidians. The site of the battle is generally, but not universally, believed to be a flat area to the south of Sicca (modern El Kef), the Draa el Metnan.
## Battle
### Numbers involved
Little is known of the number of men Scipio commanded at Zama. An estimated 25,000–30,000 men had landed in Africa the year before and there is no record of any reinforcements arriving from Italy. However, the strength of the force left to guard their camp and continue the siege of Utica is not known, nor is the level of attrition suffered in the three major battles and several skirmishes the legions had so far been involved in. The ancient sources agree that the Romans were supported by 6,000 Numidian infantry and 4,000 cavalry under Masinissa. The ancient historian Appian, writing 350 years after the event, states that the Numidians brought the total to 34,500 troops, but modern historians do not accept this. They usually give a total of 29,000 or 30,000, although Nigel Bagnall gives 40,000. Of these, slightly more than 6,000 were cavalry.
Appian states that the Carthaginian army at the battle of Zama consisted of 50,000 men; this is discounted by many modern historians, although some accept it with provisos. Most give 40,000, based on Polybius. Of these, all but 4,000 were infantry. Hannibal's army had abandoned its horses in Italy because of a lack of shipping space and Masinissa's defeat of Syphax had dried up the supply of Numidian cavalry; thus, even with the recent addition of 2,000 Numidians the Carthaginians fielded only 4,000 cavalry. Hannibal also deployed 80 war elephants, the first time these are recorded as being used since Scipio invaded; his delay in seeking battle may also have given his army time to train up a force of elephants. Such forces had been fielded earlier in the war in both Italy and Iberia, and Hannibal had famously taken elephants over the Alps in 218 BC. It is unclear why Carthage was not able to field a force of fully trained war elephants at Zama, or at any time since Scipio invaded.
### Initial dispositions
The Roman army formed up with the heavy infantry of its two Roman legions in the centre and with allied legions on each side of them. As usual, the hastati formed the front rank with the principes and then the triarii behind them. Instead of organising each legion's maniples – the basic Roman infantry manoeuvre unit of 120 men each – in the usual "checkerboard" or quincunx formation, Scipio arranged a principes maniple directly behind each maniple of hastati. This left broad avenues through the Roman lines, which were occupied by the Roman light infantry, the velites. Masinissa's 4,000 Numidian cavalry were on the right of the infantry. Laelius led 1,500 Roman and allied cavalry positioned on the left. There were a further 600 Numidian cavalry under Dacamas, but it is not known whether they were attached to Masinissa's or Laelius's force. It is not stated in the ancient sources what role or roles the 6,000 Numidian infantry took up. Modern suggestions include operating in close support of their cavalry, guarding the Roman camp, supplementing the velites as skirmishers or forming up as close-order infantry to one side of the legions.
The Carthaginian deployment reflected the fact that Hannibal's command was made up of the survivors of three different armies. Hannibal had not had time to integrate the forces he had been allocated into a unified command and so felt it wisest to deploy them separately. The Carthaginian infantry, like the Romans', went in the centre. Its first line was made up largely of veterans of Mago's failed expedition to northern Italy. The close-order troops were Iberians, Gauls and Ligurians. In front of these heavy infantry were light-infantry skirmishers consisting of Balearic slingers, Moorish archers and Moorish and Ligurian javelin-men. The total strength of this component was 12,000 men. In front of these infantry were the 80 war elephants, evenly spaced along the line, approximately 30 metres (98 ft) apart. The modern historian José Lago states that the Carthaginian light infantry were sent out in front of the whole Carthaginian army, as was usual, including in front of the elephants, for the several hours it took the army to form up.
Carthaginians and other Africans made up the second line. They were either survivors of the earlier campaigns whose morale was poor or freshly raised recruits who had received little training. They probably fought as close-order infantry; Polybius describes them as adopting phalanx formations, but there is modern debate as to just what this describes. The strength of the second line is not known, but it is sometimes assumed by modern historians to have consisted of a further 12,000 men. About 200 metres (660 ft) behind the Carthaginian second line were the infantry Hannibal had brought back from Italy. Most of them were Bruttians, but they included some Africans and Iberians who had left Iberia with Hannibal more than 17 years before, and Gauls recruited in northern Italy in 218 and 217 BC. All were battle-hardened veterans. This third line is variously estimated at 12,000, 15,000–20,000 or 20,000 men by modern historians. The Carthaginians are believed to have fielded approximately 4,000 cavalry. Hannibal placed the Numidians among them on his left flank, facing Masinissa's Numidians; and the other African cavalry on the right. How many of the total of 4,000 cavalry were in each of these contingents is not known, although Lazenby suggests that the Numidians on the left would have been the stronger.
### Initial charges
The armies advanced towards each other, the first clashes occurring on the Carthaginian left flank, the Roman right, between the 2,000 or more Carthaginian-supporting Numidian cavalry and the 4,000 – or possibly 4,600 – siding with the Romans. Each force sent detachments to hurl javelins at the other and then withdraw. Lazenby describes these skirmishes as "desultory". Hannibal then ordered a charge against the Roman infantry by his 80 elephants, with the whole of his first two lines moving forward in support. The modern historian Jacob Edwards, in a study of Hannibal's use of elephants during the war, describes their deployment at Zama as "an ill-advised practice which departed from the successful tactics used previously". He suggests that they would have been better employed against the superior Roman cavalry on the flanks, rather than directly charging the Roman infantry. It is possible that Hannibal believed the elephants would have brought an element of surprise, as their previous use in the war had been limited. Most modern accounts have the elephants in front of the Carthaginian infantry, but Lago has the Carthaginian light infantry in front of the whole Carthaginian army, skirmishing with their opposite numbers, as was usual before armies were formed up and ready to commence the battle proper. Lago states that they stayed in front of and between the elephants, protecting them from the javelins of the Roman velites, until the elephants charged.
As the elephants advanced, the velites moved forward into the gap between the armies, hurled javelins at the elephants and fell back. The Roman heavy infantry then sounded their bugles, and possibly rhythmically banged their weapons against their shields – a practice known as swashbuckling. This startled some of the elephants and several of those on the left turned and fled, past the end of the line of infantry behind them. Edwards expresses amazement that war elephants should be so easily panicked and again suggests that at least some of the animals were "young and inexperienced at battle", making them "a liability rather than an asset". These out-of-control elephants trampled their way through the Carthaginian-backing Numidian cavalry, thoroughly disordering them. Masinissa took advantage of the situation by ordering a charge. This routed the disordered cavalry and they fled, pursued by Masinissa's force.
Most of the rest of the elephants charged into the Roman infantry, amid showers of javelins. Terrified by the swashbuckling infantry and their bugles, the majority stampeded into the broad gaps the Romans had left between their maniples. Many of the velites were killed as they ran back in front of the elephants and into the gaps between the ranks of the heavy infantry. From there they hurled javelins into the elephants' flanks. Those elephants which emerged into the rear of the Roman army were all wounded and now cut off. They were subsequently hunted down and killed. Some elephants did charge into the hastati as planned, where they caused heavy casualties before being driven off. This leads Mir Bahmanyar to suggest that the elephants accomplished what Hannibal expected of them. Some elephants baulked at charging the hastati on the Roman left and attacked the cavalry alongside them, who also showered the elephants with javelins. Most of these elephants were badly wounded and had lost their crews by this point; those which could fled, avoiding the line of Carthaginian infantry, but not the Carthaginian cavalry on the right flank. This cavalry force became disorganised by the out-of-control elephants and, like Masinissa, Laelius ordered his cavalry to take advantage of this and charge. The Carthaginian cavalry were swept from the field and the Roman cavalry closely pursued them.
### Infantry engagement
With the battlefield cleared of both elephants and cavalry all three ranks of the Roman heavy infantry and the first two of Carthaginian advanced towards each other. The Carthaginian third rank, Hannibal's Italian veterans, remained in place. The two front ranks charged enthusiastically and violently into each other and commenced a hard-fought, close-quarter, hand-to-hand combat. The Romans' superior weaponry and organisation eventually told and despite the hastati taking further heavy losses, the Carthaginian front rank broke and fled. They attempted to make their way through the Carthaginian second rank, but these men refused to let them pass – according to Polybius, to the point of fighting them off. The survivors of the front rank were forced to make their escape around the flanks of the second rank. Many of these then rallied and rejoined the fight by extending the flanks of the Carthaginian second rank.
The hastati, despite having taken casualties from the elephants and the Carthaginian first rank, now attacked the Carthaginian second rank. Polybius reports that the Carthaginian and other African spearmen who made up this force fought "fanatically and in an extraordinary manner". The Romans were pushed back in disorder. Bahmanyar opines that the Roman front rank came close to being broken at this stage. The Romans were forced to commit their second line, the principes, to the fight. Liddell Hart writes that even the principes struggled to hold the line, but eventually this reinforcement was sufficient to break the Carthaginian second line; they fled, pursued impetuously by the hastati.
Both Bahmanyar and Goldsworthy suggest this was an opportunity for the Carthaginian third line to counter-attack the disorganised hastati, but that Hannibal decided against it: his third line was some distance back; the fleeing Carthaginians from the first two lines were inadvertently blocking a clean charge; and the ground over which the third line would have attacked was strewn with corpses. According to Polybius the gap between the fighting lines "was now covered with blood, slaughter, and dead bodies ... slippery corpses which were still soaked in blood and had fallen in heaps". Bagnall suggests the withdrawal of the Carthaginian second line was more deliberate and orderly than the ancient sources portray. Taylor believes that Hannibal had hoped that the Romans would rush forward in pursuit at this stage and that he had prepared an infantry envelopment in anticipation of this; in the event Scipio saw the potential trap and his troops were disciplined enough to break off their pursuit when recalled.
### Decision
The Romans recalled the pursuing hastati by sounding bugles and reformed their line. The Carthaginian third line – Hannibal's veterans, supplemented by some of the survivors of the first and second lines – was longer than the Roman formation and outflanked it on both sides. The hastati formed up in the centre and the principes and triarii moved to each side to make a single, longer line. This enabled the Roman close-order infantry to match the length of the Carthaginian third line, but correspondingly thinned their line, preventing them from using their habitual tactic of feeding new, less-fatigued men into the fighting line as a combat wore on. There was a prolonged pause while this was taking place, during which the Carthaginians rallied some of their first and second line troops and used them to extend their own fighting line. The surviving heavy infantry of each side were roughly equal in numbers. Most of the original Carthaginian third line were equipped in the same manner as the Romans they faced. They were veterans of many years' experience and they were fresh, having not yet fought. Many of the Romans were veterans, some having fought at Cannae and almost all having taken part in the two, or for some three, major victories of the previous year. Many of the Romans were tired from the two immediately preceding fierce combats, but their victories in both would have boosted their morale.
Having satisfactorily reorganised, the two lines charged each other, according to Polybius "with the greatest fire and fury". The fight continued for some time, neither side gaining the advantage. Lazenby describes this fighting as "a grim business". The cavalry commanded by Masinissa and Laelius then returned to the battlefield, apparently at more or less the same time. Philip Sabin states that they arrived "in the nick of time". Being fiercely engaged to their front, the Carthaginian infantry were helpless to prevent the Roman cavalry from charging into their rear. Their line collapsed and there was a great massacre. Hannibal was one of the few Carthaginians to escape.
### Casualties
Polybius states that 20,000 Carthaginians were killed and as many again taken prisoner, which accounts for the entire Carthaginian army. He gives Roman losses as 1,500 killed. This is five per cent or more of their total force; Goldsworthy considers this fatality rate "a substantial loss for a victorious army, testimony to the hard fighting" and that the battle as a whole was "a slogging match". The number of wounded is not known, although the ancient sources refer to many Roman wounded being carried to the rear during the pause before the final engagement. At least 11 Carthaginian elephants survived the battle to be captured by the Romans.
Hannibal and his companions reached the main Carthaginian base at Hadrumetum, where they mustered 6,000 infantry and 500 cavalry. Hannibal considered this too few with which to continue the war and advised the Carthaginian Senate to make peace on whatever terms they could.
## Aftermath
The Romans looted the Carthaginian camp and then Scipio marched his legions back to Tunis. The Carthaginians again sued for peace. Given the difficulty of ending the war by storming or starving the city of Carthage, and his continuing fear that he might be superseded in command, Scipio entered into negotiations. During these Scipio received word that a Numidian army under Syphax's son Vermina was marching to Carthage's assistance. This was intercepted and surrounded by a Roman force largely made up of cavalry and defeated. The number of Numidians involved is not known, but Livy records that more than 16,000 were killed or captured. This was the last battle of the Second Punic War.
The peace treaty the Romans subsequently imposed on the Carthaginians stripped them of their overseas territories and some of their African ones. An indemnity of 10,000 silver talents was to be paid over 50 years, hostages were taken, Carthage was forbidden to possess war elephants and its fleet was restricted to 10 warships. It was prohibited from waging war outside Africa and in Africa only with Rome's express permission. Many senior Carthaginians wanted to reject it, but Hannibal spoke strongly in its favour and it was accepted in spring 201 BC. Henceforth it was clear Carthage was politically subordinate to Rome. Scipio was awarded a triumph and received the agnomen "Africanus".
### Third Punic War
Masinissa exploited the prohibition on Carthage waging war to repeatedly raid and seize Carthaginian territory with impunity. Carthage appealed to Rome, which always backed its Numidian ally. In 149 BC, fifty years after the end of the Second Punic War, Carthage sent an army, under Hasdrubal the Boetharch, against Masinissa, the treaty notwithstanding. The campaign ended in disaster at the battle of Oroscopa and anti-Carthaginian factions in Rome used the illicit military action as a pretext to prepare a punitive expedition. The Third Punic War began later in 149 BC when a large Roman army landed in North Africa and besieged Carthage. In the spring of 146 BC the Romans launched their final assault, systematically destroying the city and killing its inhabitants; 50,000 survivors were sold into slavery. The formerly Carthaginian territories were annexed by Rome and reconstituted to become the Roman province of Africa, with Utica as its capital.
## Notes, citations and sources
# Noye's Fludde
Noye's Fludde is a one-act opera by the British composer Benjamin Britten, intended primarily for amateur performers, particularly children. First performed on 18 June 1958 at that year's Aldeburgh Festival, it is based on the 15th-century Chester "mystery" or "miracle" play which recounts the Old Testament story of Noah's Ark. Britten specified that the opera should be staged in churches or large halls, not in a theatre.
By the mid-1950s Britten had established himself as a major composer, both of operas and of works for mixed professional and amateur forces – his mini-opera The Little Sweep (1949) was written for young audiences, and used child performers. He had previously adapted text from the Chester play cycle in his 1952 Canticle II, which retells the story of Abraham and Isaac. Noye's Fludde was composed as a project for television; to the Chester text Britten added three congregational hymns, the Greek prayer Kyrie eleison as a children's chant, and an Alleluia chorus. A large children's chorus represents the pairs of animals who march into and out of the ark, and proceedings are directed by the spoken Voice of God. Of the solo sung roles, only the parts of Noye (Noah) and his wife were written to be sung by professionals; the remaining roles are for child and adolescent performers. A small professional ensemble underpins the mainly amateur orchestra which contains numerous unconventional instruments to provide particular musical effects; bugle fanfares for the animals, handbell chimes for the rainbow, and various improvisations to replicate musically the sounds of a storm.
At its premiere Noye's Fludde was acclaimed by critics and public alike, both for the inspiration of the music and the brilliance of the design and production. The opera received its American premiere in New York in March 1959, and its first German performance at Ettal in May of that year. Since then it has been staged worldwide; the performance in Beijing in October 2012 organised by the KT Wong Foundation was the first in China of any Britten opera. The occasion of Britten's centenary in 2013 led to numerous productions at music festivals, both in the UK and abroad.
## Background
### Chester mystery plays
English mystery or "miracle" plays were dramatised Bible stories, by ancient tradition performed on Church feast days in town squares and market places by members of the town's craft guilds. They covered the full range of the narrative and metaphor in the Christian Bible, from the fall of Lucifer to the Last Judgement. Of the many play cycles that originated in the late Middle Ages, the Chester cycle is one of four that have survived into the 21st century. The texts, by an unidentified writer, were revised during the late 15th century into a format similar to that of contemporary French passion plays, and were published in 1890, in Alfred W. Pollard's English Miracle Plays, Moralities, and Interludes.
The story of Noah and the flood, the third play in the Chester cycle, was originally performed by the city's Guild of the Drawers of Dee, otherwise known as the water-carriers. A feature of this play, observed by the historian Rosemary Woolf, is the depiction of Noah's wife, and by implication women generally, as disobedient, obdurate and finally abusive, in contrast to the "grave and obedient" Noah and his patient sons.
By the latter part of the 16th century, in the wake of the Reformation, the Church had grown less tolerant of mystery plays. A performance in Chester in 1575 is the last recorded from the city until the 20th century, when the Chester cycle was revived under the supervision of Christopher Ede, as part of the city's Festival of Britain celebrations in June 1951. This production was received enthusiastically, and was repeated the following year; thereafter it became a regular feature and tourist attraction.
### Inception
By the late 1940s Benjamin Britten had established himself as a leading composer of English opera, with several major works to his credit. In 1947 he suggested to his librettist Eric Crozier that they should create a children's opera, based on a Bible story. Crozier gave Britten a copy of Pollard's book, as a possible source of material. Nothing came of this project immediately; instead, Britten and Crozier wrote the cantata Saint Nicolas (1948), the first of several works in which Britten combined skilled performers with amateurs. The cantata involves at least two children's choirs, and incorporates two congregational hymns sung by the audience. Britten also used this fusion of professional with amateur forces in The Little Sweep (1949), which forms the second part of his entertainment for children, Let's Make an Opera, that he devised with Crozier. Again, child singers (also doubling as actors) were used, and the audience sings choruses at appropriate points. In 1952, although Britten's collaboration with Crozier had ended, he used the Chester plays book as the source text for his Canticle II, based on the story of Abraham and Isaac.
In April 1957 Boris Ford, Head of Schools Broadcasting at Associated Rediffusion (A-R), wrote to Britten, proposing a series of half-hour programmes. These would show Britten composing and rehearsing a work through to its performance, and would provide children with "an intimate piece of musical education, by ... watching a piece of music take shape and in some degree growing with it". Britten was initially cautious. He found the idea interesting, but he warned Ford that he was currently occupied with travel and had limited time for writing. He was also anxious not to cover the same ground as he had with Let's Make an Opera. However, he agreed to meet Ford to discuss the project further. On 11 July they met in London, together with Britten's musical assistant Imogen Holst. Britten told Ford that he had "for some months or a year vaguely been thinking of doing something with the [Chester] miracle plays", and agreed to write an opera for A-R's 1958 summer term of school programmes. The subject would be Noah and the flood, based on the Chester text. Later, Ford and his script editor, Martin Worth, travelled to Aldeburgh, and with Britten looked at possible churches for the performance. St Bartholomew's Church, Orford, was chosen as, unlike most other churches in East Suffolk, its pews were not fixed, thus offering a more flexible performing space.
## Roles
## Synopsis
After the opening congregational hymn "Lord Jesus, think on me", the spoken Voice of God addresses Noye, announcing the forthcoming destruction of the sinful world. God tells Noye to build an ark ("a shippe") that will provide salvation for him and his family. Noye agrees, and calls on the people and his family to help. His sons and their wives enter with tools and materials and begin work, while Mrs Noye and her Gossips (close friends) mock the project.
When the ark is completed, Noye tries to persuade his wife to enter: "Wyffe, in this vessel we shall be kepte", but she refuses, and they quarrel. The Voice of God foretells forty days and forty nights of rain, and instructs Noye to fill the ark with animals of every kind. The animals enter the ark in pairs, while Noye's sons and their wives provide a commentary. Noye orders his family to board the ark; again Mrs Noye and the Gossips refuse, preferring to carouse. The sons finally drag Mrs Noye on board, while the Gossips are swept away by the encroaching flood; she rewards her husband with a slap. Rain begins to fall, building to a great storm at the height of which the first verse of the naval hymn "Eternal Father, Strong to Save" is heard from the ark. The congregation joins in the second and third verses of the hymn, during which the storm gradually subsides. When it is calm, Noye sends out a Raven, saying "If this foule come not againe/it is a signe soth to sayne/that dry it is on hill or playne." When the Raven fails to return, Noye knows that the bird has discovered dry land. He sends out a Dove, who eventually brings back an olive branch. Noye accepts this as a sign of deliverance, and thanks God.
The Voice of God instructs everyone to leave the ark. As they do, the animals sing "Alleluias" and the people sing a chorus of praise: "Lord we thanke thee through thy mighte". God promises that he will never again destroy the earth with water, and produces a rainbow as his token. The cast begins Addison's hymn "The spacious firmament on high", with the congregation joining in the last two verses. All the cast depart except Noye, who receives God's blessing and promise of no more vengeance: "And nowe fare well, my darling deare" before his departure from the stage.
## Creation
### Writing
Britten began detailed planning for the opera in August 1957, while sailing to Canada for a tour with the English Opera Group. He told Colin Graham, at that time the EOG's stage manager, that he wanted him to direct the new work. After a further meeting at Associated Rediffusion's London headquarters on 18 October, Britten began a composition draft in Aldeburgh on 27 October. To Pollard's edition of the Noah play's text, he added three congregational Anglican hymns: "Lord Jesus, think on me"; "Eternal Father, strong to save"; and "The spacious firmament on high". Britten introduced the repetitive Greek chant "Kyrie eleison" ("Lord, have mercy") at the entry of the animals, and "Alleluias" at their triumphant exit. He had completed about two-thirds of the opera when Ford was dismissed from A-R, allegedly for administrative shortcomings and inexperience. A-R decided to withdraw from the project, which was then taken up by Associated Television (ATV), whose chairman Lew Grade personally took responsibility for signing the contract and urged that Britten should complete the opera.
In November 1957 Britten moved to The Red House, Aldeburgh, but continued to work on the opera throughout the upheaval. According to a letter he wrote to Edith Sitwell on 14 December, "the final bars of the opera [were] punctuated by hammer-blows" from workmen busy at the Red House. Before he finished the composition draft, Britten wrote to the baritone Owen Brannigan, who had sung in several previous Britten operas, asking if he would take the title role. Britten completed the full score of the opera in March 1958, which he dedicated "To my nephew and nieces, Sebastian, Sally and Roguey Welford, and my young friend Ronald Duncan [one of Britten's godsons]".
### Performance requirements
With the wide variety of child performers required in the opera, and in light of how it was cast and performed at its premiere, Britten detailed some of its specific requirements for performance in the vocal and study scores published by Boosey & Hawkes. The opera is intended for a large hall or church, not a theatre. The action should take place on raised rostra, though not on a formal stage set apart from the audience, and the orchestra should be placed in full sight, with the conductor in a position to conduct both the orchestra and, when performing the hymns, the congregation. Noye and Mrs Noye are sung by "accomplished singer-actors", and the Voice of God, although not necessarily a professional actor, should have "a rich speaking voice, with a simple and sincere delivery, without being at all 'stagey'". The young amateurs playing the parts of Noye's children should be between 11 and 15 years old, with "well-trained voices and lively personalities"; Jaffet, the eldest, could have a broken voice. Mrs Noye's Gossips should be older girls with strong voices and considerable acting ability. The children playing the animals should vary in size, and range in age from seven to eighteen. The older age groups, with perhaps some broken voices, should represent the larger animals (lions, leopards, horses, camels etc.), while the younger play rats, mice and birds. There is a dance or ballet involving two child performers playing the roles of the raven and the dove.
For the first time in any of his works involving amateurs, Britten envisaged a large complement of child performers among his orchestral forces, led by what Graham described as "the professional stiffening" of a piano duet, string quintet (two violins, viola, cello and bass), recorder and a timpanist. The young musicians play a variety of instruments, including a full string ensemble with each section led by a member of the professional string quintet. The violins are further divided into parts of different levels of difficulty, from the simplest (mostly playing open strings) to those able to play in third position. The recorders should be led by an accomplished soloist able to flutter-tongue; bugles, played in the original production by boys from a local school band, are played as the children representing animals march into the ark, and at the climax of the opera. The child percussionists, led by a professional timpanist, play various exotic and invented percussion instruments: the score itself specifies sandpaper ("two pieces of sandpaper attached to blocks of wood and rubbed together"), and "Slung Mugs", the latter used to represent the first drops of rain. Britten originally had the idea of striking teacups with a spoon, but having failed to make this work, he sought Imogen Holst's advice. She recalled that "by great good fortune I had once had to teach Women's Institute percussion groups during a wartime 'social half hour', so I was able to take him into my kitchen and show him how a row of china mugs hanging on a length of string could be hit with a large wooden spoon."
Britten also added – relatively late in the process of scoring the work – an ensemble of handbell ringers. According to Imogen Holst, a member of the Aldeburgh Youth Club brought Britten's attention to a local group of bellringers; hearing them play, Britten was so enchanted by the sound that he gave the ensemble a major part to play as the rainbow unfolds towards the end of the opera. Several commentators, including Michael Kennedy, Christopher Palmer and Humphrey Carpenter, have noted the affinity between the sound of Britten's use of the handbells and the gamelan ensembles he had heard first-hand in Bali in 1956. The scarcity of handbells tuned at several of the pitches required by Britten in the opera was to become an issue when the score was being prepared for publication.
## Performance history and reception
### Premiere
The first performance of Noye's Fludde was staged during the 1958 Aldeburgh Festival, at St Bartholomew's Church, Orford, on 18 June. The conductor was Charles Mackerras, who had participated in several productions at past Aldeburgh festivals. The production was directed by Colin Graham, who also designed its set, with costume designs by Ceri Richards.
Apart from Brannigan as Noye, two other professional singers were engaged: Gladys Parr, in her last role before retirement, sang the part of Mrs Noye, and the spoken Voice of God was provided by the Welsh bass Trevor Anthony. The other major roles were taken by child soloists, who were selected from extensive auditions. Among these was the future actor and singer Michael Crawford, then 16 years old and described by Graham as "a very recently broken-voiced young tenor", who played the role of Jaffet. Mrs Noye's Gossips were originally to be performed by girls from a Suffolk school, but when the headmistress heard rumours about the "dissolute" parts they were to play, she withdrew her pupils.
The professional element in the orchestra was provided by the English Opera Group players, led by Emanuel Hurwitz, with Ralph Downes at the organ. The child players, billed as "An East Suffolk Children's Orchestra", included handbell ringers from the County Modern School, Leiston; a percussion group, whose instruments included the slung mugs, from Woolverstone Hall School; recorder players from Framlingham College; and bugle players from the Royal Hospital School, Holbrook. Graham, recalling the premiere some years later, wrote: "The large orchestra (originally 150 players) ... were massed around the font of Orford Church while the opera was played out on a stage erected at the end of the nave." Philip Hope-Wallace, writing for The Manchester Guardian, observed that "Charles Mackerras conducted the widespread forces, actually moving round a pillar to be able to control all sections in turn." Martin Cooper of The Daily Telegraph noted: "The white walls of Orford Church furnished an ideal background to the gay colours of Ceri Richards's costumes and the fantastic head-dresses of the animals. In fact, the future of the work will lie in village churches such as this and with amateur musicians, for whom Britten has written something both wholly new and outstandingly original."
The general critical reception was warmly enthusiastic. Felix Aprahamian in The Sunday Times called the performance "a curiously moving spiritual and musical experience". Eric Roseberry, writing in Tempo magazine, found the music "simple and memorably tuneful throughout ... the writing for strings, recorders and percussion is a miracle of inspiration". Andrew Porter in Opera magazine also found the music touched "by high inspiration"; the evening was "an unforgettable experience ... extraordinarily beautiful, vivid and charming, and often deeply moving". The design and production, Porter reported, were "brilliant", while Mackerras commanded his disparate forces masterfully. Several critics remarked favourably on the sound of the handbells. The Times's critic noted the effectiveness of Britten's setting of the mystery play: "It is Britten's triumph that in this musically slender piece he has brought to new life the mentality of another century by wholly modern means. These means included a miscellaneous orchestra such as he alone could conceive and handle".
After the premiere, there were two further performances by the same forces in Orford Church, on 19 and 21 June. Noye's Fludde became the first of Britten's operas to be shown on television, when it was broadcast by ATV on 22 June 1958.
### Later performances
Noye's Fludde had been largely created according to the resources available from the local Suffolk community. However, once Britten witnessed the public and critical reception following the premiere, he insisted on taking it to London. Looking for a suitable London church, Britten settled on Southwark Cathedral, somewhat reluctantly as he felt that it did not compare favourably with Orford. Four performances featuring the same principals as the premiere were given, on 14 and 15 November 1958, with Britten conducting the first. All four performances sold out on the first day of booking, even, as Britten told a friend, "before any advertisement & with 2000 circulars yet to be sent!!" On 24 and 25 April 1959 the Finchley Children's Music Group, which was formed in 1958 specially to perform Noye's Fludde, gave what was billed as "the first amateur London performance" of the work, at All Saints' Church, Finchley; the cast included the operatic bass Norman Lumsden as Noah.
In the United States, after a radio broadcast in New York City on 31 July 1958, the School of Sacred Music of Union Theological Seminary staged the US premiere on 16 March 1959. The following year saw the opera's Canadian premiere, conducted by John Avison, staged during the 1960 Vancouver International Festival in Christ Church Cathedral.
During preparations for the first German performance of Noye's Fludde in Ettal, planned for May 1959, the problem of the scarcity of handbells became acute. Britten suggested that in the absence of handbells a set of tubular bells in E flat in groups of twos and threes could be played by four or six children with two hammers each to enable them to strike the chords. Britten was not present in Ettal, but he learned from Ernst Roth, of Boosey & Hawkes, that the Ettal production had substituted glockenspiel and metallophone for the handbells; according to Roth the bells in Carl Orff's Schulwerk percussion ensembles were "too weak" for the purpose. Britten later wrote to a friend: "I am rather relieved that I wasn't there! – no church, no bugles, no handbells, no recorders – but they seem to have done it with a great deal of care all the same. Still I rather hanker after doing it in Darmstadt as we want it – even importing handbells for instance."
In the UK, Christopher Ede, producer of the landmark performances of the Chester mystery plays during the Festival of Britain, directed Britten's opera in Winchester Cathedral, 12–14 July 1960. Writing to Ede on 19 December 1959, Britten urged him to keep the staging of Noye's Fludde simple rather than elaborate. In 1971 the Aldeburgh Festival once again staged Noye's Fludde at Orford; a full television broadcast of the production, transferred to Snape Maltings, was made by the BBC, conducted by Steuart Bedford under the composer's supervision, with Brannigan resuming the role of Noah, Sheila Rex as his wife, and Lumsden as the Voice of God.
In 1972 Jonathan Miller directed his first opera with a production of Noye's Fludde, staged during 21–23 December at the Roundhouse Theatre, London. The adult roles were taken by Michael Williams (God), Bryan Drake (Noah) and Isabelle Lucas (Mrs Noah), and the conductor was John Lubbock.
Among less conventional productions, in September 2005 Noye's Fludde was performed at Nuremberg zoo, in a production by the Internationales Kammermusikfestival Nürnberg involving around 180 children from Nuremberg and from England, directed by Nina Kühner, conducted by Peter Selwyn. A subsequent zoo production was presented in Belfast, Northern Ireland, by NI Opera and the KT Wong Foundation. The performance was directed by Oliver Mears and conducted by Nicholas Chalmers, with Paul Carey Jones as Noye and Doreen Curran as Mrs Noye. The same production was performed in China, in October 2012, at the Beijing Music Festival, this being the Chinese premiere of the work, and the first full performance of a Britten opera in China. It was performed again at the Shanghai Music In The Summer Air (MISA) Festival in July 2013.
A performance of the opera features as a minor plot point in the 2012 film Moonrise Kingdom.
Britten's centenary year 2013 prompted numerous performances across the UK, including at Tewkesbury Abbey during the Cheltenham Music Festival, and the Thaxted Festival where 120 local children appeared as the animals. An Aldeburgh Festival production as a finale to the centenary year was staged in November, on the eve of Britten's 100th birthday anniversary, in his home town of Lowestoft. Andrew Shore appeared as Noye, and Felicity Palmer as Mrs Noye. It was directed by Martin Duncan and broadcast in the UK on BBC Radio 3 on 24 November. Outside the UK, several professional companies mounted centenary year productions involving local children, including the Santa Fe Opera, and the New Orleans Opera which mounted its first production of any Britten opera.
## Music
Noye's Fludde has been described by the musicologist Arnold Whittall as a forerunner of Britten's church parables of the 1960s, and by the composer's biographer Paul Kildea as a hybrid work, "as much a cantata as an opera". Most of the orchestral writing, says the music analyst Eric Roseberry, lies "well within the range of intelligent young players of very restricted technique". Several episodes of the opera – such as "the grinding conflict of Britten's passacaglia theme against Dykes's familiar hymn-tune in the storm" – introduce listeners and the youthful performers to what Roseberry terms "a contemporary idiom of dissonance", in contrast to the "outworn style" of most music written for the young. With its innovatory arrangement of vocal and instrumental forces, Noye's Fludde is summarised by Whittall as "a brilliant demonstration of how to combine the relatively elementary instrumental and vocal skills of amateurs with professionals to produce a highly effective piece of music theatre."
The opera begins with a short, "strenuous" instrumental prelude, which forms the basis of the musical accompaniment to the opening congregational hymn; its first phrase is founded on a descending bass E-B-F, itself to become an important motif. Humphrey Carpenter notes that throughout the hymn the bass line is out of step with the singing, an effect which, he says, "suggests an adult world where purity is unattainable". Following the hymn, the Voice of God is accompanied, as it is in all his pre-flood warnings and declamations, by the E-B-F notes from the opera's opening bass line, sounded on the timpani. After Noye's response in recitative, the next musical episode is the entry of Noye's children and their wives, a passage which, Carpenter suggests, replaces the pessimism of the adult world with "the blissful optimism of childhood". The syncopated tune of the children's song is derived from the final line of Noye's recitative: "As God has bidden us doe".
Mrs Noye and her Gossips enter to an F sharp minor distortion of the children's tune, which reflects their mocking attitude. In Noye's song calling for the ark to be built, a flood leitmotiv derived from the first line of the opening hymn recurs as a solemn refrain. The music which accompanies the construction work heavily involves the children's orchestra, and includes recorder trills, pizzicato open strings, and the tapping of oriental temple-blocks. After the brief "quarrel" duet between Noye and his wife in 6/8 time, timpani-led percussion heralds the Voice of God's order to fill the ark. Bugle fanfares announce the arrival of the animals, who march into the ark to a "jauntily innocent" tune in which Roseberry detects the spirit of Mahler; the fanfares punctuate the entire march. The birds are the last group to enter the ark, to the accompaniment of a three-part canon sung by Noye's children and their wives. In the final scene before the storm, where Noye and his family try to persuade Mrs Noye to join them in the ark in G major, the music expresses Mrs Noye's obstinacy by having her reply accompanied by a D sharp pedal which prepares for the Gossips' drinking scherzo in E minor. The slap which Mrs Noye administers when finally persuaded is accompanied by an E major fortissimo.
The storm scene which forms the centre of the opera is an extended passacaglia, the theme of which uses the entire chromatic scale. In a long instrumental introduction, full rein is given to the various elements of the children's orchestra. Slung mugs struck with a wooden spoon give the sound of the first raindrops. Trills in the recorders represent the wind, strings impersonate waves, while piano chords outline the flood leitmotiv. The sound builds to a peak with thunder and lightning from the percussion. When "Eternal Father" is sung at the climax of the storm, the passacaglia theme provides the bass line for the hymn. After the hymn, the minor-key fury of the passacaglia gradually subsides, resolving into what Roseberry describes as "a dewy, pastoral F major" akin to that of the finale of Beethoven's Pastoral Symphony. Noye's reappearance is followed by the brief waltzes for the Raven, accompanied by solo cello, and the Dove, the latter a flutter-tongued recorder solo the melody of which is reversed when the Dove returns.
Following God's instruction, the people and animals leave the ark singing a thankful chorus of Alleluias with more bugle fanfares in B flat. The appearance of the rainbow is accompanied by handbell chimes, a sound which dominates the final stages of the work. In the final canonical hymn, the main tune moves from F major to G major and is sung over reiterated bugle calls, joined by the handbells. In the third verse, the organ provides a brief discordant intervention, "the one jarring note in Noye's Fludde" according to the musicologist Peter Evans. Graham Elliott believes that this may be a musical joke in which Britten pokes gentle fun at the habits of some church organists. The mingled chimes of slung mugs and bells continue during God's final valedictory blessing. As Noye leaves, the full orchestra provides a final fortissimo salute, the opera then concluding peacefully with B flat chimes of handbells alternating with extended G major string chords – "a hauntingly beautiful close", says Roseberry.
## Publication
Several of the opera's novel features, including the use of a large amateur orchestra, and specifically its use of handbells, posed problems for Britten's publishers, Boosey & Hawkes. Ernst Roth made enquiries about the availability of handbells to the firm Mears & Stainbank (the bell foundry based in Whitechapel, London), and then wrote to Britten urging him to prepare an alternative, simplified version of Noye's Fludde for publication, since the rarity of handbells in the scale of E flat made the original score, in his view, impractical. Britten resisted such a proposal: "I think if you consider a performance of this work in a big church with about fifty or more children singing, you will agree that the orchestra would sound totally inadequate if it were only piano duet, a few strings and a drum or two." Britten suggested, rather, that Boosey & Hawkes should invest in a set of E flat handbells to hire for performances; or, that the handbells music could be simply cued in the piano duet part.
After the score had been published, and in the face of an imminent performance in Ettal, Britten suggested that he could attempt to rewrite the music for a handbell ensemble in D, since sets in that key were more common than in E flat. In the event, Britten never prepared this version, nor an alternative version for reduced instrumentation. He did agree, however, to make the published full score "less bulky" by presenting the amateur forces of recorders, ripieno strings and percussion in the form of short score, on the understanding that full scores for those groups would be available to hire for rehearsal and performance purposes. The full score was published in 1958, and the vocal score, prepared by Imogen Holst with the libretto translated into German by Prince Ludwig of Hesse and the Rhine, under the pseudonym Ludwig Landgraf, was published in 1959.
## Recordings
|
1,218,259 |
Bill Kibby
| 1,119,333,831 |
Recipient of the Victoria Cross
|
[
"1903 births",
"1942 deaths",
"Australian Army personnel of World War II",
"Australian Army soldiers",
"Australian World War II recipients of the Victoria Cross",
"Australian military personnel killed in World War II",
"British emigrants to Australia",
"Military personnel from County Durham",
"People from Winlaton"
] |
William Henry Kibby, VC (15 April 1903 – 31 October 1942) was a British-born Australian recipient of the Victoria Cross, the highest award for gallantry in the face of the enemy that could be awarded to a member of the Australian armed forces at the time. Kibby emigrated to South Australia with his parents in early 1914; prior to World War II he worked as an interior decorator and served in the part-time Militia. In 1940, he enlisted in the all-volunteer Second Australian Imperial Force and joined the 2/48th Infantry Battalion. His unit was sent to the Middle East, but soon after arriving, Kibby broke his leg and spent the next year recovering and undergoing further training while his battalion took part in the North African campaign. He rejoined his unit when it was serving on garrison duties in northern Syria after its involvement in the siege of Tobruk, but in June 1942 it was sent to Egypt and recommitted to the North Africa campaign. Kibby was with the battalion during the First Battle of El Alamein in July.
In October, the 2/48th Battalion was committed to the Second Battle of El Alamein, during which Kibby undertook a series of courageous actions across the period from 23 to 31 October. In the first episode, he went forward alone and silenced an enemy machine-gun post. In the second, he provided inspirational leadership to his platoon and mended its telephone line under heavy fire. On the final occasion, he pressed forward under withering fire and helped his company capture its objective. This final action ultimately cost him his life. He was then posthumously awarded the Victoria Cross. A memorial trust used donated money to purchase a house for his widow and two daughters. His medal set is displayed at the Australian War Memorial in the Hall of Valour.
## Early life
William Henry Kibby was born at Winlaton, County Durham, United Kingdom, on 15 April 1903. The second of three children, Kibby was born to John Robert Kibby, a draper's assistant, and Mary Isabella Kibby née Birnie. He had two sisters. In early 1914, the Kibby family emigrated to Adelaide, South Australia. Bill attended Mitcham Public School and then held various jobs before securing a position at the Perfection Fibrous Plaster Works in Edwardstown. There, he worked as an interior decorator, designing and fixing plaster decorations. He married Mabel Sarah Bidmead Morgan in 1926; they lived at Helmsdale (now Glenelg East) and had two daughters, Clariss and Jacqueline.
Kibby stood only 5 feet 6 inches (168 cm) tall, but was a strong man and enjoyed outdoor activities. He joined the scouting movement as an assistant scoutmaster of the 2nd Glenelg Sea Scouts, where he crewed their lifeboat. He enjoyed family walks and picnics and was a keen golfer, playing on various public courses. He was also a talented artist, painting and drawing in addition to his plaster design work, and even briefly attended art classes at the School of Mines and Industries. He was described as a quiet and sincere man who loved gardening. In 1936, he joined the part-time Militia and was posted to the 48th Field Battery, Royal Australian Artillery. Along with his Militia service, he enjoyed participating in military tattoos.
## World War II
On 29 June 1940, Kibby enlisted in the all-volunteer Second Australian Imperial Force, which had been raised for overseas service in World War II. He was posted to the 2/48th Infantry Battalion, part of the 26th Brigade. This brigade was initially assigned to the 7th Division. On 14 September, when the battalion was training in South Australia, Kibby was promoted to acting corporal, and this was followed by promotion to acting sergeant a month later. The 2/48th embarked on the troopship HMT Stratheden on 17 November and sailed for the Middle East, where it disembarked in Palestine on 17 December. On New Year's Eve, Kibby fell into a slit trench and broke his leg. He then spent months convalescing. During his recovery, he produced at least forty watercolours and pencil drawings, which, according to his biographer, Bill Gammage, displayed "a fondness for Palestine's countryside and a feeling for its people". While in Palestine, Kibby struck up a friendship with the painter Esmond George, and occasionally accompanied him on sketching trips. After recovering, Kibby joined the brigade training battalion in August 1941 and also attended the infantry school to complete a weapons course. He rejoined the 2/48th in February 1942, the 26th Brigade having been transferred to the 9th Division a year earlier. At the time, the battalion was undertaking garrison duties in northern Syria, after participating in the siege of Tobruk.
During early 1942, the Axis forces had advanced steadily through northwest Egypt. It was decided that the British Eighth Army should make a stand just over 100 kilometres (62 mi) west of Alexandria, at the railway siding of El Alamein, where the coastal plain narrowed between the Mediterranean Sea and the inhospitable Qattara Depression. On 26 June 1942, the 9th Division was ordered to begin moving from northern Syria to El Alamein. On 1 July, Generalfeldmarschall Erwin Rommel's forces made a major attack, hoping to dislodge the Allies from the area, take Alexandria, and open the way to Cairo and the Suez Canal. This attack resulted in the First Battle of El Alamein. The Eighth Army had regrouped sufficiently to repel the Axis forces and launch counter-attacks. On 6 July, the lead elements of the 9th Division arrived at Tel el Shammama, 22 miles (35 km) from the front, from where they would be committed to the fighting in the northern sector.
Before dawn on 10 July, as Rommel focused his efforts on the southern flank of the battlefield, the 9th Division attacked the north flank of the enemy positions and captured the strategic high ground around Tel el Eisa. In the days following, Rommel redirected his forces against them, in a series of intense counter-attacks, but was unable to dislodge the Australians. On 22 July, the 24th and 26th Brigades attacked German positions on the ridges south of Tel el Eisa, suffering heavy casualties but taking positions on Makh Khad Ridge and Tel el Eisa itself.
At the Second Battle of El Alamein, from 23 to 31 October 1942, Kibby distinguished himself through his skill in leading his platoon after its commander had been killed during the first attack at Miteiriya Ridge. On 23 October, he charged a machine-gun position, firing at it with his Thompson submachine gun; he killed three enemy soldiers, captured twelve others and took the position. His company commander intended to recommend him for the Distinguished Conduct Medal after this action, but was killed. During the following days, Kibby moved among his men directing fire and cheering them on. He mended his platoon's telephone line several times under intense fire, restoring communications with the battalion mortars and enabling them to bring down fire on the attacking enemy. During 30–31 October, the platoon came under heavy machine-gun and mortar fire. Most of the members of the platoon were killed or wounded, and by the time the battle was over the total fighting strength of the battalion was down to 213 men from an establishment strength of 910. At one point before midnight on 31 October, in order to achieve his company's objective, Kibby moved forward alone, to within a few metres of the enemy, throwing grenades. Just as his success in this endeavour appeared decisive, he was killed. By the morning, the 2/48th consisted of fewer than 50 unwounded men. The posts captured by the 2/48th that night were lost to the enemy, who buried Kibby with other dead in a common grave. Later, when the area was retaken by Australian troops, the men of his unit searched for ten days, found the grave and reburied the men individually.
Kibby was subsequently recommended for the posthumous award of the Victoria Cross, the highest award for gallantry in the face of the enemy that could be awarded to an Australian armed forces member at the time. The citation was partly based on a note found in the pocket of his dead company commander. The award was listed in the London Gazette on 28 January 1943, and the citation read:
> During the initial attack at Miteiriya Ridge on the 23rd October, 1942, the Commander of No. 17 Platoon, to which Sergeant Kibby belonged, was killed. No sooner had Sergeant Kibby assumed command, than his Platoon was ordered to attack strong enemy positions holding up the advance of his Company. Sergeant Kibby immediately realised the necessity for quick decisive action, and without thought for his personal safety he dashed forward towards the enemy posts firing his Tommy-gun. This rapid and courageous individual action resulted in the complete silencing of the enemy fire, by the killing of three of the enemy and the capture of twelve others. With these posts silenced, his Company was then able to continue the advance.
>
> After the capture of TRIG 29 on 26 October, intense enemy artillery concentrations were directed on the battalion area, which were invariably followed with counter-attacks by tanks and infantry. Throughout the attack that culminated in the capture of TRIG 29 and the re-organisation period which followed, Sergeant Kibby moved from section to section personally directing their fire and cheering the men, despite the fact that the Platoon throughout was suffering heavy casualties. Several times, while under intense machine-gun fire, he went out and mended the platoon line communications, thus allowing mortar concentrations to be directed effectively against the attacks on his Company's front. His whole demeanour during this difficult phase in the operations was an inspiration to his Platoon.
>
> On the night of 30–31 October when the Battalion attacked "ring contour" 25 behind the enemy lines, it was necessary for No. 17 Platoon to move through withering fire in order to reach its objective. These conditions did not deter Sergeant Kibby from pressing forward right to the objective, despite his platoon's being mown down by machine-gun fire from point-blank range. One pocket of resistance still remained and Sergeant Kibby went forward alone throwing grenades to destroy the enemy now only a few yards distant. Just as success appeared certain, he was killed by a burst of machine gun fire. Such outstanding courage, tenacity of purpose and devotion to duty was entirely responsible for the successful capture of the Company's objective. His work was an inspiration to all and he left behind an example and the memory of a soldier who fearlessly and unselfishly fought to the end to carry out his duty.
George was invalided back to Adelaide early in 1943 and was able to pass on to Mabel Kibby some of her husband's works. The Governor-General of Australia, Baron Gowrie, himself a recipient of the VC, presented Kibby's award to Mabel Kibby on 27 November 1943.
## Postscript
In January 1944, Kibby's remains were re-interred in the El Alamein War Cemetery maintained by the Commonwealth War Graves Commission. In the same year, a memorial trust was established and raised A£1,001, which was used to purchase a house on Third Avenue, Helmsdale, for Mabel and their daughters. Along with the Victoria Cross, Kibby was also entitled to the 1939–1945 Star, Africa Star with 8th Army clasp, Defence Medal, War Medal 1939–1945 and Australia Service Medal 1939–1945. Later, Mabel donated his medal set to the Australian War Memorial; it is on display in the Hall of Valour. In 1947, Kibby's father John met Field Marshal Bernard Montgomery, who had commanded the Allied forces during the Second Battle of El Alamein, when he visited Adelaide. In 1956, the soldiers' mess at Woodside Barracks in the Adelaide Hills was named for Kibby. In 1996, a rest area on the Federal Highway near Yarra, New South Wales, was named after him. A veteran's shed and a street in Loxton are also named after him.
|
12,808,261 |
Yusuf I of Granada
| 1,171,605,318 |
Sultan of Granada from 1333 to 1354
|
[
"1318 births",
"1354 deaths",
"14th century in al-Andalus",
"14th-century Arab people",
"14th-century monarchs in Europe",
"14th-century people from al-Andalus",
"Deaths by stabbing in Spain",
"Sultans of Granada"
] |
Abu al-Hajjaj Yusuf ibn Ismail (Arabic: أبو الحجاج يوسف بن إسماعيل; 29 June 1318 – 19 October 1354), known by the regnal name al-Muayyad billah (المؤيد بالله, "He who is aided by God"), was the seventh Nasrid ruler of the Emirate of Granada on the Iberian Peninsula. The third son of Ismail I (r. 1314–1322), he was Sultan between 1333 and 1354, after his brother Muhammad IV (r. 1325–1333) was assassinated.
Coming to the throne at age fifteen, he was initially treated as a minor and given only limited power by his ministers and his grandmother Fatima. In February 1334, his representatives secured a four-year peace treaty with Granada's neighbours Castile and the Marinid Sultanate. Aragon joined in the treaty in May. After gaining more control of the government, in 1338 or 1340 he expelled the Banu Abi al-Ula family, who had masterminded the murder of his brother and had been the leaders of the Volunteers of the Faith—North African soldiers who fought for Granada. After the treaty expired, he allied himself with Abu al-Hasan Ali (r. 1331–1348) of the Marinids against Alfonso XI of Castile (r. 1312–1350). After winning a major naval victory in April 1340, the Marinid–Granadan alliance was decisively defeated on 30 October in the disastrous Battle of Río Salado. In its aftermath, Yusuf was unable to prevent Castile from taking several Granadan castles and towns, including Alcalá de Benzaide, Locubín, Priego and Benamejí. In 1342–1344, Alfonso XI besieged the strategic port of Algeciras. Yusuf led his troops in diversionary raids into Castilian territory, and later engaged the besieging army, but the city fell in March 1344. A ten-year peace treaty with Castile followed.
In 1349, Alfonso XI broke the treaty and invaded again, laying siege to Gibraltar. Yusuf was responsible for supplying the besieged port, and led counter-attacks into Castile. The siege was lifted when Alfonso XI died of the Black Death in March 1350. Out of respect, Yusuf ordered his commanders not to attack the Castilian army as it retreated from Granadan territories carrying their king's body. Yusuf signed a treaty with Alfonso's son and successor Peter I (r. 1350–1366), even sending his troops to suppress a domestic rebellion against the Castilian king, as required by the treaty. His relations with the Marinids deteriorated when he provided refuge for the rebellious brothers of Sultan Abu Inan Faris (r. 1348–1358). He was assassinated by a madman while praying in the Great Mosque of Granada, on the day of Eid al-Fitr, 19 October 1354.
In contrast to the military and territorial losses suffered during his reign, the emirate flourished in the fields of literature, architecture, medicine and the law. Among other new buildings, he constructed the Madrasa Yusufiyya inside the city of Granada, as well as the Tower of Justice and various additions to the Comares Palace of the Alhambra. Major cultural figures served in his court, including the hajib Abu Nu'aym Ridwan, as well as the poet Ibn al-Jayyab and the polymath Ibn al-Khatib, who consecutively served as his viziers. Modern historians consider his reign, and that of his son Muhammad V (r. 1354–1359, 1362–1391), as the golden era of the Emirate.
## Early life
Abu al-Hajjaj Yusuf ibn Ismail was born on 29 June 1318 (28 Rabi al-Thani 718 AH) in the Alhambra, the fortified royal palace complex of the Nasrid dynasty of the Emirate of Granada. He was the third son of the reigning sultan, Ismail I, and a younger brother of the future Muhammad IV. Ismail had four sons and two daughters, but Yusuf was the only child of his mother, Bahar. She was an umm walad (freed concubine) originally from the Christian lands, described as "noble in good deeds, chastity and equanimity" by Yusuf's vizier, the historian Ibn al-Khatib. When Ismail was assassinated in 1325, he was succeeded by the ten-year-old Muhammad, who ruled until he too was assassinated on 25 August 1333, while en route back to Granada after repulsing, jointly with the Marinids of Morocco, a Castilian siege of Gibraltar.
Ibn al-Khatib described the young Yusuf as "white-skinned, naturally strong, had a fine figure and an even finer character", with large eyes, dark straight hair and a thick beard. He further wrote that Yusuf liked to "dress with elegance", was interested in art and architecture, was a "collector of arms", and "had some mechanical ability". Before his accession, Yusuf lived in his mother's house.
## Background
Founded by Muhammad I in the 1230s, the Emirate of Granada was the last Muslim state on the Iberian Peninsula. Through a combination of diplomatic and military manoeuvres, the emirate succeeded in maintaining its independence, despite being located between two larger neighbours: the Christian Crown of Castile to the north and the Muslim Marinid Sultanate across the sea in Morocco. Granada intermittently entered into alliance or went to war with both of these powers, or encouraged them to fight one another, in order to avoid being dominated by either. From time to time, the sultans of Granada swore fealty and paid tribute to the kings of Castile, an important source of income for Castile. From Castile's point of view, Granada was a royal vassal, while Muslim sources never described the relationship as such. Muhammad I, for instance, on occasion declared his fealty to other Muslim sovereigns.
Yusuf's predecessor, Muhammad IV, sought help from the Marinid Sultanate to counter a threat by an alliance of Castile and the powerful Granadan commander Uthman ibn Abi al-Ula, who supported a pretender to the throne in a civil war. In exchange for the Marinid alliance, he had to yield Ronda, Marbella and Algeciras. Subsequently, the Marinid–Granadan forces captured Gibraltar and fended off a Castilian attempt to retake it, before signing a peace treaty with Alfonso XI of Castile and Abu al-Hasan Ali of the Marinids the day before Muhammad IV's assassination. While the actual killing of Muhammad IV was carried out by a slave named Zayyan, the instigators were Muhammad's own commanders, Abu Thabit ibn Uthman and Ibrahim ibn Uthman. They were the sons of Uthman ibn Abi al-Ula, who died in 1330, and his successors as the leaders of the Volunteers of the Faith, the corps of North Africans fighting on the Iberian Peninsula for Granada. According to Ibn Khaldun, the two brothers decided to kill Yusuf due to his closeness to the Marinid Sultan Abu al-Hasan—their political enemy—while according to Castilian chronicles it was because of the friendly way he treated Alfonso XI at the conclusion of the siege.
As a result of Muhammad's cessions to the Marinids and the taking of Gibraltar, the Marinids had sizeable garrisons and territories on traditionally Granadan lands in Al-Andalus (the Muslim-controlled part of the Iberian Peninsula). Their control of Algeciras and Gibraltar—two ports of the Strait of Gibraltar—gave them ability to move troops easily between North Africa and the Iberian Peninsula. The control of these ports and the waters surrounding them was also an important objective for Alfonso XI, who wanted to halt the North African intervention in the peninsula.
## Accession
The Nasrid dynasty of Granada had no specific rule of succession, and the sources are silent as to why Yusuf was chosen over Ismail's second son Faraj, who was a year older. There are differing reports of where Yusuf was proclaimed and who selected him. According to the historians L. P. Harvey and Brian Catlos, who follow the report of the Castilian chronicles, the hajib (chamberlain) Abu Nu'aym Ridwan, who was present at Muhammad IV's assassination, rode quickly to the capital Granada, arriving on the same day and, after consultation with Fatima bint al-Ahmar (Ismail's mother, and grandmother of Muhammad and Yusuf), arranged for the declaration of Yusuf as the new sultan. The proclamation took place the next day, 26 August (14 Dhu al-Hijja 733 AH). Another modern historian, Francisco Vidal Castro, writes that the declaration and the oath of allegiance took place in the Muslim camp near Gibraltar instead of in the capital, and that the instigators of the assassination, the Banu Abi al-Ula brothers, were the ones who proclaimed him.
Coming to the throne at the age of fifteen, Yusuf was initially treated as a minor and, according to Ibn al-Khatib, his authority was limited to only "choosing the food to eat from his table". His grandmother, Fatima, and the hajib Ridwan became his tutors and exercised some powers of government, together with other ministers. Upon his accession he took the laqab (honorific or regnal name) al-Mu'ayyad billah ("He who is aided by God"). The founder of the dynasty, Muhammad I, had taken a laqab (al-Ghalib billah, "Victor by the grace of God") but the subsequent sultans up to Yusuf did not adopt this practice. After Yusuf this was done by almost all of the Nasrid sultans. According to the Castilian chronicles, Yusuf immediately requested the protection of Abu al-Hasan, his late brother's ally.
## Political and military events
### Early peace
The peace that Muhammad IV secured after the siege of Gibraltar was, by the principles of the time, rendered void by his death, and representatives of Yusuf met with those of Alfonso XI and Abu al-Hasan Ali. They signed a new treaty with a four-year duration at Fez, the capital of the Marinid Sultanate, on 26 February 1334. Like previous treaties, it authorised free trade between the three kingdoms, but, unusually, it did not include payments of tribute from Granada to Castile. Marinid ships were to be given access to Castilian ports, and the Marinid Sultan Abu al-Hasan promised not to increase his garrisons on the Iberian Peninsula—but he could still rotate them. The latter condition was favourable not only to Castile but also to Granada, which was wary of possible expansionism by the larger Marinid Sultanate into the peninsula. Alfonso IV of Aragon (r. 1327–1336) agreed to join the treaty in May 1334 and signed his own agreement with Yusuf on 3 June 1335. After Alfonso IV's death in January 1336, his son Peter IV (r. 1336–1387) renewed the bilateral Granadan–Aragonese treaty for five years, ushering in a period of peace between Granada and all its neighbours.
With the treaty in place, the monarchs redirected their attentions elsewhere: Alfonso XI cracked down on his rebellious nobles, while Abu al-Hasan waged war against the Zayyanid Kingdom of Tlemcen in North Africa. During these years, Yusuf acted against the Banu Abi al-Ula family, the masterminds of Muhammad IV's assassination. In September 1340 (or 1338), Abu Thabit ibn Uthman was removed from his post as the overall Chief of the Volunteers and replaced by Yahya ibn Umar of the Banu Rahhu family. Abu Thabit was expelled along with his three brothers and the entire family to the Hafsid Kingdom of Tunis. Harvey comments that "[b]y the standards of acts of revenge in those days [...] this was quite restrained", probably because Yusuf did not want to unnecessarily create tensions with the North African volunteers.
### Marinid–Granadan war against Castile
In the spring of 1339, after the expiration of the treaty, hostilities recommenced with Marinid raids into the Castilian countryside. Confrontations ensued between Castile on one side and the two Muslim kingdoms on the other. Granada was invaded by Castilian troops led by Gonzalo Martínez, Master of the Order of Alcántara, who raided Locubín, Alcalá de Benzaide and Priego. In turn, Yusuf led an army of 8,000 in besieging Siles, but was forced to lift the siege by the forces of the Master of the Order of Santiago, Alfonso Méndez de Guzmán.
The personal rivalry between Martínez and de Guzmán appears to have caused the former to defect to Yusuf, but he was soon captured by Castilian forces, hanged as a traitor and his body burned. The Marinid commander on the peninsula, Abu Malik Abd al-Wahid, son of Abu al-Hasan, died during a battle with Castile on 20 October 1339, but Marinid forces continued to ravage the Castilian frontiers until they were defeated at Jerez. At the same time, Nasrid forces achieved military successes, including the conquest of Carcabuey.
In autumn 1339, the Aragonese fleet under Jofre Gillabert tried to land near Algeciras but was driven away after their admiral was killed. On 8 April 1340, a major battle took place off Algeciras between the Castilian fleet under Alfonso Jofré Tenorio and a larger Marinid–Granadan fleet under Muhammad al-Azafi, resulting in a Muslim victory and the death of Tenorio. The Muslim fleet captured 28 galleys out of the 44 in the Castilian fleet, as well as 7 carracks. Abu al-Hasan saw the naval victory as a harbinger for the conquest of Castile. He crossed the Strait of Gibraltar with his army, including siege engines, his wives and his entire court. He landed in Algeciras on 4 August, was joined by Yusuf, and laid siege to Tarifa, a Castilian port on the Strait, on 23 September.
Alfonso XI marched to relieve Tarifa, joined by Portuguese troops led by his ally, King Afonso IV of Portugal (r. 1325–1357). They arrived five miles (eight kilometres) from Tarifa on 29 October, and Yusuf and Abu al-Hasan moved to meet them. Alfonso XI commanded 8,000 horsemen, 12,000 foot soldiers and an unknown number of urban militia, while Afonso IV had 1,000 men. The Muslim strength is unclear: contemporary Christian sources claimed an exaggerated 53,000 horsemen and 600,000 foot soldiers, while modern historian Ambrosio Huici Miranda in 1956 estimated 7,000 Granadan troops and 60,000 Moroccans. Crucially, the Christian knights had much better armour than the more lightly equipped Muslim cavalry.
#### Battle of Río Salado
The resulting Battle of Río Salado (also known as the Battle of Tarifa), on 30 October 1340, was a decisive Christian victory. Yusuf, who wore a golden helmet in the battle, fled the field after a charge by the Portuguese troops. The Granadan contingent initially defended itself and was about to defeat Afonso IV in a counterattack, but was routed when Christian reinforcements arrived, leaving their Marinid allies behind. The Marinids too were routed in the main battle against the Castilians, which lasted from 9:00 a.m. to noon. Harvey opined that the key to the Christian victory—despite their numeric disadvantage—was their cavalry tactics and superior armour. Muslim tactics—which focused on lightly armoured, highly mobile cavalry—were well suited for open battle, but in the relatively narrow battlefield of Río Salado the Christian formation of armoured knights attacking in a well-formed battle line had a decisive advantage.
In the aftermath of the battle, the Christian troops pillaged the Muslim camp, and massacred the women and children, including Abu al-Hasan's queen, Fatima, the daughter of King Abu Bakr II of Tunis—to the dismay of their commanders, who would have preferred to see her ransomed. Numerous royal persons and nobles were captured, including Abu al-Hasan's son, Abu Umar Tashufin. Among the fallen were many of Granada's intellectuals and officials. Yusuf retreated to his capital through Marbella. Abu al-Hasan marched to Gibraltar, sent news of victory back home to prevent any rebellion in his absence, and crossed the Strait to Ceuta the same night.
Various Muslim authors laid the blame on the Marinid Sultan, with Umar II of Tlemcen saying that he "humiliated the head of Islam and filled the idolaters with joy", and al-Maqqari commenting that he allowed his army to be "scattered like dust before the wind". Yusuf appeared to not have been blamed, and continued to be popular in Granada. Alfonso XI returned victorious to Seville and paraded the Muslim captives and the booty taken by his army. There was so much gold and silver that their prices as far away as Paris and Avignon fell by one sixth.
### After Río Salado
With the bulk of the Marinid forces retreating to North Africa, Alfonso XI was able to act freely against Granada. He invaded the emirate in April 1341, feigning an attack against Málaga. When Yusuf reinforced this western port—taking many men from elsewhere—Alfonso redirected his troops towards Alcalá de Benzaide, a major border fortress 30 miles (50 km) north of Granada, whose garrison had been reduced in order to reinforce Málaga. The Castilian army started a siege and ravaged the surrounding countryside, not only taking food but also destroying vines—causing lasting damage to the local agriculture without any benefit to the attackers. In response, Yusuf moved to a strong position in Pinos Puente to block Castilian attempts to raid further into the rich plains surrounding the city of Granada. Alfonso XI extended the raids into more areas in order to tempt Yusuf to leave his position, but the Granadan army held its ground as the Castilians devastated the area surrounding Locubín and Illora. As the siege progressed, Yusuf received Marinid reinforcements from Algeciras and moved six miles (ten kilometres) to Moclín. Neither side was willing to risk a frontal attack, and Alfonso unsuccessfully tried to provoke Yusuf into an ambush. With relief unlikely, the Muslim defenders of Alcalá offered to surrender the fortress in exchange for safe conduct, to which Alfonso agreed; the capitulation took place on 20 August 1341. Yusuf then offered a truce, but Alfonso demanded that he break his alliance with the Marinids, which Yusuf refused to do, and the war continued.
Concurrently with the siege of Alcalá, Alfonso's troops also captured the nearby Locubín. In the weeks after the fall of Alcalá, the Castilians captured Priego, Carcabuey, Matrera and Benamejí. In May 1342, a Marinid–Granadan fleet sailing in the Strait of Gibraltar was ambushed by Castilian and Genoese ships, resulting in a Christian victory, the destruction of twelve galleys and the dispersal of other vessels along the Granadan coast.
### Siege of Algeciras
Alfonso XI then targeted Algeciras, an important port on the Strait of Gibraltar which his father, Ferdinand IV, had failed to take in 1309–10. Alfonso arrived in early August 1342 and slowly imposed a land and sea blockade on the city. Yusuf's army took to the field, joined by Marinid troops from Ronda, trying to threaten the besiegers from the rear or divert their attention. Between November 1342 and February 1343, it raided the lands around Écija, entered and sacked Palma del Río, retook Benamejí and captured Estepa. In June, Yusuf sent his hajib, Ridwan, to Alfonso, offering payments in exchange for the lifting of the siege. Alfonso countered the offer by increasing the payment he would require. Yusuf sailed to North Africa to consult with Abu al-Hasan and raise the money, but the payment from the Marinid Sultan was not enough. Despite the safe conduct given by Alfonso, Yusuf's galley was attacked by a Genoese ship in Alfonso's service, which tried to steal the gold. Yusuf's ships repulsed the attack; Alfonso apologised but did not take any action against the Genoese ship's captain.
The Muslim defenders of Algeciras made use of cannons, one of the earliest recorded uses of this weapon in a major European confrontation—before their better-known use at the Battle of Crécy in 1346. Alfonso's forces were augmented by crusading contingents from all over Europe, including from both France and England, which were at war. Among European nobles present were King Philip III of Navarre, Gaston, Count of Foix, the Earl of Salisbury and the Earl of Derby.
On 12 December 1343, Yusuf crossed the Palmones River and engaged a Castilian detachment. This was reported in Castilian sources as a Muslim defeat. Early in 1344, Alfonso constructed a floating barrier, made of trees chained together, that stopped supplies from reaching Algeciras. With the hope of victory fading and the city on the verge of starvation, Yusuf began negotiations again. He sent an envoy, named Hasan Algarrafa in Castilian chronicles, and offered the surrender of Algeciras if its inhabitants were allowed to leave with their movable property, in exchange for a fifteen-year peace between Granada, Castile and the Marinids. Despite being counselled to reject the offer, and instead to take Algeciras by storm and massacre its inhabitants, Alfonso was aware of the uncertain outcome of an assault when hostile forces were nearby. He agreed to Algarrafa's proposal, but requested that the truce be limited to ten years, which Yusuf accepted. Other than Yusuf and Alfonso, the treaty included Abu al-Hasan, Peter IV and the Doge of Genoa. Yusuf and Alfonso signed the treaty on 25 March 1344 in the Castilian camp outside Algeciras.
### Siege of Gibraltar and related events
War broke out again in Granada in 1349, when Alfonso declared that the peace treaty no longer prevented him from attacking Muslim territories because the Marinid Iberian territories were now controlled by Abu Inan Faris, Abu al-Hasan's son, who had rebelled and seized Fez the year before. In June or July 1349, his forces began the siege of Gibraltar, a port which had been captured by Ferdinand IV in 1309 before falling to the Marinids in 1333. Prior to the siege, Yusuf sent archers and foot soldiers to reinforce the town's garrison. In July, Alfonso was personally present among the besiegers, and in the same month he ordered his Kingdom of Murcia to attack Yusuf's Granada. Despite Yusuf's protests, Peter IV sent an Aragonese fleet to assist the siege, although, in order to respect his peace treaty with Yusuf, he instructed his men not to harm any subject of Granada. With the Marinids unable to send help, the main responsibility for fighting Castile fell to Yusuf, who led his troops in a series of counter-attacks. During the summer of 1349 he raided the outskirts of Alcaraz and Quesada, and besieged Écija. In the winter he sent Ridwan to besiege Cañete la Real, which surrendered after two days.
As the siege progressed, the Black Death (known in Spain as the mortandad grande), which had entered Iberian ports in 1348, struck the besiegers' camp. Alfonso persisted in the siege despite the urgings of his counselors. He became infected himself and died on either Good Friday 1350 (26 March) or the day before. The Castilian forces withdrew from Gibraltar, with some of the defenders coming out to watch. Out of respect, Yusuf ordered his army and his commanders in the border regions not to attack the Castilian procession as it travelled with the King's body to Seville. Alfonso was succeeded by his fifteen-year-old son, Peter I. Yusuf, Peter and Abu Inan of the Marinids concluded a treaty on 17 July 1350, which was to last until 1 January 1357. Trade was reopened between Granada and Castile (except for horses, arms and wheat), and captives were exchanged. In exchange for peace, Yusuf paid tribute to Peter and agreed to provide 300 light horsemen when requested, but Yusuf did not formally become Peter's vassal. Despite privately disliking Peter, Yusuf observed his treaty obligations: he sent 300 jinete cavalry—reluctantly, according to the historian Joseph O'Callaghan—to help the Castilian king suppress the rebellion of Alfonso Fernández Coronel in Aguilar, and refused to help the King's half-brother, Henry, when he attempted to start a rebellion against Peter from Algeciras.
### Yusuf and the Marinid princes
Abu al-Hasan tried unsuccessfully to regain the Marinid throne until his death in 1351. Two other challengers to Abu Inan, his brothers Abu al-Fadl and Abu Salim, fled to Granada. Yusuf refused pressure from the Marinid Sultan to hand them over. Like many other Nasrid sultans, Yusuf found that the presence of Marinid pretenders in his court gave him leverage in case the two states came into conflict.
At Yusuf's encouragement, Abu al-Fadl then went to Castile to seek help from Peter. Peter, seeking to incite another civil war in North Africa, provided ships to land the prince in Sus in order to attack Abu Inan. The Marinid Sultan was extremely angered by Yusuf's actions but felt unable to take action, knowing that he was supported by Castile. Abu al-Fadl was subsequently captured by Abu Inan and executed in 1354 or 1355. Abu Salim eventually became sultan in 1359–1361, well after Yusuf's death.
## Architecture
Yusuf constructed the Bab al-Sharia (now the Tower of Justice) in the Alhambra in 1348, forming the grand entrance to the complex. He also built what is now the Broken Tower (Torre Quebrada) of the Citadel of the Alhambra, and carried out work in the Comares Palace, including renovations of its hammam (bathhouse), as well as the construction of the Hall of the Comares, also known as the Chamber of the Ambassadors, the largest Nasrid structure in the complex. He built various new walls and towers to accommodate his enlargement of the Comares, and adorned many of the courts and halls of the Alhambra, as may be seen from the repeated appearance of his name in the inscriptions on the walls. Also in the Alhambra, he built the small prayer hall (oratorio) of the Partal Palace, and what is now the Gate of the Seven Floors. He built or converted two of the towers in the Alhambra's northern ramparts into small palatial residences, which became a novel feature of Nasrid architecture in this period. These two towers are known today as the Peinador de la Reina (which Charles V expanded in the 16th century for new royal apartments) and the Torre de la Cautiva (Tower of the Captive).
In 1349, he founded a religious school, the Madrasa Yusufiyya, near the Great Mosque of Granada (now Granada Cathedral), providing higher education comparable to that of the medieval universities in Bologna, Paris and Oxford. Only its prayer room remains today. He built al-Funduq al-Jadida ("the new funduq"), today's Corral del Carbón in the city of Granada, the only remaining caravanserai from the Nasrid era. Outside Granada, he enlarged the Alcazaba of Málaga, the ancestral home of his paternal grandfather, Abu Said Faraj, the former governor of Málaga, as well as the city's Gibralfaro precinct.
Yusuf also constructed new defensive structures throughout his realm, including new towers, gates and barbicans, especially after the defeat of Río Salado. He reinforced existing castles and walls, as well as coastal defences. The hajib Ridwan built forty watchtowers (tali'a), stretching the entire length of the Emirate's southern coast. Yusuf reinforced the city walls of Granada, as well as the Bab Ilbira (now the Gate of Elvira) and the Bab al-Ramla (the Gate of the Ears).
## Administration
Yusuf's administration was supported by numerous ministers, including "a constellation of major cultural figures", according to Fernández-Puertas. Among them was Ridwan, who held the post of hajib (chamberlain), a title created for him by Muhammad IV, the first time it was used under Nasrid rule, which outranked that of the vizier and other ministers. The hajib had command of the army in the absence of the sultan. He was dismissed and imprisoned after the defeat at Río Salado; he was freed a year later but then refused Yusuf's offer to reappoint him as vizier. The next hajib, Abu al-Hasan ibn al-Mawl, came from a prominent family but proved unskilled in political matters. He was dismissed after a few months and fled to North Africa to avoid the intrigues of his rivals. The office of hajib remained vacant until Ridwan regained it under Yusuf's successor Muhammad V (first reign, 1354–1359); following Ridwan's assassination in 1359, the post again disappeared until the appointment of Abu al-Surrur Mufarrij by Yusuf III (r. 1408–1417).
The famous poet Ibn al-Jayyab was appointed as vizier in 1341, becoming the highest-ranked minister and the mastermind of Yusuf's cautious policy after Río Salado. He was also the royal secretary, and was therefore titled dhu'l-wizaratayn ("the holder of the two vizierates"). The Black Death struck the emirate in 1348 and outbreaks were recorded in its three largest cities: Granada, Málaga and Almería. The epidemic killed many scholars and officials, including Ibn al-Jayyab, who died in 1349. In accordance with his wishes, he was succeeded as both vizier and royal secretary by his protégé, Ibn al-Khatib. Ibn al-Khatib had entered the court chancery (diwan al-insha) in 1340, replacing his father who died at Río Salado and serving under Ibn al-Jayyab. After becoming vizier he was also appointed to other posts, such as superintendent of the finances. Described by Catlos as the "preeminent writer and intellectual of fourteenth-century al-Andalus", Ibn al-Khatib produced works throughout his lifetime in subjects as diverse as history, poetry, medicine, manners, mysticism and philosophy. With access to official documents and the court archives, he remains one of the main historical sources on the Emirate of Granada.
Yusuf received his subjects publicly twice each week, on Monday and Thursday, to listen to their concerns, assisted by his ministers and members of the royal family. According to Shihab al-Din al-'Umari, these hearings included the recitation of a tenth of the Quran and some parts of the hadith. On solemn state occasions, Yusuf presided over court activities from a wooden folding armchair that is currently preserved in the Museum of the Alhambra and bears the Nasrid coat of arms across its back. Between April and May 1347, he made a state visit to his eastern regions, with the main purpose of inspecting the fortifications in this part of his realm. Accompanied by his court, he visited twenty places in twenty-two days, including the port of Almería, where he was well received by the populace. Ibn al-Khatib describes other anecdotes that illustrate Yusuf's popularity, including his reception by a well-respected judge in Purchena, by the people—including common womenfolk—of Guadix in 1354 and by certain Christian merchants in the same year. According to Vidal Castro, gold coins bearing Yusuf's name had particularly beautiful designs, many of which are still found today (one example is provided in the infobox of this article).
In diplomacy, for the first time in Nasrid history he sent an embassy to the Mamluk Sultanate of Cairo. A surviving copy of a letter from the Mamluk Sultan al-Salih Salih indicates that Yusuf had requested military help to fight the Christians; al-Salih prayed for Yusuf's victory but declined to send troops, saying they were needed for conflicts on his own borders. Many of Yusuf's diplomatic exchanges with the rulers of North Africa—especially the Marinid sultans—are preserved in Rayhanat al-Kuttab compiled by Ibn al-Khatib.
In the judiciary, the chief judge (qadi al-jama'a) Abu Abdullah Muhammad al-Ash'ari al-Malaqi, who had been appointed by Muhammad IV, continued to serve under Yusuf until his death at the battle of Río Salado. He was known for his strong opinions; on one occasion, he wrote a poem to Yusuf warning him of officials who squandered tax revenues, and in another, he reminded the Sultan of his responsibilities to his subjects as a Muslim leader. After the death of al-Malaqi, Yusuf appointed, consecutively, Muhammad ibn Ayyash, Ibn Burtal and Abu al-Qasim Muhammad al-Sabti. The latter resigned in 1347 and Yusuf then appointed Abu al-Barakat ibn al-Hajj al-Balafiqi, who had previously served as a judge in various provinces and was known for his love of literature. Yusuf strengthened the function of the muftis, distinguished jurists who issued legal opinions (fatwas), often to assist judges in interpreting difficult points of Islamic law. The Madrasa Yusufiyya, where Maliki Islamic law was among the subjects taught, was created partly to increase the influence of the muftis. Yusuf's emphasis on the rule of law and his appointment of distinguished judges improved his standing among his subjects and among other Muslim monarchies. On the other hand, Yusuf had a mystical inclination that displeased the jurists (fuqaha) in his court, including his appreciation of the famous philosopher al-Ghazali (1058–1111), whose Sufi doctrines were disliked by the mainstream scholars.
## Family
According to Ibn al-Khatib, Yusuf began "playing with the idea of taking a concubine" after his accession. He had two concubines, both originally from the Christian lands, named Buthayna and Maryam or Rim. His union with Buthayna might have taken place in 737 AH (c. 1337 CE), the date of a poem written by Ibn al-Jayyab about the wedding celebration. The wedding took place on a rainy day, and a horse race was held in its honour. In 1339 Buthayna gave birth to Yusuf's first son Muhammad (later Muhammad V), and subsequently to a daughter named Aisha. Maryam/Rim bore him seven children: two sons—Ismail (later Ismail II, r. 1359–1360), who was born nine months after Muhammad, and Qays—as well as five daughters—Fatima, Mu'mina, Khadija, Shams and Zaynab. The eldest daughter married her cousin, the future Muhammad VI (r. 1360–1362). Maryam/Rim's influence was said to be greater than that of Buthayna, and Yusuf favoured his second son Ismail above his other children. Yusuf had another son, Ahmad, whose mother is unknown. He also had a wife, who was the daughter of a Nasrid relative. Apart from their wedding in 738 AH (c. 1338 CE), there is no reference to this wife in the historical sources, leading the historian Bárbara Boloix Gallardo to speculate that she might have died early. Initially, Yusuf designated Ismail as his heir, but later—a few days before his death—he named Muhammad instead because he was considered to have better judgement. Both Muhammad and Ismail were aged around 15 at Yusuf's death.
The education of the children was entrusted to the hajib Abu Nu'aym Ridwan, a former Christian, who taught the young Ismail some Greek. Yusuf's grandmother Fatima, who had been influential in the Granadan court for multiple generations, died in 1349 aged 90 lunar years, and received an elegy from Ibn al-Khatib. The activity of Yusuf's mother Bahar was also attested: when the North African traveller Ibn Battuta visited Granada in 1350 and sought a royal audience, Yusuf was sick, and in his place Bahar provided Ibn Battuta with enough money for his stay, though it is not known whether she met him in person or whether he was received inside the Alhambra. Yusuf's concubine Maryam/Rim played an important role after his death: in 1359, she financed a coup that involved 100 men and deposed her stepson Muhammad V in favour of her son Ismail.
Other than his predecessor Muhammad, Yusuf had another elder half-brother, Faraj, who moved overseas after Yusuf's succession. He later returned to the emirate, and was subsequently imprisoned and killed—probably for political reasons—on Yusuf's order in Almería, 751 AH (1350 or 1351). Yusuf also imprisoned his younger half-brother, Ismail, who was later freed by Muhammad V and then settled in North Africa. Additionally, Yusuf had two half-sisters, Fatima and Maryam, whose marriages he arranged. One of them was married to Abu al-Hasan Ali, a distant member of the Nasrid family.
## Death
Yusuf was assassinated whilst praying in Granada's Great Mosque on 19 October 1354 (Eid al-Fitr/1 Shawwal 755 AH). A man stabbed him with a dagger during the last prostration of the Eid prayer ritual. Ibn al-Khatib was present—likely praying a few metres from the Sultan, given that he was then a high court official—and his works include a detailed narration of the events. The attacker broke from the ranks of the congregation and went towards the Sultan. His movement was not noticed or did not alarm anyone because of his condition and rank (see next paragraph), and upon reaching the Sultan, he leapt and stabbed him. The solemn prayer was then interrupted and Yusuf was carried to his royal apartment in the Alhambra, where he died. The assassin was interrogated, but his words were unintelligible. He was soon killed by a mob. His body was burnt (according to Ibn al-Khatib, though this statement might have referred to his supposed burning in hellfire) or "cut into a thousand pieces" (according to Ibn Khaldun).
Ibn al-Khatib's account presents the murder as an act of a madman (mamrur) without any motive, and this is also the main account presented by Fernández-Puertas and Harvey, although the latter adds that the lack of reported motive "fill[s] one with suspicion". Ibn Khaldun, as well as another Arab near-contemporary historian, Ibn Hajar al-Asqalani, concurred that the attacker was a madman of low rank and intelligence. Ibn Khaldun added that he was a slave in the royal stables whom some suspected to be a bastard son of Muhammad IV with a black woman. This led Vidal Castro to suggest an alternative explanation that it was a politically motivated attack instigated by a third party. Vidal Castro considers it unlikely that the attacker planned a political plot of his own, given his mental condition, or that the instigators aimed to have a demented bastard enthroned, given that Yusuf had his own sons as heirs. Instead, the historian suggests that the objective was simply to kill Yusuf and to end his rule, taking advantage of the attacker's unique condition. As Yusuf's supposed nephew, he would have had easier access to the Sultan, and with his mental condition he could be easily manipulated to conduct a likely suicidal attack without knowing its actual objective. In addition, it allowed the attack to be dismissed simply as a madman's action. Vidal Castro speculates that the real instigators could have been a faction at court whose identity and specific motives for killing Yusuf are unknown, or agents of the Marinid Sultan Abu Inan, whose relations with Yusuf soured towards the end of the latter's reign.
## Legacy
Yusuf was succeeded by his eldest son, who became Muhammad V. Yusuf was buried in the royal cemetery (rawda) of the Alhambra, alongside his great-grandfather, Muhammad II, and his father, Ismail I. Centuries later, with the surrender of Granada, the last sultan, Muhammad XII (also known as Boabdil), exhumed the bodies in this cemetery and reburied them in Mondújar, part of his Alpujarras estates. Fernández-Puertas describes the reigns of Yusuf and his successor Muhammad V as the "climax" of the Nasrid period, as seen from the realm's architectural and cultural output, and the flourishing of the study of medicine. Similarly, the historian Brian A. Catlos describes the reigns of these two sultans as the emirate's "era of greatest glory", and Rachel Arié describes the same period as its "apogee". L. P. Harvey describes Yusuf's cultural achievements as "considerable" and "solid", and as marking the beginning of the dynasty's "Golden Age". Furthermore, Yusuf's Granada survived the "onslaught of Alfonso XI's attacks" and at the end reduced its dependency on the Marinids. However, Harvey notes that he was defeated at Río Salado, "the greatest single reverse suffered by the Muslim cause" during the Nasrid period before the fall of Granada, and presided over the strategically significant losses of Algeciras and Alcalá de Benzaide.
|
242,465 |
Albert Pierrepoint
| 1,171,730,390 |
English executioner
|
[
"1905 births",
"1992 deaths",
"20th-century British businesspeople",
"Albert Pierrepoint",
"British publicans",
"English autobiographers",
"English executioners",
"People from Clayton, West Yorkshire"
] |
Albert Pierrepoint (/ˈpɪərpɔɪnt/; 30 March 1905 – 10 July 1992) was an English hangman who executed between 435 and 600 people in a 25-year career that ended in 1956. His father Henry and uncle Thomas were official hangmen before him.
Pierrepoint was born in Clayton in the West Riding of Yorkshire. His family struggled financially because of his father's intermittent employment and heavy drinking. Pierrepoint knew from an early age that he wanted to become a hangman, and was taken on as an assistant executioner in September 1932, aged 27. His first execution was in December that year, alongside his uncle Tom. In October 1941 he undertook his first hanging as lead executioner.
During his tenure he hanged 200 people who had been convicted of war crimes in Germany and Austria, as well as several high-profile murderers—including Gordon Cummins (the Blackout Ripper), John Haigh (the Acid Bath Murderer) and John Christie (the Rillington Place Strangler). He undertook several contentious executions, including Timothy Evans, Derek Bentley and Ruth Ellis and executions for high treason—William Joyce (also known as Lord Haw-Haw) and John Amery—and treachery, with the hanging of Theodore Schurch.
In 1956 Pierrepoint was involved in a dispute with a sheriff over payment, leading to his retirement from hanging. He ran a pub in Lancashire from the mid-1940s until the 1960s. He wrote his memoirs in 1974 in which he concluded that capital punishment was not a deterrent, although he may have changed his position after that. He approached his task with gravitas and said that the execution was "sacred to me". His life has been included in several works of fiction, such as the 2005 film Pierrepoint, in which he was portrayed by Timothy Spall.
## Biography
### Early life
Albert Pierrepoint was born on 30 March 1905 in Clayton in the West Riding of Yorkshire. He was the third of five children and eldest son of Henry Pierrepoint and his wife Mary (née Buxton). Henry had a series of jobs, including butcher's apprentice, clog maker and a carrier in a local mill, but employment was mostly short-term. With intermittent employment, the family often had financial problems, worsened by Henry's heavy drinking. From 1901 Henry had been on the list of official executioners. The role was part-time, with payment made only for individual hangings, rather than an annual stipend or salary, and there was no pension included with the position.
Henry was removed from the list of executioners in July 1910 after arriving drunk at a prison the day before an execution and excessively berating his assistant. Henry's brother Thomas became an official executioner in 1906. Pierrepoint did not find out about his father's former job until 1916, when Henry's memoirs were published in a newspaper. Influenced by his father and uncle, when asked at school to write about what job he would like when older, Pierrepoint said that "When I leave school I should like to be public executioner like my dad is, because it needs a steady man with good hands like my dad and my Uncle Tom and I shall be the same".
In 1917 the Pierrepoint family left Huddersfield, West Riding of Yorkshire, and moved to Failsworth, near Oldham, Lancashire. Henry's health declined and he was unable to undertake physical work; as a result, Pierrepoint left school and began work at the local Marlborough Mills. Henry died in 1922 and Pierrepoint received two blue exercise books—in which his father had written his story as a hangman—and Henry's execution diary, which listed details of each hanging in which he had participated. In the 1920s Pierrepoint left the mill and became a drayman for a wholesale grocer, delivering goods ordered through a travelling salesman. By 1930 he had learned to drive a car and a lorry to make his deliveries; he later became manager of the business.
### As assistant executioner, 1931–1940
On 19 April 1931 Pierrepoint wrote to the Prison Commissioners and applied to be an assistant executioner. He was turned down; there were no vacancies. He received an invitation for an interview six months later. He was accepted and spent four days training at Pentonville Prison, London, where a dummy was used for practice. He received his formal acceptance letter as an assistant executioner at the end of September 1932. At that time, the assistant's fee was £1 11s 6d per execution, with another £1 11s 6d paid two weeks later if his conduct and behaviour were satisfactory. The choice of executioner rested with the county high sheriff, though it was more commonly delegated to the undersheriff, who selected both the hangman and the assistant. Executioners and their assistants were required to be discreet and the rules for those roles included the clause:
> He should clearly understand that his conduct and general behaviour should be respectable, not only at the place and time of the execution, but before and subsequently, that he should avoid attracting public attention in going to or from the prison, and he is prohibited from giving to any person particulars on the subject of his duty for publication.
In late December 1932 Pierrepoint undertook his first execution. His uncle Tom had been contracted by the government of the Irish Free State for the hanging of Patrick McDermott, a young Irish farmer who had murdered his brother; Tom was free to select his own assistant as it was outside Britain, and took Pierrepoint with him. They travelled to the Mountjoy Prison, Dublin for the hanging. It was scheduled for 8:00 am, and took less than a minute to perform. Pierrepoint's job as assistant was to follow the prisoner onto the scaffold, bind the prisoner's legs together, then step back off the trapdoor before the lead executioner sprang the mechanism.
For the remainder of the 1930s Pierrepoint worked in the grocery business and as an assistant executioner. Most of his commissions were with his uncle Tom, from whom Pierrepoint learned much. He was particularly impressed with his uncle's approach and demeanour, which were dignified and discreet; he also followed Tom's advice "if you can't do it without whisky, don't do it at all."
In July 1940 Pierrepoint was the assistant at the execution of Udham Singh, an Indian revolutionary who had been convicted of shooting the colonial administrator Sir Michael O'Dwyer. The day before the execution, Stanley Cross, the newly promoted lead executioner, became confused with his calculations of the drop length, and Pierrepoint stepped in to advise on the correct measurements; Pierrepoint was added to the list of head executioners soon after.
### As lead executioner, 1940–1956
In October 1941 Pierrepoint undertook his first execution as lead executioner when he hanged the gangland killer Antonio "Babe" Mancini. He followed the routine as established by Home Office guidelines, and as followed by his predecessors. He and his assistant arrived at the prison the day before the execution, when he was told the height and weight of the prisoner; he viewed the condemned man through the "Judas hole" in the door to judge his build. Pierrepoint then went to the execution room—normally next to the condemned cell—where he tested the equipment using a sack that weighed about the same as the prisoner; he calculated the length of the drop using the Home Office Table of Drops, making allowances for the man's physique if necessary. He left the weighted sack hanging overnight to stretch the rope, re-adjusting the drop in the morning if needed.
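The Table of Drops itself was a published lookup table, but its underlying logic can be sketched as keeping the energy delivered at the end of the fall roughly constant, so that the drop length varies inversely with the prisoner's weight. The sketch below is illustrative only: the 1,260 foot-pound constant is a figure commonly quoted in connection with the later Home Office tables, not the table itself, and the function name is an assumption; in practice the executioner also adjusted for the prisoner's build, as described above.

```python
# Illustrative sketch of the inverse-weight rule often used to summarise
# the Home Office Table of Drops. Not the actual table: the 1,260 ft-lb
# energy figure is a commonly cited approximation, and real calculations
# included judgement-based adjustments for the prisoner's physique.

def approximate_drop_feet(weight_lb: float, energy_ft_lb: float = 1260.0) -> float:
    """Return an approximate drop length in feet for a given body weight.

    Keeping energy (weight x drop) constant means heavier prisoners
    received shorter drops.
    """
    if weight_lb <= 0:
        raise ValueError("weight must be positive")
    return energy_ft_lb / weight_lb

# A 160 lb prisoner: 1260 / 160 = 7.875, i.e. roughly a 7 ft 10 in drop
print(round(approximate_drop_feet(160), 2))  # prints 7.88
```

The inverse relationship is the point: a 126 lb prisoner would get about a 10 ft drop, while a 200 lb prisoner would get closer to 6 ft, the aim in each case being a clean break of the neck rather than strangulation or decapitation.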
On the day of the execution, the practice was for Pierrepoint, his assistant and two prison officers to enter the condemned man's cell at 8:00 am. Pierrepoint secured the man's arms behind his back with a leather strap, and all five walked through a second door, which led to the execution chamber. The prisoner was walked to a marked spot on the trapdoor whereupon Pierrepoint placed a white hood over the prisoner's head and a noose around his neck. The metal eye through which the rope was looped was placed under the left jawbone which, when the prisoner dropped, forced the head back and broke the spine. Pierrepoint pushed a large lever, releasing the trapdoor. From entering the condemned man's cell to opening the trapdoor took him a maximum of 12 seconds. The neck was broken in almost exactly the same position in each hanging—the hangman's fracture.
#### War-related executions
During the Second World War Pierrepoint hanged 15 German spies, as well as US servicemen found guilty by courts martial of committing capital crimes in England. In December 1941, he executed the German spy Karel Richter at Wandsworth prison. When Pierrepoint entered the condemned man's cell for the hanging, Richter stood up, threw aside one of the guards and charged headfirst at the stone wall. Stunned momentarily, he rose and shook his head. After Richter struggled with the guards, Pierrepoint managed to get the leather strap around Richter's wrists. Richter burst the leather strap from eye-hole to eye-hole and was free again. After another struggle, the strap was wrapped tightly around his wrists. He was brought to the scaffold where a strap was wrapped around his ankles, followed by a cap and noose. Just as Pierrepoint pushed the lever, Richter jumped up with bound feet. As Richter plummeted through the trapdoor, Pierrepoint could see that the noose had slipped, but it became stuck under Richter's nose. Despite the unusual position of the noose, the prison medical officer determined that it was an instantaneous, clean death. Writing about the execution in his memoirs, Pierrepoint called it "my toughest session on the scaffold during all my career as an executioner". The broken strap was given to Pierrepoint as a souvenir; he used it occasionally for what he thought were meaningful executions.
In August 1943 Pierrepoint married Anne Fletcher after a courtship of five years. He did not tell her about his role of executioner until a few weeks after the nuptials, when he was flown to Gibraltar to hang two saboteurs; on his return he explained the reason for his absence and she accepted it, saying that she had known about his second job all along, after hearing gossip locally.
In late 1945, following the liberation of the Bergen-Belsen concentration camp and the subsequent trial of the camp's officials and functionaries, Pierrepoint was sent to Hamelin, Germany to carry out the executions of eleven of those sentenced to death, plus two other German war criminals convicted of murdering an RAF pilot in the Netherlands in March 1945. He disliked any publicity connected to his role and was unhappy that his name had been announced to the press by General Sir Bernard Montgomery. When he flew to Germany, he was followed across the airfield by the press, which he described as being "as unwelcome as a lynch mob". He was given the honorary military rank of lieutenant colonel and, on 13 December, he first executed the women individually, then the men two at a time. Pierrepoint travelled several times to Hamelin, and between December 1948 and October 1949 he executed 226 people, often over 10 a day, and on several occasions groups of up to 17 over 2 days.
Six days after the Belsen hangings in December 1945, Pierrepoint hanged John Amery at Wandsworth prison. Amery, the eldest son of the cabinet minister Leo Amery, was a Nazi sympathiser who had visited prisoner-of-war camps in Germany to recruit allied prisoners for the British Free Corps; he had also broadcast to Britain to encourage men to join the Nazis. He was found guilty of treason. On 3 January 1946 Pierrepoint hanged William Joyce, also known as Lord Haw-Haw, who had been given the death sentence for high treason, although it was questionable whether he was a British citizen, and therefore subject to the charge. The following day Pierrepoint hanged Theodore Schurch, a British soldier who had been found guilty under the Treachery Act 1940. Joyce was the last person to be executed in Britain for treason; the death penalty for treason was abolished with the introduction of the Crime and Disorder Act 1998. Schurch was the last person to be hanged in Britain for treachery, and the last to be hanged for any offence other than murder.
In September 1946 Pierrepoint travelled to Graz, Austria, to train staff at Karlau Prison in the British form of long-drop hanging. Previously, the Austrians had used a shorter drop, which left the condemned to choke to death rather than die quickly of a broken neck, as the long drop is designed to achieve. He undertook four double executions of prisoners, with his trainees acting as assistants. Despite Pierrepoint's expertise as an executioner and his experience hanging the German war criminals at Hamelin, he was not selected to carry out the sentences handed down at the Nuremberg trials; the job went to an American, Master Sergeant John C. Woods, who was relatively inexperienced. The press was invited to observe the process, and pictures were later circulated that suggested the hangings had been poorly done. Wilhelm Keitel took 20 minutes to die after the trapdoor opened; the trap was not wide enough, so that some of the men hit its edges as they fell (more than one man's nose was torn off in the process), and others were strangled rather than having their necks broken.
#### Post-war executions
After the war, Pierrepoint left the delivery business and took over the lease of a pub, the Help the Poor Struggler on Manchester Road, in the Hollinwood area of Oldham. In the 1950s he left it and took on the lease of the larger Rose and Crown at Much Hoole, near Preston, Lancashire. He later said that he changed his main occupation because:
> I wanted to run my own business so that I should be under no obligation when I took time off. ... I could take a three o'clock plane from Dublin after conducting an execution there and be opening my bar without comment at half past five.
In 1948 Parliament debated a new Criminal Justice Bill, which raised the question of whether to continue with the death penalty. While the debates were proceeding, no executions took place, and Pierrepoint worked solely in his pub. When the bill failed in the House of Lords, hangings resumed after a nine-month gap. The following year, the Home Secretary, Chuter Ede, set up a Royal Commission to look into capital punishment in the UK. Pierrepoint gave evidence in November 1950, which included staging a mock hanging at Wandsworth prison for the commission members. The commission's report was published in 1953 and resulted in the Homicide Act 1957, which reduced the grounds for execution by differentiating between capital and non-capital homicide.
From the late 1940s and into the 1950s, Pierrepoint, Britain's most experienced executioner, carried out several more hangings, including those of prisoners described by his biographer, Brian Bailey, as "the most notorious murderers of the period ... [and] three of the most controversial executions in the latter years of the death penalty." In August 1949 he hanged John Haigh, nicknamed "the Acid Bath Murderer" because he had dissolved the bodies of his victims in sulphuric acid; Haigh admitted to nine murders and tried to avoid hanging by claiming insanity, saying that he had drunk his victims' blood. The following year Pierrepoint hanged James Corbitt, one of the regular customers at Pierrepoint's pub; the two had sung duets together, Pierrepoint calling Corbitt "Tish" and Corbitt returning the nickname "Tosh". In his autobiography, Pierrepoint considered the matter:
> As I polished the glasses, I thought if any man had a deterrent to murder poised before him, it was this troubadour whom I called Tish, coming to terms with his obsessions in the singing room of Help The Poor Struggler. He was not only aware of the rope, he had the man who handled it beside him, singing a duet. ... The deterrent did not work. He killed the thing he loved.
In March 1950 Pierrepoint hanged Timothy Evans, a 25-year-old man with the vocabulary of a 14-year-old and the mental age of a 10-year-old. Evans was arrested for the murder of his wife and daughter at their home, the top-floor flat of 10 Rillington Place, London. His statements to the police were contradictory: he told them both that he had killed his wife and that he was innocent. He was tried and convicted for the murder of his daughter. Three years later Evans's landlord, John Christie, was arrested for the murder of several women, whose bodies he had hidden in the house. He subsequently admitted to the murder of Evans's wife, but not the daughter. Pierrepoint hanged him in July 1953 in Pentonville Prison, but the case showed that Evans's conviction and hanging had been a miscarriage of justice. The matter led to further questions on the use of the death penalty in Britain.
In the months before he hanged Christie, Pierrepoint undertook another controversial execution, that of Derek Bentley, a 19-year-old man who had been an accomplice of Christopher Craig, a 16-year-old boy who shot and killed a policeman. Bentley was described in his trial as:
> a youth of low intelligence, shown by testing to be just above the level of a feeble-minded person, illiterate, unable to read or write, and when tested in a way which did not involve scholastic knowledge shown to have a mental age between 11 and 12 years.
At the time the policeman was shot, Bentley had been under arrest for 15 minutes, and the words he said to Craig—"Let him have it, Chris"—could have been taken either as an incitement to shoot or as an instruction for Craig to hand his gun over (a policeman had asked him to hand the gun over just beforehand). Bentley was found guilty under the English law principle of joint enterprise.
Pierrepoint hanged Ruth Ellis for murder in July 1955. Ellis was in an abusive relationship with David Blakely, a racing driver; she shot him four times after what her biographer, Jane Dunn, called "three days of sleeplessness, panic, and pathological jealousy, fuelled by quantities of Pernod and a reckless consumption of tranquillizers". The case attracted great interest from the press and public. The matter was discussed in Cabinet and a petition of 50,000 signatures was sent to the Home Secretary, Gwilym Lloyd George, to ask for a reprieve; he refused to grant one. Ellis was the last woman to be hanged in Britain. Two weeks after Ellis's execution, Pierrepoint hanged Norman Green, who had confessed to killing two boys in the Wigan area; it was Pierrepoint's last execution.
### Retirement and later life
In early January 1956 Pierrepoint travelled to Manchester for another execution and paid for staff to cover the bar in his absence. He spent the afternoon in the prison calculating the drop and setting the rope to the right length. That evening the prisoner was given a reprieve. Pierrepoint left the prison and, because of heavy snow, stayed overnight in a local hotel before returning home. Two weeks later he received from the instructing sheriff a cheque for his travelling expenses, but not his execution fee. He wrote to the Prison Commissioners to point out that he had received a full fee in other cases of reprieve, and that he had spent additional money in employing bar staff. The Commissioners advised him to speak to the instructing sheriff, as the fee was the sheriff's responsibility, not theirs; they also reminded him that under his conditions of employment he was paid only for an execution, not in the case of a reprieve. Shortly afterwards he received a letter from the sheriff offering £4 as a compromise. On 23 February he replied to the Prison Commissioners, informing them that he was resigning with immediate effect, and requested that his name be taken off the list of executioners.
There were soon rumours in the press that his resignation was connected with the hanging of Ellis. In his autobiography he denied this was the case:
> At the execution of Ruth Ellis no untoward incident happened which in any way appalled me or anyone else, and the execution had absolutely no connection with my resignation seven months later. Nor did I leave the list, as one newspaper said, by being arbitrarily taken off it, to shut my mouth, because I was about to reveal the last words of Ruth Ellis. She never spoke.
Pierrepoint's autobiography does not give any reasons for his resignation; he states that the Prison Commissioners asked him to keep the details private. The Home Office contacted the Sheriff of Lancashire, who paid Pierrepoint the full fee of £15 for his services, but he remained adamant that he was retiring. He had received an offer of £30,000 to £40,000 from the Empire News and Sunday Chronicle to publish weekly stories about his experiences. The Home Office considered prosecuting him under the Official Secrets Act 1939, but when two of the stories appeared containing information that contradicted the recollections of other witnesses, it decided not to do so. Instead pressure was put on the publishers, who stopped the stories.
Pierrepoint and his wife ran their pub until they retired to the seaside town of Southport in the 1960s. In 1974 he published his autobiography, Executioner: Pierrepoint. He died on 10 July 1992, aged 87, in the nursing home where he had lived for the last four years of his life.
## Views on capital punishment
In his 1974 autobiography, Pierrepoint changed his view on capital punishment, and wrote that hanging:
> ... is said to be a deterrent. I cannot agree. There have been murders since the beginning of time, and we shall go on looking for deterrents until the end of time. If death were a deterrent, I might be expected to know. It is I who have faced them last, young lads and girls, working men, grandmothers. I have been amazed to see the courage with which they take that walk into the unknown. It did not deter them then, and it had not deterred them when they committed what they were convicted for. All the men and women whom I have faced at that final moment convince me that in what I have done I have not prevented a single murder.
In a 1976 interview with BBC Radio Merseyside, Pierrepoint expressed uncertainty about those sentiments, saying that when the autobiography was originally written, "there was not a lot of crime. Not like there is today. I am now honestly on a balance and I don't know which way to think because it changes every day." Pierrepoint's position as an opponent of capital punishment was questioned by his long-time former assistant, Syd Dernley, in his 1989 autobiography, The Hangman's Tale:
> Even the great Pierrepoint developed some strange ideas in the end. I do not think I will ever get over the shock of reading in his autobiography, many years ago, that like the Victorian executioner James Berry before him, he had turned against capital punishment and now believed that none of the executions he had carried out had achieved anything! This from the man who proudly told me that he had done more jobs than any other executioner in English history. I just could not believe it. When you have hanged more than 680 people, it's a hell of a time to find out you do not believe capital punishment achieves anything!
## Approach and legacy
Pierrepoint described his approach to hanging in his autobiography. He did so in what Lizzie Seal, a reader in Criminology, calls "quasi-religious language", including the phrase that a "higher power" selected him as an executioner. When asked by the Royal Commission about his role, he replied that "It is sacred to me". In his autobiography, Pierrepoint describes his ethos thus:
> I have gone on record ... as saying that my job is sacred to me. That sanctity must be most apparent at the hour of death. A condemned prisoner is entrusted to me, after decisions have been made which I cannot alter. He is a man, she is a woman, who, the church says, still merits some mercy. The supreme mercy I can extend to them is to give them and sustain in them their dignity in dying and death. The gentleness must remain.
Brian Bailey highlights Pierrepoint's phrasing relating to hangings; the autobiography reads "I had to hang Derek Bentley", "I had to execute John Christie" and "I had to execute Mrs Louisa Merrifield". Bailey comments that Pierrepoint "never had to hang anybody".
The exact number of people executed by Pierrepoint has never been established. Bailey, in the Oxford Dictionary of National Biography, and Leonora Klein, one of his biographers, state it was over 400; Steven Fielding, another biographer, puts the figure at 435, based on the Prison Execution Books held at The National Archives; the obituarists of The Times and The Guardian put it at 17 women and 433 men. The Irish Times puts the figure at 530 people, The Independent considers it to be 530 men and 20 women, while the BBC states it is "up to 600" people.
In addition to his 1974 autobiography, Pierrepoint has been the subject of several biographies, either focusing on him, or alongside other executioners. These include Pierrepoint: A Family of Executioners by Fielding, published in 2006, and Leonora Klein's 2006 book A Very English Hangman: The Life and Times of Albert Pierrepoint. There have been several television and radio documentaries about or including Pierrepoint, and he has been portrayed on stage and screen, and in literature.
On Pierrepoint's resignation, two assistant executioners were promoted to lead executioner: Jock Stewart and Harry Allen. Over the next seven years they carried out the remaining thirty-four executions in the UK. On 13 August 1964 Allen hanged Gwynne Evans at Strangeways Prison in Manchester for the murder of John Alan West; at the same time, Stewart hanged Evans's accomplice, Peter Allen, at Walton Gaol in Liverpool. They were the last hangings in English legal history. The following year the Murder (Abolition of Death Penalty) Act 1965 was passed, which imposed a five-year moratorium on executions. The temporary ban was made permanent on 18 December 1969.
## See also
- Locations of executions conducted by Albert Pierrepoint
- List of executioners
|
168,058 |
Antiochus X Eusebes
| 1,173,512,496 |
King of Syria
|
[
"110s BC births",
"1st-century BC Seleucid monarchs",
"88 BC deaths",
"92 BC deaths",
"Monarchs killed in action",
"Year of birth uncertain",
"Year of death uncertain"
] |
Antiochus X Eusebes Philopator (Ancient Greek: Ἀντίοχος Εὐσεβής Φιλοπάτωρ; c. 113 BC – 92 or 88 BC) was a Seleucid monarch who reigned as King of Syria during the Hellenistic period between 95 BC and 92 BC or 89/88 BC (224 SE [Seleucid year]). He was the son of Antiochus IX and perhaps his Egyptian wife Cleopatra IV. Eusebes lived during a period of general disintegration in Seleucid Syria, characterized by civil wars, foreign interference by Ptolemaic Egypt and incursions by the Parthians. Antiochus IX was killed in 95 BC at the hands of Seleucus VI, the son of his half-brother and rival Antiochus VIII. Antiochus X then went to the city of Aradus where he declared himself king. He avenged his father by defeating Seleucus VI, who was eventually killed.
Antiochus X did not enjoy a stable reign as he had to face three of Seleucus VI's brothers, Antiochus XI, Philip I and Demetrius III. Antiochus XI defeated Antiochus X and expelled him from the capital Antioch in 93 BC. A few months later, Antiochus X regained his position and killed Antiochus XI. This led to both Philip I and Demetrius III becoming involved. The civil war continued, but its final outcome is uncertain due to contradictions between the accounts of different ancient historians. Antiochus X married his stepmother, Antiochus IX's widow Cleopatra Selene, and had several children with her, including the future king Antiochus XIII.
The death of Antiochus X is a mystery. The year of his demise is traditionally given by modern scholars as 92 BC, but other dates are also possible including the year 224 SE (89/88 BC). The most reliable account of his end is that of the first century historian Josephus, who wrote that Antiochus X marched east to fight off the Parthians who were attacking a queen called Laodice; the identity of this queen and who her people were continues to be debated. Other accounts exist: the ancient Greek historian Appian has Antiochus X defeated by the Armenian king Tigranes II and losing his kingdom; the third century historian Eusebius wrote that Antiochus X was defeated by his cousins and escaped to the Parthians before asking the Romans to help him regain the throne. Modern scholars prefer the account of Josephus and question practically every aspect of the versions presented by other ancient historians. Numismatic evidence shows that Antiochus X was succeeded in Antioch by Demetrius III, who controlled the capital in c. 225 SE (88/87 BC).
## Background, early life and name
The second century BC witnessed the disintegration of the Syria-based Seleucid Empire due to never-ending dynastic feuds and foreign Egyptian and Roman interference. Amid constant civil wars, Syria fell to pieces. Seleucid pretenders fought for the throne, tearing the country apart. In 113 BC, Antiochus IX declared himself king in opposition to his half-brother Antiochus VIII. The siblings fought relentlessly for a decade and a half until Antiochus VIII was killed in 96 BC. The following year, Antiochus VIII's son Seleucus VI marched against Antiochus IX and killed him near the Syrian capital Antioch.
Egypt and Syria attempted dynastic marriages to maintain a degree of peace. Antiochus IX married several times; known wives are his cousin Cleopatra IV of Egypt, whom he married in 114 BC, and her sister Cleopatra Selene, the widow of Antiochus VIII. Some historians, such as John D. Grainger, maintain the existence of a first wife unknown by name who was the mother of Antiochus X. Others, such as Auguste Bouché-Leclercq, believe that the first wife of Antiochus IX and the mother of his son was Cleopatra IV, in which case Antiochus X would have been born in c. 113 BC. None of those assertions are based on evidence, and the mother of Antiochus X is not named in ancient sources. Antiochus is a Greek name meaning "resolute in contention". The capital Antioch received its name in deference to Antiochus, the father of the Seleucid dynasty's founder Seleucus I; the name became dynastic and many Seleucid kings bore it.
## Reign
According to Josephus, following the death of his father, Antiochus X went to the city of Aradus where he declared himself king; it is possible that Antiochus IX, before facing Seleucus VI, sent his son to that city for protection. Aradus had been an independent city since 137 BC, meaning that Antiochus X made an alliance with it, since he would not have been able to subdue it by force at that stage of his reign. As the descendants of Antiochus VIII and Antiochus IX fought over Syria, they portrayed themselves in the likeness of their respective fathers to indicate their legitimacy; Antiochus X's busts on his coins show him with a short nose that ends with an up-turn, like his father's. Ancient Hellenistic kings did not use regnal numbers; instead, they usually employed epithets to distinguish themselves from other rulers with similar names, and the numbering of kings is mostly a modern practice. On his coins, Antiochus X appeared with the epithets Eusebes (the pious) and Philopator (father-loving). According to Appian, the Syrians gave the king the epithet Eusebes because he escaped a plot on his life by Seleucus VI; officially, he was thought to have survived because of his piety, but in reality he was saved by a prostitute who was in love with him.
Beginning his reign in 218 SE (95/94 BC), Antiochus X was deprived of resources and lacked a queen. He therefore married a woman who could provide what he needed: his stepmother Cleopatra Selene. Antiochus X was probably no more than twenty years old, while his wife was in her forties. The union was not unprecedented in the Seleucid dynasty, as Antiochus I had married his stepmother Stratonice, but the marriage was nevertheless scandalous. Appian thought the real reason behind the epithet "Eusebes" was a joke by the Syrians, mocking Antiochus X's piety, since he showed loyalty to his father by bedding his widow. Appian concluded that it was "divine vengeance" for this marriage that eventually led to Antiochus X's fall.
### First reign in Antioch
One of Antiochus X's first actions was to avenge his father; in 94 BC, he advanced on the capital Antioch and drove Seleucus VI out of northern Syria into Cilicia. According to Eusebius, the final battle between Antiochus X and Seleucus VI took place near the Cilician city of Mopsuestia, ending in Antiochus X's victory while Seleucus VI took refuge in the city, where he perished as a result of a popular revolt.
During the Seleucid period, currency struck during campaigns against a rival or a usurper showed the king bearded; what seems to be the earliest bronze coinage of Antiochus X shows him with a curly beard, while later currency, apparently meant to show the king in firm control of his realm, depicts him clean-shaven. Early in 93 BC, the brothers of Seleucus VI, Antiochus XI and Philip I, avenged him by sacking Mopsuestia. Antiochus XI then advanced on Antioch, defeated Antiochus X, and expelled him from the city, reigning alone in the capital for a few months.
### Second reign in Antioch
Antiochus X recruited new soldiers and attacked Antioch the same year. He emerged victorious, while Antiochus XI drowned in the Orontes River as he tried to flee. Now Antiochus X ruled northern Syria and Cilicia; around this time, Mopsuestia minted coins with the word "autonomous" inscribed. This new political status seems to have been a privilege bestowed upon the city by Antiochus X, who, as a sign of gratitude for Mopsuestia's role in eliminating Seleucus VI, apparently not just rebuilt it, but also compensated it for the damage it suffered at the hands of Seleucus VI's brothers. In the view of the numismatist Hans von Aulock [de], some coins minted in Mopsuestia may carry a portrait of Antiochus X. Other cities minted their own civic coinage under the king's rule, including Tripolis, Berytus, and perhaps the autonomous city of Ascalon.
In the capital, Antiochus X might have been responsible for building a library and an attached museum on the model of the Library of Alexandria. Philip I was probably centered at Beroea; his brother, Demetrius III, who ruled Damascus, supported him and marched north probably in the spring of 93 BC. Antiochus X faced fierce resistance from his cousins. In the year 220 SE (93/92 BC), the city of Damascus stopped issuing coins in the name of Demetrius III, then resumed the following year; this could have been the result of incursions by Antiochus X, which weakened his cousin and made Damascus vulnerable to attacks by the Judaean king Alexander Jannaeus.
## Children
The Roman statesman Cicero wrote about two sons of Antiochus X and Cleopatra Selene who visited Rome during his time (between 75 and 73 BC); one of them was named Antiochus. The king might have also fathered a daughter with his wife; according to the first century historian Plutarch, the Armenian king Tigranes II, who killed Cleopatra Selene in 69 BC, "put to death the successors of Seleucus, and [carried] off their wives and daughters into captivity". This statement makes it possible to assume that Antiochus X had at least one daughter with his wife.
- Antiochus XIII: mentioned by Cicero. His epithets raised questions about how many sons with that name Antiochus X fathered; when Antiochus XIII issued coins as a sole ruler, he used the epithet Philadelphos ("brother-loving"), but on jugate coins that show Cleopatra Selene as regent along with a ruling son named Antiochus, the epithet Philometor ("mother-loving") is used. The historian Kay Ehling [de], agreeing with the view of Bouché-Leclercq, argued that two sons, both named Antiochus, resulted from the marriage of Antiochus X and Cleopatra Selene. Cicero, on the other hand, left one of the brothers unnamed, and clearly stated that Antiochus was the name of only one prince. Ehling's theory is possible but only if "Antiochus Philometor" was the prince named by Cicero, and the brother, who had a different name, assumed the dynastic name Antiochus with the epithet Philadelphos when he became king following the death of Antiochus Philometor. In the view of the historian Adrian Dumitru, such a scenario is complicated; more likely, Antiochus XIII bore two epithets, Philadelphos and Philometor. Several numismatists, such as Oliver D. Hoover, Catharine Lorber and Arthur Houghton, agree that both epithets denoted Antiochus XIII.
- Seleucus VII: the numismatist Brian Kritt deciphered and published a newly discovered jugate coin bearing the portrait of Cleopatra Selene and a co-ruler in 2002. Kritt's reading gave the name of King Seleucus Philometor and, considering the epithet, which means "mother-loving", equated him with the unnamed son mentioned by Cicero. Kritt gave the newly discovered king the regnal name Seleucus VII. Some scholars, such as Lloyd Llewellyn Jones and Michael Roy Burgess [de], accepted the reading, but Hoover rejected it as the coin is badly damaged and some of the letters cannot be read. Hoover proposed a different reading in which the king's name is Antiochus, to be identified with Antiochus XIII.
- Seleucus Kybiosaktes: the unnamed son mentioned by Cicero does not appear in other ancient literature. Seleucus Kybiosaktes, a man who appeared in Egypt as a husband of its queen Berenice IV, is identified by modern scholarship with the unnamed prince. According to the first century BC historian Strabo, Kybiosaktes pretended to be of Seleucid descent. Kritt considered it plausible to identify Seleucus VII with Seleucus Kybiosaktes.
## Conflict with Parthia and demise
Information about Antiochus X after the interference of Demetrius III is scanty. Ancient sources and modern scholars present different accounts of, and dates for, the demise of the king. Antiochus X's end as told by Josephus, which has the king killed during a campaign against the Parthians, is considered the most reliable by modern historians. Towards the end of his reign, Antiochus X increased his coin production, which could be related to the campaign against Parthia recorded by Josephus. The Parthians were advancing in eastern Syria in the time of Antiochus X, which would have made it important for the king to counter-attack, thus strengthening his position in the war against his cousins. The majority of scholars accept the year 92 BC for Antiochus X's end.
### Year of death
No known coins issued by the king in Antioch contain a date. Josephus wrote that the king fell soon after Demetrius III's interference, but this statement is vague. Most scholars, such as Edward Theodore Newell, understood Josephus's statement to indicate 92 BC. According to Hoover, the dating of Newell is apparently based on combining the statement of Josephus with that of Eusebius, who wrote that Antiochus X was ejected from the capital in 220 SE (93/92 BC) by Philip I. Hoover considered Newell's dating hard to accept; a market weight from Antioch bearing Antiochus X's name, from 92 BC, might contradict the dating of 220 SE (93/92 BC). On the other hand, in the year 221 SE (92/91 BC), the city of Antioch issued civic coinage mentioning no king; Hoover noted that the civic coinage mentions Antioch as the "metropolis" but not as autonomous, and this might be explained as a reward from Antiochus X bestowed upon the city for supporting him in his struggle against his cousins.
In 2007, using a methodology based on estimating the annual die usage average rate (the Esty formula), Hoover proposed the year 224 SE (89/88 BC) for the end of Antiochus X's reign. Later in 2011, Hoover noted that this date is hard to accept considering that during Antiochus X's second reign in the capital, only one or two dies were used per year, far too few for the Seleucid average rate to justify a long reign. Hoover then noted that there seem to be several indications that the coinage of Antiochus X's second reign in the capital, along with the coinages of Antiochus XI and Demetrius III, were re-coined by Philip I who eventually took Antioch c. 87 BC, thus explaining the rarity of those kings' coins. Hoover admitted that his conclusion is "troubling". The historian Marek Jan Olbrycht [pl] considered Hoover's dating and arguments too speculative, as they contradict ancient literature.
### Manner of death
The manner of the king's death varies depending on which ancient account is used. The main ancient historians providing information on Antiochus X's end are Josephus, Appian, Eusebius and Saint Jerome:
The account of Josephus: "For when he was come as an auxiliary to Laodice, queen of the Gileadites, when she was making war against the Parthians, and he was fighting courageously, he fell." The Parthians might have been allied with Philip I. The people of Laodice, their location, and who she was are hard to determine, as surviving manuscripts of Josephus's work transmit different names for the people. Gileadites is an older designation based on the Codex Leidensis (Lugdunensis) manuscript of Josephus's work, but the academic consensus uses the designation Sameans, based on the Codex Palatinus (Vaticanus) Graecus manuscript.
- Based on the reading Gileadites: In the view of Bouché-Leclercq, the division of Syria between Antiochus X and his cousins must have tempted the Parthian king Mithridates II to annex the kingdom. Bouché-Leclercq, agreeing with the historian Alfred von Gutschmid, identified the mysterious queen with Antiochus X's cousin Laodice, daughter of Antiochus VIII, and wife of Mithridates I, the king of Commagene, which had recently detached from the Seleucids, and suggested that Laodice resided in Samosata. Bouché-Leclercq hypothesized that Antiochus X did not go to help his rivals' sister, but to stop the Parthians before they reached his own borders. The historian Adolf Kuhn, on the other hand, considered it implausible that Antiochus X would support a daughter of Antiochus VIII and he questioned the identification with the queen of Commagene. Ehling, attempting to explain Antiochus X's assistance of Laodice, suggested that the queen was a daughter of Antiochus IX, a sister of Antiochus X.
- Based on the reading Sameans: the historian Josef Dobiáš [cs] considered Laodice a queen of a nomadic tribe based on the similarities between the name from the Codex Palatinus (Vaticanus) Graecus with the Samènes, a people mentioned by the sixth century geographer Stephanus of Byzantium as an Arab nomadic tribe. This would solve the problems posed by the identification with the queen of Commagene, and end the debate regarding the location of the people, as the nature of their nomadic life makes it impossible to determine exactly the place where the fight took place. Dobiáš attributed the initiative to Antiochus X who was not merely trying to defend his borders but actively attacking the Parthians.
The account of Appian: Antiochus X was expelled from Syria by Tigranes II of Armenia. Appian gave Tigranes II a reign of fourteen years in Syria ending in 69 BC, the year which witnessed the Armenian king's retreat due to a war with the Romans. Hence, based on Appian's account, the invasion of Syria by Tigranes probably took place in 83 BC. Bellinger dismissed this account, considering that Appian confused Antiochus X with his son Antiochus XIII. Kuhn considered a confusion between father and son to be out of the question because Appian mentioned the epithet Eusebes when discussing the fate of Antiochus X. In Kuhn's view, Antiochus X retreated to Cilicia after being defeated by Tigranes II, and his sons ruled that region after him and were reported visiting Rome in 73 BC. However, numismatic evidence proves that Demetrius III controlled Cilicia following the demise of Antiochus X, and that Tarsus minted coins in his name c. 225 SE (88/87 BC). The Egyptologist Christopher J. Bennett considered it possible that Antiochus X retreated to Ptolemais after being defeated by Tigranes, since it became his widow's base. In his history, Appian failed to mention the reigns of Demetrius III and Philip I in the capital, which preceded the reign of Tigranes II. According to Hoover, Appian's ignorance of the intervening kings between Antiochus X and Tigranes II might explain how he confused Antiochus XIII, who is known to have fled from the Armenian king, with his father.
- Eusebius and others: According to Eusebius, who used the account of the third-century historian Porphyry, Antiochus X was ejected from the capital by Philip I in 220 SE (93/92 BC) and fled to the Parthians. Eusebius added that following the Roman conquest of Syria, Antiochus X surrendered to Pompey, hoping to be reinstated on the throne, but the people of Antioch paid money to the Roman general to avoid a Seleucid restoration. Antiochus X was then invited by the people of Alexandria to rule jointly with the daughters of Ptolemy XII, but he died of illness soon after. This account has been questioned by many scholars, such as Hoover and Bellinger. The story told by Eusebius contains factual inaccuracies, as he wrote that in the same year Antiochus X was defeated by Philip I, he surrendered to Pompey, while at the same time Philip I was captured by the governor of Syria, Aulus Gabinius. However, Pompey arrived in Syria only in 64 BC, and left it in 62 BC, while Aulus Gabinius was appointed governor of Syria in 57 BC. Also, the part of Eusebius's account regarding the surrender to Pompey echoes the fate of Antiochus XIII; the writer seems to be confusing the fate of Antiochus X with that of his son. The second-century historian Justin, writing based on the work of the first-century BC historian Trogus, also confused the father and son, as he wrote that Antiochus X was appointed king of Syria by the Roman general Lucullus following the defeat of Tigranes II in 69 BC.
## Succession
It is known from numismatic evidence that Demetrius III eventually succeeded Antiochus X in Antioch. Eusebius's statement that Antiochus X was ejected from the capital by Philip I in 220 SE (93/92 BC) is contradicted by the coins of Demetrius III, who was not mentioned at all by Eusebius. Any suggestions that Philip I controlled Antioch before the demise of Demetrius III can be dismissed; in addition to the numismatic evidence, no ancient source claimed that Demetrius III had to push Philip I out of the city.
In 1949, a jugate coin of Cleopatra Selene and Antiochus XIII, from the collection of the French archaeologist Henri Arnold Seyrig, was dated by the historian Alfred Bellinger to 92 BC and ascribed to Antioch. Based on Bellinger's dating, some modern historians, such as Ehling, proposed that Cleopatra Selene enjoyed an ephemeral reign in Antioch between the death of her husband and the arrival of his successor. Bellinger doubted his own dating and the coin's place of issue in 1952, suggesting Cilicia instead of Antioch. This coin is dated by many twenty-first-century scholars to 82 BC.
## See also
- List of Syrian monarchs
- Timeline of Syrian history
|
59,401,226 |
James A. Ryder
| 1,169,511,427 |
19th-century American Jesuit
|
[
"1800 births",
"1860 deaths",
"19th-century American Jesuits",
"19th-century American educators",
"American Roman Catholic clergy of Irish descent",
"Burials at the Jesuit Community Cemetery",
"Christian clergy from Dublin (city)",
"Deans and Prefects of Studies of the Georgetown University College of Arts & Sciences",
"Georgetown University College of Arts & Sciences alumni",
"Irish emigrants to the United States",
"Pastors of St. John the Evangelist Catholic Church (Frederick, Maryland)",
"Philodemic Society members",
"Presidents of Georgetown University",
"Presidents of Saint Joseph's University",
"Presidents of the College of the Holy Cross",
"Provincial superiors of the Jesuit Maryland Province"
] |
James A. Ryder SJ (October 8, 1800 – January 12, 1860) was an American Catholic priest and Jesuit who became the president of several Jesuit universities in the United States. Born in Ireland, he immigrated with his widowed mother to the United States as a child, to settle in Georgetown, in the District of Columbia. He enrolled at Georgetown College and then entered the Society of Jesus. Studying in Maryland and Rome, Ryder proved to be a talented student of theology and was made a professor. He returned to Georgetown College in 1829, where he was appointed to senior positions and founded the Philodemic Society, becoming its first president.
In 1840, Ryder became the president of Georgetown College, and oversaw the construction of the university's Astronomical Observatory, as well as Georgetown's legal incorporation by the United States Congress. He earned a reputation as a skilled orator and preacher. His term ended in 1843 with his appointment as provincial superior of the Jesuit Maryland Province. As provincial, he laid the groundwork for the transfer of ownership of the newly established College of the Holy Cross from the Diocese of Boston to the Society of Jesus. Two years later, Ryder became the second president of the College of the Holy Cross, and oversaw the construction of a new wing. He returned to Georgetown in 1848 for a second term as president, during which he accepted a proposal from a group of local physicians to form the Georgetown School of Medicine, constructed a new home for Holy Trinity Church, and quelled a student rebellion.
In his later years, Ryder went to Philadelphia, where he assisted with the founding of Saint Joseph's College and became its second president in 1856. He became the pastor of St. John the Evangelist Church in Philadelphia, and then transferred to St. John the Evangelist Church in Frederick, Maryland, as pastor. Finally, he returned to Philadelphia, where he died in 1860.
## Early life
James Ryder was born on October 8, 1800, in Dublin, in the Kingdom of Ireland. He emigrated to the United States as a young boy with his mother, who had been widowed when James' father, a Protestant, died during his son's childhood. She took up residence in Georgetown, then a city in the newly formed District of Columbia. Ryder enrolled at Georgetown College on August 29, 1813, and entered the Society of Jesus in 1815 as a novice, at the age of fifteen. He began his novitiate at White Marsh Manor in Maryland, before being sent to Rome in the summer of 1820 by Peter Kenney, the apostolic visitor to the Jesuits' Maryland mission.
He was sent alongside five other American Jesuits, who would go on to become influential in the administration of the Society in the United States for several decades. Among these, Ryder and Charles Constantine Pise were identified as the most intellectually advanced. They left from Alexandria, Virginia, on June 6, 1820, and landed in Gibraltar to be quarantined, before traveling to Naples on July 13 and then on to Rome in late August, where Ryder studied theology and philosophy.
There, he was ordained a priest in 1824, and proceeded to teach theology at the Roman College. He then went to teach theology and sacred scripture at the University of Spoleto, where he remained for two years. He became a good friend of Archbishop Giovanni Mastai-Ferretti (later Pope Pius IX), who appointed him to the chair of philosophy. Ryder also spent part of 1828 teaching in Orvieto.
Ryder returned to the United States in 1829, where he took up a professorship in philosophy and theology at Georgetown, to teach Jesuit scholastics. He was named the prefect of studies, where he implemented an overhaul of the curriculum under the direction of President Thomas F. Mulledy; he was simultaneously made vice president of the school. It was during this time that Ryder founded the Philodemic Society, of which he became the first president.
Founded on January 17, 1830, it was the first collegiate debating society in the United States, and it was Ryder who selected the name. He was also appointed by Peter Kenney as minister and admonitor to Mulledy. In this role, he received a severe lecture from Kenney in 1832 for not properly welcoming six Belgian Jesuits who arrived at the college. In 1834, Ryder became a professor of rhetoric at the university.
In an 1835 speech to Catholics in Richmond, Virginia, he called upon Catholics to defend national unity, which included opposing the efforts of Northern abolitionists to abolish slavery in the South; he warned Catholics that they would themselves become victims of persecution if their "glorious system of national independence" were to be overthrown. The group gathered resolved that "slavery in the abstract" was evil, but that Catholic citizens were obligated to support the civil institutions of the United States. However, they also celebrated "the determination [of] our Southern brethren, in not condescending to discuss the question of slavery with those [Northern] fanatics."
## Georgetown College
### First presidency
The appointment of Ryder as president of Georgetown College was announced on May 1, 1840. His selection came despite concerns that he was more interested in giving talks and leading retreats than ensuring the institution was financially stable. Although he had the support of the Jesuit leadership, the Superior General of the Jesuits, Jan Roothaan, was worried that Ryder's American attitude in support of republicanism would take priority over his obedience to the Jesuits.
Succeeding Joseph A. Lopez, he entered office while the Provincial Council of Baltimore was in progress, and the council fathers who were gathered in Baltimore took the opportunity to visit Georgetown. As president, Ryder maintained strong connections with Washington's politicians. He had a particularly good relationship with the President of the United States, John Tyler, who enrolled his son at Georgetown, and whose sister converted to Catholicism. Their relationship was close enough that Ryder played a significant role in the unsuccessful attempt to have Tyler run as a Democrat in the 1844 presidential election. Upon assuming the presidency, Ryder inherited a significant debt of \$20,000, which he liquidated by 1842, at least part of it paid by Ryder himself from monies he earned lecturing. Ryder had gained a reputation for talent in preaching, which he did without notes. This was particularly admired by Archbishop Samuel Eccleston, and Roothaan cited it as a source of many conversions to Catholicism.
Word of his preaching reached President James Buchanan, who would attend his sermons and who received private instruction in Catholicism from him. Eventually, Ryder was described as the most well-known Catholic preacher in antebellum America. Twice during his presidency stones were thrown at him in the streets of Washington, one of these incidents occurring on April 26, 1844, as he was returning from the Capitol Building, where he had presided over the funeral of Representative Pierre Bossier. Such anti-Catholic aggression was the outgrowth of the Know Nothing movement in the United States.
Ryder oversaw the establishment of the Georgetown College Observatory in 1842, a project spearheaded by James Curley. The opening of the observatory attracted several renowned Jesuit scientists from Europe who were fleeing the Revolutions of 1848. Moreover, the College of the Holy Cross was established in Worcester, Massachusetts, in 1843, and Ryder sent Jesuits from Georgetown to teach there, while graduates of the new college received a degree from Georgetown until it was independently chartered by the Massachusetts General Court. Having been recognized by the United States Congress in 1815, the university was officially incorporated as the President and Directors of Georgetown College by an act of Congress in 1844, and Ryder was named as one of the five members of the corporation. His term came to an end on January 10, 1845, when he was succeeded by Samuel A. Mulledy.
### Second presidency
In 1848, Ryder was appointed president of Georgetown for a second time, replacing Thomas Mulledy. His first act was to build a new edifice for Holy Trinity Catholic Church in the Georgetown neighborhood, which was then located on college property. He also implemented his fervent support for temperance by prohibiting students from consuming alcohol on or off campus, and eventually applied this ban to the Jesuits as well. This unpopular policy was accompanied by a ban on smoking.
In the fall of 1849, Ryder was approached by four physicians who had been excluded from the Washington Infirmary and had established a new medical faculty. They asked that their faculty be incorporated into Georgetown as its medical department, creating the first Catholic medical school in the United States. Ryder accepted the proposition within a week, giving rise to the Georgetown College School of Medicine. He appointed the four petitioners as the first professors of the school on November 5, 1849, and the first classes were held in May 1851.
A rebellion broke out among the students in 1850. It began when members of the Philodemic Society held a meeting one day, in defiance of the prefect's order to the contrary. Ryder, who frequently left the college to preach, had been away for several weeks on a preaching tour. In response, the prefect suspended the society's meetings for one month. Upset at this decision, several members refused to perform their nightly reading at the refectory, and later threw stones in the dormitory. When Ryder returned, he expelled three students. One of these entered the refectory that night and incited the students to insurrection; they stormed a Jesuit's room. Forty-four of the students abandoned the college for downtown Washington and wrote Ryder that they would not return until the three were re-admitted and the prefect replaced. With the students' hotel bills mounting and going unpaid, Ryder convinced them to return to the college and quit the rebellion. He later replaced the prefect with Bernard A. Maguire.
Later that year, Ryder presided over the marriage of William Tecumseh Sherman and Eleanor Boyle Ewing. His presidency came to an end in 1851, and Ryder was replaced by Charles H. Stonestreet.
## Maryland provincial
In September 1843, while president of Georgetown, Ryder was appointed the provincial superior of the Maryland Province of the Society of Jesus, with the strong support of his predecessor, Francis Dzierozynski. Ryder supported the view that the Jesuits should sell their parochial property, leaving parish work to diocesan priests, and instead focus on education in cities.
At the same time, the Bishop of Boston, Benedict Joseph Fenwick, had become concerned with the cost of operating the newly established College of the Holy Cross. Therefore, he encouraged Ryder to accept ownership of the school on behalf of the Society of Jesus. The Superior General, Roothaan, delegated this decision to Ryder, who was initially hesitant to accept the college. By 1844, Ryder had privately decided to agree to the transfer, but this was not communicated to Fenwick, and the deal was not formally concluded until 1845, under Ryder's successor.
Ryder delegated much of his responsibility, though he remained in charge. He held the post until 1845; Jan Roothaan believed the province had to be put under the control of a European to rectify the compounding scandal and mismanagement that had begun under Thomas Mulledy. To that end, he was replaced by Peter Verhaegen of Belgium.
## College of the Holy Cross
After his first presidency at Georgetown ended in 1845, Ryder went to Rome to clear his name in light of suspicions about his relationship with a woman with whom he had exchanged letters. He traveled to Rome in January by way of New York City and France. In Italy, he recruited eight Jesuits to join him in the United States. One of these was a future president of the College of the Holy Cross, Anthony F. Ciampi. Upon Ryder's return, suspicions persisted, despite his defense that the correspondence involved only spiritual counseling; they finally ceased following Roothaan's order in 1847 that the correspondence end.
Upon returning to the United States, he was appointed by Bishop Fenwick as president of the College of the Holy Cross on October 9, 1845, succeeding the school's first president, Thomas F. Mulledy. As president, he oversaw the construction of an east wing at the college, in accordance with the original plan for the school, which contained a dining room, chapel, study hall, and dormitory. This wing was the only part of the school spared by a subsequent fire in 1852. In 1846, he saw to the burial of the founder of the institution, Fenwick, in the college cemetery, pursuant to his wishes. The number of students increased during his administration.
Ryder clashed with Thomas Mulledy during Mulledy's election as procurator of the Jesuits' Maryland province. As a result, he praised Ignatius Brocard's decision not to send Mulledy back to the College of the Holy Cross, where Mulledy was greatly disliked. The lack of discipline among the Jesuits at Holy Cross drew the commentary of both the Bishop of Boston, John Bernard Fitzpatrick, and Roothaan, who were particularly concerned with the propensity for drinking among the priests. Upon the end of his standard three-year term, Ryder was succeeded by John Early on August 29, 1848, and he returned to Georgetown.
## Later years
### Saint Joseph's College
In 1851, he moved to Philadelphia, where he assisted in the founding of Saint Joseph's College. He was made the pastor of St. John the Evangelist Church on September 30, 1855, when he replaced Richard Kinahan to become the first Jesuit in this position, and remained until he was succeeded by John McGuigan on October 4, 1858.
In the meantime, he was appointed the president of Saint Joseph's College in 1856, following its first president, Felix-Joseph Barbelin. Ryder sought to relocate the college from Willings Alley to the existing school building at St. John's, which would involve the transfer of ownership of the pro-cathedral from the Diocese of Philadelphia to the Jesuits; the diocese was unwilling to entertain this offer.
In light of the ongoing Know Nothing movement, Ryder was referred to for some time as "Doctor Ryder" rather than "Father Ryder". He also wore layman's clothes, such as a bow tie rather than a Roman collar, in accordance with the orders of Charles Stonestreet, the Maryland provincial, that the Jesuits should not wear their clerical attire. Ryder's tenure lasted only until 1857 before he was succeeded by James A. Ward. He was forced to resign the presidency due to his deteriorating health, though his likeness endures in the form of a gargoyle of Barbelin Hall.
### Pastoral work
Because of his oratorical skills, Ryder was sent to raise money for St. Joseph's College in California in 1852, where he raised \$5,000. While there, he fell ill, and briefly went to Havana, Cuba, and then to the Southern United States, where he recuperated for several months. He was then stationed at St. Joseph's until 1856, when he was made the rector of St. John the Evangelist Church in Frederick, Maryland.
In 1857, he was transferred to Alexandria, Virginia to do pastoral work, and he returned to Philadelphia in 1859 as spiritual prefect at St. Joseph's College. Ryder died on January 12, 1860, in the rectory of Old St. Joseph's Church in Philadelphia, following a brief illness. His body was transported back to Georgetown to be buried in the Jesuit Community Cemetery.
|
67,323,884 |
Powder House Island
| 1,173,093,730 |
Island in Michigan, United States
|
[
"1881 establishments in Michigan",
"Artificial islands of Michigan",
"Explosions in 1906",
"Industrial fires and explosions in the United States",
"Islands of Wayne County, Michigan",
"Islands of the Detroit River",
"River islands of Michigan"
] |
Powder House Island (also known as Dynamite Island) is an artificial island on the lower Detroit River in southeast Michigan, directly adjacent to the Canada–United States border. It was constructed in 1881 by the Dunbar & Sullivan Company to store explosives during their dredging of the Livingstone Channel, in a successful attempt to circumvent an 1880 court order forbidding the company to store explosives on nearby Fox Island.
Powder House Island was the location of dynamite storage sheds, as well as a dynamite factory and several ice houses. During this time, it was the site of a series of accidents, including fires in 1895 and 1919 (which both burned the island "to the water's edge"). Twenty short tons (18,000 kg) of the island's dynamite exploded in 1906 after two men "had been shooting with a revolver" near it; while there were no deaths (and only minor injuries to the two men), windows were shattered 3 mi (4.8 km) away and the explosion was clearly audible from 85 mi (137 km) away.
After the completion of the Livingstone Channel in 1912, the island continued to be used for storing explosives, including during later projects to deepen the channel in the 1930s. By the 1980s it was completely unused, and by 2015 the island was owned by the Michigan Department of Natural Resources, managed by its Wildlife Division as part of the Pointe Mouillee State Game Area, and accessible to the public for hunting.
## Geography
Powder House Island is contained within Grosse Ile Township, in Wayne County, Michigan. It is near the southern end of the Detroit River, closer to Lake Erie than to Lake St. Clair, and around 200 ft (0.04 mi; 60 m) from the water border with Canada. It is approximately 500 yd (1,500 ft; 460 m) east of Fox Island. Further to its west is Grosse Ile, beyond which is Trenton. To its east, across the Livingstone Channel, is the Canadian Bois Blanc Island (and further, Amherstburg in Ontario). The southern tip of Stony Island is around 700 yd (2,100 ft; 640 m) to the northeast; other islands in the vicinity include Sugar Island to the south and Elba Island to the southwest. The island, which is covered in foliage, is approximately 200 ft (60 m) from north to south, and 50 ft (15 m) from east to west, giving it an approximate area of 10,000 sq ft (930 m<sup>2</sup>; 0.23 acres). The United States Geological Survey (USGS) gave its elevation as 574 ft (175 m) above sea level in 1980. In Wayne County records, the island is listed as "Dynamite Island", in ZIP code 48138; it is contained within a single parcel (whose total area is given as 0.91 acres (0.37 ha)).
## History
### Background and first explosion
In the late 19th century, the Dunbar & Sullivan Company won a number of government contracts to widen and deepen shipping channels in the Detroit River, including the Livingstone Channel and Lime-Kiln Crossing. This work involved large amounts of blasting, due to limestone bedrock in the area; the nearby Fox Island was a natural choice for storage of explosive compounds. On December 12, 1879, the three tons (2,700 kg) of nitroglycerin stored on Fox Island detonated unexpectedly, destroying all structures on the island and leaving a crater 60 ft (18 m) wide and 16 ft (4.9 m) deep. The resultant shock wave shattered the windows of nearby houses, and was clearly audible in St. Clair some 60 mi (97 km) to the north.
### Injunction and second explosion
In March 1880, litigation related to the circumstances of the explosion resulted in an injunction being issued by the Wayne County chancery court in the case of Walter Crane v. Charles F. Dunbar et al. The injunction forbade the company's operators, Charles F. Dunbar and Daniel B. Reaume, to engage in "storing nitroglycerine or any other explosive material on Fox Island".
In order to continue work on the channel, it was necessary to store the explosives somewhere; Dunbar and Reaume requested that the injunction be dissolved. Shortly afterwards, however, another explosion occurred at the Lime-Kiln Crossing worksite on September 24, 1880, which shook houses in Amherstburg "to their foundations", and could be felt in the town of Essex 16 mi (26 km) away. Dunbar and Reaume's request was denied in November, and it became evident that a new location would need to be found or created.
### Construction of new island
After the injunction was issued, Dunbar & Sullivan resorted to storing their explosives on a scow anchored several hundred yards to the east of Fox Island. While this allowed work to continue, it was not a permanent solution. The scow had limited capacity; Dunbar & Sullivan had to purchase raw materials and manufacture dynamite, Hercules powder, and other explosive materials themselves at the work site. Storing dynamite would require a much larger facility, which was only practical if situated on solid land.
While Dunbar & Sullivan had been forbidden to store explosives on Fox Island, the location of the worksite meant that there were few other places to do so. The southern extension of Stony Island had not yet been constructed, and all other land within a reasonable distance of the worksite was inhabited. By 1881, houses had been built along the shore of Grosse Ile, and Hickory and Sugar Island were being used as campgrounds; on the Canadian side, Bois Blanc Island was being used for summer vacation homes. It was therefore decided that an artificial island would be constructed next to Fox Island, to which the 1880 injunction (which only stipulated that Dunbar & Sullivan not store explosives on Fox Island specifically) would not apply.
The risks involved in manufacturing and handling explosive devices on the putative artificial island would be largely identical to those incurred by the Fox Island facility—the explosions there had caused damage for miles around, and the new site was only a couple hundred yards away. No additional structures or mechanisms were planned to help contain or redirect the energy of an unintentional detonation; the primary difference between the two sites was that one of them had not existed when the ruling was made, and was therefore (ostensibly) not subject to it. It is unclear why government engineering authorities approved of this reasoning, or indeed whether they did at all; the Sixth Circuit Court of Appeals later said that "no formal action appears to have been taken by the government or any officer thereof, giving the defendant the right to erect the island [...] all that was shown was at most a verbal permission and an acquiescence on the part of the government officers in charge of the Lime-Kiln Crossing work".
Regardless, by May 1881, construction was underway: between eight and ten carpenters, directed by John P. Jones, were employed in the construction of a scow. This scow was tasked with carrying rock from channel excavation to a site several hundred yards to the east of Fox Island, and dumping it on the river floor. Eventually, the scow was scuttled atop the rock; the resultant mound was high enough to rise above water level.
Once it was large and solid enough to permit the erection of structures, the dynamite operations of Dunbar & Sullivan were moved to the island. While it was initially referred to as "Dunbar Island", it eventually became known as "Powder House Island".
Shanties were erected to contain and shelter the large quantities of dynamite required for channel excavation. They caught fire on April 21, 1895, and the island "burned to the water's edge". In 1904, it was reported that Canadian police had found American poachers, illegally fishing for sturgeon, living in a shack on the island. By 1906, twenty short tons (18,000 kg) of dynamite were stored on the island; a witness was quoted as saying "you could throw a cat through the cracks" in dynamite shanties of questionable quality.
### Third explosion
On June 27, 1906, the twenty short tons of dynamite in Dunbar & Sullivan's facilities exploded. Powder House Island was shaken by an explosion "so terrific in nature that the residents of the town and pleasure-seekers on adjacent islands thought it was an earthquake visitation". Two men, Henry Rogers and Theodore Perry, were injured; they had just left the island, and were 100 yd (91 m) from the shore, when a sudden explosion launched them from their catboat, tore the clothes from their backs, and caused severe burns and lacerations. The Detroit Free Press described an "immediate cessation of pleasure" which occurred among people in the immediate vicinity. Charles Stedman, a vacationer from Indiana on a trip with his wife and children to Bois Blanc Island, said:
> We were sprawled out in the shade of a tree [...] when the shock came. It was the most effective transformation scene I ever witnessed. The river in the vicinity of the dynamite houses was instantly lashed into a seething torrent. Rocks and spray shot hundreds of feet into the air, and the report was followed immediately by a shower of white that I afterward learned was limestone. Big trees were uprooted by the shock, and the one under which we were camped rocked ominously. All was confusion on the picnic grounds. Women shrieked in dismay, and repentant men fell upon their knees and began to offer fervent invocations for divine intercession.
In the aftermath of the explosion, thousands of windows were shattered on Grosse Ile alone, plate glass was broken 3 mi (4.8 km) away in Trenton, and work on the shipping channel was delayed due to the loss of blasting equipment. The shock wave from the explosion was felt as far as Cleveland, Ohio, 85 mi (137 km) away on the other side of Lake Erie. The island itself was described as a "wreck", which needed to be rebuilt with many scowloads of stone and mud. The cause of the explosion was not known with certainty; although it had been a hot day, the blast was suspected to be related to Rogers and Perry firing revolvers near the dynamite storage area immediately before it exploded (despite Rogers' claim several days later that the revolvers had been loaded with blanks).
> The men said they had been shooting with a revolver, and it is supposed that one of the bullets touched off the fireworks. While admitting that the explosion may have been caused by the heat, experts do not think this theory at all plausible.
The explosion has been the subject of misinformation: a June 28 article in the Detroit Free Press incorrectly claimed that the explosion had occurred on Fox Island. On July 6, the Yale Expositor, while correctly reporting that the explosion had occurred on Dynamite Island, claimed that there had been 12 short tons (11,000 kg) of explosives across two artificial islands, and that "a keg of one of the explosives was hurled into the central part of Grosse Ile and there exploded in a clump of woods, tearing century old oaks into splinters". A 2016 article in the Trenton Tribune would later incorrectly state that the explosion happened in 1907. These claims are contradicted by the court proceedings in the lawsuit arising from the explosion (Henderson v. Sullivan), which state that it took place in June 1906, from twenty tons of dynamite, on a single island (Powder House Island); these details were not disputed by the plaintiff, defendant, or judiciary.
### Second injunction
Several days after the explosion, construction began to rebuild the storage sheds. However, attorney Edwin Henderson filed a petition claiming that his house on Grosse Ile had been damaged by the explosion; on July 6, a temporary injunction was issued against the Dunbar & Sullivan Company preventing them from storing explosives on the island. Henderson requested that the court permanently enjoin Dunbar & Sullivan from storing any dynamite in the Detroit River, which was denied by the judge. Henderson appealed the case to the United States Court of Appeals in Cincinnati, which reversed the prior ruling and granted an injunction (albeit with limitations) in February 1908, with Judge John K. Richards giving the opinion:
> We think it apparent from the record that a reasonable amount of dynamite, for use in the public work, might be stored on Powder House Island without injuring persons and property in the neighborhood, and to the great interest of the public in the doing of the improvement of the Detroit River now going on, and so we think that, under proper limitations, an injunction ought to be granted; the judgement of the court below is therefore reversed and the case remanded, with instructions to grant an injunction restraining the defendant from storing dynamite on the island or place described in the bill as the place where the defendant had recently been storing it, in such quantity as to create danger to the complainant or his family personally, or danger to the property, real or personal, owned by or possessed by him, at the place described in the bill as his residence on Grosse Ile.
By 1908, reports indicated that the explosives used for channel blasting were being stored in a bulletproof concrete structure. However, public opinion on the issue was divided, with some residents of nearby islands "apprehensive" about the storage of dynamite along the lower Detroit River. On March 5, 1908, C. McD. Townsend, United States district engineer, held a hearing on the issue. At the hearing, the residents of Grosse Ile and Hickory Island were represented by Dr. David Inglis, who expressed concern about the storage of explosives, and proposed that the matter be settled in federal court. Townsend's proposal for a compromise entailed constructing three additional islands, between which a total of 60 short tons (54,000 kg) of dynamite would be divided evenly. However, no such islands were constructed, and none appear on survey maps from 1906 through 2019.
By March 1910, the dynamite factory on Powder House Island had returned to operation, with an output of 2 short tons (1,800 kg) per day. In July of that year, several hundred passengers of the pleasure boat Wauketa were "given nearly a three hours extension of their outing" when it ran aground on the shore of a Dunbar Island:
> The passengers, none of whom seemed to take the occurrence very seriously, enjoyed themselves in the cool breezes counting the stars and speculating on what would happen if the dynamite magazine of Dunbar & Sullivan on the island nearby should chance to blow up.
In January 1912, a contract was carried out to fill its ice houses, and in May of that year, a "full force of men" was working at the factory under an O. B. Barnes.
### Deepening of channel and subsequent use of island
Dunbar & Sullivan's operations on the Livingstone Channel soon ended. The channel was completed, and opened to the public, in October 1912. While the ice house and dynamite factory would again burn "to the water's edge" in 1919, the company continued to carry out dredging and excavation around the Detroit River for decades afterward. In the 1920s, it was using nearby Stony Island as a central part of its dredging and excavation operations, and by 1931 work was still going on there "24 hours a day".
In December 1932, channel-deepening operations began again, this time carried out by a George Mills Company of Ontario. The company continued to use Powder House Island as a storage location for explosives; in that year it constructed a new powder house on the island. By that time, dynamite had been replaced with blasting gelatin. The Windsor Star reported in March 1933 that "in spite of the safety of modern blasting gelatine, the Mills Company takes excellent care that not too much of it is on the job at one time", with a reserve stock of approximately 2,000 50 lb (23 kg) cases being kept on the island. By April 1935, the company was preparing to begin draining the third (and final) section of the channel. In December, the company completed the contract and laid off "all of its 240 employees [...] with the exception of 20 men, who [were] packing the company's equipment for shipment to the next job". Further work on the channel was contracted to an Arundel Corporation, which hoped to complete the remaining work by December. Several months later, all work on the project was completed; the channel was opened for ship traffic on September 5, 1936.
In 1936, the George Mills Company and the Arundel Corporation were the subject of another lawsuit, in which Amherstburg residents sought monetary compensation for "damage and loss to [...] property as a result of the blasting operations on the channel project". The case was settled out of court in September of that year. Another suit with similar complaints was filed against the Arundel Corporation in 1938.
Work on nearby shipping channels would continue through the mid-20th century, including a project to deepen the Amherstburg Channel in the late 1950s that was mostly completed by 1961. Powder House Island is shown largely unchanged on survey maps throughout this time. It was included in a 1961 proposal by the United States Army Corps of Engineers to draw harbor lines around several islands in the lower Detroit River, enlarging the islands up to the boundaries of shipping channels. This proposal, however, prompted a "vigorous protest" by the Michigan Attorney General, who said "the establishment of a harbor line would undoubtedly result in the making of fills which would constitute an obstruction to the navigable waters of the State without the sanction of the Legislature or any public officials of the State". Ultimately, no such filling projects were carried out, and the island stayed the same size.
By the 1980s, industrial activity on Powder House Island had ceased, and in 1984 it was uninhabited. As of 2015, Powder House Island (as well as the nearby Stony Island, also formerly used by Dunbar & Sullivan) was owned by the Michigan Department of Natural Resources, managed by its Wildlife Division as part of the Pointe Mouillee State Game Area, and accessible to the public for hunting and camping. The area surrounding the island is known for good perch fishing; a 1982 report by the U.S. Fish and Wildlife Service said that walleye spawned near the island.
|
13,754,626 |
French battleship Iéna
| 1,160,755,261 |
French Navy pre-dreadnought battleship
|
[
"1898 ships",
"1907 in France",
"Battleships of the French Navy",
"Maritime incidents in 1907",
"Ships built in France",
"Ships sunk as targets",
"Ships sunk by non-combat internal explosions",
"Shipwrecks in the Mediterranean Sea",
"Shipwrecks of France"
] |
Iéna was a pre-dreadnought battleship built for the French Navy (Marine nationale). Completed in 1902 and named for one of Napoleon's victories, the ship was assigned to the Mediterranean Squadron and remained there for the duration of her career, frequently serving as a flagship. She participated in the annual fleet manoeuvres and made many visits to French ports in the Mediterranean. In 1907, while Iéna was docked for a refit, a magazine explosion, probably caused by the decomposition of old Poudre B propellant, killed 120 people and badly damaged the ship. Investigations were launched afterwards, and the ensuing scandal forced the Navy Minister to resign. While the damage could have been repaired, the obsolete ship was considered worth neither the time nor the expense; her salvaged hulk was used as a gunnery target in 1909, then sold for scrap in 1912.
## Design and description
On 11 February 1897 Navy Minister (Ministre de la Marine) Armand Besnard, after consultations with the Supreme Naval Council (Conseil supérieur de la Marine), requested a design for an enlarged Charlemagne-class battleship with a maximum displacement of 12,000 tonnes (11,810 long tons), an armour scheme capable of preserving stability and buoyancy after several penetrations of the hull and the resulting flooding, an armament equal to those of foreign battleships, a speed of 18 knots (33 km/h; 21 mph) and a minimum range of 4,500 nautical miles (8,300 km; 5,200 mi). The Director of Naval Construction (Directeur du matériel), Jules Thibaudier, had already prepared a preliminary design two months earlier with improved Harvey armour, but it was modified to increase the height of the belt armour above the waterline and to replace the 138.6-millimetre (5.5 in) guns of the Charlemagnes with 164.7-millimetre (6.5 in) guns. Thibaudier submitted his revised design on 9 February and it was approved by the Board of Construction (Conseil des travaux) on 4 March with minor revisions.
Iéna had an overall length of 122.31 metres (401 ft 3 in), a beam of 20.81 metres (68 ft 3 in) and, at deep load, a draught of 7.45 metres (24 ft 5 in) forward and 8.45 metres (27 ft 9 in) aft. She displaced 11,688 tonnes (11,503 long tons) at normal and 12,105 tonnes (11,914 long tons) at deep load. As a flagship, Iéna had a crew of 48 officers and 731 ratings; as a private ship, her crew numbered 33 officers and 668 ratings. The ship was fitted with large bilge keels, but, according to naval historian N.J.M. Campbell, was reported to roll considerably and pitch heavily, although this is contradicted by Captain (Capitaine de vaisseau) Bouxin's report of November 1905: "From the sea-keeping point of view the Iéna is an excellent ship. Pitching and rolling movements are gentle and the ship rides the waves well." Naval historians John Jordan and Philippe Caresse believe the ship was a good gun platform because she had a long, slow roll and she manoeuvred well.
Iéna was powered by a trio of four-cylinder vertical triple-expansion steam engines, each driving a three-bladed propeller that was 4.5 metres (14 ft 9 in) in diameter on the outer shafts and 4.4 metres (14 ft 5 in) on the centre shaft. The engines were powered by 20 Belleville boilers at a working pressure of 18 kg/cm<sup>2</sup> (1,765 kPa; 256 psi) and were rated at a total of 16,500 metric horsepower (12,100 kW) to give the ship a speed of 18 knots (33 km/h; 21 mph). During her sea trials on 16 July 1901, the ship barely exceeded her designed speed, reaching 18.1 knots (33.5 km/h; 20.8 mph) from 16,590 metric horsepower (12,200 kW). Iéna carried a maximum of 1,165 tonnes (1,147 long tons) of coal; this allowed her to steam for 4,400 nautical miles (8,100 km; 5,100 mi) at a speed of 10.3 knots (19.1 km/h; 11.9 mph). The ship's 80-volt electrical power was provided by four dynamos, a pair each of 600- and 1,200-ampere capacity.
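The quoted range and coal figures imply a steaming time and a coal burn rate that can be sanity-checked with simple arithmetic; the sketch below is an illustrative back-of-the-envelope calculation, not data from the source:

```python
# Illustrative check of Iéna's cruising endurance figures (not source data).
range_nmi = 4_400      # nautical miles at economical speed
speed_kn = 10.3        # knots
coal_tonnes = 1_165    # maximum coal capacity

hours = range_nmi / speed_kn       # total steaming time, about 427 hours
days = hours / 24                  # roughly 17.8 days of continuous steaming
burn_rate = coal_tonnes / hours    # about 2.7 tonnes of coal per hour

print(f"{hours:.0f} h = {days:.1f} days, {burn_rate:.2f} t/h")
```

The result is consistent with the figures in the text: emptying the bunkers over the full cruising radius works out to under three tonnes of coal per hour at economical speed.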
### Armament and armour
Like the Charlemagne-class ships, Iéna carried her main armament of four 40-calibre Canon de 305 mm (12 in) Modèle 1893–1896 guns in two twin-gun turrets, one each fore and aft of the superstructure. Each turret had a dedicated 300-ampere dynamo to traverse it and to power the ammunition hoist. The guns, however, were manually elevated between their limits of −5° and +15°, and they were normally loaded at an angle of −5°. The guns fired 340-kilogram (750 lb) armour-piercing, capped (APC) projectiles at the rate of one round per minute at a muzzle velocity of 815 m/s (2,670 ft/s). This gave a range of 12,000 metres (13,000 yd) at the maximum elevation of +15°. The magazines stored 45 shells per gun, and an additional 14 projectiles were stowed in each turret.
The ship's secondary armament consisted of eight 45-calibre Canon de 164.7 mm Modèle 1893 guns, which were mounted in the central battery on the upper deck, and fired 54.2-kilogram (119 lb) APC shells. At their maximum elevation of +15°, their muzzle velocity of 800 m/s (2,600 ft/s) gave them a maximum range of 9,000 metres (9,800 yd). Each gun was provided with 200 rounds, enough for 80 minutes at their sustained rate of fire of 2–3 rounds per minute. She also carried eight 45-calibre Canon de 100 mm (3.9 in) Modèle 1893 guns in single, unprotected, mounts on the shelter deck. These guns fired a 14-kilogram (31 lb) projectile at 740 m/s (2,400 ft/s) and could be elevated to 20° for a maximum range of 9,500 metres (10,400 yd). Their theoretical maximum rate of fire was six rounds per minute, but only three rounds per minute could be sustained. Each gun was provided with 240 shells in the ship's magazine.
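The "80 minutes" quoted for the 164.7 mm battery follows directly from dividing the ammunition outfit by the sustained rate of fire; a minimal illustrative check, using the midpoint of the quoted 2–3 rounds per minute:

```python
# Illustrative check: ammunition endurance of the 164.7 mm secondary guns.
rounds_per_gun = 200
sustained_rpm = 2.5   # midpoint of the quoted 2-3 rounds per minute

minutes_of_fire = rounds_per_gun / sustained_rpm
print(minutes_of_fire)  # 80.0
```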
Iéna's anti-torpedo boat defences consisted of twenty 40-calibre Canon de 47 mm (1.9 in) Modèle 1885 Hotchkiss guns, fitted in platforms on both military masts, embrasures in the hull, and in the superstructure. They fired a 1.49-kilogram (3.3 lb) projectile at 610 m/s (2,000 ft/s) to a maximum range of 4,000 metres (4,400 yd). Their theoretical maximum rate of fire was fifteen rounds per minute, but only seven rounds per minute sustained. The ship's magazines held 15,000 shells for these guns. Rear-Admiral (Contre-amiral) René Marquis criticised the arrangements for the 47 mm guns in a 1903 report: "The number of ready-use rounds is insufficient and the hoists are desperately slow. The 47 mm guns, much more so than the large and medium-calibre guns, will have to fight at night; yet these are the only guns without a fire-control system designed for night operations. This is a deficiency which needs to be corrected as soon as possible." Iéna also mounted four 450-millimetre (17.7 in) torpedo tubes, two on each broadside, one submerged and the other above water. The submerged tubes were fixed at a 60° angle from the centreline and the above-water mounts could traverse 80°. Twelve Modèle 1889 torpedoes were carried, of which four were training models in peacetime.
The ship had a complete waterline belt of Harvey armour that was 2.4 metres (7 ft 10 in) high. The armour plates were 320 mm (12.6 in) thick amidships; they thinned to a thickness of 272 mm (10.7 in) at the bow and 224 mm (8.8 in) at the stern. Below the waterline, the plates tapered to a thickness of 120 mm (4.7 in) at their bottom edge for most of the ship's length although the plates at the stern were 100 mm thick. The upper armour belt was in two strakes, the lower 120 mm thick and the upper 80 mm (3.1 in). Their combined height was 2 metres (6 ft 7 in) amidships. The lower strake was backed by a highly subdivided cofferdam intended to reduce flooding from any penetrating hits as its compartments were filled by 14,858 water-resistant "bricks" of dried and compressed Zostera seaweed (briquettes de zostère). The seaweed was intended to expand upon contact with water and plug any holes. The armoured deck consisted of a 65 mm (2.6 in) mild steel plate laid over two 9 mm (0.35 in) plates. The splinter deck beneath it comprised two layers of 17 mm (0.67 in) plates.
The Harvey armour plates protecting the sides of the turrets were 290 mm (11.4 in) in thickness and the mild steel of the turret roofs was 50 mm (2 in) thick. The barbettes were protected by 250 mm (9.8 in) of Harvey armour. The sides and rear of the central battery were 90 mm (3.5 in) thick. The forward transverse bulkhead ranged in thickness from 55–150 millimetres (2.2–5.9 in), the thicker plates protecting the central battery, and reduced in thickness the further down it went until it met the armoured deck. The 164 mm guns were protected by 70-millimetre (2.8 in) gun shields. The armour plates protecting the conning tower ranged in thickness from 258 to 298 mm (10.2 to 11.7 in) on its face and rear, respectively. Its communications tube was protected by 200 mm (7.9 in) of armour.
## Construction and career
Ordered on 3 April 1897, and named after the French victory at the Battle of Jena, Iéna was laid down at the Arsenal de Brest on 15 January 1898. She was launched on 1 September and completed (armement définitif) on 14 April 1902 at a cost of F25.58 million. Five days later the ship departed for Toulon, losing one man overboard and having some problems with her rudder en route, before arriving on 25 April. Iéna became Marquis' flagship as commander of the Second Division of the Mediterranean Squadron on 1 May and was docked for repairs during 14–31 May. After the completion of the repairs the ship began a series of port visits in France and French North Africa which would be repeated for most of her career. She spent most of January 1903 refitting and was inspected by King Alfonso XIII of Spain during a visit to Cartagena in June. After another refit from 20 August to 10 September, Iéna, together with the rest of the Mediterranean Squadron, visited the Balearic Islands in October. During the return voyage, two crewmen died while training with the manual steering gear in heavy seas. Marquis was relieved by Rear-Admiral Léon Barnaud on 3 November. Iéna conducted training exercises off the coast of Provence from 19 November to 17 December.
Iéna participated in the fleet review off Naples in April–May 1904 when Émile Loubet, President of France, had a state visit with King Victor Emmanuel III of Italy. Afterwards, the Mediterranean Squadron cruised the Levant, visiting Beirut, Suda Bay, Smyrna, Mytilene, Salonika and Piraeus. In 1905 the ship was refitted during 15–25 April and then participated in the summer cruise of the Mediterranean Squadron, during which she visited ports in France and French North Africa between 10 May and 24 June. She took part in the annual fleet manoeuvres over the period 3 July–1 August. Rear-Admiral Henri-Louis Manceron relieved Barnaud on 16 November. During 12–17 April 1906, Iéna was dispatched to provide assistance to Naples after the eruption of Mount Vesuvius. Beginning on 3 July, the ship participated in the combined fleet manoeuvres, which included the Northern Squadron that year. After the conclusion of the exercise on 4 August, she spent most of the next several months refitting, aside from participating in an international naval review in Marseilles on 16 September with British, Spanish and Italian ships. While exercising off Toulon shortly afterwards, the ship accidentally collided with and sank Torpedo Boat No. 96.
### Loss
On 4 March 1907 Iéna was moved into Dry dock No. 2 in the Missiessy Basin at Toulon to undergo maintenance of her hull as well as an inspection of her leaking rudder shaft. Eight days later, beginning at 13:35 and continuing until 14:45, a series of explosions began near the aft 100-millimetre magazines, devastating the ship and the surrounding area. The explosions blew the roofs off three nearby workshops and gutted the area between the aft funnel and the aft turret. Because the ship was in a dry dock with the water pumped out, it was initially impossible to flood the magazines, which had not been unloaded before docking. The commanding officer of the battleship Patrie, which was moored nearby, fired a shell at the dry dock gates in an attempt to flood the dock, but the shell ricocheted without holing them. The gates were manually opened shortly afterwards by one of the ship's officers. A total of 118 crewmen and dockyard workers were killed by the explosions, as were two civilians in the suburb of Pont-Las who were killed by fragments.
On 17 March, the President of France, Armand Fallières, and Georges Clemenceau, who was both President of the Council of Ministers and Minister of the Interior, attended the funeral of those lost in the explosion. A national day of mourning was declared and a monument was built in the cemetery of Lagoubran. Both houses of the French Parliament, the Senate and the Chamber of Deputies, organised commissions to inquire into the cause of the explosion. The Senate appointed its commission on 20 March under the chairmanship of Ernest Monis; the Chamber of Deputies followed eight days later with Henri Michel as chair.
The origin of the first explosion was traced to a 100 mm magazine and was believed to have been caused by decomposing Poudre B, a nitrocellulose-based propellant which tended to become unstable with age and self-ignite, though a report published in April 1907 instead attributed the blast to a torpedo exploding in the torpedo room directly below the magazine. When burnt, Poudre B gave off yellow-coloured smoke, which matched the colour seen by eye-witnesses. To test this theory, Gaston Thomson, the Navy Minister, ordered on 31 March that a replica magazine and the adjacent black-powder magazine be built, but when the tests were conducted on 6–7 August, they were deemed inconclusive because the propellant used in the test was not of the same age as that aboard Iéna. Fallières appointed a technical commission on 6 August that included the mathematician Henri Poincaré, the chemist Albin Haller and the inventor of Poudre B, Paul Vieille, but it failed to come to a definite conclusion. The navy's Propellant Branch (Service des Poudres et Salpêtres) objected to the criticisms of its product, claiming that it was tested to resist 43 °C (110 °F) temperatures for 12 hours, although it never explained how that test was relevant to the long-term storage of Poudre B in magazines limited to natural ventilation, as was the case aboard every ship in the fleet. The Monis Commission published its report, which blamed the explosion on Poudre B, on 9 July; the report was debated on 21–26 November. The Michel Commission's report, published on 7 November 1908 after its contents had been debated on 16–19 October, was "a model of vagueness and imprecision". The cause of the explosion became a cause célèbre, with accusations of gross negligence against the government, such that Thomson was forced to resign on the last day of the debate.
### Disposal
The multiple explosions ripped open the ship's side between Frames 74 and 84 down to the lower edge of the armour belt, and all the machinery in this area was destroyed. After it was estimated that it would take seven million francs and two years to fully repair Iéna, which was already obsolete, the navy decided to decommission her and use her as a target ship. The ship was stricken from the navy list on 18 March and her crew was reassigned on 3 July. Iéna was disarmed, except for her 305 mm guns, and all useful equipment was removed in 1908. She was rendered seaworthy again at a cost of 700,000 francs and was towed to a mooring off the Île des Porquerolles. A programme to evaluate the effectiveness of Melinite-filled armour-piercing shells began on 9 August 1909 with the armoured cruiser Condé firing projectiles from her 164.7 mm and 194 mm (7.6 in) guns at a range of 6,000 metres (6,600 yd). After every shot the results were photographed and the effects on the crew of wooden dummies and live animals evaluated. By 2 December Iéna was close to foundering and the navy decided to have her towed to deeper water. Shortly after the tow began, she capsized and sank in shallow water. The rights to the wreck were sold on 21 December 1912 for 33,005 francs and she was slowly broken up and salvaged between 1912 and 1927. Another company was contracted to remove the remnants of the wreck in 1957.
|
977,013 |
Japanese battleship Nagato
| 1,169,506,906 |
Super-dreadnought sunk by nuclear test in Bikini atoll
|
[
"1919 ships",
"Artificial reefs",
"Maritime incidents in 1946",
"Nagato-class battleships",
"Second Sino-Japanese War naval ships of Japan",
"Ships built by Kure Naval Arsenal",
"Ships involved in Operation Crossroads",
"Ships sunk as targets",
"Shipwrecks in the Pacific Ocean",
"Top 10 dive sites",
"World War II battleships of Japan"
] |
Nagato (長門), named for Nagato Province, was a super-dreadnought battleship built for the Imperial Japanese Navy (IJN). Completed in 1920 as the lead ship of her class, she carried supplies for the survivors of the Great Kantō earthquake in 1923. The ship was modernized in 1934–1936 with improvements to her armor and machinery and a rebuilt superstructure in the pagoda mast style. Nagato briefly participated in the Second Sino-Japanese War in 1937 and was the flagship of Admiral Isoroku Yamamoto during the attack on Pearl Harbor. She covered the withdrawal of the attacking ships and did not participate in the attack itself.
Other than participating in the Battle of Midway in June 1942, where she did not see combat, the ship spent most of the first two years of the Pacific War training in home waters. She was transferred to Truk in mid-1943, but did not see any combat until the Battle of the Philippine Sea in mid-1944 when she was attacked by American aircraft. Nagato did not fire her main armament against enemy vessels until the Battle of Leyte Gulf in October. She was lightly damaged during the battle and returned to Japan the following month. The IJN was running out of fuel by this time and decided not to fully repair her. Nagato was converted into a floating anti-aircraft platform and assigned to coastal defense duties. She was attacked in July 1945 as part of the American campaign to destroy the IJN's last remaining capital ships, but was only slightly damaged and went on to be the only Japanese battleship to have survived World War II. In mid-1946, the ship was a target for nuclear weapon tests during Operation Crossroads. She survived the first test with little damage, but was sunk by the second.
## Description
Nagato had a length of 201.17 meters (660 ft) between perpendiculars and 215.8 meters (708 ft) overall. She had a beam of 29.02 meters (95 ft 3 in) and a draft of 9.08 meters (29 ft 9 in). The ship displaced 32,720 metric tons (32,200 long tons) at standard load and 39,116 metric tons (38,498 long tons) at full load. Her crew consisted of 1,333 officers and enlisted men as built and 1,368 in 1935. The crew totaled around 1,734 men in 1944.
In 1930, Nagato's bow was remodeled to reduce the amount of spray produced when steaming into a head sea. This increased her overall length by 1.59 meters (5 ft 3 in) to 217.39 meters (713 ft 3 in). During her 1934–1936 reconstruction, the ship's stern was lengthened by 7.55 meters (24.8 ft) to improve her speed and her forward superstructure was rebuilt into a pagoda mast. She was given torpedo bulges to improve her underwater protection and to compensate for the weight of the additional armor and equipment. These changes increased her overall length to 224.94 m (738 ft), her beam to 34.6 m (113 ft 6 in) and her draft to 9.49 meters (31 ft 2 in). Her displacement increased over 7,000 metric tons (6,900 long tons) to 46,690 metric tons (45,950 long tons) at deep load. The ship's metacentric height at deep load was 2.35 meters (7 ft 9 in). In November 1944, the tops of Nagato's mainmast and funnel were removed to improve the effective arcs of fire for her anti-aircraft guns.
### Propulsion
Nagato was equipped with four Gihon geared steam turbines, each of which drove one propeller shaft. The turbines were designed to produce a total of 80,000 shaft horsepower (60,000 kW), using steam provided by 21 Kampon water-tube boilers; 15 of these were oil-fired while the remaining six burned a mixture of coal and oil. The ship could carry 1,600 long tons (1,600 t) of coal and 3,400 long tons (3,500 t) of fuel oil, giving her a range of 5,500 nautical miles (10,200 km; 6,300 mi) at a speed of 16 knots (30 km/h; 18 mph). The ship exceeded her designed speed of 26.5 knots (49.1 km/h; 30.5 mph) during her sea trials, reaching 26.7 knots (49.4 km/h; 30.7 mph) at 85,500 shp (63,800 kW).
Funnel smoke often choked and blinded crewmen on the bridge and in the fire-control positions, so a "fingernail"-shaped deflector was installed on the fore funnel in 1922 to direct the exhaust away from them. It proved less than effective, and the fore funnel was rebuilt in a serpentine shape during a 1924 refit, an equally unsuccessful remedy. That funnel was eliminated during the ship's 1930s reconstruction when all of her boilers were replaced by ten oil-fired Kampon boilers, which had a working pressure of 22 kg/cm<sup>2</sup> (2,157 kPa; 313 psi) and temperature of 300 °C (572 °F). In addition her turbines were replaced by lighter, more modern, units. When Nagato conducted her post-reconstruction trials, she reached a speed of 24.98 knots (46.26 km/h; 28.75 mph) with 82,300 shp (61,400 kW). Additional fuel oil was stored in the bottoms of the newly added torpedo bulges, which increased her capacity to 5,560 long tons (5,650 t) and thus her range to 8,560 nmi (15,850 km; 9,850 mi) at 16 knots.
### Armament
Nagato's eight 45-caliber 41-centimeter (16 inch) guns were mounted in two pairs of twin-gun, superfiring turrets fore and aft. Numbered one through four from front to rear, the hydraulically powered turrets gave the guns an elevation range of −2 to +35 degrees. The rate of fire for the guns was around two rounds per minute. The turrets aboard the Nagato-class ships were replaced in the mid-1930s with turrets originally built for the unfinished Tosa-class battleships, which had been held in storage. While in storage the turrets had been modified to increase their range of elevation to −3 to +43 degrees, which increased the guns' maximum range from 30,200 to 37,900 meters (33,000 to 41,400 yd).
The ship's secondary armament of twenty 50-caliber 14-centimeter guns was mounted in casemates on the upper sides of the hull and in the superstructure. The manually operated guns had a maximum range of 20,500 metres (22,400 yd) and fired at a rate of six to 10 rounds per minute. Anti-aircraft defense was provided by four 40-caliber 3rd Year Type three-inch AA guns in single mounts. The 3-inch (76 mm) high-angle guns had a maximum elevation of +75 degrees, and had a rate of fire of 13 to 20 rounds per minute. The ship was also fitted with eight 53.3-centimeter (21.0 in) torpedo tubes, four on each broadside, two above water and two submerged.
Around 1926, the four above-water torpedo tubes were removed and the ship received three additional 76 mm AA guns that were situated around the base of the foremast. They were replaced by eight 40-caliber 12.7-centimeter Type 89 dual-purpose (DP) guns in 1932, fitted on both sides of the fore and aft superstructures in four twin-gun mounts. When firing at surface targets, the guns had a range of 14,700 meters (16,100 yd); they had a maximum ceiling of 9,440 meters (30,970 ft) at their maximum elevation of +90 degrees. Their maximum rate of fire was 14 rounds a minute, but their sustained rate of fire was around eight rounds per minute. Two twin-gun mounts for license-built Vickers two-pounder light AA guns were also added to the ship that same year. These guns had a maximum elevation of +80 degrees which gave them a ceiling of 4,000 meters (13,000 ft). They had a maximum rate of fire of 200 rounds per minute.
When the ship was reconstructed in 1934–1936, the remaining torpedo tubes and the two forward 14 cm (5.5 in) guns were removed from the hull. The remaining 14 cm guns had their elevation increased to +35 degrees, which increased their range to 20,000 meters (22,000 yd). An unknown number of license-built 13.2 mm (0.52 in) Hotchkiss M1929 machine guns in twin mounts were added. The maximum range of these guns was 6,500 meters (7,100 yd), but the effective range against aircraft was 700–1,500 meters (770–1,640 yd). The cyclic rate was adjustable between 425 and 475 rounds per minute, but the need to change 30-round magazines reduced the effective rate to 250 rounds per minute.
The unsatisfactory two-pounders were replaced in 1939 by twenty license-built Hotchkiss Type 96 25 mm (0.98 in) light AA guns in a mixture of twin-gun and single mounts. This was the standard Japanese light AA gun during World War II, but it suffered from severe design shortcomings that rendered it a largely ineffective weapon. According to historian Mark Stille, the twin and triple mounts "lacked sufficient speed in train or elevation; the gun sights were unable to handle fast targets; the gun exhibited excessive vibration; the magazine was too small, and, finally, the gun produced excessive muzzle blast". These 25 mm guns had an effective range of 1,500–3,000 meters (1,600–3,300 yd), and an effective ceiling of 5,500 meters (18,000 ft) at an elevation of 85 degrees. The maximum effective rate of fire was only between 110 and 120 rounds per minute because of the frequent need to change the fifteen-round magazines. Additional Type 96 guns were installed during the war; on 10 July 1944, the ship was reported to have 98 guns on board. An additional 30 guns were added during a refit in Yokosuka in November. Two more twin 12.7 cm (5 inch) gun mounts were added at the same time abreast the funnel, and her 14 cm guns were removed as she was by then a floating anti-aircraft battery.
### Armor
The ship's waterline armor belt was 305 mm (12 in) thick and tapered to a thickness of 100 mm (3.9 in) at its bottom edge; above it was a strake of 229 mm (9.0 in) armor. The main deck armor was 69 mm (2.7 in) thick while the lower deck was 75 mm (3 in) thick. The turrets were protected with an armor thickness of 305 mm on the face, 230–190 mm (9.1–7.5 in) on the sides, and 152–127 mm (6.0–5.0 in) on the roof. The barbettes of the turrets were protected by armor 305 mm thick, while the casemates of the 140 mm (5.5 in) guns were protected by 25 mm (0.98 in) armor plates. The sides of the conning tower were 369 mm (14.5 in) thick.
The new 41 cm turrets installed during Nagato's reconstruction were more heavily armored than the original ones. Face armor was increased to 460 mm (18.1 in), the sides to 280 mm (11.0 in), and the roof to 250–230 mm (9.8–9.1 in). The armor over the machinery and magazines was increased by 38 mm (1.5 in) on the upper deck and 25 mm (0.98 in) on the upper armored deck. These additions increased the weight of the ship's armor to 13,032 metric tons (12,826 long tons), 32.6 percent of her displacement. In early 1941, as a preparation for war, Nagato's barbette armor was reinforced with 100 mm (3.9 in) armor plates above the main deck and 215 mm (8.5 in) plates below it.
### Fire control and sensors
When completed in 1920, the ship was fitted with a 10-meter (32 ft 10 in) rangefinder in the forward superstructure; six-meter (19 ft 8 in) and three-meter (9 ft 10 in) anti-aircraft rangefinders were added in May 1921 and 1923, respectively. The rangefinders in the second and third turrets were replaced by 10-meter units in 1932–1933.
Nagato was initially fitted with a Type 13 fire-control system derived from Vickers equipment received during World War I, but this was replaced by an improved Type 14 system around 1925. It controlled the main and secondary guns; no provision was made for anti-aircraft fire until the Type 31 fire-control director was introduced in 1932. A modified Type 14 fire-control system was tested aboard the ship in 1935 and later approved for service as the Type 34. A new anti-aircraft director, the Type 94, was introduced in 1937 to control the 12.7 cm AA guns, although when Nagato received hers is unknown. The Type 96 25 mm (0.98 in) AA guns were controlled by a Type 95 director that was also introduced in 1937.
While in drydock in May 1943, a Type 21 air search radar was installed on the roof of the 10-meter rangefinder at the top of the pagoda mast. On 27 June 1944, two Type 22 surface search radars were installed on the pagoda mast and two Type 13 early warning radars were fitted on her mainmast.
### Aircraft
Nagato was fitted with an 18-meter (59 ft 1 in) aircraft flying-off platform on Turret No. 2 in August 1925. Yokosuka Ro-go Ko-gata and Heinkel HD 25 floatplanes were tested from it before it was removed early the following year. An additional boom was added to the mainmast in 1926 to handle the Yokosuka E1Y now assigned to the ship. A Hansa-Brandenburg W.33 floatplane was tested aboard Nagato that same year. A catapult was fitted between the mainmast and Turret No. 3 in mid-1933, a collapsible crane was installed in a portside sponson, and the ship was equipped to operate two or three floatplanes, although no hangar was provided. The ship now operated Nakajima E4N2 biplanes until they were replaced by Nakajima E8N2 biplanes in 1938. A more powerful catapult was installed in November 1938 to handle heavier aircraft, such as the one Kawanishi E7K that was added in 1939–1940. Mitsubishi F1M biplanes replaced the E8Ns on 11 February 1943.
## Construction and career
Nagato, named for Nagato Province, was ordered on 12 May 1916 and laid down at the Kure Naval Arsenal on 28 August 1917 as the lead ship of her class. She was launched on 9 November 1919 by Admiral Katō Tomosaburō, completed on 15 November 1920 and commissioned 10 days later with Captain Nobutaro Iida in command. Nagato was assigned to the 1st Battleship Division and became the flagship of Rear Admiral Sōjirō Tochinai. On 13 February 1921, the ship was inspected by the Crown Prince, Hirohito. Captain Kanari Kabayama relieved Iida on 1 December 1921. The ship hosted Marshal Joseph Joffre on 18 February 1922 and Edward, Prince of Wales, and his aide-de-camp Lieutenant Louis Mountbatten on 12 April during the prince's visit to Japan.
After the 1923 Great Kantō earthquake, Nagato loaded supplies from Kyushu for the victims on 4 September. Together with her sister ship Mutsu, she sank the hulk of the obsolete battleship Satsuma on 7 September 1924 during gunnery practice in Tokyo Bay in accordance with the Washington Naval Treaty. The ship was transferred to the reserve of the 1st Division on 1 December and became a gunnery training ship. In August 1925, aircraft handling and take-off tests were conducted aboard Nagato. She was reassigned as the flagship of the Combined Fleet on 1 December, flying the flag of Admiral Keisuke Okada. Captain Kiyoshi Hasegawa assumed command of the ship on 1 December 1926.
Nagato was again placed in reserve on 1 December 1931 and her anti-aircraft armament was upgraded the following year. In August 1933 the ship participated in fleet maneuvers north of the Marshall Islands and she began her first modernization on 1 April 1934. This was completed on 31 January 1936 and Nagato was assigned to the 1st Battleship Division of the 1st Fleet. During the attempted coup d'état on 26 February by disgruntled Army officers, the ship was deployed in Tokyo Bay and some of her sailors were landed in support of the government. In August, she transported 1,749 men of the 43rd Infantry Regiment of the 11th Infantry Division from Shikoku to Shanghai during the Second Sino-Japanese War. Her floatplanes bombed targets in Shanghai on 24 August before she returned to Sasebo the following day. Nagato became a training ship on 1 December until she again became the flagship of the Combined Fleet on 15 December 1938. The ship participated in an Imperial Fleet Review on 11 October 1940. She was refitted in early 1941 in preparation for war.
### World War II
Admiral Isoroku Yamamoto issued the code phrase "Niitaka yama nobore" (Climb Mount Niitaka) on 2 December 1941 from Nagato at anchor at Hashirajima to signal the 1st Air Fleet (Kido Butai) in the North Pacific to proceed with its attack on Pearl Harbor. When the war started for Japan on 8 December, she sortied for the Bonin Islands, along with Mutsu, the battleships Hyūga, Yamashiro, Fusō, and Ise of Battleship Division 2, and the light carrier Hōshō as distant cover for the withdrawal of the fleet attacking Pearl Harbor, and returned six days later. Yamamoto transferred his flag to the new battleship Yamato on 12 February 1942. Nagato was briefly refitted 15 March – 9 April at Kure Naval Arsenal.
In June 1942 Nagato, commanded by Captain Hideo Yano, was assigned to the Main Body of the 1st Fleet during the Battle of Midway, together with Yamato, Mutsu, Hōshō, the light cruiser Sendai, nine destroyers and four auxiliary ships. Following the loss of all four carriers of the 1st Air Fleet on 4 June, Yamamoto attempted to lure the American forces west to within range of the Japanese air groups at Wake Island, and into a night engagement with his surface forces, but the American forces withdrew and Nagato saw no action. After rendezvousing with the remnants of the 1st Air Fleet on 6 June, survivors from the aircraft carrier Kaga were transferred to Nagato. On 14 July, the ship was transferred to Battleship Division 2 and she became the flagship of the 1st Fleet. Yano was promoted to rear admiral on 1 November and he was replaced by Captain Yonejiro Hisamune nine days later. Nagato remained in Japanese waters training until August 1943. On 2 August Captain Mikio Hayakawa assumed command of the ship.
That month, Nagato, Yamato, Fusō and the escort carrier Taiyō, escorted by two heavy cruisers and five destroyers, transferred to Truk in the Caroline Islands. In response to the carrier raid on Tarawa on 18 September, Nagato and much of the fleet sortied for Eniwetok to search for the American forces before they returned to Truk on 23 September, having failed to locate them. The Japanese had intercepted some American radio traffic that suggested an attack on Wake Island, and on 17 October, Nagato and the bulk of the 1st Fleet sailed for Eniwetok to be in a position to intercept any such attack. The fleet arrived on 19 October, departed four days later, and arrived back at Truk on 26 October. Hayakawa was promoted to rear admiral on 1 November and he was relieved on 25 December by Captain Yuji Kobe.
On 1 February 1944, Nagato departed Truk with Fusō to avoid an American air raid, and arrived at Palau on 4 February. They departed on 16 February to escape another air raid. The ships arrived on 21 February at Lingga Island, near Singapore, and the ship became the flagship of Vice Admiral Matome Ugaki, commander of Battleship Division 1, on 25 February, until he transferred his flag to Yamato on 5 May. Aside from a brief refit at Singapore, the ship remained at Lingga training until 11 May, when she was transferred to Tawi-Tawi, arriving the following day. The division was now assigned to the 1st Mobile Fleet, under the command of Vice Admiral Jisaburō Ozawa.
On 10 June, Battleship Division 1 departed Tawi-Tawi for Batjan in preparation for Operation Kon, a planned counterattack against the American invasion of Biak. Three days later, when Admiral Soemu Toyoda, commander-in-chief of the Combined Fleet, was notified of American attacks on Saipan, Operation Kon was canceled and Ugaki's force was diverted to the Mariana Islands. The battleships rendezvoused with Ozawa's main force on 16 June. During the Battle of the Philippine Sea, Nagato escorted the aircraft carriers Jun'yō, Hiyō and the light carrier Ryūhō. She fired 41 cm Type 3 Sankaidan incendiary anti-aircraft shrapnel shells at aircraft from the light carrier Belleau Wood that were attacking Jun'yō and claimed to have shot down two Grumman TBF Avenger torpedo bombers. The ship was strafed by American aircraft during the battle, but was not damaged and suffered no casualties. During the battle Nagato rescued survivors from Hiyō, who were transferred to the carrier Zuikaku once the ship reached Okinawa on 22 June. She continued on to Kure where she was refitted with additional radars and light AA guns. Undocked on 8 July, Nagato loaded a regiment of the 28th Infantry Division the following day and delivered them to Okinawa on 11 July. She arrived at Lingga via Manila on 20 July.
#### Battle of Leyte Gulf
Kobe was promoted to rear admiral on 15 October. Three days later, Nagato sailed for Brunei Bay, Borneo, to join the main Japanese fleet in preparation for "Operation Sho-1", the counterattack planned against the American landings at Leyte. The Japanese plan called for Ozawa's carrier forces to lure the American carrier fleets north of Leyte so that Vice Admiral Takeo Kurita's 1st Diversion Force (also known as the Center Force) could enter Leyte Gulf and destroy American forces landing on the island. Nagato, together with the rest of Kurita's force, departed Brunei for the Philippines on 22 October.
In the Battle of the Sibuyan Sea on 24 October, Nagato was attacked by multiple waves of American dive bombers and fighters. At 14:16 she was hit by two bombs dropped by planes from the fleet carrier Franklin and the light carrier Cabot. The first bomb disabled five of her casemate guns, jammed one of her Type 89 gun mounts, and damaged the air intake to No. 1 boiler room, immobilizing one propeller shaft for 24 minutes until the boiler was put back on line. Damage from the second bomb is unknown. The two bombs killed 52 men between them; the number of wounded is not known.
On the morning of 25 October, the 1st Diversion Force passed through the San Bernardino Strait and headed for Leyte Gulf to attack the American forces supporting the invasion. In the Battle off Samar, Nagato engaged the escort carriers and destroyers of Task Group 77.4.3, codenamed "Taffy 3". At 06:01 she opened fire on three escort carriers, the first time she had ever fired her guns at an enemy ship, but missed. At 06:54 the destroyer USS Heermann fired a spread of torpedoes at the fast battleship Haruna; the torpedoes missed Haruna and headed for Yamato and Nagato which were on a parallel course. The two battleships were forced 10 miles (16 km) away from the engagement before the torpedoes ran out of fuel. Turning back, Nagato engaged the American escort carriers and their screening ships, claiming to have damaged one cruiser with forty-five 410 mm and ninety-two 14 cm shells. The ineffectiveness of her shooting was the result of the poor visibility caused by numerous rain squalls and by smoke screens laid by the defending escorts. At 09:10 Kurita ordered his ships to break off the engagement and head north. At 10:20 he ordered the fleet south once more, but as they came under increasingly severe air attack he ordered a retreat again at 12:36. At 12:43 Nagato was hit in the bow by two bombs, but the damage was not severe. Four gunners were washed overboard at 16:56 as the ship made a sharp turn to avoid dive-bomber attacks; a destroyer was detached to rescue them, but they could not be found. As it retreated back to Brunei on 26 October, the Japanese fleet came under repeated air attacks. Nagato and Yamato used Sankaidan shells against them and claimed to have shot down several bombers. Over the course of the last two days she fired ninety-nine 410 mm and six hundred fifty-three 14 cm shells, suffering 38 crewmen killed and 105 wounded during the same time.
#### Final days of the war
On 15 November the ship was assigned to Battleship Division 3 of the 2nd Fleet. After an aerial attack at Brunei on 16 November, Nagato, Yamato, and the fast battleship Kongō left the following day, bound for Kure. En route, Kongō and one of the escorting destroyers were sunk by USS Sealion on 21 November. On 25 November, she arrived at Yokosuka, Japan, for repairs. Lack of fuel and materials meant that she could not be brought back into service and she was turned into a floating anti-aircraft battery. Her funnel and mainmast were removed to improve the arcs of fire of her AA guns, which were increased by two Type 89 mounts and nine triple Type 96 gun mounts. Her forward secondary guns were removed in compensation. Captain Kiyomi Shibuya relieved Kobe in command of Nagato on 25 November. Battleship Division 3 was disbanded on 1 January 1945 and the ship was reassigned to Battleship Division 1. That formation was disbanded on 10 February and she was assigned to the Yokosuka Naval District as a coastal defense ship. While she was moored alongside a pier, a coal-burning donkey boiler was installed on the pier for heating and cooking purposes, and a converted submarine chaser was positioned alongside to provide steam and electricity; her anti-aircraft guns lacked full power and were only partially operational. On 20 April, Nagato was reduced to reserve and retired Rear Admiral Miki Otsuka assumed command a week later.
In June 1945, all of her secondary guns and about half of her anti-aircraft armament were moved ashore, together with her rangefinders and searchlights. Her crew was accordingly reduced to fewer than 1,000 officers and enlisted men. On 18 July 1945, the heavily camouflaged ship was attacked by fighter bombers and torpedo bombers from five American carriers as part of Admiral William Halsey Jr.'s campaign to destroy the IJN's last surviving capital ships. Nagato was hit by two bombs; the first, a 500-pound (230 kg) bomb, struck the bridge and killed Otsuka, the executive officer, and twelve sailors when it detonated upon hitting the roof of the conning tower. The second 500-pound bomb struck the deck aft of the mainmast and detonated when it hit No. 3 barbette. It failed to damage the barbette or the turret above it, but blew a hole nearly 12 feet (3.7 m) in diameter in the deck above the officer's lounge, killing 21 men and damaging four Type 96 guns on the deck above. A dud rocket of uncertain size hit the ship's fantail, but failed to do any significant damage. To convince the Americans that Nagato had been badly damaged by the attack, her damage was left unrepaired and some of her ballast tanks were pumped full of seawater to make her sit deeper in the water as if she had sunk to the harbor bottom.
Captain Shuichi Sugino was appointed as Nagato's new captain on 24 July, but he was unable to take up his appointment until 20 August. Retired Rear Admiral Masamichi Ikeguchi was assigned as the ship's interim captain until Sugino arrived. The Yokosuka Naval District received an alarm on the night of 1/2 August that a large convoy was approaching Sagami Bay and Nagato was ordered to attack immediately. The ship was totally unprepared for any attack, but Ikeguchi began the necessary preparations. The water in the ballast compartments was pumped out and her crew began reloading the propellant charges for her 41 cm guns. The ship received more fuel from a barge later that morning, but no order to attack ever came as it had been a false alarm. Sailors from the battleship USS Iowa, Underwater Demolition Team 18, and the high-speed transport USS Horace A. Bass secured the battleship on 30 August after the occupation began and Captain Thomas J. Flynn, executive officer of the Iowa, assumed command. By the time the war ended, Nagato was the only Japanese battleship still afloat. She was stricken from the Navy List on 15 September.
## After the war
The ship was selected to participate as a target ship in Operation Crossroads, a series of nuclear weapon tests held at Bikini Atoll in mid-1946. In mid-March, Nagato departed Yokosuka for Eniwetok under the command of Captain W. J. Whipple with an American crew of about 180 men supplementing her Japanese crew. The ship was only capable of a speed of 10 knots (19 km/h; 12 mph) from her two operating propeller shafts. Her hull had not been repaired from the underwater damage sustained during the attack on 18 July 1945 and she leaked enough that her pumps could not keep up. Her consort, the light cruiser Sakawa, broke down on 28 March and Nagato attempted to take her in tow, but one of her boilers malfunctioned and the ship ran out of fuel in bad weather. The ship had a list of seven degrees to port by the time tugboats from Eniwetok arrived on 30 March. Towed at a speed of 1 knot (1.9 km/h; 1.2 mph), the ship reached Eniwetok on 4 April where she received temporary repairs. On her trip to Bikini in May, Nagato reached 13 knots (24 km/h; 15 mph).
Operation Crossroads began with the first blast (Test Able), an air burst on 1 July; she was 1,500 meters (1,640 yd) from ground zero and was only lightly damaged. A skeleton crew boarded Nagato to assess the damage and prepare her for the next test on 25 July. As a test, they operated one of her boilers for 36 hours without any problems. For Test Baker, an underwater explosion, the ship was positioned 870 meters (950 yd) from ground zero. Nagato rode out the tsunami from the explosion with little apparent damage; she had a slight starboard list of two degrees after the tsunami dissipated. A more thorough assessment could not be made because she was dangerously radioactive. Her list gradually increased over the next five days and she capsized and sank during the night of 29/30 July.
The wreck is upside down and her most prominent features are her four propellers, at a depth of 33.5 meters (110 ft) below the surface. She has become a scuba diving destination in recent years and The Times named Nagato as one of the top ten wreck diving sites in the world in 2007.
# 1950 United States Senate election in California
The 1950 United States Senate election in California was held on November 7 of that year, following a campaign characterized by accusations and name-calling. Republican Representative and future President Richard Nixon defeated Democratic Representative Helen Gahagan Douglas, after Democratic incumbent Sheridan Downey withdrew during the primary election campaign. Douglas and Nixon each gave up their congressional seats to run against Downey; no other representatives were willing to risk the contest.
Both Douglas and Nixon announced their candidacies in late 1949. In March 1950, Downey withdrew from a vicious primary battle with Douglas by announcing his retirement, after which Los Angeles Daily News publisher Manchester Boddy joined the race. Boddy attacked Douglas as a leftist and was the first to compare her to New York Representative Vito Marcantonio, who was accused of being a communist. Boddy, Nixon, and Douglas each entered both party primaries, a practice known as cross-filing. In the Republican primary, Nixon was challenged only by cross-filers and fringe candidates.
Nixon won the Republican primary and Douglas the Democratic contest, with each also finishing third in the other party's contest (Boddy finished second in both races). The contentious Democratic race left the party divided, and Democrats were slow to rally to Douglas—some even endorsed Nixon. The Korean War broke out only days after the primaries, and both Nixon and Douglas contended that the other had often voted with Marcantonio to the detriment of national security. Nixon's attacks were far more effective, and he won the November 7 general election by almost 20 percentage points, carrying 53 of California's 58 counties and all metropolitan areas.
Though Nixon was later criticized for his tactics in the campaign, he defended his actions, and also stated that Douglas's positions were too far to the left for California voters. Other reasons for the result have been suggested, ranging from tepid support for Douglas from President Truman and his administration to the reluctance of voters in 1950 to elect a woman. The campaign gave rise to two political nicknames, both coined by Boddy or making their first appearance in his newspaper: "the Pink Lady" for Douglas and "Tricky Dick" for Nixon.
## Background
California Senator Sheridan Downey was first elected in 1938. An attorney, he had run unsuccessfully in 1934 for Lieutenant Governor of California as Upton Sinclair's running mate, and had a reputation as a liberal. As a senator, however, his positions gradually moved to the right, and he began to favor corporate interests. Manchester Boddy, the editor and publisher of the Los Angeles Daily News, was born on a potato farm in Washington state. He had little newspaper experience when, in 1926, he was given the opportunity to purchase the Daily News by a bankruptcy court, but built it into a small but thriving periodical. He shared his views with his readers through his column, "Thinking and Living", and, after initial Republican leanings, was a firm supporter of the New Deal. While the Daily News had not endorsed the Sinclair–Downey ticket, Boddy had called Sinclair "a great man" and allowed the writer-turned-gubernatorial candidate to set forth his views on the newspaper's front page.
Both Helen Douglas and Richard Nixon entered electoral politics in the mid-1940s. Douglas, a New Deal Democrat, was a former actress and opera singer, and the wife of actor Melvyn Douglas. She represented the 14th congressional district beginning in 1945. Nixon grew up in a working-class family in Whittier. In 1946, he defeated 12th district Representative Jerry Voorhis to claim a seat in the United States House of Representatives in January 1947, where he became known for his anticommunist activities, including his involvement in the Alger Hiss affair.
In the 1940s, California experienced a huge influx of migrants from other US states, increasing its population by 55%. Party registration in 1950 was 58.4% Democratic and 37.1% Republican. However, other than Downey, most major California officeholders were Republican, including Governor Earl Warren (who was seeking a third term in 1950) and Senator William Knowland.
During the 1950 campaign, both Nixon and Douglas were accused of having a voting record comparable to that of New York Representative Vito Marcantonio. The sole representative from the American Labor Party at the time, Marcantonio represented East Harlem. He was accused of being a communist, though he denied being one; he rarely discussed the Soviet Union or communism. Marcantonio opposed restrictions on communists and the Communist Party, stating that such restrictions violated the Bill of Rights. He regularly voted against contempt citations requested by the House Un-American Activities Committee (HUAC), on which Nixon served.
## Primary campaign
### Democratic contest
#### Early campaign
Douglas disregarded advice from party officials to wait until 1952 to run for the Senate, when Republican Senator Knowland would be up for reelection. Fundraising for the campaign was a concern from the beginning; Douglas friend and aide Ed Lybeck wrote her that she would probably need to raise \$150,000 (\$1.8 million today), which Douglas considered a massive sum. Lybeck wrote,
> Now, you can win. You will not be a favorite; you'll be rather a long shot. But given luck and money and a hell of a lot of work, you can win ... but for Christ's sake don't commit suicide with no dough ... Maybe you can't crucify mankind on a cross of gold, but you can sure as hell crucify a statewide candidate upon a cross of no-gold.
On October 5, 1949, Douglas made a radio appearance announcing her candidacy. She attacked Downey almost continuously throughout the remainder of the year, accusing him of being a do-nothing, a tool of big business, and an agent of oil interests. She hired Harold Tipton, a newcomer to California who had managed a successful congressional campaign in the Seattle area, as her campaign manager. Douglas realized that Nixon would most likely be the Republican nominee, and felt that were she to win the primary, the wide gap between Nixon's positions and hers would cause voters to rally to her. Downey, who suffered from a severe ulcer, was initially undecided about running, but announced his candidacy in early December in a speech that included an attack on Douglas. Earl Desmond, a member of the California State Senate from Sacramento whose positions were similar to Downey's, also entered the race.
In January 1950, Douglas opened campaign headquarters in Los Angeles and San Francisco, which was seen as a signal that she was serious about contesting Downey's seat and would not withdraw from the race. Downey challenged Douglas to a series of debates; Douglas, who was not a good debater, declined. The two candidates traded charges via press and radio, with Downey describing Douglas's views as extremist.
Douglas's formal campaign launch on February 28 was overshadowed by rumors that Downey might retire, which Douglas called a political maneuver on Downey's part to get the attention of the press. However, on March 29, amid rumors that he was doing badly in the polls, Downey announced both his retirement and his endorsement of Los Angeles Daily News publisher Manchester Boddy. In his statement, the senator indicated that, due to his ill health, he was not up to "waging a personal and militant campaign against the vicious and unethical propaganda" of Douglas.
Boddy filed his election paperwork the next day, on the final day petitions were accepted, with his papers signed by Los Angeles Mayor Fletcher Bowron, a Republican, and by Downey campaign manager and 1946 Democratic senatorial candidate Will Rogers Jr. The publisher had been urged to enter the race by state Democratic leaders and by wealthy oilmen. He had no political experience; Democratic leaders had sought to draft him to run for the Senate in 1946, but he had declined. He later stated that his reasons for running were that the race would be a challenge, and that he would meet interesting people. Boddy, Douglas, and Nixon each "cross-filed", entering both major party primaries.
Douglas called Downey's departure in favor of the publisher a cheap gimmick and made no attempt to reach a rapprochement with the senator, who entered Bethesda Naval Hospital for treatment in early April, and was on sick leave from Congress for several months. The change in opponents was a mixed blessing for Douglas; it removed the incumbent from the field, but deprived her of the endorsement of the Daily News—one of the few big city papers to consistently support her.
#### Boddy versus Douglas
For the first month of Boddy's abbreviated ten-week campaign, he and Douglas avoided attacking each other. Boddy's campaign depicted him as born in a log cabin, and highlighted his World War I service. The publisher campaigned under the slogan, "Manchester Boddy, the Democrat Every Body Wants." Boddy stated that he was fighting for the "little man", and alleged that the average individual was overlooked by both big government and big labor. However, his campaign, having a late start, was disorganized. The candidate himself had little charisma, and little presence as a public speaker. According to Rob Wagner, who wrote of the campaign in his history of Los Angeles newspapers of the era, Boddy "was all sizzle and no substance".
The campaign calm broke off near the end of April 1950, when Boddy's Daily News and affiliated newspapers referred to the congresswoman as "decidedly pink" and "pink shading to deep red". At the end of the month, the Daily News referred to her for the first time as "the pink lady". Douglas generally ignored Boddy's attacks, which continued unabated through May. In a Daily News column, Boddy wrote that Douglas was part of "a small minority of red hots" which proposed to use the election to "establish a beachhead on which to launch a Communist attack on the United States". One Boddy campaign publication was printed with red ink, and stated that Douglas "has too often teamed up with the notorious extreme radical, Vito Marcantonio of New York City, on votes that seem more in the interest of Soviet Russia than of the United States".
On May 3, Representative George Smathers defeated liberal Senator Claude Pepper for the Democratic Senate nomination in Florida. Smathers' tactics included dubbing his opponent "Red Pepper" and distributing red-covered brochures, The Red Record of Senator Claude Pepper, that included a photograph of Pepper with Marcantonio. Soon after Smathers' triumph in the primary, which in the days of the yellow dog South was tantamount to election, South Dakota Republican Senator Karl Mundt, who when in the House had served with Nixon on HUAC, sent him a letter telling him about Smathers' brochure. Senator Mundt wrote to Nixon, "It occurs to me that if Helen is your opponent in the fall, something of a similar nature might well be produced ..." Douglas wrote of Senator Pepper's defeat, "The loss of Pepper is a great tragedy, and we are sick about it." She also noted, "What a vicious campaign was carried on against him. No doubt the fur will begin to fly out here too", and "It is revolting to think of the depths to which people will go."
Downey reentered the fray on May 22, when he made a statewide radio address on behalf of Boddy, stating his belief that Douglas was not qualified to be a senator. He concluded, "Her record clearly shows very little hard work, no important influence on legislation, and almost nothing in the way of solid achievement. The fact that Mrs. Douglas has continued to bask in the warm glow of publicity and propaganda should not confuse any voter as to what the real facts are."
Douglas brought an innovation to the race—a small helicopter, which she used to travel around the state at a time when there were few freeways linking California's cities. She got the idea from her friend, Texas Senator Lyndon Johnson, who had extensively used helicopters in his campaign in the contested Democratic Party primary for the 1948 United States Senate election in Texas. Douglas leased the craft from a helicopter company in Palo Alto owned by Republican supporters, who hoped her influence would lead to a defense contract. When she used it to land in San Rafael, her local organizer, Dick Tuck, called it the "Helencopter", and the name stuck.
In early April, polls gave Nixon some chance of winning the Democratic primary, which would mean his election was secured. He sent out mailings to Democratic voters. Boddy attacked Nixon for the mailings; Nixon responded that Democratic voters should have the opportunity to express no confidence in the Truman administration by voting for a Republican. "Democrats for Nixon", a group affiliated with Nixon's campaign, asked Democratic voters "as one Democrat to another" to vote for the congressman, sending out flyers which did not mention his political affiliation. Boddy quickly struck back in his paper, accusing Nixon of misrepresenting himself as a Democrat. A large ad in the same issue by the "Veterans Democratic Committee" warned Democratic voters that Nixon was actually a Republican and referred to him for the first time as "Tricky Dick". The exchange benefited neither Nixon nor Boddy; Douglas won the primary on June 6 and exceeded their combined vote total.
#### Democratic primary results
### Republican contest
In mid-1949, Nixon, although anxious to advance his political career, was reluctant to run for the Senate unless he was confident of winning the Republican primary. He considered his party's prospects in the House to be bleak, absent a strong Republican trend, and wrote "I seriously doubt if we can ever work our way back in power. Actually, in my mind, I do not see any great gain in remaining a member of the House, even from a relatively good District, if it means we would be simply a vocal but ineffective minority."
In late August 1949, Nixon embarked on a putatively nonpolitical speaking tour of Northern California, where he was less well known, to see if his candidacy would be well received if he ran. With many of his closest advisers urging him to do so, Nixon decided in early October to seek the Senate seat. He hired a professional campaign manager, Murray Chotiner, who had helped to run successful campaigns for both Governor Warren and Senator Knowland and had played a limited role in Nixon's first congressional race.
Nixon announced his candidacy in a radio broadcast on November 3, painting the race as a choice between a free society and state socialism. Chotiner's philosophy for the primary campaign was to focus on Nixon and ignore the opposition. Nixon did not indulge in negative campaigning in the primaries; according to Nixon biographer Irwin Gellman, the internecine warfare in the Democratic Party made it unnecessary. The Nixon campaign spent most of late 1949 and early 1950 concentrating on building a statewide organization, and on intensive fundraising, which proved successful.
Nixon had built part of his reputation in the House on his role in the Alger Hiss affair. Hiss's retrial for perjury after a July 1949 hung jury was a cloud over Nixon's campaign; if Hiss was acquitted, Nixon's candidacy would be in serious danger. On January 21, 1950, the jury found Hiss guilty, and Nixon received hundreds of congratulatory messages, including one from the only living former president, Herbert Hoover.
At the end of January 1950, a subcommittee of the California Republican Assembly, a conservative grassroots group, endorsed former Lieutenant Governor Frederick Houser (who had lost narrowly to Downey in 1944) over Nixon for the Senate candidacy by a 6–3 vote, only to be reversed by the full committee, which endorsed Nixon by 13–12. Houser eventually decided against running. Los Angeles County Supervisor Raymond Darby commenced a Senate run, but changed his mind and instead ran for lieutenant governor. Darby was defeated by incumbent Lieutenant Governor Goodwin Knight in the Republican primary. Knight had also been considered likely to run for the Senate, but decided to seek re-election instead. Actor Edward Arnold began a Senate run, but dropped it in late March, citing a lack of time to prepare his campaign. Nixon was opposed for the Republican nomination only by cross-filing Democrats and by two fringe candidates: Ulysses Grant Bixby Meyer, a consulting psychologist for a dating service, and former judge and law professor Albert Levitt, who opposed "the political theories and activities of national and international Communism, Fascism, and Vaticanism" and was unhappy that the press was paying virtually no attention to his campaign.
On March 20, Nixon cross-filed in the two major party primaries, and two weeks later began to criss-cross the state in his campaign vehicle: a yellow station wagon with "Nixon for U.S. Senator" in big letters on both sides. According to one contemporary news account, in his "barnstorming tour", Nixon intended to "[talk] up his campaign for the U.S. Senate on street corners and wherever he can collect a crowd." During his nine-week primary tour, he visited all of California's 58 counties, speaking sometimes six or eight times in a day. His wife Pat Nixon stood by as her husband spoke, distributing campaign thimbles that urged the election of Nixon and were marked with the slogan "Safeguard the American Home". She distributed more than 65,000 by the end of the campaign.
A Douglas supporter heard Nixon speak during the station wagon tour, and wrote to the congresswoman:
> He gave a magnificent speech. He is one of the cleverest speakers I have ever heard. The questions on the Mundt-Nixon bill, his views on the loyalty oath, and the problem of international communism were just what he was waiting for. Indeed, he was so skillful—and, I might add, cagey—that those who came indifferent were sold, and even many of those who came to heckle went away with doubts ... If he is only a fraction as effective as he was here you have a formidable opponent on your hands.
With no serious challenge from Republican opponents, Nixon won an overwhelming victory in the Republican primary, with his cross-filing rivals, Boddy, Douglas, and Desmond, dividing a small percentage of the vote but running well ahead of the two fringe candidates.
#### Republican primary results
### Joint appearances
There were no candidate debates, but Douglas and Nixon met twice on the campaign trail during the primary season. The first meeting took place at the Commonwealth Club in San Francisco, where Nixon waved a check for \$100 that his campaign had received from "Eleanor Roosevelt", with an accompanying letter, "I wish it could be ten times more. Best wishes for your success." The audience was shocked at the idea of Eleanor Roosevelt, widow of Democratic former president Franklin Roosevelt and known for her liberal views, contributing to Nixon's campaign. Nixon went on to explain that the envelope was postmarked Oyster Bay, New York, and that the Eleanor Roosevelt who had sent the contribution was Eleanor Butler Roosevelt, the widow of former Republican president Theodore Roosevelt's eldest son. The audience laughed, and Douglas later wrote that she had been distracted and gave a poor speech. A memo from Chotiner several days later noted that Boddy had failed to attend the function, and that Douglas wished that she had also not attended.
A second joint appearance took place in Beverly Hills. According to Nixon campaign adviser Bill Arnold, Douglas arrived late, while Nixon was already speaking. Nixon ostentatiously looked at his watch, provoking laughter from the audience. The laughter recurred as Nixon, sitting behind Douglas as she spoke, fidgeted to indicate his disapproval of what she was saying; she appeared bewildered at the laughter. Douglas concluded her remarks and Nixon rose to speak again, but she did not stay to listen.
## General election
### War in Korea, conflict in California
The rift in the Democratic party caused by the primary was slow to heal; Boddy's supporters were reluctant to join Douglas's campaign, even with President Truman's encouragement. The President refused to campaign in California; he resented Democratic gubernatorial candidate James Roosevelt. Roosevelt, the eldest son of Franklin Roosevelt, had urged Democrats not to nominate Truman in 1948, but to instead choose General Dwight Eisenhower. Fundraising continued to be a major problem for Douglas, the bulk of whose financial support came from labor unions. The weekend after the primary, Nixon campaign officials held a conference to discuss strategy for the general election campaign. They decided on a fundraising goal of just over \$197,000 (today, about \$2,400,000). They were helped in that effort when Democratic Massachusetts Congressman John F. Kennedy, a political opponent of Nixon's, came to Nixon's office and gave him a donation of \$1,000 on behalf of Joseph P. Kennedy Sr., his father. John Kennedy indicated that he could not endorse Nixon, but that he would not be heartbroken if Douglas was returned to her acting career. Joseph Kennedy later stated that he gave Nixon the money because Douglas was a communist.
Nixon's positions generally favored large corporations and farming interests, while Douglas's did not, and Nixon reaped the reward with contributions from them. Nixon favored the Taft–Hartley Act, passage of which had been bitterly opposed by labor unions; Douglas advocated its repeal. Douglas supported a requirement that federally subsidized water from reclamation projects only go to farms of not more than 160 acres (0.65 km<sup>2</sup>); Nixon fought for the repeal of that requirement.
When the Korean War broke out in late June, Douglas and her aides feared being put on the defensive by Nixon on the subject of communism, and sought to preempt his attack. Douglas's opening campaign speech included a charge that Nixon had voted with Marcantonio to deny aid to South Korea and to cut aid to Europe in half. Chotiner later cited this as the crucial moment of the campaign:
> She was defeated the minute she tried to do it, because she could not sell the people of California that she would be a better fighter against communism than Dick Nixon. She made the fatal mistake of attacking our strength instead of sticking to attacking our weakness.
Nixon objected to Douglas's speech, stating that he had opposed the Korea bill because it did not include aid to Taiwan, and had supported it once the aid had been included. As for the Europe charge, according to Nixon biographer Stephen Ambrose, Nixon was so well known as a supporter of the Marshall Plan that Douglas's charge had no credibility. In fact, Nixon had opposed a two-year reauthorization of the Marshall Plan, favoring a one-year reauthorization with a renewal provision, allowing for more congressional oversight.
Nixon realized that the battle in California would be fought over the threat of communism, and his campaign staff began to research Douglas's voting record. Republican officials in Washington sent the campaign a report listing 247 times Marcantonio (who generally followed the Democratic line) and Douglas had voted together, and 11 times that they had not. Nixon biographer Conrad Black suggests that Nixon's strategy in keeping the focus on communism was to "distract [Douglas] from her strengths—a sincere and attractive woman fighting bravely for principles most Americans would agree with if they were packaged correctly—to scrapping ... on matters where she could not win." Chotiner stated 20 years later that Marcantonio suggested the comparison of voting records, as he disliked Douglas for failing to support his beliefs fully.
Public support for the Korean War initially resulted in anger towards communists, and Nixon advocated the passage of legislation he had previously introduced with Senator Mundt which would tighten restrictions on communists and the Communist Party. Douglas argued that there was already sufficient legislation to effect any necessary prosecutions, and that the Mundt–Nixon bill (soon replaced by the similar McCarran–Wood bill) would erode civil liberties. With the bill sure to pass, Douglas was urged to vote in favor to provide herself with political cover. She declined to do so, though fellow California Representative Chester E. Holifield warned her that she would not be able to get around the state fast enough to explain her vote and Nixon would "beat [her] brains in". Douglas was one of only 20 representatives (including Marcantonio) who voted against the bill. Truman vetoed it; Congress enacted it over his veto by wide margins in late September. Douglas was one of 47 representatives (including Marcantonio) to vote to sustain the veto. In a radio broadcast soon after the veto override, Douglas announced that she stood with the President, Attorney General J. Howard McGrath and FBI Director J. Edgar Hoover in their fight against communism.
### Debut of the Pink Sheet
On September 10, Eleanor Roosevelt, the late president's widow and the gubernatorial candidate's mother, arrived in California for a quick campaign swing to support her son and Douglas before she had to return to New York as a delegate to the United Nations. Douglas hoped that the former first lady's visit would mark a turning point in the campaign. At a Democratic rally featuring Mrs. Roosevelt the next day in Long Beach, Nixon workers first handed out a flyer headed "Douglas–Marcantonio Voting Record", printed with dark ink on pink paper. The legal-size flyer compared the voting records of Douglas and Marcantonio, principally in the area of national security, and concluded that they were indistinguishable. In contrast, the flyer said, Nixon had voted entirely in opposition to the "Douglas–Marcantonio Axis". It implied that sending Douglas to the Senate would be no different from electing Marcantonio, and asked if that was what Californians wanted. The paper soon became known as the "Pink Sheet". Chotiner later stated that the color choice was made at the print shop when campaign officials approved the final copy, and "for some reason or other it just seemed to appeal to us for the moment". An initial print run of 50,000 was soon followed by a reprint of 500,000, distributed principally in heavily populated Southern California.
Douglas made no immediate response to the Pink Sheet, despite the advice of Mrs. Roosevelt, who appreciated its power and urged her to answer it. Douglas later stated that she had failed to understand the appeal of the Pink Sheet to voters, and simply thought it absurd. Nixon followed up on the Pink Sheet with a radio address on September 18, accusing Douglas of being "a member of a small clique which joins the notorious communist party-liner Vito Marcantonio of New York, in voting time after time against measures that are for the security of this country". He assailed Douglas for advocating that Taiwan's seat on the United Nations Security Council be given to the People's Republic of China, as appeasement towards communism.
Late in September, Douglas complained of alleged whispering campaigns aimed at her husband's Jewish heritage, and which stated that he was a communist. At the end of September, the splits in the Democratic Party became open when 64 prominent Democrats, led by George Creel, endorsed Nixon and castigated Douglas. Creel said, "She has voted consistently with Vito Marcantonio. Belated flag-waving cannot erase this damning record, nor can the tawdry pretense of 'liberalism' excuse it." According to Creel, Downey was working behind the scenes to secure Nixon's election.
James Roosevelt's lackluster campaign led Douglas backers to state that he was not only failing to help Douglas, he was not even helping himself. With polls showing the two major Democratic candidates in dire straits, Roosevelt wrote to President Truman, proposing that Truman campaign in the state in the final days before the election. Truman refused to do so. He also declined Douglas's pleas for a letter of support (privately calling her "one of the worst nuisances"), and even refused to allow her to be photographed with him at a signing ceremony for a water bill which would benefit California. When Truman flew to Wake Island in early October to confer with General Douglas MacArthur regarding the Korean situation, he returned via San Francisco, but told the press he had no political appointments scheduled. He spoke at an event at the War Memorial Opera House during his stopover, but both Roosevelt and Douglas were relegated to orchestra-level seats, far from the presidential box. Vice President Alben Barkley did visit the state to campaign for the Democrats. However, Time magazine wrote that he did not appear to be helpful to Douglas's campaign. The Vice President stated that while he was not familiar with Douglas's votes, he was certain that she had voted the way she did out of sincere conviction and urged Californians to give the Senate a "dose of brains and beauty". Attorney General McGrath also came to California to campaign for the Democrats, and freshman Senator Hubert Humphrey of Minnesota tirelessly worked the San Joaquin Valley, talking to farmers and workers.
### Name-calling and supporters: the final days
Douglas adopted Boddy's "Tricky Dick" nickname for Nixon, and also referred to him as "pee wee". Her name-calling had an effect on Nixon: when told she had called him "a young man with a dark shirt" in an allusion to Nazism, he inquired, "Did she say that? Why, I'll castrate her." Campaign official Bill Arnold joked that it would be difficult to do, and Nixon replied that he would do it anyway. Nixon returned the attacks; at friendly gatherings and especially at all-male events, he stated that Douglas was "pink right down to her underwear".
Douglas's last large-scale advertisement blitz contained another Nazi allusion. Citing five votes in which Nixon and Marcantonio had voted together and in opposition to Douglas, it accused Nixon of using "the big lie" and stated: "HITLER invented it/STALIN perfected it/NIXON uses it". Nixon responded, "Truth is not smear. She made the record. She has not denied a single vote. The iron curtain of silence has closed around the opposition camp." Through the final days of the campaign, he struck a constant drumbeat: Douglas was soft on communism.
Though polls showed Nixon well ahead, his campaign did not let up. A fundraising solicitation warned, "Right Now Nixon Is Losing ... Not Enough Money". Skywriting urged voters to cast their ballots for him. Borrowing an idea from Nixon's 1946 campaign, the campaign announced that people should answer their phones, "Vote for Nixon"; random calls would be made from campaign headquarters, and households that answered their phones that way would receive scarce consumer appliances. Chotiner even instructed that 18-month-old copies of The Saturday Evening Post, containing a flattering story about Nixon, be left in doctors' offices, barbershops, and other waiting rooms across the state.
In the last days of the campaign, Douglas finally began to receive some of the support she had hoped for. Boddy's paper endorsed her, while Truman praised her. Douglas's actor husband, Melvyn Douglas, on tour with the play Two Blind Mice throughout the campaign, spoke out on behalf of his wife, as did movie stars Myrna Loy and Eddie Cantor. Nixon had several Hollywood personalities supporting him, including Howard Hughes, Cecil B. DeMille and John Wayne. Another actor, Ronald Reagan, was among Douglas's supporters, but when his girlfriend and future wife Nancy Davis took him to a pro-Nixon rally led by actress ZaSu Pitts, he was converted to Nixon's cause and led quiet fundraising for him. Douglas was apparently unaware of this—30 years later she mentioned Reagan in her memoirs as someone who worked hard for her.
Chotiner had worked on Warren's 1942 campaign, but had parted ways from him, and the popular governor did not want to be connected to the Nixon campaign. Nonetheless, Chotiner sought to maneuver him into an endorsement. Chotiner instructed Young Republicans head and future congressman Joseph F. Holt to follow Douglas from appearance to appearance and demand to know who she was supporting for governor, as other Young Republicans handed out copies of the Pink Sheet. Douglas repeatedly avoided the question, but with four days to go before the election and the Democratic candidate near exhaustion from the bitter campaign, she responded that she hoped and prayed that Roosevelt would be elected. Holt contacted a delighted Chotiner, who had a reporter ask Warren about Douglas's comments, and the governor responded, "In view of her statement, I might ask her how she expects I will vote when I mark my ballot for United States senator on Tuesday." Chotiner publicized this response as an endorsement of Nixon, and the campaign assured voters that Nixon would be voting for Warren as well.
Despite the polls, Douglas was confident that the Democratic registration edge would lead her to victory, so much so that she offered a Roosevelt staffer a job in her senatorial office. On election day, November 7, 1950, Nixon defeated Douglas by 59 percent to 41 percent. Of California's 58 counties, Douglas won only five, all in Northern California and with relatively small populations; Nixon won every urban area. Although Warren defeated Roosevelt by an even larger margin, Nixon won by the greatest number of votes of any 1950 Senate candidate. Douglas, in her concession speech, declined to congratulate Nixon. Marcantonio was also defeated in his New York district.
## Aftermath
### Candidates
A week after the election, Downey announced that he was resigning for health reasons. Warren appointed Nixon to the short remainder of Downey's term; under the Senate rules at the time, this gave Nixon seniority over the new senators sworn in during January 1951. Nixon took office on December 4, 1950, after resigning from the House. He used little of his seniority, since in November 1952 he was elected vice president as Dwight Eisenhower's running mate, the next step on a path that would lead him to the presidency in 1969. Downey, who as a former senator retained floor privileges, was hired as a lobbyist by oil interests. In 1952, as Republicans took over the White House and control of both houses of Congress, he was fired. An aide stated that the big corporations did not need Downey anymore. Boddy, dispirited by his election defeat and feeling let down by the average citizens for whom he had sought to advocate, lapsed into semi-retirement after his primary defeat. In 1952, he sold his interest in the Daily News, which went into bankruptcy in December 1954.
It was rumored that Douglas would be given a political appointment in the Truman administration, but the Nixon–Douglas race had made such an appointment too controversial for the President. According to Democratic National Committee vice-chair India Edwards, a Douglas supporter, the former congresswoman could not have been appointed dogcatcher. In 1952, she returned to acting, and eight years later campaigned for John F. Kennedy during Nixon's first, unsuccessful presidential run. She also campaigned for George McGovern in his unsuccessful bid to prevent Nixon's 1972 reelection, and called for his ouster from office during the Watergate scandal.
Less than a week after the election, Douglas wrote to one of her supporters that she did not think there was anything her campaign could have done to change the result. Blaming the war, voter mistrust of Truman's foreign policy, and high prices at home, Douglas stated that she lost in California because Nixon was able to take a large part of the women's vote and the labor vote. Later in November, she indicated that liberals must undertake a massive effort to win in 1952. In 1956, she stated in an interview that, while Nixon had never called her a communist, he had designed his whole campaign to create the impression that she was a communist or "communistic". In 1959, she wrote that she had not particularly wanted to be a senator, and in 1962 she stated that the policy of her campaign was to avoid attacks on Nixon. In her memoirs, published posthumously in 1982, she wrote, "Nixon had his victory, but I had mine ... He hadn't touched me. I didn't carry Richard Nixon with me, thank God." She concluded her chapter on the 1950 race with, "There's not much to say about the 1950 campaign except that a man ran for Senate who wanted to get there, and didn't care how."
In 1958, Nixon, by then vice president, allegedly stated that he regretted some of the tactics his campaign had used in the campaign against Douglas, blaming his youth. When the statements were reported, Nixon denied them. He issued press releases defending his campaign, and stating that any impression that Douglas was pro-communist was justified by her record. He said Douglas was part of a whispering campaign accusing him of being "anti-Semitic and Jim Crow". In his 1978 memoirs, he stated that "Helen Douglas lost the election because the voters of California in 1950 were not prepared to elect as their senator anyone with a left-wing voting record or anyone they perceived as being soft on or naive about communism." He indicated that Douglas faced difficulties in the campaign because of her gender, but that her "fatal disadvantage lay in her record and in her views".
### History and legend
Contemporary accounts ascribed the result to a number of causes. Douglas friend and former Interior Secretary Harold L. Ickes blamed Roosevelt's weak candidacy and what he believed was Nixon's use of the red scare. Supervisor John Anson Ford of Los Angeles County chalked up the result to Nixon's skill as a speaker and a lack of objective reporting by the press. Douglas's campaign treasurer, Alvin Meyers, stated that while labor financed Douglas's campaign, it failed to vote for her, and blamed the Truman Administration for "dumping" her. Douglas's San Diego campaign manager claimed that 500,000 people in San Diego and Los Angeles had received anonymous phone calls alleging Douglas was a communist, though he could not name anyone who had received such a call. Time magazine wrote that Nixon triumphed "by making the Administration's failures in Asia his major issue".
As Nixon continued his political rise and then moved towards his downfall, the 1950 race increasingly took on sinister tones. According to Nixon biographer Earl Mazo, "Nothing in the litany of reprehensible conduct charged against Nixon, the campaigner, has been cited more often than the tactics by which he defeated Congresswoman Helen Gahagan Douglas for senator." Douglas friend and McGovern campaign manager Frank Mankiewicz, in his 1973 biography, Perfectly Clear: Nixon from Whittier to Watergate, focused on the race and the Pink Sheet, and alleged that Nixon never won a free election, that is, one without "major fraud".
Historian Ingrid Scobie came to a different conclusion in her biography of Douglas, Center Stage. Scobie concluded that, given voter attitudes at the time, no woman could have won that race. Scobie stated that Nixon's tactics, which used voter anger at communists, contributed to the magnitude of Douglas's defeat, as did the fragmentation of the California Democratic Party in 1950, the weakness of Roosevelt at the head of the ticket, Douglas's idealistic positions (to the left of many California Democrats) and Boddy's attacks. In his early biography of Nixon, Mazo contrasted the two campaigns and concluded, "when compared with the surgeons of the Nixon camp, the Douglas operators performed like apprentice butchers".
Both Roger Morris and Greg Mitchell (who wrote a book about the 1950 race) conclude that Nixon spent large sums of money on the campaign, with Morris estimating \$1–2 million (perhaps \$12 million–\$24 million today) and Mitchell suggesting twice that. Gellman, in his later book, conceded that Nixon's officially reported amount of \$4,209 was understated, but indicated that campaign finance law at that time was filled with loopholes, and few if any candidates admitted to their full spending. He considered Morris's and Mitchell's earlier estimates, though, to be "guess[es]" and "fantastic". Black suggests that Nixon spent about \$1.5 million and Douglas just under half of that.
Scobie summed up her discussion of Douglas's defeat:
> As an actress, she entered Broadway as a star on sheer talent and little training ... [As an opera singer], she sang abroad for two summers, fully expecting that the next step would be the Metropolitan Opera. In politics after five months of working with the [California Democratic] Women's Division, it seemed only natural that she head the state's organization and serve as Democratic National Committeewoman. Restless after three years in those positions, she saw the possibility of becoming a member of Congress as a logical next step. Only four years later, she felt ready to run for the Senate. But her lack of political experience and her inflexible stands on political issues, along with gender questions, eroded the support of the Democratic Party in 1950. What in fact may have hurt her the most is that for which she is most remembered—her idealism.
## General election results, November 7, 1950
### Results by county
Final results from the Secretary of State of California:
# Effects of Hurricane Isabel in North Carolina
The effects of Hurricane Isabel in North Carolina were widespread, with the heaviest damage in Dare County. The hurricane made landfall in the Outer Banks of North Carolina on September 18. There, storm surge flooding and strong winds damaged thousands of houses. The storm surge produced a 2,000-foot-wide (610 m) inlet on Hatteras Island, isolating Hatteras by road for two months. Several locations along North Carolina Highway 12 were partially washed out or covered with debris. Hurricane Isabel produced hurricane-force wind gusts across eastern North Carolina, knocking down trees and power lines. About 700,000 residents lost power due to the storm, although most outages were restored within a few days. The hurricane killed three people in the state – two due to falling trees, and the third a utility worker attempting to restore electricity. Damage in the state totaled \$450 million (2003 USD).
The National Hurricane Center issued a hurricane watch, and later warning, for the state's coastline in advance of the hurricane's landfall. Local officials issued evacuation orders for 18 counties, along with various flood warnings. In the aftermath of the hurricane, then-President George W. Bush declared a state of emergency for 26 counties in the state, which allocated federal resources to the state. Utility crews from nearby states helped restore power. The United States Geological Survey dredged sand to restore the breach on Hatteras Island, with traffic restored about two months after the hurricane.
## Preparations
On September 14 – four days before Hurricane Isabel made landfall – most computer models predicted that Isabel would strike the east coast of the United States between North Carolina and New Jersey. The National Hurricane Center consistently forecast a landfall in North Carolina. Forecasters initially predicted a landfall in the northeastern portion of the state, which became more accurate as the hurricane neared land. Strong confidence in Isabel's final landfall prompted the National Hurricane Center to issue a hurricane watch for the entire North Carolina coastline about 50 hours before Isabel struck land. About 12 hours later, the National Hurricane Center issued a hurricane warning from Cape Fear to the North Carolina/Virginia border, with a tropical storm warning extending southward to South Carolina. The Newport Weather Forecast Office warned of the potential for flash flooding. The office began preparing for the hurricane one week before landfall, and brought in additional staff members to assist with hurricane-related duties.
On September 16, officials issued a voluntary evacuation for portions of four counties and one entire county. By around 24 hours before landfall, mandatory evacuations were ordered for eight counties, including the state's coastal counties from Cape Fear to the North Carolina/Virginia border. Despite being under a mandatory evacuation, 57% of Outer Banks residents chose not to leave, as did 77% of residents in storm surge-prone areas of the Pamlico Sound, according to a survey taken after the hurricane. Residents who evacuated their homes cited the hurricane's strength and track, as well as statements from officials, as the main reasons for leaving. Evacuees stayed with friends or relatives, or in public shelters. Issues related to the evacuations included traffic problems, stalled cars along roads, inadequate route signing, and flooded or damaged roads. By the morning of the hurricane's landfall, 65 shelters were prepared with a capacity of 95,000 people. The American Red Cross prepared 100 feeding vehicles in staging areas, and deployed two mobile kitchens, each with the capacity to provide 10,000 meals per day. Additionally, five Southern Baptist Convention kitchens were on standby, with a combined capacity of 20,000 meals per day.
## Impact
Hurricane Isabel produced hurricane-force wind gusts throughout eastern North Carolina. The winds downed hundreds of trees, leaving about 700,000 people without power across the state. Damage from the hurricane totaled about \$450 million (2003 USD). Three people were killed in the state – a utility worker attempting to restore electricity, and two people struck by falling trees.
### Outer Banks
Hurricane Isabel began affecting North Carolina about 15 hours before it struck land. Upon making landfall along the Outer Banks, the hurricane produced strong waves of 15 to 25 feet (4.6 to 7.6 m) in height and a storm surge of about 6 to 8 feet (1.8 to 2.4 m). Storm tides along the coast peaked at 7.7 feet (2.3 m) in Cape Hatteras, though the total there could have been higher because the hurricane destroyed the tide gauge. Rough surf and storm surge caused overwash and severe beach erosion throughout the Outer Banks, with flooding in Ocracoke reportedly up to waist-high. The high waters washed out a 2,000-foot (610 m) portion of Hatteras Island between Hatteras and Frisco, creating a new inlet unofficially dubbed Isabel Inlet. The break was 15 feet (4.6 m) deep in areas, consisting of three distinct channels. The new inlet washed away all utility connections to Hatteras Village, including power lines and water pipes, as well as dunes, three houses, and a portion of North Carolina Highway 12. The storm surge and waves from Isabel resulted in another breach between Hatteras and Hatteras Inlet, in an area without roads or houses. The breach nearly became an inlet, though it was not deep enough for a constant water flow; it had little impact on residents. In addition to the floodwaters, Isabel produced an estimated 4 inches (100 mm) of rain throughout most of the Outer Banks, with Duck reporting a peak of 4.72 inches (120 mm). Wind gusts in association with the hurricane peaked at 105 mph (169 km/h) in Ocracoke, with several other locations reporting hurricane-force gusts.
Wind and water damage across the Outer Banks was extensive, with monetary damage in Dare County estimated at \$350 million (2003 USD). Strong waves and the storm surge from Hurricane Isabel knocked 30–40 houses and several motels off their pilings. Two families who did not evacuate were nearly swept out to sea when their home was destroyed; they reached safety despite local rescuers being unable to reach them. The rough waves damaged piers in Nags Head, Rodanthe, and Frisco, with three completely destroyed. Several locations along North Carolina Highway 12 were partially washed out or covered with debris, and 15-foot (4.6 m) sections of pavement on both sides of a bridge near Ocracoke were washed away. Strong waves destroyed a beach access ramp. Several thousand homes and businesses were damaged by the passage of the hurricane, but no deaths or injuries were reported in the Outer Banks.
### Southeast North Carolina
Southwest of where Isabel moved ashore, its effects were lighter than in the Outer Banks. Sustained winds along the coast reached 45 mph (72 km/h) at the Wilmington International Airport, with gusts to 66 mph (106 km/h) at a port facility in Wilmington. The large circulation of Isabel dropped moderate rainfall across the area, peaking at 4.51 inches (115 mm) in Whiteville. Weather radars estimated over 5 inches (130 mm) of precipitation fell in portions of New Hanover County. The rainfall ponded on roadways, though no severe flooding was reported. Storm tides were generally around 1 foot (0.30 m) above normal, though Wilmington reported a storm tide of 3.22 feet (0.98 m). Rough waves resulted in moderate beach erosion near Cape Fear and minor erosion along eastward-facing beaches north of Cape Fear.
Damage was minor in southeast North Carolina. Moderate winds inflicted isolated shingle and siding damage along barrier islands. The winds downed several trees, some onto cars and houses. Brief power outages were also reported. Beach erosion damaged a bridge in Bald Head Island. In Chowan County, a business parking lot was under several feet of water due to flash flooding. One person died in Carteret County while trying to restore electricity.
### Inland
Isabel produced strong winds throughout inland areas of eastern North Carolina. Plymouth, located 75 miles (121 km) from the hurricane's landfall, reported gusts to 95 mph (153 km/h). Sustained winds were lighter, with only a few locations receiving tropical storm strength winds. Tropical storm force wind gusts were reported as far inland as Lumberton, where gusts reached 52 mph (84 km/h). The passage of the hurricane resulted in moderate rainfall of up to 6.02 inches (153 mm) in Havelock. Upon making landfall, Isabel produced moderate to severe storm surges along the Pamlico and Neuse Rivers, with a location in Craven County reporting a storm tide of 10.5 feet (3.2 m) above normal.
The strong storm surge produced significant flooding in Harlowe and Oriental. Several other locations also reported flooding of streets and low-lying areas. The rise of water flooded many homes in Craven County and the eastern portions of Carteret and Pamlico counties. Emergency personnel rescued people who had not evacuated and became trapped by storm surge flooding. Eyewitnesses reported high-velocity, waist-deep water moving homes, trailers, and other objects many yards inland. As the water retreated, these objects were then dragged back towards the water. A 5 to 8 foot (1.5 to 2.4 m) storm surge struck the western portion of the Albemarle Sound, with significant surge flooding occurring to the west of Edenton. There, the surge destroyed four homes, two of which were moved up to 20 feet (6.1 m) off their concrete block foundations. Nearly 60 percent of all homes and businesses in Chowan County suffered some structural damage due to wind, much of it the result of large falling trees. One woman died when a tree fell on her vehicle in Chowan County.
## Aftermath
Due to its impacts, the name "Isabel" was retired following the 2003 season, and it will not be used again for an Atlantic hurricane.
Hundreds of residents were stranded in Hatteras following the formation of the new inlet created by rising waters. Many parts of North Carolina Highway 12 were partially washed out, damaged, or reduced to one lane, which slowed recovery efforts and the return of homeowners in the Outer Banks. The ferry between Hatteras Island and Ocracoke Island was temporarily halted due to damage after the hurricane, though a small passenger ferry remained available for Hatteras Village residents and emergency workers. Non-residents were barred from the Outer Banks for two weeks after the hurricane. After the ban was lifted, visitors walked nearly 1 mi (1.6 km) to see the new Isabel Inlet. Officials considered building a bridge or ferry system across the new inlet, but these plans were dropped in favor of pumping in sand and filling the inlet. This was despite opposition from coastal geologists, who stated that the evolution of the Outer Banks depends on inlets formed by hurricanes. Dredging operations began on October 17, using sand from the ferry channel to the southwest of Hatteras Island; this choice minimized impact to submerged aquatic vegetation. On November 22, about two months after the hurricane struck, Highway 12 and Hatteras Island were reopened to public access. On the same day, the ferry between Hatteras and Ocracoke was reopened. The breach on the southern end of Hatteras Island was filled in with sand.
Hardware stores experienced great demand for portable generators, chain saws, dehumidifiers, and air movers following the passage of the hurricane. Utility crews from across the country came to the state to assist in restoring power, though power outages persisted for several days. Over 2,500 utility workers worked, in some cases around the clock, to restore the power. One power company restored power to 68% of its affected customers by the day after Isabel passed through the area. Four days after landfall, 83,000 customers were without power, down from a peak of several hundred thousand.
Hours after Isabel made landfall, then-President George W. Bush issued a major disaster declaration for 26 North Carolina counties. This was later expanded to 47 counties in the state. The order allocated federal funds for the long-term recovery of hurricane-stricken residents and business owners, as well as providing federal funds for the state and local governments to pay 75 percent of the eligible cost for debris removal and emergency services related to the hurricane, including requested emergency work undertaken by the federal government. The order also allowed for the use of federal personnel, equipment and lifesaving systems and the delivery of heavy-duty generators, plastic sheeting, tents, cots, food, water, medical aid and other essential supplies and materials for sustaining human life. By four days after the emergency declaration, assistance checks were first mailed and used by residents to pay for what was not covered by their insurance.
By four days after landfall, FEMA had served around 68,000 meals to displaced families. More than a dozen disaster recovery centers were opened throughout the state. FEMA provided 125,000 pounds of ice in the first few days, and prepared 200,000 pounds of ice and 180,000 liters of water for the following week for the remaining communities without water. By six days after Isabel struck the state, all hospitals had reopened and all roads except North Carolina Highway 12 were passable after emergency crews cleared debris from them. By 12 weeks after the hurricane passed through the state, 54,425 residents had applied for federal assistance, with disaster aid totaling \$155.2 million (2003 USD).
## See also
- List of retired Atlantic hurricane names
- List of North Carolina hurricanes (2000–present)
|
221,162 |
Grey currawong
| 1,153,532,267 |
Large passerine bird native to southern Australia and Tasmania
|
[
"Birds described in 1801",
"Birds of New South Wales",
"Birds of South Australia",
"Birds of Tasmania",
"Birds of Victoria (state)",
"Birds of Western Australia",
"Endemic birds of Australia",
"Strepera",
"Taxa named by John Latham (ornithologist)"
] |
The grey currawong (Strepera versicolor) is a large passerine bird native to southern Australia, including Tasmania. One of three currawong species in the genus Strepera, it is closely related to the butcherbirds and Australian magpie of the family Artamidae. It is a large crow-like bird, around 48 cm (19 in) long on average, with yellow irises, a heavy bill, and dark plumage with white undertail and wing patches. The male and female are similar in appearance. Six subspecies are recognised and are distinguished by overall plumage colour, which ranges from slate-grey for the nominate from New South Wales and eastern Victoria and subspecies plumbea from Western Australia, to sooty black for the clinking currawong of Tasmania and subspecies halmaturina from Kangaroo Island. All grey currawongs have a loud distinctive ringing or clinking call.
Within its range, the grey currawong is generally sedentary, although it is a winter visitor to the southeastern corner of Australia. Comparatively little studied, its behaviour and habits are poorly known. It is omnivorous, with a diet that includes a variety of berries, invertebrates, and small vertebrates. Its habitat includes all kinds of forested areas as well as scrubland in drier parts of the country. It is less arboreal than the pied currawong, spending more time foraging on the ground. It builds nests high in trees, which has limited the study of its breeding habits. Unlike its more common relative, it has adapted poorly to human impact and has declined in much of its range, although it is not considered endangered.
## Taxonomy and naming
The grey currawong was first described as Corvus versicolor by ornithologist John Latham in 1801, who gave it the common name of "variable crow". The specific name versicolor means 'of variable colours' in Latin. Other old common names include grey crow-shrike, leaden crow-shrike, mountain magpie, black-winged currawong (in western Victoria), clinking currawong (in Tasmania), and squeaker (in Western Australia). The black-winged currawong was known to the Ramindjeri people of Encounter Bay as wati-eri, the word meaning "to sneak" or "to track". Kiling-kildi was a name derived from the call used by the people of the lower Murray River.
Together with the pied currawong (S. graculina) and black currawong (S. fuliginosa), the grey currawong forms the genus Strepera. Although crow-like in appearance and habits, currawongs are only distantly related to true crows, and are instead closely related to the Australian magpie and the butcherbirds. The affinities of all three genera were recognised early on and they were placed in the family Cracticidae in 1914 by ornithologist John Albert Leach after he had studied their musculature. Ornithologists Charles Sibley and Jon Ahlquist recognised the close relationship between the woodswallows and the butcherbirds and relatives in 1985, and combined them into a Cracticini clade, which later became the family Artamidae.
### Subspecies
Six subspecies are spread around Australia. They vary extensively in the colour of their plumage, from grey to sooty black, and the amount of white on their wings, and most were at one time considered separate species:
- S. v. versicolor, the nominate race, is known as the grey currawong, and is found in New South Wales, the Australian Capital Territory, and eastern and central Victoria, west to Port Phillip on the coast, and to the Grampians inland.
- S. v. intermedia, the grey-brown form of South Australia, is also known as the brown currawong. It is found in the Yorke and Eyre Peninsulas, the Gawler and Mount Lofty Ranges and the eastern areas of the Great Australian Bight. The smallest of the six subspecies, it has a shorter wing and tail. Birds in the southern Eyre Peninsula have darker plumage than those in the northern parts. First described by Richard Bowdler Sharpe in 1877 from a specimen collected in Port Lincoln, its specific name is the Latin adjective intermedia "intermediate".
- S. v. arguta, the darkest race, is from eastern Tasmania and is known as the clinking currawong from its call or locally as the black magpie. Sharpe called it the Tasmanian hill-crow. It was first described by John Gould in 1846. The specific name is the Latin adjective argūtus "shrill/piercing", "noisy" or "melodious". Larger and heavier than the nominate subspecies, it has longer wings, tail, bill, and tarsus.
- S. v. melanoptera, known as the black-winged currawong, is from western Victoria's Mallee region and South Australia west to the Mount Lofty Ranges. It can be difficult to distinguish from the black and pied currawongs at any distance. Of similar size and bill-shape to the nominate subspecies, it has a darker blackish-brown plumage and lacks the white wing markings. Birds from much of western Victoria are intermediates between this and the nominate subspecies, often bearing partial white markings on the wings. Similarly, birds in the western part of its range in South Australia are intermediate with the subspecies to the west and also have some paler patches. Named by John Gould in 1846, its specific name is derived from the Ancient Greek words melano- "black" and pteron "wings". American ornithologist Dean Amadon observed that birds from northwestern Victoria were lighter in plumage than those of South Australia, and tentatively classified them as a separate subspecies howei. However, he noted they warranted further investigation, and subsequent authorities have not recognised the populations as separate.
- S. v. halmaturina is restricted to Kangaroo Island. A dark-plumaged subspecies, it has a longer narrower bill than the nominate race, and is lighter in weight. The specific name is the adjective halmaturina "of Kangaroo Island". It was first named by Gregory Mathews in 1912.
- S. v. plumbea is found from western South Australia and the southwestern corner of the Northern Territory westwards into Western Australia. It is colloquially known as "squeaker" from the sound of its call. Named by Gould in 1846, its specific name is the Latin adjective plumběus "leaden". The common name leaden crow-shrike refers to this group. Very similar in plumage to the nominate subspecies, it differs in its thicker, more downward-curved bill. The base plumage is variable, but tends to be slightly darker and possibly more brown-tinged than the nominate subspecies. Amadon noted that a specimen from the Everard Ranges in northwestern South Australia was larger and paler than other specimens of plumbea. Although he considered these Central Australian birds as a separate subspecies centralia, he conceded very little was known. They have since been considered part of plumbea.
## Description
A larger and more slender bird than its more common relative the pied currawong, the adult grey currawong ranges from 44 to 57 cm (17 to 22 in) in length, with an average of around 52 cm (20 in); the wingspan varies from 72 to 85 cm (28 to 33 in), averaging around 78 cm (31 in), with an average weight of around 350 g (12 oz). Adults of the Tasmanian subspecies average around 440 g (16 oz). The male is on average slightly larger than the female, but the size and weight ranges mostly overlap. It is generally a dark grey bird with white in the wing, undertail coverts, the base of the tail and, most visibly, the tip of the tail. It has yellow eyes. The orbital skin (eye-ring), legs and feet are black, whereas the bill and gape range from greyish black to black. The overall plumage varies according to subspecies. The nominate race versicolor and plumbea are slate-grey in colour, while melanoptera and intermedia are blackish-brown, and arguta of Tasmania and halmaturina a sooty black. The size of the white patch on the wing also varies, being large and easily spotted in versicolor, plumbea, intermedia and arguta, but indistinct or non-existent in melanoptera and halmaturina.
More specifically, the nominate subspecies has a grey forehead, crown, nape, ear-coverts and throat with the face a darker grey-black. The feathers of the throat are longer, giving rise to hackles there. The upperparts and underparts are a brownish-grey and become more brown with age. Towards the belly, the feathers are a paler grey. The wings are grey-brown, and the blackish primaries have white edges which merge to form the prominent white wing markings.
Birds appear to moult once a year in spring or summer, although observations have been limited. Young birds spend about a year in juvenile plumage before moulting into adult plumage at around a year old. Juvenile birds have more brown-tinged and uniform plumage; the darker colour around the lores and eyes is less distinct. Their blackish bill is yellow-tipped, and the gape is yellow. Their eyes are brownish, but turn yellow early; the exact timing is unknown, but is likely to be around four months of age.
### Voice
Unlike that of the pied currawong, the grey currawong's call does not sound like its name. The grey currawong is best known for making a sound variously transcribed as p'rink, clink, cling, ker-link or tullock, either in flight or when gathered in any numbers. The call has been described as very loud and ringing in the Tasmanian and Kangaroo Island subspecies; Edwin Ashby wrote that in Tasmania it was akin to the squeaking of a wheelbarrow and Gregory Mathews that it was like the kling of an anvil. Elsewhere, their call has been likened to the screech of ungreased metal grinding in Victoria and South Australia (races versicolor and melanoptera are noted as similar to each other), and as a harsh squeak in Western Australia. The clinking call resembles that of the superb lyrebird, which imitates the currawong call at times.
A softer and more tuneful musical call has been called the toy-trumpet call. It has been reported to foretell rainy weather. The loud bell call resembles the clinking call, and is a clear piping sound. Females and young make an insistent repetitive squawking when begging for food from a parent or mate, similar to the begging call of the Australian magpie, and make a gobbling sound when fed.
### Similar species
The grey currawong is unlikely to be confused with other species apart from other currawongs. It is immediately distinguishable from crows and ravens, which have wholly black plumage, a stockier build and white (rather than yellow) eyes. However, it can be encountered in mixed-species flocks with the pied currawong, from which it can be distinguished by its paler plumage, lack of a white base to the tail, straighter bill, and very different vocalisations. In northwestern Victoria, the black-winged currawong (subspecies melanoptera) has a darker plumage than other grey subspecies, and is thus more similar in appearance to the pied currawong, but its wings lack the white primaries of the latter species. In Tasmania, the black currawong is similar, but has a heavier bill, a call more like that of the pied currawong, and lacks the white rump.
## Distribution and habitat
Grey currawongs are found right across the southern part of Australia, from the Central Coast region of New South Wales south of latitude 32°S, southwards and westwards from the vicinity of Mudgee in the north, southwest to Temora and Albury and onto the Riverina, and across most of Victoria and southern South Australia to the fertile south-west corner of Western Australia and the semi-arid country surrounding it. The clinking subspecies is endemic to Tasmania, where it is more common in the eastern parts, but is absent from King and Flinders Islands in Bass Strait. There is an outlying population in the arid area where the Northern Territory meets South Australia and Western Australia. In general, the grey currawong is sedentary throughout its range, although it appears to be resident in the cooler months only in south Gippsland in eastern Victoria and on the far south coast of New South Wales.
The grey currawong is found in wet and dry sclerophyll forests across its range, as well as mallee scrubland, and open areas such as parks or farmland near forested areas. It also inhabits pine plantations. Preferences vary between regions; subspecies versicolor is more common in wetter forests in southeastern mainland Australia, while the Tasmanian subspecies arguta is found most commonly in lowland dry sclerophyll forest. The subspecies melanoptera and intermedia are found mainly in mallee scrublands and woodlands, while in Western Australia, subspecies plumbea is found in various forests and woodlands, such as jarrah (Eucalyptus marginata), karri (E. diversicolor), tuart (E. gomphocephala) and wandoo (E. wandoo), as well as paperbark woodlands around swampy areas, and acacia shrublands dominated by summer-scented wattle (Acacia rostellifera) and mulga (Acacia aneura) with Eremophila understory.
Formerly common, the grey currawong appears to have declined across its distribution; it became scarce in northern Victoria in the 1930s, and in northeastern Victoria in the 1960s. Habitat destruction has seen it decline in southeastern South Australia around Naracoorte and from many areas in the Western Australian Wheatbelt. It also became rare in the Margaret River and Cape Naturaliste regions after 1920, and vanished from much of the Swan Coastal Plain by the 1940s. One place which did see an increase in numbers, in the 1960s, is the Mount Lofty Ranges. The species has never been common in the Sydney Basin, and sightings have been uncommon and scattered since the time of John Gould in the early 19th century. The status of the species is uncertain in the Northern Territory, where it may be extinct; it has been classified there as critically endangered pending further information.
## Behaviour
Overall, data on the social behaviour of the grey currawong is lacking, and roosting habits are unknown. It is generally shyer and more wary than its pied relative, but has become more accustomed to people in areas of high human activity in southwest Western Australia. Its undulating flight is rapid and silent. It hops or runs when on the ground. Birds are generally encountered singly or in pairs, but may forage in groups of three to eleven birds. Up to forty birds may gather to harvest a fruit tree if one is found. The black-winged subspecies is seldom seen in groups larger than four or five, while the clinking currawong may form groups of up to forty birds over the non-breeding season.
There is some evidence of territoriality, as birds in the Wheatbelt maintain territories year-round. The grey currawong has been recorded harassing larger birds such as the wedge-tailed eagle, square-tailed kite and Australian hobby. The species has been observed bathing by shaking its wings in water at ponds, as well as applying clay to its plumage after washing.
Two species of chewing louse have been isolated and described from grey currawongs: Menacanthus dennisi from subspecies halmaturina on Kangaroo Island in South Australia, and Australophilopterus strepericus from subspecies arguta near Launceston in Tasmania. A new species of spirurian nematode, Microtetrameres streperae, isolated from a grey currawong at Waikerie, was described in 1977. The parasitic alveolate Isospora streperae was described from a grey currawong (subspecies plumbea) from Western Australia.
### Breeding
The breeding habits of the grey currawong are not well known, and the inaccessibility of its nests makes study difficult. The breeding season lasts from August to December. The grey currawong builds a large shallow nest of thin sticks lined with grass and bark high in trees; generally eucalypts are chosen. It produces a clutch of one to five (though usually two or three) rounded or tapered oval eggs, which vary in size and colour according to subspecies. Those of subspecies versicolor average 30 mm × 43 mm (1.2 in × 1.7 in) in size and are a pale brown or buff with shades of pink or wine tones, and are marked with streaks or splotches of darker brown, purple-brown, slate-grey or even blue-tinged. Those of the black-winged currawong are similarly sized at 30 mm × 41 mm (1.2 in × 1.6 in) and are buff or flesh-coloured with a purple tint and marked with darker browns or purple-browns. The clinking currawong lays larger and paler eggs of dull white, pale grey or buff with a faint wine-colour tint, and marked with darker tones of purple-, grey- or blue-tinged brown, which average 31 mm × 46 mm (1.2 in × 1.8 in). The eggs of the brown currawong are also pale wine-tinted brown, buff, or cream with darker markings of cinnamon, brown or purple-brown, and measure 29 mm × 42 mm (1.1 in × 1.7 in). Finally, the western subspecies lays eggs averaging 31 mm × 43 mm (1.2 in × 1.7 in) in size which are pale shades of red-brown or wine-colour, with darker red-brown markings. In all subspecies, the markings can coalesce over the larger end of the egg to form a darker 'cap'. The incubation period is poorly known because of the difficulty of observing nests, but one observation suggested around 23 days from laying to hatching. Like all passerines, the chicks are born naked, and blind (altricial), and remain in the nest for an extended period (nidicolous). Both parents feed the young.
Data on nesting success rates is limited; one study of 35 nests found that 28 (80%) resulted in the fledging of at least one young currawong. Causes of failure included nest collapse by gale-force winds and rain, and harassment and nest raiding by pied currawongs. The incidence of brood parasitism is uncertain. A pair of grey currawongs have been observed feeding a channel-billed cuckoo (Scythrops novaehollandiae) chick on one occasion.
### Feeding
The grey currawong is an omnivorous and opportunistic feeder. It preys on many invertebrates, such as snails, spiders and woodlice, and a wide variety of insects including beetles, earwigs, cockroaches, wasps, ants and grasshoppers, as well as smaller vertebrates, including frogs, lizards such as the bearded dragon and skinks, rats, mice, and the nestlings or young of the Tasmanian nativehen, red wattlebird, eastern spinebill, house sparrow (Passer domesticus), and splendid fairywren (Malurus splendens). It has been recorded hunting at the nests of the superb fairywren (Malurus cyaneus) and the bell miner (Manorina melanophrys).
A wide variety of plant material is also consumed, including the fruit or berries of Ficus species, Leucopogon species, Exocarpos species, a cycad Macrozamia riedlei, a mistletoe Lysiana exocarpi, Astroloma humifusum, A. pinifolium, Myoporum insulare, Enchylaena tomentosa and Coprosma quadrifida. The grey currawong also eats berries of introduced plants such as Pyracantha angustifolia and P. fortuneana, and Cotoneaster species, and crops such as maize, apples, pears, quince, various stone fruit of the genus Prunus, grapes, tomato, passion flowers, and the nectar of gymea lily (Doryanthes excelsa). On Kangaroo Island, the grey currawong has been identified as the main vector for the spread of bridal creeper (Asparagus asparagoides). Boneseed (Chrysanthemoides monilifera subspecies monilifera), another invasive species readily dispersed in bird droppings, is also consumed by grey currawongs. In Tasmania, A. pinifolium is especially popular, and one observer noted that the normally noisy birds became quiet and sluggish after eating it, prompting him to wonder whether the plant had a narcotic effect on the birds.
Foraging takes place on the ground, or less commonly in trees or shrubs. Most commonly the grey currawong probes the ground for prey, but it sometimes chases more mobile animals. It has been recorded removing insects from parked cars, as well as employing the zirkeln method, in which it inserts its bill in a crack or under a rock and uses it to lever open a wider space to hunt prey. In one case, a bird was observed pulling bark off the branch of a eucalypt and levering open gaps every 4 to 5 cm (1.5 to 2 in) with its bill. The grey currawong usually swallows prey whole, although one bird was observed impaling a rodent on a stick and eating parts of it, in the manner of a butcherbird. A field study on road ecology in southwestern Australia revealed that the grey currawong is unusual in inhabiting cleared areas adjacent to roads, although it was not recorded feeding on roadkill, and moves away from the area in breeding season. It was also commonly hit and killed by vehicles.
## Conservation status
The grey currawong has a very large range and thus does not meet the range-size criterion for vulnerable. The population trend appears to be stable, and although the population size has not been quantified, it is unlikely to approach the thresholds for vulnerable under the population size criterion (10,000 mature individuals with a continuing decline estimated to be more than 10 percent in ten years or three generations, or with a specified population structure). The International Union for Conservation of Nature has therefore evaluated the species as least concern.
## In Aboriginal mythology
A grey currawong features in the major Dreaming story of the Kaurna people, when the ancestor hero Tjilbruke kills one in order to use fat and feathers to cover his body before transforming himself into a glossy ibis at Rosetta Head.
|
11,930,363 |
SMS Kaiser Wilhelm II
| 1,158,099,638 |
Battleship of the German Imperial Navy
|
[
"1897 ships",
"Kaiser Friedrich III-class battleships",
"Ships built in Wilhelmshaven",
"World War I battleships of Germany"
] |
SMS Kaiser Wilhelm II ("His Majesty's Ship Emperor William II") was the second ship of the Kaiser Friedrich III class of pre-dreadnought battleships. She was built at the Imperial Dockyard in Wilhelmshaven and launched on 14 September 1897. The ship was commissioned into the fleet as its flagship on 13 February 1900. Kaiser Wilhelm II was armed with a main battery of four 24-centimeter (9.45 in) guns in two twin turrets. She was powered by triple expansion engines that delivered a top speed of 17.5 knots (32.4 km/h; 20.1 mph).
Kaiser Wilhelm II served as the flagship of the Active Battle Fleet until 1906, participating in numerous fleet training exercises and visits to foreign ports. She was replaced as flagship by the new battleship SMS Deutschland. After the new dreadnought battleships began entering service in 1908, Kaiser Wilhelm II was decommissioned and put into reserve. She was reactivated in 1910 for training ship duties in the Baltic, but was again taken out of service in 1912.
With the outbreak of World War I in August 1914, Kaiser Wilhelm II and her sisters were brought back into active duty as coastal defense ships in V Battle Squadron. Her age, coupled with shortages of ship crews, led to her withdrawal from this role in February 1915, after which she served as a command ship for the High Seas Fleet, based in Wilhelmshaven. Following the end of the war in November 1918, Kaiser Wilhelm II was stricken from the navy list and sold for scrap in the early 1920s. Her bow ornament is preserved at the Military History Museum of the Bundeswehr in Dresden.
## Design
After the German Kaiserliche Marine (Imperial Navy) ordered the four Brandenburg-class battleships in 1889, a combination of budgetary constraints, opposition in the Reichstag (Imperial Diet), and a lack of a coherent fleet plan delayed the acquisition of further battleships. The former Secretary of the Reichsmarineamt (Imperial Navy Office), Leo von Caprivi, became the Chancellor of Germany in 1890, and Vizeadmiral (Vice Admiral) Friedrich von Hollmann became the new Secretary of the Reichsmarineamt. Hollmann requested a new battleship in 1892 to replace the elderly ironclad turret-ship Preussen, built twenty years earlier, but the Franco-Russian Alliance, signed the year before, put the government's attention on expanding the Army's budget. Parliamentary opposition forced Hollmann to delay until the following year, when Caprivi spoke in favor of the project, noting that Russia's recent naval expansion threatened Germany's Baltic Sea coastline. In late 1893, Hollmann presented the Navy's estimates for the 1894–1895 budget year, again with a request for a replacement for Preussen, which was approved. The new ship abandoned the six-gun arrangement of the Brandenburgs for four large-caliber pieces, the standard arrangement of other navies at the time. A second member of the class, Kaiser Wilhelm II, was delayed until early 1896, when the Reichstag approved the ship for the 1896–1897 budget.
Kaiser Wilhelm II was 125.3 m (411 ft 1.07 in) long overall and had a beam of 20.4 m (66 ft 11.15 in). Her draft was 7.89 m (25 ft 10.63 in) forward and 8.25 m (27 ft 0.80 in) aft. She displaced 11,097 t (10,922 long tons) as designed and up to 11,785 t (11,599 long tons) at full load. The ship was powered by three 3-cylinder vertical triple-expansion steam engines that drove three screw propellers. Steam was provided by four marine-type water-tube boilers and eight cylindrical boilers, all of which burned coal and were vented through a pair of tall funnels. Kaiser Wilhelm II's powerplant was rated at 12,822 indicated horsepower (9,561 kW), which generated a top speed of 17.5 knots (32.4 km/h; 20.1 mph). She had a cruising radius of 3,420 nautical miles (6,330 km; 3,940 mi) at a speed of 10 knots (19 km/h; 12 mph). She had a normal crew of 39 officers and 612 enlisted men; while serving as the fleet flagship, she carried an additional admiral's staff of 12 officers and 51–63 enlisted men.
The ship's armament consisted of a main battery of four 24 cm (9.4 in) SK L/40 guns in twin gun turrets, one fore and one aft of the central superstructure. Her secondary armament consisted of eighteen 15 cm (5.9 in) SK L/40 guns carried in a mix of turrets and casemates. Close-range defense against torpedo boats was provided by a battery of twelve 8.8 cm (3.5 in) SK L/30 quick-firing guns all mounted in casemates. She also carried twelve 3.7 cm (1.5 in) machine cannon, but these were later removed. The armament suite was rounded out with six 45 cm (17.7 in) torpedo tubes, one of which was placed in an above-water swivel mount at the stern, with four submerged on the broadside and one submerged in the bow. The ship's belt armor was 300 mm (11.81 in) thick, and the main armor deck was 65 mm (2.56 in) thick. The conning tower and main battery turrets were protected with 250 mm (9.84 in) of armor plating, and the secondary casemates received 150 mm (5.91 in) of armor protection.
## Service history
### Construction to 1902
Kaiser Wilhelm II's keel was laid on 26 October 1896, at the Kaiserliche Werft in Wilhelmshaven, under construction number 24. Ordered under the contract name Ersatz Friedrich der Grosse, to replace the elderly armored frigate Friedrich der Grosse, she was launched on 14 September 1897. During the launching ceremony, Konteradmiral (Rear Admiral) Prince Heinrich christened the ship for his brother, Kaiser Wilhelm II. She was commissioned on 13 February 1900, assuming the position of fleet flagship, which she held until 1906. Kaiser Wilhelm II was the first battleship of the German Navy specifically built to serve as a fleet flagship. After completing her sea trials in June 1900, she was assigned to II Division of I Squadron, where she replaced the old armored corvette Bayern in the division and the battleship Kurfürst Friedrich Wilhelm as flagship of the Active Battle Fleet.
In early July 1900, the four Brandenburg-class battleships, which were assigned to I Division of I Squadron, were ordered to East Asian waters to assist in the suppression of the Boxer Uprising. As a result, Kaiser Wilhelm II and the other ships of II Division were transferred to I Division on 8 July, under the command of Konteradmiral Paul Hoffmann. On 15 August the annual autumn maneuvers began; initially, the fleet practiced tactical maneuvers in the German Bight. A cruise in battle formation through the Kattegat followed, and the maneuvers concluded in the western Baltic on 21 September. During these exercises, Kaiser Wilhelm II served as the umpire ship, and so Hoffmann temporarily transferred his flag to her sister ship Kaiser Friedrich III. He returned to Kaiser Wilhelm II on 29 September after the conclusion of the exercises in Kiel.
On 1 November 1900, Kaiser Friedrich III replaced Kaiser Wilhelm II as the I Squadron flagship; the latter, as the fleet flagship, remained assigned to the squadron for tactical purposes. From 4 to 15 December, Kaiser Wilhelm II and I Squadron went on a winter training cruise to Norway; the ships anchored at Larvik from 10 to 12 December. Kaiser Wilhelm II went into drydock in January 1901 for overhaul and some modernization work. This included the reconstruction of a larger bridge and the removal of some of her searchlights. While the ship was laid up, Admiral Hans von Koester replaced Hoffmann as the fleet commander, a position he would hold until the end of 1906.
The annual training routine began at the end of March 1901 with squadron exercises in the Baltic. On the night of 1–2 April, Kaiser Friedrich III ran hard aground on the Adlergrund, a shoal to the east of Cape Arkona, and Kaiser Wilhelm II lightly brushed the bottom. After a short inspection, it was determined that Kaiser Wilhelm II was undamaged, and so Prince Heinrich transferred his flag to the ship on 23 April, while Kaiser Friedrich III went into drydock for repairs. In the meantime, on 18 April, Wilhelm II commissioned his son Prince Adalbert aboard Kaiser Wilhelm II. On 27 April, I Squadron conducted gunnery drills and a landing exercise off Apenrade. By 17 June, Kaiser Wilhelm II's sister ship Kaiser Wilhelm der Grosse had entered service, and so she took over flagship duties for the squadron, while Kaiser Wilhelm II returned to serving as only the fleet flagship. The squadron then went on a cruise to Spain, and while docked in Cádiz, rendezvoused with the Brandenburg-class battleships returning from East Asian waters. I Squadron was back in Kiel by 11 August, though the late arrival of the Brandenburgs delayed the participation of I Squadron in the annual autumn fleet training. The maneuvers began with exercises in the German Bight, followed by a mock attack on the fortifications in the lower Elbe. Gunnery drills took place in Kiel Bay before the fleet steamed to Danzig Bay; there, during the maneuvers, Wilhelm II and Czar Nicholas II of Russia visited the fleet and came aboard Kaiser Wilhelm II. The autumn maneuvers concluded on 15 September. Kaiser Wilhelm II and the rest of I Squadron went on their normal winter cruise to Norway in December, which included a stop at Oslo from 7 to 12 December, when the ship was visited by King Oscar II.
In January 1902, Kaiser Wilhelm II went into dock at Wilhelmshaven for her annual overhaul. In mid-March, Wilhelm II and his wife, Augusta Victoria, came aboard the ship and waited in the mouth of the Elbe for Wilhelm's brother Prince Heinrich, who was returning from the United States. I Squadron then went on a short cruise in the western Baltic before embarking on a major cruise around the British Isles, which lasted from 25 April to 28 May. Individual and squadron maneuvers took place from June to August, interrupted only by a cruise to Norway in July. During these maneuvers, three of Kaiser Wilhelm II's boiler tubes burst, but the damage was repaired by the start of the autumn maneuvers in August. These exercises began in the Baltic and concluded in the North Sea with a fleet review in the Jade. Kaiser Wilhelm II took no active part in the exercises; she instead served as an observation ship for the commander of the fleet, as well as her namesake, Kaiser Wilhelm II. The regular winter cruise followed during 1–12 December.
### 1903–1905
The first quarter of 1903 followed the usual pattern of training exercises. The squadron went on a training cruise in the Baltic, followed by a voyage to Spain that lasted from 7 May to 10 June. After returning to Germany, Kaiser Wilhelm II participated in the Kiel Week sailing regatta. In July, she joined I Squadron for the annual cruise to Norway. The autumn maneuvers consisted of a blockade exercise in the North Sea, a cruise of the entire fleet first to Norwegian waters and then to Kiel in early September, and finally a mock attack on Kiel. The exercises concluded on 12 September. Kaiser Wilhelm II finished the year's training schedule with a cruise into the eastern Baltic that started on 23 November and a cruise into the Skagerrak that began on 1 December. During the latter, the ship stopped in the Danish port of Frederikshavn.
Kaiser Wilhelm II participated in an exercise in the Skagerrak from 11 to 21 January 1904, after which she returned to Kiel. She then went to the Norwegian city of Ålesund to assist with the major fire that devastated the largely wooden city on 23 January. Squadron exercises followed from 8 to 17 March. A major fleet exercise took place in the North Sea in May, and Kaiser Wilhelm II was again present at Kiel Week in June, where she was visited by Britain's King Edward VII, Lord William Palmer, and Prince Louis of Battenberg. In June, Kaiser Wilhelm II won the Kaiser's Schießpreis (Shooting Prize) for excellent gunnery. The following month, I Squadron and I Scouting Group visited Britain, including a stop at Plymouth on 10 July. The German fleet departed on 13 July, bound for the Netherlands; I Squadron anchored in Vlissingen the following day. There, the ships were visited by Queen Wilhelmina. I Squadron remained in Vlissingen until 20 July, when they departed for a cruise in the northern North Sea with the rest of the fleet. The squadron stopped in Molde, Norway, on 29 July, while the other units went to other ports.
The fleet reassembled on 6 August and steamed back to Kiel, where it conducted a mock attack on the harbor on 12 August. During its cruise in the North Sea, the fleet experimented with wireless telegraphy on a large scale and searchlights at night for communication and recognition signals. Immediately after returning to Kiel, the fleet began preparations for the autumn maneuvers, which began on 29 August in the Baltic. The fleet moved to the North Sea on 3 September, where it took part in a major landing operation, after which the ships took the ground troops from IX Corps that participated in the exercises to Altona for a parade for Wilhelm II. The ships then conducted their own parade for the Kaiser off the island of Helgoland on 6 September. Three days later, the fleet returned to the Baltic via the Kaiser Wilhelm Canal, where it participated in further landing operations with IX Corps and the Guards Corps. On 15 September, the maneuvers came to an end. I Squadron went on its winter training cruise, this time to the eastern Baltic, from 22 November to 2 December.
Kaiser Wilhelm II took part in a pair of training cruises with I Squadron during 9–19 January and 27 February – 16 March 1905. Individual and squadron training followed, with an emphasis on gunnery drills. On 12 July, the fleet began a major training exercise in the North Sea. The fleet then cruised through the Kattegat and stopped in Copenhagen, where Kaiser Wilhelm II was visited by the Danish King Christian IX. It next stopped in Stockholm, where Kaiser Wilhelm II, the battleship Brandenburg, and the armored cruiser Friedrich Carl all ran aground, though only Friedrich Carl was seriously damaged. The summer cruise ended on 9 August, though the autumn maneuvers that would normally have begun shortly thereafter were delayed by a visit from the British Channel Fleet that month. The British fleet stopped in Danzig, Swinemünde, and Flensburg, where it was greeted by units of the German Navy; Kaiser Wilhelm II and the main German fleet were anchored at Swinemünde for the occasion. The visit was strained by the Anglo-German naval arms race.
As a result of the British visit, the 1905 autumn maneuvers were shortened considerably, from 6 to 13 September, and consisted only of exercises in the North Sea. The first exercise presumed a naval blockade in the German Bight, and the second envisioned a hostile fleet attempting to force the defenses of the Elbe. During October, Kaiser Wilhelm II conducted individual training and, in November, joined the rest of I Squadron for a cruise in the Baltic. In early December, I and II Squadrons went on their regular winter cruise, this time to Danzig, where they arrived on 12 December. While on the return trip to Kiel, the fleet conducted tactical exercises.
### 1906–1914
Kaiser Wilhelm II and the rest of the fleet undertook a heavier training schedule in 1906 than in previous years. The ships were occupied with individual, division and squadron exercises throughout April. Starting on 13 May, major fleet exercises took place in the North Sea and lasted until 8 June with a cruise around the Skagen into the Baltic. The fleet began its usual summer cruise to Norway in mid-July. Kaiser Wilhelm II and I Squadron anchored in Molde, where they were joined on 21 July by Wilhelm II aboard the steamer Hamburg. The fleet was present for the birthday of Norwegian King Haakon VII on 3 August. The German ships departed the following day for Helgoland, to join exercises being conducted there. The fleet was back in Kiel by 15 August, where preparations for the autumn maneuvers began. On 22–24 August, the fleet took part in landing exercises in Eckernförde Bay outside Kiel. The maneuvers were paused from 31 August to 3 September when the fleet hosted vessels from Denmark and Sweden, along with a Russian squadron from 3 to 9 September in Kiel. The maneuvers resumed on 8 September and lasted five more days.
On 26 September 1906, now-Großadmiral (Grand Admiral) von Koester lowered his flag aboard Kaiser Wilhelm II, ending her tenure as the fleet flagship; the new battleship Deutschland replaced her in this role. Kaiser Wilhelm II was now assigned to I Squadron, where she served as the second command flagship, under Konteradmiral Max Rollmann. The ship participated in the uneventful winter cruise into the Kattegat and Skagerrak from 8 to 16 December. The first quarter of 1907 followed the previous pattern and, on 16 February, the Active Battle Fleet was re-designated the High Seas Fleet. From the end of May to early June the fleet went on its summer cruise in the North Sea, returning to the Baltic via the Kattegat. This was followed by the regular cruise to Norway from 12 July to 10 August, during which Kaiser Wilhelm II anchored in Trondheim. During the autumn maneuvers, which lasted from 26 August to 6 September, the fleet conducted landing exercises in northern Schleswig with IX Corps. The winter training cruise went into the Kattegat from 22 to 30 November.
In May 1908, the fleet went on a major cruise into the Atlantic instead of its normal voyage in the North Sea. Kaiser Wilhelm II stopped in Horta in the Azores. The fleet returned to Kiel on 13 August to prepare for the autumn maneuvers, which lasted from 27 August to 7 September. Division exercises in the Baltic immediately followed from 7 to 13 September. At the conclusion of these maneuvers, Kaiser Wilhelm II was taken out of service. In 1909–1910, she underwent a major reconstruction in Wilhelmshaven. The superstructure amidships was cut down to reduce top-heaviness, new circular funnels were installed, and the conning tower was enlarged. The fighting tops from the masts were removed, and the secondary battery was significantly revised. Four of the 15 cm guns were removed and two 8.8 cm guns were added; most of the 8.8 cm guns were moved from the upper decks into casemates in the main deck. On 14 October 1910, Kaiser Wilhelm II was recommissioned for service in the Baltic reserve division. She underwent short sea trials from 21 to 23 October before proceeding to Kiel, where she was based with her four sister ships.
From 3 to 29 April 1911, the ship participated in maneuvers off Rügen. Together with the North Sea reserve division, Kaiser Wilhelm II and her sister ships went on a training cruise to Norway, starting on 8 June. During the visit, she stopped in Arendal, Bergen, and Odda. In July, the ship conducted gunnery training near the northern coast of Holstein, followed by training cruises off the coast of Mecklenburg. Kaiser Wilhelm II served as the flagship of III Squadron, which was organized for the autumn maneuvers in August. III Squadron was attached to the High Seas Fleet for the maneuvers, which lasted from 28 August to 11 September. The following day, III Squadron was disbanded and Kaiser Wilhelm II returned to service with the Baltic reserve division. In February 1912, Kaiser Wilhelm II was sent to the Fehmarn Belt to assist in freeing several freighters that were stuck in ice. She and her sisters were again decommissioned on 9 May, and remained out of service until 1914.
### World War I
As a result of the outbreak of World War I, Kaiser Wilhelm II and her sisters were brought out of reserve and mobilized as V Battle Squadron on 5 August 1914; Kaiser Wilhelm II served as the flagship of the squadron. The ships were readied for war very slowly, and they were not ready for service in the North Sea until the end of August. They were initially tasked with coastal defense, though they served in this capacity for a very short time. In mid-September, V Squadron was transferred to the Baltic, under the command of Prince Heinrich. He initially planned to launch a major amphibious assault on Windau, but a shortage of transports forced a revision of the plan. Instead, V Squadron was to carry the landing force, but this too was cancelled after Heinrich received false reports of British warships having entered the Baltic on 25 September. Kaiser Wilhelm II and her sisters returned to Kiel the following day, disembarked the landing force, and then proceeded to the North Sea, where they resumed guard ship duties. Before the end of the year, V Squadron was once again transferred to the Baltic.
Prince Heinrich ordered a foray toward Gotland. On 26 December 1914, the battleships rendezvoused with the Baltic cruiser division in the Bay of Pomerania and then departed on the sortie. Two days later, the fleet arrived off Gotland to show the German flag, and was back in Kiel by 30 December. The squadron returned to the North Sea for guard duties, but was withdrawn from front-line service in February 1915. Shortages of trained crews in the High Seas Fleet, coupled with the risk of operating older ships in wartime, necessitated the deactivation of Kaiser Wilhelm II and her sisters. During this period, her sister Kaiser Karl der Grosse briefly served as the squadron flagship, but Kaiser Wilhelm II resumed the post starting on 24 February. The following month, on 5 March, her crew was reduced and she steamed to Wilhelmshaven, where she was converted into the headquarters ship for the commander of the High Seas Fleet, commencing on 26 April. The ship had its wireless equipment modernized for use by the commander when the fleet was in port.
After the end of the war, Kaiser Wilhelm II continued in her role as headquarters ship for the fleet commander and his staff, along with the commander of the minesweeping operation in the North Sea. She was decommissioned for the last time on 10 September 1920. The naval clauses of the Treaty of Versailles, which ended the war, limited the capital ship strength of the re-formed Reichsmarine to eight pre-dreadnought battleships of the Deutschland and Braunschweig classes, of which only six could be operational at any given time. As a result, Kaiser Wilhelm II was stricken from the navy list on 17 March 1921 and sold to shipbreakers. By 1922, Kaiser Wilhelm II and her sisters had been broken up for scrap metal. The ship's bow ornament (Bugzier) is preserved at the Military History Museum of the Bundeswehr in Dresden.
|
9,009,961 |
IK Pegasi
| 1,170,975,811 |
Star in the constellation Pegasus
|
[
"A-type main-sequence stars",
"Am stars",
"Astronomical objects discovered in 1862",
"Bright Star Catalogue objects",
"Durchmusterung objects",
"Henry Draper Catalogue objects",
"Hipparcos objects",
"Objects with variable star designations",
"Pegasus (constellation)",
"Spectroscopic binaries",
"Supernovae",
"White dwarfs"
] |
IK Pegasi (or HR 8210) is a binary star system in the constellation Pegasus. It is just luminous enough to be seen with the unaided eye, at a distance of about 154 light years from the Solar System.
The primary (IK Pegasi A) is an A-type main-sequence star that displays minor pulsations in luminosity. It is categorized as a Delta Scuti variable star and it has a periodic cycle of luminosity variation that repeats itself about 22.9 times per day. Its companion (IK Pegasi B) is a massive white dwarf—a star that has evolved past the main sequence and is no longer generating energy through nuclear fusion. They orbit each other every 21.7 days with an average separation of about 31 million kilometres, or 19 million miles, or 0.21 astronomical units (AU). This is smaller than the orbit of Mercury around the Sun.
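As an illustrative cross-check (not part of the source), the quoted 0.21 AU separation follows from Kepler's third law in solar units, a³ = M·P², assuming component masses of roughly 1.65 and 1.15 solar masses as quoted later in the article:

```python
# Kepler's third law in solar units: a^3 = M_total * P^2,
# with a in AU, M_total in solar masses, and P in years.
# The component masses (~1.65 and ~1.15 solar masses) are assumptions
# taken from figures quoted elsewhere in this article.
P_years = 21.7 / 365.25              # 21.7-day orbital period in years
M_total = 1.65 + 1.15                # combined mass, solar masses
a_au = (M_total * P_years ** 2) ** (1 / 3)
a_million_km = a_au * 149.6          # 1 AU is about 149.6 million km
print(f"separation ≈ {a_au:.2f} AU ≈ {a_million_km:.0f} million km")
```

This reproduces the quoted 0.21 AU, and lands within rounding of the ~31 million km figure.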
IK Pegasi B is the nearest known supernova progenitor candidate. When the primary begins to evolve into a red giant, it is expected to grow to a radius where the white dwarf can accrete matter from the expanded gaseous envelope. When the white dwarf approaches the Chandrasekhar limit of 1.4 solar masses, it may explode as a Type Ia supernova.
## Observation
This star system was catalogued in the 1862 Bonner Durchmusterung ("Bonn astrometric Survey") as BD +18°4794B. It later appeared in Pickering's 1908 Harvard Revised Photometry Catalogue as HR 8210. The designation "IK Pegasi" follows the expanded form of the variable star nomenclature introduced by Friedrich W. Argelander.
Examination of the spectrographic features of this star showed the characteristic absorption line shift of a binary star system. This shift is created as the orbit carries the member stars toward and then away from the observer, producing a Doppler shift in the wavelength of the line features. The measurement of this shift allows astronomers to determine the relative orbital velocity of at least one of the stars even though they are unable to resolve the individual components.
In 1927, the Canadian astronomer William E. Harper used this technique to measure the period of this single-line spectroscopic binary and determined it to be 21.724 days. He also initially estimated the orbital eccentricity as 0.027. (Later estimates gave an eccentricity of essentially zero, which is the value for a circular orbit.) The velocity amplitude was measured as 41.5 km/s, which is the maximum velocity of the primary component along the line of sight to the Solar System.
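For a sense of scale (an illustration, not from the source), the 41.5 km/s velocity amplitude corresponds to only a small wavelength shift; here it is applied to the Ca II K line at 393.3 nm, mentioned later in the article:

```python
# Non-relativistic Doppler shift: delta_lambda = lambda_rest * v / c.
# The choice of the Ca II K line as the reference line is illustrative.
c_km_s = 299_792.458        # speed of light, km/s
v_km_s = 41.5               # measured velocity amplitude
lambda_rest_nm = 393.3      # Ca II K line rest wavelength
delta_lambda_nm = lambda_rest_nm * v_km_s / c_km_s
print(f"shift ≈ {delta_lambda_nm:.4f} nm")
```

A shift of roughly 0.05 nm is far too small to see visually but is straightforward to measure spectrographically.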
The distance to the IK Pegasi system can be measured directly by observing the tiny parallax shifts of this system (against the more distant stellar background) as the Earth orbits around the Sun. This shift was measured to high precision by the Hipparcos spacecraft, yielding a distance estimate of 150 light years (with an accuracy of ±5 light years). The same spacecraft also measured the proper motion of this system. This is the small angular motion of IK Pegasi across the sky because of its motion through space.
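The underlying relation is simply d (parsecs) = 1 / p (arcseconds). In this sketch the ~21.7 milliarcsecond parallax is an assumption back-derived from the quoted ~150 light-year distance, not a value read from the Hipparcos catalogue:

```python
# Parallax-to-distance conversion: d [pc] = 1 / p [arcsec].
# The parallax value below is back-derived from the quoted ~150 ly
# distance for illustration; it is not the catalogued Hipparcos figure.
parallax_mas = 21.7
d_pc = 1.0 / (parallax_mas / 1000.0)     # distance in parsecs
d_ly = d_pc * 3.2616                     # light years per parsec
print(f"{d_pc:.1f} pc ≈ {d_ly:.0f} ly")
```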
The combination of the distance and proper motion of this system can be used to compute the transverse velocity of IK Pegasi as 16.9 km/s. The third component, the heliocentric radial velocity, can be measured by the average red-shift (or blue-shift) of the stellar spectrum. The General Catalogue of Stellar Radial Velocities lists a radial velocity of −11.4 km/s for this system. The combination of these two motions gives a space velocity of 20.4 km/s relative to the Sun.
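The two quoted components combine as perpendicular vectors, which reproduces the stated figure:

```python
import math

# Space velocity from its transverse and radial components (both quoted
# above); the sign of the radial velocity does not affect the magnitude.
v_transverse = 16.9     # km/s, from distance and proper motion
v_radial = 11.4         # km/s, from the spectral shift
v_space = math.hypot(v_transverse, v_radial)
print(f"space velocity ≈ {v_space:.1f} km/s")
```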
An attempt was made to photograph the individual components of this binary using the Hubble Space Telescope, but the stars proved too close to resolve. Recent measurements with the Extreme Ultraviolet Explorer space telescope gave a more accurate orbital period of 21.72168 ± 0.00009 days. The inclination of this system's orbital plane is believed to be nearly edge-on (90°) as seen from the Earth. If so, it may be possible to observe an eclipse.
## IK Pegasi A
The Hertzsprung–Russell diagram (HR diagram) is a plot of luminosity versus a color index for a set of stars. IK Pegasi A is currently a main sequence star—a term that is used to describe a nearly linear grouping of core hydrogen-fusing stars based on their position on the HR diagram. However, IK Pegasi A lies in a narrow, nearly vertical band of the HR diagram that is known as the instability strip. Stars in this band oscillate in a coherent manner, resulting in periodic pulsations in the star's luminosity.
The pulsations result from a process called the κ-mechanism. A part of the star's outer atmosphere becomes optically thick due to partial ionization of certain elements. When these atoms lose an electron, the likelihood that they will absorb energy increases. This results in an increase in temperature that causes the atmosphere to expand. The inflated atmosphere becomes less ionized and loses energy, causing it to cool and shrink back down again. The result of this cycle is a periodic pulsation of the atmosphere and a matching variation of the luminosity.
Stars within the portion of the instability strip that crosses the main sequence are called Delta Scuti variables. These are named after the prototypical star for such variables: Delta Scuti. Delta Scuti variables typically range from spectral class A2 to F8, and a stellar luminosity class of III (giants) to V (main sequence stars). They are short-period variables that have a regular pulsation rate between 0.025 and 0.25 days. Delta Scuti stars have an abundance of elements similar to the Sun's (see Population I stars) and masses between 1.5 and 2.5 times that of the Sun. The pulsation rate of IK Pegasi A has been measured at 22.9 cycles per day, or once every 0.044 days.
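Converting the quoted pulsation rate into a period confirms both the 0.044-day figure and that IK Pegasi A falls inside the Delta Scuti period range:

```python
# The pulsation period is the reciprocal of the pulsation rate.
cycles_per_day = 22.9
period_days = 1.0 / cycles_per_day
period_minutes = period_days * 24 * 60
print(f"period ≈ {period_days:.3f} days ≈ {period_minutes:.0f} minutes")
assert 0.025 <= period_days <= 0.25   # within the Delta Scuti range
```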
Astronomers define the metallicity of a star as the abundance of chemical elements that have a higher atomic number than helium. This is measured by a spectroscopic analysis of the atmosphere, followed by a comparison with the results expected from computed stellar models. In the case of IK Pegasi A, the estimated metal abundance is [M/H] = +0.07 ± 0.20. This notation gives the logarithm of the ratio of metal elements (M) to hydrogen (H), minus the logarithm of the Sun's metal ratio. (Thus if the star matches the metal abundance of the Sun, this value will be zero.) A logarithmic value of 0.07 is equivalent to an actual metallicity ratio of 1.17, so the star is about 17% richer in metallic elements than the Sun. However, the margin of error for this result is relatively large.
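Unpacking the logarithmic notation directly:

```python
# [M/H] is a base-10 logarithm of the metal ratio relative to the Sun,
# so the linear enrichment factor is simply 10 ** [M/H].
m_h = 0.07
ratio = 10 ** m_h
print(f"metal abundance ≈ {ratio:.2f} × solar")   # i.e. ~17% richer
```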
The spectra of A-class stars such as IK Pegasi A show strong Balmer lines of hydrogen along with absorption lines of ionized metals, including the K line of ionized calcium (Ca II) at a wavelength of 393.3 nm. The spectrum of IK Pegasi A is classified as marginal Am (or "Am:"), which means it displays the characteristics of a spectral class A but is marginally metallic-lined. That is, this star's atmosphere displays slightly (but anomalously) higher than normal absorption line strengths for metallic isotopes. Stars of spectral type Am are often members of close binaries with a companion of about the same mass, as is the case for IK Pegasi.
Spectral class-A stars are hotter and more massive than the Sun, but in consequence their life span on the main sequence is correspondingly shorter. For a star with a mass similar to IK Pegasi A (1.65 solar masses), the expected lifetime on the main sequence is 2–3 × 10<sup>9</sup> years, which is about half the current age of the Sun.
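A rough way to reproduce this figure is the common rule-of-thumb lifetime scaling t ≈ 10 Gyr × (M/M☉)⁻²·⁵; this is an approximation, not the detailed stellar model behind the quoted value:

```python
# Rule-of-thumb main-sequence lifetime, scaled from the Sun's ~10 Gyr:
#   t ≈ 10 * (M / M_sun) ** -2.5 gigayears.
# An approximation only, not the model behind the article's figure.
mass_solar = 1.65                     # IK Pegasi A's mass, solar masses
t_gyr = 10.0 * mass_solar ** -2.5
print(f"lifetime ≈ {t_gyr:.1f} Gyr")  # falls within the quoted 2-3 Gyr
```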
In terms of mass, the relatively young Altair is the nearest star to the Sun that is a stellar analogue of component A—it has an estimated mass of 1.7 times that of the Sun. The binary system as a whole has some similarities to the nearby system of Sirius, which has a class-A primary and a white dwarf companion. However, Sirius A is more massive than IK Pegasi A and the orbit of its companion is much larger, with a semimajor axis of 20 AU.
## IK Pegasi B
The companion star is a dense white dwarf star. This category of stellar object has reached the end of its evolutionary life span and is no longer generating energy through nuclear fusion. Instead, under normal circumstances, a white dwarf will steadily radiate away its excess energy, mainly stored heat, growing cooler and dimmer over the course of many billions of years.
### Evolution
Nearly all small and intermediate-mass stars (below about 11 solar masses) will end up as white dwarfs once they have exhausted their supply of thermonuclear fuel. Such stars spend most of their energy-producing life span as a main-sequence star. The time that a star spends on the main sequence depends primarily on its mass, with the lifespan decreasing with increasing mass. Thus, for IK Pegasi B to have become a white dwarf before component A, it must once have been more massive than component A. In fact, the progenitor of IK Pegasi B is thought to have had a mass of at least 6 times that of the Sun.
As the hydrogen fuel at the core of the progenitor of IK Pegasi B was consumed, it evolved into a red giant. The inner core contracted until hydrogen burning commenced in a shell surrounding the helium core. To compensate for the temperature increase, the outer envelope expanded to many times the radius it possessed as a main sequence star. When the core reached a temperature and density where helium could start to undergo fusion this star contracted and became what is termed a horizontal branch star. That is, it belonged to a group of stars that fall upon a roughly horizontal line on the H-R diagram. The fusion of helium formed an inert core of carbon and oxygen. When helium was exhausted in the core a helium-burning shell formed in addition to the hydrogen-burning one and the star moved to what astronomers term the asymptotic giant branch, or AGB. (This is a track leading to the upper-right corner of the H-R diagram.) If the star had sufficient mass, in time carbon fusion could begin in the core, producing oxygen, neon and magnesium.
The outer envelope of a red giant or AGB star can expand to several hundred times the radius of the Sun, reaching a radius of about 5 × 10<sup>8</sup> km (3 AU) in the case of the pulsating AGB star Mira. This is well beyond the current average separation between the two stars in IK Pegasi, so during this period the two stars shared a common envelope. As a result, the outer atmosphere of IK Pegasi A may have received an isotope enhancement.
Some time after an inert oxygen-carbon (or oxygen-magnesium-neon) core formed, thermonuclear fusion began to occur along two shells concentric with the core region; hydrogen was burned along the outermost shell, while helium fusion took place around the inert core. However, this double-shell phase is unstable, so it produced thermal pulses that caused large-scale mass ejections from the star's outer envelope. This ejected material formed an immense cloud of material called a planetary nebula. All but a small fraction of the hydrogen envelope was driven away from the star, leaving behind a white dwarf remnant composed primarily of the inert core.
### Composition and structure
The interior of IK Pegasi B may be composed wholly of carbon and oxygen; alternatively, if its progenitor underwent carbon burning, it may have a core of oxygen and neon, surrounded by a mantle enriched with carbon and oxygen. In either case, the exterior of IK Pegasi B is covered by an atmosphere of almost pure hydrogen, which gives this star its stellar classification of DA. Due to its higher atomic mass, any helium in the envelope will have sunk beneath the hydrogen layer. The entire mass of the star is supported by electron degeneracy pressure—a quantum mechanical effect that limits the amount of matter that can be squeezed into a given volume.
With its estimated mass, IK Pegasi B is considered a high-mass white dwarf. Although its radius has not been observed directly, it can be estimated from known theoretical relationships between the mass and radius of white dwarfs, giving a value of about 0.60% of the Sun's radius. (A different source gives a value of 0.72%, so there remains some uncertainty in this result.) Thus this star packs a mass greater than the Sun's into a volume roughly the size of the Earth, an indication of the object's extreme density.
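The quoted radius makes the mean density easy to estimate. The sketch below assumes an illustrative mass of 1.15 M☉ (the exact figure is not given here) together with the 0.60% solar-radius value from above:

```python
import math

M_SUN_G = 1.989e33    # solar mass, grams
R_SUN_CM = 6.957e10   # solar radius, centimetres

mass = 1.15 * M_SUN_G        # assumed illustrative mass, not a sourced figure
radius = 0.0060 * R_SUN_CM   # 0.60% of the Sun's radius, as quoted above

volume = (4.0 / 3.0) * math.pi * radius ** 3
density = mass / volume      # g/cm^3; millions of times denser than water
print(f"radius ≈ {radius / 1e5:,.0f} km, mean density ≈ {density:,.0f} g/cm³")
```

The radius works out to roughly 4,000 km, comparable to the Earth's, while the mean density is of order 10<sup>6</sup>–10<sup>7</sup> g/cm³.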
The massive, compact nature of a white dwarf produces a strong surface gravity. Astronomers denote this value by the decimal logarithm of the gravitational acceleration in cgs units, or log g. For IK Pegasi B, log g is 8.95; by comparison, log g for the Earth is 2.99. Thus the surface gravity on IK Pegasi B is over 900,000 times that on the Earth.
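Since log g is a decimal logarithm, the comparison with Earth reduces to subtracting exponents; this quick check (not taken from the source) reproduces the "over 900,000 times" figure:

```python
# log g is the base-10 logarithm of surface gravity in cm/s^2, so the
# ratio of two gravities is 10 raised to the difference of their log g values.
log_g_wd = 8.95     # IK Pegasi B (quoted above)
log_g_earth = 2.99  # Earth (quoted above)

ratio = 10 ** (log_g_wd - log_g_earth)  # just over 9 × 10^5
print(f"IK Pegasi B's surface gravity is ~{ratio:,.0f} times Earth's")
```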
The effective surface temperature of IK Pegasi B is estimated to be about 35,500 ± 1,500 K, making it a strong source of ultraviolet radiation. Under normal conditions this white dwarf would continue to cool for more than a billion years, while its radius would remain essentially unchanged.
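Why such a temperature makes the star a strong ultraviolet source can be checked with Wien's displacement law; this is a standard blackbody relation, not a calculation from the source:

```python
# Wien's displacement law: a blackbody spectrum peaks at
# lambda_peak = b / T, with b ≈ 2.898e6 nm·K.
WIEN_B_NM_K = 2.8978e6
t_eff = 35_500.0  # effective temperature quoted above, in kelvin

peak_nm = WIEN_B_NM_K / t_eff
print(f"peak emission ≈ {peak_nm:.0f} nm")  # ~82 nm, deep in the ultraviolet
```

Visible light spans roughly 380–750 nm, so a peak near 82 nm places most of the star's emission well into the ultraviolet.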
## Future evolution
In a 1993 paper, David Wonnacott, Barry J. Kellett and David J. Stickland identified this system as a candidate to evolve into a Type Ia supernova or a cataclysmic variable. At a distance of 150 light years, this makes it the nearest known candidate supernova progenitor to the Earth. However, in the time it will take for the system to evolve to a state where a supernova could occur, it will have moved a considerable distance from Earth but may still pose a threat.
At some point in the future, IK Pegasi A will consume the hydrogen fuel at its core and start to evolve away from the main sequence to form a red giant. The envelope of a red giant can grow to significant dimensions, extending up to a hundred times its previous radius (or larger). Once IK Pegasi A expands to the point where its outer envelope overflows the Roche lobe of its companion, a gaseous accretion disk will form around the white dwarf. This gas, composed primarily of hydrogen and helium, will then accrete onto the surface of the companion. This mass transfer between the stars will also cause their mutual orbit to shrink.
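The size of a star's Roche lobe is commonly estimated with Eggleton's 1983 approximation. The formula itself is standard, but the mass ratio used in the example is an assumed illustrative value, not a figure from the source:

```python
import math

def roche_lobe_fraction(q: float) -> float:
    """Eggleton's approximation for r_L / a, the Roche lobe radius as a
    fraction of the orbital separation, where q = M_donor / M_accretor."""
    q23 = q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + math.log(1.0 + q ** (1.0 / 3.0)))

# Equal masses: the lobe spans about 38% of the separation.
print(f"q = 1.0 -> r_L/a ≈ {roche_lobe_fraction(1.0):.3f}")
# An assumed donor-heavier ratio for illustration:
print(f"q = 1.4 -> r_L/a ≈ {roche_lobe_fraction(1.4):.3f}")
```

Once the giant's envelope grows past this fraction of the separation, material spills through the inner Lagrangian point and feeds the accretion disk described above.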
On the surface of the white dwarf, the accreted gas will become compressed and heated. At some point the accumulated gas can reach the conditions necessary for hydrogen fusion to occur, producing a runaway reaction that will drive a portion of the gas from the surface. This would result in a (recurrent) nova explosion—a cataclysmic variable star—and the luminosity of the white dwarf would rapidly increase by several magnitudes for a period of several days or months. An example of such a star system is RS Ophiuchi, a binary system consisting of a red giant and a white dwarf companion. RS Ophiuchi has flared into a (recurrent) nova on at least six occasions, each time accreting the critical mass of hydrogen needed to produce a runaway explosion.
It is possible that IK Pegasi B will follow a similar pattern. For the white dwarf to gain mass, however, only a portion of the accreted gas can be ejected in each outburst, so that with each cycle it steadily increases in mass. Thus, even if it behaves as a recurrent nova, IK Pegasi B could continue to accumulate a growing envelope.
An alternative model that allows the white dwarf to steadily accumulate mass without erupting as a nova is the close-binary supersoft X-ray source (CBSS). In this scenario, the rate of mass transfer onto the white dwarf is such that a steady fusion burn can be maintained on its surface, the arriving hydrogen being consumed by thermonuclear fusion to produce helium. Such super-soft sources consist of high-mass white dwarfs with very high surface temperatures (0.5 × 10<sup>6</sup> to 1 × 10<sup>6</sup> K).
Should the white dwarf's mass approach the Chandrasekhar limit of about 1.4 solar masses (M☉), it will no longer be supported by electron degeneracy pressure and will undergo a collapse. For a core primarily composed of oxygen, neon and magnesium, the collapsing white dwarf is likely to form a neutron star, and only a fraction of the star's mass will be ejected in the process. If the core is instead made of carbon and oxygen, however, increasing pressure and temperature will initiate carbon fusion in the center before the Chandrasekhar limit is reached. The dramatic result is a runaway nuclear fusion reaction that consumes a substantial fraction of the star within a short time, sufficient to unbind the star in a cataclysmic Type Ia supernova explosion.
Such a supernova event may pose some threat to life on the Earth. It is thought that the white dwarf, IK Pegasi B, is unlikely to detonate as a supernova for 1.9 billion years. As shown previously, the space velocity of this star relative to the Sun is 20.4 km/s (12.7 mi/s), equivalent to moving a distance of one light year every 14,700 years. After 5 million years, for example, this star will be separated from the Sun by more than 500 light years. A Type Ia supernova within a thousand parsecs (3,300 light-years) is thought to be able to affect the Earth, but it must be closer than about 10 parsecs (around thirty light-years) to cause major harm to the terrestrial biosphere.
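The "one light year every 14,700 years" figure follows directly from the quoted space velocity; the arithmetic check below uses standard conversion constants:

```python
KM_PER_LY = 9.4607e12    # kilometres in one light year
SEC_PER_YEAR = 3.1557e7  # seconds in a Julian year

v_kms = 20.4  # space velocity relative to the Sun, quoted above

years_per_ly = KM_PER_LY / v_kms / SEC_PER_YEAR
print(f"≈ {years_per_ly:,.0f} years to cover one light year")  # ~14,700
```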
Following a supernova explosion, the remnant of the donor star (IK Pegasi A) would continue with the final velocity it possessed as a member of a close orbiting binary system. The resulting relative velocity could be as high as 100–200 km/s (62–124 mi/s), which would place it among the high-velocity members of the galaxy. The companion will also have lost some mass during the explosion, and its presence may create a gap in the expanding debris. From that point forward it will evolve into a single white dwarf star. The supernova explosion will create a remnant of expanding material that will eventually merge with the surrounding interstellar medium.
## See also
- Lists of stars
# SS Politician

*Cargo ship that operated between 1923 and 1941*
SS Politician was a cargo ship that ran aground off the coast of the Hebridean island of Eriskay in 1941. Her cargo included 22,000 cases of scotch whisky and £3 million worth of Jamaican banknotes. Much of the whisky was recovered by islanders from across the Hebrides, contrary to marine salvage laws. Because no duty had been paid on the whisky, members of HM Customs and Excise pursued and prosecuted those who had removed the cargo.
Politician was completed in 1923 under the name London Merchant. She was a general cargo ship that traded between Britain, the United States and Canada, and up and down the west coast of the US. In 1924—during the years of American prohibition—Oregon's state prohibition commissioner seized her cargo of whisky despite its having been approved and sealed by US federal authorities. After the British Embassy in Washington complained to the US government, the whisky was released back to the ship. During the Second World War Politician participated in the Atlantic convoys between the UK and US. In February 1941 she was on her way to the north of Scotland, where she ran aground while attempting to rendezvous with a convoy. No-one was badly injured or killed in the accident.
The local islanders repeatedly visited the wreck of Politician to unload whisky, even though it lay in a hold filled with marine engine oil and seawater. Customs men undertook raids, arresting many and seizing the boats of those suspected of taking part. The excise authorities pushed for charges under the punitive customs legislation, but those arrested were instead charged with theft. Many were found not guilty or not proven, several were fined, and 19 were incarcerated at Inverness Prison for terms ranging between 20 days and two months. Salvors recovered as much of the cargo as they could, and the whisky they raised was shipped back to its bonded warehouses; this too was looted during the journey. Two salvage crews removed much of the cargo, and the second crew raised the wreck off the seabed. Part of the ship's hold, and her stern, were cut away and sank to the bottom of Eriskay Sound; the remainder of the hold was destroyed with gelignite to prevent further looting.
A few of the Jamaican banknotes from Politician were presented at banks in Britain, Jamaica and other countries. As a result, in 1952 the blue ten-shilling notes were withdrawn and replaced with notes of the same design, printed in purple. Bottles of whisky have been raised from the seabed by divers, and some have been found in hiding places on Eriskay; these have been auctioned. The story of the wreck and looting was the basis for the book Whisky Galore; an adaptation was released as a film in 1949 and a remake in 2016.
## 1920–1939
The cargo ship SS Politician was built by the Furness Shipbuilding Company between 19 September 1920—when she was laid down—and 1923 at the Haverton Hill shipyard, County Durham, England. She was launched in November 1921 as SS London Merchant, and was completed in May 1923. London Merchant was one of six sister ships built at the yard; the others were London Commerce, London Importer, London Mariner, London Shipper and Manchester Regiment. London Merchant's gross registered tonnage was 7,899, she was 450 ft (140 m) long and 58 ft (17.7 m) at the beam; her depth of hold was 19 ft (5.8 m) and she could achieve 14 knots (26 km/h; 16 mph). While being fitted out, she was hit by another ship and damaged.
After London Merchant was repaired she began trading across the Atlantic; her owners, the Furness Withy company, advertised her cargo services in The Manchester Guardian, shipping from Manchester to Los Angeles, Seattle and Vancouver. In December 1924—during Prohibition in the United States—she docked in Portland, Oregon, with whisky as part of her cargo; this had been approved and sealed by the US federal authorities. George Cleaver, Oregon's state prohibition commissioner, ignored the approval, broke the seal on the cargo and seized the whisky. The ship's master refused to leave without the whisky and the British Embassy in Washington complained to the federal authorities, who intervened and ordered the whisky released back to the ship. Cleaver was ordered to write an apology to the captain and the Furness Withy company. On Christmas Eve 1927 she was involved in another collision and was repaired. She traded on the US eastern seaboard until 1930 when, with the onset of the Great Depression, world trade dropped and she was tied up in the River Blackwater, Essex, along with 60 other vessels.
In May 1935 London Merchant was purchased by the Charente Steamship Company, part of the T & J Harrison shipping line. Charente renamed her Politician, and used her on cargo routes between Britain and South Africa; her crew soon nicknamed her Polly. At the outbreak of the Second World War Politician came under Admiralty orders and was involved in the Atlantic convoys between the UK and US.
## Early February – 12 March 1941
In early February 1941 SS Politician left the Liverpool docks to travel to the north of Scotland, where she was to assemble with other ships to be convoyed across the Atlantic to the US and Caribbean. Captain Beaconsfield Worthington was the ship's master, overseeing a crew of 51. She carried a mixed cargo that included cotton, machetes, sweets, cutlery, bicycles, cigarettes, pineapple chunks and biscuits. In the fifth hold there were eight crates of Jamaican banknotes, comprising ten-shilling and one- and five-pound notes, to the value of £3 million; alongside the notes were 22,000 cases (264,000 bottles) of Scotch whisky of various brands. The whisky had been taken from bonded warehouses in Leith and Glasgow that had been damaged by German bombing, and was being shipped to the US to raise hard currency for the war effort; as an export product, none of the bottles bore an excise stamp.
After leaving the River Mersey, Politician travelled through the Irish Sea, made her way past the Isle of Man, through the North Channel that separates Britain and Ireland, past Islay, then to the west of the Skerryvore lighthouse and into the Sea of the Hebrides. In the vicinity of Eriskay, Politician ran aground on rocks at about 7:40 am on 5 February in bad weather and poor visibility. Sources differ on where Politician was grounded. The Canmore database run by Historic Environment Scotland puts the event halfway along the eastern coast of Eriskay; Roger Hutchinson's book on the story of the ship states it was on the rocks of Ru Melvick, a rock outcrop at the southernmost point of South Uist; the Merseyside Maritime Museum considers it was on "submerged rocks on the northern side of the island of Eriskay"; and Arthur Swinson's 1963 history places it just north of Calvay, a small uninhabited island at the north end of Eriskay. Eriskay is 703 hectares (1,740 acres); the population recorded on the island in the 1931 census was 420.
Worthington attempted to free Politician from the rocks, but she would not move. The rocks had breached the hull and broken the vessel's propeller shaft, and water began to flood the engine room and stokehold. Worthington was concerned the heavy waves would soon break up the ship, so he ordered the crew to abandon ship. The radio operator sent two SOS messages: the first was "Abandoning ship. Making Water. Engine-room flooded"; the second, sent at 8:22 am, stated the vessel was positioned "ashore south of Barra island, pounding heavily". One lifeboat was launched with 26 men on board; it was washed onto rocks close inshore at Rudha Dubh, an outcrop on South Uist. All survived, although one man was injured on the rocks. Lloyd's, the lifeboat from Barra, spent several hours searching the area south of the island in heavy mist before a report came in of Politician's siren, which had been heard north of Eriskay. Lloyd's travelled to the area, by which time fishermen from Eriskay had boarded Politician. At Worthington's request they sailed to Rudha Dubh, collected those who had landed there, and returned to the ship. The lifeboat reached Politician at about 4:00 pm, when Politician's crew boarded the lifeboat and were taken to Eriskay. They spent the night there, billeted in small groups in the homes of the islanders; while staying on the island, the sailors told the islanders that Politician's cargo included whisky.
The following morning, 6 February, Worthington and his first mate, R. A. Swaine, were taken back to Politician to view the damage and see if there was any chance of salvaging her. He found that someone had been on board overnight, as personal possessions of the crew had been taken. The vessel was in the same situation as the previous day, so they signalled the situation to T&J Harrison. Harrison's asked the Liverpool & Glasgow Salvage Association to assess Politician's status. The chief salvage officer, Commander Kay, arrived at the stricken vessel on 8 February, and reported back that a salvage attempt was possible. The signal stated that there was 5 feet (1.5 m) of water in the main hold, 23 feet (10 m) in the engine room and 11 feet (3.4 m) in number five hold. Within days the salvage ship Ranger had arrived and 500 long tons (510 t) of cargo were removed. As hold five was below the surface, and full of a mixture of seawater and oil, Kay did not attempt to salvage its contents.
Local customs officers considered that some whisky had already made its way onto the islands, and asked Kay to put a guard on the ship at night-time. He refused, pointing out that with the rough seas it was dangerous for the man left behind, and it would be a waste of his time. There was evidence that islanders had been aboard during the nights: the crew's bonded stores—the food, drink and tobacco for consumption during the voyage—were all looted on 19 February. Some of Kay's salvors had managed to obtain whisky from the hold. When they returned to Glasgow on one trip, a search by customs men found several bottles, which they seized. On their second trip, the salvors dropped the whisky before entering port and had it picked up later. On 10 March representatives of HM Customs and Excise secured the hold with an excise seal to show no duty had been paid on the contents. On 12 March 1941 Kay and the salvage crew left the wreck of Politician.
## 12 March – early April 1941
In 1941 all wrecks came under the protection of the Merchant Shipping Act 1894. Part IX, paragraph 536 of the Act covered "Interfering with wrecked vessel or wreck", and stated that:
> 1.) A person shall not without the leave of the master board any vessel which is wrecked, stranded, or in distress ...
> 2.) A person shall not: ...
> : (c) wrongfully carry away or remove any part of a vessel stranded or in danger of being stranded, or otherwise in distress, on or near any coast or tidal water, or any part of the cargo or apparel thereof, or any wreck.
The islanders took a different view of salvage and considered that they did not "steal" any cargo from local wrecks, but instead talked of "saving" or "rescuing" it from the sea. They knew Politician had been abandoned by the owners and the salvage crews; one islander later told Swinson "when the salvors quit a ship—she's ours". Once the salvage crew had left the Politician, islanders from across the Hebrides, as well as boats from Scotland's west coast, engaged in what Hutchinson calls the "wholesale rescuing" of the whisky. They were aided in navigating round the wreck by Angus John Campbell, a local man who had served as boatswain on Politician between the wars. Wartime rationing had led to shortages of the spirit, and what supplies were made available were increasingly expensive because of rising duty. For several nights, the islanders worked on hooking the crates out of the oil-and-seawater-filled hold; every night between 20 and 50 men were on the wreck working to remove the whisky. As the contents being raised were covered in oil, the men's clothes were soon soaked in it, and many took to wearing their wives' dresses over their own clothes.
Some of the men made only a few trips to Politician to get what they wanted—Campbell obtained 300 cases; others picked up between 20 and 80 cases a night, and one man with a larger boat is thought to have recovered more than 1,000 cases. When the men returned to their respective islands each night, they hid their spoils in a variety of places, in case the Excise men raided. Rabbit holes, piles of peat and creels placed under the sea and behind panels in homes were all used. Burying caches of whisky was also popular, but brought about a second problem; islanders who had not visited the wreck would watch where it was buried and dig it up as soon as the men left the burial site. One man put 46 cases in a small cave on an island off Barra as a reserve for when he ran out; when he returned only four were left.
News of the islanders' removal of whisky from Politician was known early on. The local Customs and Excise officer, Charles McColl, commandeered a local boat on 15 March and, with the aid of Donald MacKenzie, a local constable, went out into Eriskay Sound—the stretch of water between Eriskay and South Uist—and intercepted two boats laden with cases of whisky. On landing, McColl walked along the coast and intercepted a third boat unloading whisky. The details of 18 men were taken down from the day's efforts. Two days later McColl and MacKenzie conducted searches of the crofts of those they had intercepted and seized thousands of items from Politician, but no whisky. Surmising that the whisky had been well hidden, he expanded his search and, on his own, searched other local crofts, but still found no whisky. His initial searches lasted until 22 March, when he thought the sea was too rough for the looters to visit the wreck, although they still did.
McColl never visited the wreck at night time. When the weather cleared on 5 April, he tried to commandeer the boat again but it was unavailable. Instead he patrolled the coast of South Uist and apprehended one boat when it landed. He began searching the crofts of South Uist, but the residents had learned of his raids on Eriskay, and hidden their bounty carefully; there were stories of the police who assisted in the searches turning a blind eye where they could. No seizures were made on Barra, but local police heard of large-scale selling of the whisky on the island and arrested five men, whom they charged with theft.
## Early April – August 1941
On 9 April a second salvage boat arrived at Eriskay. While Kay and the Liverpool & Glasgow Salvage Association had been retained to salvage what they could, the scrap metal was of no concern to them. The second salvage company was British Iron & Steel Corporation (Salvage) Ltd (BISC). Their remit was to check Kay's conclusion about the inability to refloat Politician. If it could not be refloated, then it was planned to tow part of the superstructure to be reused. If that was still not possible, stripping the vessel of as much metal as possible was sometimes financially viable. The wreck of Thalia was nearby and known to contain iron ore, which made the salvage more lucrative for them.
After two visits to Politician in early April, BISC considered that it was possible that the wreck could be refloated. McColl had visited the ship with the salvors, and was angered by the state of the vessel, which showed signs of having been extensively looted. He wrote to Ivan Gledhill—the local Customs surveyor and his direct superior—and told him "I should imagine that 300 cases have gone out of her. That, I believe, is a conservative estimate." He also told Gledhill that he intended to step up his search efforts, and ensure that as many of the malefactors from Eriskay and South Uist as possible were sent to prison for as long as possible. Gledhill agreed with the strategy. He accompanied McColl as often as he could, although his territory was large and his workload proportionally higher, so the visits were not as frequent as he would have liked.
With the arrival of Captain Edward Lauretson and the salvage ship Assistance, BISC returned to Politician on 21 April. The salvage operation they conducted took several months, and involved divers descending into hold five to clear out the cargo. They removed 13,500 cases and three casks of whisky from the wreck, as well as stout and sherry. Several eyewitnesses later said the salvors helped themselves to whisky whenever they wanted, and would often return to their billets on Eriskay and South Uist with bottles to share with the islanders. A report from the salvors to the Salvage Association passed information that some of the Jamaican banknotes had been seen on Benbecula—25 miles (40 km) from Politician. The organisation that provided the administration of British Crown colonies for the government, which included providing banknotes, was the Crown Agents; it was they who had arranged for the printing of the money by De La Rue, and who organised its shipping to the Caribbean. On hearing the news of the loss of the money, the Crown Agents thought that:
> The local police service is no doubt on a very small scale but the nature of the place and its surroundings should tend to reduce the chances of serious loss through the notes being presented and paid.
Children on the islands were found playing with the notes, and within two months water-stained Jamaican notes were being exchanged in banks in Liverpool.
The first court cases took place on 26 April; they involved five men arrested on Barra three days earlier when the police saw them unloading whisky and barrels of oil. Three of the men were fined £3 each; the other two had to pay £5. McColl and Gledhill applied pressure on the legal authorities, directly and through their superiors. McColl argued that the looters should be tried under the terms of the specialist Customs Consolidation Act 1876 or the Merchant Shipping Act 1894, both of which carried more punitive punishments than ordinary legislation for theft. McColl and Gledhill wrote reports to their superiors that accused the looters of vandalism on Politician and widespread black-marketeering of the stolen whisky, and claimed the local police were being bribed to ignore the situation. The journalists Adrian Turpin and Peter Day write that the outrage of the customs men should be taken "with a pinch of salt"; the organisation was in the midst of providing evidence for later prosecutions and was not neutral.
McColl continued with his attempts to find the whisky. On 5 June he and Gledhill persuaded Edward Bootham White, the Customs officer based on Harris, to assist them, and two police sergeants from the mainland were also provided. On 6 and 7 June they conducted intensive searches of crofts and farms on Eriskay and Uist. Hutchinson relates that the searches destroyed peat stacks, forced entry into people's homes and disrupted the innocent and guilty alike, "an unnecessary, disproportionately harsh harassment". Sources differ over the success of the raids: Swinson quotes Gledhill, who states that "wherever we went, we got tons of the stuff ... [At Lochboisdale, South Uist], it filled the cells, the police garage and the policeman's house. A lot of it had to be stacked outside". Hutchinson writes that the raids were "spectacularly unsuccessful", only two cases of whisky being found. Hutchinson also quotes Gledhill, who says "The ineffective result was due to the fact that on the first day the local inspector of police refused to continue the search after lunchtime". The police did not work on the Sunday (the 8th), and those on Eriskay spent the day hiding or moving goods to better locations, waiting for a resumption of the raids the following week. A storm blew up on Monday the 9th, so the mail boat could not carry McColl and his colleagues across, and by Tuesday the policemen had returned to the mainland to resume their normal duties.
Between 10 and 13 June the trials took place of 32 men arrested for the theft from Politician. McColl gave evidence and stated that the men had stolen whisky from a vessel that was still seaworthy; the sheriff-substitute hearing the case accepted McColl's statement. One man was found not guilty, nine others were not proven—the Scottish legal verdict to acquit an individual but not declare them innocent—three were fined and 19 were incarcerated at Inverness Prison for terms ranging between 20 days and two months. McColl still thought the sentences were too lenient, and wrote to the interim procurator fiscal to complain; he also wrote to the Customs commissioners and said:
> In my opinion these few small sentences are quite inadequate to act as a general detriment to the population of these islands, who in my opinion will probably seize their next opportunity to further looting and damage.
The night the prison sentences were handed down, a hole was made in the roof of the shared garage where McColl's car was parked; paraffin was poured in and set alight. In the event, McColl's car was only damaged, but another was destroyed. According to the Customs men, they were subjected to threats of violence throughout their investigation; Bootham White reported to the commissioners in London that McColl should not be active in any further searches because of "threats and warning of bodily injury".
On some of their raids, Gledhill and McColl seized boats identified as having visited Politician, either through reports from informants or because the ship's fuel oil was found on board. Those that were not seized at the time were painted with an arrow for seizure later. By the time the court cases had been heard, the customs men had amassed a considerable number of vessels. Several islanders wrote to Gledhill asking for their boats to be returned, as the lobster fishing season was in progress and they were unable to work; one man pointed out that his sons had used the boat against his wishes, and as one son was in prison and the other fighting in North Africa, he wanted his boat back; one farmer, whose boat had been used by local boys to visit the wreck, needed his craft to tend 200 sheep and lambs grazing on a smaller island nearby. All the requests were turned down by Gledhill, who instructed McColl to continue seizing any craft he thought was involved.
## September 1941 – August 1942
In September 1941 the whisky that had been salvaged by BISC was shipped to the mainland and put into locked railway carriages on which the excise seal was placed. By the time the trains reached Kilmarnock on their way to the same bonded warehouses the cargo had left in January 1941, the customs seals had been broken, the doors unlocked and the cargo partly looted.
Relations between the police and Customs men became increasingly strained by late 1941, and Gledhill began to criticise the force in his reports back to London. He also wrote to William Fraser, the chief constable of Inverness-shire, to complain that customs were not being fully informed of all developments, nor of the total amount of whisky seized. Fraser became increasingly annoyed with the correspondence between himself and Gledhill, and between himself and the customs commissioners in London. One of his ongoing requests was for the removal of the whisky from Lochboisdale police station, where it still occupied considerable space. He made progress only when he threatened to raise the matter with the Secretary of State, and it was agreed to remove it along with the whisky that the salvors had raised from the wreck.
Gledhill continued to push for stringent measures against those still awaiting trial. A permanent procurator fiscal, Donald Macmillan, had replaced his temporary predecessor, and Gledhill wrote to him to try to have the remaining cases heard under customs legislation. Macmillan told him to establish what the customs commissioners wanted; by the end of October Gledhill had been told by his superiors not to press for the punitive charges but to allow charges of theft. He also asked to be kept informed of any further prosecutions involving four crofters who had been found with stashes of whisky on their land. Macmillan wrote back that two of the cases had been dismissed by his predecessor and the remaining two defendants had gone to sea; neither was prosecuted when they returned. The seized boats were eventually returned to their owners, but only after they had purchased them from the customs men.
The BISC salvors spent over four months preparing Politician for refloating. They removed extraneous weight, patched the underwater holes, pumped compressed air into the hold, and waited until the weather conditions and tides were right. On 22 September 1941 they finished preparations and the ship was lifted off the rocks. BISC's site agent, Percy Holden, wanted to tow the ship the seven miles (eleven kilometres) to Lochboisdale, where she could be beached to await the heavy tugs needed to tow her to the docks on the River Clyde, where she could be scrapped. The BISC's superintendent engineer on site refused to allow the towing to take place; he said that if there was bad weather on the route, or the sea was rough, then Politician could sink in deep water and never be recovered. The vessel was then towed to a point 500 yards (460 m) north of Calvay and beached on a sandbank; none of the men knew that the bank covered a rock. Politician settled, and broke her back, although no-one realised it until 25 October, when the heavy tugs came to move her to the mainland. All work on the vessel was halted over the winter months, to allow the poor weather to pass.
The salvage divers had reported that number five hold still contained "one stack of probably about 2,000 cases of spirits and, on the bottom of the hold, a very large accumulation of loose paper, carton cases and loose bottles, both broken and unbroken". McColl was concerned about the possibility of more thefts from the ship and requested permission from his superiors to have the hold demolished by explosives; in his request he lied about the remaining cargo, and stated there were 3,000 to 4,000 cases, and thousands of loose bottles. He was given permission to proceed, and on 6 August, 16 sticks of gelignite were used to destroy number five hold and its contents. Swinson described the act as "the ultimate in stupidity, waste and vandalism, symbolising a mental attitude beyond ... [the islanders'] comprehension". Angus John Campbell commented "Dynamiting whisky. You wouldn't think there'd be men in the world so crazy as that!"
There is no accurate figure for the number of bottles taken. McColl estimated that the islanders had taken about 2,000 cases (24,000 bottles). Swinson estimates 7,000 cases. Swinson bases his estimate on the interviews he made with islanders in the early 1960s; he spoke with men who had taken over 500 cases between them, and they were, Swinson records, only a few of the several hundred who visited the wreck.
Holden returned with his salvage team in March 1942 to cut the stern—including number five hold—from the rest of Politician. Once the waterlogged hold had been removed, the remainder of the ship rose from the sandbank, at which point she was towed to Lochboisdale and then on to Rothesay. Within two weeks the main part of the ship had been turned to scrap; number five hold remained on the floor of Eriskay Sound.
The salvors extracted £360,000 in Jamaican currency from number five hold and passed it to Gledhill. He sealed the money in boxes and sent it to the salvage agents via the local post office on South Uist. The notes were handed over to the Bank of England. Many had already been presented at banks for exchange. A Royal Air Force corporal changing Jamaican notes in Rothesay was arrested, but was acquitted after he proved he had recently returned from a posting in Jamaica; in November 1942 the foreman of the salvage operation was questioned by police: he had been giving away the notes as souvenirs. By 1958, 211,267 notes had been located; 2,329 more had been presented at banks in Ireland, Switzerland, Malta, the US and Jamaica, some of which had been paid into the banks by people unaware of the source of the money. About 76,400 banknotes remained lost.
## Legacy
The islanders involved in removing the whisky were resentful of those who had provided information to the customs officials. There was also bad feeling towards those who had sold the whisky they found; when interviewed by Swinson in the 1960s, islanders told him that most of those involved in looting the whisky either drank it, hid it for later, or gave it to friends and families. Opinions varied about those who had taken the hidden whisky caches of others. Some islanders thought it was not theirs to take in the first place, so it did not matter who took it the second time; one man told Swinson "it was all part of the fun". Another man said that he didn't mind customs searching for it—that was their job, after all—but "what I did mind were the people who hadn't the courage to board the steamer ... they would watch where we buried the stuff and unearth it later on". Those islanders who were prosecuted were angered by what Hutchinson describes as "the perversion of natural justice, by the stain put on their characters and not least by the fact that each of them, members of possibly the most peaceable and law-abiding community in Britain, now had a criminal record".
At the official inquiry into the sinking of Politician, Captain Worthington and First Mate Swain were cleared of all blame for her fate. Both returned to sea. Worthington captained SS Arica, which was sunk in November 1942 by U160; he survived the war and died in 1961. Swain commanded SS Custodian, another ship in the Harrison line, and survived the war. E. H. Mossman, Politician's chief engineer, sailed on SS Barrister, which ran aground on rocks off the coast of Ireland in December 1942. According to Hutchinson, Mossman "is reputed to have commented 'we've done it again'."
The writer Compton Mackenzie was a resident of Barra from 1933, and was aware of the events surrounding Politician. In 1947 he published a fictionalised humorous account under the title Whisky Galore; he set the story on two islands, Great Todday and Little Todday, and developed the theme of "the right of small communities to self-determination in the face of larger, frequently ignorant, interfering forces", according to the historian Gavin Wallace. The book sold several million copies and was reprinted several times. Two factual books deal with the events surrounding Politician: in 1963 Arthur Swinson published Scotch on the Rocks: The True Story Behind Whisky Galore, which contained a foreword by Mackenzie, and in 1990 Roger Hutchinson wrote Polly: The True Story Behind Whisky Galore.
In 1949 Mackenzie's novel became the source for a film of the same name produced by Ealing Studios; Mackenzie made a brief appearance as the captain of SS Cabinet Minister, the renamed ship that grounded itself on the rocks. The customs men were replaced with Captain Paul Waggett, an English officer of the Home Guard, who vainly seeks out the purloined whisky. The plot device of pitting a small group of British against a series of changes to the status quo from an external agent leads the British Film Institute to consider Whisky Galore!, along with other Ealing comedies, as "conservative, but 'mildly anarchic' daydreams, fantasies". A remake of the film was released in June 2016. In January 1991 the broadcaster Derek Cooper presented Distilling Whisky Galore!, an hour-long documentary on the Politician, the Ealing comedy film and attempts to salvage any remaining cargo.
Because of the loss of the Jamaican notes, and the number that were being cashed in banks, from 1 July 1952 the blue ten-shilling notes were no longer accepted as legal tender. They were replaced with notes of the same design, but printed in purple on a light orange background. As of 2019, one of the notes from the wreck hangs over the bar of the Am Politician, Eriskay's only pub, which was named after the SS Politician.
It was the practice of some Eriskay residents to hide their empty bottles from Politician on Eriskay's interior for fear of incriminating themselves. Many of these were filled with sand from the local beach and turned into lamp bases before being sold in Edinburgh; the provenance was particularly interesting to American tourists who had seen the Ealing film. Several full bottles of whisky were found on the island when locations had been forgotten by those who buried them; sand dunes that changed shape with the wind or a new thatch roof being installed often uncovered a hidden cache. In 1991 one man who moved to Eriskay found four bottles under the floor of the croft he had purchased; he then found two bottles buried in the ground outside.
Several bottles have been raised from the wreck by divers. In 1987 a diving expedition brought up eight bottles, described as being "in perfect condition", and in 1989 the Glasgow-based company SS Politician Plc was formed to raise £500,000 for a salvage operation to locate any further bottles on the wreck. The salvage operation took place during the calm weather of the summer months of 1990, but in the first storm at the end of the summer the rig secured over the wreck site was blown off its moorings and the salvage operation was cancelled. The operation uncovered 24 bottles. A blended whisky, SS Politician, containing a small amount of the whisky they had raised was produced, but did not sell well and the company went into liquidation. A separate brand of whisky has been released under the name SS Politician, although this has no connection with the first brand or the whisky from the ship. The various finds of whisky—whether found on land or raised from the wreck—have been placed at auction.
|
86,402 |
Paul Henderson
| 1,170,760,857 |
Canadian former professional ice hockey player
|
[
"1943 births",
"20th-century Canadian male writers",
"20th-century Canadian non-fiction writers",
"21st-century Canadian male writers",
"21st-century Canadian memoirists",
"21st-century Canadian non-fiction writers",
"Atlanta Flames players",
"Birmingham Bulls (CHL) players",
"Birmingham Bulls players",
"Canadian anti-communists",
"Canadian autobiographers",
"Canadian evangelicals",
"Canadian ice hockey left wingers",
"Canadian male non-fiction writers",
"Detroit Red Wings players",
"Hamilton Red Wings (OHA) players",
"IIHF Hall of Fame inductees",
"Ice hockey people from Ontario",
"Living people",
"Members of the Order of Canada",
"Members of the Order of Ontario",
"Order of Hockey in Canada recipients",
"People from Bruce County",
"Pittsburgh Hornets players",
"Toronto Maple Leafs players",
"Toronto Toros players"
] |
Paul Garnet Henderson, CM OOnt (born January 28, 1943) is a Canadian former professional ice hockey player. A left winger, Henderson played 13 seasons in the National Hockey League (NHL) for the Detroit Red Wings, Toronto Maple Leafs and Atlanta Flames and five in the World Hockey Association (WHA) for the Toronto Toros and Birmingham Bulls. He played over 1,000 games between the two major leagues, scoring 376 goals and 758 points. Henderson played in two NHL All-Star Games and was a member of the 1962 Memorial Cup-winning Hamilton Red Wings team as a junior.
Henderson is best known for playing for Team Canada in the 1972 Summit Series against the Soviet Union. Played during the Cold War, the series was viewed as a battle for both hockey and cultural supremacy. Henderson scored the game-winning goal in the sixth, seventh and eighth games, the last of which has become legendary in Canada and made him a national hero: it was voted the "sports moment of the century" by The Canadian Press and earned him numerous accolades. Henderson has twice been inducted into Canada's Sports Hall of Fame: in 1995 individually and in 2005 along with all players of the Summit Series team. He was inducted into the International Ice Hockey Federation Hall of Fame in 2013.
A born-again Christian, Henderson became a minister, motivational speaker and author following his playing career. He has co-written three books related to hockey or his life. Henderson was made a Member of the Order of Canada in 2013 and of the Order of Ontario in 2014.
## Early life
Henderson was born January 28, 1943, near Kincardine, Ontario. His mother, Evelyn, went into labour while staying at his father's parents' farm in the nearby community of Amberley during a snowstorm. She gave birth to Paul while the family was crossing Lake Huron via horse-drawn sleigh attempting to reach the hospital in Kincardine. His father, Garnet, was fighting for Canada during the Second World War at the time and did not meet his son until Paul was nearly three years old. Garnet worked for the Canadian National Railway following his return and the family – Paul was the eldest of five children, with brother Bruce and sisters Marilyn, Coralyn and Sandra – moved frequently to different posts in Ontario before settling in Lucknow.
The family often struggled financially, though Garnet was always able to provide the basic life necessities. Paul's first experiences with hockey came at a young age in the basement of a Chinese restaurant operated by Charlie Chin, an immigrant who settled in Lucknow. Henderson played with Chin's sons using a ball instead of a puck. The Chin family bought Henderson his first set of hockey equipment; he had been using old catalogues as shin pads. His father coached his youth teams, and at one minor hockey tournament, told his teammates simply to "just give the puck to Paul and get out of his way". That incident remained with Henderson throughout his life: while it embarrassed him at the time to be singled out in front of his friends and teammates, he later realized it stood as an affirmation and expression of his father's pride in him and his abilities.
It was in Lucknow where Henderson met his future wife, Eleanor, at the age of 15 while he was working at a grocery store. They married in 1962 and, wanting to ensure he could provide for his wife, he considered giving up the game to become a history and physical education teacher. His father convinced him to remain in hockey, warning him that he would regret it for the rest of his life if he never tried to make the National Hockey League (NHL). After considering his father's advice and talking with Eleanor, Henderson decided to play two additional years, and if he had not reached the NHL by 1964, he would quit the game and focus on his education.
## Playing career
### Junior
Henderson attracted the attention of NHL scouts at the age of 15 when he scored 18 goals and 2 assists in a 21–6 victory in a juvenile playoff game. The junior affiliates of both the Boston Bruins and Detroit Red Wings offered him tryouts. He chose to sign with the Red Wings as their junior teams were based in Hamilton, which was closest to his home. He played the 1959–60 season with the Junior B Goderich Sailors and was the youngest player on the team. Henderson moved up to the Junior A Hamilton Red Wings in 1960–61 where he was an extra forward for much of the season. Returning to Hamilton in 1961–62, he became a regular player on the team, and recorded 24 goals and 43 points in 50 games.
Hamilton won the Ontario championship that season, then defeated the Quebec Citadelles in four consecutive games to win the eastern Canadian championship. Henderson scored a goal in the clinching game, a 9–3 win, that propelled the Red Wings to their first Memorial Cup final in the team's history. They faced the Edmonton Oil Kings in the 1962 Memorial Cup final series. The Red Wings won the best-of-seven set 4–1 to capture the national championship. Henderson scored a goal in the deciding game, a 7–4 victory before over 7,000 fans at Kitchener, Ontario. He finished with seven goals and seven assists in 14 Memorial Cup playoff games.
Returning for a third season with Hamilton in 1962–63, Henderson led the Ontario Hockey Association in scoring with 49 goals in 48 games. He added 27 assists to finish the season with 76 points. A bout of strep throat resulted in his missing Hamilton's playoff games, but he was called up to the Detroit Red Wings late in their season when they were short of players. Henderson played his first two NHL games against the Toronto Maple Leafs, with only one shift in each game. In his first game, Henderson elbowed Dick Duff in the head, sparking a fight. He spent the rest of the game on the bench after several Toronto players threatened retaliation against him. In his second, he incurred a slashing penalty during his only time on the ice. Henderson estimated that he was on the ice for only 20 seconds over the two games, but drew nine minutes in penalties.
### Detroit and Toronto
After failing to make the Detroit roster out of training camp, Henderson was assigned to their American Hockey League (AHL) affiliate, the Pittsburgh Hornets, to begin the 1963–64 season. He appeared in 38 games for the Hornets and his speed and aggressive nature helped him score 10 goals and 24 points. Henderson earned a brief recall to Detroit in November, then joined the NHL team permanently early in the new year. He scored his first NHL goal on January 29, 1964, against the Chicago Black Hawks. It came late in the game against goaltender Glenn Hall and resulted in a 2–2 tie. In 32 regular season NHL games, Henderson recorded three goals and three assists, then appeared in 14 playoff games where he added five more points. The Red Wings reached the 1964 Stanley Cup Finals, but lost in seven games to Toronto.
Henderson established himself as a full-time NHL player in 1964–65, though with limited ice time. He was used primarily in a defensive role and to kill penalties, scoring 8 goals and 21 points, while appearing in 70 games. Switching to the left wing in 1965–66, Henderson played a more offensive role and scored 22 goals. He added three more in 12 playoff games as the Red Wings reached the 1966 Stanley Cup Finals versus the Montreal Canadiens. Henderson scored the game-winning goal in the first game of the finals. After winning the first two games in Montreal, Detroit lost four straight and the series.
Seeking to double his \$7,000 salary from the previous season, Henderson became embroiled in a contract dispute with the Red Wings prior to the 1966–67 NHL season, before the team acceded to his demands. He then spent the year attempting to overcome injuries; a case of tracheitis forced him to miss several early season games and led the team to consider having him play wearing a surgical mask to protect against the cold air of the arena. Henderson eventually spent time in the dry air of Arizona to cure the ailment, but he also suffered from torn chest muscles and ultimately missed a third of the season. On the ice, Henderson scored 21 goals and 40 points in 49 games.
The Red Wings were in last place of the NHL's East Division late in the 1967–68 season when, on March 4, 1968, they completed one of the biggest trades in league history up to that time: Henderson was sent to the Toronto Maple Leafs as part of a six-player deal, along with Norm Ullman and Floyd Smith, in exchange for Frank Mahovlich, Garry Unger and Pete Stemkowski. Henderson finished the season with 11 points in 13 games for Toronto, then scored 27 goals and 59 points in 1968–69.
A groin injury plagued Henderson throughout much of the 1969–70 season, but he continued to play at the team's request. He finished with 20 goals despite playing the entire season with pain. The Maple Leafs offered Henderson only a small raise, arguing that he did not deserve more because his offensive production had declined. The contract offer and the team's indifference towards his injury left Henderson disillusioned with management's attitude towards its players. Healthy in 1970–71, he scored 30 goals and an NHL career-high 60 points.
### Summit Series
Canada had long been at a disadvantage in international ice hockey tournaments as its best players were professionals in the NHL and therefore ineligible to play at the ostensibly amateur World Championship and Olympic Games. The Soviets masked the status of their best players by having them serve in the military or hold other jobs affiliated with the teams, so they retained amateur status, even though playing hockey was their only occupational responsibility. The International Ice Hockey Federation (IIHF) promised to allow Canada to use a limited number of professional players at the 1970 tournament but later reneged, causing the nation to withdraw from all international competition. Officials in Canada and the Soviet Union subsequently negotiated an arrangement that would see the top players of each nation – amateur or professional – play in an eight-game "Summit Series" in September 1972 between the world's two greatest hockey nations. Canadian fans and media approached the series with confidence; many predicted that the Canadian professionals would win all eight games.
Henderson's 38-goal season in 1971–72, a career high, earned him a place on Team Canada's roster. He scored a goal early in the first game, in Montreal, that gave Canada a 2–0 lead. The Soviet team then humbled the Canadians by scoring the next four goals and winning 7–3. A 4–1 Canadian win followed in the second game at Maple Leaf Gardens in Toronto, but the Soviets overcame a 4–2 deficit, the fourth goal scored by Henderson, to tie the third game in Winnipeg. Canada lost the fourth game, 5–3, and were jeered by the fans in Vancouver as they headed to Moscow for the final four games with a series deficit. Henderson, like most of his teammates, was frustrated by his team's play and the negative reaction they received from the crowd.
In the first game in Moscow, Henderson scored a goal to help Canada establish a 4–1 lead, but also suffered a concussion when he was tripped into the boards and knocked unconscious. He returned to finish the game, but the Soviets came back to win, 5–4, and were one victory shy of winning the series. In game six, Canada overcame what coach Harry Sinden called "the worst officials I have ever seen in my life" to win by a 3–2 score, with Henderson scoring the game-winning goal. The game was also notable for Bobby Clarke using his stick in a two-handed slash that broke Valeri Kharlamov's ankle. Henderson later called the event "the low point of the series" during the 30th anniversary celebration, but apologized for his comments after Clarke took umbrage. Canada drew even in the series at three wins apiece, plus one tie, with a 4–3 victory in game seven. Henderson again scored the winner despite being tripped as he took the shot.
By the eighth game, the competition had become more than a battle for hockey supremacy: it was also viewed as a battle between contrasting ways of life, particularly in the Soviet Union, where success in sport was used to promote the superiority of communism over western capitalism. An estimated 50 million Soviets watched the final contest, while in Canada, offices were closed and schools suspended classes to allow students to watch the game on television in gymnasium assemblies. The two teams ended the first period tied at two goals apiece, but Canada trailed at the second intermission, 5–3, and Soviet officials stated they would claim the overall victory if the game ended in a draw as a result of scoring more goals throughout the series. Canada rallied in the third period to tie the game with seven minutes remaining.
Sitting on the bench as the game entered the final minute of play, Henderson "had a feeling" that he could score. He convinced coach Sinden to send him out when Peter Mahovlich left the ice. Rushing into the Soviet zone, Henderson missed a pass from Yvan Cournoyer in front of the net and was tripped by a Soviet defenceman. As he got to his feet, Phil Esposito recovered the puck and sent it towards Henderson in front of the net. The first shot was stopped by Vladislav Tretiak, but Henderson recovered the rebound and lifted it over the fallen goaltender to give Canada a 6–5 lead with only 34 seconds left to play. It was his seventh goal of the tournament, tying him for the series lead with Esposito and Alexander Yakushev. The goal won the game, and the series, for Canada. The team returned home to massive crowds in Montreal and Toronto, and Paul Henderson had become a national hero.
### World Hockey Association
Henderson struggled to adjust to his new-found popularity. While he appreciated the support from fans and the business opportunities it created, he grew increasingly frustrated over time as the attention intruded on his private life. In his autobiography, Shooting for Glory, Henderson stated that the fame left him less satisfied than he had ever been. His frustrations with Maple Leafs owner Harold Ballard, who Henderson felt was destroying the team, contributed to his developing an ulcer. (Henderson later admitted he was not mature enough at the time to deal with the acerbic Ballard). He briefly turned to alcohol as he struggled to deal with his situation. Henderson's professional career reached its lowest point during the 1972–73 NHL season. He had become depressed, and by December, had scored only six goals. He struggled with a groin injury and played only 40 games for the Maple Leafs, who missed the playoffs.
Prior to the 1973–74 NHL season, Henderson spoke to John Bassett, owner of the World Hockey Association (WHA)'s Toronto Toros. Bassett offered Henderson a five-year contract worth twice his annual \$75,000 salary from the Maple Leafs. The deal included a \$50,000 signing bonus and performance bonuses based on how he played in his final year with the Maple Leafs. Henderson signed the contract, though he said in his autobiography that he regretted doing so before completing his term with his NHL club. A bitter opponent of the WHA, Ballard had vowed not to lose more players to the rival league. When he found out about the deal, he offered Henderson the same contract terms, but without a signing bonus. Upset at how stingy Ballard had been with his teammates, Henderson told Ballard to "take this contract and shove it". An angered Ballard never forgave Henderson, and never spoke to him again.
Following a 24-goal campaign in his final season with the Maple Leafs, Henderson officially moved to the WHA where he played in another tournament against the Soviets. While the original series was restricted to players from the NHL, the 1974 Summit Series featured a Canadian team made up of WHA players. The series lacked the intensity of the original, yet Henderson felt that he played well: he scored two goals and an assist, and though Canada finished with one win, four losses and three ties, he felt the WHA had proven itself. Henderson scored 33 goals and 63 points in the 1974–75 WHA season for the Toros while playing 58 games. He missed the playoffs after tearing his knee ligaments in a game against the Phoenix Roadrunners when he collided with Bob Mowat, an opposing player, during a line change.
Henderson scored 24 goals and 55 points in 1975–76, his last in Toronto. Following that season, the Toros relocated to Alabama where they became the Birmingham Bulls. While his contract stipulated he did not have to relocate with the team, Henderson appreciated the chance to move to a city where he could play in relative anonymity. He played the final three years of his contract in Birmingham, scoring 23, 37 and 24 goals, but made only one playoff appearance during his five WHA seasons, in 1978.
### Atlanta Flames
The WHA merged with the NHL following the 1978–79 season. Birmingham was not invited to join the NHL; the team instead joined the Central Hockey League for the 1979–80 season and became a minor league affiliate of the Atlanta Flames. Henderson considered retiring, but his family had settled in Birmingham and he knew they could remain in the United States only as long as he was employed. The Flames offered him a spot on their roster, but he preferred to remain with his family. He signed a two-year contract with the Flames on the promise that he would stay in Birmingham unless the team needed his services as a result of injury to other players. He spent the majority of the season in Birmingham, but when Atlanta did struggle with injuries, they recalled him for 30 games where he scored seven goals and six assists. Henderson also appeared in four playoff games. In his final game at Toronto's Maple Leaf Gardens, Henderson led the Flames to a 5–1 win over the Maple Leafs with a two-goal effort, resulting in his being named the game's first star.
Henderson intended the 1980–81 season to be his last as a player. He was again offered a spot on the Flames, in part to help develop the team's young players, but the franchise had relocated to Canada to become the Calgary Flames and Henderson chose to remain with Birmingham, as a player and assistant coach. He missed several games due to injuries but scored six goals in 33 games. However, the Bulls fell into financial difficulty and on February 23, 1981, the team ceased operations mid-season. Choosing not to leave his Birmingham home, Henderson retired as a player and spent the remainder of the season as a scout for the Flames.
## Legacy
Though he was not considered a good puck handler, Henderson was a fast skater and was known for his skills at shooting the puck. His career spanned 19 professional seasons during which he played over 1,000 major league games in the NHL and WHA. He scored 376 goals and 760 points between the two leagues and was a two-time NHL all-star, playing in the 1972 and 1973 All-Star Games. His career, however, was defined by the goal he scored on September 28, 1972, to win the Summit Series for Canada. It is the most famous goal in Canadian hockey history and was the defining moment for a generation of Canadians. Decades later, Henderson remains a national hero. The Canadian Press named Henderson's goal the "sports moment of the century" in 2000. The jersey worn by Henderson when he scored the goal was sold at auction for over \$1 million in 2010, thought to be the highest price ever paid for a hockey sweater.
Frank Lennon's photograph, taken moments after the goal and showing a jubilant Henderson being embraced by Yvan Cournoyer, has been "etched into the visual cortex of every Canadian". The photo won a National Newspaper Award and has been reproduced by the Royal Canadian Mint on coins. It was also named Canadian Press photograph of the year.
Sportswriters and fans have frequently called for Henderson to be inducted into the Hockey Hall of Fame on the strength of his performance. Commentator and former NHL coach Don Cherry argued that Henderson's status as hero of the "greatest series in hockey history" was enough to qualify him. Henderson himself does not believe he belongs: "So many Canadians get upset that I’m not in the Hall of Fame, and I tell them all the time if I was on the committee, I wouldn’t vote for me. Quite frankly, I didn’t have a Hall of Fame career." Henderson has been honoured by Canada's Sports Hall of Fame on two occasions: he was first inducted as an individual in 1995, and again ten years later along with his 1972 teammates. The Summit Series team has also been honoured with a star on Canada's Walk of Fame. Henderson has been inducted into the Ontario Sports Hall of Fame (1997), the IIHF Hall of Fame (2013) and has been honoured by Hockey Canada with the Order of Hockey in Canada as part of its 2013 class. He was named a Member of the Order of Canada in the 2013 Canadian honours in recognition of "his engagement in support of a range of social and charitable causes" along with his achievements on the ice. In 2014, he was named to the Order of Ontario.
## Personal life
Henderson and his wife Eleanor have three daughters: Heather, Jennifer and Jill. The family remained in Birmingham for a time following his retirement as a player. He had an opportunity to become a colour commentator for Maple Leafs broadcasts in 1981 but Ballard, still upset that Henderson had defected to the WHA, prevented his hiring. In Birmingham, he became a stockbroker, briefly joining brokerage firm E. F. Hutton. However, he was unable to get a work permit in the United States despite a petition signed by thousands of Birmingham residents who fought for him to stay.
Following the high of the 1972 Summit Series and the personal lows that came after, Henderson struggled with a sense of discontentment. He turned to religion, becoming a born-again Christian in 1975. Unable to work as a broker, Henderson entered the seminary and studied to become a minister. When he finally gave up his efforts to acquire an American work visa in 1984, he returned to Toronto. Under the auspices of Power to Change Ministries, formerly Campus Crusade for Christ Canada, he founded a men's ministry in Ontario called LeaderImpact and travels across Canada giving talks and speeches, particularly to businessmen. He has received an honorary doctorate from Briercrest College and Seminary and an honorary degree from Tyndale University College and Seminary.
Henderson is also a published author. His autobiography, Shooting for Glory, was released in 1992. With Jim Prime, he co-authored the 2011 book How Hockey Explains Canada, an exploration of the relationship between the sport and Canadian culture. He released a memoir in 2012 called The Goal of My Life with Roger Lajoie.
The death of his father from heart problems at the age of 49 had a lasting effect on Henderson. He remained conscious of his own health, and survived a heart blockage that was discovered in 2004. He was diagnosed with chronic lymphocytic leukemia in 2009. The disease prevented him from attending the 40th anniversary celebrations of the Summit Series in Moscow, but as of 2013 he was responding well to experimental treatment received as part of a clinical trial.
## Awards and honours
- J. Ross Robertson Cup (1962)
- Memorial Cup (1962)
- NHL All-Star Game (1972, 1973)
- Canada's Sports Hall of Fame (1995, 2005)
- Order of Canada (2012)
- IIHF Hall of Fame (2013)
- Order of Ontario (2014)
## Career statistics
### Regular season and playoffs
### International
| 495,671 | Operation Infinite Reach | 1,172,662,470 | 1998 American cruise missile strikes in Sudan and Afghanistan |
[
"1998 United States embassy bombings",
"1998 in Afghanistan",
"1998 in Sudan",
"20th-century military history of the United States",
"Afghan Civil War (1996–2001)",
"Afghanistan–United States relations",
"Al-Qaeda",
"August 1998 events in Asia",
"Clinton administration controversies",
"Conflicts in 1998",
"Counterterrorism in the United States",
"Military operations involving the United States",
"Military operations post-1945",
"Sudan–United States relations"
] |
Operation Infinite Reach was the codename for American cruise missile strikes on al-Qaeda bases that were launched concurrently across two continents on August 20, 1998. Launched by the U.S. Navy, the strikes hit the al-Shifa pharmaceutical factory in Khartoum, Sudan, and a camp in Khost Province, Afghanistan, in retaliation for al-Qaeda's August 7 bombings of American embassies in Kenya and Tanzania, which killed 224 people (including 12 Americans) and injured over 4,000 others. Operation Infinite Reach was the first time the United States acknowledged a preemptive strike against a violent non-state actor.
U.S. intelligence wrongly suggested financial ties between the al-Shifa plant, which produced over half of Sudan's pharmaceuticals, and Osama bin Laden; a soil sample collected from al-Shifa allegedly contained a chemical used in VX nerve gas manufacturing. Suspecting that al-Shifa was linked to, and producing chemical weapons for, bin Laden and his al-Qaeda network, the U.S. destroyed the facility with cruise missiles, killing or wounding 11 Sudanese. The strike on al-Shifa proved controversial; after the attacks, the U.S. evidence and rationale were criticized as faulty, and academics Max Taylor and Mohamed Elbushra cite "a broad acceptance that this plant was not involved in the production of any chemical weapons."
The missile strikes on al-Qaeda's Afghan training camps were intended to preempt further attacks and to kill bin Laden. The strikes damaged the installations, but bin Laden was not present at the time. Two of the targeted camps were run by Pakistan's Inter-Services Intelligence (ISI), which was training militants to fight in Kashmir; in all, five ISI officers were confirmed killed and at least twenty militants also died. Following the attacks, Afghanistan's ruling Taliban allegedly reneged on a promise to Saudi intelligence chief Turki bin Faisal to hand over bin Laden, instead strengthening its ties with the al-Qaeda leader.
Operation Infinite Reach, the largest U.S. action in response to a terrorist attack since the 1986 bombing of Libya, was met with a mixed international response: U.S. allies and most of the American public supported the strikes, but many across the Muslim world disapproved of them, viewing them as attacks specifically against Muslims, a perception that radicals capitalized on. The failure of the attacks to kill bin Laden also enhanced his public image in parts of the Muslim world. Further strikes were planned but not executed; as a 2002 congressional inquiry noted, Operation Infinite Reach was "the only instance ... in which the CIA or U.S. military carried out an operation directly against Bin Laden before September 11."
## Background
On February 23, 1998, Osama bin Laden, Ayman al-Zawahiri, and three other leaders of Islamic militant organizations issued a fatwa in the name of the World Islamic Front for Jihad Against Jews and Crusaders, publishing it in Al-Quds Al-Arabi. Deploring the stationing of U.S. troops in Saudi Arabia, the alleged U.S. aim to fragment Iraq, and U.S. support for Israel, they declared that "The ruling to kill the Americans and their allies—civilian and military—is an individual duty for every Muslim who can do it in any country in which it is possible to do it." In spring 1998, Saudi elites became concerned about the threat posed by al-Qaeda and bin Laden: militants attempted to smuggle surface-to-air missiles into the kingdom, an al-Qaeda defector alleged that Saudis were bankrolling bin Laden, and bin Laden himself lambasted the Saudi royal family. In June 1998, Al Mukhabarat Al A'amah (Saudi intelligence) director Prince Turki bin Faisal Al Saud traveled to Tarnak Farms to meet with Taliban leader Mullah Omar to discuss the question of bin Laden. Turki demanded that the Taliban either expel bin Laden from Afghanistan or hand him over to the Saudis, insisting that removing bin Laden was the price of cordial relations with the Kingdom. American analysts believed Turki offered a large amount of financial aid to resolve the dispute over bin Laden. Omar agreed to the deal, and the Saudis sent the Taliban 400 pickup trucks and funding, enabling the Taliban to retake Mazar-i-Sharif. While the Taliban sent a delegation to Saudi Arabia in July for further discussions, the negotiations stalled by August.
Around the same time, the U.S. was planning its own actions against bin Laden. Michael Scheuer, chief of the Central Intelligence Agency's bin Laden unit (Alec Station), considered using local Afghans to kidnap bin Laden, then exfiltrate him from Afghanistan in a modified Lockheed C-130 Hercules. Documents recovered from Wadih el-Hage's Nairobi computer suggested a link between bin Laden and the deaths of U.S. troops in Somalia. These were used as the foundation for the June 1998 New York indictment of bin Laden, although the charges were later dropped. The planned raid was cancelled in May after internecine disputes between officials at the FBI and the CIA; the hesitation of the National Security Council (NSC) to approve the plan; concerns over the raid's chance of success, and the potential for civilian casualties.
Al-Qaeda had begun reconnoitering Nairobi for potential targets in December 1993, using a team led by Ali Mohamed. In January 1994, bin Laden was personally presented with the team's surveillance reports, and he and his senior advisers began to develop a plan to attack the American embassy there. From February to June 1998, al-Qaeda prepared to launch their attacks, renting residences, building their bombs, and acquiring trucks; meanwhile, bin Laden continued his public-relations efforts, giving interviews with ABC News and Pakistani journalists. While U.S. authorities had investigated al-Qaeda activities in Nairobi, they had not detected any warnings of imminent attacks.
On August 7, 1998, al-Qaeda teams in Nairobi, Kenya, and Dar es Salaam, Tanzania, attacked the cities' U.S. embassies simultaneously with truck bombs. In Nairobi, the explosion collapsed the nearby Ufundi Building and destroyed the embassy, killing 213 people, including 12 Americans; another 4,000 people were wounded. In Dar es Salaam, the bomber was unable to get close enough to the embassy to demolish it, but the blast killed 11 Africans and wounded 85. Bin Laden justified the high-casualty attacks, the largest against the U.S. since the 1983 Beirut barracks bombings, by claiming they were in retaliation for the deployment of U.S. troops in Somalia; he also alleged that the embassies had devised the Rwandan genocide as well as a supposed plan to partition Sudan.
## Execution
### Planning the strikes
National Security Advisor Sandy Berger called President Bill Clinton at 5:35 AM on August 7 to notify him of the bombings. That day, Clinton began meeting with his "Small Group" of national security advisers, which included Berger, CIA director George Tenet, Secretary of State Madeleine Albright, Attorney General Janet Reno, Defense Secretary William Cohen, and Chairman of the Joint Chiefs of Staff Hugh Shelton. The group's objective was to plan a military response to the East Africa embassy bombings. Initially, the U.S. suspected either Hamas or Hezbollah of the bombings, but FBI agents John P. O'Neill and Ali Soufan demonstrated that al-Qaeda was responsible. Based on electronic and phone intercepts, physical evidence from Nairobi, and interrogations, officials soon identified bin Laden as the perpetrator of the attacks. On August 8, the White House asked the CIA and the Joint Chiefs of Staff to prepare a target list; the initial list included twenty targets in Sudan, Afghanistan, and an unknown third country, although it was narrowed down on August 12.
In an August 10 Small Group meeting, the principals agreed to use Tomahawk cruise missiles, rather than troops or aircraft, in the retaliatory strikes. Cruise missiles had been previously used against Libya and Iraq as reprisals for the 1986 Berlin discotheque bombing and the 1993 attempted assassination of then-President George H. W. Bush. Using cruise missiles also helped to preserve secrecy; airstrikes would have required more preparation that might have leaked to the media and alerted bin Laden. The option of using commandos was discarded, as it required too much time to prepare forces, logistics, and combat search and rescue. Using helicopters or bombers would have been difficult due to the lack of a suitable base or Pakistani permission to cross its airspace, and the administration also feared a recurrence of the disastrous 1980 Operation Eagle Claw in Iran. While military officials suggested bombing Kandahar, which bin Laden and his associates often visited, the administration was concerned about killing civilians and hurting the U.S.' image.
On August 11, General Anthony Zinni of Central Command was instructed to plan attacks on bin Laden's Khost camps, where CIA intelligence indicated bin Laden and other militants would be meeting on August 20, purportedly to plan further attacks against the U.S. Clinton was informed of the plan on August 12 and 14. Participants in the meeting later disagreed over whether the intelligence indicated bin Laden would attend the meeting; however, an objective of the attack remained to kill the al-Qaeda leader, and the NSC encouraged the strike regardless of whether bin Laden and his companions were known to be present at Khost. The administration aimed to prevent future al-Qaeda attacks discussed in intercepted communications. As Berger later testified, the operation also sought to damage bin Laden's infrastructure and show the administration's commitment to combating bin Laden. The Khost complex, which was 90 miles (140 km) southeast of Kabul, also had ideological significance: bin Laden had fought nearby during the Soviet–Afghan War, and he had given interviews and even held a press conference at the site. Felix Sater, then a CIA source, provided additional intelligence on the camps' locations.
On August 14, Tenet told the Small Group that bin Laden and al-Qaeda were doubtless responsible for the attack; Tenet called the intelligence a "slam dunk", according to counterterrorism official Richard Clarke, and Clinton approved the attacks the same day. As the 9/11 Commission Report relates, the group debated "whether to strike targets outside of Afghanistan". Tenet briefed the small group again on August 17 regarding possible targets in Afghanistan and Sudan; on August 19, the al-Shifa pharmaceutical facility in Khartoum, Sudan, al-Qaeda's Afghan camps, and a Sudanese tannery were designated as targets. The aim of striking the tannery, which had allegedly been given to bin Laden by the Sudanese for his road-building work, was to disrupt bin Laden's finances, but it was removed as a target due to fears of inflicting civilian casualties without any loss for bin Laden. Clinton gave the final approval for the attacks at 3:00 AM on August 20; the same day, he also signed Executive Order 13099, authorizing sanctions on bin Laden and al-Qaeda. The Clinton administration justified Operation Infinite Reach under Article 51 of the UN Charter and Title 22, Section 2377 of the U.S. Code; the former guarantees a UN member state's right to self-defense, while the latter authorizes presidential action by "all necessary means" to target international terrorist infrastructure. Government lawyers asserted that since the missile strikes were an act of self-defense and not directed at an individual, they were not forbidden as an assassination. A review by administration lawyers concluded that the attack would be legal, since the president has the authority to attack the infrastructure of anti-American terrorist groups, and al-Qaeda's infrastructure was largely human. Officials also interpreted "infrastructure" to include al-Qaeda's leadership.
The missiles would pass into Pakistani airspace, overflying "a suspected Pakistani nuclear weapons site," according to Vice Chairman of the Joint Chiefs of Staff General Joseph Ralston; U.S. officials feared Pakistan would mistake them for an Indian nuclear attack. Clarke was concerned the Pakistanis would shoot down the cruise missiles or airplanes if they were not notified, but also feared the ISI would warn the Taliban or al-Qaeda if they were alerted. In Islamabad on the evening of August 20, Ralston informed Pakistan Army Chief of Staff Jehangir Karamat of the incoming American strikes ten minutes before the missiles entered Pakistani airspace. Clarke also worried the Pakistanis would notice the U.S. Navy ships, but was told that submerged submarines would launch the missiles. However, the Pakistan Navy detected the destroyers and informed the government.
### Al-Shifa plant attack
At about 7:30 PM Khartoum time (17:30 GMT), two American warships in the Red Sea (USS Briscoe and USS Hayler) fired thirteen Tomahawk missiles at Sudan's Al-Shifa pharmaceutical factory, which the U.S. wrongly claimed was helping bin Laden build chemical weapons. The entire factory was destroyed except for the administration, water-cooling, and plant laboratory sections, which were severely damaged. One night watchman was killed and ten other Sudanese were wounded in the strike. Worried about the possibility of hazardous chemical leaks, analysts had run computer simulations on wind patterns, climate, and chemical data, which indicated a low risk of collateral damage. Regardless, planners added more cruise missiles to the strike on Al-Shifa, aiming to completely destroy the plant and any dangerous substances.
Clarke stated that intelligence linked bin Laden to Al-Shifa's current and past operators, namely Iraqi nerve gas experts such as Emad al-Ani and Sudan's ruling National Islamic Front. Since 1995, the CIA had received intelligence suggesting collaboration between Sudan and bin Laden to produce chemical weapons for attacking U.S. Armed Forces personnel based in Saudi Arabia. Since 1989, the Sudanese opposition and Uganda had alleged that the regime was manufacturing and using chemical weapons, although the U.S. did not accuse Sudan of chemical weapons proliferation. Al-Qaeda defector Jamal al-Fadl had also spoken of bin Laden's desire to obtain weapons of mass destruction, and an August 4 CIA intelligence report suggested bin Laden "had already acquired chemical weapons and might be ready to attack". Cohen later testified that physical evidence, technical and human intelligence, and the site's security and purported links to bin Laden backed the intelligence community's view that the Al-Shifa plant was producing chemical weapons and associated with terrorists.
With the help of an Egyptian agent, the CIA had obtained a sample of soil from the facility taken in December 1997 showing the presence of O-Ethyl methylphosphonothioic acid (EMPTA), a substance used in the production of VX nerve gas, at 2.5 times trace levels. (Reports are contradictory on whether the soil was obtained from within the compound itself, or outside.) The collected soil was split into three samples, which were then analyzed by a private laboratory. The agent's bona fides were later confirmed through polygraph testing; however, the CIA produced a report on Al-Shifa on July 24, 1998, questioning whether Al-Shifa produced chemical weapons or simply stored precursors, and the agency advised collecting more soil samples. Cohen and Tenet later briefed U.S. senators on intercepted telephone communications from the plant that reputedly bolstered the U.S. case against Al-Shifa. U.S. intelligence also purportedly researched the Al-Shifa factory online and searched commercial databases, but did not find any medicines for sale.
### Al-Shifa controversy
U.S. officials later acknowledged that the evidence cited by the U.S. in its rationale for the Al-Shifa strike was weaker than initially believed: The facility had not been involved in chemical weapons production, and was not connected to bin Laden. The \$30 million Al-Shifa factory, which had a \$199,000 contract with the UN under the Oil-for-Food Programme, employed 300 Sudanese and provided over half of the country's pharmaceuticals, including medicines for malaria, diabetes, gonorrhea, and tuberculosis. A Sudanese named Salah Idris purchased the plant in March 1998; while the CIA later said it found financial ties between Idris and the bin Laden-linked terrorist group Egyptian Islamic Jihad, the agency had been unaware at the time that Idris owned the Al-Shifa facility. Idris later denied any links to bin Laden and sued to recover \$24 million in funds frozen by the U.S., as well as for the damage to his factory. Idris hired investigations firm Kroll Inc., which reported in February 1999 that neither Idris nor Al-Shifa was connected to terrorism.
The chairman of Al-Shifa Pharmaceutical Industries insisted that his factory did not make nerve gas, and Sudanese President Omar al-Bashir formed a commission to investigate the factory. Sudan invited the U.S. to conduct chemical tests at the site for evidence to support its claim that the plant might have been a chemical weapons factory; the U.S. refused the invitation to investigate and did not officially apologize for the attacks. Press coverage indicated that Al-Shifa was not a secure, restricted-access factory, as the U.S. alleged, and American officials later conceded that Al-Shifa manufactured pharmaceutical drugs. Sudan requested a UN investigation of the Al-Shifa plant to verify or disprove the allegations of weapons production; while the proposal was backed by several international organizations, it was opposed by the U.S.
The American Bureau of Intelligence and Research (INR) criticized the CIA's intelligence on Al-Shifa and bin Laden in an August 6 memo; as James Risen reported, INR analysts concluded that "the evidence linking Al Shifa to bin Laden and chemical weapons was weak." According to Risen, some dissenting officials doubted the basis for the strike, but senior principals believed that "the risks of hitting the wrong target were far outweighed by the possibility that the plant was making chemical weapons for a terrorist eager to use them." Senior NSC intelligence official Mary McCarthy had stated that better intelligence was needed before planning a strike, while Reno, concerned about the lack of conclusive evidence, had pressed for delaying the strikes until the U.S. obtained better intelligence. According to CIA officer Paul R. Pillar, senior Agency officials met with Tenet before he briefed the White House on bin Laden and Al-Shifa, and the majority of them opposed attacking the plant. Barletta notes that "It is unclear precisely when U.S. officials decided to destroy the Shifa plant." ABC News reported that Al-Shifa was designated as a target just hours in advance; Newsweek stated that the plant was targeted on August 15–16; U.S. officials asserted that the plant was added as a target months in advance; and a U.S. News & World Report article contended that Al-Shifa had been considered as a target for years. Clinton ordered an investigation into the evidence used to justify the Al-Shifa strike, while as of July 1999, the House and Senate intelligence committees were also investigating the target-selection process, the evidence cited, and whether intelligence officials recommended attacking the plant.
It was later hypothesized that the EMPTA detected was the result of the breakdown of a pesticide or confused with Fonofos, a structurally similar insecticide used in African agriculture. Eric Croddy contends that the sample did not contain Fonofos, arguing that Fonofos has a distinct ethyl group and a benzene group, which distinguish it from EMPTA, and that the two chemicals could not be easily confused. Tests conducted in October 1999 by Idris' defense team found no trace of EMPTA. Although Tenet vouched for the Egyptian agent's truthfulness, Barletta questions the operative's bona fides, arguing that they may have misled U.S. intelligence; he also notes that the U.S. withdrew its intelligence staff from Sudan in 1996 and later retracted 100 intelligence reports from a fraudulent Sudanese source. Ultimately, Barletta concludes that "It remains possible that Al-Shifa Pharmaceutical Factory may have been involved in some way in producing or storing the chemical compound EMPTA ... On balance, the evidence available to date indicates that it is more probable that the Shifa plant had no role whatsoever in CW production."
### Attack on Afghan camps
Four U.S. Navy ships and the submarine USS Columbia, stationed in the Arabian Sea, fired between 60 and 75 Tomahawk cruise missiles into Afghanistan at the Zhawar Kili Al-Badr camp complex in the Khost region, which included a base camp, a support camp, and four training camps. Peter Bergen identifies the targeted camps, located in Afghanistan's "Pashtun belt," as al-Badr 1 and 2, al-Farooq, Khalid bin Walid, Abu Jindal, and Salman Farsi; other sources identify the Muawia, Jihad Wahl, and Harkat-ul-Jihad al-Islami camps as targets. According to Shelton, the base camp housed "storage, housing, training and administration facilities for the complex," while the support camp included weapons-storage facilities and managed the site's logistics. Egyptian Islamic Jihad and the Algerian Armed Islamic Group also used the Khost camps, as well as Pakistani militant groups fighting an insurgency in Kashmir, such as Harkat Ansar, Lashkar-e-Taiba, and Hizbul Mujahideen. The rudimentary camps, reputedly run by Taliban official Jalaluddin Haqqani, were frequented by Arab, Chechen, and Central Asian militants, as well as the ISI. The missiles hit at roughly 10:00 PM Khost time (17:30 GMT); as in Sudan, the strikes were launched at night to avoid collateral damage. In contrast to the attack on Al-Shifa, the strike on the Afghan camps was uncontroversial.
The U.S. first fired unitary (C-model) Tomahawks at the Khost camps, aiming to attract militants into the open, then launched a barrage of D-model missiles equipped with submunitions to maximize casualties. Sources differ on the precise number of casualties inflicted by the missile strikes. Bin Laden bodyguard Abu Jandal and militant trainee Abdul Rahman Khadr later estimated that only six men had been killed in the strikes. The Taliban claimed 22 Afghans killed and over 50 seriously injured, while Berger put al-Qaeda casualties at between 20 and 30 men. Bin Laden jokingly told militants that only a few camels and chickens had died, although his spokesman cited losses of six Arabs killed and five wounded, seven Pakistanis killed and over 15 wounded, and 15 Afghans killed. A declassified September 9, 1998, State Department cable stated that around 20 Pakistanis and 15 Arabs died, out of a total of over 50 killed in the attack. Harkat-ul-Mujahideen's leader, Fazlur Rehman Khalil, initially claimed a death toll of over 50 militants, but later said that he had lost fewer than ten fighters. Estimates of the militant death toll thus range from six to more than 50.
Pakistani and hospital sources gave a death toll of eleven dead and fifty-three wounded. Pakistani journalist Ahmed Rashid writes that 20 Afghans, seven Pakistanis, three Yemenis, two Egyptians, one Saudi and one Turk were killed. Initial reports by Pakistani intelligence chief Chaudhry Manzoor and a Foreign Ministry spokesman stated that a missile had landed in Pakistan and killed six Pakistanis; the government later retracted the statement and fired Manzoor for the incorrect report. However, the 9/11 Commission Report states that Clinton later called Pakistani Prime Minister Nawaz Sharif "to apologize for a wayward missile that had killed several people in a Pakistani village." One 1998 U.S. News & World Report article suggested that most of the strike's victims were Pakistani militants bound for the Kashmiri insurgency, rather than al-Qaeda members; the operation killed a number of ISI officers present in the camps. A 1999 press report stated that seven Harkat Ansar militants were killed and 24 were wounded, while eight Lashkar-e-Taiba and Hizbul Mujahideen members were killed. In a May 1999 meeting with American diplomats, Haqqani said his facilities had been destroyed and 25 of his men killed in the operation.
Following the attack, U.S. surveillance aircraft and reconnaissance satellites photographed the sites for damage assessment, although clouds obscured the area. According to The Washington Post, the imagery indicated "considerable damage" to the camps, although "up to 20 percent of the missiles ... [had] disappointing results." Meanwhile, bin Laden made calls by satellite phone, attempting to ascertain the damage and casualties the camps had sustained. One anonymous official reported that some buildings were destroyed, while others suffered heavy or light damage or were unscathed. Abu Jandal stated that bathrooms, the kitchen, and the mosque were hit in the strike, but the camps were not completely destroyed. Berger claimed that the damage to the camps was "moderate to severe," while CIA agent Henry A. Crumpton later wrote that al-Qaeda "suffered a few casualties and some damaged infrastructure, but no more." Since the camps were relatively unsophisticated, they were quickly and easily rebuilt within two weeks.
ISI director Hamid Gul reportedly notified the Taliban of the missile strikes in advance; bin Laden, who survived the strikes, later claimed that he had been informed of them by Pakistanis. A bin Laden spokesman claimed that bin Laden and the Taliban had prepared for the strike after hearing of the evacuation of Americans from Pakistan. Other U.S. officials reject the tip-off theory, citing a lack of evidence and ISI casualties in the strike; Tenet later wrote in his memoirs that the CIA could not ascertain whether Bin Laden had been warned in advance. Steve Coll reports that the CIA heard after the attack that bin Laden had been at Zhawar Kili Al-Badr but had left some hours before the missiles hit. Bill Gertz writes that the earlier arrest of Mohammed Odeh on August 7, while he was traveling to meet with bin Laden, alerted bin Laden, who canceled the meeting; this meant the camps targeted by the cruise missiles were mainly empty the day of the U.S. strike. Lawrence Wright says the CIA intercepted a phone call indicating that bin Laden would be in Khost, but the al-Qaeda chief instead decided to go to Kabul. Other media reports indicate that the strike was delayed to maximize secrecy, thus missing bin Laden. Scheuer charges that while the U.S. had planned to target the complex's mosque during evening prayers to kill bin Laden and his associates, the White House allegedly delayed the strikes "to avoid offending the Muslim world". Simon Reeve states that Pakistani intelligence had informed bin Laden that the U.S. was using his phone to track him, so he turned it off and cancelled the meeting at Khost.
## Aftermath
### Reactions in the U.S.
Clinton flew back to Washington, D.C. from his vacation at Martha's Vineyard, speaking with legislators from Air Force One and British Prime Minister Tony Blair, Egyptian President Hosni Mubarak, and Sharif from the White House. Clinton announced the attacks in a TV address, saying the Khost camp was "one of the most active terrorist bases in the world." He emphasized: "Our battle against terrorism ... will require strength, courage and endurance. We will not yield to this threat ... We must be prepared to do all that we can for as long as we must." Clinton also cited "compelling evidence that [bin Laden] was planning to mount further attacks" in his rationale for Operation Infinite Reach.
The missiles were launched three days after Clinton testified on the Clinton–Lewinsky scandal, and some countries, media outlets, protesters, and Republicans accused him of ordering the attacks as a diversion. The attacks also drew parallels to the then-recently released movie Wag the Dog, which features a fictional president faking a war in Albania to distract attention from a sex scandal. Administration officials denied any connection between the missile strikes and the ongoing scandal, and 9/11 Commission investigators found no reason to dispute those statements.
Operation Infinite Reach was covered heavily by U.S. media: about 75% of Americans knew about the strikes by the evening of August 20. The next day, 79% of respondents in a Pew Research Center poll reported they had "followed the story 'very' or 'fairly' closely." The week after the strikes, the evening programs of the three major news networks featured 69 stories on them. In a Newsweek poll, up to 40% of respondents thought that diverting attention from the Lewinsky scandal was one objective of the strikes; according to a Star Tribune poll, 31% of college-educated respondents and 60% of those "with less than a 12th grade education" believed that the attacks were motivated "a great deal" by the scandal. A USA Today/CNN/Gallup poll of 628 Americans showed that 47% thought the operation would increase terrorist attacks, while 38% thought it would lessen terrorism. A Los Angeles Times poll of 895 respondents, taken three days after the attacks, indicated that 84% believed the operation would trigger a retaliatory terrorist attack on U.S. soil.
### International reactions
While U.S. allies such as Australia, Germany, the United Kingdom, Israel, and the Northern Alliance supported the attacks, they were opposed by Cuba, Russia, and China, as well as the targeted nations and other Muslim countries. German Chancellor Helmut Kohl said that "resolute actions by all countries" were required against terrorism, while Russian President Boris Yeltsin condemned "the ineffective approach to resolving disputes without trying all forms of negotiation and diplomacy." The Taliban denounced the operation, denied charges it provided a safe haven for bin Laden, and insisted the U.S. attack killed only innocent civilians. Mullah Omar condemned the strikes and announced that Afghanistan "will never hand over bin Laden to anyone and (will) protect him with our blood at all costs." A mob in Jalalabad burned and looted the local UN office, while an Italian UN official was killed in Kabul on August 21, allegedly in response to the strikes. Thousands of anti-U.S. protesters took to the streets of Khartoum. Omar al-Bashir led an anti-U.S. rally and warned of possible reciprocation, and Martha Crenshaw notes that the strike "gained the regime some sympathy in the Arab world." The Sudanese government expelled the British ambassador for Britain's support of the attacks, while protesters stormed the empty U.S. embassy. Sudan also reportedly allowed two suspected accomplices to the embassy bombings to escape. Libyan leader Muammar al-Gaddafi declared his country's support for Sudan and led an anti-U.S. rally in Tripoli. Zawahiri later equated the destruction of Al-Shifa with the September 11 attacks.
Pakistan condemned the U.S. missile strikes as a violation of the territorial integrity of two Islamic countries, and criticized the U.S. for allegedly violating Pakistani airspace. Pakistanis protested the strikes in large demonstrations, including a 300-strong rally in Islamabad, where protesters burned a U.S. flag outside the U.S. Information Service center; in Karachi, thousands burned effigies of Clinton. The Pakistani government was enraged by the ISI and trainee casualties, the damage to ISI training camps, the short notice provided by the U.S., and the Americans' failure to inform Sharif of the strikes. Iran's Supreme Leader, Ali Khamenei, and Iraq denounced the strikes as terrorism, while Iraq also denied producing chemical weapons in Sudan. The Arab League, holding an emergency meeting in Cairo, unanimously demanded an independent investigation into the Al-Shifa facility; the League also condemned the attack on the plant as a violation of Sudanese sovereignty.
Several Islamist groups also condemned Operation Infinite Reach, and some of them threatened retaliation. Hamas founder Ahmed Yassin stated that American attacks against Muslim countries constituted an attack on Islam itself, accusing the U.S. of state terrorism. Mustafa Mashhur, the leader of the Muslim Brotherhood, said that U.S. military action would inflame public opinion against America and foster regional unrest, a view echoed by a Hezbollah spokesman. Harkat-ul-Mujahideen threatened Americans and Jews, announcing a worldwide jihad against the U.S. Al-Gama'a al-Islamiyya denounced the strikes as "a crime which will not go without punishment" and encouraged fellow militant groups to reciprocate. In November, Lashkar-e-Taiba held a three-day demonstration in Lahore to support bin Laden, in which 50,000 Pakistanis promised vengeance for the strikes. American embassies and facilities worldwide also received a high volume of threats following the attacks. The attacks fueled anti-Semitic conspiracy theories in the region that Lewinsky was a Jewish agent influencing Clinton against aiding Palestine, theories that reportedly influenced Mohamed Atta to join al-Qaeda's Hamburg cell and carry out the September 11 attacks.
#### Planet Hollywood bombing
A Planet Hollywood restaurant in Cape Town, South Africa, was the target of a terrorist bombing on August 25, killing two and injuring 26. The perpetrators, Muslims Against Global Oppression (later known as People Against Gangsterism and Drugs), stated that it was in retaliation for Operation Infinite Reach.
### Al-Qaeda victory
The outcome was considered a political and strategic victory for al-Qaeda. The Taliban announced within a day that bin Laden had survived the attacks, which Wright notes strengthened his image in the Muslim world "as a symbolic figure of resistance" to the U.S. Bin Laden had prominent support in Pakistan, where two hagiographies of the al-Qaeda chief were soon published, parents began naming their newborn sons Osama, mosques distributed his taped speeches, and cargo trucks bore the slogan "Long Live Osama". Children in Kenya and Tanzania wore bin Laden T-shirts, and al-Qaeda sold propaganda videos of the strikes' damage in European and Middle Eastern Islamic bookstores. A 1999 report prepared by Sandia National Laboratories stated that bin Laden "appeared to many as an underdog standing firm in the face of bullying aggression," adding that the missile strikes sparked further planning of attacks by extremists. Operation Infinite Reach also strengthened bin Laden's associates' support for him, and helped the al-Qaeda leader consolidate support among other Islamist militant groups. The attacks also helped al-Qaeda recruit new members and solicit funds. Naftali concludes that the strikes damaged the Khost camps but failed to deter al-Qaeda and "probably intensified [bin Laden's] hunger for violence." Similarly, researcher Rohan Gunaratna told the 9/11 Commission that the attacks did not reduce the threat of al-Qaeda.
## Assessment
Each cruise missile cost between \$750,000 and \$1 million, and nearly \$75,000,000 in weapons was fired in the strikes overall. The missiles' failure to eliminate their targets led to an acceleration in the American program to develop unmanned combat air vehicles. On September 2, the Taliban announced that it had found an unexploded U.S. missile, and the Pakistani press claimed that another had landed in Balochistan's Kharan Desert. Russian intelligence and intercepted al-Qaeda communications indicate that China sent officials to Khost to examine and buy some of the unexploded missiles; bin Laden used the proceeds, over \$10 million, to fund Chechen opposition forces. Pakistani missile scientists studied the recovered Tomahawk's computer, GPS, and propulsion systems, and Wright contends that Pakistan "may have used [the Tomahawks] ... to design its own version of a cruise missile."
The September 9 State Department cable also claimed that "the U.S. strikes have flushed the Arab and Pakistani militants out of Khost," and while the camps were relocated near Kandahar and Kabul, paranoia lingered as al-Qaeda suspected that a traitor had facilitated the attacks. For example, Abu Jandal claimed that the U.S. had employed an Afghan cook to pinpoint bin Laden's location. Bin Laden augmented his personal bodyguard and began changing where he slept, while al-Qaeda military chief Mohammed Atef frisked journalists who sought to meet bin Laden.
Two days after Operation Infinite Reach, Omar reportedly called the State Department, saying that the strikes would only lead to more anti-Americanism and terrorism, and that Clinton should resign. The embassy bombings and the declaration of war against the U.S. had divided the Taliban and angered Omar. However, bin Laden swore an oath of fealty to the Taliban leader, and the two became friends. According to Wright, Omar also believed that turning over bin Laden would weaken his position. In an October cable, the State Department also wrote that the missile strikes worsened Afghan-U.S. relations while bringing the Taliban and al-Qaeda closer together. A Taliban spokesman even told State Department officials in November that "If [the Taliban] could have retaliated with similar strikes against Washington, it would have." The Taliban also denied American charges that bin Laden was responsible for the embassy bombings. When Turki visited Omar to retrieve bin Laden, Omar told the prince that they had miscommunicated and he had never agreed to give the Saudis bin Laden. In Turki's account, Omar lambasted him when he protested, insulting the Saudi royal family and praising the Al-Qaeda leader; Turki left without bin Laden. The Saudis broke off relations with the Taliban and allegedly hired a young Uzbek named Siddiq Ahmed in a failed bid to assassinate bin Laden. American diplomatic engagement with the Taliban continued, and the State Department insisted to them that the U.S. was only opposed to bin Laden and al-Qaeda, at whom the missile strikes were aimed, not Afghanistan and its leadership.
Following the strikes, Osama bin Laden's spokesman announced that "The battle has not started yet. Our answer will be deeds, not words." Zawahiri made a phone call to reporter Rahimullah Yusufzai, stating that "We survived the attack ... we aren't afraid of bombardment, threats, and acts of aggression ... we are ready for more sacrifices. The war has only just begun; the Americans should now await the answer." Al-Qaeda attempted to recruit chemists to develop a more addictive type of heroin for export to the U.S. and Western Europe, but was unsuccessful. A September 1998 intelligence report was titled "UBL Plans for Reprisals Against U.S. Targets, Possibly in U.S.," while the August 6, 2001, President's Daily Brief stated that after Operation Infinite Reach, "Bin Ladin told followers he wanted to retaliate in Washington."
Afterward, the U.S. considered, but did not execute, further cruise missile strikes; from 1999 to 2001, ships and submarines in the North Arabian Sea were prepared to conduct further attacks against bin Laden if his location could be ascertained. The U.S. considered firing more cruise missiles against bin Laden in Kandahar in December 1998 and May 1999; at an Emirati hunting camp in Helmand in February 1999; and in Ghazni in July 1999, but the strikes were called off due to various factors, including questionable intelligence and the potential for collateral damage. Similarly, CIA-employed Afghans planned six times to attack bin Laden's convoy but did not, citing fears of civilian casualties, tight security, or that the al-Qaeda chief took a different route. Thus, Operation Infinite Reach was the only U.S. operation directed against bin Laden before the September 11 attacks. The operation's failure later dissuaded President George W. Bush from ordering similar strikes in the 2001 invasion of Afghanistan.
## See also
- Foreign policy of the Bill Clinton administration
- History of Afghanistan (1992–present)
- Sudan–United States relations
- Timeline of United States military operations
|
418,065 |
Stuyvesant High School
| 1,172,751,701 |
Specialized high school in New York City
|
[
"1904 establishments in New York City",
"Battery Park City",
"Educational institutions established in 1904",
"Gifted education",
"New Classical architecture",
"Public high schools in Manhattan",
"Specialized high schools in New York City",
"Stuyvesant High School",
"Stuyvesant family"
] |
Stuyvesant High School (pronounced /ˈstaɪvəsənt/), commonly referred to among its students as Stuy (pronounced /staɪ/), is a public college-preparatory, specialized high school in New York City, United States. Operated by the New York City Department of Education, it is one of the city's specialized high schools, which offer tuition-free accelerated academics to city residents. It is one of the most selective public high schools in New York City, New York State, and the United States.
Stuyvesant was established as an all-boys school in the East Village of Manhattan in 1904. An entrance examination was mandated for all applicants starting in 1934, and the school started accepting female students in 1969. Stuyvesant moved to its current location at Battery Park City in 1992 because the student body had become too large to be suitably accommodated in the original campus. The old building now houses several high schools.
Admission to Stuyvesant involves passing the Specialized High Schools Admissions Test. Every March, the 800 to 850 applicants with the highest SHSAT scores among the roughly 30,000 students who apply to Stuyvesant are accepted. The school has a wide range of extracurricular activities, including a theater competition called SING! and two student publications.
Stuyvesant consistently ranks among the top public schools in the nation. According to a Niche report, Stuyvesant ranks first among public high schools in New York State and sixth nationally. Notable alumni include former United States Attorney General Eric Holder, physicists Brian Greene and Lisa Randall, economist Thomas Sowell, mathematician Paul Cohen, chemist Roald Hoffmann, genome researcher Eric Lander, angel investor Naval Ravikant, Oscar-winning actor James Cagney, and comedian Billy Eichner. Stuyvesant is one of only six secondary schools worldwide that have educated four or more Nobel laureates.
## History
### Planning
New York City's Superintendent of Schools, William Henry Maxwell, had first written about the need to construct manual trade schools in New York City in 1887. At the time, C. B. J. Snyder was designing many of the city's public school buildings using multiple architectural styles. The first trade school in the city was Manual Training High School in Brooklyn, which opened in 1893. By 1899, Maxwell was advocating for a manual trade school in Manhattan.
In January 1903, Maxwell and Snyder submitted a report to the New York City Board of Education in which they suggested the creation of a trade school in Manhattan. The Board of Education approved the plans in April 1904. They suggested that the school occupy a plot on East 15th Street, west of First Avenue. However, that plot did not yet contain a school building, and so the new trade school was initially housed within PS 47's former building at 225 East 23rd Street. The Board of Education also wrote that the new trade school would be "designated as the Stuyvesant High School, as being reminiscent of the locality." Stuyvesant Square, Stuyvesant Street, and later Stuyvesant Town (which was built in 1947) are all located near the proposed 15th Street school building. All of these locations were named after Peter Stuyvesant, the last Dutch governor of New Netherland and owner of the area's Stuyvesant Farm. The appellation was selected in order to avoid confusion with Brooklyn's Manual Training High School.
### Opening and boys' school
Stuyvesant High School opened in September 1904 as Manhattan's first manual trade school for boys. At the time of its opening, the school consisted of 155 students and 12 teachers.
At first, the school provided a core curriculum of "English, Latin, modern languages, history, mathematics, physics, chemistry, [and] music," as well as a physical education program and a more specialized track of "woodworking, metalworking, mechanical drawing, [and] freehand drawing." However, in June 1908, Maxwell announced that the trade school curriculum would be separated from the core curriculum and a discrete trade school would operate in the Stuyvesant building during the evening. Thereafter, Stuyvesant became renowned for excellence in math and science. In 1909, 80 percent of the school's alumni went to college, compared with the 25 to 50 percent of graduates sent by other schools.
By 1919, officials started restricting admission based on scholastic achievement. Stuyvesant implemented a double session plan in 1919 to accommodate the rising number of students: some students would attend in the morning, while others would take classes in the afternoon and early evening. All students studied a full set of courses. These double sessions ran until spring 1957. The school implemented a system of entrance examinations in 1934. The examination program, developed with the assistance of Columbia University, was expanded in 1938 to include the newly founded Bronx High School of Science.
In 1956, a team of six students designed and began construction of a cyclotron. A low-power test of the device succeeded six years later. A later attempt at full-power operation, however, knocked out the power to the school and surrounding buildings.
### Co-educational school
In 1969, Alice de Rivera filed a lawsuit against the Board of Education, alleging that she had been banned from taking Stuyvesant's entrance exam because of her gender. The lawsuit was decided in the student's favor, and Stuyvesant was required to accept female students. The first female students were accepted in September 1969, when Stuyvesant offered admission to 14 girls and enrolled 12 of them. The next year, 223 female students were accepted to Stuyvesant. By 2015, females represented 43% of the total student body.
In 1972, the New York State Legislature passed the Hecht–Calandra Act, which designated Brooklyn Technical High School, Bronx High School of Science, Stuyvesant High School, and the High School of Music & Art (now Fiorello H. LaGuardia High School) as specialized high schools of New York City. The act called for a uniform exam to be administered for admission to Brooklyn Tech, Bronx Science, and Stuyvesant. The exam, named the Specialized High Schools Admissions Test (SHSAT), tested the mathematical and verbal abilities of students who were applying to any of the specialized high schools. The only exception was for applicants to LaGuardia High School, who were accepted by audition rather than examination.
#### September 11 attacks
The current school building is about 0.5 miles (0.8 km) away from the site of the World Trade Center, which was destroyed in the terrorist attacks on September 11, 2001. The school was evacuated during the attack. Although the smoke cloud coming from the World Trade Center engulfed the building at one point, there was no structural damage to the building, and there were no reports of physical injuries. Less than an hour after the collapse of the second World Trade Center tower, concern over a bomb threat at the school prompted an evacuation of the surrounding area, as reported live on the Today show. When classes resumed on September 21, 2001, students were moved to Brooklyn Technical High School while the Stuyvesant building served as a base of operations for rescue and recovery workers. This caused serious congestion at Brooklyn Tech, and required the students to attend in two shifts, with the Stuyvesant students attending the evening shift. Normal classes resumed nearly a month after the attack, on October 9.
Because Stuyvesant was so close to the World Trade Center site, there were concerns of asbestos exposure. The U.S. EPA indicated at that time that Stuyvesant was safe from asbestos, and conducted a thorough cleaning of the Stuyvesant building. However, the Stuyvesant High School Parents' Association contested the accuracy of the assessment. Some health problems have been reported: former teacher Mark Bodenheimer, for example, developed respiratory problems and accepted a transfer to The Bronx High School of Science after having difficulty continuing his work at Stuyvesant. Another case is that of Stuyvesant's 2002 class president Amit Friedlander, who received local press coverage in September 2006 after being diagnosed with cancer. While there have been other cases linked to the same dust cloud that emanated from Ground Zero, there is no definitive evidence that such cases directly affected the Stuyvesant community. Stuyvesant students did, however, spend a full year in the building before the theater and air systems were cleaned, and a group of Stuyvesant alumni has been lobbying for health benefits since 2006 as a result. In 2019, during a hearing on the reauthorization of the 9/11 Victims Compensation Fund, alumna Lila Nordstrom testified before the House Judiciary Committee about the conditions at Stuyvesant on and after 9/11.
Nine alumni were killed in the World Trade Center attack. Another alumnus, Richard Ben-Veniste of the class of 1960, was on the 9/11 Commission. On October 2, 2001, the school newspaper, The Spectator, created a special 24-page full-color 9/11 insert containing student photos, reflections and stories. On November 20, 2001, the insert was distributed for free to the greater metropolitan area, enclosed within 830,000 copies of The New York Times. In the months after the attacks, Annie Thoms, an English teacher at Stuyvesant and the theater adviser at the time, suggested that the students take accounts of staff and students' reactions during and after September 11, 2001, and turn them into a series of monologues. Thoms then published these monologues as With Their Eyes: September 11—The View from a High School at Ground Zero.
#### Later history
During the 2003–2004 school year, Stuyvesant celebrated the 100th anniversary of its founding with a full year of activities. Events included a procession from the 15th Street building to the Chambers Street one, a meeting of the National Consortium for Specialized Secondary Schools of Mathematics, Science and Technology, an all-class reunion, and visits and speeches from notable alumni.
In the 21st century, keynote graduation speakers have included Attorney General Eric Holder (2001), former President Bill Clinton (2002), United Nations Secretary General Kofi Annan (2004), Late Night comedian Conan O'Brien (2006), Humans of New York founder Brandon Stanton (2015), actor George Takei (2016), and astrophysicist Neil deGrasse Tyson (2018).
## Buildings
### 15th Street building
In August 1904, the Board of Education authorized Snyder to design a new facility for Stuyvesant High School at 15th Street. The new school would be shaped like the letter "H" in order to maximize the number of windows on the building. The cornerstone for the new building was laid in September 1905. Approximately \$1.5 million was spent on constructing the school, including \$600,000 for the exterior alone. In 1907, Stuyvesant moved to the new building on 15th Street. The new building had a capacity of 2,600 students, more than double that of the existing school building at 23rd Street. It contained 25 classrooms devoted to skilled industrial trades such as joinery, as well as 53 regular classrooms and a 1,600-seat auditorium.
During the 1950s, the building underwent a \$2 million renovation to update its classrooms, shops, libraries, and cafeterias.
Through the 1970s and 1980s, when New York City public schools, in general, were marked by violence and low grades among their students, Stuyvesant had a reputation for being a top-notch school. However, the school building was deteriorating due to overuse and lack of maintenance. A New York Times report stated that the building had "held out into old age with minimal maintenance and benign neglect until its peeling paint, creaking floorboards and antiquated laboratories became an embarrassment." The five-story building could not cater adequately to the several thousand students, leading the New York City Board of Education to make plans to move the school to a new building in Battery Park City, near lower Manhattan's Financial District. The 15th Street building remains in use as the "Old Stuyvesant Campus," housing three schools: the Institute for Collaborative Education, the High School for Health Professions and Human Services, and PS 226.
### Current building
In 1987, New York City Mayor Ed Koch and New York State Governor Mario Cuomo jointly announced the construction of a new Stuyvesant High School building in Battery Park City. The Battery Park City Authority donated 1.5 acres (0.61 ha) of land for the new building. The authority was not required to hire the lowest bidder, which meant that the construction process could be accelerated in return for a higher cost. The building was designed by the architectural firms of Gruzen Samton Steinglass and Cooper, Robertson & Partners. The structure's main architect, Alexander Cooper of Cooper, Robertson & Partners, had also designed much of Battery Park City.
Stuyvesant's principal at the time, Abraham Baumel, visited the country's most advanced laboratories to gather ideas about what to include in the new Stuyvesant building's 12 laboratory rooms. The new 10-story building also included banks of escalators, glass-walled studios on the roof, and a four-story northern wing with a swimming pool, five gymnasiums, and an auditorium. Construction began in 1989. When it opened in 1992, the building was New York City's first new high school building in ten years. The new Stuyvesant Campus cost \$150 million, making it the most expensive high school building ever built in the city at the time. The library has a capacity of 40,000 volumes and overlooks Battery Park City.
Shortly after the building was completed, the \$10-million Tribeca Bridge was built to allow students to enter the building without having to cross the busy West Street. The building was designed to be fully compliant with the Americans with Disabilities Act and is listed as such by the New York City Department of Education. As a result, the building is one of five additional sites of P721M, a school for students with multiple disabilities who are between the ages of 15 and 21.
In 1997, the eastern end of the mathematics floor was dedicated to Richard Rothenberg, the math department chairman who had died from a sudden heart attack earlier that year. Sculptor Madeleine Segall-Marx was commissioned to create the Rothenberg Memorial in his honor. She created a mathematics wall entitled "Celebration", consisting of 50 wooden boxes—one for each year of his life—behind a glass wall, featuring mathematical concepts and reflections on Rothenberg.
In 2006, Robert Ira Lewy of the class of 1960 made a gift worth \$1 million to found the Dr. Robert Ira Lewy M.D. Multimedia Center, and donated his personal library in 2007. In late 2010, the school library merged with the New York Public Library (NYPL) network in a four-year pilot program, in which all students of the school received a student library card so they could check books out of the school library or any other public library in the NYPL system.
An escalator collapse at Stuyvesant High School on September 13, 2018, injured 10 people, including 8 students.
#### Mnemonics
During construction, the Battery Park City Authority, the Percent for Art Program of the City of New York, the Department of Cultural Affairs, and the New York City Board of Education commissioned Mnemonics, an artwork by public artists Kristin Jones and Andrew Ginzel. Four hundred hollow glass blocks were dispersed randomly from the basement to the tenth floor of the new Stuyvesant High School building. Each block contains relics providing evidence of geographical, natural, cultural, and social worlds, from antiquity to the present time.
The blocks are set into the hallway walls and scattered throughout the building, and each is inscribed with a brief description of its contents or context. The items displayed include a section of the Great Wall of China, fragments of the Mayan pyramids, leaves from the sacred Bo tree, water from the Nile and Ganges Rivers, a Revolutionary War button, pieces of the 15th Street Stuyvesant building, a report card of a student who studied in the old building, fragments of monuments from around the world, various chemical compounds, and memorabilia from each of the 88 years of the 15th Street building's history. Empty blocks were also installed, to be filled with items chosen by each graduating class through 2080. The installation received the Award for Excellence in Design from the Art Commission of the City of New York.
## Transportation
The New York City Subway's Chambers Street station is located nearby, as is the Chambers Street–World Trade Center station. Several New York City Bus routes also stop near Stuyvesant. Students are provided full-fare or half-fare student MetroCards for public transportation at the start of each term, depending on how far they live from the school.
## Enrollment
### Entrance examination
Stuyvesant has a total enrollment of over 3,000 students and is open to residents of New York City entering ninth or tenth grade. Enrollment is based solely on performance on the three-hour Specialized High Schools Admissions Test, which is administered annually. Approximately 28,000 students took the test in 2017. The list of schools using the SHSAT has since grown to include eight of New York's nine specialized high schools. The test score necessary for admission to Stuyvesant has consistently been higher than that needed for admission to the other schools using the test. Admission is currently based on an individual's score on the examination and the pre-submitted ranking of Stuyvesant among the other specialized schools. Ninth-grade and rising tenth-grade students are also eligible to take the test for enrollment, but far fewer students are admitted that way. The test covers math (word problems and computation) and verbal (reading comprehension) skills. Former Mayor John Lindsay and community activist group Association of Community Organizations for Reform Now (ACORN) have argued that the exam may be biased against African and Hispanic Americans, while attempts to eliminate the exam have been criticized as discriminatory against Asian Americans.
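The assignment rule described above — rank all test-takers by SHSAT score, then place each in turn at the highest school on their pre-submitted preference list that still has seats — amounts to a score-ordered serial dictatorship. A minimal sketch follows; the names, scores, and seat counts are purely illustrative, and the real process involves additional details not modeled here:

```python
# Illustrative sketch of score-ordered assignment: applicants are processed
# from highest to lowest exam score, and each is placed at the first school
# on their pre-submitted preference list that still has open seats.
def assign(applicants, seats):
    """applicants: list of (name, score, preference list); seats: school -> capacity."""
    placements = {}
    remaining = dict(seats)
    for name, score, prefs in sorted(applicants, key=lambda a: -a[1]):
        for school in prefs:
            if remaining.get(school, 0) > 0:
                remaining[school] -= 1
                placements[name] = school
                break  # applicant placed; unplaced applicants get no school here
    return placements

# Hypothetical demo: two Stuyvesant seats, one Bronx Science seat.
demo = [
    ("A", 560, ["Stuyvesant", "Bronx Science"]),
    ("B", 545, ["Stuyvesant", "Brooklyn Tech"]),
    ("C", 530, ["Stuyvesant", "Bronx Science"]),
]
print(assign(demo, {"Stuyvesant": 2, "Bronx Science": 1}))
# → {'A': 'Stuyvesant', 'B': 'Stuyvesant', 'C': 'Bronx Science'}
```

Because seats are claimed strictly in score order, the lowest score placed at a given school is that school's effective cutoff for the year, which is why Stuyvesant's cutoff is consistently the highest among the schools using the test.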
### Demographics and SHSAT controversy
For most of the 20th century, the student body at Stuyvesant was heavily Jewish. A significant influx of Asian students began in the 1970s; by 2019, 74% of the students in attendance were Asian-American (53% from families with low incomes). In the 2013 academic year, the student body was 72.43% Asian, 21.44% Caucasian, 1.03% African American, 2.34% Hispanic, and 3% unknown/other. The paucity of black and Hispanic students at Stuyvesant has long been a concern for city administrators. In 1971, Mayor John Lindsay argued that the test was culturally biased against black and Hispanic students and sought to implement an affirmative action program. However, protests by parents forced the plan to be scrapped and led to the passage of the Hecht-Calandra Act, which preserved admissions by examination only. A small number of students judged to be economically disadvantaged and who came within a few points of the cut-off score were given an extra chance to pass the test.
Community activist group ACORN published two reports in 1996, titled Secret Apartheid and Secret Apartheid II. In these reports, ACORN called the SHSAT "permanently suspect" and described it as a "product of an institutional racism," saying that black and Hispanic students did not have access to proper test preparation materials. Along with Schools Chancellor Rudy Crew, they began an initiative for more diversity in the city's gifted and specialized schools, in particular demanding the SHSAT be suspended altogether until the Board of Education was able to show all children have had access to appropriate materials to prepare themselves. Students published several editorials in response to ACORN's claims, stating the admissions system at the school was based on student merit, not race.
A number of students take preparatory courses offered by private companies such as The Princeton Review and Kaplan in order to perform better on the SHSAT, often leaving those unable to afford such classes at a disadvantage. To bridge this gap and boost minority admissions, the Board of Education started the Math Science Institute in 1995, a free program to prepare students for the admissions test. Students attend preparatory classes through the program, now known as the Specialized High School Institute (also known as DREAM), at several schools around the city from the summer after sixth grade until the eighth-grade exam. Despite the implementation of these free programs, black and Hispanic enrollment continued to decline, and even after further expansion of the free test prep programs, the percentage of black and Hispanic students admitted did not increase. As of 2019, fewer than 1% of freshman openings were given to black students, while over 66% were given to Asian-American students, most of whom had similar socioeconomic backgrounds to those of the black students.
The New York City Department of Education reported in 2003 that public per student spending at Stuyvesant is slightly lower than the city average. Stuyvesant also receives private contributions.
## Academics
The college-preparatory curriculum at Stuyvesant includes four years of English, history, and laboratory-based science (with requisite biology, chemistry, and physics classes) and four years of mathematics. Students also take three years of a single foreign language; a semester each of introductory art, music, health, and technical drawing; one semester of computer science; and two lab-based technology courses. Several exemptions from technology education exist for seniors. Stuyvesant offers students a broad selection of elective courses. Some of the more notable offerings include astronomy, New York City history, Women's Voices, and the mathematics of financial markets. Most students complete the New York City Regents courses by junior year and take calculus during their senior year. However, the school offers math courses through differential equations for the more advanced students. A year of technical drawing was formerly required; students learned how to draft by hand in its first semester and how to draft using a computer in the second. Now, students take a one-semester compacted version of the former drafting course, as well as a semester of introductory computer science. Beginning with the class of 2015, the one-semester computer science course was replaced with a two-semester course.
As a specialized high school, Stuyvesant offers a wide range of Advanced Placement (AP) courses in math, science, history, English, and foreign languages, giving students various opportunities to earn college credit. AP computer science students can take three additional computer programming courses after completing the AP course: systems-level programming, computer graphics, and software development. In addition, there is a one-year computer networking class which can earn students Cisco Certified Network Associate (CCNA) certification.
Stuyvesant's foreign language offerings include Mandarin Chinese, French, German, Japanese, Latin, and Spanish. In 2005, the school also started offering courses in Arabic after the school's Muslim Student Association had raised funds to support the course. Stuyvesant's biology and geo-science department offers courses in molecular biology, human physiology, medical ethics, medical and veterinary diagnosis, human disease, anthropology and sociobiology, vertebrate zoology, laboratory techniques, medical human genetics, botany, the molecular basis of cancer, nutrition science, and psychology. The chemistry and physics departments include classes in organic chemistry, physical chemistry, astronomy, engineering mechanics, and electronics.
Although Stuyvesant is primarily known for its math and science focus, the school also has a comprehensive humanities program. The English Department offers students courses in British and classical literature, Shakespearean literature, science fiction, philosophy, existentialism, debate, acting, journalism, creative writing, and poetry. The Social Studies core requires two years of global history (or one year of global followed by one year of European history) and one year of American history, as well as a semester each of economics and government. Humanities electives include American foreign policy; civil and criminal law; prejudice and persecution; and race, ethnicity, and gender issues.
In 2004, Stuyvesant entered into an agreement with City College of New York in which the college funds advanced after-school courses that are taken for college credit but taught by Stuyvesant teachers. Some of these courses include linear algebra, advanced Euclidean geometry, and women's history.
Prior to the 2005 revision of the SAT, Stuyvesant graduates had an average score of 1408 out of 1600 (685 in the verbal section of the test, 723 in the math section). In 2010, the average score on the SAT for Stuyvesant students was 2087 out of 2400, while the class of 2013 had an average SAT score of 2096. As of 2023, Stuyvesant students' average SAT score was 1510 of 1600 points. Stuyvesant also administers more Advanced Placement exams than any other high school in the world, as well as the highest number of students who reach the AP courses' "mastery level". As of 2018, there are 31 AP classes offered, with a little more than half of all students taking at least one AP class, and about 98% of students pass their AP tests.
## Extracurricular activities
### Sports
Stuyvesant fields 32 varsity teams, including the swimming, golf, bowling, volleyball, soccer, basketball, gymnastics, wrestling, fencing, baseball/softball, American handball, tennis, track/cross country, cricket, football, and lacrosse teams. In addition, Stuyvesant has ultimate teams for the boys' varsity, boys' junior varsity, and girls' varsity divisions.
In September 2007, the Stuyvesant football team was given a home field at Pier 40, located north of the school at Houston Street and West Street. In 2008, the baseball team was granted use of the pier after construction and delivery of an artificial turf pitching mound that met Public Schools Athletic League specifications. Stuyvesant has its own swimming pool, but lacks its own running track and tennis courts. Unlike most American high schools, where all teams share a single nickname, many of Stuyvesant's sports teams are known by individual names; only the football, cheerleading, girls' table tennis, baseball, girls' handball, and boys' lacrosse teams retain the traditional Pegleg moniker.
### Student government
The student body of Stuyvesant is represented by the Stuyvesant Student Union, a student government. It comprises a group of students, elected each year for each grade, who serve the student body in two important areas: improving student life by promoting and managing extracurricular activities (clubs and publications) and organizing out-of-school activities such as city excursions and fundraisers; and providing a voice for the student body in all discussions of school policy with the administration.
### Clubs and publications
Stuyvesant allows students to join clubs, publications, and teams under a system similar to that of many colleges. As of 2015, the school had 150 student clubs.
#### The Spectator
The Spectator is Stuyvesant's official in-school newspaper, published biweekly and editorially independent from the school. Over 250 students help produce the paper. Recruitment drives are held at the beginning of the fall and spring terms, but interested students may join at any time.
Founded in 1915, The Spectator is one of Stuyvesant's oldest publications. It has a long-standing connection with its older namesake, Columbia University's Columbia Daily Spectator, and has been recognized by the Columbia University Graduate School of Journalism's Columbia Scholastic Press Association.
#### The Voice
The Voice was founded in the 1973–1974 academic year as an independent publication only loosely sanctioned by school officials. It had the appearance of a magazine and gained a large readership. The Voice attracted a considerable amount of controversy and a First Amendment lawsuit, after which the administration forced it to go off-campus and to turn commercial in 1975–1976.
At the beginning of the 1975–1976 academic year, The Voice decided to publish the results of a confidential random survey measuring the "sexual attitudes, preferences, knowledge and experience" of the students. The administration refused to permit The Voice to distribute the questionnaire, and the Board of Education refused to intervene, believing that "irreparable psychological damage" would be occasioned on some of the students receiving it. The editor-in-chief of The Voice, Jeff Trachtman, brought a First Amendment challenge to this decision in the United States District Court for the Southern District of New York in front of Judge Constance Baker Motley.
Motley, relying on the relatively recent Supreme Court precedent Tinker v. Des Moines Independent Community School District (holding that "undifferentiated fear or apprehension of disturbance is not enough to overcome the right to freedom of expression"), ordered the Board of Education to come up with an arrangement permitting the distribution of the survey to the juniors and seniors. However, Motley's ruling was overturned on appeal to the United States Court of Appeals for the Second Circuit. Judge J. Edward Lumbard, joined by Judge Murray Gurfein and over an impassioned dissent by Judge Walter R. Mansfield, held that the distribution of the questionnaires was properly disallowed by the administration since there was the basis for the belief that it might "result in significant emotional harm to a number of students throughout the Stuyvesant population." The Supreme Court denied certiorari review.
### SING!
The annual theater competition known as SING! pits seniors, juniors, and "soph-frosh" (freshmen and sophomores working together) against each other in a contest to put on the best performance. SING! started in 1947 at Midwood High School in Brooklyn and has since expanded to many New York City high schools. SING! at Stuyvesant started as a small event in 1973 and has grown into a school-wide event; in 2005, nearly 1,000 students participated. The entire production is written, directed, produced, and funded by students. Their involvement ranges from being members of the production's casts, choruses, or costume and tech crews to Step, Hip-Hop, Swing, Modern, Belly, Flow, Tap or Latin dance groups. Preparations begin in late January or February and culminate in final performances on three nights in March or April. Each night's performances are scored, and the winner is determined by the overall total. In 2023, soph-frosh won SING! for the first time in the tradition's fifty-one-year history.
## Reputation
Stuyvesant has produced many notable alumni, including four Nobel laureates. In 2017, Stuyvesant was ranked 71st in national rankings by U.S. News & World Report, and 21st among STEM high schools. According to a September 2002 high school ranking by Worth magazine, 3.67% of Stuyvesant students went on to attend Harvard, Yale, and Princeton universities, ranking it as the 9th top public high school in the United States and 120th among all schools, public or private. In December 2007, The Wall Street Journal studied the freshman classes at eight selective colleges and reported that Stuyvesant sent 67 students to these schools, comprising 9.9% of its 674 seniors. In recent years, The Spectator has reported on college admissions for the graduating classes; 133 members of the Class of 2021 were offered admission to Ivy League institutions.
Stuyvesant, along with other similar schools, has regularly been excluded from Newsweek's annual list of the Top 100 Public High Schools. The May 8, 2008, issue stated the reason as being "because so many of their students score well above average on the SAT and ACT." U.S. News & World Report, however, included Stuyvesant on its list of "Best High Schools" published in December 2009, ranking it 31st. In its 2010 progress report, the New York City Department of Education assigned it an "A", the highest possible grade.
Stuyvesant has contributed to the education of several Nobel laureates, a Fields Medal winner, and other accomplished alumni. In recent years, it has had the second-highest number of National Merit Scholarship semi-finalists, behind Thomas Jefferson High School for Science and Technology in Alexandria, Virginia. From 2002 to 2010, Stuyvesant produced 103 semi-finalists and 13 finalists in the Intel Science Talent Search, the second most of any secondary school in the United States behind the Bronx High School of Science. In 2014, Stuyvesant had 11 semifinalists in the Intel search, the highest number of any school in the U.S.
In the 2010s, exam schools, including Stuyvesant, have been the subject of studies questioning their academic effectiveness. A study by Massachusetts Institute of Technology and Duke University economists compared high school outcomes for Stuyvesant students who scored just above the SHSAT cutoff required for admission with those of applicants just below it, using the latter as a natural control group of peers who attended other schools. The study found no discernible average difference in the two groups' later performance on New York state exams.
## Notable people
Notable scientists among Stuyvesant alumni include mathematicians Bertram Kostant (1945) and Paul Cohen (1950), string theorist Brian Greene (1980), physicist Lisa Randall (1980), and genomic researcher Eric Lander (1974). Other prominent alumni include civil rights leader Bob Moses, MAD Magazine editor Nick Meglin (1953), entertainers such as songwriter and Steely Dan founder Walter Becker, Thelonious Monk (1935), and actors Lucy Liu (1986), Tim Robbins (1976), and James Cagney (1918), comedian Paul Reiser (1973), playwright Arthur M. Jolly (1987), sports anchor Mike Greenberg (1985), and basketball player and bookmaker Jack Molinas (1949). In business, government and politics, former United States Attorney General Eric Holder (1969) is a Stuyvesant alumnus, as are Senior Advisor to President Obama David Axelrod (1972), former adviser to President Clinton Dick Morris (1964), and founder of 5W Public Relations Ronn Torossian (1992).
Pulitzer Prize-winning author Frank McCourt taught English at Stuyvesant before the publication of his memoirs Angela's Ashes, 'Tis, and Teacher Man. Teacher Man's third section, titled Coming Alive in Room 205, concerns McCourt's time at Stuyvesant and mentions a number of students and faculty. Former New York City Council member Eva Moskowitz (1982) graduated from the school, as did the creator of the BitTorrent protocol, Bram Cohen (1993). A notable Olympic medalist from the school was foil fencer Albert Axelrod. Economist Thomas Sowell was also a student at Stuyvesant, but dropped out at age 17 because of financial difficulties and problems at home. Russian journalist Vladimir Pozner Jr., known in the West for his appearances on Nightline, U.S.–Soviet Space Bridge and Phil Donahue, also attended Stuyvesant.
Four Nobel laureates are Stuyvesant alumni:
- Joshua Lederberg (1941) – Nobel Prize in Physiology or Medicine, 1958
- Robert Fogel (1944) – Nobel Memorial Prize in Economic Sciences, 1993
- Roald Hoffmann (1954) – Nobel Prize in Chemistry, 1981
- Richard Axel (1963) – Nobel Prize in Physiology or Medicine, 2004
## In the media
In the film The Glass Wall, the character Freddie Zakoyla attended "Peter Stuyvesant High School."
## See also
- Education in New York City
- Health effects arising from the September 11 attacks
- List of New York City Designated Landmarks in Manhattan from 14th to 59th Streets
|
28,882,454 |
Adelaide leak
| 1,059,434,692 |
Cricket scandal
|
[
"1933 in Australian cricket",
"1933 in English cricket",
"Cricket controversies",
"The Ashes"
] |
The Adelaide leak was the revelation to the press of a dressing-room incident during the third Test, a cricket match played during the 1932–33 Ashes series between Australia and England, more commonly known as the Bodyline series. During the course of play on 14 January 1933, the Australian Test captain Bill Woodfull was struck over the heart by a ball delivered by Harold Larwood. Although not badly hurt, Woodfull was shaken and dismissed shortly afterwards. On his return to the Australian dressing room, Woodfull was visited by the managers of the Marylebone Cricket Club (MCC) team, Pelham Warner and Richard Palairet. Warner enquired after Woodfull's health, but the latter dismissed his concerns in brusque fashion. He said he did not want to speak to the Englishman owing to the Bodyline tactics England were using, leaving Warner embarrassed and shaken. The matter became public knowledge when someone present leaked the exchange to the press and it was widely reported on 16 January. Such leaks to the press were practically unknown at the time, and the players were horrified that the confrontation became public knowledge.
In the immediate aftermath, many people assumed Jack Fingleton, the only full-time journalist on either team, was responsible. This belief may have affected the course of his subsequent career. Fingleton later wrote that Donald Bradman, Australia's star batsman and the primary target of Bodyline, was the person who disclosed the story. Bradman always denied this, and continued to blame Fingleton; animosity between the pair continued for the rest of their lives. Woodfull's earlier public silence on the tactics had been interpreted as approval; the leak was significant in persuading the Australian public that Bodyline was unacceptable.
## Background
In 1932–33 the English team, led by Douglas Jardine and jointly managed by Pelham Warner and Richard Palairet, toured Australia and won the Ashes in an acrimonious contest that became known as the Bodyline series. The English team used contentious bowling tactics where the English pace bowlers Harold Larwood, Bill Voce and Bill Bowes bowled the ball roughly on the line of leg stump. The deliveries were often short-pitched, designed to rise at the batsman's body, with four or five fielders close by on the leg side waiting to catch deflections off the bat. Intended to be intimidating, the tactics proved difficult for batsmen to counter and were physically threatening. The primary target of Bodyline was Donald Bradman, who had overwhelmed the English bowling in the 1930 Ashes series. Leading English cricketers and administrators feared that Bradman would be unstoppable on good Australian batting wickets in 1932–33, and looked for possible weaknesses in his batting technique.
Following Jardine's appointment as England captain in July 1932, he developed a plan based on his belief that Bradman was weak against bowling directed at leg stump and that, if this line of attack could be maintained, it would restrict Bradman's scoring to one side of the field, giving the bowlers greater control over his run-making. In a meeting, he outlined his plan to Larwood and Voce, who tried out the tactic in the remainder of the season with mixed success. Both Nottinghamshire fast bowlers were selected to tour, as was Yorkshire bowler Bill Bowes who had tried similar tactics at the end of the season. In one match, he bowled short at Jack Hobbs; in his capacity as cricket correspondent of The Morning Post, Warner was highly critical of the Yorkshire bowlers and Bowes in particular. These remarks were seized upon by Australian opponents of Bodyline in the coming months. A fourth fast bowler, Middlesex amateur Gubby Allen, was later added to the tour. The selection of this many pace bowlers was unusual at the time, drawing comment from Australian writers, including Bradman.
In Australia, while Jardine's unfriendly approach and superior manner caused some friction with the press and spectators, the early tour matches were uncontroversial and Larwood and Voce had a light workload in preparation for the Test series. The first signs of trouble came in the match against a representative "Australian XI" at near full strength, in which the bowlers first used Bodyline tactics. Under the captaincy of Bob Wyatt (Jardine having rested from the match), the England attack bowled short and around leg stump, with fielders positioned close by on the leg side to catch any deflections. Wyatt later claimed this was not pre-planned and he simply informed Jardine what had happened. The Bodyline tactics continued in the next match and several players, including Jack Fingleton, were hit. The Australian press were shocked and criticised the hostility of Larwood in particular. Some former Australian players joined the criticism, saying the tactics were ethically wrong. However, at this stage, not everyone was opposed, and the Australian Board of Control believed the English team had bowled fairly. On the other hand, Jardine increasingly came into disagreement with tour manager Warner over Bodyline as the tour progressed. Warner hated Bodyline but would not speak out against it. He was accused of hypocrisy for not taking a stand on either side, particularly after expressing sentiments at the start of the tour that cricket "has become a synonym for all that is true and honest. To say 'that is not cricket' implies something underhand, something not in keeping with the best ideals ... all who love it as players, as officials or spectators must be careful lest anything they do should do it harm."
Jardine's tactics were successful in one respect: in six innings against the tourists ahead of the Tests, Bradman scored only 103 runs, causing concern among the Australian public who expected much more from him. At the time, Bradman was in dispute with the Board of Control, who would not allow players to write in newspapers unless journalism was their full-time profession; Bradman, although not a journalist, had a contract to write for the Sydney Sun. A particular irritation for Bradman was that Jack Fingleton, a full-time journalist, was allowed to write for the Telegraph Pictorial, although he required permission from the Board to write about cricket. Bradman threatened to withdraw from the team unless the Board allowed him to write. Fingleton and Bradman were openly hostile towards each other. From their first meeting while playing together for New South Wales, they disliked each other. Fingleton, conscious that Bradman's self-possession and solitary nature made him unpopular with some teammates, kept his distance after a dressing room argument, while Bradman believed the more popular Fingleton had tried to turn the team against him. Later hostility arose from Bradman's public preference for Bill Brown as a batsman, which Fingleton believed cost him a place on the 1934 tour of England. Fingleton's writings on the Bodyline series further soured the relationship. Bradman believed some of the differences stemmed from religion; Fingleton was a Roman Catholic, Bradman an Anglican.
Bradman missed the first Test, worn out by constant cricket and the ongoing argument with the Board of Control. The English bowlers used Bodyline intermittently in the first match, to the crowd's vocal displeasure. Behind the scenes, administrators began to express concerns to each other. Yet the English tactics still did not earn universal disapproval; former Australian captain Monty Noble praised the English bowling. For the second Test, Bradman returned to the team after his newspaper employers released him from his contract. England continued to use Bodyline and Bradman was dismissed by his first ball in the first innings. In the second innings, against the full Bodyline attack, he scored an unbeaten century which helped Australia to win the match and level the series at one match each. Critics began to believe Bodyline was not quite the threat that had been perceived and Bradman's reputation, which had suffered slightly with his earlier failures, was restored. However, the pitch was slightly slower than others in the series, and Larwood was suffering from problems with his boots which reduced his effectiveness. Meanwhile, Woodfull was being encouraged to retaliate to the short-pitched English attack, not least by members of his own side such as Vic Richardson, but refused to consider doing so.
## Warner–Woodfull incident
### Woodfull's injury
During the mid-afternoon of Saturday 14 January 1933, the second day of the Third Test, Woodfull and Fingleton opened the batting for Australia in the face of an England total of 341 before a record attendance of 50,962 people. Fingleton was caught by the wicketkeeper without scoring. The third over of the innings was bowled by Larwood with fielders still in orthodox positions. The fifth ball narrowly missed Woodfull's head and the final ball, delivered short on the line of middle stump, struck Woodfull over the heart. The batsman dropped his bat and staggered away holding his chest, bent over in pain. The England players surrounded Woodfull to offer sympathy but the crowd began to protest noisily. Jardine called to Larwood: "Well bowled, Harold!" Although the comment was aimed at unnerving Bradman, who was also batting at the time, Woodfull was appalled. Play resumed after a brief delay, once it was certain the Australian captain was fit to carry on, and since Larwood's over had ended, Woodfull did not have to face the bowling of Allen in the next over. However, when Larwood was ready to bowl at Woodfull again, play was halted once more when the fielders were moved into Bodyline positions, causing the crowd to protest and call abuse at the England team. Jardine subsequently claimed that Larwood had requested the field change, while Larwood said that Jardine had ordered it. Many commentators condemned the alteration of the field as unsporting, and the angry spectators became extremely volatile. Jardine, although writing that Woodfull could have retired hurt if he was unfit, later expressed his regret at making the field change at that moment. It is likely Jardine wished to press home his team's advantage in the match, and the Bodyline field was usually employed at this stage of an innings.
Shortly afterwards, a delivery from Larwood knocked Woodfull's bat from his hands and the Australian captain seemed unsettled. Two quick wickets fell before Ponsford joined Woodfull in the middle, but having been struck by short balls several more times, Woodfull was bowled by Allen for 22, having batted for an hour and a half. When a doctor was publicly requested to attend an injury to Voce, many in the crowd believed it was Woodfull who required assistance, leading to a renewal of protest. In later years, Woodfull's wife believed that his injury at Adelaide was partly responsible for his death aged 67 in 1965.
### Warner's visit to the dressing room
Warner learned from twelfth man Leo O'Brien that Woodfull was badly injured. Later in the afternoon, while Ponsford and Richardson were still batting, Warner and Palairet visited the Australian dressing room with the intention of enquiring about Woodfull's health. Accounts vary about what followed. According to the original newspaper reports and Fingleton's later description, Woodfull was lying on the masseur's table, awaiting treatment from a doctor, although this may have been an exaggeration for dramatic effect. Leo O'Brien described Woodfull as wearing a towel around his waist, having showered. Warner expressed sympathy to Woodfull but was surprised by the Australian's response. According to Warner, Woodfull replied, "I don't want to see you, Mr Warner. There are two teams out there. One is trying to play cricket and the other is not." Fingleton wrote that Woodfull had added, "This game is too good to be spoilt. It is time some people got out of it." Woodfull was usually dignified and quietly spoken, making his reaction surprising to Warner and others present. Warner recalled saying, "Apart from all that, we most sincerely hope you are not too badly hurt," to which Woodfull replied, "The bruise is coming out." Embarrassed and humiliated, Warner and Palairet turned and left. Fingleton noted that Woodfull spoke quietly and calmly, which increased the effectiveness of his words. He also pointed out that Warner prided himself on sportsmanship, so an accusation of "not playing cricket" would have stung the Englishman. Warner was so shaken that he was found in tears later that day in his hotel room.
According to O'Brien, only he, Woodfull, the masseur (who was deaf), Alan Kippax, and former Australian Test players Jack Ryder and Ernie Jones were present when the incident took place, but most of the Australian team were watching the match from a balcony adjoining the dressing room from where they would have been able to hear the confrontation. O'Brien claimed that he went outside and told the group what had happened; around twenty people were present.
Later that afternoon, Warner related the incident to Jardine, who replied that he "couldn't care less". The England captain then locked the dressing room doors and told the team what Woodfull had said and warned them not to speak to anyone concerning the matter. Warner later wrote to his wife that Woodfull had made "a complete fool of himself" and had been "fanning the flames".
## Leak
Sunday being a rest day, there was no play. On Monday, the exchange between Warner and Woodfull was reported in several newspapers along with the description of Woodfull's injury. Most headlines were variations on "Woodfull Protests", and the most extensive accounts were by Claude Corbett in The Sun and The Daily Telegraph. He wrote in the Telegraph that the "fires which have been smouldering in the ranks of the Australian Test cricketers regarding the English shock attack suddenly burst into flames yesterday." Another newspaper, The Advertiser of Adelaide, claimed several members of the Australian team had repeated the story.
The players and officials were horrified that a sensitive private exchange had been reported to the press. Leaks to the press were practically unknown in 1933. David Frith notes that discretion and respect were highly prized and such a leak was "regarded as a moral offence of the first order." Woodfull made it clear that he severely disapproved of the leak, and later wrote that he "always expected cricketers to do the right thing by their team-mates." As the only full-time journalist in the Australian team, suspicion immediately fell on Fingleton, although as soon as the story was published, he told Woodfull he was not responsible. Warner offered Larwood a reward of one pound if he could dismiss Fingleton in the second innings; Larwood obliged by bowling him for a duck.
Later, Warner issued a statement to the press that Woodfull had apologised for the incident and that "we are now the best of friends". Woodfull denied through Bill Jeanes, the Secretary of the Australian Board of Control, that he had expressed regret, but he had said there was no personal animosity between the two men.
### Suspects
Until he read Warner's Cricket Between Two Wars during the Second World War, Fingleton was unaware that Warner assumed he was responsible for the leak. When he found out, Fingleton wrote to Warner, who replied that although he believed Fingleton to be the source, he would publish a correction if presented with evidence to the contrary. Fingleton did not pursue the case. Australian cricketer Bill O'Reilly wrote that during the 1948 tour of England, he and Fingleton confronted Warner, who apologised as he no longer believed Fingleton to be the culprit. Fingleton thought the belief he was responsible cost him a place on the 1934 tour to England, although there were other possible factors in his exclusion. According to Fingleton, Woodfull later told him that the controversy had led to his missing selection. A letter which Woodfull wrote to Fingleton in 1943 stated "I can assure you that I did not connect your name with the passing on of that conversation."
In his 1978 biography of Victor Trumper, Fingleton accused Bradman of relating Woodfull's words to the press. Fingleton claimed that Claude Corbett revealed the information to him. In Fingleton's version of events, Bradman telephoned Corbett during the night to arrange a meeting. Bradman wrote for Corbett's paper, Sydney's Sun. Sitting in Corbett's car, Bradman told the journalist about the Warner–Woodfull incident. Corbett considered the story too important to keep to himself, so shared it with other journalists. Fingleton later added that "Bradman would have saved me a lot of backlash ... had he admitted that he had given the leak. Part of his job was writing for the Sydney Sun and he had every right to leak such a vital story."
Bradman denied this version of events. In 1983, two years after Fingleton's death, a book written by Michael Page, with Bradman's close co-operation, blamed Fingleton for the leak and dismissed Fingleton's story concerning Bradman and Corbett as "an absurd fabrication", arising from a grudge against Bradman. The book pointed out that Fingleton only made the accusation after Corbett's death. Fingleton's executor, Malcolm Gemmell, summarised the evidence which supported Fingleton's accusation in a magazine article: that Bradman wrote for the Sun, was the prime target of Bodyline, and had previously urged the Australian Board of Control to object to the tactic. Fingleton's brother supported the claim that Bradman was responsible, repeating in 1997 the alleged view of Corbett that Bradman provided the information. In 1995, Bradman was interviewed for television, and when asked about the source of the leak, responded sharply: "It wasn't me!" In the same year, a biography of Bradman, written with his close co-operation, by Roland Perry, said that Bradman had confronted Corbett to ask who leaked the story, to be told it was Fingleton.
O'Reilly believed that Bradman, with whom he did not get along, was responsible, wishing to expose the English bowling he believed was designed to cause him physical injury. He also said Bradman was an expert at diverting blame. Cricket writer Ray Robinson wrote that many of the Australian team did not blame Fingleton, and they knew who met Corbett. In the early 1980s another journalist, Michael Davie, interviewed Ponsford who said that Woodfull never forgave Bradman for "a couple of things". Davie suggests that one of these may have been leaking the Adelaide story.
Gilbert Mant, a journalist who covered the tour, investigated the leak in the mid-1990s. He died in 1997, but had arranged for a summary of his findings to be sent to David Frith with a request not to publish the information before Bradman died. Mant believed the leak was not a serious crime and pointed out that any of the players except Ponsford and Richardson, who were batting at the time Warner entered the dressing room, could have leaked the story. In correspondence with Mant in 1992, Bradman continued to blame Fingleton, saying he would never forgive the "dastardly lie he concocted about me", and hoped Mant could clear his name. As part of his investigations, Mant contacted Corbett's family. Corbett died in 1944, and his son Mac said he never mentioned the leak. However, his daughter Helen related that Corbett had spoken to his wife about the affair. She had told Helen that Corbett had received the information from Bradman. Mant believed that while Corbett may have played a joke on Fingleton in naming the culprit, he would not have done so with his wife.
### Aftermath
Many commentators and cricketers deplored the use of Bodyline bowling. Some felt frustration that Woodfull had not publicly condemned the tactics, believing that his silence was interpreted as approval. Once his opinions were revealed by the leak, opponents of Bodyline felt publicly legitimised and expressed their opinions more freely. It also revealed deep and unaccustomed divisions between the teams which had been kept from view. The leak and subsequent events in the same match brought varied opinions on Bodyline from journalists and former players into the newspapers, both for and against the tactics.
During the play on Monday, a short ball from Larwood fractured Bert Oldfield's skull, although Bodyline tactics were not being used at the time. The Australian Board of Control contacted the MCC managers Warner and Palairet asking them to arrange for the team to cease the use of Bodyline, but they replied the captain was solely in charge of the playing side of the tour. On the Wednesday of the game, the Australian Board sent a cable to the MCC which stated "Bodyline bowling has assumed such proportions as to menace the best interests of the game, making protection of the body by the batsman the main consideration. This is causing intensely bitter feeling between the players, as well as injury. In our opinion it is unsportsmanlike. Unless stopped at once it is likely to upset the friendly relations existing between Australia and England." After England's victory in the match, Jardine went to the Australian dressing room but had the door closed in his face. Speaking to his team, Jardine offered to end the use of the tactics if the players opposed them, but they unanimously voted to continue. The report in Wisden Cricketers' Almanack stated it was probably the most unpleasant match ever played.
Jardine threatened to withdraw his team from the Fourth and Fifth Tests unless the Australian Board retracted the accusation of unsporting behaviour. The MCC responded angrily to the accusations of unsporting conduct, played down the Australian claims about the danger of Bodyline and threatened to call off the tour. The series had become a major diplomatic incident by this stage, and many people saw Bodyline as damaging to an international relationship that needed to remain strong. The public in both England and Australia reacted with outrage towards the other nation. Alexander Hore-Ruthven, the Governor of South Australia, who was in England at the time, expressed his concern to J. H. Thomas, the British Secretary of State for Dominion Affairs that this would cause a significant impact on trade between the nations. The standoff was settled only when Australian Prime Minister Joseph Lyons met members of the Australian Board and outlined to them the severe economic hardships that could be caused in Australia if the British public boycotted Australian trade. Given this understanding, the Board withdrew the allegation of unsportsmanlike behaviour two days before the fourth Test, thus saving the tour. However, correspondence continued for almost a year. Fingleton was dropped after scoring a pair in the third Test, and England won the final two matches to win the series 4–1.
|
7,682,623 |
The Sinking of the Lusitania
| 1,148,770,764 |
1918 silent animated short documentary
|
[
"1910s American animated films",
"1910s animated short films",
"1910s disaster films",
"1918 animated films",
"1918 documentary films",
"1918 films",
"1918 short films",
"American World War I propaganda films",
"American animated documentary films",
"American black-and-white films",
"American silent short films",
"Articles containing video clips",
"Black-and-white documentary films",
"Documentary films about maritime disasters",
"Films directed by Winsor McCay",
"Films set in 1915",
"Films set in the Atlantic Ocean",
"RMS Lusitania",
"Silent adventure films",
"Silent war films",
"Surviving American silent films",
"United States National Film Registry films"
] |
The Sinking of the Lusitania (1918) is an American silent animated short film by cartoonist Winsor McCay. It is a work of propaganda re-creating the never-photographed 1915 sinking of the British liner RMS Lusitania. At twelve minutes, it has been called the longest work of animation at the time of its release. The film is the earliest surviving animated documentary and the earliest serious, dramatic work of animation. The National Film Registry selected it for preservation in 2017.
In 1915, a German submarine torpedoed and sank the RMS Lusitania; 128 Americans were among the 1,198 dead. The event outraged McCay, but the newspapers of his employer William Randolph Hearst downplayed the event, as Hearst was opposed to the U.S. joining World War I. McCay was required to illustrate anti-war and anti-British editorial cartoons for Hearst's papers. In 1916, McCay rebelled against his employer's stance and began work on the patriotic Sinking of the Lusitania on his own time with his own money.
The film followed McCay's earlier successes in animation: Little Nemo (1911), How a Mosquito Operates (1912), and Gertie the Dinosaur (1914). McCay drew these earlier films on rice paper, onto which backgrounds had to be laboriously traced; The Sinking of the Lusitania was the first film McCay made using the new, more efficient cel technology. McCay and his assistants spent twenty-two months making the film. His subsequent animation output suffered setbacks, as the film was not as commercially successful as his earlier efforts, and Hearst put increased pressure on McCay to devote his time to editorial drawings.
## Synopsis
The film opens with a live-action prologue in which McCay busies himself studying a picture of the Lusitania as a model for his film-in-progress. Intertitles boast of McCay as "the originator and inventor of Animated Cartoons", and of the 25,000 drawings needed to complete the film. McCay is shown working with a group of anonymous assistants on "the first record of the sinking of the Lusitania".
The liner passes the Statue of Liberty and leaves New York Harbor. After some time, a German submarine cuts through the waters and fires a torpedo at the Lusitania, which billows smoke that builds until it envelops the screen. Passengers scramble to lower lifeboats, some of which capsize in the confusion. The liner tilts from one side to the other and passengers are tossed into the ocean.
A second blast rocks the Lusitania, which sinks slowly into the deep as more passengers fall off its edges, and the ship submerges amid scenes of drowning bodies. The liner vanishes from sight, and the film closes with a mother struggling to keep her baby above the waves. An intertitle declares: "The man who fired the shot was decorated for it by the Kaiser! And yet they tell us not to hate the Hun".
## Background
Winsor McCay (c. 1869–1934) produced prodigiously detailed and accurate drawings from early in life. He earned a living as a young man drawing portraits and posters in dime museums, and attracted large crowds with his ability to draw quickly in public. He began working as a newspaper illustrator full-time in 1898, and in 1903 began drawing comic strips. His greatest comic strip success was the children's fantasy Little Nemo in Slumberland, which he began in 1905. In 1906, McCay began performing on the vaudeville circuit, doing chalk talks—performances during which he drew in front of a live audience.
Inspired by the flip books his son brought home, McCay said he "came to see the possibility of making moving pictures" of his cartoons. His first animated film, Little Nemo (1911), was composed of four thousand drawings on rice paper. His next film, How a Mosquito Operates (1912), naturalistically shows a giant mosquito drawing blood from a sleeping man until it bursts. McCay followed this with a film that became an interactive part of his vaudeville shows: in Gertie the Dinosaur (1914), McCay commanded his animated dinosaur with a whip on stage.
The British liner RMS Lusitania briefly held the record for largest passenger ship upon its completion in 1906. McCay displayed a fondness for it, and featured it in the episode for September 28, 1907, of his comic strip Dream of the Rarebit Fiend, and again in the episode for November 10, 1908, of A Pilgrim's Progress by Mister Bunion, where Bunion declares it "the monster boat that has smashed the record".
The Germans employed submarines in the North Atlantic during World War I, and in April 1915 the German government issued a warning that it would target British civilian ships. The Lusitania was torpedoed on May 7, 1915, during a voyage from New York; 128 Americans were among the 1,198 who lost their lives. Newspapers owned by McCay's employer William Randolph Hearst downplayed the tragedy, as Hearst was opposed to the U.S. entering the war. His own papers' readers were increasingly pro-war in the aftermath of the Lusitania. McCay was as well, but was required to illustrate anti-war and anti-British editorials by editor Arthur Brisbane. In 1916, McCay rebelled against his employer's stance and began to make the pro-war Sinking of the Lusitania in his own time.
The sinking itself was never photographed. McCay said that he gathered background details on the Lusitania from Hearst's Berlin correspondent August F. Beach, who was in London at the time of the disaster and was the first reporter at the scene. The film was the first attempt at a serious, dramatic work of animation.
## Production history
The Sinking of the Lusitania took twenty-two months to complete. McCay had assistance from his neighbor, artist John Fitzsimmons, and from Cincinnati cartoonist William Apthorp "Ap" Adams, who took care of layering the cels in proper sequence for shooting. Fitzsimmons was responsible for a sequence of waves, sixteen frames to be cycled over McCay's drawings. McCay provided illustrations during the day for the newspapers of William Randolph Hearst, and spent his off hours at home drawing the cels for the film, which he took to Vitagraph Studios to be photographed.
McCay's working methods were laborious. On Gertie the Dinosaur an assistant painstakingly traced and retraced the backgrounds thousands of times. Rival animators developed a number of methods to reduce the workload and speed production to meet the increasing demand for animated films. Within a few years of Nemo's release, it became near-universal practice in animation studios to use American Earl Hurd's cel technology, combined with Canadian Raoul Barré's registration pegs, used to keep cels aligned when photographed. Hurd had patented the cel method in 1914; it saved work by allowing dynamic drawings to be drawn on one or more layers, which could be laid over a static background layer, relieving animators of the tedium of retracing static images onto drawing after drawing. McCay adopted the cel method beginning with The Sinking of the Lusitania.
As with all his films, McCay financed Lusitania himself. The cels were an added expense, but greatly reduced the amount of drawing necessary in contrast to McCay's earlier methods. The cels used were thicker than those that later became industry standard, and had a "tooth", or rough surface, that could hold pencil, wash, and crayon, as well as ink lines. The amount of rendering caused the cels to buckle, which made it difficult to keep them aligned for photographing; Fitzsimmons addressed this problem using a modified loose-leaf binder.
McCay said it took him about eight weeks to produce eight seconds' worth of film. The claimed 25,000 drawings filled 900 feet of film. Lusitania was registered for copyright on July 19, 1918, and was released by Jewel Productions, which was reported to have acquired it for the highest price paid for a one-reel film up to that time. It was included as part of a Universal Studios Weekly newsreel and featured on the cover of an issue of Universal's in-house publication The Moving Picture Weekly. Its première in England followed in May 1919. Advertisements called it "the world's only record of the crime that shocked humanity".
## Style
The animation combines editorial cartooning techniques with live-action-like sequences, and is considered McCay's most realistic effort; the intertitles emphasized that the film was a "historical record" of the event. McCay animated the action in what animation historian Donald Crafton describes as a "realistic graphic style". The film has a dark mood and strong propagandist feel. It depicts the terrifying fates of the passengers, such as the drowning of children and human chains of passengers jumping to their deaths. The artwork is highly detailed, the animation fluid and naturalistic. McCay used alternating shots to simulate the feel of a newsreel, reinforcing the film's sense of realism.
McCay made stylistic choices to add emotion to the "historical record", as in the anxiety-inducing shots of the submarines lurking beneath the surface, and abstract styling of the white sheets of sky and sea, vast voids which engorge themselves on the drowning bodies. Animation historian Paul Wells suggested the negative space in the frames filled viewers with anxiety through psychological projection or introjection, Freudian ideas that had begun circulating in the years before the film's release. Scholar Ulrich Merkl suggests that as a newspaperman, McCay was likely aware of Freud's widely reported work, though McCay never publicly acknowledged such an influence.
## Reception and legacy
The Sinking of the Lusitania was noted as a work of war propaganda, and is often called the longest work of animation of its time. The film is likely the earliest animated documentary. McCay's biographer, animator John Canemaker, called The Sinking of the Lusitania "a monumental work in the history of the animated film". Though it was admired by his animation contemporaries, Canemaker wrote that it "did not revolutionize the film cartoons of its time" as McCay's skills were beyond what animators of the time were able to follow. In the era that followed, animation studios made occasional non-fiction films, but most were comedic shorts lasting no more than seven minutes. Animation continued in its role of supporting feature films rather than as the main attraction, and rarely received reviews. Lusitania was not a commercial success; after a few years in theaters, it brought McCay about \$80,000. McCay made at least seven further films, only three of which are known to have seen commercial release.
After 1921, when Hearst learned McCay devoted more of his time to animation than to his newspaper illustrations, Hearst required McCay to give up animation. He had plans for several animation projects that never came to fruition, including a collaboration with Jungle Imps author George Randolph Chester, a musical film called The Barnyard Band, and a film about the Americans' role in World War I. Later in life, McCay at times publicly expressed his dissatisfaction with the animation industry as it had become—he had envisioned animation as an art, and lamented how it had become a trade. According to Canemaker, it was not until Disney's feature films in the 1930s that the animation industry caught up with McCay's level of technique.
Animation historian Paul Wells described Lusitania as "a seminal moment in the development of the animated film" for its combination of documentary style with propagandist elements, and considered it an example of animation as a form of Modernism. Steve Bottomore called the film "the most significant cinematic version of the disaster". A review in The Cinema praised the film, especially the scene in which the first torpedo explodes, which it called "more than reality". The National Film Registry selected the film for preservation in 2017.
|
3,040,640 |
Banksia ilicifolia
| 1,169,992,793 |
Tree in the family Proteaceae endemic to southwest Western Australia
|
[
"Banksia taxa by scientific name",
"Eudicots of Western Australia",
"Plants described in 1810",
"Taxa named by Robert Brown (botanist, born 1773)"
] |
Banksia ilicifolia, commonly known as holly-leaved banksia, is a tree in the family Proteaceae. Endemic to southwest Western Australia, it belongs to Banksia subg. Isostylis, a subgenus of three closely related Banksia species with inflorescences that are dome-shaped heads rather than characteristic Banksia flower spikes. It is generally a tree up to 10 metres (33 ft) tall with a columnar or irregular habit. Both the scientific and common names arise from the similarity of its foliage to that of the English holly Ilex aquifolium; the glossy green leaves generally have very prickly serrated margins, although some plants lack toothed leaves. The inflorescences are initially yellow but become red-tinged with maturity; this acts as a signal to alert birds that the flowers have opened and nectar is available.
Robert Brown described Banksia ilicifolia in 1810. Although Banksia ilicifolia is variable in growth form, with low coastal shrubby forms on the south coast near Albany, there are no recognised varieties as such. Although broadly distributed, the species is restricted to sandy soils. Unlike its close relatives which are killed by fire and repopulate from seed, Banksia ilicifolia regenerates after bushfire by regrowing from epicormic buds under its bark. It is rarely cultivated.
## Description
Banksia ilicifolia is a variable species. It usually grows as an erect tree up to 10 metres (33 ft) in height, but some populations along the south coast consist of small trees or even spreading shrubs. In the Margaret River region it is generally a small tree around 5 metres (16 ft) high. Leaves arising from many short branchlets form dense foliage close to the trunk and branches.
Banksia ilicifolia has a stout trunk up to 50 cm (19.5 in) in diameter, and rough, fibrous, grey bark which is up to 2 cm (1 in) thick. New growth takes place mainly in summer. Young branchlets are covered in hair which they lose after two or three years. Leaves grow on stems less than two years of age, and are arranged in a scattered pattern along the stems although crowded at the apices (branchlet tips). Resembling those of holly, its leaves are a dark shiny green colour, and variously obovate (egg-shaped), elliptic, truncate or undulate (wavy) in shape, and 3–10 cm (1–4 in) long. Generally serrated, the leaf edges have up to 14 prickly "teeth" separated by broad v- to u-shaped sinuses along each side, although some leaves have margins lacking teeth. The leaves sit atop petioles 0.3–1 cm (0.12–0.39 in) in length. The upper and undersurface of the leaves are initially covered in fine hairs but become smooth with maturity. Flowering takes place from late winter to early summer. The inflorescences are dome-shaped flower heads rather than the spikes of many other banksias, and arise from stems that are around a year old. No lateral branchlets grow outwards from the node where the flower head arises. The flower heads measure 7–9 cm (3–3.5 in) in diameter, and bear 60 to 100 individual flowers. The inflorescences pass through three colour phases, being initially yellow, then pink, then finally red, before falling away from the head. One to three follicles develop from fertilised flowers, and remain embedded in the woody base of the flower head. Each follicle bears one or two seeds.
The cotyledon leaves are a dull green with no visible nerves or markings. Transversely elliptic in shape, they measure 8 to 13 mm long by 12 to 18 mm wide and range from convex to concave. The pointed spreading auricles are 1.5 mm long. The cotyledon leaves sit atop the stout hypocotyl, which is green and smooth. The seedling leaves are crowded above the cotyledons. Resembling those of B. coccinea, they are lined with triangular lobes or "teeth" (with a u- or v-shaped sinus) and obovate to broadly lanceolate in shape. The first set of leaves measure 1 to 2.5 cm (0.39 to 0.98 in) in length and around 1 cm (0.4 in) in width, with three or four lobes in each margin. Both upper and lower seedling leaf surfaces are covered in spreading hairs, as is the seedling stem. Juvenile leaves are obovate to truncate or mucronate with triangular lobes and measure 4 to 10 cm (1.5 to 4 in) long by 1.5 to 3.5 cm (0.59 to 1.38 in) wide. These lobes are smaller toward the petiole and apex of the leaf.
In the Margaret River region, Banksia ilicifolia has been confused with Banksia sessilis var. cordata as both have prickly foliage and domed flowerheads. However, the former grows on deep sand while the latter grows on grey sand over limestone ridges. The embedded follicles of B. ilicifolia compared with the loose ones of B. sessilis are another distinguishing feature.
## Taxonomy
Specimens of B. ilicifolia were first collected by Scottish surgeon Archibald Menzies during the visit of the Vancouver Expedition to King George Sound in September and October 1791, but this collection did not result in the description of the species. It was next collected by Robert Brown in December 1801, during the visit of HMS Investigator to King George Sound. The species was also drawn by the expedition's botanical artist Ferdinand Bauer. Like nearly all of Bauer's field drawings of Proteaceae, the original field sketch of B. ilicifolia was destroyed in a Hofburg fire in 1945. A painting based on the drawing survives, however, at the Natural History Museum in London.
Brown eventually published the species in his 1810 work On the natural order of plants called Proteaceae. The specific name is derived from the Latin words ilex "holly" and folium "leaf", hence "holly-leaved". In 1810, Brown published Prodromus Florae Novae Hollandiae et Insulae Van Diemen in which he arranged the genus into two unranked groups. B. ilicifolia was placed alone in Isostylis because of its unusual dome-shaped inflorescences. All other species were placed in Banksia verae, the "true banksias", because they have the elongate flower spike then considered characteristic of Banksia.
The shrubby, coastal ecotype was published as a separate species Banksia aquifolium by John Lindley in his 1840 A Sketch of the Vegetation of the Swan River Colony, but this is now regarded as a taxonomic synonym of B. ilicifolia. A specimen collected by Ludwig Preiss on 13 April 1839 from coastal sands in Perth was described as Banksia ilicifolia var. integrifolia in Bentham's Flora Australiensis in 1870, but has not been recognised since. B. ilicifolia is variable in form, although the variations are not consistent enough to warrant recognising infraspecific taxa. Adult leaf margins can be entire or serrate (like holly), and both forms can be present on the one plant. Populations from the south coast have larger flowers and leaves, but some trees in the north of the range also have large flowers and leaves.
In 1891, Otto Kuntze, in his Revisio Generum Plantarum, rejected the generic name Banksia L.f., on the grounds that the name Banksia had previously been published in 1776 as Banksia J.R.Forst & G.Forst, referring to the genus now known as Pimelea. Kuntze proposed Sirmuellera as an alternative, referring to this species as Sirmuellera candolleana. This application of the principle of priority was largely ignored by Kuntze's contemporaries, and Banksia L.f. was formally conserved and Sirmuellera rejected in 1940.
### Infrageneric placement
The unranked group Isostylis, with its one species, was reclassified as a section in the 1856 arrangement of Carl Meissner, and 1870 arrangement of George Bentham. In his 1981 revision of the genus, Alex George reclassified the group as a subgenus—Banksia subg. Isostylis—defined by the dome-shaped flower heads, with B. ilicifolia joined by newly described species B. cuneata and later B. oligantha. Banksia ilicifolia is the only common member of that subgenus; the two other species are rare and threatened, and are protected under the Environment Protection and Biodiversity Conservation Act 1999. Relationships between B. ilicifolia and the other members of B. subg. Isostylis remain unclear. Although DNA studies found B. cuneata to be the most basal of the three species, a 2004 study of genetic divergence within the subgenus yielded both other possibilities: some analyses suggested B. ilicifolia as basal, while others suggested B. oligantha. Biogeographical factors suggest that B. ilicifolia would be the most basal of the three species: it occurs in the High Rainfall Zone where relictual species are most common, whereas the others are restricted to the Transitional Rainfall Zone, where more recently evolved species are most common.
A 1996 cladistic analysis of the genus by botanists Kevin Thiele and Pauline Ladiges assumed the status B. subg. Isostylis as a subgenus and earliest offshoot within Banksia, so George's placement of B. ilicifolia was retained in their arrangement. The placement of B. ilicifolia was unchanged in George's 1999 arrangement, and can be summarised as follows:
Banksia
: B. subg. Banksia (3 sections, 11 series, 73 species, 11 subspecies, 14 varieties)
: B. subg. Isostylis
: : B. ilicifolia
: : B. oligantha
: : B. cuneata
Since 1998, American botanist Austin Mast and co-authors have been publishing results of ongoing cladistic analyses of DNA sequence data for Banksia and Dryandra. Their analyses suggest a phylogeny that differs greatly from George's taxonomic arrangement. Banksia ilicifolia and B. oligantha form a clade; that is, they are each other's closest relatives, with Banksia cuneata resolving as the next closest relative, suggesting a monophyletic B. subg. Isostylis; but the clade appears fairly derived (that is, it evolved relatively recently), suggesting that B. subg. Isostylis may not merit subgeneric rank. Early in 2007, Mast and Thiele rearranged the genus Banksia by merging Dryandra into it, and published B. subg. Spathulatae for the taxa having spoon-shaped cotyledons; thus B. subg. Banksia was redefined as encompassing taxa lacking spoon-shaped cotyledons. They foreshadowed publishing a full arrangement once DNA sampling of Dryandra was complete; in the meantime, if Mast and Thiele's nomenclatural changes are taken as an interim arrangement, then B. ilicifolia is placed in B. subg. Banksia.
## Distribution and habitat
A relatively common species, the holly-leaved banksia is widely distributed within south west Western Australia. It occurs within 70 km (43 mi) of the coast, from Mount Lesueur to Augusta, and then east to the Cordinup River east of Albany. In the Margaret River region, it grows on yellow sand plains behind the Leeuwin-Naturaliste Ridge. Almost all occurrences are to the west (seaward) side of the Darling Scarp, although there are two outlying populations – one near Collie east of Bunbury and the other in the Tonbridge-Lake Muir area near Manjimup. Along the south coast, there is one inland population at Sheepwash Nature Reserve near Narrikup northwest of Albany. The annual rainfall over its distribution ranges from 600 to 1,100 mm (24 to 43 in).
Banksia ilicifolia grows exclusively on sandy soils; its range ends where heavy soils are evident. It especially favours low-lying areas. It generally grows in open woodland alongside such trees as jarrah (Eucalyptus marginata), candlestick banksia (Banksia attenuata), firewood banksia (B. menziesii) and Western Australian Christmas tree (Nuytsia floribunda). Along the south coast, it grows in heath, sometimes forming stands with bull banksia (B. grandis).
The holly-leaved banksia gives its name to the Banksia ilicifolia woodlands ('community type 22'), a possibly threatened ecological community found in the Bassendean and Spearwood systems in the central Swan Coastal Plain north of Rockingham. These are low-lying areas which are seasonally waterlogged. The habitat is open woodland with an open understorey, containing such trees as B. ilicifolia, B. attenuata and stout paperbark (Melaleuca preissiana).
Banksia ilicifolia is a component of the critically endangered Assemblage of Tumulus Springs (organic mound springs) of the Swan Coastal Plain community north of Perth, which is characterised by a permanently moist peaty soil. The dominant trees include M. preissiana, swamp banksia (B. littoralis) and flooded gum (Eucalyptus rudis), with understorey ferns such as bracken (Pteridium esculentum) and Cyclosorus interruptus, and the shrubs swamp peppermint (Taxandria linearifolia) and Astartea fascicularis.
## Ecology
Banksia ilicifolia has been recorded as a source of nectar for the honey possum (Tarsipes rostratus) in winter to early summer (May to December) in field studies in the Scott National Park, where it was replaced by Adenanthos meisneri in the summer. Several honeyeater species visit and pollinate Banksia ilicifolia. The western spinebill (Acanthorhynchus superciliosus) in particular prefers this species over other banksias.
A field study carried out at Jandakot Airport south of Perth and published in 1988 found that birds and insects overwhelmingly preferred visiting yellow-coloured flowerheads. The species recorded include several species of honeyeater, including the red wattlebird (Anthochaera carunculata), western wattlebird (A. lunulata), western spinebill, brown honeyeater (Lichmera indistincta), New Holland honeyeater (Phylidonyris novaehollandiae), white-cheeked honeyeater (P. nigra) and singing honeyeater (Lichenostomus virescens), as well as the twenty-eight parrot (Barnardius zonarius semitorquatus), two species of native bee of the genus Leioproctus, a beetle of the genus Liparetrus, and the ant species Iridomyrmex conifer. The yellow flowerheads are also the ones that bear the most nectar, and are greatly preferred by red wattlebirds.
An analysis of the invertebrate population in the canopy of Banksia woodland found that mites and ticks (Acari), beetles (Coleoptera) and ants, bees and wasps (Hymenoptera) predominated overall, with the three orders also common on B. ilicifolia, although outnumbered by thrips (Thysanoptera). More arthropods on B. ilicifolia might be related to a higher nutrient (potassium) level in the leaves. Lower overall numbers of invertebrates on Banksia species were thought to be related to the presence of insectivorous birds.
Hand-pollination experiments on wild populations near Perth showed that Banksia ilicifolia is self-compatible, although progeny produced have less vigour and seed production is reduced. Further experiments showed that seedlings produced by outcrossing with plants more than 30 kilometres (19 mi) apart were more vigorous and adaptable, suggesting that plants breeding within small fragmented populations are subject to reduced vigour and genetic inbreeding.
Banksia ilicifolia regenerates after bushfire by regrowing from epicormic shoots under its bark. Follicles open and release seeds after several years. It is weakly serotinous, like eight other Banksia species, all of which tend to occur in Western Australia's southwestern corner. The other two species of the subgenus Isostylis are killed by fire and regenerate by seed.
All banksias have developed proteoid or cluster roots in response to the nutrient-poor conditions of Australian soils (particularly lacking in phosphorus). The plant develops masses of fine lateral roots which form a mat-like structure underneath the soil surface. These enable it to extract nutrients as efficiently as possible from the soil. A study of three co-occurring species in Banksia woodland in southwestern Australia—Banksia menziesii, B. attenuata and B. ilicifolia—found that all three develop fresh roots in September after winter rainfall, and that the bacteria populations associated with the root systems of B. menziesii differ from the other two, and that they also change depending on the age of the roots. Along with its shallow lateral roots, Banksia ilicifolia sinks one or more deep taproots seeking the water table. It is an obligate phreatophyte, that is, it is reliant upon accessing groundwater for its survival; it is more closely tied to the water table than the co-occurring B. menziesii and B. attenuata, and must remain in areas where the depth of the water table is less than 8 m (26 ft) below the surface. Recent falls of the water table on the Swan Coastal Plain from use of the Gnangara Mound aquifer for Perth's water supply, combined with years of below-average rainfall, have seen the population and vigour of Banksia ilicifolia fall considerably (more so than other banksia species) since the mid-1960s.
Like many Western Australian banksias, Banksia ilicifolia has been shown to be highly sensitive to dieback from the soil-borne water mould Phytophthora cinnamomi. A study of Banksia attenuata woodland 400 km (250 mi) southeast of Perth across 16 years and following a wave of P. cinnamomi infestation showed that B. ilicifolia populations were present but significantly reduced in diseased areas. Specimens in coastal dune vegetation were reported killed by Armillaria luteobubalina, with mycelial sheaths of the fungus beneath the bark of the root collar.
## Cultivation
Rarely cultivated, Banksia ilicifolia requires a sunny position and sandy well-drained soil to do well. A slow-growing plant, it takes up to ten years to flower from seed. The glossy green foliage and long flowering period, combined with prominently displayed flowers give it horticultural potential, although its prickly foliage makes fallen leaves a problem if planted near lawns or walkways. Seeds do not require any treatment, and take 22 to 41 days to germinate. Difficulties in collection and low seed set make seed relatively expensive. Seeds are often eaten by insects before they can be collected.
|
61,423,643 |
Islanders (video game)
| 1,171,956,296 |
2019 city-building game
|
[
"2019 video games",
"Casual games",
"City-building games",
"Coatsink games",
"Grizzly Games games",
"Indie games",
"Linux games",
"MacOS games",
"Nintendo Switch games",
"PlayStation 4 games",
"Single-player video games",
"Video games developed in Germany",
"Video games set on fictional islands",
"Windows games",
"Xbox One games"
] |
Islanders (stylized in all uppercase) is a casual city-building game developed and published by German indie game studio Grizzly Games. It was initially released on Steam for Microsoft Windows on 4 April 2019, and support for macOS and Linux was added in June that year. A version for consoles was released for Nintendo Switch on 11 August 2021 and PlayStation 4 and Xbox One on 26 August 2021. This version was published by Coatsink, which announced it had acquired the franchise from Grizzly Games in May 2022.
In Islanders, players earn points by strategically placing buildings from their inventory onto a procedurally generated island. Earning points restocks the building inventory, eventually unlocking new types of buildings and the ability to move to a new island and continue the session. The session ends when no more points can be gained because no buildings are available or there is no space to place them. The overall goal of the game is to obtain the highest score possible in a single session.
Islanders was developed over seven months while the members of Grizzly Games were completing degrees in video game design at HTW Berlin. The developers were inspired by a mutual love of city-building games, and chose to embrace simplicity in designing Islanders because of the limitations of working with a small team. Employing procedural generation of new islands enabled them to keep the game's mechanics simple while still providing the player enough variety to make the game engaging for repeat sessions.
Islanders was one of the top twenty best-selling releases on Steam in April 2019. Critical reception was generally positive. Most reviews highlighted elements of the game's minimalist design: low poly visuals, relaxing sound design, and simple yet engaging gameplay mechanics. These same attributes also attracted a degree of criticism from reviewers who felt there was room for more complexity. Several video game journalists placed it on lists of favorites for 2019.
## Gameplay
At the start of each session, players are presented with a small procedurally generated island. There are several styles of islands; some have terrain that restricts the placement of certain buildings. The player is given a choice between two building packs to start with, each of which provides a limited number of buildings according to a theme, such as forestry, farming, or fishing. When selected from the inventory, a building displays a translucent sphere around it, which indicates the distance at which it will earn points from existing buildings and natural features, such as trees. The size of this scoring sphere varies between building types.
Buildings gain points from being placed near relevant structures, but lose points for incompatible ones. A circus, for example, gains points for being placed near houses, but loses points for being near mansions. As potential points are shown in preview before placement, the player can move the building around the island to determine the best location before setting the building down permanently. Buildings can be rotated to fit into position, but once placed, cannot be removed or built over, so careful placement and forward planning are important to maximize the score.
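The proximity-scoring rule described above can be sketched in a few lines; the building names, radii, and point weights below are invented for illustration and are not taken from the game's actual data:

```python
import math

# Hypothetical scoring table: each building type has a scoring-sphere
# radius and per-neighbour point weights (positive for compatible
# structures, negative for incompatible ones). All values are made up.
RULES = {
    "circus": {"radius": 20.0, "weights": {"house": 10, "mansion": -5}},
    "house":  {"radius": 12.0, "weights": {"house": 1, "circus": 3}},
}

def placement_score(kind, pos, placed):
    """Points gained by placing `kind` at `pos`, given already-placed
    buildings as (kind, (x, y)) pairs. Only neighbours inside the
    scoring sphere contribute; everything outside it is ignored."""
    rule = RULES[kind]
    score = 0
    for other_kind, other_pos in placed:
        if math.dist(pos, other_pos) <= rule["radius"]:
            score += rule["weights"].get(other_kind, 0)
    return score

placed = [("house", (0, 0)), ("house", (5, 0)), ("mansion", (8, 0))]
print(placement_score("circus", (2, 0), placed))  # two houses (+20), one mansion (-5) -> 15
```

Previewing a placement then amounts to evaluating this function at candidate positions before committing, mirroring how the game shows potential points as the player moves a building around the island.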
As buildings are placed, they are removed from the inventory. When the player reaches a given threshold of points, they may choose from one of two new themes for their next building pack, which will include more copies of already-unlocked buildings as well as buildings from the newly-selected theme. This process gradually unlocks more advanced building types such as gold mines and resorts, which may have more difficult placement criteria but higher scoring potential. Scoring points fills up the island gauge at the bottom of the screen; when filled, the player can click on it to move to the next island. The number of points required to restock the inventory and move to new islands increases with each unlock. Players are free to remain on their current island and continue to build and increase their score until they decide to move on. The session ends if the player runs out of buildings to place, or space to place buildings, before unlocking the next island. The player's score is cumulative across all islands in a session, and the overall objective is to reach a high score for the entire session.
The game intentionally omits many features common to city-builders, such as resource accumulation, traffic management, and technology research. There are no sidequests or optional objectives, although there is a short list of achievements to earn. The sole multiplayer element is the global high score board that ranks every player's highest-scoring game.
## Development
Grizzly Games is composed of Paul Schnepf, Friedemann Allmenröder, and Jonas Tyroller, who met during the Bachelor of Arts in Game Design program at HTW Berlin. Schnepf and Allmenröder first worked together on a second-year project, a short experimental game called ROM. Later in their second year, they worked with another student, Shahriar Shahrabi, to develop minimalist wingsuit flight simulator Superflight, founding Grizzly Games as a means to release it. Shahrabi left after the release of Superflight. During their third year, Tyroller joined Grizzly Games and development began on Islanders.
The development of Islanders began with a three-week process of researching, prototyping, and refining several concepts. The team was inspired by their mutual childhood interest in city-building games like Anno, The Settlers, and SimCity, which they enjoyed but found complicated. Seeking to provide a streamlined experience focused solely on building, the team decided to move forward with the concept that became Islanders. The game had a short development cycle of seven months: four months of major development time, and another three months of refinement and preparation before release.
In an interview with Game World Observer, Allmenröder described the game as an evolution of ideas explored in the earlier Superflight, particularly the embrace of minimalism and procedural generation. Because there were only three team members, each had to fill multiple roles in the development process. Rather than struggling against the limits of working with a small team, they adopted simplicity as a design philosophy and decided to create a game that was simple enough to be played in short sessions, but engaging enough to be returned to from time to time. Schnepf said that building a city in Islanders is "just like growing a garden."
The game's use of procedural generation had its roots in the development of Superflight. In order to test game mechanics, the developers created a script that quickly assembled new levels from pre-generated blocks. They found that having new levels each time they played kept their experience entertaining without extending development time, so they decided to use the process for Islanders. When developing the mechanics of the game, Allmenröder explained that his team constantly discussed simplifying the systems they were implementing: "Every time we made a decision, we asked ourselves: Can we make it simpler? Can the game still be fun if we cut this feature?" The gameplay went through various iterations, including one with a day-night cycle, before the team settled on a simple proximity-based scoring system. The visual design of the buildings is intended to be divorced from any specific time period or culture.
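The block-based level assembly the team describes can be sketched as below; the block names and the seeding scheme are assumptions for illustration, since the studio's actual tooling is not public:

```python
import random

# Pre-generated terrain chunks; real tooling would store meshes or
# prefabs, but names suffice to show the assembly idea (all invented).
BLOCKS = ["flat", "hill", "cliff", "beach", "forest"]

def generate_level(length, seed=None):
    """Assemble a level as a random sequence of pre-made blocks.
    Seeding the RNG makes a particular layout reproducible for testing,
    while a fresh seed yields a new level at no extra authoring cost."""
    rng = random.Random(seed)
    return [rng.choice(BLOCKS) for _ in range(length)]

# The same seed always reproduces the same layout:
assert generate_level(6, seed=42) == generate_level(6, seed=42)
```

This is the appeal the developers describe: once a chunk library exists, every playtest can run on a level nobody had to build by hand.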
### Release and updates
The game, which uses the Unity 3D engine, was initially released on Steam for Microsoft Windows on 4 April 2019. Several post-release updates expanded the game with new content. Early updates added new island types and new buildings, such as seaweed farms and monuments, as well as new gameplay features, such as a photo mode that removes the user interface elements to allow for uncluttered screenshots. The final major update was made in June 2019, adding support for macOS and Linux, a sandbox mode which removes the scoring mechanic and provides the players with an unlimited selection of buildings, and an undo button to allow players in original game mode to remove the last building placed.
Islanders: Console Edition was developed by Grizzly Games in collaboration with Coatsink and released in 2021. The Nintendo Switch version was released on 11 August 2021. Versions for PlayStation 4 and Xbox One were released on 26 August 2021. Backward compatibility allowed those versions to be played on the PlayStation 5 and Xbox Series X and Series S, respectively. The console version has new island types and color schemes as well as an additional building type. On 23 May 2022, Coatsink announced that it had acquired Islanders from Grizzly Games, with an eye towards creating downloadable content, releasing versions for additional platforms, and possibly developing a sequel. Grizzly will remain involved with the series in an unstated capacity.
## Reception
Critical reception of Islanders was largely positive; it received an aggregate score of 82/100 on Metacritic, which uses a weighted average system. Reviewers praised the game's intentionally simple mechanics, as well as its minimalist, low-poly visual aesthetic and relaxing soundtrack. The game was commercially successful as well: in April 2019, it was one of the top twenty highest-selling new releases on Steam. In July 2019, Hayden Dingman of PC World called it one of their favorite indie games of the year to that point. That month, the staff at Rock, Paper, Shotgun also placed it on their list of the year's best games so far. Luke Plunkett of Kotaku placed the game on his list of the top 10 games of 2019. Paul Tamayo, also of Kotaku, named it one of the most relaxing games of 2019.
Many critics highlighted the game's simplicity as a positive, calling the game relaxing or meditative. In his full review, Luke Plunkett called Islanders "pure city-building. No fuss, no distractions." Reviewers found that the simple gameplay encouraged variable session length. Many enjoyed the ability to play in short sessions. Both the reviewer from video game magazine Edge and Cass Marshall of Polygon described using the game as a "palate cleanser" to wind down between sessions of more complicated games. Others felt the game was suitable for long sessions in and of itself. Several reviewers found that the process of strategically placing buildings reminded them of carefully directing falling blocks in the puzzle game Tetris.
Visual style was a draw that affected the way some reviewers played the game. French gaming site Millenium [fr] appreciated the way the color palettes and shapes suited the gameplay. Samuel Guglielmo of TechRaptor found that the art style prompted him to place buildings "in locations that looked pretty" even if it meant scoring fewer points. The reviewer from Edge described going through a similar "battle between efficiency and beauty," but found that the "crisp geometric style" of the graphics meant that the islands still looked attractive even when they focused on scoring over aesthetics. Benja Hiller of German indie magazine Welcome to Last Week enjoyed the lack of human characters: "there are no annoying people. Nobody who wags his finger maliciously in front of you and says: Now take care of the road damage."
Some reviewers felt that the game reflected or encouraged philosophical thinking. Michael Moore at The Verge wrote that the way each island visually progressed from a pristine natural setting to being densely packed with buildings felt like an honest reflection of "humanity's exploitative relationship with nature." At Eurogamer, Christian Donlan had similar thoughts, asking "Is it nice to see one of the game's gorgeous low-poly islands filled with buildings, or is it a crime against nature?" He appreciated that the game allowed the player to decide that for themselves rather than forcing a perspective on them. Reviewing the console version in 2021 for Nintendo Life, Roland Ingram wrote "Islanders is an elucidation of how games build meaning from abstract systems." Ryan Young of The Indie Games Website discussed Islanders in relation to the "lusory attitude", a psychological state of willingness to play and abide by a game's arbitrary rules. He found that the game's slow pace combined with the possibility of entropy spiraling from a poor building placement affected his ability to adopt the lusory attitude towards the game.
The game's studied minimalism attracted criticism from reviewers who wanted more depth from the experience. Both Nicoló Paschetto of Italian gaming site The Games Machine and Alice Liguori of Rock, Paper, Shotgun were disappointed that the game did not have animated inhabitants to give the islands a sense of life. Some critics cited the single-song soundtrack as a negative. Other reviewers had concerns with game mechanics. The reviewer from Millenium wished there were more objectives aside from simply earning points. Reviewing the console edition, Joe Findlay of Comics Gaming Magazine found the lack of in-game consequences for placement of buildings made the game feel pointless to him. The reviewer from Edge magazine noted that the game can be "a little persnickety about placement" of buildings, and Alessandro Barbosa of Critical Hit disliked the lack of an undo button at launch. Several reviewers found it frustrating to start again on the earlier, simpler islands after a game over. Young wrote that the prospect of restarting a failed session felt stressful enough to him that he quit playing entirely instead. Rahul Shirke of IND13 wished for an option to choose the size or type of island when starting a new game, and Alec Meer suggested that players should be able to reset existing islands.
### Console version
Response to Islanders: Console Edition was also positive; the Switch version received an aggregate score of 76/100 on Metacritic. Critics generally found that the relaxed gameplay and low-poly graphics translated well to the Nintendo Switch in both docked and handheld mode. However, many found the controls did not translate well to game controllers. Donlan noted that playing with the Switch controller was "not quite as elegant as it was with a mouse". Ingram wrote that the controls "can sometimes get fiddly." Willem Hilhorst, writing for Nintendo World Report, was the most critical, calling the control scheme "irritating". Hilhorst also wanted more explicit instructions on the specifics of building placement, as he sometimes found the mechanics of "where you can actually place the building" to be confusing.
## Legacy
Some critics have drawn comparisons between Islanders and later minimalist building games. Following the console release of Islanders, many compared it to Dorfromantik, a tile-based city-building game released in 2021. Several reviews for Townscaper, a low poly city-builder released in 2021, explicitly compared it to Islanders.
Writing in 2022, Geoffrey Bunting of Eurogamer linked the rising popularity of games like Islanders with the COVID-19 pandemic. During the early stages of the pandemic, many countries initiated lockdowns as a pandemic control measure. Bunting argues that during these periods, people had increased free time and needed distraction from stress, and turned to relaxing minimalist games as a solution.
## See also
- List of city-building video games
|
26,808 |
Star
| 1,173,389,655 |
Large self-illuminated object in space
|
[
"Concepts in astronomy",
"Light sources",
"Stars",
"Stellar astronomy"
] |
A star is an astronomical object comprising a luminous spheroid of plasma held together by self-gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye at night; their immense distances from Earth make them appear as fixed points of light. The most prominent stars have been categorised into constellations and asterisms, and many of the brightest stars have proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. The observable universe contains an estimated 10<sup>22</sup> to 10<sup>24</sup> stars. Only about 4,000 of these stars are visible to the naked eye—all within the Milky Way galaxy.
A star's life begins with the gravitational collapse of a gaseous nebula of material largely comprising hydrogen, helium, and trace heavier elements. Its total mass mainly determines its evolution and eventual fate. A star shines for most of its active life due to the thermonuclear fusion of hydrogen into helium in its core. This process releases energy that traverses the star's interior and radiates into outer space. At the end of a star's lifetime, its core becomes a stellar remnant: a white dwarf, a neutron star, or—if it is sufficiently massive—a black hole.
Stellar nucleosynthesis in stars or their remnants creates almost all naturally occurring chemical elements heavier than lithium. Stellar mass loss or supernova explosions return chemically enriched material to the interstellar medium. These elements are then recycled into new stars. Astronomers can determine stellar properties—including mass, age, metallicity (chemical composition), variability, distance, and motion through space—by carrying out observations of a star's apparent brightness, spectrum, and changes in its position in the sky over time.
Stars can form orbital systems with other astronomical objects, as in planetary systems and star systems with two or more stars. When two such stars orbit closely, their gravitational interaction can significantly impact their evolution. Stars can form part of a much larger gravitationally bound structure, such as a star cluster or a galaxy.
## Etymology
The word "star" ultimately derives from the Proto-Indo-European root *h₂stḗr, also meaning "star", further analyzable as *h₂eh₁s- ("to burn", also the source of the word "ash") plus the agentive suffix *-tēr. Compare Latin stella, Greek astēr and German Stern. Some scholars believe the word is a borrowing from Akkadian ištar (Venus), though others doubt that suggestion. "Star" is cognate with (shares the same root as) the words asterisk, asteroid, astral, constellation and Esther.
## Observation history
Historically, stars have been important to civilizations throughout the world. They have been part of religious practices, used for celestial navigation and orientation, to mark the passage of seasons, and to define calendars.
Early astronomers recognized a difference between "fixed stars", whose position on the celestial sphere does not change, and "wandering stars" (planets), which move noticeably relative to the fixed stars over days or weeks. Many ancient astronomers believed that the stars were permanently affixed to a heavenly sphere and that they were immutable. By convention, astronomers grouped prominent stars into asterisms and constellations and used them to track the motions of the planets and the inferred position of the Sun. The motion of the Sun against the background stars (and the horizon) was used to create calendars, which could be used to regulate agricultural practices. The Gregorian calendar, currently used nearly everywhere in the world, is a solar calendar based on the angle of the Earth's rotational axis relative to its local star, the Sun.
The oldest accurately dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period (c. 1531 BC – c. 1155 BC).
The first star catalogue in Greek astronomy was created by Aristillus in approximately 300 BC, with the help of Timocharis. The star catalog of Hipparchus (2nd century BC) included 1,020 stars, and was used to assemble Ptolemy's star catalogue. Hipparchus is known for the discovery of the first recorded nova (new star). Many of the constellations and star names in use today derive from Greek astronomy.
In spite of the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear. In 185 AD, they were the first to observe and write about a supernova, now known as SN 185. The brightest stellar event in recorded history was the SN 1006 supernova, which was observed in 1006 and written about by the Egyptian astronomer Ali ibn Ridwan and several Chinese astronomers. The SN 1054 supernova, which gave birth to the Crab Nebula, was also observed by Chinese and Islamic astronomers.
Medieval Islamic astronomers gave Arabic names to many stars that are still used today and they invented numerous astronomical instruments that could compute the positions of the stars. They built the first large observatory research institutes, mainly for the purpose of producing Zij star catalogues. Among these, the Book of Fixed Stars (964) was written by the Persian astronomer Abd al-Rahman al-Sufi, who observed a number of stars, star clusters (including the Omicron Velorum and Brocchi's Clusters) and galaxies (including the Andromeda Galaxy). According to A. Zahoor, in the 11th century, the Persian polymath scholar Abu Rayhan Biruni described the Milky Way galaxy as a multitude of fragments having the properties of nebulous stars, and gave the latitudes of various stars during a lunar eclipse in 1019.
According to Josep Puig, the Andalusian astronomer Ibn Bajjah proposed that the Milky Way was made up of many stars that almost touched one another and appeared to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars in 500 AH (1106/1107 AD) as evidence. Early European astronomers such as Tycho Brahe identified new stars in the night sky (later termed novae), suggesting that the heavens were not immutable. In 1584, Giordano Bruno suggested that the stars were like the Sun, and may have other planets, possibly even Earth-like, in orbit around them, an idea that had been suggested earlier by the ancient Greek philosophers, Democritus and Epicurus, and by medieval Islamic cosmologists such as Fakhr al-Din al-Razi. By the following century, the idea of the stars being the same as the Sun was reaching a consensus among astronomers. To explain why these stars exerted no net gravitational pull on the Solar System, Isaac Newton suggested that the stars were equally distributed in every direction, an idea prompted by the theologian Richard Bentley.
The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus.
William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he established a series of gauges in 600 directions and counted the stars observed along each line of sight. From this he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is noted for his discovery that some stars do not merely lie along the same line of sight, but are physical companions that form binary star systems.
The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in stellar spectra caused by the atmosphere's absorption of specific frequencies. In 1865, Secchi began classifying stars into spectral types. The modern version of the stellar classification scheme was developed by Annie J. Cannon during the early 1900s.
The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens. Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar in a 104-day period. Detailed observations of many binary star systems were collected by astronomers such as Friedrich Georg Wilhelm von Struve and S. W. Burnham, allowing the masses of stars to be determined from computation of orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827.
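The parallax technique Bessel used reduces to a reciprocal: a star's distance in parsecs is one divided by its annual parallax in arcseconds. A quick sketch, using a modern parallax value for 61 Cygni (about 0.287 arcseconds, a figure assumed here rather than taken from the text):

```python
LY_PER_PARSEC = 3.2616  # light-years per parsec

def distance_ly(parallax_arcsec):
    """Distance in light-years from an annual parallax in arcseconds."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

print(round(distance_ly(0.287), 1))  # ~11.4 light-years, matching the 61 Cygni figure above
```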
The twentieth century saw increasingly rapid advances in the scientific study of stars. The photograph became a valuable astronomical tool. Karl Schwarzschild discovered that the color of a star and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope at Mount Wilson Observatory.
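As an illustration of the colour-temperature relationship Schwarzschild exploited, one modern empirical approximation is Ballesteros' (2012) black-body fit, which estimates effective temperature from the B-V colour index; it is offered here as an example of the idea, not as Schwarzschild's own method:

```python
def temperature_from_bv(b_minus_v):
    """Approximate effective temperature (K) from the B-V colour index,
    using Ballesteros' (2012) empirical black-body formula."""
    return 4600.0 * (1.0 / (0.92 * b_minus_v + 1.7)
                     + 1.0 / (0.92 * b_minus_v + 0.62))

print(round(temperature_from_bv(0.65)))  # Sun-like star (B-V ~ 0.65): about 5778 K
```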
Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung-Russell diagram was developed, propelling the astrophysical study of stars. Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin first proposed that stars were made primarily of hydrogen and helium in her 1925 PhD thesis. The spectra of stars were further understood through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined.
With the exception of rare events such as supernovae and supernova imposters, individual stars have primarily been observed in the Local Group, and especially in the visible part of the Milky Way (as demonstrated by the detailed star catalogues available for the Milky Way galaxy) and its satellites. Individual stars such as Cepheid variables have been observed in the M87 and M100 galaxies of the Virgo Cluster, as well as luminous stars in some other relatively nearby galaxies. With the aid of gravitational lensing, a single star (named Icarus) has been observed at 9 billion light-years away.
## Designations
The concept of a constellation was known to exist during the Babylonian period. Ancient sky watchers imagined that prominent arrangements of stars formed patterns, and they associated these with particular aspects of nature or their myths. Twelve of these formations lay along the band of the ecliptic and these became the basis of astrology. Many of the more prominent individual stars were given names, particularly with Arabic or Latin designations.
As well as certain constellations and the Sun itself, individual stars have their own myths. To the Ancient Greeks, some "stars", known as planets (Greek πλανήτης (planētēs), meaning "wanderer"), represented various important deities, from which the names of the planets Mercury, Venus, Mars, Jupiter and Saturn were taken. (Uranus and Neptune were Greek and Roman gods, but neither planet was known in Antiquity because of their low brightness. Their names were assigned by later astronomers.)
Circa 1600, the names of the constellations were used to name the stars in the corresponding regions of the sky. The German astronomer Johann Bayer created a series of star maps and applied Greek letters as designations to the stars in each constellation. Later a numbering system based on the star's right ascension was invented and added to John Flamsteed's star catalogue in his book "Historia coelestis Britannica" (the 1712 edition), whereby this numbering system came to be called Flamsteed designation or Flamsteed numbering.
The internationally recognized authority for naming celestial bodies is the International Astronomical Union (IAU). The International Astronomical Union maintains the Working Group on Star Names (WGSN) which catalogs and standardizes proper names for stars. A number of private companies sell names of stars which are not recognized by the IAU, professional astronomers, or the amateur astronomy community. The British Library calls this an unregulated commercial enterprise, and the New York City Department of Consumer and Worker Protection issued a violation against one such star-naming company for engaging in a deceptive trade practice.
## Units of measurement
Although stellar parameters can be expressed in SI units or Gaussian units, it is often most convenient to express mass, luminosity, and radii in solar units, based on the characteristics of the Sun. In 2015, the IAU defined a set of nominal solar values (defined as SI constants, without uncertainties) which can be used for quoting stellar parameters:
- nominal solar luminosity: L<sub>☉</sub> = 3.828×10<sup>26</sup> W
- nominal solar radius: R<sub>☉</sub> = 6.957×10<sup>8</sup> m
The solar mass was not explicitly defined by the IAU due to the large relative uncertainty (10<sup>−4</sup>) of the Newtonian constant of gravitation G. Since the product of the Newtonian constant of gravitation and solar mass together (GM<sub>☉</sub>) has been determined to much greater precision, the IAU defined the nominal solar mass parameter to be:
- nominal solar mass parameter: GM<sub>☉</sub> = 1.3271244×10<sup>20</sup> m<sup>3</sup>/s<sup>2</sup>
The nominal solar mass parameter can be combined with the most recent (2014) CODATA estimate of the Newtonian constant of gravitation G to derive the solar mass to be approximately 1.9885×10<sup>30</sup> kg. Although the exact values for the luminosity, radius, mass parameter, and mass may vary slightly in the future due to observational uncertainties, the 2015 IAU nominal constants will remain the same SI values as they remain useful measures for quoting stellar parameters.
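The derivation above is a single division; a minimal sketch, using the IAU 2015 nominal solar mass parameter and the CODATA 2014 value of G quoted in the text:

```python
# Deriving the solar mass from the IAU nominal solar mass parameter
# and the CODATA 2014 Newtonian constant of gravitation.
GM_SUN = 1.3271244e20   # m^3 s^-2, IAU 2015 nominal solar mass parameter
G_2014 = 6.67408e-11    # m^3 kg^-1 s^-2, CODATA 2014

m_sun = GM_SUN / G_2014
print(f"{m_sun:.4e} kg")  # ≈ 1.9885e+30 kg, as stated above
```

Note that the relative uncertainty of the result is dominated by that of G, which is why the IAU chose to fix GM<sub>☉</sub> rather than the mass itself.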
Large lengths, such as the radius of a giant star or the semi-major axis of a binary star system, are often expressed in terms of the astronomical unit—approximately equal to the mean distance between the Earth and the Sun (150 million km or approximately 93 million miles). In 2012, the IAU defined the astronomical constant to be an exact length in meters: 149,597,870,700 m.
## Formation and evolution
Stars condense from regions of space of higher matter density, yet even those regions are less dense than the interior of a vacuum chamber. These regions—known as molecular clouds—consist mostly of hydrogen, with about 23 to 28 percent helium and a few percent heavier elements. One example of such a star-forming region is the Orion Nebula. Most stars form in groups of dozens to hundreds of thousands of stars. Massive stars in these groups may powerfully illuminate those clouds, ionizing the hydrogen, and creating H II regions. Such feedback effects, from star formation, may ultimately disrupt the cloud and prevent further star formation.
All stars spend the majority of their existence as main sequence stars, fueled primarily by the nuclear fusion of hydrogen into helium within their cores. However, stars of different masses have markedly different properties at various stages of their development. The ultimate fate of more massive stars differs from that of less massive stars, as do their luminosities and the impact they have on their environment. Accordingly, astronomers often group stars by their mass:
- Very low mass stars, with masses below , are fully convective and distribute helium evenly throughout the whole star while on the main sequence. Therefore, they never undergo shell burning and never become red giants. After exhausting their hydrogen they become helium white dwarfs and slowly cool. As the lifetime of stars is longer than the age of the universe, no such star has yet reached the white dwarf stage.
- Low mass stars (including the Sun), with a mass between and \~ depending on composition, do become red giants as their core hydrogen is depleted and they begin to burn helium in the core in a helium flash; they develop a degenerate carbon-oxygen core later on the asymptotic giant branch; they finally blow off their outer shell as a planetary nebula and leave behind their core in the form of a white dwarf.
- Intermediate-mass stars, between \~ and \~, pass through evolutionary stages similar to low mass stars, but after a relatively short period on the red-giant branch they ignite helium without a flash and spend an extended period in the red clump before forming a degenerate carbon-oxygen core.
- Massive stars generally have a minimum mass of \~. After exhausting the hydrogen at the core these stars become supergiants and go on to fuse elements heavier than helium. They end their lives when their cores collapse and they explode as supernovae.
### Star formation
The formation of a star begins with gravitational instability within a molecular cloud, caused by regions of higher density—often triggered by compression of clouds by radiation from massive stars, expanding bubbles in the interstellar medium, the collision of different molecular clouds, or the collision of galaxies (as in a starburst galaxy). When a region reaches a sufficient density of matter to satisfy the criteria for Jeans instability, it begins to collapse under its own gravitational force.
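The Jeans instability mentioned above can be made quantitative. A minimal sketch of one common form of the Jeans mass, with illustrative (assumed) values for a cold molecular cloud core:

```python
import math

k_B = 1.380649e-23   # J/K, Boltzmann constant
G   = 6.674e-11      # m^3 kg^-1 s^-2
m_H = 1.6735e-27     # kg, mass of a hydrogen atom

def jeans_mass(T, n, mu=2.33):
    """Jeans mass (kg) for a cloud at temperature T (K) with number
    density n (particles per m^3) and mean molecular weight mu.
    One common form: M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)."""
    rho = mu * m_H * n  # mass density
    return (5 * k_B * T / (G * mu * m_H))**1.5 * (3 / (4 * math.pi * rho))**0.5

# Assumed example: T = 10 K, n = 1e10 m^-3 (1e4 cm^-3), molecular gas
M_SUN = 1.989e30
print(jeans_mass(10, 1e10) / M_SUN)  # a few solar masses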
As the cloud collapses, individual conglomerations of dense dust and gas form "Bok globules". As a globule collapses and the density increases, the gravitational energy converts into heat and the temperature rises. When the protostellar cloud has approximately reached the stable condition of hydrostatic equilibrium, a protostar forms at the core. These pre-main-sequence stars are often surrounded by a protoplanetary disk and powered mainly by the conversion of gravitational energy. The period of gravitational contraction lasts about 10 million years for a star like the sun, up to 100 million years for a red dwarf.
Early stars of less than are called T Tauri stars, while those with greater mass are Herbig Ae/Be stars. These newly formed stars emit jets of gas along their axis of rotation, which may reduce the angular momentum of the collapsing star and result in small patches of nebulosity known as Herbig–Haro objects. These jets, in combination with radiation from nearby massive stars, may help to drive away the surrounding cloud from which the star was formed.
Early in their development, T Tauri stars follow the Hayashi track—they contract and decrease in luminosity while remaining at roughly the same temperature. Less massive T Tauri stars follow this track to the main sequence, while more massive stars turn onto the Henyey track.
Most stars are observed to be members of binary star systems, and the properties of those binaries are the result of the conditions in which they formed. A gas cloud must lose its angular momentum in order to collapse and form a star. The fragmentation of the cloud into multiple stars distributes some of that angular momentum. The primordial binaries transfer some angular momentum by gravitational interactions during close encounters with other stars in young stellar clusters. These interactions tend to split apart more widely separated (soft) binaries while causing hard binaries to become more tightly bound. This produces the separation of binaries into their two observed population distributions.
### Main sequence
Stars spend about 90% of their lifetimes fusing hydrogen into helium in high-temperature-and-pressure reactions in their cores. Such stars are said to be on the main sequence and are called dwarf stars. Starting at zero-age main sequence, the proportion of helium in a star's core will steadily increase, the rate of nuclear fusion at the core will slowly increase, as will the star's temperature and luminosity. The Sun, for example, is estimated to have increased in luminosity by about 40% since it reached the main sequence 4.6 billion (4.6×10<sup>9</sup>) years ago.
Every star generates a stellar wind of particles that causes a continual outflow of gas into space. For most stars, the mass lost is negligible. The Sun loses 10<sup>−14</sup> M<sub>☉</sub> every year, or about 0.01% of its total mass over its entire lifespan. However, very massive stars can lose 10<sup>−7</sup> to 10<sup>−5</sup> M<sub>☉</sub> each year, significantly affecting their evolution. Stars that begin with more than can lose over half their total mass while on the main sequence.
The time a star spends on the main sequence depends primarily on the amount of fuel it has and the rate at which it fuses it. The Sun is expected to live 10 billion (10<sup>10</sup>) years. Massive stars consume their fuel very rapidly and are short-lived. Low mass stars consume their fuel very slowly. Stars less massive than , called red dwarfs, are able to fuse nearly all of their mass while stars of about can only fuse about 10% of their mass. The combination of their slow fuel-consumption and relatively large usable fuel supply allows low mass stars to last about one trillion (10<sup>12</sup>) years; the most extreme of will last for about 12 trillion years. Red dwarfs become hotter and more luminous as they accumulate helium. When they eventually run out of hydrogen, they contract into a white dwarf and decline in temperature. Since the lifespan of such stars is greater than the current age of the universe (13.8 billion years), no stars under about are expected to have moved off the main sequence.
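The lifetime-versus-mass trend above follows from the mass-luminosity relation. A rough sketch, assuming the common scaling L ∝ M<sup>3.5</sup> so that lifetime t ∝ M/L ∝ M<sup>−2.5</sup> (an order-of-magnitude estimate only, not valid at the extremes of the mass range):

```python
def ms_lifetime_gyr(mass_solar, t_sun_gyr=10.0, exponent=2.5):
    """Rough main-sequence lifetime in Gyr, normalized to a 10 Gyr
    solar lifetime, using the scaling t ~ M^(-2.5) implied by L ~ M^3.5."""
    return t_sun_gyr * mass_solar**(-exponent)

print(ms_lifetime_gyr(1.0))   # 10 Gyr: the Sun
print(ms_lifetime_gyr(10.0))  # ~0.03 Gyr: massive stars burn out quickly
print(ms_lifetime_gyr(0.5))   # ~57 Gyr: longer than the age of the universe
```

The steep exponent is the whole story: a tenfold increase in mass shortens the main-sequence lifetime by a factor of a few hundred.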
Besides mass, the elements heavier than helium can play a significant role in the evolution of stars. Astronomers label all elements heavier than helium "metals", and call the chemical concentration of these elements in a star, its metallicity. A star's metallicity can influence the time the star takes to burn its fuel, and controls the formation of its magnetic fields, which affects the strength of its stellar wind. Older, population II stars have substantially less metallicity than the younger, population I stars due to the composition of the molecular clouds from which they formed. Over time, such clouds become increasingly enriched in heavier elements as older stars die and shed portions of their atmospheres.
### Post–main sequence
As stars of at least exhaust the supply of hydrogen at their core, they start to fuse hydrogen in a shell surrounding the helium core. The outer layers of the star expand and cool greatly as they transition into a red giant. In some cases, they will fuse heavier elements at the core or in shells around the core. As the stars expand, they throw part of their mass, enriched with those heavier elements, into the interstellar environment, to be recycled later as new stars. In about 5 billion years, when the Sun enters the helium burning phase, it will expand to a maximum radius of roughly 1 astronomical unit (150 million kilometres), 250 times its present size, and lose 30% of its current mass.
As the hydrogen-burning shell produces more helium, the core increases in mass and temperature. In a red giant of up to , the mass of the helium core becomes degenerate prior to helium fusion. Finally, when the temperature increases sufficiently, core helium fusion begins explosively in what is called a helium flash, and the star rapidly shrinks in radius, increases its surface temperature, and moves to the horizontal branch of the HR diagram. For more massive stars, helium core fusion starts before the core becomes degenerate, and the star spends some time in the red clump, slowly burning helium, before the outer convective envelope collapses and the star then moves to the horizontal branch.
After a star has fused the helium of its core, it begins fusing helium along a shell surrounding the hot carbon core. The star then follows an evolutionary path called the asymptotic giant branch (AGB) that parallels the other described red-giant phase, but with a higher luminosity. The more massive AGB stars may undergo a brief period of carbon fusion before the core becomes degenerate. During the AGB phase, stars undergo thermal pulses due to instabilities in the core of the star. In these thermal pulses, the luminosity of the star varies and matter is ejected from the star's atmosphere, ultimately forming a planetary nebula. As much as 50 to 70% of a star's mass can be ejected in this mass loss process. Because energy transport in an AGB star is primarily by convection, this ejected material is enriched with the fusion products dredged up from the core. Therefore, the planetary nebula is enriched with elements like carbon and oxygen. Ultimately, the planetary nebula disperses, enriching the general interstellar medium. Therefore, future generations of stars are made of the "star stuff" from past stars.
#### Massive stars
During their helium-burning phase, a star of more than 9 solar masses expands to form first a blue and then a red supergiant. Particularly massive stars may evolve to a Wolf-Rayet star, characterised by spectra dominated by emission lines of elements heavier than hydrogen, which have reached the surface due to strong convection and intense mass loss, or from stripping of the outer layers.
When helium is exhausted at the core of a massive star, the core contracts and the temperature and pressure rises enough to fuse carbon (see Carbon-burning process). This process continues, with the successive stages being fueled by neon (see neon-burning process), oxygen (see oxygen-burning process), and silicon (see silicon-burning process). Near the end of the star's life, fusion continues along a series of onion-layer shells within a massive star. Each shell fuses a different element, with the outermost shell fusing hydrogen; the next shell fusing helium, and so forth.
The final stage occurs when a massive star begins producing iron. Since iron nuclei are more tightly bound than any heavier nuclei, any fusion beyond iron does not produce a net release of energy.
#### Collapse
As a star's core shrinks, the intensity of radiation from that surface increases, creating such radiation pressure on the outer shell of gas that it will push those layers away, forming a planetary nebula. If what remains after the outer atmosphere has been shed is less than roughly , it shrinks to a relatively tiny object about the size of Earth, known as a white dwarf. White dwarfs lack the mass for further gravitational compression to take place. The electron-degenerate matter inside a white dwarf is no longer a plasma. Eventually, white dwarfs fade into black dwarfs over a very long period of time.
In massive stars, fusion continues until the iron core has grown so large (more than ) that it can no longer support its own mass. This core will suddenly collapse as its electrons are driven into its protons, forming neutrons, neutrinos, and gamma rays in a burst of electron capture and inverse beta decay. The shockwave formed by this sudden collapse causes the rest of the star to explode in a supernova. Supernovae become so bright that they may briefly outshine the star's entire home galaxy. When they occur within the Milky Way, supernovae have historically been observed by naked-eye observers as "new stars" where none seemingly existed before.
A supernova explosion blows away the star's outer layers, leaving a remnant such as the Crab Nebula. The core is compressed into a neutron star, which sometimes manifests itself as a pulsar or X-ray burster. In the case of the largest stars, the remnant is a black hole greater than . In a neutron star the matter is in a state known as neutron-degenerate matter, with a more exotic form of degenerate matter, QCD matter, possibly present in the core.
The blown-off outer layers of dying stars include heavy elements, which may be recycled during the formation of new stars. These heavy elements allow the formation of rocky planets. The outflow from supernovae and the stellar wind of large stars play an important part in shaping the interstellar medium.
#### Binary stars
Binary stars' evolution may significantly differ from that of single stars of the same mass. For example, when any star expands to become a red giant, it may overflow its Roche lobe, the surrounding region where material is gravitationally bound to it; if stars in a binary system are close enough, some of that material may overflow to the other star, yielding phenomena including contact binaries, common-envelope binaries, cataclysmic variables, blue stragglers, and type Ia supernovae. Mass transfer leads to cases such as the Algol paradox, where the most-evolved star in a system is the least massive.
The evolution of binary star and higher-order star systems is intensely researched since so many stars have been found to be members of binary systems. Around half of Sun-like stars, and an even higher proportion of more massive stars, form in multiple systems, and this may greatly influence such phenomena as novae and supernovae, the formation of certain types of star, and the enrichment of space with nucleosynthesis products.
The influence of binary star evolution on the formation of evolved massive stars such as luminous blue variables, Wolf-Rayet stars, and the progenitors of certain classes of core collapse supernova is still disputed. Single massive stars may be unable to expel their outer layers fast enough to form the types and numbers of evolved stars that are observed, or to produce progenitors that would explode as the supernovae that are observed. Mass transfer through gravitational stripping in binary systems is seen by some astronomers as the solution to that problem.
## Distribution
Stars are not spread uniformly across the universe but are normally grouped into galaxies along with interstellar gas and dust. A typical large galaxy like the Milky Way contains hundreds of billions of stars. There are more than 2 trillion (10<sup>12</sup>) galaxies, though most are less than 10% the mass of the Milky Way. Overall, there are likely to be between 10<sup>22</sup> and 10<sup>24</sup> stars (more stars than all the grains of sand on planet Earth). Most stars are within galaxies, but between 10 and 50% of the starlight in large galaxy clusters may come from stars outside of any galaxy.
A multi-star system consists of two or more gravitationally bound stars that orbit each other. The simplest and most common multi-star system is a binary star, but systems of three or more stars exist. For reasons of orbital stability, such multi-star systems are often organized into hierarchical sets of binary stars. Larger groups are called star clusters. These range from loose stellar associations with only a few stars to open clusters with dozens to thousands of stars, up to enormous globular clusters with hundreds of thousands of stars. Such systems orbit their host galaxy. The stars in an open or globular cluster all formed from the same giant molecular cloud, so all members normally have similar ages and compositions.
Many stars are observed, and most or all may have originally formed in gravitationally bound, multiple-star systems. This is particularly true for very massive O and B class stars, 80% of which are believed to be part of multiple-star systems. The proportion of single star systems increases with decreasing star mass, so that only 25% of red dwarfs are known to have stellar companions. As 85% of all stars are red dwarfs, more than two thirds of stars in the Milky Way are likely single red dwarfs. In a 2017 study of the Perseus molecular cloud, astronomers found that most of the newly formed stars are in binary systems. In the model that best explained the data, all stars initially formed as binaries, though some binaries later split up, leaving single stars behind.
The nearest star to the Earth, apart from the Sun, is Proxima Centauri, 4.2465 light-years (40.175 trillion kilometres) away. Travelling at the orbital speed of the Space Shuttle, 8 kilometres per second (29,000 kilometres per hour), it would take about 150,000 years to arrive. This is typical of stellar separations in galactic discs. Stars can be much closer to each other in the centres of galaxies and in globular clusters, or much farther apart in galactic halos.
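The travel-time figure quoted above is a one-line calculation; a minimal sketch using the distance and speed given in the text:

```python
# Travel time to Proxima Centauri at Space Shuttle orbital speed.
dist_km = 40.175e12           # ~4.2465 light-years, in kilometres
speed_kms = 8.0               # Space Shuttle orbital speed, km/s
seconds_per_year = 365.25 * 24 * 3600

years = dist_km / speed_kms / seconds_per_year
print(f"{years:,.0f} years")  # on the order of 150,000 years
```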
Due to the relatively vast distances between stars outside the galactic nucleus, collisions between stars are thought to be rare. In denser regions such as the core of globular clusters or the galactic center, collisions can be more common. Such collisions can produce what are known as blue stragglers. These abnormal stars have a higher surface temperature and thus are bluer than stars at the main sequence turnoff in the cluster to which they belong; in standard stellar evolution, blue stragglers would already have evolved off the main sequence and thus would not be seen in the cluster.
## Characteristics
Almost everything about a star is determined by its initial mass, including such characteristics as luminosity, size, evolution, lifespan, and its eventual fate.
### Age
Most stars are between 1 billion and 10 billion years old. Some stars may even be close to 13.8 billion years old—the observed age of the universe. The oldest star yet discovered, HD 140283, nicknamed Methuselah star, is an estimated 14.46 ± 0.8 billion years old. (Due to the uncertainty in the value, this age for the star does not conflict with the age of the universe, determined by the Planck satellite as 13.799 ± 0.021 billion years.)
The more massive the star, the shorter its lifespan, primarily because massive stars have greater pressure on their cores, causing them to burn hydrogen more rapidly. The most massive stars last an average of a few million years, while stars of minimum mass (red dwarfs) burn their fuel very slowly and can last tens to hundreds of billions of years.
### Chemical composition
When stars form in the present Milky Way galaxy, they are composed of about 71% hydrogen and 27% helium, as measured by mass, with a small fraction of heavier elements. Typically the portion of heavy elements is measured in terms of the iron content of the stellar atmosphere, as iron is a common element and its absorption lines are relatively easy to measure. The portion of heavier elements may be an indicator of the likelihood that the star has a planetary system.
The star with the lowest iron content ever measured is the dwarf HE1327-2326, with only 1/200,000th the iron content of the Sun. By contrast, the super-metal-rich star μ Leonis has nearly double the abundance of iron as the Sun, while the planet-bearing star 14 Herculis has nearly triple the iron. Chemically peculiar stars show unusual abundances of certain elements in their spectrum; especially chromium and rare earth elements. Stars with cooler outer atmospheres, including the Sun, can form various diatomic and polyatomic molecules.
### Diameter
Due to their great distance from the Earth, all stars except the Sun appear to the unaided eye as shining points in the night sky that twinkle because of the effect of the Earth's atmosphere. The Sun is close enough to the Earth to appear as a disk instead, and to provide daylight. Other than the Sun, the star with the largest apparent size is R Doradus, with an angular diameter of only 0.057 arcseconds.
The disks of most stars are much too small in angular size to be observed with current ground-based optical telescopes, and so interferometer telescopes are required to produce images of these objects. Another technique for measuring the angular size of stars is through occultation. By precisely measuring the drop in brightness of a star as it is occulted by the Moon (or the rise in brightness when it reappears), the star's angular diameter can be computed.
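Once an angular diameter is known, the physical diameter follows from the small-angle approximation and the distance. A sketch using R Doradus as the example; the ~178 light-year distance is an assumed illustrative value, not stated in the text:

```python
import math

def physical_diameter_m(angular_diameter_arcsec, distance_ly):
    """Small-angle estimate of a star's physical diameter (metres)
    from its angular diameter (arcseconds) and distance (light-years)."""
    ly_m = 9.4607e15  # metres per light-year
    theta_rad = angular_diameter_arcsec * math.pi / (180 * 3600)
    return theta_rad * distance_ly * ly_m

# R Doradus: angular diameter 0.057", distance ~178 ly (assumed)
R_SUN = 6.957e8  # nominal solar radius, m
d = physical_diameter_m(0.057, 178)
print(d / R_SUN)  # several hundred solar radii across
```

Even this "largest" apparent stellar disk subtends far less than a milliradian, which is why interferometry or occultation is needed in the first place.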
Stars range in size from neutron stars, which vary anywhere from 20 to 40 km (25 mi) in diameter, to supergiants like Betelgeuse in the Orion constellation, which has a diameter about 1,000 times that of the Sun with a much lower density.
### Kinematics
The motion of a star relative to the Sun can provide useful information about the origin and age of a star, as well as the structure and evolution of the surrounding galaxy. The components of motion of a star consist of the radial velocity toward or away from the Sun, and the transverse angular movement, which is called its proper motion.
Radial velocity is measured by the Doppler shift of the star's spectral lines and is given in units of km/s. The proper motion of a star is determined by precise astrometric measurements, in units of milliarcseconds (mas) per year. With knowledge of the star's parallax, and hence its distance, the proper motion can be converted into a transverse velocity. Together with the radial velocity, the total velocity can be calculated. Stars with high rates of proper motion are likely to be relatively close to the Sun, making them good candidates for parallax measurements.
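The conversion from proper motion and parallax to a transverse velocity uses the standard relation v<sub>t</sub> = 4.74 μ d (μ in arcsec/yr, d in parsecs, v in km/s). A sketch with approximate values for Barnard's Star, the star with the largest known proper motion:

```python
def transverse_velocity_kms(pm_mas_per_yr, parallax_mas):
    """Transverse velocity in km/s from proper motion (mas/yr) and
    parallax (mas), via v_t = 4.74 * mu * d with mu in arcsec/yr
    and d in parsecs (4.74 converts au/yr to km/s)."""
    mu_arcsec = pm_mas_per_yr / 1000.0
    d_pc = 1000.0 / parallax_mas       # distance in parsecs
    return 4.74 * mu_arcsec * d_pc

# Barnard's Star, approximate values: pm ~ 10,393 mas/yr, parallax ~ 547 mas
print(transverse_velocity_kms(10393, 547))  # ~90 km/s
```

Combining this with a measured radial velocity gives the full space velocity as the quadrature sum of the two components.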
When both rates of movement are known, the space velocity of the star relative to the Sun or the galaxy can be computed. Among nearby stars, it has been found that younger population I stars have generally lower velocities than older, population II stars. The latter have elliptical orbits that are inclined to the plane of the galaxy. A comparison of the kinematics of nearby stars has allowed astronomers to trace their origin to common points in giant molecular clouds, and are referred to as stellar associations.
### Magnetic field
The magnetic field of a star is generated within regions of the interior where convective circulation occurs. This movement of conductive plasma functions like a dynamo, wherein the movement of electrical charges induces magnetic fields, as does a mechanical dynamo. Those magnetic fields have a great range that extends throughout and beyond the star. The strength of the magnetic field varies with the mass and composition of the star, and the amount of magnetic surface activity depends upon the star's rate of rotation. This surface activity produces starspots, which are regions of strong magnetic fields and lower than normal surface temperatures. Coronal loops are arching magnetic field flux lines that rise from a star's surface into the star's outer atmosphere, its corona. The coronal loops can be seen due to the plasma they conduct along their length. Stellar flares are bursts of high-energy particles that are emitted due to the same magnetic activity.
Young, rapidly rotating stars tend to have high levels of surface activity because of their magnetic field. The magnetic field can act upon a star's stellar wind, functioning as a brake to gradually slow the rate of rotation with time. Thus, older stars such as the Sun have a much slower rate of rotation and a lower level of surface activity. The activity levels of slowly rotating stars tend to vary in a cyclical manner and can shut down altogether for periods of time. During the Maunder Minimum, for example, the Sun underwent a 70-year period with almost no sunspot activity.
### Mass
One of the most massive stars known is Eta Carinae, which, with 100–150 times as much mass as the Sun, will have a lifespan of only several million years. Studies of the most massive open clusters suggest as a rough upper limit for stars in the current era of the universe. This represents an empirical value for the theoretical limit on the mass of forming stars due to increasing radiation pressure on the accreting gas cloud. Several stars in the R136 cluster in the Large Magellanic Cloud have been measured with larger masses, but it has been determined that they could have been created through the collision and merger of massive stars in close binary systems, sidestepping the limit on massive star formation.
The first stars to form after the Big Bang may have been larger, up to , due to the complete absence of elements heavier than lithium in their composition. This generation of supermassive population III stars is likely to have existed in the very early universe (i.e., they are observed to have a high redshift), and may have started the production of chemical elements heavier than hydrogen that are needed for the later formation of planets and life. In June 2015, astronomers reported evidence for Population III stars in the Cosmos Redshift 7 galaxy at z = 6.60.
With a mass only 80 times that of Jupiter, 2MASS J0523-1403 is the smallest known star undergoing nuclear fusion in its core. For stars with metallicity similar to the Sun, the theoretical minimum mass the star can have and still undergo fusion at the core, is estimated to be about 75 . When the metallicity is very low, the minimum star size seems to be about 8.3% of the solar mass, or about 87 . Smaller bodies called brown dwarfs occupy a poorly defined grey area between stars and gas giants.
The combination of the radius and the mass of a star determines its surface gravity. Giant stars have a much lower surface gravity than do main sequence stars, while the opposite is the case for degenerate, compact stars such as white dwarfs. The surface gravity can influence the appearance of a star's spectrum, with higher gravity causing a broadening of the absorption lines.
### Rotation
The rotation rate of stars can be determined through spectroscopic measurement, or more exactly determined by tracking their starspots. Young stars can have a rotation greater than 100 km/s at the equator. The B-class star Achernar, for example, has an equatorial velocity of about 225 km/s or greater, causing its equator to bulge outward and giving it an equatorial diameter that is more than 50% greater than between the poles. This rate of rotation is just below the critical velocity of 300 km/s at which speed the star would break apart. By contrast, the Sun rotates once every 25–35 days depending on latitude, with an equatorial velocity of 1.93 km/s. A main sequence star's magnetic field and the stellar wind serve to slow its rotation by a significant amount as it evolves on the main sequence.
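The Sun's equatorial velocity quoted above follows directly from v = 2πR/P. A minimal sketch, assuming rigid rotation (real stars like the Sun rotate differentially with latitude):

```python
import math

def equatorial_velocity_kms(radius_m, period_days):
    """Equatorial rotation speed v = 2*pi*R / P for a rigidly
    rotating sphere, returned in km/s."""
    period_s = period_days * 86400
    return 2 * math.pi * radius_m / period_s / 1000

# The Sun: nominal radius 6.957e8 m, equatorial period ~25.4 days
print(equatorial_velocity_kms(6.957e8, 25.4))  # ~2 km/s, cf. 1.93 km/s in the text
```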
Degenerate stars have contracted into a compact mass, resulting in a rapid rate of rotation. However they have relatively low rates of rotation compared to what would be expected by conservation of angular momentum—the tendency of a rotating body to compensate for a contraction in size by increasing its rate of spin. A large portion of the star's angular momentum is dissipated as a result of mass loss through the stellar wind. In spite of this, the rate of rotation for a pulsar can be very rapid. The pulsar at the heart of the Crab nebula, for example, rotates 30 times per second. The rotation rate of the pulsar will gradually slow due to the emission of radiation.
### Temperature
The surface temperature of a main sequence star is determined by the rate of energy production of its core and by its radius, and is often estimated from the star's color index. The temperature is normally given in terms of an effective temperature, which is the temperature of an idealized black body that radiates its energy at the same luminosity per surface area as the star. The effective temperature is only representative of the surface, as the temperature increases toward the core. The temperature in the core region of a star is several million kelvins.
The stellar temperature will determine the rate of ionization of various elements, resulting in characteristic absorption lines in the spectrum. The surface temperature of a star, along with its visual absolute magnitude and absorption features, is used to classify a star (see classification below).
Massive main sequence stars can have surface temperatures of 50,000 K. Smaller stars such as the Sun have surface temperatures of a few thousand K. Red giants have relatively low surface temperatures of about 3,600 K; but they have a high luminosity due to their large exterior surface area.
## Radiation
The energy produced by stars, a product of nuclear fusion, radiates to space as both electromagnetic radiation and particle radiation. The particle radiation emitted by a star is manifested as the stellar wind, which streams from the outer layers as electrically charged protons and alpha and beta particles. A steady stream of almost massless neutrinos emanates directly from the star's core.
The production of energy at the core is the reason stars shine so brightly: every time two or more atomic nuclei fuse together to form a single atomic nucleus of a new heavier element, gamma ray photons are released from the nuclear fusion product. This energy is converted to other forms of electromagnetic energy of lower frequency, such as visible light, by the time it reaches the star's outer layers.
The color of a star, as determined by the most intense frequency of the visible light, depends on the temperature of the star's outer layers, including its photosphere. Besides visible light, stars emit forms of electromagnetic radiation that are invisible to the human eye. In fact, stellar electromagnetic radiation spans the entire electromagnetic spectrum, from the longest wavelengths of radio waves through infrared, visible light, ultraviolet, to the shortest of X-rays, and gamma rays. From the standpoint of total energy emitted by a star, not all components of stellar electromagnetic radiation are significant, but all frequencies provide insight into the star's physics.
Using the stellar spectrum, astronomers can determine the surface temperature, surface gravity, metallicity and rotational velocity of a star. If the distance of the star is found, such as by measuring the parallax, then the luminosity of the star can be derived. The mass, radius, surface gravity, and rotation period can then be estimated based on stellar models. (Mass can be calculated for stars in binary systems by measuring their orbital velocities and distances. Gravitational microlensing has been used to measure the mass of a single star.) With these parameters, astronomers can estimate the age of the star.
### Luminosity
The luminosity of a star is the amount of light and other forms of radiant energy it radiates per unit of time. It has units of power. The luminosity of a star is determined by its radius and surface temperature. Many stars do not radiate uniformly across their entire surface. The rapidly rotating star Vega, for example, has a higher energy flux (power per unit area) at its poles than along its equator.
Patches of the star's surface with a lower temperature and luminosity than average are known as starspots. Small, dwarf stars such as the Sun generally have essentially featureless disks with only small starspots. Giant stars have much larger, more obvious starspots, and they exhibit strong stellar limb darkening. That is, the brightness decreases towards the edge of the stellar disk. Red dwarf flare stars such as UV Ceti may possess prominent starspot features.
### Magnitude
The apparent brightness of a star is expressed in terms of its apparent magnitude. It is a function of the star's luminosity, its distance from Earth, the extinction effect of interstellar dust and gas, and the altering of the star's light as it passes through Earth's atmosphere. Intrinsic or absolute magnitude is directly related to a star's luminosity, and is the apparent magnitude a star would be if the distance between the Earth and the star were 10 parsecs (32.6 light-years).
Both the apparent and absolute magnitude scales are logarithmic units: one whole number difference in magnitude is equal to a brightness variation of about 2.5 times (the 5th root of 100 or approximately 2.512). This means that a first magnitude star (+1.00) is about 2.5 times brighter than a second magnitude (+2.00) star, and about 100 times brighter than a sixth magnitude star (+6.00). The faintest stars visible to the naked eye under good seeing conditions are about magnitude +6.
On both apparent and absolute magnitude scales, the smaller the magnitude number, the brighter the star; the larger the magnitude number, the fainter the star. The brightest stars, on either scale, have negative magnitude numbers. The variation in brightness (ΔL) between two stars is calculated by subtracting the magnitude number of the brighter star (m<sub>b</sub>) from the magnitude number of the fainter star (m<sub>f</sub>), then using the difference as an exponent for the base number 2.512; that is to say:
$\Delta{m} = m_\mathrm{f} - m_\mathrm{b}$
$2.512^{\Delta{m}} = \Delta{L}$
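The two relations above translate directly into code. This short sketch (an illustration, not part of the source) reproduces the worked figures from the surrounding text—one magnitude of difference is a factor of about 2.512 in brightness, five magnitudes a factor of 100:

```python
def brightness_ratio(m_faint: float, m_bright: float) -> float:
    """Brightness variation between two stars from their magnitudes:
    delta_L = 2.512 ** delta_m, written as 100 ** (delta_m / 5)
    since 2.512 is (approximately) the 5th root of 100."""
    delta_m = m_faint - m_bright
    return 100 ** (delta_m / 5)

print(round(brightness_ratio(2.0, 1.0), 3))  # ≈ 2.512 (one magnitude)
print(brightness_ratio(6.0, 1.0))            # 100.0 (five magnitudes)
```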
Relative to both luminosity and distance from Earth, a star's absolute magnitude (M) and apparent magnitude (m) are not equivalent; for example, the bright star Sirius has an apparent magnitude of −1.44, but it has an absolute magnitude of +1.41.
The Sun has an apparent magnitude of −26.7, but its absolute magnitude is only +4.83. Sirius, the brightest star in the night sky as seen from Earth, is approximately 23 times more luminous than the Sun, while Canopus, the second brightest star in the night sky with an absolute magnitude of −5.53, is approximately 14,000 times more luminous than the Sun. Despite Canopus being vastly more luminous than Sirius, the latter star appears the brighter of the two. This is because Sirius is merely 8.6 light-years from the Earth, while Canopus is much farther away at a distance of 310 light-years.
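The link between apparent and absolute magnitude is the distance modulus, $M = m - 5\log_{10}(d_\mathrm{pc}/10)$, which follows from the 10-parsec definition given above. As a hedged check (the formula is standard, but the rounding of the inputs is mine), plugging in the Sirius figures quoted in the text recovers roughly the stated absolute magnitude:

```python
import math

LY_PER_PARSEC = 3.2616  # light-years in one parsec

def absolute_magnitude(apparent_mag: float, distance_ly: float) -> float:
    """Absolute magnitude: the apparent magnitude the star would have
    if placed at 10 parsecs, M = m - 5 * log10(d_pc / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return apparent_mag - 5 * math.log10(d_pc / 10)

# Sirius: m = -1.44 at 8.6 light-years
print(round(absolute_magnitude(-1.44, 8.6), 2))  # ≈ 1.45; the text quotes +1.41
```

The small discrepancy against the quoted +1.41 comes from the rounded distance and magnitude inputs.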
The most luminous known stars have absolute magnitudes of roughly −12, corresponding to 6 million times the luminosity of the Sun. Theoretically, the least luminous stars are at the lower limit of mass at which stars are capable of supporting nuclear fusion of hydrogen in the core; stars just above this limit have been located in the NGC 6397 cluster. The faintest red dwarfs in the cluster are absolute magnitude 15, while a 17th absolute magnitude white dwarf has been discovered.
## Classification
The current stellar classification system originated in the early 20th century, when stars were classified from A to Q based on the strength of the hydrogen line. It was thought that the hydrogen line strength was a simple linear function of temperature. Instead, it was more complicated: it strengthened with increasing temperature, peaked near 9000 K, and then declined at greater temperatures. The classifications were since reordered by temperature, on which the modern scheme is based.
Stars are given a single-letter classification according to their spectra, ranging from type O, which are very hot, to M, which are so cool that molecules may form in their atmospheres. The main classifications in order of decreasing surface temperature are: O, B, A, F, G, K, and M. A variety of rare spectral types are given special classifications. The most common of these are types L and T, which classify the coldest low-mass stars and brown dwarfs. Each letter has 10 sub-divisions, numbered from 0 to 9, in order of decreasing temperature. However, this system breaks down at extremely high temperatures as classes O0 and O1 may not exist.
In addition, stars may be classified by the luminosity effects found in their spectral lines, which correspond to the star's spatial size and are determined by its surface gravity. These range from 0 (hypergiants) through III (giants) to V (main sequence dwarfs); some authors add VII (white dwarfs). Main sequence stars fall along a narrow, diagonal band when graphed according to their absolute magnitude and spectral type. The Sun is a main sequence G2V yellow dwarf of intermediate temperature and ordinary size.
There is additional nomenclature in the form of lower-case letters added to the end of the spectral type to indicate peculiar features of the spectrum. For example, an "e" can indicate the presence of emission lines; "m" represents unusually strong levels of metals, and "var" can mean variations in the spectral type.
White dwarf stars have their own class that begins with the letter D. This is further sub-divided into the classes DA, DB, DC, DO, DZ, and DQ, depending on the types of prominent lines found in the spectrum. This is followed by a numerical value that indicates the temperature.
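The O-B-A-F-G-K-M ordering by decreasing temperature can be sketched as a simple lookup. The temperature boundaries below are my approximations—the article itself only anchors the extremes (around 50,000 K for hot O-type stars, a few thousand K for cool ones)—and exact cutoffs vary by source:

```python
# Approximate lower temperature bounds (K) for each class; assumed
# values for illustration, not taken from the article.
SPECTRAL_CLASSES = [
    (30_000, "O"),
    (10_000, "B"),
    (7_500, "A"),
    (6_000, "F"),
    (5_200, "G"),
    (3_700, "K"),
]

def spectral_class(t_eff: float) -> str:
    """Main spectral letter for a given effective temperature."""
    for lower_bound, letter in SPECTRAL_CLASSES:
        if t_eff >= lower_bound:
            return letter
    return "M"  # coolest class in the O-B-A-F-G-K-M sequence

print(spectral_class(5772))   # G (the Sun, a G2V dwarf)
print(spectral_class(50000))  # O
```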
## Variable stars
Variable stars have periodic or random changes in luminosity because of intrinsic or extrinsic properties. Of the intrinsically variable stars, the primary types can be subdivided into three principal groups.
During their stellar evolution, some stars pass through phases where they can become pulsating variables. Pulsating variable stars vary in radius and luminosity over time, expanding and contracting with periods ranging from minutes to years, depending on the size of the star. This category includes Cepheid and Cepheid-like stars, and long-period variables such as Mira.
Eruptive variables are stars that experience sudden increases in luminosity because of flares or mass ejection events. This group includes protostars, Wolf-Rayet stars, and flare stars, as well as giant and supergiant stars.
Cataclysmic or explosive variable stars are those that undergo a dramatic change in their properties. This group includes novae and supernovae. A binary star system that includes a nearby white dwarf can produce certain types of these spectacular stellar explosions, including the nova and a Type Ia supernova. The explosion is created when the white dwarf accretes hydrogen from the companion star, building up mass until the hydrogen undergoes fusion. Some novae are recurrent, having periodic outbursts of moderate amplitude.
Stars can vary in luminosity because of extrinsic factors, such as eclipsing binaries, as well as rotating stars that produce extreme starspots. A notable example of an eclipsing binary is Algol, which regularly varies in magnitude from 2.1 to 3.4 over a period of 2.87 days.
## Structure
The interior of a stable star is in a state of hydrostatic equilibrium: the forces on any small volume almost exactly counterbalance each other. The balanced forces are inward gravitational force and an outward force due to the pressure gradient within the star. The pressure gradient is established by the temperature gradient of the plasma; the outer part of the star is cooler than the core. The temperature at the core of a main sequence or giant star is at least on the order of 10<sup>7</sup> K. The resulting temperature and pressure at the hydrogen-burning core of a main sequence star are sufficient for nuclear fusion to occur and for sufficient energy to be produced to prevent further collapse of the star.
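The balance of forces described above is conventionally written as the equation of hydrostatic equilibrium; the form below is the standard textbook statement rather than an equation given in this article:

$\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^2}$

where $P$ is the pressure, $\rho(r)$ the density, $m(r)$ the mass enclosed within radius $r$, and $G$ the gravitational constant; the negative sign expresses that pressure must increase toward the core to support the weight of the overlying layers.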
As atomic nuclei are fused in the core, they emit energy in the form of gamma rays. These photons interact with the surrounding plasma, adding to the thermal energy at the core. Stars on the main sequence convert hydrogen into helium, creating a slowly but steadily increasing proportion of helium in the core. Eventually the helium content becomes predominant, and energy production ceases at the core. Instead, for stars of more than , fusion occurs in a slowly expanding shell around the degenerate helium core.
In addition to hydrostatic equilibrium, the interior of a stable star will maintain an energy balance of thermal equilibrium. There is a radial temperature gradient throughout the interior that results in a flux of energy flowing toward the exterior. The outgoing flux of energy leaving any layer within the star will exactly match the incoming flux from below.
The radiation zone is the region of the stellar interior where the flux of energy outward is dependent on radiative heat transfer, since convective heat transfer is inefficient in that zone. In this region the plasma will not be perturbed, and any mass motions will die out. Where this is not the case, then the plasma becomes unstable and convection will occur, forming a convection zone. This can occur, for example, in regions where very high energy fluxes occur, such as near the core or in areas with high opacity (making radiative heat transfer inefficient) as in the outer envelope.
The occurrence of convection in the outer envelope of a main sequence star depends on the star's mass. Stars with several times the mass of the Sun have a convection zone deep within the interior and a radiative zone in the outer layers. Smaller stars such as the Sun are just the opposite, with the convective zone located in the outer layers. Red dwarf stars with less than are convective throughout, which prevents the accumulation of a helium core. For most stars the convective zones will vary over time as the star ages and the constitution of the interior is modified.
The photosphere is that portion of a star that is visible to an observer. This is the layer at which the plasma of the star becomes transparent to photons of light. From here, the energy generated at the core becomes free to propagate into space. It is within the photosphere that starspots, regions of lower than average temperature, appear.
Above the level of the photosphere is the stellar atmosphere. In a main sequence star such as the Sun, the lowest level of the atmosphere, just above the photosphere, is the thin chromosphere region, where spicules appear and stellar flares begin. Above this is the transition region, where the temperature rapidly increases within a distance of only 100 km (62 mi). Beyond this is the corona, a volume of super-heated plasma that can extend outward to several million kilometres. The existence of a corona appears to be dependent on a convective zone in the outer layers of the star. Despite its high temperature, the corona emits very little light, due to its low gas density. The corona region of the Sun is normally only visible during a solar eclipse.
From the corona, a stellar wind of plasma particles expands outward from the star, until it interacts with the interstellar medium. For the Sun, the influence of its solar wind extends throughout a bubble-shaped region called the heliosphere.
## Nuclear fusion reaction pathways
When nuclei fuse, the mass of the fused product is less than the mass of the original parts. This lost mass is converted to electromagnetic energy, according to the mass–energy equivalence relationship $E=mc^2$. A variety of nuclear fusion reactions take place in the cores of stars, that depend upon their mass and composition.
The hydrogen fusion process is temperature-sensitive, so a moderate increase in the core temperature will result in a significant increase in the fusion rate. As a result, the core temperature of main sequence stars only varies from 4 million kelvin for a small M-class star to 40 million kelvin for a massive O-class star.
In the Sun, with a 16-million-kelvin core, hydrogen fuses to form helium in the proton–proton chain reaction:
4<sup>1</sup>H → 2<sup>2</sup>H + 2e<sup>+</sup> + 2ν<sub>e</sub> (2 × 0.4 MeV)
2e<sup>+</sup> + 2e<sup>−</sup> → 2γ (2 × 1.0 MeV)
2<sup>1</sup>H + 2<sup>2</sup>H → 2<sup>3</sup>He + 2γ (2 × 5.5 MeV)
2<sup>3</sup>He → <sup>4</sup>He + 2<sup>1</sup>H (12.9 MeV)
There are a couple of other paths, in which <sup>3</sup>He and <sup>4</sup>He combine to form <sup>7</sup>Be, which eventually (with the addition of another proton) yields two <sup>4</sup>He nuclei—a net gain of one.
All these reactions result in the overall reaction:
4<sup>1</sup>H → <sup>4</sup>He + 2γ + 2ν<sub>e</sub> (26.7 MeV)
where γ is a gamma ray photon, ν<sub>e</sub> is a neutrino, and H and He are isotopes of hydrogen and helium, respectively. The energy released by this reaction is in millions of electron volts. Each individual reaction produces only a tiny amount of energy, but because enormous numbers of these reactions occur constantly, they produce all the energy necessary to sustain the star's radiation output. In comparison, the combustion of two hydrogen gas molecules with one oxygen gas molecule releases only 5.7 eV.
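The 26.7 MeV figure can be checked against the mass–energy equivalence $E = mc^2$ introduced later in this section: the helium-4 product is slightly lighter than the four hydrogen nuclei that formed it, and the mass defect is the released energy. The atomic masses and conversion factor below are standard reference values, not quoted in the article:

```python
# Check of the 26.7 MeV release via the mass defect of 4 H -> He.
U_TO_MEV = 931.494   # energy equivalent of 1 unified mass unit, MeV/c^2
M_H1 = 1.007825     # atomic mass of hydrogen-1, u
M_HE4 = 4.002602    # atomic mass of helium-4, u

mass_defect_u = 4 * M_H1 - M_HE4
energy_mev = mass_defect_u * U_TO_MEV
print(round(energy_mev, 1))  # ≈ 26.7 MeV

# Only about 0.7% of the input mass is converted to energy:
print(round(mass_defect_u / (4 * M_H1), 4))
```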
In more massive stars, helium is produced in a cycle of reactions catalyzed by carbon called the carbon-nitrogen-oxygen cycle.
In evolved stars with cores at 100 million kelvin and masses between 0.5 and , helium can be transformed into carbon in the triple-alpha process that uses the intermediate element beryllium:
<sup>4</sup>He + <sup>4</sup>He + 92 keV → <sup>8\*</sup>Be
<sup>4</sup>He + <sup>8\*</sup>Be + 67 keV → <sup>12\*</sup>C
<sup>12\*</sup>C → <sup>12</sup>C + γ + 7.4 MeV
For an overall reaction of:
3<sup>4</sup>He → <sup>12</sup>C + γ + 7.2 MeV
In massive stars, heavier elements can be burned in a contracting core through the neon-burning process and oxygen-burning process. The final stage in the stellar nucleosynthesis process is the silicon-burning process that results in the production of the stable isotope iron-56. Any further fusion would be an endothermic process that consumes energy, and so further energy can only be produced through gravitational collapse.
## See also
- Fusor (astronomy)
- Outline of astronomy
- Sidereal time
- Star clocks
- Star count
- Stars and planetary systems in fiction
# Scientific Detective Monthly

US pulp science fiction magazine
Scientific Detective Monthly (also known as Amazing Detective Tales and Amazing Detective Stories) was a pulp magazine that published fifteen issues beginning in January 1930. It was launched by Hugo Gernsback as part of his second venture into science-fiction magazine publishing, and was intended to focus on detective and mystery stories with a scientific element. Many of the stories involved contemporary science without any imaginative elements—for example, a story in the first issue turned on the use of a bolometer to detect a black girl blushing—but there were also one or two science fiction stories in every issue.
The title was changed to Amazing Detective Tales with the June 1930 issue, perhaps to avoid the word "scientific", which may have given readers the impression of "a sort of scientific periodical", in Gernsback's words, rather than a magazine intended to entertain. At the same time, the editor—Hector Grey—was replaced by David Lasser, who was already editing Gernsback's other science-fiction magazines. The title change apparently did not make the magazine a success, and Gernsback closed it down with the October issue. He sold the title to publisher Wallace Bamber, who produced at least five more issues in 1931 under the title Amazing Detective Stories.
## Publication history
By the end of the 19th century, stories that were centered on scientific inventions and set in the future, in the tradition of Jules Verne, were appearing regularly in popular fiction magazines. The first science fiction (sf) magazine, Amazing Stories, was launched in 1926 by Hugo Gernsback at the height of the pulp magazine era. It was successful, and helped to form science fiction as a separately marketed genre, but in February 1929 Gernsback lost control of the publisher when it went bankrupt. By April he had formed a new company, Gernsback Publications Incorporated, and created two subsidiaries: Techni-Craft Publishing Corporation and Stellar Publishing Corporation. In the middle of the year he launched three new magazines: a non-sf magazine titled Radio Craft, and two sf pulps titled Science Wonder Stories and Air Wonder Stories. These were followed in September 1929 by the first issue of Science Wonder Quarterly, and in October Gernsback sent a letter to some of the writers he had already bought material from, letting them know that he was seeing more demand for "detective or criminal mystery stories with a good scientific background". He named Arthur B. Reeve's "Craig Kennedy" stories as an example, and also mentioned S.S. Van Dine's "Philo Vance" stories, which were very popular at the time. In the January 1930 issue of both the sf magazines, Gernsback advertised the new magazine that he hoped to populate with these stories: Scientific Detective Monthly.
Gernsback believed that science fiction was educational, claiming, for example, that "teachers encourage the reading of this fiction because they know that it gives the pupil a fundamental knowledge of science and aviation". He intended Scientific Detective Monthly to be a detective magazine in which the stories had a scientific background; it would entertain, but also instruct. The subgenre of scientific detective fiction was not new; it had first become popular in the U.S. between 1909 and 1919, and the appearance of Gernsback's magazine was part of a resurgence of popularity in the subgenre at the end of the 1920s. The first issue was dated January 1930 (meaning it would have been on the newsstands in mid-December 1929). The publisher was Techni-Craft Publishing company based in New York City. Gernsback was editor-in-chief, and had final say on the choice of stories, but the editorial work was done by his deputy, Hector Grey.
In February 1930, an article by Gernsback appeared in Writers' Digest titled "How to Write 'Science' Stories". In it, Gernsback offered advice on how to write stories for his new magazine, claiming that scientific detective stories represented the future of the genre, and that "the ordinary gangster and detective story will be relegated into the background in a very few years". Science fiction historian Gary Westfahl comments that the article also serves as a guide to writing science fiction in general, and that the article is the first "how to" article published for the new genre of science fiction.
With the June issue, the title was changed to Amazing Detective Tales. Gernsback merged Science Wonder Stories and Air Wonder Stories into Wonder Stories at the same time; he was concerned that the word "Science" was putting off some potential readers, who assumed that the magazine was, in his words, "a sort of scientific periodical". It is likely that the same reasoning motivated Scientific Detective Monthly's new title. In the following issue, Grey was replaced as editor by David Lasser, who was already editing Gernsback's other sf titles, and an attempt was made to include more stories with science fiction elements. Gernsback continued the magazine for five issues under the new title; the last issue was dated October 1930. The decision to cease publication was apparently taken suddenly, as the October issue included the announcement that the format would change in November from large to standard pulp size, and listed two stories planned for the November issue. Gernsback sold the title to Wallace Bamber, who published at least five more issues, starting in February 1931; no issues are known for June or July 1931, or after August.
## Contents
The stories in Scientific Detective Monthly were almost always detective stories, but they were only occasionally science fiction, as in many cases the science appearing in the stories already had practical applications. In the first issue, for example, "The Mystery of the Bulawayo Diamond", by Arthur B. Reeve, mentions unusual science, but the mystery is solved by the use of a bolometer to detect a blush on the face of a black woman. The murderer in "The Campus Murder Mystery", by Ralph W. Wilkins, freezes the body to conceal the manner of death; a chemical catalyst and electrical measurements of palm sweat provide the scientific elements in two other stories in the same issue. The only genuine science fiction story in the first issue is "The Perfect Counterfeit" by Captain S.P. Meek, in which a matter duplicator has been used to counterfeit paper money. Van Dine's Philo Vance novel, The Bishop Murder Case, began serialization in the first issue, which probably assisted sales, since the hardcover edition of the novel, which had appeared only a few months previously, had sold well. It was not science fiction, however, and throughout the magazine's run, only one or two stories per issue include elements that would qualify them as science fiction. Mike Ashley, a historian of the field, suggests that Gernsback was more interested in stories about the science of detection than in imaginary science: most of Scientific Detective Monthly's contents were gadget stories, of a kind which Gernsback had been publishing in his other magazines for some time. The cover for the first issue, by Jno Ruger, showed a detective using an electronic device to measure the reactions of a suspect.
Later issues included stories by some writers who either were already well known to readers of science fiction or would soon become so, including Lloyd Arthur Eshbach, David H. Keller, Ed Earl Repp, Neil R. Jones, and Edmond Hamilton, though even these stories were not always science fiction. Hamilton's "The Invisible Master", for example, describes a way to become invisible, but at the end of the story the science is revealed to be a hoax, and the story is straightforward detective fiction. Clark Ashton Smith, later to be better known for his fantasy than for science fiction, contributed "Murder in the Fourth Dimension" to the October 1930 issue; the protagonist uses the fourth dimension to dispose of his victim's corpse.
As well as fiction, there were some non-fiction departments, including readers' letters (even in the first issue—Gernsback obtained letters by advertising the magazine to readers who subscribed to his other magazines), book reviews, and miscellaneous crime or science-related fillers. The first issue included a test of the readers' powers of observation: it showed a crime scene, which the readers were supposed to study, and then posed questions to see how much they could remember of the details. There was also a questionnaire about science, which asked about scientific facts mentioned in the stories, and a "Science-Crime Notes" section containing news items about science and crime. Gernsback's editorial argued that science would eventually end crime, and suggested that both the police and criminals would make growing use of scientific innovations in the future. Gernsback included on the masthead the names of several experts on crime, such as Edwin Cooley, a professor of criminology at Fordham University; he also listed members of his staff on the masthead with made-up titles: C.P. Mason, a member of his editorial staff, was listed as "Scientific Criminologist", for example.
Following the sale, Bamber filled the magazine with ordinary detective fiction, including Edgar Wallace's The Feathered Serpent.
The first few covers of the magazine did not advertise the names of the authors whose work was inside, which was probably a mistake as existing science fiction readers might have been attracted by the names of writers with whom they were familiar. Conversely, the readers who might have been interested in the more sedate topics covered by the non-fiction were probably discouraged by the lurid cover artwork. Gernsback was unable to obtain enough fiction to make Scientific Detective Monthly a true mixture of the two genres, and the result was a magazine that failed to fully appeal to fans of either genre. It was, in the words of historian Robert Lowndes, a "fascinating experiment", but a failed one.
## Bibliographic details
Scientific Detective Monthly was published by Techni-Craft Publishing Co. of New York for the first ten issues, and then by Fiction Publishers, Inc., also of New York. The editor-in-chief was Hugo Gernsback for the first ten issues; the managing editor was Hector Grey for the first six issues, and David Lasser for the next four. The editor for the 1931 issues is not known. The first volume contained ten numbers, the second contained four, and the last contained only one. The title changed to Amazing Detective Tales with the June 1930 issue, and again to Amazing Detective Stories in February 1931. The magazine was in large pulp format throughout; it was 96 pages long and priced at 25 cents.
# Throffer

In political philosophy, a type of proposal
In political philosophy, a throffer is a proposal (also called an intervention) that mixes an offer with a threat which will be carried out if the offer is not accepted. The term was first used in print by political philosopher Hillel Steiner; while other writers followed, it has not been universally adopted and it is sometimes considered synonymous with carrot and stick. Though the threatening aspect of a throffer need not be obvious, or even articulated at all, an overt example is: "Kill this man and receive £100; fail to kill him and I'll kill you."
Steiner differentiated offers, threats and throffers based on the preferability of compliance and noncompliance for the subject when compared to the normal course of events that would have come about were no intervention made. Steiner's account was criticised by philosopher Robert Stevens, who instead suggested that what was important in differentiating the kinds of intervention was whether performing or not performing the requested action was more or less preferable than it would have been were no intervention made. Throffers form part of the wider moral and political considerations of coercion, and form part of the question of the possibility of coercive offers. Contrary to received wisdom that only threats can be coercive, throffers lacking explicit threats have been cited as an example of coercive offers, while some writers argue that offers, threats and throffers may all be coercive if certain conditions are met. For others, by contrast, if a throffer is coercive, it is explicitly the threat aspect that makes it so, and not all throffers can be considered coercive.
The theoretical concerns surrounding throffers have been practically applied concerning workfare programmes. In such systems, individuals receiving social welfare have their aid decreased if they refuse the offer of work or education. Robert Goodin criticised workfare programmes which presented throffers to individuals receiving welfare, and was responded to by Daniel Shapiro, who found his objections unconvincing. Several writers have also observed that throffers presented to people convicted of crimes, particularly sex offenders, can result in more lenient sentences if they accept medical treatment. Other examples are offered by psychiatrist Julio Arboleda-Flórez, who presents concerns about throffers in community psychiatry, and management expert John J. Clancey, who talks about throffers in employment.
## Origin and usage
The term throffer is a portmanteau of threat and offer. It was first used by Canadian philosopher Hillel Steiner in a 1974–75 Proceedings of the Aristotelian Society article. Steiner had considered a quote from the 1972 film The Godfather: "I'm gonna make him an offer he can't refuse". While the line seemed to be amusingly ironic (because a threat is being made, not an offer), Steiner was unsatisfied that the difference between an offer and a threat was merely that one promises to confer a benefit and the other a penalty. He thus coined throffer to describe the "offer" in The Godfather. One prominent thinker who adopted the term was political scientist Michael Taylor, and his work on throffers has been frequently cited.
Throffer has not, however, been universally adopted; Michael R. Rhodes notes that there has been some controversy in the literature on whether to use throffer, citing a number of writers, including Lawrence A. Alexander, David Zimmerman and Daniel Lyons, who do not use the term. Some, including political scientists Deiniol Jones and Andrew Rigby, consider throffer to be synonymous with carrot and stick, an idiom which refers to the way a donkey is offered a carrot to encourage compliance, while noncompliance is punished with a stick. Other writers, while electing to use the word, consider it a poor one. For instance, literary scholar Daniel Shore calls it "a somewhat unfortunate term", while using it in his analysis of John Milton's Paradise Regained.
## Definitions
In addition to Steiner's original account of throffers, other authors have suggested definitions and ideas on how to differentiate throffers from threats and offers.
### Steiner's account
In the article that introduces the term throffer, Steiner considers the difference between interventions in the form of a threat and those in the form of an offer. He concludes that the distinction is based on how the consequences of compliance or noncompliance differ for the subject of the intervention when compared with "the norm". Steiner observes that a concept of "normalcy" is presupposed in literature on coercion, as changes in well-being for the subject of an intervention are not merely relative, but absolute; any possibility of an absolute change requires a standard, and this standard is "the description of the normal and predictable course of events, that is, the course of events which would confront the recipient of the intervention were the intervention not to occur".
For an offer, such as "you may use my car whenever you like", the consequence of compliance "represents a situation which is preferred to the norm". Noncompliance, that is, not taking up the offer of the use of the car, is identical to the norm, and so neither more nor less preferable. Threats, on the other hand, are characterised by compliance that leads to an outcome less preferable to the norm, with noncompliance leading to an outcome less desirable still. For instance, if someone is threatened with "your money or your life", compliance would lead to them losing their money, while noncompliance would lead to them losing their life. Both are less desirable than the norm (that is, not being threatened at all), but, for the subject of the threat, losing money is more desirable than being killed. A throffer is a third kind of intervention. It differs from both a threat and an offer, as compliance is preferable to the norm, while noncompliance is less preferable than the norm.
For Steiner, offers, threats and throffers all affect the practical deliberations of their recipients in the same way. What is significant for the subject of the intervention is not the extent to which the consequences of compliance or noncompliance differ in desirability from the norm, but the extent to which they differ in desirability from each other. Thus, an offer does not necessarily exert less influence on its recipient than a threat. The strength of the force exerted by an intervention depends upon the difference in desirability between compliance and noncompliance alone, regardless of the manner of the intervention.
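Steiner's classification amounts to a decision rule over three desirability values. The following Python sketch is an illustration only, not part of Steiner's text; the function names and numeric desirabilities are hypothetical:

```python
def classify_steiner(compliance: float, noncompliance: float, norm: float) -> str:
    """Classify an intervention under Steiner's account.

    Arguments are the desirabilities, to the recipient, of the outcome of
    compliance, of noncompliance, and of "the norm" -- the course of events
    that would occur were the intervention not made.
    """
    if compliance > norm and noncompliance == norm:
        return "offer"      # compliance beats the norm; declining changes nothing
    if noncompliance < compliance < norm:
        return "threat"     # both outcomes are worse than the norm
    if compliance > norm and noncompliance < norm:
        return "throffer"   # compliance beats the norm, noncompliance falls below it
    return "unclassified"


def intervention_force(compliance: float, noncompliance: float) -> float:
    # For Steiner, the strength of an intervention depends only on the gap in
    # desirability between compliance and noncompliance, not on the norm.
    return abs(compliance - noncompliance)
```

On this rule, "you may use my car whenever you like" scores as an offer, "your money or your life" as a threat, and a proposal pairing a reward for compliance with a penalty for noncompliance as a throffer.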
### Stevens's account
Responding to Steiner, Robert Stevens provides examples of what he categorises variously as offers, threats and throffers that fail to meet Steiner's definitions. He gives an example of an intervention he considers a throffer, as opposed to a threat, but in which both compliance and noncompliance are less preferable than the norm. The example is that of someone who makes the demand "either you accept my offer of a handful of beans for your cow, or I kill you". For the subject, keeping the cow is preferred to both compliance and noncompliance with the throffer. Using this and other examples, Stevens argues that Steiner's account of differentiating the three kinds of interventions is incorrect.
In its place, Stevens suggests that determining whether an intervention is a throffer depends not on the desirability of compliance and noncompliance when compared to the norm, but on the desirability of the actions entailed in compliance or noncompliance when compared with what their desirability would have been were no intervention made. He proposes that a throffer is made if P attempts to encourage Q to do A by increasing "the desirability to Q of Q doing A relative to what it would have been if P made no proposal and decrease the desirability to Q of Q doing not-A relative to what it would have been if P made no proposal". An offer, by contrast, increases the desirability to Q of Q doing A compared to how it would have been without P's intervention, leaving the desirability to Q of Q doing not-A as it would have been. A threat decreases the desirability to Q of Q doing not-A compared to what it would have been without P's intervention, while leaving the desirability to Q of Q doing A as it would have been.
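Stevens's alternative test compares the desirability of each course of action with what it would have been had no proposal been made. The Python sketch below is illustrative only; the names and values are hypothetical:

```python
def classify_stevens(dA_with: float, dA_without: float,
                     dnotA_with: float, dnotA_without: float) -> str:
    """Classify P's proposal that Q do A, following Stevens's account.

    dA_*    -- desirability to Q of doing A, with / without P's proposal
    dnotA_* -- desirability to Q of doing not-A, with / without the proposal
    """
    raises_a = dA_with > dA_without            # proposal makes doing A more desirable
    lowers_not_a = dnotA_with < dnotA_without  # proposal makes not-A less desirable
    if raises_a and lowers_not_a:
        return "throffer"   # both levers pulled at once
    if raises_a and dnotA_with == dnotA_without:
        return "offer"      # not-A left as it would have been
    if lowers_not_a and dA_with == dA_without:
        return "threat"     # A left as it would have been
    return "unclassified"
```

Unlike Steiner's rule, this test makes no reference to a norm, only to the counterfactual desirabilities absent the proposal.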
### Kristjánsson's account
Political philosopher Kristján Kristjánsson differentiates threats and offers by explaining that the former is a proposal that creates an obstacle, while the latter is one kind of proposal (another example being a request) that does not. He also draws a distinction between tentative proposals and final proposals, which he feels earlier authors ignored. A tentative proposal does not logically create any kind of obstacle for its subject, and, as such, is an offer. For instance, "if you fetch the paper for me, you'll get candy" is a tentative proposal, as it does not logically entail that a failure to fetch the paper will result in no candy; it is possible that candy can be acquired by another route. In other words, if the subject fetches the paper, then they get candy. By contrast, if the proposal was a final proposal, it would take the form of "if and only if you fetch the paper for me, you'll get candy". This entails that candy can only be acquired if the subject fetches the paper, and no other way. For Kristjánsson, this kind of final proposal constitutes a throffer. There is an offer to fetch the paper ("if"), and a threat that candy can only be acquired through this route ("only if"). As such, an obstacle has been placed on the route of acquiring candy.
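Kristjánsson's contrast between "if" and "if and only if" can be read as a difference in which routes to the good remain open. The sketch below is hypothetical and not part of Kristjánsson's account; the function names are illustrative:

```python
# Tentative proposal: "if you fetch the paper, you'll get candy".
# Fetching the paper is *a* route to candy; other routes may remain open,
# so no obstacle is created -- on Kristjánsson's account, an offer.
def candy_tentative(fetched_paper: bool, other_route: bool) -> bool:
    return fetched_paper or other_route

# Final proposal: "if and only if you fetch the paper, you'll get candy".
# The "only if" closes every alternative route (the argument is ignored),
# which is the threat aspect -- making the whole proposal a throffer.
def candy_final(fetched_paper: bool, other_route: bool) -> bool:
    return fetched_paper
```

The difference shows when the subject declines: under the tentative proposal candy may still be reachable another way, while under the final proposal it is not.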
Previous authors (Kristjánsson cites Joel Feinberg, Alan Wertheimer and Robert Nozick) provided moral and statistical analyses of various thought experiments to determine whether the proposals they involve are threats or offers. On Kristjánsson's account, by contrast, all of the thought experiments considered are throffers. Instead, he argues, the previous thinkers' analyses attempted to differentiate offers that limit freedom from those that do not. They conflate two tasks, that of differentiating threats and offers and that of differentiating freedom-restricting threats from non-freedom-restricting threats. He concludes that the thinkers' methods are also inadequate for determining the difference between freedom-restricting and non-freedom-restricting threats, for which a test of moral responsibility would be required.
### Rhodes's account
Political philosopher and legal theorist Michael R. Rhodes offers an account of threats, offers and throffers based upon the perception of the subject of the proposal (and, in the case of proposals from agents as opposed to nature, the perception of the agent making the proposal). Rhodes presents seven different motivational-want-structures, that is, seven reasons why P may want to do what leads to B:
1. W<sub>1</sub> (intrinsic-attainment-want): "B is wanted in and of itself; B is perceived by P with immediate approbation; B is valued in and of itself by P."
2. W<sub>2</sub> (extrinsic-attainment-want): "B is perceived by P as a means to E where E is an intrinsic-attainment-want."
3. W<sub>3</sub> (compound-attainment-want): "B is both an intrinsic-attainment-want and an extrinsic-attainment-want; B is both W<sub>1</sub> and W<sub>2</sub>."
4. W<sub>4</sub> (extrinsic-avoidance-want): "B is perceived by P as a means of avoiding F where F is perceived by P with immediate disapprobation (F is feared by, or F is threatening to, P)."
5. W<sub>5</sub> (complex-want-type-A): "B is both W<sub>1</sub> and W<sub>4</sub>."
6. W<sub>6</sub> (complex-want-type-B): "B is both W<sub>2</sub> and W<sub>4</sub>."
7. W<sub>7</sub> (complex-want-type-C): "B is both W<sub>3</sub> and W<sub>4</sub>."
Proposals that motivate P to act because of W<sub>1</sub>, W<sub>2</sub> or W<sub>3</sub> represent offers. Those that do so because of W<sub>4</sub> represent threats. Rhodes notes that offers and threats are asymmetrical: while an offer requires only a slight approbation, a high degree of disapprobation is required before a proposal can be called a threat. The disapprobation must be high enough to provoke the "perception of a threat and correlative sense of fear". Rhodes labels as throffers those proposals that motivate P to act because of W<sub>5</sub>, W<sub>6</sub> or W<sub>7</sub>, but notes that the name is not universally used.
For Rhodes, throffers cannot merely be biconditional proposals. If Q proposes that P pay \$10,000 so that Q withholds information that would lead to P's arrest, then despite the fact that the proposal is biconditional (that is, P may choose to pay or not pay, which would lead to different outcomes) it is not a throffer. This is because choosing to pay cannot be considered attractive for P independent of Q's proposal. P's paying of Q does not lead to the satisfaction of an attainment-want, which is a necessary condition for a proposal's being an offer under Rhodes's account. The exception to this is when an agent offers to help another overcome a background threat (a threat that was not introduced by the proposal). Biconditionals, in addition to either threats or offers, may contain neutral proposals, and so not be throffers. The possibility of another agent's not acting is necessarily neutral. Throffers are those biconditional proposals that contain both a threat and an offer, as opposed to biconditional proposals containing a threat and a neutral proposal, or an offer and a neutral proposal. In the case of throffers, it is always difficult or even impossible to determine whether an agent acts on the threatening aspect of the proposal or on the offer.
## Throffers and coercion
Consideration of throffers forms part of the wider question of coercion and, specifically, the possibility of a coercive offer. Determining whether throffers are coercive, and, if so, to what extent, is difficult. The traditional assumption is that offers cannot be coercive, only threats can, but throffers can challenge this. The threatening aspect of a throffer need not be explicit, as it was in Steiner's examples. Instead, a throffer may take the form of an offer, but carry an implied threat. Philosopher John Kleinig sees a throffer as an example of an occasion when an offer alone may be considered coercive. Another example of a coercive offer may be when the situation in which the offer is made is already unacceptable; for instance, if a factory owner takes advantage of a poor economic environment to offer workers an unfair wage. For Jonathan Riley, a liberal society has a duty to protect its citizens from coercion, whether that coercion comes from a threat, offer, throffer or some other source. "If other persons ... attempt to frustrate the right-holder's wants, then a liberal society must take steps to prevent this, by law if necessary. All exercises of power by others to frustrate the relevant individual or group preferences constitute unwarranted 'interference' with liberty in purely private matters."
Ian Hunt concurs that offers may be considered coercive, and claims that, whatever form the interventions take, they may be considered coercive "when they are socially corrigible influences over action that diminish an agent's freedom overall". He accepts that a possible objection to his claim is that at least some coercive offers do seemingly increase the freedom of their recipients. For instance, in the thought experiment of the lecherous millionaire, a millionaire offers a mother money for treatment for her son's life-threatening illness in exchange for her becoming the millionaire's mistress. Joel Feinberg considers the offer coercive, but in offering a possibility of treatment, the millionaire has increased the options available to the mother, and thus her freedom. For Hunt, Feinberg "overlooks the fact that the millionaire's offer opens the option of [the mother] saving her child on condition that the option of not being [the millionaire's] mistress is closed". Hunt does not see the mother as more free; "while it is clear that she has a greater capacity to pursue her interests as a parent once the offer has been made, and to that extent can be regarded as freer, it is clear also that her capacity to pursue her sexual interests may have been diminished." Every coercive proposal, whether threat, offer or throffer, according to Hunt, contains a simultaneous loss and gain of freedom. Kristjánsson, by contrast, argues that Feinberg's account of "coercive offers" is flawed because these are not offers at all, but throffers.
Peter Westen and H. L. A. Hart argue that throffers are not always coercive, and, when they are, it is specifically the threat that makes them so. For a throffer to be coercive, they claim, the threat must meet three further conditions; firstly, the person making the throffer "must be intentionally bringing the threat to bear on X in order that X do something, Z<sub>1</sub>", secondly, the person making the throffer must know that "X would not otherwise do or wish to be constrained to do" Z<sub>1</sub>, and, thirdly, the threat part of the throffer must render "X's option of doing Z<sub>1</sub> more eligible in X's eyes than it would otherwise be". As such, for the authors, there is the possibility of non-coercive throffers. The pair present three possible examples. Firstly, when the threat aspect of the throffer is a joke; secondly, when the offer aspect is already so desirable to the subject that the threat does not affect their decision-making; or, thirdly, when the subject mistakenly believes the threat immaterial because of the attractiveness of the offer. Rhodes similarly concludes that if a throffer is coercive, it is because of the threatening aspect. For him, the question is "whether one regards the threat component of a throffer as both a necessary and sufficient condition of the performance of a behaviour". He argues that if the offer without the threat would have been enough for the agent subject to the proposal to act, then the proposal is not coercive. However, if both offer and threat aspects of the throffer are motivating factors, then it is difficult to determine whether the agent subject to the proposal was coerced. He suggests that differentiating between "pure coercion" and "partial coercion" may help solve this problem, and that the question of coercion in these cases is one of degree.
## Practical examples
The conceptual issues around throffers are practically applied in studies in a number of areas, but the term is also used outside of academia. For instance, it has seen use in British policing and in British courts.
### Workfare
Conceptual thinking about throffers is practically applied in considerations of conditional aid, such as is used in workfare systems. For philosopher and political theorist Gertrude Ezorsky, the denial of welfare when subjects refuse work is the epitome of a throffer. Conditional welfare is also labelled a throffer by political philosopher Robert Goodin. In the words of Daniel Shapiro, also a political philosopher, the offer aspect of workfare is seen in the "benefits one receives if one learns new skills, gets a job, alters destructive behaviors and the like", while the threat aspect is executed with "the elimination or reduction of aid, if the person does not, after a certain period of time, accept the offer". For Goodin, the moral questionability of the threat aspect of a throffer is generally mitigated by the attractiveness of the offer aspect. In this way, workfare can represent a "genuine" throffer, but only when a person receiving welfare payments does not need the payments to survive, and so possesses a genuine choice as to whether to accept the throffer. When, however, an individual would be unable to survive if he or she stopped receiving welfare payments, there is no genuine choice; the individual is, for Goodin, unable to refuse the throffer. This cancels out the morally mitigating factor usually possessed by a throffer. This is presented as an argument against workfare, and Goodin anticipates that advocates would respond paternalistically by claiming that, regardless of issues of freedom, the individual in question would benefit from taking part in the work or education offered.
Shapiro responds to Goodin's argument by challenging his factual assumption that individuals would starve if they refused the workfare throffer. In state-sponsored (see welfare state) workfare systems, he claims, only monetary assistance is eliminated by a refusal to accept the throffer, while in private systems (that is, non-state charities or organisations offering conditional aid), other groups than the one operating a workfare system exist. In either system, recipients of welfare may also turn to family and friends for help. For these reasons, he does not consider the throffer to be unrefusable in the cases in which Goodin believes it is. A second (and, Shapiro claims, more important) objection is also presented. State welfare without sanctions fails to mirror the way that working individuals who do not rely on welfare payments take responsibility for their lives. If a person who works stops working, Shapiro observes, then they will typically find their economic situation worsened. Unconditional state welfare does not reflect this, and instead reflects the unusual position of the person who would be no worse off if they refused to work. As unconditional welfare does not mirror the situation of ordinary workers, it is unable to determine whether or not people are willing to take responsibility for their lives.
For Ivar Lødemel and Heather Trickey, editors of 'An Offer You Can't Refuse': Workfare in International Perspective, workfare programmes' reliance on compulsion makes them throffers. Citing the Danish model as a particular example, the pair argue that workfare involves the use of compulsory offers; while the work or education is presented as an offer, because recipients of welfare are dependent upon the help they would lose if they refuse the offer, they effectively have no choice. The compulsive aspect reveals that at least some recipients of welfare, in the eyes of policy makers, require coercion before they will accept offers of work. Neither the chance of paid work nor participation in labour schemes is, alone, enough to encourage some to freely accept the offers they receive. Such compulsion serves to reintegrate people into the labour market, and serves as a kind of "new paternalism". The authors are concerned about this compulsion, and present several arguments against it, drawn from or possible within the literature: firstly, it impacts the rights of those against whom it is used. This may make it objectionable in and of itself, or it may result in undesirable outcomes. Secondly, it can be argued that benefits must be unconditional in order to act as a genuine safety net. Thirdly, compulsion undermines consumer feedback, and so no differentiation can be made between good and poor programmes presented to those receiving welfare. Fourthly, such coercion may contribute to a culture of resistance among those receiving welfare.
### Prisoners and mental health
Forensic psychologist Eric Cullen and prison governor Tim Newell claim that prisoners face a throffer once they are told that they must acknowledge their guilt before they are offered parole or moved to an open prison. Cullen and Newell cite the example of a prisoner who falsely admitted guilt to move to an open prison; once there, however, he felt he could no longer lie about his guilt, and confessed to the prison's governor. He was subsequently transferred back to a maximum security prison. In the case of sex offenders, a throffer is presented when they are offered release if they take up treatment, but are threatened with extended sentences if they do not. Cullen and Newell are concerned about the predicament that these throffers present to prisoners, including those found innocent on appeal. Concerns surrounding throffers proposed to convicted sex offenders have also been discussed in print by Alex Alexandrowicz, himself wrongly imprisoned, and criminologist David Wilson. The latter observed the difficulties for those innocent people wrongly imprisoned who are faced with the throffer of having their sentence shortened if they "acknowledge their guilt", but noted that, as perspectives of prisoners were rarely considered, the problem is usually not visible.
Likewise, therapeutic treatment of non-criminals with mental health problems can be considered in terms of throffers. In community psychiatry, patients with mental health problems will sometimes be presented with the provision of social services, such as financial or housing aid, in exchange for changing their lifestyle and reporting for the administration of medicines. Psychiatrist Julio Arboleda-Flórez considers these throffers a form of social engineering, and worries that they
> have multiple implications in regard to coercive mechanisms from implicit curtailments of freedom to ascription of vulnerability. The former would include threats to personal autonomy, instilling fear in regard to a potential loss of freedom, an increase of dependency with mistrust of one's own capabilities to manage the business of living and, hence, an increase of feelings and attitudes of helplessness. The ascription of vulnerability overrides the principle of equality between the partners, constitutes an invasion of privacy and impacts on the positive rights of individuals.
### Business
According to management researcher John J. Clancey, scientific management can involve the use of throffers. While piecework had been utilised since the Middle Ages, Frederick Winslow Taylor blended rationalised management with piecework, to create a new system. Productivity processes were standardised, after which point managers were able to present a throffer to workers: higher pay was offered if they were able to exceed the standard, while lower pay was threatened for any who did not meet expectations.
## See also
- Carrot and stick
- Extortion
|
26,451,186 |
Portrait of a Lady (van der Weyden)
| 1,156,579,348 |
c. 1460 painting by Rogier van der Weyden
|
[
"1460s paintings",
"15th-century portraits",
"Collections of the National Gallery of Art",
"Paintings by Rogier van der Weyden",
"Portraits of women"
] |
Portrait of a Lady (or Portrait of a Woman) is a small oil-on-oak panel painting executed around 1460 by the Netherlandish painter Rogier van der Weyden. The composition is built from the geometric shapes that form the lines of the woman's veil, neckline, face, and arms, and by the fall of the light that illuminates her face and headdress. The vivid contrasts of darkness and light enhance the almost unnatural beauty and Gothic elegance of the model.
Van der Weyden was preoccupied by commissioned portraiture towards the end of his life and was highly regarded by later generations of painters for his penetrating evocations of character. In this work, the woman's humility and reserved demeanour are conveyed through her fragile physique, lowered eyes and tightly grasped fingers. She is slender and depicted according to the Gothic ideal of elongated features, indicated by her narrow shoulders, tightly pinned hair, high forehead and the elaborate frame set by the headdress. It is the only known portrait of a woman accepted as an autograph work by van der Weyden, yet the sitter's name is not recorded and he did not title the work.
Although van der Weyden did not adhere to the conventions of idealisation, he generally sought to flatter his sitters. He depicted his models in highly fashionable clothing, often with rounded—almost sculpted—facial features, some of which deviated from natural representation. He adapted his own aesthetic, and his portraits of women often bear a striking resemblance to each other.
The painting has been in the National Gallery of Art in Washington, D.C. since its donation in 1937, and is no. 34 in the de Vos catalogue raisonné of the artist. It has been described as "famous among all portraits of women of all schools".
## Composition
The woman, who is probably in her late teens or early twenties, is shown half-length and in three-quarters profile, set against a two-dimensional interior background of deep blue-green. The background is flat and lacks the attention to detail common in van der Weyden's devotional works. Like his contemporary Jan van Eyck (c. 1395 – 1441), when working in portraiture, he used dark planes to focus attention on the sitter. It was not until Hans Memling (c. 1435–1494), a pupil of van der Weyden, that a Netherlandish artist set a portrait against an exterior or landscape. In this work the flat setting allows the viewer to settle on the woman's face and quiet self-possession. Van der Weyden reduces his focus to four basic features: the woman's headdress, dress, face and hands. The background has darkened with age; it is likely that the angles created by the sitter's hennin and dress were once much sharper.
The woman wears an elegant low-cut black dress with dark bands of fur at the neck and wrist. Her clothes are of the then-fashionable Burgundian style, which emphasises the tall and thin aesthetic of the Gothic ideal. Her dress is buckled by a bright red sash pulled in below her breasts. The buff-coloured hennin headdress is draped with a large transparent veil, which spills over her shoulders, reaching her upper arms. Van der Weyden's attention to the structure of the clothing—the careful detailing of the pins pushed into the veil to fix its position—is typical for the artist.
The woman's veil forms a diamond shape, balanced by the inverse flow of a light vest worn beneath her dress. She is shown at a slight angle, but her pose is centred by the interlocked broad lines of arms, décolletage and veil. The woman's head is delicately lit, leaving no strong tonal contrasts on her skin. She has a long, thin face, plucked eyebrows and eyelids, and a plucked hairline to create a fashionably high forehead. Her hair is tightly pinned back on the rim of the bonnet and rests above her ear. Her high headdress and severe hairline accentuate her elongated face, giving it a sculpted appearance.
The woman's left ear is set, according to art historian Norbert Schneider, unnaturally high and far back, parallel to her eyes rather than to her nose; this position is probably an artistic device used to continue the flow of the diagonal line of the veil's inner-right wing. In the 15th century, veils were normally worn for modesty, to hide the sensuality of the flesh. In this work the veil has the opposite effect; the woman's face is framed by the headdress to draw attention to her beauty.
The woman's hands are crossed tightly as if in prayer, and positioned so low in the painting as to appear to be resting on the frame. They are tightly compressed into a small area of the picture; it is likely van der Weyden did not want them to result in an area of high tone that might distract from the description of her head. Her slender fingers are minutely detailed; van der Weyden often indicated the social position of his models through his rendering of their face and hands. The sleeve of her dress extends beyond her wrists. Her fingers are folded in layers; their intricate portrayal is the most detailed element in the painting, and echoes the pyramidal form of the upper portion of the composition.
Her eyes gaze downward in humility, in contrast to her relatively extravagant clothes. The piety of her expression is achieved through motifs common to van der Weyden's work. Her eyes and nose are elongated and her lower lip made fuller by the use of tone and pronounced finish. Some vertical lines around these features are emphasised, while her pupils are enlarged and her eyebrows slightly raised. In addition the contours of her face are highlighted in a manner that is slightly unnatural and abstract, and outside the usual spatial constraints of 15th-century human representation. This methodology was described by art historian Erwin Panofsky: "Rogier concentrated on certain salient features—salient both from a physiognomical and psychological point of view—which he expressed primarily by lines." Her high forehead and full mouth have been seen as suggestive of a nature at once intellectual, ascetic, and passionate, symbolic of "an unresolved conflict in her personality". Panofsky refers to a "smouldering excitability".
The sitter is unknown, although some art historians have speculated on her identity. On the grounds of similarity of facial features, writer Wilhelm Stein suggested in the early 20th century that she might be Marie de Valengin, the illegitimate daughter of Philip the Good of Burgundy. However, this is a contentious assertion and not widely held. Because her hands are shown as resting on the painting's lower frame, art historians generally accept that this was an independent portrait, rather than a devotional work. It is possible that it was intended as a pendant to a picture of the woman's husband; however, no other portrait has been suggested as a likely companion.
## Break from idealisation
Van der Weyden worked in the same tradition of portraiture as contemporaries Jan van Eyck and Robert Campin. In the early to middle 15th century, these three artists were among the first generation of "Northern Renaissance" painters, and the first northern Europeans to portray members of the middle and upper classes naturalistically rather than in a medieval Christian idealised form. In earlier Netherlandish art the profile view was the dominant mode of representation for the nobles or clergy worthy of portraiture. In works such as Portrait of a Man in a Turban (1433), Jan van Eyck broke this tradition and used the three-quarter profile of the face which became the standard in Netherlandish art. Here, van der Weyden utilises the same profile, which better allows him to describe the shape of the head and facial features of the sitter. She is shown in half-length, which enables the artist to show her hands crossed at her waist.
Despite this new freedom, van der Weyden's portraits of women are strikingly similar in concept and structure, both to each other and to female portraits by Campin. Most are three-quarter face and half-length. They typically set their models in front of a dark background that is uniform and nondescript. While the portraits are noted for their expressive pathos, the facial features of the women strongly resemble one another. This indicates that although van der Weyden did not adhere to the tradition of idealised representation, he sought to please his sitters in a manner that reflected contemporary ideals of beauty. Most of van der Weyden's portraits were painted as commissions from the nobility; he painted only five (including Portrait of a Lady) that were not donor portraits. It is known that in his Portrait of Philip de Croÿ (c. 1460), van der Weyden complimented the young Flemish nobleman by concealing his large nose and undershot jaw. When describing this tendency in relation to the Washington portrait, art historian Norbert Schneider wrote, "While van Eyck shows nature 'in the raw', as it were, Rogier improves on physical reality, civilising and refining Nature and the human form with the help of a brush." The high quality of the painting is highlighted when compared to the National Gallery's very similar workshop painting. The London subject has softer, more rounded features and is younger and less individually characterised than the c. 1460 model. The technique is also less subtle and fine in the London work. However, both share similar expressions and dress.
Van der Weyden was more concerned with the overall aesthetic and emotional response his pictures created than with the particulars of the individual sitter. Art historian and curator Lorne Campbell suggests that the popularity of the portrait is due more to the "elegant simplicity of the pattern which [the sitter] creates" than to the grace of her depiction. While van der Weyden did not stay within the traditional realms of idealisation, he created his own aesthetic, which he extended across his portraits and religious pictures. This aesthetic includes the mood of sorrowful devotion which forms the dominant tone in all his portraits. His figures may be more natural than those of earlier generations of artists; however, his individualistic approach to the depiction of his sitters' piety often leads to the abandonment of the rules of scale.
John Walker, former director of the National Gallery of Art, referred to the subject as "outré", but believed that despite the awkwardness of her individual features, the model was nonetheless "strangely beautiful". By the time of the work's completion van der Weyden had eclipsed even van Eyck in popularity, and this painting is typical of the austere spirituality, as opposed to van Eyck's sensuality, for which van der Weyden is renowned.
## Condition and provenance
Although van der Weyden did not title the work, and the sitter's name is not recorded in any of the early inventories, the style of her dress has been used to place the picture very late in van der Weyden's career. The c. 1460 dating is based on the high-fashion dress and the work's apparent chronological position in the evolution of van der Weyden's style. However, it is possible that it was executed even later (van der Weyden died in 1464).
Portrait of a Lady was painted on a single oak board with a vertical grain and has an unpainted margin on each side. The panel was prepared with gesso, upon which the figure was then painted in monochrome. Glazes of oil pigment were then added, which allowed for subtle and transparent tonal gradations. Infra-red reflectography reveals that van der Weyden did not sketch the work on the board before he began to paint, and there is no evidence of underdrawing. It shows that the lady was portrayed as more slender before changes were made as the work progressed; thickly applied background paint underlies some of the belt, demonstrating that the original silhouette was widened. These changes are also visible in x-ray images. It is in relatively good condition, having been cleaned a number of times, most recently in 1980. There is some loss of paint on the veil, headdress and sleeve, and abrasion on the ear.
The provenance of the painting is unclear, and there is doubt as to which painting is referred to in some early inventories. An Anhalt prince, likely Leopold Friedrich Franz (d. 1817) of Wörlitz, near Dessau, Germany, held it in the early 19th century, after which it is likely to have passed to Leopold Friedrich (d. 1871). The painting was loaned for exhibition in 1902, when it was shown at the Hôtel de Gouvernement Provincial, Bruges at the Exposition des primitifs flamands et d'art ancien. It was held by a Duke of Anhalt until 1926 when he sold it to the art dealers Duveen Brothers. They in turn sold it that year to Andrew W. Mellon. It was loaned the following year to the Royal Academy of Arts, London, for an exhibition covering six centuries of Flemish and Belgian art. Mellon willed the work to his Educational and Charitable Trust in 1932, which in 1937 donated it to the National Gallery of Art where it is on permanent display.
## Gallery
## See also
- List of works by Rogier van der Weyden
- 1400–1500 in fashion
|
501,984 |
Battle of the Alamo
| 1,173,161,345 |
Major battle of the Texas Revolution
|
[
"1836 in the Republic of Texas",
"Battles involving the Republic of Texas",
"Battles of the Texas Revolution",
"Conflicts in 1836",
"Davy Crockett",
"February 1836 events",
"History of San Antonio",
"History of the Southern United States",
"Last stands",
"March 1836 events",
"Sieges"
] |
The Battle of the Alamo (February 23 – March 6, 1836) was a pivotal military engagement of the Texas Revolution. Following a 13-day siege, Mexican troops under President General Antonio López de Santa Anna reclaimed the Alamo Mission near San Antonio de Béxar (modern-day San Antonio, Texas, United States), killing most of the occupants. Santa Anna's refusal to take prisoners during the battle inspired many Texians and Tejanos to join the Texian Army. Motivated by a desire for revenge, as well as their written desire to preserve a border open to immigration and the importation and practice of slavery, the Texians defeated the Mexican Army at the Battle of San Jacinto, on April 21, 1836, completing the conquest of the Mexican state of Coahuila y Tejas by the newly formed Republic of Texas.
Several months previously, Texians, some of whom were legal settlers, but primarily illegal immigrants from the United States, had killed or driven out all Mexican troops in Mexican Texas. About one hundred Texians were then garrisoned at the Alamo. The Texian force grew slightly with the arrival of reinforcements led by eventual Alamo co-commanders James Bowie and William B. Travis. On February 23, approximately 1,500 Mexicans marched into San Antonio de Béxar as the first step in a campaign to retake Texas. For the next 10 days, the two armies engaged in several skirmishes with minimal casualties. Aware that his garrison could not withstand an attack by such a large force, Travis wrote multiple letters pleading for more men and supplies from Texas and from the United States, but the Texians were reinforced by fewer than a hundred men, because the United States had a treaty with Mexico at the time, and supplying troops and weapons would have been an overt act of war against Mexico.
In the early morning hours of March 6, the Mexican Army advanced on the Alamo. After repelling two attacks, the Texians were unable to fend off a third attack. As Mexican soldiers scaled the walls, most of the Texian fighters withdrew into interior buildings. Those who were unable to reach these points were slain by the Mexican cavalry as they attempted to escape. Between five and seven Texians may have surrendered; if so, they were quickly executed. Several noncombatants were sent to Gonzales to spread word of the Texian defeat. The news sparked both a strong rush to join the Texian army and a panic, known as "The Runaway Scrape", in which the Texian army, most settlers, and the government of the new, self-proclaimed but officially unrecognized Republic of Texas fled eastward toward the U.S. ahead of the advancing Mexican Army.
Within Mexico, the battle has often been overshadowed by events from the Mexican–American War of 1846–1848. In 19th-century Texas, the Alamo complex gradually became known as a battle site rather than a former mission. The Texas Legislature purchased the land and buildings in the early part of the 20th century and designated the Alamo chapel as an official Texas State Shrine. The Alamo has been the subject of numerous non-fiction works beginning in 1843. Most Americans, however, are more familiar with the myths and legends spread by many of the movie and television adaptations, including the 1950s Disney miniseries Davy Crockett and John Wayne's 1960 film The Alamo.
## Background
In 1835, there was a drastic shift in the Mexican nation. The triumph of conservative forces in the elections unleashed a series of events that culminated on October 23, 1835, in the adoption of a new constitution following the repeal of the federalist Constitution of 1824. Las Siete Leyes, or Seven Laws, were a series of constitutional changes that fundamentally altered the organizational structure of Mexico, ending the first federal period and creating a unitary republic, officially the Mexican Republic (Spanish: República Mexicana). Formalized under President Antonio López de Santa Anna on 15 December 1835, they were enacted in 1836. They were intended to centralize and strengthen the national government. The aim of the previous constitution was to create a political system that would emulate the success of the United States, but after a decade of political turmoil, economic stagnation, and both the threat and reality of foreign invasion, conservatives concluded that a better path for Mexico was centralized power.
The new policies, chief among them the bans on slavery and immigration, and the increased enforcement of laws and import tariffs, incited many immigrants to revolt. The border region of Mexican Texas was largely populated by immigrants from the United States, some legal but most illegal. Some of these immigrants brought large numbers of slaves with them, so that by 1836, there were about 5,000 enslaved persons in a total non-native population estimated at 38,470. These people were accustomed to a federalist government which made special exemptions from Mexican law just for them, and to extensive individual rights including the right to own slaves, and they were quite vocal in their displeasure at Mexico's law enforcement and shift towards centralism. The centralized government ended local federal exemptions to the ban on slavery, which had been negotiated by Stephen Austin and others. Already suspicious after previous United States attempts to purchase Mexican Texas, Mexican authorities blamed much of the Texian unrest on United States immigrants, most of whom had entered illegally, made little effort to adapt to the Mexican culture, and continued to hold people in slavery even though slavery had been abolished in Mexico.
In October, Texians engaged Mexican troops in the first official battle of the Texas Revolution. Determined to quell the rebellion of immigrants, Santa Anna began assembling a large force, the Army of Operations in Texas, to restore order. Most of his soldiers were raw recruits, and many had been forcibly conscripted.
The Texians systematically defeated the Mexican troops already stationed in Texas. The last group of Mexican soldiers in the region—commanded by Santa Anna's brother-in-law, General Martín Perfecto de Cos—surrendered on December 9 following the siege of Béxar. By this point, the Texian Army was dominated by very recent arrivals to the region, primarily illegal immigrants from the United States. Many Texas settlers, unprepared for a long campaign, had returned home. Angered by what he perceived to be United States interference in Mexican affairs, Santa Anna spearheaded a resolution classifying foreign immigrants found fighting in Texas as pirates. The resolution effectively banned the taking of prisoners of war: in this period of time, captured pirates were executed immediately. Santa Anna reiterated this message in a strongly worded letter to United States President Andrew Jackson. This letter was not widely distributed, and it is unlikely that most of the United States recruits serving in the Texian Army were aware that there would be no prisoners of war.
When Mexican troops departed San Antonio de Béxar (now San Antonio, Texas), Texian soldiers occupied the Alamo Mission, a former Spanish religious outpost which had been converted to a makeshift fort by the recently expelled Mexican Army. Described by Santa Anna as an "irregular fortification hardly worthy of the name", the Alamo had been designed to withstand an attack by native tribes, not an artillery-equipped army. The complex sprawled across 3 acres (1.2 ha), providing almost 1,320 feet (400 m) of perimeter to defend. An interior plaza was bordered on the east by the chapel and to the south by a one-story building known as the Low Barracks. A wooden palisade stretched between these two buildings. The two-story Long Barracks extended north from the chapel. At the northern corner of the east wall stood a cattle pen and horse corral. The walls surrounding the complex were at least 2.75 feet (0.84 m) thick and ranged from 9–12 ft (2.7–3.7 m) high.
To compensate for the lack of firing ports, Texian engineer Green B. Jameson constructed catwalks to allow defenders to fire over the walls; this method, however, left the rifleman's upper body exposed. Mexican forces had left behind 19 cannons, which Jameson installed along the walls. A large 18-pounder had arrived in Texas with the New Orleans Greys. Jameson positioned this cannon in the southwest corner of the compound. He boasted to Texian Army commander Sam Houston that the Texians could "whip 10 to 1 with our artillery".
## Prelude to battle
The Texian garrison was woefully undermanned and underprovisioned, with fewer than 100 soldiers remaining by January 6, 1836. Colonel James C. Neill, the acting Alamo commander, wrote to the provisional government: "If there has ever been a dollar here I have no knowledge of it". Neill requested additional troops and supplies, stressing that the garrison was likely to be unable to withstand a siege lasting longer than four days. The Texian government was in turmoil and unable to provide much assistance. Four different men claimed to have been given command over the entire army. On January 14, Neill approached one of them, Sam Houston, for assistance in gathering supplies, clothing, and ammunition.
Houston could not spare the number of men necessary to mount a successful defense. Instead, he sent Colonel James Bowie with 30 men to remove the artillery from the Alamo and destroy the complex. Bowie was unable to transport the artillery since the Alamo garrison lacked the necessary draft animals. Neill soon persuaded Bowie that the location held strategic importance. In a letter to Governor Henry Smith, Bowie argued that "the salvation of Texas depends in great measure on keeping Béxar out of the hands of the enemy. It serves as the frontier picquet guard, and if it were in the possession of Santa Anna, there is no stronghold from which to repel him in his march towards the Sabine." The letter to Smith ended, "Colonel Neill and myself have come to the solemn resolution that we will rather die in these ditches than give it up to the enemy." Bowie also wrote to the provisional government, asking for "men, money, rifles, and cannon powder". Few reinforcements were authorized; cavalry officer William B. Travis arrived in Béxar with 30 men on February 3. Five days later, a small group of volunteers arrived, including the famous frontiersman and former U.S. Congressman David Crockett of Tennessee.
On February 11, Neill left the Alamo, determined to recruit additional reinforcements and gather supplies. He transferred command to Travis, the highest-ranking regular army officer in the garrison. Volunteers comprised much of the garrison, and they were unwilling to accept Travis as their leader. The men instead elected Bowie, who had a reputation as a fierce fighter, as their commander. Bowie celebrated by getting very intoxicated and creating havoc in Béxar. To mitigate the resulting ill feelings, Bowie agreed to share command with Travis.
As the Texians struggled to find men and supplies, Santa Anna continued to gather men at San Luis Potosi; by the end of 1835, his army numbered 6,019 soldiers. Rather than advance along the coast, where supplies and reinforcements could be easily delivered by sea, Santa Anna ordered his army inland to Béxar, the political center of Texas and the site of Cos's defeat. The army began its march north in late December. Officers used the long journey to train the men. Many of the new recruits did not know how to aim their muskets, and many refused to fire from the shoulder because of the strong recoil.
Progress was slow. There were not enough mules to transport all of the supplies, and many of the teamsters, all civilians, quit when their pay was delayed. The many soldaderas – women and children who followed the army – consumed much of the already scarce supplies. The soldiers were soon reduced to partial rations. On February 12 they crossed the Rio Grande. Temperatures in Texas reached record lows, and by February 13 an estimated 15–16 inches (38–41 cm) of snow had fallen. Hypothermia, dysentery, and Comanche raiding parties took a heavy toll on the Mexican soldiers.
On February 21, Santa Anna and his vanguard reached the banks of the Medina River, 25 miles (40 km) from Béxar. Unaware of the Mexican Army's proximity, the majority of the Alamo garrison joined Béxar residents at a fiesta. After learning of the planned celebration, Santa Anna ordered General Joaquín Ramírez y Sesma to immediately seize the unprotected Alamo, but sudden rains halted that raid.
## Siege
### Investment
In the early hours of February 23, residents began fleeing Béxar, fearing the Mexican army's imminent arrival. Although unconvinced by the reports, Travis stationed a soldier in the San Fernando church bell tower, the highest location in town, to watch for signs of an approaching force. Several hours later, Texian scouts reported seeing Mexican troops 1.5 miles (2.4 km) outside the town. Few arrangements had been made for a potential siege. One group of Texians scrambled to herd cattle into the Alamo, while others scrounged for food in the recently abandoned houses. Several members of the garrison who had been living in town brought their families with them when they reported to the Alamo. Among these were Almaron Dickinson, who brought his wife Susanna and their infant daughter Angelina; Bowie, who was accompanied by his deceased wife's cousins, Gertrudis Navarro and Juana Navarro Alsbury, and Alsbury's young son; and Gregorio Esparza, whose family climbed through the window of the Alamo chapel after the Mexican army arrived. Other members of the garrison failed to report for duty; most of the men working outside Béxar did not try to sneak past Mexican lines.
By late afternoon Béxar was occupied by about 1,500 Mexican soldiers. When the Mexican troops raised a blood-red flag signifying no quarter, Travis responded with a blast from the Alamo's largest cannon. Believing that Travis had acted hastily, Bowie sent Jameson to meet with Santa Anna. Travis was angered that Bowie had acted unilaterally and sent his own representative, Captain Albert Martin. Both emissaries met with Colonel Juan Almonte and José Bartres. According to Almonte, the Texians asked for an honorable surrender but were informed that any surrender must be unconditional. On learning this, Bowie and Travis mutually agreed to fire the cannon again.
### Skirmishes
The first night of the siege was relatively quiet. Over the next few days, Mexican soldiers established artillery batteries, initially about 1,000 feet (300 m) from the south and east walls of the Alamo. A third battery was positioned southeast of the fort. Each night the batteries inched closer to the Alamo walls. During the first week of the siege more than 200 cannonballs landed in the Alamo plaza. At first, the Texians matched Mexican artillery fire, often reusing the Mexican cannonballs. On February 26 Travis ordered the artillery to conserve powder and shot.
Two notable events occurred on Wednesday, February 24. At some point that day, Bowie collapsed from illness, leaving Travis in sole command of the garrison. Late that afternoon, two Mexican scouts became the first fatalities of the siege. The following morning, 200–300 Mexican soldiers crossed the San Antonio River and took cover in abandoned shacks near the Alamo walls. Several Texians ventured out to burn the huts while Texians within the Alamo provided cover fire. After a two-hour skirmish, the Mexican troops retreated to Béxar. Six Mexican soldiers were killed and four others were wounded. No Texians were injured.
A blue norther blew in on February 25, dropping the temperature to 39 °F (4 °C). Neither army was prepared for the cold temperatures. Texian attempts to gather firewood were thwarted by Mexican troops. On the evening of February 26 Colonel Juan Bringas engaged several Texians who were burning more huts. According to historian J.R. Edmondson, one Texian was killed. Four days later, Texians shot and killed Private First-Class Secundino Alvarez, a soldier from one of two battalions that Santa Anna had stationed on two sides of the Alamo. By March 1, the number of Mexican casualties was nine dead and four wounded, while the Texian garrison had lost only one man.
### Reinforcements
Santa Anna posted one company east of the Alamo, on the road to Gonzales. Almonte and 800 dragoons were stationed along the road to Goliad. Throughout the siege these towns had received multiple couriers, dispatched by Travis to plead for reinforcements and supplies. The most famous of his missives, written February 24, was addressed To the People of Texas & All Americans in the World. According to historian Mary Deborah Petite, the letter is "considered by many as one of the masterpieces of American patriotism." Copies of the letter were distributed across Texas, and eventually reprinted throughout the United States and much of Europe. At the end of the first day of the siege, Santa Anna's troops were reinforced by 600 men under General Joaquin Ramirez y Sesma, bringing the Mexican army up to more than 2,000 men.
As news of the siege spread throughout Texas, potential reinforcements gathered in Gonzales. They hoped to rendezvous with Colonel James Fannin, who was expected to arrive from Goliad with his garrison. On February 26, after days of indecision, Fannin ordered 320 men, four cannons, and several supply wagons to march towards the Alamo, 90 miles (140 km) away. This group traveled less than 1.0 mile (1.6 km) before turning back. Fannin blamed the retreat on his officers; the officers and enlisted men accused Fannin of aborting the mission.
Texians gathered in Gonzales were unaware of Fannin's return to Goliad, and most continued to wait. Impatient with the delay, on February 27 Travis ordered Samuel G. Bastian to go to Gonzales "to hurry up reinforcements". According to historian Thomas Ricks Lindley, Bastian encountered the Gonzales Ranging Company led by Lieutenant George C. Kimble and Travis' courier to Gonzales, Albert Martin, who had tired of waiting for Fannin. A Mexican patrol attacked, driving off four of the men, including Bastian. In the darkness, the Texians fired on the remaining 32 men, whom they assumed to be Mexican soldiers. One man was wounded, and his English curses convinced the occupiers to open the gates.
On March 3, the Texians watched from the walls as approximately 1,000 Mexicans marched into Béxar. The Mexican army celebrated loudly throughout the afternoon, both in honor of their reinforcements and at the news that troops under General José de Urrea had soundly defeated Texian Colonel Frank W. Johnson at the Battle of San Patricio on February 27. Most of the Texians in the Alamo believed that Sesma had been leading the Mexican forces during the siege, and they mistakenly attributed the celebration to the arrival of Santa Anna. The reinforcements brought the number of Mexican soldiers in Béxar to almost 3,100.
The arrival of the Mexican reinforcements prompted Travis to send three men, including Davy Crockett, to find Fannin's force, which he still believed to be en route. The scouts discovered a large group of Texians camped 20 miles (32 km) from the Alamo. Lindley's research indicates that up to 50 of these men had come from Goliad after Fannin's aborted rescue mission. The others had left Gonzales several days earlier. Just before daylight on March 4, part of the Texian force broke through Mexican lines and entered the Alamo. Mexican soldiers drove a second group across the prairie.
### Assault preparations
On March 4, the day after his reinforcements arrived, Santa Anna proposed an assault on the Alamo. Many of his senior officers recommended that they wait for two 12-pounder cannons anticipated to arrive on March 7. That evening, a local woman, likely Bowie's cousin-in-law Juana Navarro Alsbury, approached Santa Anna to negotiate a surrender for the Alamo occupiers. According to many historians, this visit probably increased Santa Anna's impatience; as historian Timothy Todish noted, "there would have been little glory in a bloodless victory". The following morning, Santa Anna announced to his staff that the assault would take place early on March 6. Santa Anna arranged for troops from Béxar to be excused from the front lines so that they would not be forced to fight their own families.
Legend holds that at some point on March 5, Travis gathered his men and explained that an attack was imminent, and that they were greatly outnumbered by the Mexican Army. He supposedly drew a line in the ground and asked those willing to die for the Texian cause to cross and stand alongside him; only one man (Moses Rose) was said to have declined. Most scholars disregard this tale as there is no primary source evidence to support it (the story only surfaced decades after the battle in a third-hand account). Travis apparently did, at some point prior to the final assault, assemble the men for a conference to inform them of the dire situation and to give them the chance either to escape or to stay and die for the cause. Susannah Dickinson recalled Travis announcing that any men who wished to escape should let it be known and step out of ranks.
The last Texian verified to have left the Alamo was James Allen, a courier who carried personal messages from Travis and several of the other men on March 5.
## Final assault
### Exterior fighting
At 10 p.m. on March 5, the Mexican artillery ceased their bombardment. As Santa Anna had anticipated, the exhausted Texians soon fell into the first uninterrupted sleep many of them had since the siege began. Just after midnight, more than 2,000 Mexican soldiers began preparing for the final assault. Fewer than 1,800 were divided into four columns, commanded by Cos, Colonel Francisco Duque, Colonel José María Romero and Colonel Juan Morales. Veterans were positioned on the outside of the columns to better control the new recruits and conscripts in the middle. As a precaution, 500 Mexican cavalry were positioned around the Alamo to prevent the escape of either Texian or Mexican soldiers. Santa Anna remained in camp with the 400 reserves. Despite the bitter cold, the soldiers were ordered not to wear overcoats which could impede their movements. Clouds concealed the moon and thus the movements of the soldiers.
At 5:30 a.m. troops silently advanced. Cos and his men approached the northwest corner of the Alamo, while Duque led his men from the northwest towards a repaired breach in the Alamo's north wall. The column commanded by Romero marched towards the east wall, and Morales's column aimed for the low parapet by the chapel.
The three Texian sentinels stationed outside the walls were killed in their sleep, allowing Mexican soldiers to approach undetected within musket range of the walls. At this point, the silence was broken by shouts of "¡Viva Santa Anna!" and music from the buglers. The noise woke the Texians. Most of the noncombatants gathered in the church sacristy for safety. Travis rushed to his post yelling, "Come on boys, the Mexicans are upon us and we'll give them hell!" and, as he passed a group of Tejanos, "¡No rendirse, muchachos!" ("Don't surrender, boys").
In the initial moments of the assault, Mexican troops were at a disadvantage. Their column formation allowed only the front rows of soldiers to fire safely. Unaware of the dangers, the untrained recruits in the ranks "blindly fir[ed] their guns", injuring or killing the troops in front of them. The tight concentration of troops also offered an excellent target for the Texian artillery. Lacking canister shot, Texians filled their cannon with any metal they could find, including door hinges, nails, and chopped-up horseshoes, essentially turning the cannon into giant shotguns. According to the diary of José Enrique de la Peña, "a single cannon volley did away with half the company of chasseurs from Toluca". Duque fell from his horse after sustaining a wound in his thigh and was almost trampled by his own men. General Manuel Castrillón quickly assumed command of Duque's column.
Although some in the front of the Mexican ranks wavered, soldiers in the rear pushed them on. As the troops massed against the walls, Texians were forced to lean over the walls to shoot, leaving them exposed to Mexican fire. Travis became one of the first occupiers to die, shot while firing his shotgun into the soldiers below him, though one source says that he drew his sword and stabbed a Mexican officer who had stormed the wall before succumbing to his injury. Few of the Mexican ladders reached the walls. The few soldiers who were able to climb the ladders were quickly killed or beaten back. As the Texians discharged their previously loaded rifles, they found it increasingly difficult to reload while attempting to keep Mexican soldiers from scaling the walls.
Mexican soldiers withdrew and regrouped, but their second attack was repulsed. Fifteen minutes into the battle, they attacked a third time. During the third strike, Romero's column, aiming for the east wall, was exposed to cannon fire and shifted to the north, mingling with the second column. Cos' column, under fire from Texians on the west wall, also veered north. When Santa Anna saw that the bulk of his army was massed against the north wall, he feared a rout; "panicked", he sent the reserves into the same area. The Mexican soldiers closest to the north wall realized that the makeshift wall contained many gaps and toeholds. One of the first to scale the 12-foot (3.7 m) wall was General Juan Amador; at his challenge, his men began swarming up the wall. Amador opened the postern in the north wall, allowing Mexican soldiers to pour into the complex. Others climbed through gun ports in the west wall, which had few occupiers. As the Texian occupiers abandoned the north wall and the northern end of the west wall, Texian gunners at the south end of the mission turned their cannon towards the north and fired into the advancing Mexican soldiers. This left the south end of the mission unprotected; within minutes Mexican soldiers had climbed the walls and killed the gunners, gaining control of the Alamo's 18-pounder cannon. By this time Romero's men had taken the east wall of the compound and were pouring in through the cattle pen.
### Interior fighting
As previously planned, most of the Texians fell back to the barracks and the chapel. Holes had been carved in the walls to allow the Texians to fire. Unable to reach the barracks, Texians stationed along the west wall headed west for the San Antonio River. When the cavalry charged, the Texians took cover and began firing from a ditch. Sesma was forced to send reinforcements, and the Texians were eventually killed. Sesma reported that this skirmish involved 50 Texians, but Edmondson believes that number was inflated.
The occupiers in the cattle pen retreated into the horse corral. After discharging their weapons, the small band of Texians scrambled over the low wall, circled behind the church and raced on foot for the east prairie, which appeared empty. As the Mexican cavalry advanced on the group, Almaron Dickinson and his artillery crew turned a cannon around and fired into the cavalry, probably inflicting casualties. Nevertheless, all of the escaping Texians were killed.
The last Texian group to remain in the open were Crockett and his men, defending the low wall in front of the church. Unable to reload, they used their rifles as clubs and fought with knives. After a volley of fire and a wave of Mexican bayonets, the few remaining Texians in this group fell back towards the church. The Mexican army now controlled all of the outer walls and the interior of the Alamo compound except for the church and rooms along the east and west walls. Mexican soldiers turned their attention to a Texian flag waving from the roof of one building. Four Mexicans were killed before the flag of Mexico was raised in that location.
For the next hour, the Mexican army worked to secure complete control of the Alamo. Many of the remaining occupiers were ensconced in the fortified barracks rooms. In the confusion, the Texians had neglected to spike their cannon before retreating. Mexican soldiers turned the cannon towards the barracks. As each door was blown off, Mexican soldiers would fire a volley of muskets into the dark room, then charge in for hand-to-hand combat.
Too sick to participate in the battle, Bowie likely died in bed. Eyewitnesses to the battle gave conflicting accounts of his death. Some witnesses maintained that they saw several Mexican soldiers enter Bowie's room, bayonet him, and carry him alive from the room. Others claimed that Bowie shot himself or was killed by soldiers while too weak to lift his head. According to historian Wallace Chariton, the "most popular, and probably the most accurate" version is that Bowie died on his cot, "back braced against the wall, and using his pistols and his famous knife."
The last of the Texians to die were the 11 men manning the two 12-pounder cannons in the chapel. A shot from the 18-pounder cannon destroyed the barricades at the front of the church, and Mexican soldiers entered the building after firing an initial musket volley. Dickinson's crew fired their cannon from the apse into the Mexican soldiers at the door. With no time to reload, the Texians, including Dickinson, Gregorio Esparza and James Bonham, grabbed rifles and fired before being bayoneted to death. Texian Robert Evans, the master of ordnance, had been tasked with keeping the gunpowder from falling into Mexican hands. Wounded, he crawled towards the powder magazine but was killed by a musket ball with his torch only inches from the powder. Had he succeeded, the blast would have destroyed the church and killed the women and children hiding in the sacristy.
As soldiers approached the sacristy, one of the young sons of occupier Anthony Wolf stood to pull a blanket over his shoulders. In the dark, Mexican soldiers mistook him for an adult and killed him. Possibly the last Texian to die in battle was Jacob Walker, who attempted to hide behind Susanna Dickinson and was bayoneted in front of the women. Another Texian, Brigido Guerrero, also sought refuge in the sacristy. Guerrero, who had deserted from the Mexican Army in December 1835, was spared after convincing the soldiers he was a Texian prisoner.
By 6:30 a.m. the battle for the Alamo was over. Mexican soldiers inspected each corpse, bayoneting any body that moved. Even with all of the Texians dead, Mexican soldiers continued to shoot, some killing each other in the confusion. Mexican generals were unable to stop the bloodlust and appealed to Santa Anna for help. Although the general showed himself, the violence continued and the buglers were finally ordered to sound a retreat. For 15 minutes after that, soldiers continued to fire into dead bodies.
## Aftermath
### Casualties
According to many accounts of the battle, between five and seven Texians surrendered. Incensed that his orders had been ignored, Santa Anna demanded the immediate execution of the survivors. Weeks after the battle, stories circulated that Crockett was among those who surrendered. Ben, a former American slave who cooked for one of Santa Anna's officers, maintained that Crockett's body was found surrounded by "no less than sixteen Mexican corpses". Historians disagree on which version of Crockett's death is accurate.
Santa Anna reportedly told Captain Fernando Urizza that the battle "was but a small affair". Another officer then remarked that "with another such victory as this, we'll go to the devil". In his initial report Santa Anna claimed that 600 Texians had been killed, with only 70 Mexican soldiers killed and 300 wounded. His secretary, Ramón Martínez Caro, reported 400 killed. Other estimates of the number of Mexican soldiers killed ranged from 60 to 200, with an additional 250–300 wounded. Some historians and survivors, including Susanna Dickinson, estimated that 1,000–1,600 Mexican soldiers were killed or wounded, but total casualties were most likely fewer than 600. Texian Dr. J. H. Barnard, who tended the Mexican wounded, reported 300–400 dead and 200–300 wounded. Most Alamo historians place the number of Mexican casualties at 400–600. This would represent about one quarter of the more than 2,000 Mexican soldiers involved in the final assault, which Todish remarks is "a tremendous casualty rate by any standards". Most eyewitnesses counted between 182 and 257 Texians killed. Some historians believe that at least one Texian, Henry Warnell, successfully escaped from the battle. Warnell died several months later of wounds incurred either during the final battle or during his escape as a courier.
Mexican soldiers were buried in the local cemetery, Campo Santo. Shortly after the battle, Colonel José Juan Sanchez Navarro proposed that a monument should be erected to the fallen Mexican soldiers. Cos rejected the idea.
The Texian bodies were stacked and burned. The only exception was the body of Gregorio Esparza. His brother Francisco, an officer in Santa Anna's army, received permission to give Gregorio a proper burial. The ashes were left where they fell until February 1837, when Juan Seguín returned to Béxar to examine the remains. A simple coffin inscribed with the names Travis, Crockett, and Bowie was filled with ashes from the funeral pyres. According to a March 28, 1837, article in the Telegraph and Texas Register, Seguín buried the coffin under a peach tree grove. The spot was not marked and cannot now be identified. Seguín later claimed that he had placed the coffin in front of the altar at the San Fernando Cathedral. In July 1936 a coffin was discovered buried in that location, but according to historian Wallace Chariton, it is unlikely to actually contain the remains of the Alamo defenders. Fragments of uniforms were found in the coffin and the Texian soldiers who fought at the Alamo were known not to wear uniforms.
### Texian survivors
In an attempt to convince other slaves in Texas to support the Mexican government over the Texian rebellion, Santa Anna spared Travis' slave, Joe. The day after the battle, he interviewed each noncombatant individually. Impressed with Susanna Dickinson, Santa Anna offered to adopt her infant daughter Angelina and have the child educated in Mexico City. Dickinson refused the offer, which was not extended to Juana Navarro Alsbury although her son was of similar age. Each woman was given a blanket and two silver pesos. Alsbury and the other Tejano women were allowed to return to their homes in Béxar; Dickinson, her daughter and Joe were sent to Gonzales, escorted by Ben. They were encouraged to relate the events of the battle, and to inform the remainder of the Texian forces that Santa Anna's army was unbeatable.
### Impact on revolution
During the siege, newly elected delegates from across Texas met at the Convention of 1836. On March 2, the delegates declared independence, forming the Republic of Texas. Four days later, the delegates at the convention received a dispatch Travis had written March 3 warning of his dire situation. Unaware that the Alamo had fallen, Robert Potter called for the convention to adjourn and march immediately to relieve the Alamo. Sam Houston convinced the delegates to remain in Washington-on-the-Brazos to develop a constitution. After being appointed sole commander of all Texian troops, Houston journeyed to Gonzales to take command of the 400 volunteers who were still waiting for Fannin to lead them to the Alamo.
Within hours of Houston's arrival on March 11, Andres Barcenas and Anselmo Bergaras arrived with news that the Alamo had fallen and all Texians were slain. Hoping to halt a panic, Houston arrested the men as enemy spies. They were released hours later when Susanna Dickinson and Joe reached Gonzales and confirmed the report. Realizing that the Mexican army would soon advance towards the Texian settlements, Houston advised all civilians in the area to evacuate and ordered his new army to retreat. This sparked a mass exodus, known as the Runaway Scrape, and most Texians, including members of the new government, fled east.
Despite their losses at the Alamo, the Mexican army in Texas still outnumbered the Texian army by almost six to one. Santa Anna assumed that knowledge of the disparity in troop numbers and the fate of the Texian soldiers at the Alamo would quell the resistance, and that Texian soldiers would quickly leave the territory. News of the Alamo's fall had the opposite effect, and men flocked to join Houston's army. The New York Post editorialized that "had [Santa Anna] treated the vanquished with moderation and generosity, it would have been difficult if not impossible to awaken that general sympathy for the people of Texas which now impels so many adventurous and ardent spirits to throng to the aid of their brethren".
On the afternoon of April 21 the Texian army attacked Santa Anna's camp near Lynchburg Ferry. The Mexican army was taken by surprise, and the Battle of San Jacinto was essentially over after 18 minutes. During the fighting, many of the Texian soldiers repeatedly cried "Remember the Alamo!" as they slaughtered fleeing Mexican troops. Santa Anna was captured the following day, and reportedly told Houston: "That man may consider himself born to no common destiny who has conquered the Napoleon of the West. And now it remains for him to be generous to the vanquished." Houston replied, "You should have remembered that at the Alamo". Santa Anna's life was spared, and he was forced to order his troops out of Texas, ending Mexican control of the province and bestowing some legitimacy on the new republic.
## Legacy
Following the battle, Santa Anna was alternately viewed as a national hero or a pariah. Mexican perceptions of the battle often mirrored the prevailing viewpoint. Santa Anna had been disgraced following his capture at the Battle of San Jacinto, and many Mexican accounts of the battle were written by men who had been, or had become, his outspoken critics. Petite and many other historians believe that some of the stories, such as the execution of Crockett, may have been invented to further discredit Santa Anna. In Mexican history, the Texas campaign, including the Battle of the Alamo, was soon overshadowed by the Mexican–American War of 1846–1848.
In San Antonio de Béxar, the largely Tejano population viewed the Alamo complex as more than just a battle site; it represented decades of assistance—as a mission, a hospital, or a military post. As the English-speaking population increased, the complex became best known for the battle. Focus has centered primarily on the Texian occupiers, with little emphasis given to the role of the Tejano soldiers who served in the Texian army or the actions of the Mexican army. In the early 20th century the Texas Legislature purchased the property and appointed the Daughters of the Republic of Texas as permanent caretakers of what is now an official state shrine. In front of the church, in the center of Alamo Plaza, stands a cenotaph, designed by Pompeo Coppini, which commemorates the Texians and Tejanos who died during the battle. According to Bill Groneman's Battlefields of Texas, the Alamo has become "the most popular tourist site in Texas".
The first English-language histories of the battle were written and published by Texas Ranger and amateur historian John Henry Brown. The next major treatment of the battle was Reuben Potter's The Fall of the Alamo, published in The Magazine of American History in 1878. Potter based his work on interviews with many of the Mexican survivors of the battle. The first full-length, non-fiction book covering the battle, John Myers Myers' The Alamo, was published in 1948. In the decades since, the battle has featured prominently in many non-fiction works.
According to Todish et al., "there can be little doubt that most Americans have probably formed many of their opinions on what occurred at the Alamo not from books, but from the various movies made about the battle." The first film version of the battle appeared in 1911, when Gaston Méliès directed The Immortal Alamo. The battle became more widely known after it was featured in the 1950s Disney miniseries Davy Crockett, which was largely based on myth. Within several years, John Wayne directed and starred in one of the best-known, but questionably accurate, film versions, 1960's The Alamo. Another film also called The Alamo was released in 2004. CNN described it as possibly "the most character-driven of all the movies made on the subject". It is also considered more faithful to the actual events than other movies.
Several songwriters have been inspired by the Battle of the Alamo. Tennessee Ernie Ford's "The Ballad of Davy Crockett" spent 16 weeks on the country music charts, peaking at No. 4 in 1955. Marty Robbins recorded a version of the song "The Ballad of the Alamo" in 1960 which spent 13 weeks on the pop charts, peaking at No. 34. Jane Bowers' song "Remember the Alamo" has been recorded by artists including Johnny Cash, Willie Nelson, and Donovan. British hard rock band Babe Ruth's 1972 song "The Mexican" pictures the conflict through the eyes of a Mexican soldier. Singer-songwriter Phil Collins collected hundreds of items related to the battle, narrated a light and sound show about the Alamo, and has spoken at related events. In 2014 Collins donated his entire collection to the Alamo via the State of Texas.
The U.S. Postal Service issued two postage stamps commemorating Texas statehood and the Battle of the Alamo. The "Remember the Alamo" battle cry and the Alamo Mission itself both appear on the current version of the reverse side of the seal of Texas.
The battle also featured in episode 13 of The Time Tunnel, "The Alamo", first aired in 1966, and episode 5 of season one of the TV series Timeless, aired 2016.
As of 2023, the Alamo Trust, which operates the site, seeks to expand the property to build an Alamo museum. To do so, it would have to use eminent domain to seize a property containing an Alamo-themed bar called Moses Rose's Hideout (named after an Alamo deserter), which as of 2023 had operated for 12 years. The Alamo Trust claims that the owner's continued refusal to sell puts the \$400 million museum project at risk. The bar owner counters that he wishes to share in the economic success an Alamo museum would bring, and that there is a certain unjust irony in seizing his property to expand the Alamo.
## See also
- Last stand
- List of Alamo defenders
- List of last stands
- List of Texas Revolution battles
- List of Texan survivors of the Battle of the Alamo
## Explanatory notes
## General and cited references
|
1,796,650 |
Rwandan Civil War
| 1,171,618,467 |
1990–1994 conflict in Rwanda
|
[
"Civil wars involving the states and peoples of Africa",
"Civil wars of the 20th century",
"Conflicts in 1990",
"Ethnicity-based civil wars",
"History of Rwanda",
"Military history of Africa",
"Political history of Rwanda",
"Revolution-based civil wars",
"Rwandan genocide",
"Violence against women in Rwanda",
"Wars involving the Democratic Republic of the Congo"
] |
The Rwandan Civil War was a large-scale civil war in Rwanda which was fought between the Rwandan Armed Forces, representing the country's government, and the rebel Rwandan Patriotic Front (RPF) from 1 October 1990 to 18 July 1994. The war arose from the long-running dispute between the Hutu and Tutsi groups within the Rwandan population. A 1959–1962 revolution had replaced the Tutsi monarchy with a Hutu-led republic, forcing more than 336,000 Tutsi to seek refuge in neighbouring countries. A group of these refugees in Uganda founded the RPF which, under the leadership of Fred Rwigyema and Paul Kagame, became a battle-ready army by the late 1980s.
The war began on 1 October 1990 when the RPF invaded north-eastern Rwanda, advancing 60 km (37 mi) into the country. They suffered a major setback when Rwigyema was killed in action on the second day. The Rwandan Army, assisted by troops from France, gained the upper hand and the RPF were largely defeated by the end of October. Kagame, who had been in the United States during the invasion, returned to take command. He withdrew troops to the Virunga Mountains for several months before attacking again. The RPF began a guerrilla war, which continued until mid-1992 with neither side able to gain the upper hand. A series of protests forced Rwandan President Juvénal Habyarimana to begin peace negotiations with the RPF and domestic opposition parties. Despite disruption and killings by Hutu Power, a group of extremists opposed to any deal, and a fresh RPF offensive in early 1993, the negotiations were successfully concluded with the signing of the Arusha Accords in August 1993.
An uneasy peace followed, during which the terms of the accords were gradually implemented. RPF troops were deployed to a compound in Kigali and the peace-keeping United Nations Assistance Mission for Rwanda (UNAMIR) was sent to the country. The Hutu Power movement was steadily gaining influence and planned a "final solution" to exterminate the Tutsi. This plan was put into action following the assassination of President Habyarimana on 6 April 1994. Over the course of about a hundred days, between 500,000 and 1,000,000 Tutsi and moderate Hutu were killed in the Rwandan genocide. The RPF quickly resumed the civil war. They captured territory steadily, encircling cities and cutting off supply routes. By mid-June they had surrounded the capital, Kigali, and on 4 July they seized it. The war ended later that month when the RPF captured the last territory held by the interim government, forcing the government and genocidaires into Zaire.
The victorious RPF assumed control of the country, with Paul Kagame as de facto leader. Kagame served as vice president from 1994 and as president from 2000. The RPF began a programme of rebuilding the infrastructure and economy of the country, bringing genocide perpetrators to trial, and promoting reconciliation between Hutu and Tutsi. In 1996 the RPF-led Rwandan Government launched an offensive against refugee camps in Zaire, home to exiled leaders of the former regime and millions of Hutu refugees. This action started the First Congo War, which removed long-time dictator President Mobutu Sese Seko from power. As of 2023, Kagame and the RPF remain the dominant political force in Rwanda.
## Background
### Pre-independence Rwanda and origins of Hutu, Tutsi, and Twa
The earliest inhabitants of what is now Rwanda were the Twa, aboriginal pygmy hunter-gatherers who settled in the area between 8000 BC and 3000 BC and remain in Rwanda today. Between 700 BC and 1500 AD, Bantu groups migrated into the region and began to clear forest land for agriculture. The forest-dwelling Twa lost much of their land and moved to the slopes of mountains. Historians have several theories regarding the Bantu migrations. One theory is that the first settlers were Hutu, and the Tutsi migrated later and formed a distinct racial group, possibly originating from the Horn of Africa. An alternative theory is that the migration was slow and steady, with incoming groups integrating into rather than conquering the existing society. Under this theory, the Hutu and Tutsi are a later class, rather than racial, distinction.
The population coalesced, first into clans (ubwoko), and then, by 1700, into around eight kingdoms. The Kingdom of Rwanda, ruled by the Tutsi Nyiginya clan, became dominant from the mid-eighteenth century, expanding through conquest and assimilation. It achieved its greatest extent under the reign of Kigeli Rwabugiri in 1853–1895. Rwabugiri expanded the kingdom west and north, and initiated administrative reforms which caused a rift to grow between the Hutu and Tutsi populations. These included uburetwa, a system of forced labour which Hutu had to perform to regain access to land seized from them, and ubuhake, under which Tutsi patrons ceded cattle to Hutu or Tutsi clients in exchange for economic and personal service.

Rwanda and neighbouring Burundi were assigned to Germany by the Berlin Conference of 1884, and Germany established a presence in 1897 with the formation of an alliance with the King. German policy was to rule through the Rwandan monarchy, enabling colonisation with fewer European troops. The colonists favoured the Tutsi over the Hutu when assigning administrative roles, believing them to be migrants from Ethiopia and racially superior. The Rwandan King welcomed the Germans, and used their military strength to reinforce his rule and expand the kingdom. Belgian forces took control of Rwanda and Burundi during World War I, and from 1926 began a policy of more direct colonial rule. The Belgian administration, in conjunction with Catholic clerics, modernised the local economy. They also increased taxes and imposed forced labour on the population. Tutsi supremacy remained, reinforced by Belgian support of the two monarchies, leaving the Hutu disenfranchised. In 1935, Belgium introduced identity cards classifying each individual as Tutsi, Hutu, Twa, or Naturalised. It had previously been possible for wealthy Hutu to become honorary Tutsi, but the identity cards prevented further movement between the groups.
### Revolution, exile of Tutsi, and the Hutu republic
After 1945, a Hutu counter-elite developed, demanding the transfer of power from Tutsi to Hutu. The Tutsi leadership responded by trying to negotiate a speedy independence on their terms but found that the Belgians no longer supported them. There was a simultaneous shift in the Catholic Church, with prominent conservative figures in the early Rwandan church replaced by younger clergy of working-class origin. Of these, a greater proportion were Flemish rather than Walloon Belgians and sympathised with the plight of the Hutu. In November 1959, the Hutu began a series of riots and arson attacks on Tutsi homes, following false rumours of the death of a Hutu sub-chief in an assault by Tutsi activists. Violence quickly spread across the whole country, beginning the Rwandan Revolution. The King and Tutsi politicians launched a counter-attack in an attempt to seize power and ostracise the Hutu and Belgians, but were thwarted by Belgian Colonel Guy Logiest, who was brought in by the colonial Governor. Logiest re-established law and order and began a programme of overt promotion and protection of the Hutu elite. He replaced many Tutsi chiefs with Hutu and effectively forced King Kigeli V into exile.
Logiest and Hutu leader Grégoire Kayibanda declared the country an autonomous republic in 1961 and it became independent in 1962. More than 336,000 Tutsi left Rwanda by 1964 to escape the Hutu purges, mostly to the neighbouring countries of Burundi, Uganda, Tanzania and Zaire. Many of the Tutsi exiles lived as refugees in their host countries, and sought to return to Rwanda. Some supported the new Rwandan Government, but others formed armed groups and launched attacks on Rwanda, the largest of which advanced close to Kigali in 1963. These groups were known in Kinyarwanda as the inyenzi (cockroaches). Historians do not know the origin of this term – it is possible the rebels coined it themselves, the name reflecting that they generally attacked at night. The inyenzi label resurfaced in the 1990s as a highly derogatory term for the Tutsi, used by Hutu hardliners to dehumanise them. The inyenzi attacks of the 1960s were poorly equipped and organised and the government defeated them. The last significant attack was made in desperation from Burundi in December 1963 but failed due to bad planning and lack of equipment. The government responded to this attack with the slaughter of an estimated 10,000 Tutsi within Rwanda.
Kayibanda presided over a Hutu republic for the next decade, imposing an autocratic rule similar to the pre-revolution feudal monarchy. In 1973 Hutu army officer Juvénal Habyarimana toppled Kayibanda in a coup. He founded the National Republican Movement for Democracy and Development (MRND) party in 1975, and promulgated a new constitution following a 1978 referendum, making the country a one-party state in which every citizen had to belong to the MRND. Anti-Tutsi discrimination continued under Habyarimana but the country enjoyed greater economic prosperity and reduced anti-Tutsi violence. A coffee price collapse in the late 1980s caused a loss of income for Rwanda's wealthy elite, precipitating a political fight for power and access to foreign aid receipts. The family of first lady Agathe Habyarimana, known as the akazu, were the principal winners in this fight. The family had a more respected lineage than that of the President, having ruled one of the independent states near Gisenyi in the nineteenth century. Habyarimana therefore relied on them in controlling the population of the north-west. The akazu exploited this to their advantage, and Habyarimana was increasingly unable to rule without them. The economic situation forced Habyarimana to greatly reduce the national budget, which led to civil unrest. On the advice of French president François Mitterrand, Habyarimana declared a commitment to multi-party politics but took no action to bring this about. Student protests followed and by late 1990 the country was in crisis.
### Formation of the RPF and preparation for war
The organisation which became the Rwandan Patriotic Front (RPF) was founded in 1979 in Uganda. It was initially known as the Rwandan Refugees Welfare Association and then from 1980 as the Rwandan Alliance for National Unity (RANU). It formed in response to persecution and discrimination against the Tutsi refugees by the regime of Ugandan President Milton Obote. Obote accused the Rwandans of collaboration with his predecessor, Idi Amin, including occupying the homes and stealing the cattle of Ugandans who had fled from Amin. Meanwhile, Tutsi refugees Fred Rwigyema and Paul Kagame had joined Yoweri Museveni's rebel Front for National Salvation (FRONASA). Museveni fought alongside Obote to defeat Amin in 1979 but withdrew from the government following Obote's disputed victory in the 1980 general election. With Rwigyema and Kagame he formed a new rebel army, the National Resistance Army (NRA). The NRA's goal was to overthrow Obote's government, in what became known as the Ugandan Bush War. President Obote remained hostile to the Rwandan refugees throughout his presidency and RANU was forced into exile in 1981, relocating to Nairobi in Kenya. In 1982, with the authority of Obote, local district councils in the Ankole region issued notices requiring refugees to be evicted from their homes and settled in camps. These evictions were violently implemented by Ankole youth militia. Many displaced Rwandans attempted to cross the border to Rwanda, but the Habyarimana regime confined them to isolated camps and closed the border to prevent further migration. Faced with the threat of statelessness, many more Tutsi refugees in Uganda chose to join Museveni's NRA.
In 1986 the NRA captured Kampala with a force of 14,000 soldiers, including 500 Rwandans, and formed a new government. After Museveni was inaugurated as president he appointed Kagame and Rwigyema as senior officers in the new Ugandan army. The experience of the Bush War inspired Rwigyema and Kagame to consider an attack against Rwanda, with the goal of allowing the refugees to return home. As well as fulfilling their army duties, the pair began building a covert network of Rwandan Tutsi refugees within the army's ranks, intended as the nucleus for such an attack. With the pro-refugee Museveni in power, RANU was able to move back to Kampala. At its 1987 convention it renamed itself the Rwandan Patriotic Front and it too committed to returning the refugees to Rwanda by any means possible. In 1988 a leadership crisis within the RPF prompted Fred Rwigyema to intervene in the organisation and take control, replacing Peter Bayingana as RPF president. Kagame and other senior members of Rwigyema's Rwandan entourage within the NRA also joined, Kagame assuming the vice presidency. Bayingana remained as the other vice president but resented the loss of the leadership. Bayingana and his supporters attempted to start the war with an invasion in late 1989 without the support of Rwigyema, but this was quickly repelled by the Rwandan Army.
Rwandan President Habyarimana was aware of the increasing number of Tutsi exiles in the Ugandan Army and made representations to President Museveni on the matter. At the same time many native Ugandans and Baganda officers in the NRA began criticising Museveni over his appointment of Rwandan refugees to senior positions. He therefore demoted Kagame and Rwigyema in 1989. They remained de facto senior officers but the change in official status, and the possibility that they might lose access to the resources of the Ugandan military, caused them to accelerate their plans to invade Rwanda. In 1990 a dispute in south-western Uganda between Ugandan ranch owners and squatters on their land, many of whom were Rwandans, led to a wider debate on indigeneity and eventually to the explicit labeling of all Rwandan refugees as non-citizens. Realising the precariousness of their own positions, and seeing the opportunity afforded both by the refugees' renewed drive to leave Uganda and by the instability on the Rwandan domestic scene, Rwigyema and Kagame decided in mid-1990 to put their invasion plans into effect immediately. It is likely President Museveni knew of the planned invasion but did not explicitly support it. In mid-1990 Museveni ordered Rwigyema to attend an officer training course at the Command and General Staff College in Fort Leavenworth in the United States, and was also planning overseas deployments for other senior Rwandans in the army. This may have been a tactic to reduce the threat of an RPF invasion of Rwanda. After two days of discussion Rwigyema persuaded Museveni that following years of army duty he needed a break and was allowed to remain in Uganda. Museveni then ordered Kagame to attend instead. The RPF leadership allowed him to go, to avoid suspicion, even though it meant his missing the beginning of the war.
## Course of the war
### 1990 invasion and death of Rwigyema
On 1 October 1990 fifty RPF rebels deserted their Ugandan Army posts and crossed the border from Uganda into Rwanda, killing a Rwandan customs guard at the Kagitumba border post and forcing others to flee. They were followed by hundreds more rebels, dressed in the uniforms of the Ugandan national army and carrying stolen Ugandan weaponry, including machine guns, autocannons, mortars, and Soviet BM-21 multiple rocket launchers. According to RPF estimates, around 2,500 of the Ugandan Army's 4,000 Rwandan soldiers took part in the invasion, accompanied by 800 civilians, including medical staff and messengers. Both President Yoweri Museveni of Uganda and President Habyarimana of Rwanda were in New York City attending the United Nations World Summit for Children. In the first few days of fighting, the RPF advanced 60 km (37 mi) south to Gabiro. Their Rwandan Armed Forces opponents, fighting for Habyarimana's government, were numerically superior, with 5,200 soldiers, and possessed armoured cars and helicopters supplied by France, but the RPF benefited from the element of surprise. The Ugandan government set up roadblocks across the west of Uganda, to prevent further desertions and to block the rebels from returning to Uganda.
On 2 October the RPF leader Fred Rwigyema was shot in the head and killed. The exact circumstances of Rwigyema's death are disputed; the official line of Kagame's government, and the version mentioned by historian Gérard Prunier in his 1995 book on the subject, was that Rwigyema was killed by a stray bullet. In his 2009 book Africa's World War, Prunier says Rwigyema was killed by his subcommander Peter Bayingana, following an argument over tactics. According to this account, Rwigyema was conscious of the need to move slowly and attempt to win over the Hutu in Rwanda before assaulting Kigali, whereas Bayingana and fellow subcommander Chris Bunyenyezi wished to strike hard and fast, to achieve power as soon as possible. The argument boiled over, causing Bayingana to shoot Rwigyema dead. Another senior RPF officer, Stephen Nduguta, witnessed this shooting and informed President Museveni; Museveni sent his brother Salim Saleh to investigate, and Saleh ordered Bayingana's and Bunyenyezi's arrests and eventual executions.
When news of the RPF offensive broke, Habyarimana requested assistance from France in fighting the invasion. The French president's son, Jean-Christophe Mitterrand, was head of the government's Africa Cell and promised to send troops. On the night of 4 October, gunfire was heard in Kigali in a mysterious attack, which was attributed to RPF commandos. The attack was most likely staged by the Rwandan authorities, seeking to convince the French the regime was in imminent danger. As a result, 600 French soldiers arrived in Rwanda the following day, twice as many as initially pledged. The French operation was code-named Noroît and its official purpose was to protect French nationals. In reality the mission was to support Habyarimana's regime and the French parachute companies immediately set up positions blocking the RPF advance to the capital and Kigali International Airport. Belgium and Zaire also sent troops to Kigali in early October. The Belgian troops were deployed primarily to defend the country's citizens living in Rwanda but after a few days it became clear they were not in danger. Instead, the deployment created a political controversy as news reached Brussels of arbitrary arrests and massacres by the Habyarimana regime and its failure to deal with the underlying causes of the war. Faced with a growing domestic dispute over the issue, and with no obvious prospect of achieving peace, the Belgian government withdrew its troops by the beginning of November. Belgium provided no further military support to the Habyarimana government. Zairian President Mobutu Sese Seko's contribution was to send several hundred troops of the elite Special Presidential Division (DSP). Unlike the French, the Zairian troops went straight to the front line and began fighting the RPF, but their discipline was poor. 
The Zairian soldiers raped Rwandan civilians in the north of the country and looted their homes, prompting Habyarimana to expel them back to Zaire within a week of their arrival. With French assistance, and benefiting from the loss of RPF morale after Rwigyema's death, the Rwandan Army enjoyed a major tactical advantage. By the end of October they had regained all the ground taken by the RPF and pushed the rebels all the way back to the Ugandan border. Many RPF soldiers deserted; some crossed back into Uganda and others went into hiding in the Akagera National Park. Habyarimana accused the Ugandan Government of supplying the RPF, establishing a "rear command" for the group in Kampala, and "flagging off" the invasion. The Rwandan Government announced on 30 October that the war was over.
The Rwandan Government used the attack on Kigali on 4 October as the pretext for the arbitrary arrest of more than 8,000 mostly Tutsi political opponents. Tutsi were increasingly viewed with suspicion; Radio Rwanda aired incitement to ethnic hatred and a pogrom was organised by local authorities on 11 October in the Kibilira commune of Gisenyi Province, killing 383 Tutsi. The burgomaster and the sous-préfet were dismissed from their posts and jailed, but released soon thereafter. It was the first time in nearly twenty years that massacres against Tutsi were perpetrated, as anti-Tutsi violence under the Habyarimana regime had been only low level up to that point.
### Kagame's reorganisation of the RPF
Paul Kagame was still in the United States at the time of the outbreak of war, attending the military training course at Fort Leavenworth. He and Rwigyema had been in frequent contact by telephone throughout his stay in Kansas, planning the final details of the October invasion. At the end of September Kagame informed the college that he was leaving the course, and he settled his affairs in preparation for a return to Africa as the invasion began. The college allowed him to leave with several textbooks, which he later used in planning tactics for the war. When Kagame learned of Rwigyema's death on 5 October, he departed immediately to take command of the RPF troops. He flew via London and Addis Ababa to Entebbe Airport, where he was given safe passage by a friend in the Ugandan secret service; the police considered arresting him, but with Museveni out of the country and no specific orders, they allowed him to pass. Ugandan associates drove Kagame to the border and he crossed into Rwanda early on 15 October.
The RPF were in disarray by the time Kagame arrived, with troop morale very low. He later described his arrival as one of the worst experiences of his life; the troops lacked organisation following Rwigyema's death and were demoralised after their losses in the war. Kagame was well known to the RPF troops, many of whom had fought with him in the Ugandan Army, and they welcomed his arrival in the field. He spent the following weeks gathering intelligence with senior officers. By the end of October, with the RPF forced back to the Ugandan border, Kagame decided it was futile to continue fighting. He therefore withdrew most of the army from north-eastern Rwanda, moving them to the Virunga mountains, along the northwestern border. Kagame knew that the rugged terrain of the Virungas offered protection from attacks, even if the RPF's position was discovered. The march west took almost a week during which the soldiers crossed the border into Uganda several times, with the permission of President Museveni, taking advantage of personal friendships between the RPF soldiers and their ex-colleagues in the Ugandan Army.
Meanwhile, some RPF soldiers remained behind as a decoy, carrying out small-scale attacks on the Rwandan Army, which remained unaware of the Front's relocation. The reorientation towards guerrilla warfare began with a raid on a Rwandan customs post across the border from Katuna. Following the attack, the Rwandan Government accused Uganda of deliberately sheltering the RPF. The RPF's new tactics inflicted heavy casualties on the Rwandan Army, which reacted by shelling Ugandan territory. Ugandan civilians were killed, significant damage was done to property, and there were reports of Rwandan troops crossing the border to loot and abduct locals.
Conditions in the Virungas were very harsh for the RPF. At altitudes of up to about 4,500 metres (14,800 ft), food and supplies were not readily available and, lacking warm clothing, several soldiers froze to death or lost limbs in the cold high-altitude climate. Kagame spent the next two months reorganising the army, without carrying out any military operations. Alexis Kanyarengwe, a Hutu colonel who had worked with Habyarimana but had fallen out with him and gone into exile, joined the RPF and was appointed chairman of the organisation. Another Hutu, Seth Sendashonga, became the RPF's liaison with Rwandan opposition parties. Most of the other senior recruits at the time were Ugandan-based Tutsi. Personnel numbers grew steadily, with volunteers coming from the exile communities in Burundi, Zaire and other countries. Kagame maintained tight discipline in his army, enforcing a regimented training routine as well as a large set of rules for soldier conduct. Soldiers were expected to pay for goods purchased in the community, refrain from alcohol and drugs, and establish a good reputation for the RPF amongst the local population. The RPF punished personnel who broke these rules, sometimes with beatings, while more serious offences such as murder, rape, and desertion were punishable by death.
The RPF carried out a major fundraising programme, spearheaded by Financial Commissioner Aloisia Inyumba in Kampala. They received donations from Tutsi exiles around the world, as well as from businessmen within Rwanda who had fallen out with the government. The sums involved were not enormous but, with tight financial discipline and a leadership willing to live frugally, the RPF was able to grow its operational capability. It obtained its weapons and ammunition from a variety of sources, including the open market, taking advantage of a surplus of weaponry at the end of the Cold War. It is likely they also received weaponry from officers in the Ugandan Army; according to Gérard Prunier, Ugandans who had fought with Kagame in the Bush War remained loyal to him and secretly passed weaponry to the RPF. Museveni likely knew of this but was able to claim ignorance when dealing with the international community. Museveni later said that "faced with [a] fait accompli situation by our Rwandan brothers", Uganda went "to help the RPF, materially, so that they are not defeated because that would have been detrimental to the Tutsi people of Rwanda and would not have been good for Uganda's stability". Journalist Justus Muhanguzi Kampe reported that the taking of military equipment by Tutsi deserters from the Ugandan Army meant the national arsenal "nearly got depleted"; he suspected the war "must have had a tremendous financial impact on the Ugandan government, especially Uganda's military budget", costing the country "trillions of shillings".
### Attack on Ruhengeri, January 1991
After three months of regrouping, Kagame decided in January 1991 that the RPF was ready to fight again. The target for the first attack was the northern city of Ruhengeri, south of the Virunga mountains. The city was the only provincial capital that could be attacked quickly from the Virungas while maintaining an element of surprise. Kagame also favoured an attack on Ruhengeri for cultural reasons. President Habyarimana, as well as his wife and her powerful family, came from the north-west of Rwanda and most Rwandans regarded the region as the heartland of the regime. An attack there guaranteed the population would become aware of the RPF's presence and Kagame hoped this would destabilise the government.
During the night of 22 January, seven hundred RPF fighters descended from the mountains into hidden positions around the city, assisted by RPF sympathisers living in the area. They attacked on the morning of 23 January. The Rwandan forces were taken by surprise and were mostly unable to defend against the invasion. The Rwandan Police and army briefly repelled the attackers in areas around their stations, killing large numbers of rebel fighters in the process. It is likely the Rwandan Army forces were assisted by French troops, as the French Government later decorated around fifteen French paratroopers for their part in the rearguard action. By noon, the defending forces were defeated and the RPF held the whole city. Most of the civilian population fled.
One of the principal RPF targets in Ruhengeri was the prison, which was Rwanda's largest. When he learnt of the invasion the warden, Charles Uwihoreye, telephoned the government in Kigali to request instructions. He spoke to Colonel Elie Sagatwa, one of the akazu, who ordered him to kill every inmate in the prison to avoid escape and defections during the fighting. He also wanted to prevent high-profile political prisoners and former insiders from sharing secret information with the RPF. Uwihoreye refused to obey, even after Sagatwa called him and repeated the order, having confirmed it with the president. Eventually, the RPF stormed the buildings and the prisoners were liberated. Several prisoners were recruited into the RPF, including Théoneste Lizinde, a former close ally of President Habyarimana, who had been arrested following a failed coup attempt in 1980.
The RPF forces held Ruhengeri through the afternoon of 23 January, before withdrawing into the mountains for the night. The raid undermined the Rwandan Government's claims that the RPF had been ejected from the country and had been reduced to conducting guerrilla operations from Uganda. The government sent troops to the city the following day and a state of emergency was declared, with strict curfews in Ruhengeri and the surrounding area. The RPF raided the city almost every night for several months, fighting with Rwandan army forces, and the country was back at war for the first time since the October invasion.
### Guerrilla war, 1991–1992
Following the action in Ruhengeri, the RPF again began to wage guerrilla war. The Rwandan Army massed troops across the north of the country, occupying key positions and shelling RPF hideouts in the Virunga mountains, but the mountainous terrain prevented them from launching an all-out assault. Paul Kagame's troops attacked the Rwandan Army forces repeatedly and frequently, keen to ensure the diplomatic and psychological effect of the RPF's resurgence was not lost. Kagame employed tactics such as attacking simultaneously in up to ten locations across the north of the country, to prevent his opponents from concentrating their forces in any one place. This low-intensity war continued for many months, with both sides launching successful attacks on the other and neither able to gain the upper hand. The RPF made some territorial gains, including capturing the border town of Gatuna. This was significant as it blocked Rwanda's access to the port of Mombasa via the Northern Corridor, forcing all trade to go through Tanzania via the longer and costlier Central Corridor. By late 1991 the RPF controlled 5% of Rwanda, setting up its new headquarters in an abandoned tea factory near Mulindi, Byumba province. Many Hutu civilians in areas captured by the RPF fled to government-held areas, creating a large population of internally displaced persons in the country.
The renewed warfare had two effects in Rwanda. The first was a resurgence of violence against Tutsi still in the country. Hutu activists killed up to 1,000 Tutsi in attacks authorised by local officials, starting with the slaughter of 30–60 Bagogwe Tutsi pastoralists near Kinigi and then moving south and west to Ruhengeri and Gisenyi. These attacks continued until June 1991, when the government introduced measures to allow potential victims to move to safer areas such as Kigali. The akazu also began a major propaganda campaign, broadcasting and publishing material designed to persuade the Hutu population that the Tutsi were a separate and alien people, non-Christians seeking to re-establish the old Rwandan feudal monarchy with the final goal of enslaving the Hutu. This included the Hutu Ten Commandments, a set of "rules" published in the Kangura magazine, mandating Hutu supremacy in all aspects of Rwandan life. In response the RPF opened its own propaganda radio station, Radio Muhabura, which broadcast from Uganda into Rwanda. This was never hugely popular but gained listenership during 1992 and 1993.
The second development was that President Habyarimana announced that he was introducing multi-party politics into the country, following intense pressure from the international community, including his most loyal ally France. Habyarimana had originally promised this in mid-1990, and opposition groups had formed in the months since, including the Republican Democratic Movement (MDR), Social Democratic Party (PSD) and the Liberal Party (PL), but the one-party state law had remained in place. In mid-1991 Habyarimana officially allowed multi-party politics to begin, a change that saw a plethora of new parties come into existence. Many had manifestos which favoured full democracy and rapprochement with the RPF, but these parties were largely ineffective and had no political influence. The older opposition groups registered themselves as official parties and the country notionally moved towards a multi-party inclusive cabinet with proper representation, but progress was continually hampered by the regime. The last opposition party to form was the Coalition for the Defence of the Republic (CDR), which took a more hardline Hutu stance than Habyarimana's own party and had close links to the akazu.
Progress remained slow in 1991 and 1992. A cabinet set up in October 1991 contained almost no opposition members, and the administrative hierarchy across the country recognised the authority of only Habyarimana's National Republican Movement for Democracy and Development (MRND) party. Another one-party cabinet was announced in January 1992, prompting large-scale protests in Kigali and forcing Habyarimana to make real concessions. He announced his intention to negotiate with the RPF, and formed a multi-party cabinet in April. This was still dominated by Habyarimana's party, but with opposition figures in some key positions. The opposition members of this cabinet met with the RPF and negotiated a ceasefire. In July 1992 the rebels agreed to stop fighting, and the parties began peace negotiations in the Tanzanian city of Arusha.
### Peace process, 1992–1993
The peace process was complicated by the fact that four distinct groups were involved, each with its own agenda. The Hutu hardliners, centred around the family of Agathe Habyarimana, were represented by the CDR as well as extremists within the president's own MRND party. The second group was the official opposition, which excluded the CDR. They had much more democratic and conciliatory aims but were also deeply suspicious of the RPF, whom they saw as trying to upset the "democratic" policy of Hutu rule established in the 1959 revolution. The third group was the RPF. Paul Kagame engaged with the peace process against the advice of some of his senior officers, in the knowledge that many of those on the other side of the table were hardliners who were not sincerely interested in negotiations. He feared that shunning the opportunity for peace would weaken the RPF politically and lose them international goodwill. Finally there was the group representing President Habyarimana himself, who sought primarily to hold on to his power in whatever form he could. This meant publicly striving for a middle ground compromise solution, but privately obstructing the process and trying to delay change to the status quo for as long as possible. Habyarimana recognised the danger posed to him by the radical Hutu faction and attempted in mid-1992 to remove them from senior army positions. This effort was only partially successful; akazu affiliates Augustin Ndindiliyimana and Théoneste Bagosora remained in influential posts, providing them with a link to power.
The delegates at the negotiations in Arusha made some progress in the latter half of 1992, despite wrangling between Habyarimana and hardline members of his party that compromised the government officials' negotiating power. In August the parties agreed to a "pluralistic transitional government", which would include the RPF. The CDR and hardline faction of the MRND reacted violently to this. Feeling sidelined by the developing Arusha process, they began killing Tutsi civilians in the Kibuye area; 85 were killed, and 500 homes burnt. Historian Gérard Prunier names late 1992 as the time when the idea of a genocidal "final solution" to kill every Tutsi in Rwanda was first mooted. Hardliners were busy setting up parallel institutions within the official organs of state, including the army, from which they hoped to effect a move away from the more conciliatory tone adopted by Habyarimana and the moderate opposition. Their goal was to take over from Habyarimana's government as the perceived source of power in the country amongst the Hutu masses, to maintain the line that the RPF and Tutsi more generally were a threat to Hutu freedoms, and to find a way to thwart any agreement negotiated in Arusha.
The situation deteriorated in early 1993 when the teams in Arusha signed a full power-sharing agreement, dividing government positions between the MRND, RPF and other major opposition parties, but excluding the CDR. This government was supposed to rule the country under a transitional constitution until free and fair elections could be held. The agreement reflected the balance of power at the time; Habyarimana, the mainstream opposition, and the RPF all accepted it, but the CDR and hardline MRND officers were violently opposed. MRND national secretary Mathieu Ngirumpatse announced that the party would not respect the agreement, contradicting the president and the party's negotiators in Arusha. The MRND hardliners organised demonstrations across the country and mobilised their supporters within the army and populace to begin a much larger killing spree than those that had previously occurred. The violence engulfed the whole north-west of Rwanda and lasted for six days; many houses were burned and hundreds of Tutsi killed.
### RPF offensive, February 1993
Paul Kagame responded by pulling out of the Arusha process and resuming the war, ending the six-month cease-fire. The RPF cited the CDR and MRND-hardliner violence as its reason for this, but according to foreign policy scholar Bruce D. Jones the offensive may actually have been intended primarily to increase the rebels' bargaining power at the peace talks. The next subject for the negotiations was the proportion of troops and officers to be allocated to each side in the new unified army. By demonstrating its military power in the field, through a successful offensive against the Rwandan Government forces, the RPF was able to secure an increased percentage of troops in the agreement.
The RPF began its offensive on 8 February, fighting southwards from the territory it already held in Rwanda's northern border regions. In contrast to the October 1990 and 1991–1992 campaigns, the RPF advance in 1993 was met by weak resistance from the Rwandan Army forces. The likely reason was a significant deterioration in morale and military experience within the government forces. The impact of the long-running war on the economy, and a heavy devaluation of the Rwandan franc, had left the government struggling to pay its soldiers regularly. The armed forces had also expanded rapidly, at one point growing from fewer than 10,000 troops to almost 30,000 in one year. The new recruits were often poorly disciplined and not battle-ready, and prone to drunkenness and to abusing and raping civilians.
The RPF advance continued unchecked in February, its forces moving steadily south and gaining territory without opposition. They took Ruhengeri on the first day of fighting, and later the city of Byumba. Local Hutu civilians fled en masse from the areas the RPF were taking, most of them ending up in refugee camps on the outskirts of Kigali. The civilian cost of the offensive is unclear; according to André Guichaoua several thousand were killed, while Prunier labelled the RPF killing as "small-scale". This violence alienated the rebels from their potential allies in the democratic Rwandan opposition parties.
When it became clear that the Rwandan Army was losing ground to the RPF, Habyarimana requested urgent assistance from France. Fearing that the RPF could soon be in a position to seize Kigali, the French immediately dispatched 150 troops to Rwanda, along with arms and ammunition, to bolster the Rwandan Army forces. A further 250 French soldiers were sent on 20 February. The arrival of French troops in Kigali significantly changed the military situation on the ground. The RPF now found themselves under attack, French shells bombarding them as they advanced southwards.
By 20 February the RPF had advanced to within 30 km (19 mi) of the capital, Kigali, and many observers believed an assault on the city was imminent. The assault did not take place, and the RPF instead declared a cease-fire. Whether or not the RPF intended to advance on the capital is unknown. Kagame later said his aim at this point was to inflict as much damage as possible on Rwandan Army forces, capture their weapons, and gain ground slowly, but not to attack the capital or seek to end the war with an outright RPF victory. Kagame told journalist and author Stephen Kinzer such a victory would have ended international goodwill towards the RPF and led to charges that the war had simply been a bid to replace the Hutu state with a Tutsi one. The increased presence of French troops and the fierce loyalty of the Hutu population to the government meant an invasion of Kigali could not have been achieved with the same ease with which the RPF had conquered the north. Fighting for the capital would have been a much more difficult and dangerous operation. Several of Kagame's senior officers urged him to go for outright victory but he overruled them. By the end of the February offensive, more than a million civilians, mostly Hutu, had left their homes in the country's largest exodus to date.
### Arusha Accords and rise of Hutu Power, 1993–1994
The RPF cease-fire was followed by two days of negotiations in the Ugandan capital Kampala, attended by RPF leader Paul Kagame, and involving President Museveni and representatives of European nations. The Europeans insisted that RPF forces withdraw to the zone they had held before the February offensive. Kagame responded that he would agree to this only if the Rwandan army were forbidden from re-entering the newly conquered territory. Following a threat by Kagame to resume fighting and potentially take even more territory, the two sides reached a compromise deal. This entailed the RPF withdrawing to its pre-February territory, but also mandated the setting up of a demilitarised zone between the RPF area and the rest of the country. The deal was significant because it marked a formal concession by Habyarimana's regime of the northern zone to the rebels, recognising the RPF hold on that territory. There were many within the RPF senior command who felt Kagame had ceded too much, because the deal meant not only withdrawal to the pre-February boundaries, but also a promise not to encroach on the demilitarised zone. This therefore ended RPF ambitions of capturing more territory. Kagame used the authority he had accumulated through his successful leadership of the RPF to override these concerns, and the parties returned once more to the negotiating table in Arusha.
Despite the agreement and ongoing negotiations, President Habyarimana, supported by the French Government, spent the subsequent months forging a "common front" against the RPF. This front included members of his own party, the CDR, and factions from each of the other opposition parties in the power-sharing coalition. At the same time other members of the same parties issued a statement, in conjunction with the RPF, in which they condemned French involvement in the country and called for the Arusha process to be respected in full. The hardline factions within the parties became known as Hutu Power, a movement which transcended party politics. Apart from the CDR, there was no party that was exclusively part of the Power movement. Instead almost every party was split into "moderate" and "Power" wings, with members of both camps claiming to represent the legitimate leadership of that party. Even the ruling party contained a Power wing, consisting of those who opposed Habyarimana's intention to sign a peace deal. Several radical youth militia groups emerged, attached to the Power wings of the parties; these included the Interahamwe, attached to the ruling party, and the CDR's Impuzamugambi. These youth militias began actively carrying out massacres across the country. The army trained the militias, sometimes in conjunction with the French, who were unaware the training they provided was being used to perpetrate the mass killings.
By June President Habyarimana had come to view Hutu Power, rather than the mainstream opposition, as the biggest threat to his leadership. This led him to change tactics and engage fully with the Arusha peace process, giving it the impetus it needed to draw to a completion. According to Prunier this support was more symbolic than genuine. Habyarimana believed he could maintain power more easily through a combination of limited concessions to the opposition and RPF than he could if Hutu Power were allowed to disrupt the peace process. The negotiation of troop numbers was protracted and difficult; twice the talks almost collapsed. The Rwandan Government wanted to allocate only 15% of the officer corps to the RPF, reflecting the proportion of Tutsi in the country, while the RPF was arguing for a 50/50 split. The RPF were in a superior position following their successful February campaign and were backed in their demands by Tanzania, which was chairing the talks. The government eventually agreed to their demands. As well as 50% of the officer corps, the RPF was allocated up to 40% of the non-command troops. The deal also mandated large-scale demobilisation; of the 35,000 Rwandan Army and 20,000 RPF soldiers at the time of the accords, only 19,000 would be drafted into the new national army. With all details agreed the Arusha Accords were finally signed on 4 August 1993 at a formal ceremony attended by President Habyarimana as well as heads of state from neighbouring countries.
An uneasy peace once again took hold, lasting until 7 April of the following year. The agreement called for a United Nations peacekeeping force; this was titled the United Nations Assistance Mission for Rwanda (UNAMIR), and was in place in Rwanda by October 1993 under the command of Canadian General Roméo Dallaire. Another stipulation of the agreement was that the RPF would station diplomats in Kigali at the Conseil national de développement (CND), now known as the Chamber of Deputies, Rwanda's Parliament building. These men were protected by 600–1,000 RPF soldiers, who arrived in Kigali through UNAMIR's Operation Clean Corridor in December 1993. Meanwhile, the Hutu Power wings of the various parties were beginning plans for a genocide. The President of Burundi, Melchior Ndadaye, who had been elected in June as the country's first ever Hutu president, was assassinated by extremist Tutsi army officers in October 1993. The assassination reinforced the notion among Hutu that the Tutsi were their enemy and could not be trusted. The CDR and the Power wings of the other parties realised they could use this situation to their advantage. The idea of a "final solution", which had first been suggested in 1992 but had remained a fringe viewpoint, was now at the top of their agenda. An informant from the Interahamwe told UNAMIR officials that a group of Hutu extremists was planning to disrupt the peace process and kill Tutsi in Kigali.
### Military operations during the 1994 genocide
The cease-fire ended abruptly on 6 April 1994 when President Habyarimana's plane was shot down near Kigali Airport, killing both Habyarimana and the new President of Burundi, Cyprien Ntaryamira. The pair were returning home from a regional summit in Dar es Salaam at which the leaders of Kenya, Uganda, and Tanzania had urged Habyarimana to stop delaying the implementation of the Arusha accords. The attackers remain unknown. Prunier, in his book written shortly after the incident, concluded that it was most likely a coup carried out by extreme Hutu members of Habyarimana's government. This theory was disputed in 2006 by French judge Jean-Louis Bruguière and in 2008 by Spanish judge Fernando Andreu, both of whom alleged that Kagame and the RPF were responsible. At the end of 2010 the judges succeeding Bruguière ordered a more thorough scientific examination, which employed experts in ballistics and acoustics. The resulting report supported the initial theory that Hutu extremists had assassinated Habyarimana, but it did not lead the judges to drop the charges against the RPF suspects; the charges were finally dropped in 2018 for lack of evidence.
The shooting down of the plane served as the catalyst for the Rwandan genocide, which began within a few hours. A crisis committee was formed by the military, headed by Colonel Théoneste Bagosora, which refused to recognise Prime Minister Agathe Uwilingiyimana as leader, even though she was legally next in the line of political succession. UN commander General Dallaire labelled this a coup and insisted that Uwilingiyimana be placed in charge, but Bagosora refused. The Presidential Guard killed Uwilingiyimana and her husband during the night, along with ten Belgian UNAMIR soldiers charged with her protection and other prominent moderate politicians and journalists. The crisis committee appointed an interim government, still effectively controlled by Bagosora, which began ordering the systematic killing of huge numbers of Tutsi, as well as some politically moderate Hutu, through well-planned attacks. Over the course of approximately 100 days, between 500,000 and 1,000,000 people were killed.
On 7 April, as the genocide started, RPF commander Paul Kagame warned the interim government and the United Nations peacekeepers that he would resume the civil war if the killing did not stop. The next day Rwandan Army forces attacked the national parliament building from several directions but RPF troops stationed there successfully fought back. The RPF then crossed the demilitarised zone from their territory in the north and began an attack on three fronts, leaving their opponents unsure of their true intentions or whether an assault on Kigali was imminent. UNAMIR contingents in the demilitarised zone withdrew to their camps to avoid being caught in the fighting. Kagame refused to talk to the interim government, believing it was just a cover for Bagosora's rule and not committed to ending the genocide. Over the next few days the RPF moved steadily south through the eastern part of the country, capturing Gabiro and large areas of the countryside to the north and east of Kigali. Their unit stationed in Kigali was isolated from the rest of their forces but a unit of young soldiers successfully crossed government-held territory to link up with them. They avoided attacking Kigali or Byumba at this stage but conducted manoeuvres designed to encircle the cities and cut off supply routes. The RPF also allowed Tutsi refugees from Uganda to settle behind the front line in the RPF controlled areas.
In April there were numerous attempts by the United Nations forces to establish a cease-fire, but Kagame insisted each time that the RPF would not stop fighting unless the killings stopped. In late April the RPF secured the whole of the Tanzanian border area and began to move west from Kibungo, to the south of Kigali. They encountered little resistance except around Kigali and Ruhengeri. By 16 May they had cut the road between Kigali and Gitarama, the temporary home of the interim government, and by 13 June had taken Gitarama itself. The taking of Gitarama followed an unsuccessful attempt by the Rwandan Army forces to reopen the road. The interim government was forced to relocate to Gisenyi in the far north-west. As well as fighting the war, Kagame recruited heavily at this time to expand the RPF. The new recruits included Tutsi survivors of the genocide and Rwandan Tutsi refugees who had been living in Burundi, but they were less well trained and disciplined than the earlier recruits.
In late June 1994 France launched Opération Turquoise, a UN-mandated mission to create safe humanitarian areas for displaced persons, refugees, and civilians in danger. From bases in the Zairian cities of Goma and Bukavu, the French entered south-western Rwanda and established the Turquoise zone, within the Cyangugu–Kibuye–Gikongoro triangle, an area occupying approximately a fifth of Rwanda. Radio France International estimates that Turquoise saved around 15,000 lives, but with the genocide coming to an end and the RPF's ascendancy, many Rwandans interpreted Turquoise as a mission to protect the Hutus from the RPF, including some who had participated in the genocide. The French remained hostile to the RPF and their presence held up the RPF's advance in the south-west of the country. Opération Turquoise remained in Rwanda until 21 August 1994. French activity in Rwanda during the civil war later became a subject of much study and dispute, and generated an unprecedented debate about French foreign policy in Africa.
Having completed the encirclement of Kigali, the RPF spent the latter half of June fighting for the capital. The Rwandan Army forces had superior manpower and weapons, but the RPF steadily gained territory and conducted raids to rescue civilians from behind enemy lines. According to Dallaire, this success was due to Kagame's being a "master of psychological warfare"; he exploited the fact that the Rwandan Army were concentrating on the genocide rather than the fight for Kigali and exploited the government's loss of morale as it lost territory. The RPF finally defeated the Rwandan Army in Kigali on 4 July and on 18 July took Gisenyi and the rest of the north-west, forcing the interim government into Zaire. This RPF victory ended the genocide as well as the civil war. At the end of July 1994 Kagame's forces held the whole of Rwanda except for the Turquoise zone in the south-west. The date of the fall of Kigali, 4 July, was later designated Liberation Day by the RPF and is commemorated as a public holiday in Rwanda.
The UN peacekeeping force, UNAMIR, was in Rwanda during the genocide, but its Chapter VI mandate rendered it powerless to intervene militarily. Efforts by General Dallaire to broker peace were unsuccessful, and most of UNAMIR's Rwandan staff were killed in the early days of the genocide, severely limiting its ability to operate. Its most significant contribution was to provide refuge for thousands of Tutsi and moderate Hutu at its headquarters in Amahoro Stadium, as well as other secure UN sites, and to assist with the evacuation of foreign nationals. The Belgian Government, which had been one of the largest troop contributors to UNAMIR, pulled out in mid-April following the deaths of its ten soldiers protecting Prime Minister Uwilingiyimana. In mid-May the UN conceded that "acts of genocide may have been committed", and agreed to reinforcement. The new soldiers started arriving in June, and following the end of the genocide in July they stayed to maintain security and stability, until the termination of their mission in 1996. Fifteen UN soldiers were killed in Rwanda between April and July 1994, including the ten Belgians, three Ghanaians, a Uruguayan, and the Senegalese Mbaye Diagne, who risked his life repeatedly to save Rwandans.
## Aftermath
The victorious RPF assumed control of Rwanda following the genocide, and as of 2021 remain the dominant political force in the country. They formed a government loosely based on the Arusha Accords, but Habyarimana's party was outlawed and the RPF took over the government positions allocated to it in the accords. The military wing of the RPF was renamed as the Rwandan Patriotic Army (RPA) and became the national army. Paul Kagame assumed the dual roles of Vice President of Rwanda and Minister of Defence; Pasteur Bizimungu, a Hutu who had been a civil servant under Habyarimana before fleeing to join the RPF, was appointed president. Bizimungu and his cabinet had some control over domestic affairs but Kagame remained commander-in-chief of the army and de facto ruler of the country.
### Domestic situation
The civil war severely disrupted Rwanda's formal economy, bringing coffee and tea cultivation to a halt, decimating tourism, diminishing food production, and diverting government spending towards defence and away from other priorities. Rwanda's infrastructure and economy suffered further during the genocide. Many buildings were uninhabitable and the former regime had taken all currency and moveable assets when they fled the country. Human resources were severely depleted, with over 40% of the population having fled or been killed. Women constituted about 70 percent of the population, as many men had fled or been killed. Outside of civilian deaths, 7,500 combatants had been killed during the war. Many of the remainder were traumatised: most had lost relatives, witnessed killings, or participated in the genocide. The long-term effects of war rape included social isolation, sexually transmitted diseases, and unwanted pregnancies and babies, with some women resorting to self-induced abortions. The army, led by Paul Kagame, maintained law and order while the government began the work of rebuilding the country's institutions and infrastructure.
Non-governmental organisations began to move back into the country but the international community did not provide significant assistance to the new regime. Most international aid was routed to the refugee camps which had formed in Zaire following the exodus of Hutu from Rwanda. Kagame strove to portray the government as inclusive and not Tutsi-dominated. He directed removal of ethnicity from citizens' national identity cards and the government began a policy of downplaying the distinctions between Hutu, Tutsi, and Twa.
During the genocide and in the months following the RPF victory, RPF soldiers killed many people they accused of participating in or supporting the genocide. The scale, scope, and source of ultimate responsibility of these killings is disputed. Human Rights Watch, as well as scholars such as Prunier, allege that the death toll might be as high as 100,000, and that Kagame and the RPF elite either tolerated or organised the killings. In an interview with Stephen Kinzer, Kagame acknowledged that killings had occurred but said they were carried out by rogue soldiers and had been impossible to control. The killings gained international attention after the 1995 Kibeho massacre, in which soldiers opened fire on a camp for internally displaced persons in Butare Province. Australian soldiers serving as part of UNAMIR estimated at least 4,000 people were killed; the Rwandan Government claimed the death toll was 338.
Paul Kagame took over the presidency from Pasteur Bizimungu in 2000 and began a large-scale national development drive, launching a programme to develop Rwanda as a middle income country by 2020. The country began developing strongly on key indicators, including the human development index, health care, and education. Annual growth between 2004 and 2010 averaged 8% per year, the poverty rate reduced from 57% to 45% between 2006 and 2011, and life expectancy rose from 46.6 years in 2000 to 64.3 years in 2018. A period of reconciliation began as well as the establishment of courts for trying genocide suspects. These included the International Criminal Tribunal for Rwanda (ICTR) and Gacaca, a traditional village court system reintroduced to handle the large caseloads involved. As women represented a larger share of the post-war population and were not as frequently implicated in the genocide, they were entrusted by the regime with more tasks of reconciliation and reconstruction.
### Refugee crisis, insurgency, and Congo wars
Following the RPF victory, approximately two million Hutu fled to refugee camps in neighbouring countries, particularly Zaire, fearing RPF reprisals for the Rwandan genocide. The camps were crowded and squalid and tens of thousands of refugees died in disease epidemics, including cholera and dysentery. They were set up by the United Nations High Commissioner for Refugees (UNHCR) but were effectively controlled by the army and government of the former Hutu regime, including many leaders of the genocide, who began to rearm in a bid to return to power in Rwanda.
By late 1996, Hutu militants from the camps were launching regular cross-border incursions and the RPF-led Rwandan Government launched a counter-offensive. Rwanda provided troops and military training to the Banyamulenge, a Tutsi group in the Zairian South Kivu province, helping them to defeat Zairian security forces. Rwandan forces, the Banyamulenge, and other Zairian Tutsi, then attacked the refugee camps, targeting the Hutu militia. These attacks caused hundreds of thousands of refugees to flee; many returned to Rwanda despite the presence of the RPF, while others ventured further west into Zaire. Under the cover of the AFDL rebellion, the RPA relentlessly pursued the refugees fleeing further into Zaire, killing an estimated 232,000 people. The defeated forces of the former regime continued a cross-border insurgency campaign, supported initially by the predominantly Hutu population of Rwanda's north-western provinces. By 1999 a programme of propaganda and Hutu integration into the national army succeeded in bringing the Hutu to the government side and the insurgency was defeated.
As well as dismantling the refugee camps, Kagame began planning a war to remove Mobutu. Mobutu had supported the genocidaires based in the camps and was also accused of allowing attacks on Tutsi people within Zaire. The Rwandan and Ugandan governments supported an alliance of four rebel groups headed by Laurent-Désiré Kabila, which began waging the First Congo War. The rebels quickly took control of North and South Kivu provinces and then advanced west, gaining territory from the poorly organised and demotivated Zairian army with little fighting. They controlled the whole country by May 1997. Mobutu fled into exile and the country was renamed the Democratic Republic of the Congo (DRC). Rwanda fell out with the new Congolese regime in 1998 and Kagame supported a fresh rebellion, leading to the Second Congo War. This lasted until 2003 and caused millions of deaths and severe damage. A 2010 United Nations report accused the Rwandan Patriotic Army of wide-scale human rights violations and crimes against humanity during the two Congo wars, charges denied by the Rwandan Government.
In 2015 the Rwandan government paid reparations to Uganda for damage inflicted during the civil war to its border regions.
|
32,648,556 |
Kenneth R. Shadrick
| 1,144,482,334 |
United States Army soldier
|
[
"1931 births",
"1950 deaths",
"American military personnel killed in the Korean War",
"Military personnel from West Virginia",
"People from Harlan County, Kentucky",
"People from Pineville, West Virginia",
"United States Army personnel of the Korean War",
"United States Army soldiers"
] |
Kenneth R. Shadrick (August 4, 1931 – July 5, 1950) was a United States Army soldier who was killed at the onset of the Korean War. He was widely but incorrectly reported as the first American soldier killed in action in the war.
Shadrick was born in Harlan County, Kentucky, one of 10 children. After dropping out of high school in 1948, he joined the U.S. Army, and spent a year of service in Japan before being dispatched to South Korea at the onset of the Korean War in 1950 along with his unit, the 34th Infantry Regiment, 24th Infantry Division. During a patrol, Shadrick was killed by the machine gun of a North Korean T-34 tank, and his body was taken to an outpost where journalist Marguerite Higgins was covering the war. Higgins later reported that he was the first soldier killed in the war, a claim that was repeated in media across the United States. His life was widely profiled, and his funeral drew hundreds of people.
His death is now believed to have occurred after the first American combat fatalities in the Battle of Osan. Since the exact identities of the other soldiers killed before Shadrick remain unclear, he is still often incorrectly cited as the first U.S. soldier killed in the war.
## Early life and education
Shadrick was born on August 4, 1931, in Harlan County, Kentucky. He was the third of 10 children born to Lucille Shadrick and Theodore Shadrick, a coal miner. Growing up during the Great Depression, Kenneth Shadrick moved with his family to Wyoming, West Virginia, then to an outlying town called Skin Fork, 20 miles (32 km) away, as his father was looking for coal mining jobs. Shadrick was described by his family as "an avid reader" throughout his childhood, who had a variety of interests, including Westerns and magazines. He also enjoyed riding his bicycle and, occasionally, hunting.
Shadrick enrolled in Pineville High School in 1947 and received top marks in his classes. During his sophomore year in 1948, he developed an interest in football and made the school's team, though he was small for his age. The team could not afford uniforms, and Shadrick's father gave him five dollars to buy one, but it was stolen from his locker in October 1948. The incident upset Shadrick so much he dropped out of school, reportedly refusing to return from that day forward. One month later, he and a friend enlisted in the U.S. Army. Shadrick's father would later refer to the stolen school uniform as the reason Shadrick enlisted in the military, and said he felt it indirectly caused his son's death.
On November 10, 1948, Shadrick left for basic combat training at Fort Knox, Kentucky. As he was 17 years old, Shadrick had to convince his parents to sign papers allowing him to enlist. Shadrick completed this training in February 1949, and sailed for Japan to join the 34th Infantry Regiment, 24th Infantry Division, for post–World War II occupation duties. Shadrick spent a year on Kyushu island with the division. According to his family, Shadrick enjoyed his tour in Japan at first, but by June 1950 he was growing tired of the country, and indicated in letters he was feeling depressed.
## Career
On the night of June 25, 1950, 10 divisions of the North Korean army launched a full-scale invasion of South Korea. Advancing with 89,000 men in six columns, the North Koreans caught the disorganized, ill-equipped, and unprepared South Korean army by surprise and routed them. North Korean forces destroyed isolated resistance, pushing steadily down the peninsula against the opposing 38,000 front-line South Korean men. The majority of the South Korean forces retreated in the face of the invasion, and by June 28 the North Koreans had captured the southern capital, Seoul, and forced the government and its shattered forces to withdraw southward.
Meanwhile, the United Nations Security Council voted to send assistance to the collapsing country and U.S. President Harry S. Truman ordered ground troops to the country. U.S. forces in the Far East had been steadily decreasing since the end of World War II, five years earlier, and Shadrick's division was the closest to the warzone. Under the command of Major General William F. Dean, the division was understrength and most of its equipment was antiquated due to reductions in military spending. In spite of these deficiencies the division was ordered into South Korea, tasked with taking the initial shock of the North Korean advances until the rest of the Eighth United States Army could arrive and establish a defense.
Dean's plan was to airlift one battalion of the 24th Infantry Division into South Korea via C-54 Skymaster transport aircraft and to block advancing North Korean forces while the remainder of the division was transported on ships. The 21st Infantry Regiment was identified as the most combat-ready of the 24th Infantry Division's three regiments, and the 21st Infantry's 1st Battalion was selected because its commander, Lieutenant Colonel Charles B. Smith, was the most experienced, having commanded a battalion at the Battle of Guadalcanal during World War II. On July 5, Task Force Smith engaged North Korean forces at the Battle of Osan, delaying 5,000 North Korean infantry for seven hours before being defeated. The 540-man force suffered 60 killed, 21 wounded and 82 captured, a total of 163 casualties and a very heavy casualty rate. In the chaos of the retreat, most of the bodies were left behind, and the fates of many of the missing were unknown for several weeks.
During that time, the 34th Infantry Regiment set up a line between the villages of Pyongtaek and Ansong, 10 miles (16 km) south of Osan, to fight the next delaying action against the advancing North Korean forces. The 34th Infantry Regiment was similarly unprepared for a fight, with few soldiers experienced in combat. At this time, Shadrick was part of an M9A1 Bazooka team with 1st Battalion, 34th Infantry.
## Death
About 90 minutes after Task Force Smith began its withdrawal from the Battle of Osan, the 34th Infantry sent Shadrick as part of a small scouting force northward to the village of Sojong-ni, 5 miles (8.0 km) south of Osan. The small force, under the command of Lieutenant Charles E. Payne and consisting mostly of bazooka teams and infantry, halted at a graveyard in the village, where they spotted a North Korean T-34/85 tank on a road to the north. Shadrick and the other bazooka operators began firing on the tank from long-range concealed positions at around 16:00. With them was Sergeant Charles R. Turnbull, a US Army combat photographer. Turnbull asked Shadrick to time a bazooka shot so its flash could be caught in Turnbull's photograph, and Shadrick complied. Shadrick made the shot and paused, then rose from his concealed position to see if he had successfully hit the tank, exposing himself. The T-34 returned fire with its machine gun, and two bullets struck Shadrick in the chest and arm. Shadrick died of his wounds moments later.
Payne's patrol retreated without destroying the tank, taking Shadrick's body with them as the only casualty. The force returned to the 34th Infantry Command post in Pyongtaek to report to Brigadier General George B. Barth and Colonel Harold B. Ayres, who were commanding the troops in the town. Also present was Marguerite Higgins, a war correspondent for the New York Herald Tribune. Higgins subsequently reported Shadrick's death, referring to him as the first American killed in the Korean War.
Shadrick's family was informed of his death by a neighbor who had heard his name on a radio broadcast, and the news from the military came via telegraph several days later. The family was immediately inundated by reporters and local well-wishers. Shadrick's body was returned to the United States, and on June 17, 1951, a funeral attended by hundreds of local residents was held in Beckley, West Virginia. The service was set to coincide with the anniversary of the start of the war. His flag-draped casket was escorted down the streets of the town on a horse-drawn carriage, and he was buried at the American Legion cemetery in the town.
## Legacy
Higgins' account of Shadrick's death was widely republished. Time magazine published a story about Shadrick's death on July 17, 1950, citing Shadrick as the first "reported" death in Korea. For up to a year, Life magazine reported Shadrick as the first US soldier to die in the war, and the claim has often been repeated, including as recently as July 4, 2011, in the local newspaper in Huntington, West Virginia, The Herald-Dispatch.
American Legion Post 133 erected a monument to Shadrick at the Wyoming County courthouse. The monument cites Shadrick's unit, date of death, and notes him as the "first casualty of the Korean conflict" with an epitaph that reads, "He stands first in the unbroken line of patriots who have dared to die that freedom might live, grow and increase its blessings. Freedom lives and through it he lives – in a way that humbles the undertakings of most men." It is one of several memorials to local residents who served in the military.
Subsequent publications have shed doubt on the accuracy of the claims of Shadrick's distinction. Eyewitness accounts at the Battle of Osan point to the first death as a machine gunner in the 21st Infantry Regiment, who had been killed at around 08:30, eight hours before Shadrick's death. This soldier was killed when a different T-34 tank was disabled at the battle and one of its crew members attacked nearby troops with a PPSh-41 "Burp Gun". In the confusion of the battle, many of the wounded and dead troops were left behind by retreating American troops, and a large part of the force was also captured; consequently, the identity of this first combat fatality remains a mystery.
## Awards and decorations
Shadrick's awards and decorations include:
|
2,685,664 |
Sonic the Hedgehog (2006 video game)
| 1,173,296,054 |
Platform game by Sega
|
[
"2006 video games",
"3D platform games",
"Action-adventure games",
"Apocalyptic video games",
"Cancelled Wii games",
"Cancelled Windows games",
"Cooperative video games",
"Genocide in fiction",
"Multiplayer and single-player video games",
"PlayStation 3 games",
"Post-apocalyptic video games",
"Sega video games",
"Sonic Team games",
"Sonic the Hedgehog video games",
"Video game reboots",
"Video games about time travel",
"Video games developed in Japan",
"Video games scored by Hideaki Kobayashi",
"Video games scored by Mariko Nanba",
"Video games scored by Takahito Eguchi",
"Video games scored by Tomoya Ohtani",
"Video games set in Venice",
"Video games using Havok",
"Xbox 360 games"
] |
Sonic the Hedgehog (commonly referred to as Sonic '06) is a 2006 platform game developed by Sonic Team and published by Sega. It was produced in commemoration of the Sonic series' 15th anniversary and intended as a reboot for the seventh-generation video game consoles. Players control Sonic, Shadow, and the new character Silver, who battle Solaris, an ancient evil pursued by Doctor Eggman. Each playable character has his own campaign and abilities, and must complete levels, explore hub worlds and fight bosses to advance the story. In multiplayer modes, players can work cooperatively to collect Chaos Emeralds or race to the end of a level.
Development began in 2004, led by Sonic co-creator Yuji Naka. Sonic Team sought to create an appealing game in the vein of superhero films such as Batman Begins (2005), hoping it would advance the series with a realistic tone and multiple gameplay styles. Problems developed after Naka resigned to form his own company, Prope, and the team split to work on the Wii game Sonic and the Secret Rings (2007). As a result, Sonic the Hedgehog was rushed for release in time for the December holiday season. It was released for Xbox 360 in November 2006 and for PlayStation 3 the following month. Versions for Wii and Windows were canceled. Downloadable content featuring new single-player modes was released in 2007.
Sonic the Hedgehog received praise in prerelease showings, as journalists believed it could return to the series' roots after years of mixed reviews. However, it received negative reviews, with criticism for its loading times, camera system, story, voice acting, glitches, and controls. It is widely considered the worst Sonic game and led to the series' direction being rethought; subsequent games ignored its tone and most characters. In 2010, Sega delisted Sonic the Hedgehog from retailers, following its decision to remove all Sonic games with below-average Metacritic scores to increase the value of the franchise.
## Gameplay
Sonic the Hedgehog is a 3D platformer with action-adventure and role-playing elements. Like Sonic Adventure, the single-player mode has the player navigate open-ended hub worlds where they can converse with townspeople and perform missions to advance the story. The main gameplay takes place in linear levels that become accessible as the game progresses. The main playable characters are three hedgehogs: Sonic, Shadow, and Silver, who feature in separate campaigns titled "episodes". A bonus "Last Episode", which involves all three hedgehogs and concludes the storyline, is unlocked upon completing the first three.
Sonic's story focuses on the speed-based platforming seen in previous Sonic games, with some sections having him run at full speed while dodging obstacles or riding a snowboard. Another character, Princess Elise, must be escorted in some stages, and she can use a special barrier to guard Sonic. Shadow's sections are similarly speedy, albeit more combat-oriented, with some segments having him ride vehicles. In contrast, Silver's levels are slower and revolve around his use of telekinesis to defeat enemies and solve puzzles. In certain areas, control is switched to one of several friend characters, with their own abilities.
Although each character traverses the same levels, their unique abilities allow the player to access different areas of each stage and prevent them from accessing certain items. Scattered through each level are golden rings, which serve as a form of health. The rings can protect a character from a single hit by an enemy or obstacle, at which point they will be scattered and blink before disappearing. The game begins with Sonic, Shadow, and Silver each assigned a limited number of lives; a life is lost whenever a character with no rings in their possession is hit by an enemy or obstacle, or encounters another fatal hazard. The game ends when the player exhausts the characters' lives. Every few levels, players will encounter a boss stage; to proceed, they must defeat the boss by depleting its health meter.
Upon completion of a level or mission, players are given a grade depending on their performance, with an "S" rank being the best and a "D" rank being the worst. Players are given money for completing missions; more money is given to higher ranks. This money can be used to buy upgrades for the player character. Certain upgrades are required to complete the game. The game also features two multiplayer modes: "Tag", a cooperative mode where two players must work together to clear levels and collect Chaos Emeralds, and "Battle", a player versus player mode where two players race against each other.
## Plot
Doctor Eggman kidnaps Princess Elise of Soleanna in the hopes of harnessing the Flames of Disaster, a destructive power sealed within her. Aided by his friends Tails and Knuckles, Sonic works to protect Elise from Eggman. Meanwhile, Shadow, his fellow agent Rouge, and Eggman accidentally release an evil spirit, Mephiles. The spirit transports the agent duo to a post-apocalyptic future ravaged by a demonic monster, Iblis. When Mephiles meets survivors Silver and Blaze, he fools them into thinking Sonic is the cause of the destruction and sends them to the present to kill him.
Throughout the story, Sonic and friends travel between the past, present, and future in their efforts to stop Mephiles and Iblis and protect Elise from Doctor Eggman. Though at first Silver stalks Sonic and impedes his attempts to save Elise, Shadow reveals to him that Sonic is not the cause of his world's suffering but rather Mephiles, who is trying to change the past for his own evil purposes. They travel 10 years in the past and learn that Mephiles seeks to bond with Iblis, who was sealed within Elise as a child, as they are the two halves of Soleanna's omnipotent god, Solaris. Mephiles eventually succeeds after killing Sonic to make Elise cry over his death, releasing her seal on Iblis and merging with him with the use of Chaos Emeralds to become Solaris, who then attempts to consume time itself. The heroes and Elise respectively collect and use the power of the Chaos Emeralds to revive Sonic, and he, Shadow, and Silver transform into their super forms to defeat Solaris. Sonic and Elise are brought to the past and extinguish Solaris' flame, removing the god from existence and preventing the events from ever occurring. Despite this, Sonic and Elise show faint signs of recalling their encounter afterwards.
## Development
After finishing Billy Hatcher and the Giant Egg (2003), Sonic Team began to plan its next project. Among the ideas the team was considering was a game with a realistic tone and an advanced physics engine. When Sega reassigned the team to start working on a new game in the bestselling Sonic series, they decided to retain the realistic approach. Sonic the Hedgehog was conceived for sixth-generation consoles, but Sonic Team realized its release would coincide with the series' 15th anniversary and decided to develop it for seventh-generation consoles such as the PlayStation 3 and Xbox 360. Series co-creator and team lead Yuji Naka wanted the first Sonic game for seventh-generation systems to reach a wide audience. Naka noted the success of superhero films such as Spider-Man 2 (2004) and Batman Begins (2005): "When Marvel or DC Comics turn their characters into films, they are thinking of them as blockbusters, huge hits, and that's what we were trying to emulate with Sonic." Development on Sonic the Hedgehog began in late 2004. Sonic Team used the same title as the original 1991 Sonic the Hedgehog to indicate that it would be a major advance from the previous games. Sources commonly describe Sonic the Hedgehog as an attempted reboot of the franchise.
The Havok physics engine, previously used in their PlayStation 2 game Astro Boy (2004), allowed Sonic Team to create expansive levels previously impossible on earlier sixth-generation consoles and experiment with multiple play-styles. The engine also enabled Sonic Team to experiment with aspects such as global illumination, a night-and-day system, and giving Sonic new abilities like using ropes to leap into the air. Director Shun Nakamura demonstrated the engine during their stage shows at the Tokyo Game Show (TGS) in 2005. As the hardware of the Xbox 360 and PlayStation 3 was more powerful than that of the prior generation's consoles, the design team was able to create a more realistic setting than those of previous Sonic games. Sonic and Doctor Eggman were redesigned to better suit this updated environment: Sonic was made taller, with longer quills, and Eggman was made slimmer and given a more realistic appearance. Nakamura and producer Masahiro Kumono reasoned this was because the characters would be interacting with more humans, and felt it would make the game more appealing to older players. At one point, Sonic Team considered giving Sonic realistic fur and rubber textures.
While Sonic Team had a major focus on the visuals, they considered their primary challenge creating a game that was as appealing as the original Sega Genesis Sonic games. They felt Sonic Heroes (2003) and Shadow the Hedgehog (2005) had veered into different directions and wanted to return the series to its speed-based roots in new ways. For example, they wanted to include multiple paths in levels, like the Genesis games had, a goal the realistic environments helped achieve. Sonic Team sought to "aggressively" address problems with the virtual camera system from earlier Sonic games, about which they had received many complaints.
Silver the Hedgehog's gameplay style was born out of Sonic Team's desire to take advantage of Havok's realistic physics capabilities. The first design concept for Silver's character was an orange mink; he attained his final hedgehog look after over 50 design iterations. In designing Shadow's gameplay, the developers abandoned the firearms previously used in Shadow the Hedgehog in favor of combat elements to differentiate him from the other characters. Shadow's gameplay was further fleshed out with the addition of vehicles, each with its own physics engine. The game also features several CGI cutscenes produced by Blur Studio. Animation supervisor Leo Santos said Blur faced challenges animating the opening scene due to the placement of Sonic's mouth.
As development progressed, Sonic Team faced serious problems. In March 2006, Naka resigned as head of Sonic Team to form his own company, Prope. Naka has said he resigned because he did not want to continue making Sonic games and instead wished to focus on original properties. With his departure, "the heart and soul of Sonic" was gone, according to former Sega of America CEO Tom Kalinske. Sonic the Hedgehog was originally intended for release on all major seventh-generation consoles as well as Windows, but Sega was presented with development kits for Nintendo's less powerful Wii console. Sega believed porting the game to Wii would take too long, and so conceived a Sonic game that would use the motion detection function of its controller.
Therefore, the team was split in two: Nakamura led one team to finish Sonic the Hedgehog for Xbox 360 and PlayStation 3, while producer Yojiro Ogawa led the other to begin work on Sonic and the Secret Rings for the Wii. The split left an unusually small team to work on Sonic the Hedgehog. Sega pressured the team to finish the game in time for the 2006 holiday shopping season, so with the deadline quickly approaching, Sonic Team rushed the final stages of development, ignoring known control problems and bug reports from Sega's quality assurance department. In retrospect, Ogawa noted that the final period proved to be a large challenge for the team. Not only was the Xbox 360 release imminent, but the PlayStation 3 launch was scheduled not long afterwards. This put tremendous pressure on the team to develop for both systems. Producer Takashi Iizuka similarly recalled, "we didn't have any time to polish and we were just churning out content as quick as we could."
### Audio
The English cast of the Sonic X anime series reprised their voice roles for Sonic the Hedgehog, and actress Lacey Chabert supplied the voice of series newcomer and damsel in distress Princess Elise. The score for the game was primarily composed by Tomoya Ohtani along with Hideaki Kobayashi, Mariko Nanba, Taihei Sato, and Takahito Eguchi. It was the first Sonic game that Ohtani, who had previously contributed to Sonic Heroes and Shadow the Hedgehog, worked on as sound director. The main theme for the game, the fantasy-rap song "His World", was performed by Ali Tabatabaee and Matty Lewis of the band Zebrahead. Crush 40 performed Shadow's theme, "All Hail Shadow", while vocalist Bentley Jones (previously known as Lee Brotherton) sang Silver's theme, "Dreams of an Absolution". R&B artist Akon performed a remix of the Dreams Come True song "Sweet Sweet Sweet", a song previously used as the ending theme to Sonic the Hedgehog 2 (1992). Donna De Lory sang Elise's theme, "My Destiny".
Because Sonic the Hedgehog was the first Sonic game for seventh-generation consoles, Ohtani "aimed to emphasise that it was an epic next-generation title". Two soundtrack albums were released on January 10, 2007, under Sega's Wave Master label: Sonic the Hedgehog Vocal Traxx: Several Wills and Sonic the Hedgehog Original Soundtrack. Vocal Traxx: Several Wills contains seven songs; four are from the game, while the remaining three are remixes, including a version of "His World" performed by Crush 40. Original Soundtrack includes all 93 tracks featured in Sonic the Hedgehog, spanning three discs.
## Release
Sonic the Hedgehog was announced in a closed-doors presentation at the Electronic Entertainment Expo (E3) in May 2005. Later that year, at TGS in September, Naka revealed the game's title and said its release would correspond with the series' 15th anniversary. A demo version of the game was playable at E3 2006. A second demo, featuring a short section of Sonic's gameplay, was released via Xbox Live in September 2006. Sega released several packages of desktop wallpaper featuring characters from the game, and American publisher Prima Games published an official strategy guide, written by Fletcher Black. Sega also made a deal with Microsoft to run advertisements for the game in Windows Live Messenger.
The Xbox 360 version of Sonic the Hedgehog was released in North America on November 14, 2006, followed by a European release on November 24. Both versions were released in Japan on December 21. The PlayStation 3 version was released in North America on January 30, 2007, and in Europe on March 23. The game is often referred to by critics and fans with colloquial terms that reference its year of release, such as Sonic 2006 or Sonic '06.
In 2007, Sega released several packages of downloadable content that added features to single-player gameplay. These include a more difficult single-player mode and a continuous battle mode with all of the game's bosses back-to-back. One downloadable addition, "Team Attack Amigo" mode, sends players through a multitude of levels, changing to a different character every two or three levels and culminating in a boss fight. The PlayStation 3 version was delayed to allow more time to incorporate the downloadable content, and thus launched alongside it.
The game was digitally rereleased via the Xbox Live Marketplace on April 15, 2010. The following October, various Sonic games with average or below average scores on the review aggregator website Metacritic, including Sonic the Hedgehog, were delisted from retailers. Sega reasoned this was to avoid confusing customers and increase the value of the brand, following positive prerelease responses to Sonic the Hedgehog 4: Episode I and Sonic Colors (both 2010). Sonic the Hedgehog was relisted on the Xbox 360 Marketplace in May 2022.
## Reception
Sonic the Hedgehog was well-received during prerelease showings. Reception to prior games Sonic Heroes and Shadow the Hedgehog had been mixed; after a number of well-received showings and demos, some felt Sonic the Hedgehog could be a return to the series' roots. GameSpot said the game "showed a considerable amount of promise" after playing a demo at E3 2006, and GameSpy praised its graphics and environments. In 2008 GamesRadar said that it had looked "amazing" before its release.
At the time of release, the game received widespread negative reviews. Metacritic classified both versions' reception as "generally unfavorable". Sega reported that the game sold strongly, with 870,000 units sold in the United States and Europe within four months. The Xbox 360 version was branded under the Platinum Hits budget line.
Critics were divided on the game's presentation. IGN called its graphics and audio "decent" and felt its interface and menu system worked well but lacked polish, but GameSpot said the graphics, while colorful, were bland and only a small improvement over sixth-generation games, a sentiment echoed by 1UP.com. Game Informer and Eurogamer noted several graphical glitches. Eurogamer also criticized the decision to continue the Sonic Adventure style of gameplay, believing that Sonic Team had learned nothing from the criticisms of past games.
Reviewers singled out the game's camera system, loading times, controls, level designs, and glitches for criticism. GameSpot said the level design was worsened by the frustrating camera system, and Game Informer criticized the game's high difficulty, citing the camera as the cause of most deaths. Some reviewers were unhappy that the majority of the game was not spent playing as Sonic; playing as Tails, GameSpot wrote, made a level boring. Eurogamer offered similar criticism, finding that the supporting cast annoyed rather than fleshed out the game; they considered the camera system the worst they had ever seen in a video game. On the positive side, 1UP felt that despite the control and level design problems, the game still played like a Sonic game.
The plot was criticized as confusing and inappropriately dark. GamesRadar considered it overwrought and "conceptually challenged", and Eurogamer found its voice acting painful and its cutscenes cringeworthy. Some reviewers unfavorably compared the story to that of an anime or Final Fantasy. The romance between Sonic and the human Princess Elise was especially criticized; for GamesTM, it marked the point "the [Sonic] series had veered off into absolute nonsense."
"This ... is a mess from top to bottom", wrote GameSpot, that "only the most blindly reverent Sonic the Hedgehog fan could possibly squeeze any enjoyment out of". IGN said that the game had some redeeming qualities, with brief segments of gameplay that demonstrated how a next-generation Sonic game could work, but found it "rips them away as soon as it shows them" and concluded that the game failed to reinvent the series. Eurogamer believed that Sonic the Hedgehog's mistakes would have been noticed even if the game had been released in 1996.
Despite the mostly negative reception, Game Informer and Dave Halverson of Play Magazine defended the game. Game Informer described it as ambitious and praised the graphics, story, amount of content, and replay value, but believed only Sonic fans would enjoy the game. Halverson initially gave the Xbox 360 version a 9.5/10, praising each character's controls and abilities and calling it the best 3D Sonic game yet. In the following issue, Halverson reassessed it as 8.5/10, writing that he had been told that the load times and glitches in his review copy would not be in the final version of the game. In a later review of the PlayStation 3 version, Halverson was frustrated that the problems had still not been corrected and that the performance was worse despite the extra development time; Halverson gave this version a 5.5/10. The A.V. Club said in 2016 that despite the game's poor quality, the soundtrack has some "genuine rippers".
## Legacy
GameTrailers and GamesRadar considered Sonic the Hedgehog one of the most disappointing games of 2006. GamesTM singled out the game when it ranked the Sonic franchise at the top of their list of "Video Game Franchises That Lost Their Way". The A.V. Club, Kotaku, Game Informer, and USgamer called the game the worst in the Sonic series, and the staff of GamesRadar named it among the worst video games of all time. The game remains popular for "Let's Play" walkthroughs, with players showing off its glitches. In 2019, a video gained popularity in which a group of voice actors dub over the game's cutscenes in a single take, creating a nonsensical, improvisational storyline about video game culture. The official Sonic Twitter account also mocks the game.
Sonic the Hedgehog's critical failure had a lasting effect on the franchise. Hardcore Gamer wrote that following it, "Sonic Team struggled to land on a consistent vision for Sonic, releasing game-after-game with wildly different concepts." In particular, Sonic Team sought to avoid its serious tone, beginning with the next main Sonic game, Sonic Unleashed (2008). With Sonic Colors, The A.V. Club wrote that "the series rediscovered its strength for whimsical tales with light tones."
Sonic the Hedgehog introduced Silver the Hedgehog, Princess Elise, Mephiles, and Iblis to the franchise; most have made few appearances since. Silver is a playable character in Sonic Rivals (2006) and its sequel, in Sonic Riders: Zero Gravity (2008), and in Mario & Sonic at the Olympic Winter Games and its sequels, and is a minor character in the Nintendo DS version of Sonic Colors (2010) and Sonic Forces (2017). He also appeared in the Sonic the Hedgehog comic book series published by Archie Comics. The main theme of Sonic the Hedgehog and the theme of Sonic, "His World", was sampled in Drake's 2017 song "KMT".
To celebrate the Sonic franchise's 20th anniversary in 2011, Sega released Sonic Generations, which remade aspects of past Sonic games. The PlayStation 3, Xbox 360, and Windows versions feature a remake of Sonic the Hedgehog's "Crisis City" level, and every version, including the Nintendo 3DS version, includes a reimagined version of the boss battle with Silver. The decision to include Sonic the Hedgehog stages and bosses in Sonic Generations drew criticism from reviewers and fans of the series; Jim Sterling of Destructoid referred to the Silver boss fight as the "catch" of the otherwise high-quality game.
In 2015, a fan group, Gistix, began developing a remake for Windows using the Unity engine. A demo was released in January 2017, and was positively received by journalists. A second demo was released in late 2017, which Eurogamer called ambitious. A second team of fans, led by ChaosX, began developing a separate PC remake in Unity, Sonic P-06, releasing multiple demos from 2019 onward.
|
1,829,382 |
Great Gold Robbery
| 1,155,731,331 |
1855 British train heist
|
[
"1855 crimes in Europe",
"1855 crimes in the United Kingdom",
"1855 in England",
"1855 in France",
"1855 in rail transport",
"1855 in the United Kingdom",
"May 1855 events",
"Robberies in England",
"Train robberies"
] |
The Great Gold Robbery took place on the night of 15 May 1855, when a routine shipment of three boxes of gold bullion and coins was stolen from the guard's van of the service between London Bridge station and Folkestone while it was being shipped to Paris. The robbers comprised four men, two of whom—William Tester and James Burgess—were employees of the South Eastern Railway (SER), the company that ran the rail service. They were joined by the planners of the crime: Edward Agar, a career criminal, and William Pierce, a former employee of the SER who had been dismissed for being a gambler.
During transit, the gold was held in "railway safes", which needed two keys to open. The men took wax impressions of the keys and made their own copies. When they knew a shipment was taking place, Tester ensured Burgess was on guard duty, and Agar hid in the guard's van. They emptied the safes of 224 pounds (102 kg) of gold, valued at the time at £12,000, then left the train at Dover. The theft was not discovered until the safes arrived in Paris. The police and railway authorities had no clues as to who had undertaken the theft, and arguments ensued as to whether the gold had been stolen in England, on the ship crossing the English Channel, or on the French leg of the journey.
When Agar was arrested for another crime, he asked Pierce to provide Fanny Kay—his former girlfriend—and their child with funds. Pierce agreed and then reneged. In need of money, Kay went to the governor of Newgate Prison and told him who had undertaken the theft. Agar was questioned, admitted his guilt and testified as a witness. Pierce, Tester and Burgess were all arrested, tried and found guilty of the theft. Pierce received a sentence of two years' hard labour in England; Tester and Burgess were sentenced to penal transportation for 14 years.
The crime was the subject of a television play in 1960, with Colin Blakely as Pierce. The Great Train Robbery, a novel by the writer and director Michael Crichton, was published in 1975. Crichton adapted his work into a feature film, The First Great Train Robbery, with Sean Connery portraying Pierce.
## Background
### South Eastern Railway
In 1855 the South Eastern Railway (SER) ran a boat train service between London Bridge station and Folkestone, on the south coast of England. It provided part of the main route to Paris at the time, with a railway steamer from Folkestone to Boulogne-sur-Mer, northern France, and a train to complete the journey direct to Paris. The service ran at 8:00 am, 11:30 am and 4:30 pm; there was also an overnight mail service that left at 8:30 pm and a tidal ferry service. Periodically the line would carry shipments of gold from bullion merchants in London to their counterparts in Paris; these could be several hundredweights at a time. The bullion would be packed into wooden boxes, bound with iron hoops and with a wax seal bearing the coat of arms of the bullion dealers in question: Abell & Co, Adam Spielmann & Co and Messrs Bult & Co. The agents who arranged the carriage of the gold, including collecting the bullion from the three companies and delivering it to London Bridge, were Chaplin & Co. The gold shipments always went on the 8:30 pm train. At Boulogne the bullion boxes were collected by the French agents Messageries impériales before being transported by train to the Gare du Nord and then to the Bank of France.
As a security measure, the boxes were weighed when they were loaded onto the guard's van, at Folkestone, on arrival at Boulogne and then again on arrival in Paris. The company's guard's vans were fitted with three patented "railway safes" provided by Chubb & Son. These were three feet (0.91 m) square and made of inch-thick (2.5 cm) steel. Access to the safe was through its hinged lid; the exterior had two keyholes, high on the front. Each of the three safes had the same pair of locks, meaning that only two keys were needed to open all three safes. Copies of the keys were held separately by SER officials at London Bridge and Folkestone, and the company ensured no individual could hold both keys at the same time.
### Participants
The originator of the plan was William Pierce, a 37-year-old former employee of the SER who had been dismissed from its service after it was found that he was a gambler; he worked as a ticket printer in a betting shop after leaving the company. According to the historian Donald Thomas, Pierce was "a large-faced and rather clumsy man with a taste for loud waistcoats and fancy trousers. ... he was described as 'imperfectly educated'. The turf was his true schooling".
The burglar and safe-cracker Edward Agar was just under 40 at the time of the robbery and had been a professional thief since he was 18. He returned to the UK in 1853 after ten years spent in Australia and the US. He had £3,000 in government consol bonds and lived in the fashionable area of Shepherd's Bush, London. According to Thomas, the robbery "grew almost entirely from the absolute self-confidence and mental ability" of Agar.
James Burgess was a married, thrifty and respectable man who had worked at the SER since it had started running the Folkestone line in 1843. He worked for the company as a guard, and was often in charge of the trains that carried the bullion. As with many railwaymen of the time, Burgess's wages had been reduced as the railway boom had passed.
Fanny Kay, aged 23 in 1855, was Agar's partner and lived with him at his house, Cambridge Villa, in Shepherd's Bush. She had previously been an attendant at Tunbridge railway station and had been introduced to Agar by Burgess in 1853. She had a child with Agar and moved in with him in December 1854.
William Tester was a well-educated man who wore a monocle and had a desire to improve his position; he was briefly employed after the robbery as a general manager for a Swedish railway company. He worked in the traffic department at London Bridge station as the assistant to the superintendent, which gave him access to information about the carriage of valuable goods and the guards' rota.
James Townshend Saward, also known as Jim (or Jem) the Penman, was a barrister and special pleader at the Inner Temple. His activities were described by contemporary sources as "planning and perfecting schemes of fraud, the bold audacity of which is equalled only by their success". He was the head of a forgery gang who had been practising cheque fraud for several years.
## Planning and preparation
After being dismissed from the SER, Pierce continued to drink in the pubs and beer shops around London Bridge in which railway employees also drank. Over time he picked up detailed information about the gold shipments to Paris, while he watched and planned. He concluded that a theft would only be possible if he obtained copies of the keys to the safe. He relayed his thoughts to Agar before the latter's visit to the US; at the time Agar declined to take part, telling his friend the scheme was impracticable. When Agar returned to Britain, the two discussed the possibility again and Agar said that "it would be impossible to do it unless an impression of the keys could be procured". Pierce said he thought he knew how that could be arranged. They realised that for any theft to succeed, they needed the assistance of a guard travelling in the van with the safes, and of an official who had access to the staff rotas and knew when the bullion shipments were to be made. It was at this stage that Pierce recruited Burgess and Tester to join the group.
In May 1854 Pierce and Agar travelled to Folkestone to watch the process involved at that end of the line, particularly the location and security surrounding the keys. They spent so long, and were so obvious, in their surveillance that they came to the notice of the municipal and railway police. As a result, Pierce returned to London and left Agar to watch alone. As part of his intelligence gathering, Agar drank in the Rose Inn, a public house near the pier, where railway staff also drank. The pair concluded that one of the keys was carried by the superintendent of the Folkestone end of the line; the other was locked in a cabinet at the railway offices on Folkestone pier.
One of the keys held at Folkestone was lost in July 1854 by Captain Mold of the steamship company. The SER sent the safes back to Chubb for the locks to be reconditioned and new keys issued. The clerk involved in corresponding with the company was Tester. By October, Chubb's work had been completed and the keys sent to the SER. Tester was able to smuggle them out of the office briefly, and met Pierce and Agar in a beer house on Tooley Street, London, where Agar made an impression of them in green wax. Tester was so nervous when he removed the keys that he brought two identical ones with him, rather than one for each lock; the plotters were still missing one of the keys. Agar, under the false name of E. E. Archer, used his own funds to send £200 of gold sovereigns on the SER line. The box of bullion, labelled "E. R. Archer, care of Mr. Ledger, or Mr. Chapman", was sent through to Folkestone for Agar to collect. He collected the package from the SER office and watched while the company's superintendent retrieved the safe key from a cupboard at the back of the room. Knowing where the keys were stored, the following weekend Agar and Pierce stayed in nearby Dover and walked to Folkestone. When the boat arrived from Boulogne, both members of the SER staff left the office to meet it, leaving the door unlocked. Pierce entered the office while Agar waited at the door on lookout. Pierce opened the cupboard and took the safe key to Agar, who made a wax impression. The key was replaced, and the two men returned to London via Dover.
Over the following months Pierce and Agar created rough keys from the impressions they had taken. In April and May 1855 Agar would travel along the Folkestone route when Burgess was on duty—seven or eight trips in total—and would hone the keys until they worked smoothly and without effort. Pierce and Agar then separately visited the Shot Tower, Lambeth, where they obtained two long hundredweight (224 lb; 102 kg) of lead shot. They also obtained courier bags, which could be strapped under a cloak, and carpet bags: these were to carry the lead shot onto the train, and the gold off it.
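The quantity of lead was no accident: a long hundredweight is 112 lb, so two come to 224 lb (about 102 kg), matching the weight of gold the thieves ultimately carried off. A quick arithmetic sketch (the conversion constants are standard; the figures come from the account above):

```python
LB_PER_LONG_CWT = 112         # one long (imperial) hundredweight in pounds
KG_PER_LB = 0.45359237        # international avoirdupois pound in kilograms

shot_lb = 2 * LB_PER_LONG_CWT  # lead shot bought at the Shot Tower, Lambeth
shot_kg = shot_lb * KG_PER_LB

print(shot_lb)                 # 224 lb -- the weight of gold later stolen
print(round(shot_kg))          # 102 kg
```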
By May 1855 the men were ready to carry out the robbery, and needed only to wait for a day when a gold shipment was taking place. Tester altered the staff rosters so that Burgess worked the evening mail service for the month, ensuring Agar would have access to the safe. A signal was arranged whereby either Agar or Pierce would wait outside London Bridge station every day; if a shipment was being made, Burgess would walk out of the station and wipe his face with a white handkerchief to alert them. At the same time, Tester would travel to Redhill railway station and await the first stop of the train. He would take one of the bags of gold and return to London.
## Robbery: 15 May 1855
On 15 May 1855, while Agar was waiting outside London Bridge station, Burgess came out of the station, wiped his face with his handkerchief and went back inside. Agar notified Pierce and the two men purchased first class tickets for the journey to Folkestone. They gave their bags to Burgess for storage in the guard's van during the journey and, just before the train was due to leave, Pierce took his seat in the cabin, and Agar slipped into the guard's van and hid in the corner, covered by Burgess's overalls.
As soon as the train departed the station, Agar began work. Only one of the locks was secured—an SER employee later reported that typically only one lock was used—and Agar soon had the bullion boxes out of the safe. Instead of opening the box from Abell & Co through the front, he used pincers to pull the rivets out of the iron bands that bound it, and drove wedges into the back of the box to open the lid without too much visible damage. He removed the gold bars, weighed them with the scales he was carrying in the bag, and put the same weight of lead shot back into the box. He nailed the bands back around the box, then applied a fresh wax seal to the front, using a die he had made himself rather than one of the official seals of the bullion dealers. He deduced—correctly—that on the poorly lit station at Folkestone, a cursory glance at the seals would not show any change. He managed to do this before the train arrived at Redhill, a 35-minute journey from London Bridge. When it arrived at Redhill, Agar again hid, while Tester was handed the bag containing some of the gold. Tester returned to the SER offices in London, as arranged, so that he could be seen by colleagues and give himself an alibi for later. Pierce took the opportunity to leave his carriage and join his confederates in the guard's van.
The other two boxes were examined after the train left Redhill. The box from Adam Spielmann & Co contained hundreds of American gold eagles worth \$10 each; these were weighed and lead shot was again left in their place before the box was resealed. The final box, from Messrs Bult & Co, contained more gold bars. These weighed more than the lead shot the men had remaining, so many of the ingots were left in the box to ensure there were no major differences in the weights of the boxes when they were later weighed. When they replaced the bands on the final box, it was damaged, but they repaired it as best they could and replaced it in the safe. The three men then cleared away the mess they had made—mostly splinters and drops of wax—and prepared themselves by strapping on the courier bags beneath their cloaks. When the train arrived in Folkestone at about 10:30 pm, Pierce and Agar hid in the van while the safes were removed by staff. They then left the van and entered the main part of the train, passing through until they reached first class, where they sat until the train reached Dover. There, Pierce and Agar alighted, collected their carpet bags full of gold from the guard's van, then went to a nearby hotel for supper. Agar threw the keys and tools into the sea before the two men returned to London on the 2:00 am train, which arrived at around 5:00 am. In total they had stolen 224 pounds (102 kg) of gold, valued at the time at £12,000.
## Immediate aftermath
When the steamer carrying the gold arrived in Boulogne, one of the crew saw that the bullion boxes were damaged, but, as staff at Folkestone had not mentioned it, saw no cause for concern. The boxes were weighed on arrival at Boulogne where the box from Abell was found to be 40 pounds (18 kg) lighter than it had been in London, whereas the other boxes both weighed more. They were transported to Paris, where they were weighed again, with the same results as at Boulogne. When they were opened the lead shot was found and the news relayed back to London.
When the working day began on 16 May, Pierce and Agar went to a money-changer's shop with some of the American eagles and obtained £213 for them; at a second such shop, they exchanged 200 of them to get a cheque for just over £203.
The three bullion merchants demanded recompense for the lost gold—most of Abell's gold was insured through the SER, but the company denied any culpability, claiming that the robbery must have taken place in France. The French authorities pointed out that as the two weighings in France matched each other and differed from the weight recorded in England, the theft must have occurred in the UK; both the French and British companies stated "that the crime was an impossibility", according to Thomas. Newspapers reported that "It is supposed that so well planned a scheme could not have been executed in the rapid passage by railway from London to Folkestone". Burgess was examined, but not deemed a suspect because of his 14 years of service to the company. Tester had been seen at the SER offices while the train was still en route to Folkestone, so was also discounted as a potential thief. A reward of £300 was soon advertised in several newspapers for information regarding the case.
## Discovery, investigation and arrest
Pierce and Agar began to melt down the bars to create new, smaller bars of 100 ounces (2.8 kg), although they briefly set fire to the floor of Cambridge Villa when one of the crucibles cracked, spilling molten gold. Relations between Agar and Kay deteriorated around this time, and he moved out of their house to stay with Pierce while they continued to process and dispose of the bullion.
£2,500 of bullion was sold to Saward, acting as a fence, and the proceeds split evenly between Agar, Pierce, Tester and Burgess. Burgess invested his earnings in Turkish bonds, and shares in the brewing company Reid & Co; Pierce opened a betting shop near Covent Garden, telling friends he had won the capital by betting on Saucebox in the St Leger Stakes horse race at long odds. Tester put his money into Spanish Active bonds. That September he left the SER and became the general manager of a Swedish railway company.
At around the time Agar had separated from Kay, he met Emily Campbell, a 19-year-old prostitute, and the two began a relationship; Campbell's pimp, William Humphreys, took umbrage at the loss of her earnings. To overcome any problems, Agar lent Humphreys £235. When he went to collect the repaid money, he was arrested as one of Humphreys' associates passed him a bag of coins. Police stated that this was the proceeds of a cheque fraud in which he was involved and he was charged accordingly; Agar stated he knew nothing of the fraud, and that he had been trying to collect the money he had lent. Appearing at the Old Bailey in September 1855 on the charge of "feloniously forging and uttering an order for the payment of 700L [£700], with intent to defraud", Agar was found guilty and sentenced to penal transportation for life. Awaiting transportation in Pentonville Prison, Agar arranged for his solicitor, Thomas Wontner, to take the £3,000 Agar had in his bank account and give it to Pierce with instructions that it should be used to support Kay and their child. Pierce agreed, then reneged around mid-1856. Desperate for money, Kay went to see John Weatherhead, the governor of Newgate Prison, and told him that she knew who was involved in the SER bullion robbery. An investigation was undertaken at Cambridge Villa; police found evidence that corroborated Kay's story, including the burnt floorboards, small specks of gold in the fireplace and under the floorboards, and evidence that the fireplace had been used at a very high temperature.
Agar was interviewed in October 1856, while still in Pentonville, by John Rees, the company solicitor for the SER; Agar refused to answer any questions, and so Rees returned around two weeks later and tried again. In the interim, Agar had heard that Pierce had not kept his word and so, angered by the deceit of his erstwhile partner, he turned Queen's evidence and gave Rees the full details of the crime. Pierce and Burgess were arrested on 5 November. As Tester was living in Sweden he could not be arrested, but he was informed that the police wanted to interview him.
## Legal process
In November and December 1856 hearings took place at the Mansion House, presided over by the Lord Mayor of London in his role as the Chief Magistrate of the City of London. For the first two hearings, Agar was not present, but was brought to the court on the third day. When questioned, he confirmed the story he had given to the police, and identified pieces of evidence that had been gathered. On 10 December Tester appeared in court, having been dismissed from his position with the Swedish company. When the Lord Mayor gave his decision on 24 December that the three men were to stand trial for the robbery, Pierce said "I have nothing at all to say. I reserve my defence." Burgess and Tester both stated "I am not guilty".
The trial took place at the Old Bailey between 13 and 15 January 1857, and received wide coverage in newspapers across Britain. Burgess, Tester and Pierce all pleaded not guilty. Agar gave evidence against his former colleagues again, and told the court he was, in Thomas's words, "a self-confessed professional criminal who had not made an honest living since the age of eighteen". Witnesses included the locksmith John Chubb, the bullion dealers, transportation agents, SER staff, the station staff of London Bridge and Folkestone, a customs officer from Boulogne, railway police, taverners and hotel keepers. All corroborated Agar's story that the four men knew each other, and were present together at various stages of the planning and execution of the crime.
The jury took ten minutes to find the three men guilty: Pierce of larceny, and Burgess and Tester of larceny as a servant. The judge, Sir Samuel Martin, showed what the journalist Fergus Linnane calls "a grudging admiration" for Agar during his summing up:
> The man Agar is a man who is as bad, I dare say, as bad can be, but that he is a man of most extraordinary ability no person who heard him examined can for a moment deny. ...
> Something has been said of the romance connected with that man's character, but let those who fancy that there is anything great in it consider his fate. It is obvious ... that he is a man of extraordinary talent; that he gave to this and, perhaps, to many other robberies, an amount of care and perseverance one-tenth of which devoted to honest pursuits must have raised him to a respectable station in life, and considering the commercial activity of this country during the last twenty years, would probably have enabled him to realise a large fortune.
Burgess and Tester were both sentenced to penal transportation for 14 years. Pierce, as he was not a member of SER staff, was given the lighter sentence of two years' hard labour in England, three months of which would be in solitary confinement.
## Later
Tester and Burgess were transported on board the Edwin Fox convict ship on 26 August 1858; the destination was the Swan River Colony in Western Australia. Burgess was given a ticket of leave in December 1859 and a conditional pardon in March 1862. Tester received his ticket of leave in July 1859 and a conditional pardon in October 1861. He left Australia in 1863. Agar remained in England for a little longer; he is known to have been held in Portland Prison in February 1857, before being transported to Australia on 23 September 1857. He was given his ticket of leave in September 1860, and a conditional pardon in September 1867. He left Australia in 1869 to travel to Colombo, in modern-day Sri Lanka.
An account of the trial was published in 1857, with illustrations by Percy Cruikshank, the eldest son of Isaac Robert Cruikshank. The history of the robbery can be found in The First Great Train Robbery, written by David C. Hanrahan in 2011. In the May 1955 issue of The Railway Magazine the railway historian Michael Robbins wrote an article on the robbery; in November 1980 the Journal of the Railway and Canal Historical Society carried an account written by the historian John Fletcher.
On 25 December 1960 the television anthology series Armchair Theatre dramatised the crime under the title The Great Gold Bullion Robbery. Adapted by Malcolm Hulke and Eric Paice from a play by the lawyer Gerald Sparrow, and directed by John Llewellyn Moxey, it starred Colin Blakely as Pierce, James Booth as Agar, Henry McGee as Tester and Leslie Weston as Burgess.
The writer and director Michael Crichton produced his novel The Great Train Robbery in 1975; his introduction reads "The Great Train Robbery was not only shocking and appalling, but also 'daring', 'audacious' and 'masterful'." A feature film based on the novel, The First Great Train Robbery (1978), presents a highly fictionalised version of the event, portraying Pierce (played by Sean Connery), as a gentleman master criminal who eventually escapes from the police. The robbery also featured as one of the themes in the 2006 mystery novel Kept by D. J. Taylor.
## See also
- List of heists in the United Kingdom
- Train robbery
|
2,193,450 |
Richmond Bridge, London
| 1,088,027,065 |
18th-century stone arch bridge in London, England
|
[
"1774 establishments in England",
"Arch bridges in the United Kingdom",
"Bridges across the River Thames",
"Bridges completed in 1777",
"Bridges in London",
"Buildings and structures in the London Borough of Richmond upon Thames",
"Former toll bridges in England",
"Grade I listed bridges in London",
"Grade I listed buildings in the London Borough of Richmond upon Thames",
"Richmond, London",
"Stone bridges in the United Kingdom",
"Tourist attractions in the London Borough of Richmond upon Thames",
"Transport in the London Borough of Richmond upon Thames"
] |
Richmond Bridge is an 18th-century stone arch bridge that crosses the River Thames at Richmond, connecting the two halves of the present-day London Borough of Richmond upon Thames. It was designed by James Paine and Kenton Couse.
The bridge, which is Grade I listed, was built between 1774 and 1777, as a replacement for a ferry crossing which connected Richmond town centre on the east bank with its neighbouring district of East Twickenham to the west. Its construction was privately funded by a tontine scheme, for which tolls were charged until 1859. Because the river meanders from its general west to east direction, flowing from southeast to northwest in this part of London, what would otherwise be known as the north and south banks are often referred to as the "Middlesex" (Twickenham) and "Surrey" (Richmond) banks respectively, named after the historic counties to which each side once belonged.
The bridge was widened and slightly flattened in 1937–40, but otherwise still conforms to its original design. The eighth Thames bridge to be built in what is now Greater London, it is today the oldest surviving Thames bridge in London.
## Background
The small town of Sheen on the Surrey bank of the Thames, 10 miles (16 km) west of the City of London or 16 miles (26 km) by river, had been the site of a royal palace since 1299. After it was destroyed by fire in 1497, Henry VII built a new palace on the site, naming it Richmond Palace after his historic title of Earl of Richmond, and the central part of Sheen became known as Richmond.
Although a ferry had almost certainly existed at the site of the present-day bridge since Norman times, the earliest known crossing of the river at Richmond dates from 1439. The service was owned by the Crown, and operated by two boats, a small skiff for the transport of passengers and a larger boat for horses and small carts; the Twickenham Ferry, slightly upstream, was also in service from at least 1652. However, due to the steepness of the hill leading to the shore-line on the Surrey side neither ferry service was able to transport carriages or heavily laden carts, forcing them to make a very lengthy detour via Kingston Bridge.
In the 18th century Richmond and neighbouring Twickenham on the opposite bank of the Thames, both of which were distant from London but enjoyed efficient transport links to the city via the river, became extremely fashionable, and their populations began to grow rapidly. As the ferry was unable to handle large loads and was often cancelled due to weather conditions, the river crossing became a major traffic bottleneck.
Local resident William Windham had been sub-tutor to Prince William, Duke of Cumberland, and was the former husband of Mary, Lady Deloraine, mistress to George II. As a reward for his services, George II leased Windham the right to operate the ferry until 1798. Windham sub-let the right to operate the ferry to local resident Henry Holland. With the ferry unable to serve the demands of the area, in 1772 Windham sought Parliamentary approval to replace the ferry with a wooden bridge, to be paid for by tolls.
## Design
The plans for a wooden bridge proved unpopular, and in 1772 the Richmond Bridge Act was passed by Parliament, selecting 90 commissioners, including landscape architect Lancelot "Capability" Brown, historian and politician Horace Walpole and playwright and actor David Garrick, to oversee the construction of a stone bridge on the site of the ferry. The Act stipulated that no tax of any sort could be used to finance the bridge, and fixed a scale of tolls, ranging from 1⁄2d for a pedestrian to 2s 6d for a coach drawn by six horses (about 50p and £ respectively in 2023). Henry Holland was granted £5,350 (about £ in 2023) compensation for the loss of the ferry service. The commission appointed James Paine and Kenton Couse to design and build the new bridge.
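The toll scale above uses pre-decimal British currency, in which £1 = 20 shillings (s) and 1 shilling = 12 pence (d), so £1 = 240d. A minimal sketch of the conversion (`to_pounds` is an illustrative helper, not part of any source):

```python
# Pre-decimal British currency, as used in the toll scale above:
# £1 = 20 shillings (s); 1 shilling = 12 pence (d), so £1 = 240d.
def to_pounds(pounds=0, shillings=0, pence=0):
    """Convert an old-style pounds/shillings/pence amount to decimal pounds."""
    return pounds + shillings / 20 + pence / 240

pedestrian_toll = to_pounds(pence=0.5)        # 1/2d for a pedestrian
coach_toll = to_pounds(shillings=2, pence=6)  # 2s 6d for a six-horse coach

# The coach toll was 60 times the pedestrian toll.
print(coach_toll / pedestrian_toll)
```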
The Act specified that the bridge was to be built on the site of the existing ferry "or as much lower down the river as the Commission can settle". Local residents lobbied for it to be built at Water Lane, a short distance downstream from the ferry site. The approach to the river was relatively flat, avoiding the steep slope to the existing ferry pier on the Surrey bank. However, the Dowager Duchess of Newcastle refused to allow the approach road on the Middlesex bank to pass through her land at Twickenham Park, and the commission was forced to build on the site of the ferry, despite a steep 1 in 16 (6.25%) incline.
The bridge was designed as a stone arch bridge of 300 feet (91 m) in length and 24 feet 9 inches (7.54 m) in width, supported by five elliptical arches of varying heights. The tall 60-foot (18 m) wide central span was designed to allow shipping to pass, giving Richmond Bridge a distinctive humpbacked appearance. It was built in Portland stone, and ran between Ferry Hill (Bridge Street today) on the Surrey side and Richmond Road on the Middlesex side; sharp curves in the approach roads on the Middlesex side (still in existence today) were needed to avoid the Dowager Duchess of Newcastle's land at Twickenham Park. Palladian toll houses were built in alcoves at each end.
## Construction
The building work was put out to tender, and on 16 May 1774 Thomas Kerr was awarded the contract to build the bridge for the sum of £10,900 (about £ in 2023). With additional costs, such as compensating landowners and building new approach roads, the total came to approximately £26,000 (about £ in 2023).
Most of the money needed was raised from the sale of shares at £100 each (approximately £ in 2023) in two tontine schemes, the first for £20,000 and the second for £5,000. The first was appropriately called the Richmond-Bridge Tontine, but when it became clear that the initial £20,000 would not be sufficient to complete construction a second tontine was set up. Each investor was guaranteed a return of 4% per annum, so £1,000 per annum from the income raised from tolls was divided amongst the investors in the two tontines. On the death of a shareholder their share of the dividend was divided among the surviving shareholders. To avoid fraud, each investor was obliged to sign an affidavit that they were alive before receiving their dividend. Any revenue over the £1,000 per annum required to pay the investors was held in a general fund for the maintenance of the bridge.
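The tontine arithmetic described above (a fixed annual pool divided equally among surviving shareholders, with a dead shareholder's portion lapsing to the survivors) can be sketched as follows. `tontine_payouts` is an illustrative helper, not a historical record; the £800 pool is the 4% return on the first tontine's £20,000.

```python
def tontine_payouts(annual_pool, alive_shares):
    """Split a fixed annual dividend pool equally among the shares whose
    holders are still alive; a dead shareholder's portion lapses to the
    survivors rather than passing to their estate."""
    if not alive_shares:
        return {}
    payout = annual_pool / len(alive_shares)
    return {holder: payout for holder in alive_shares}

# First tontine: £20,000 raised in £100 shares, with a guaranteed 4% p.a.
pool = 0.04 * 20_000   # £800 per year, matching the figure in the text

print(tontine_payouts(pool, ["A", "B", "C", "D"]))  # four survivors: £200 each
print(tontine_payouts(pool, ["A"]))                 # last survivor takes the full £800
```

This is why the last subscriber, mentioned later in the article, drew the entire £800 per annum for the final years of the scheme.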
Construction began on 23 August 1774. The Prince of Wales was invited to lay the first stone but declined, and so the stone was laid by commission member Henry Hobart. The bridge opened to pedestrians in September 1776 and to other traffic on 12 January 1777, at which time the ferry service was closed, although work on the bridge was not completed until December 1777. A large milestone was placed at the Richmond end, giving the distances to other bridges and to local towns.
## Operation
There was no formal opening ceremony, and little initial recorded public reaction. However, the bridge soon became much admired for its design; an article in The London Magazine in 1779 said that the bridge was "a simple, yet elegant structure, and, from its happy situation, is ... one of the most beautiful ornaments of the river ... from whatever point of view the bridge is beheld, it presents the spectator with one of the richest landscapes nature and art ever produced by their joint efforts, and connoisseurs in painting will instantly be reminded of some of the best performances of Claude Lorraine". James Paine proudly illustrated it among the designs in the second volume of his Plans, Elevations, and Sections of Noblemen and Gentlemen's Houses, 1783. Richmond Bridge was the subject of paintings by many leading artists, including Thomas Rowlandson, John Constable and local resident J. M. W. Turner.
Severe penalties were imposed for vandalising the bridge. The Richmond Bridge Act 1772 specified that the punishment for "willful or malicious damage" to the bridge should be "transportation to one of His Majesty's Colonies in America for the space of seven years". A warning against damage can still be seen on the milestone at the Surrey end of the bridge.
Richmond Bridge was a commercial success, generating £1,300 per annum in tolls (about £ in 2023) in 1810. By 1822, the company had accumulated a sufficient surplus that all vehicle tolls were reduced to one penny.
On 10 March 1859 the last subscriber to the main tontine died, having for over five years received the full £800 per annum set aside for subscribers to the first tontine, and with the death of its last member the scheme expired. On 25 March 1859 Richmond Bridge became toll-free. A large procession made its way to the bridge, where a team of labourers symbolically removed toll gates from their hinges. The toll houses were demolished, replaced by seating in 1868; investment income from the revenue accumulated during the 83 years the tolls had been charged was sufficient to pay for the bridge's maintenance.
In 1846 the first railway line reached Richmond. Richmond gasworks opened in 1848, and Richmond began to develop into a significant town. The District Railway (later the District line) reached Richmond in 1877, connecting it to the London Underground. Commuting to central London became feasible and affordable, leading to further population growth in the previously relatively isolated Richmond and Twickenham areas.
## 20th-century remodelling
By the early 20th century the bridge was proving inadequate for the increasing traffic, particularly with the introduction of motorised transport, and a 10 miles per hour (16 km/h) speed limit was enforced. With the remaining investment income from tolls insufficient to pay for major reconstruction, on 31 March 1931 the bridge was taken into the joint public ownership of Surrey and Middlesex councils, and proposals were made to widen it. The plans were strongly opposed on aesthetic grounds, and the decision was taken to build instead a new bridge a short distance downstream to relieve traffic pressure.
The new Twickenham Bridge opened in 1933, but Richmond Bridge was still unable to handle the volume of traffic, so in 1933 Sir Harley Dalrymple-Hay proposed possible methods for widening the bridge without significantly affecting its appearance. The cheapest of Dalrymple-Hay's proposals, to transfer the footpaths onto stone corbels projecting from the sides of the bridge thus freeing the entire width for vehicle traffic, was rejected on aesthetic grounds, and a proposal to widen the bridge on both sides was rejected as impractical. A proposal to widen the bridge on the upstream side was settled on as causing the least disruption to nearby buildings, and in 1934 it was decided to widen the bridge by 11 feet (3.4 m), at a cost of £73,000 (about £ in 2023).
The Cleveland Bridge & Engineering Company of Darlington was appointed to carry out the rebuilding. In 1937 each stone on the upstream side was removed and numbered and the bridge widened; the stone facing of the upstream side was then reassembled and the bridge reopened to traffic in 1940. Throughout the redevelopment, a single lane of traffic was kept open at all times. It was found that the 18th-century foundations, consisting of wooden platforms sunk into the river bed, had largely rotted away, and they were reinforced with steel pilings and concrete foundations. During the widening works the opportunity was also taken to lower slightly the roadbed at the centre of the bridge and raise the access ramps, reducing the humpbacked nature of the bridge's central section.
## Legacy
James Paine went on to design three other Thames bridges after Richmond, at Chertsey (1783), Kew (1783), and Walton (1788). Paine and Couse renewed their working relationship on the design of Chertsey Bridge, the only one of the three still in existence. Paine became High Sheriff of Surrey in 1783.
In 1962, Richmond Council announced the replacement of the gaslamps on the bridge with electric lighting. The Richmond Society, a local pressure group, protested at the change to the character of the bridge, and succeeded in forcing the council to retain the Victorian gas lamp-posts, converted to electric light, which remain in place today.
In the history of Richmond Bridge there have only been two reported serious collisions between boats and the bridge. On 20 March 1964, three boats tied together at Eel Pie Island, 1+1⁄2 miles (2.4 km) upstream, broke from their moorings in a storm and were swept downstream, colliding with the bridge. Although no serious damage was caused to the bridge, the Princess Beatrice, an 1896 steamer once used by Gilbert and Sullivan, was damaged beyond repair. On 30 January 1987, the Brave Goose, the £3,500,000 yacht of National Car Parks founder Sir Donald Gosling, became wedged under the central arch of the bridge, eventually being freed at low tide the next day.
The eighth Thames bridge to be built in what is now Greater London, Richmond Bridge is currently the oldest surviving bridge over the Thames in Greater London, and the oldest Thames bridge between the sea and Abingdon Bridge in Oxfordshire. Richmond Bridge was Grade I listed in 1952 and it is the only Georgian bridge over the Thames in London. Its bicentenary was celebrated on 7 May 1977; the commemoration was held four months after the actual anniversary of 12 January, to avoid poor weather conditions.
The tradition of boat hire, repairs and boatbuilding continues at the bridge and tunnels at Richmond Bridge Boathouses under boatbuilder Mark Edwards, who was awarded an MBE in 2013 for "services to boatbuilding", including construction of the royal barge Gloriana.
Just to the south of the bridge, in a park at the Richmond end, is a bust of the first president of Chile, Bernardo O'Higgins, who studied in Richmond from 1795 until 1798. In 1998, 200 years after he left Richmond, the bust, whose sculptor is unknown, was unveiled. The patch of ground which the statue overlooks is called "O'Higgins Square". The Mayor of Richmond lays a wreath at the bust every year in the presence of staff from the Chilean Embassy in London.
## See also
- List of crossings of the River Thames
- List of bridges in London
|
4,110,093 |
X-10 Graphite Reactor
| 1,172,306,799 |
Decommissioned nuclear reactor in Tennessee
|
[
"1943 establishments in Tennessee",
"1963 disestablishments in Tennessee",
"Atomic tourism",
"Defunct nuclear reactors",
"Energy infrastructure on the National Register of Historic Places",
"Government buildings completed in 1943",
"Graphite moderated reactors",
"History of the Manhattan Project",
"Industrial buildings and structures on the National Register of Historic Places in Tennessee",
"Military facilities on the National Register of Historic Places in Tennessee",
"Military history of Tennessee",
"Military nuclear reactors",
"National Historic Landmarks in Tennessee",
"National Register of Historic Places in Roane County, Tennessee",
"Nuclear weapons infrastructure of the United States",
"Oak Ridge National Laboratory",
"Tourist attractions in Roane County, Tennessee",
"World War II on the National Register of Historic Places"
] |
The X-10 Graphite Reactor is a decommissioned nuclear reactor at Oak Ridge National Laboratory in Oak Ridge, Tennessee. Formerly known as the Clinton Pile and X-10 Pile, it was the world's second artificial nuclear reactor (after Enrico Fermi's Chicago Pile-1), and the first designed and built for continuous operation. It was built during World War II as part of the Manhattan Project.
While Chicago Pile-1 demonstrated the feasibility of nuclear reactors, the Manhattan Project's goal of producing enough plutonium for atomic bombs required reactors a thousand times as powerful, along with facilities to chemically separate the plutonium bred in the reactors from uranium and fission products. An intermediate step was considered prudent. The next step for the plutonium project, codenamed X-10, was the construction of a semiworks where techniques and procedures could be developed and training conducted. The centerpiece of this was the X-10 Graphite Reactor. It was air-cooled, used nuclear graphite as a neutron moderator, and pure natural uranium in metal form for fuel.
DuPont commenced construction of the plutonium semiworks at the Clinton Engineer Works in Oak Ridge on February 2, 1943. The reactor went critical on November 4, 1943, and produced its first plutonium in early 1944. It supplied the Los Alamos Laboratory with its first significant amounts of plutonium, and its first reactor-bred product. Studies of these samples heavily influenced bomb design. The reactor and chemical separation plant provided invaluable experience for engineers, technicians, reactor operators, and safety officials who then moved on to the Hanford site. X-10 operated as a plutonium production plant until January 1945, when it was turned over to research activities, and the production of radioactive isotopes for scientific, medical, industrial and agricultural uses. It was shut down in 1963 and was designated a National Historic Landmark in 1965.
## Origins
The discovery of nuclear fission by German chemists Otto Hahn and Fritz Strassmann in 1938, followed by its theoretical explanation (and naming) by Lise Meitner and Otto Frisch, opened up the possibility of a controlled nuclear chain reaction with uranium. At Columbia University, Enrico Fermi and Leo Szilard began exploring how this might be done. Szilard drafted a confidential letter to the President of the United States, Franklin D. Roosevelt, explaining the possibility of atomic bombs, and warning of the danger of a German nuclear weapon project. He convinced his old friend and collaborator Albert Einstein to co-sign it, lending his fame to the proposal. This resulted in support by the U.S. government for research into nuclear fission, which became the Manhattan Project.
In April 1941, the National Defense Research Committee (NDRC) asked Arthur Compton, a Nobel-Prize-winning physics professor at the University of Chicago, to report on the uranium program. His report, submitted in May 1941, foresaw the prospects of developing radiological weapons, nuclear propulsion for ships, and nuclear weapons using uranium-235 or the recently discovered plutonium. In October he wrote another report on the practicality of an atomic bomb. Niels Bohr and John Wheeler had theorized that heavy isotopes with even atomic numbers and an odd number of neutrons were fissile. If so, then plutonium-239 was likely to be.
Emilio Segrè and Glenn Seaborg at the University of California produced 28 μg of plutonium in the 60-inch cyclotron there in May 1941, and found that it had 1.7 times the thermal neutron capture cross section of uranium-235. At the time plutonium-239 had been produced in minute quantities using cyclotrons, but it was not possible to produce large quantities that way. Compton discussed with Eugene Wigner from Princeton University how plutonium might be produced in a nuclear reactor, and with Robert Serber how the plutonium produced in a reactor might be separated from uranium.
The final draft of Compton's November 1941 report made no mention of using plutonium, but after discussing the latest research with Ernest Lawrence, Compton became convinced that a plutonium bomb was also feasible. In December, Compton was placed in charge of the plutonium project, which was codenamed X-10. Its objectives were to produce reactors to convert uranium to plutonium, to find ways to chemically separate the plutonium from the uranium, and to design and build an atomic bomb. It fell to Compton to decide which of the different types of reactor designs the scientists should pursue, even though a successful reactor had not yet been built. He felt that having teams at Columbia, Princeton, the University of Chicago and the University of California was creating too much duplication and not enough collaboration, and he concentrated the work at the Metallurgical Laboratory at the University of Chicago.
## Site selection
By June 1942, the Manhattan Project had reached the stage where the construction of production facilities could be contemplated. On June 25, 1942, the Office of Scientific Research and Development (OSRD) S-1 Executive Committee deliberated on where they should be located. Moving directly to a megawatt production plant looked like a big step, given that many industrial processes do not easily scale from the laboratory to production size. An intermediate step of building a pilot plant was considered prudent. For the pilot plutonium separation plant, a site was wanted close to the Metallurgical Laboratory, where the research was being carried out, but for reasons of safety and security, it was not desirable to locate the facilities in a densely populated area like Chicago.
Compton selected a site in the Argonne Forest, part of the Forest Preserve District of Cook County, about 20 miles (32 km) southwest of Chicago. The full-scale production facilities would be co-located with other Manhattan Project facilities at a still more remote location in Tennessee. Some 1,000 acres (400 ha) of land was leased from Cook County for the pilot facilities, while an 83,000-acre (34,000 ha) site for the production facilities was selected at Oak Ridge, Tennessee. By the S-1 Executive Committee meeting on September 13 and 14, it had become apparent that the pilot facilities would be too extensive for the Argonne site, so instead a research reactor would be built at Argonne, while the plutonium pilot facilities (a semiworks) would be built at the Clinton Engineer Works in Tennessee.
This site was selected on the basis of several criteria. The plutonium pilot facilities needed to be two to four miles (3.2 to 6.4 km) from the site boundary and any other installation, in case radioactive fission products escaped. While security and safety concerns suggested a remote site, it still needed to be near sources of labor, and accessible by road and rail transportation. A mild climate that allowed construction to proceed throughout the year was desirable. Terrain separated by ridges would reduce the impact of accidental explosions, but they could not be so steep as to complicate construction. The substratum needed to be firm enough to provide good foundations, but not so rocky that it would hinder excavation work. It needed large amounts of electrical power (available from the Tennessee Valley Authority) and cooling water. Finally, a War Department policy held that, as a rule, munitions facilities should not be located west of the Sierra or Cascade Ranges, east of the Appalachian Mountains, or within 200 miles (320 km) of the Canadian or Mexican borders.
In December, it was decided that the plutonium production facilities would not be built at Oak Ridge after all, but at the even more remote Hanford Site in Washington state. Compton and the staff at the Metallurgical Laboratory then reopened the question of building the plutonium semiworks at Argonne, but the engineers and management of DuPont, particularly Roger Williams, the head of its TNX Division, which was responsible for the company's role in the Manhattan Project, did not support this proposal. They felt that there would be insufficient space at Argonne, and that there were disadvantages in having a site that was so accessible, as they were afraid that it would permit the research staff from the Metallurgical Laboratory to interfere unduly with the design and construction, which they considered their prerogative. A better location, they felt, would be with the remote production facilities at Hanford. In the end a compromise was reached. On January 12, 1943, Compton, Williams, and Brigadier General Leslie R. Groves, Jr., the director of the Manhattan Project, agreed that the semiworks would be built at the Clinton Engineer Works.
Both Compton and Groves proposed that DuPont operate the semiworks. Williams counter-proposed that the semiworks be operated by the Metallurgical Laboratory. He reasoned that it would primarily be a research and educational facility, and that expertise was to be found at the Metallurgical Laboratory. Compton was shocked; the Metallurgical Laboratory was part of the University of Chicago, and therefore the university would be operating an industrial facility 500 miles (800 km) from its main campus. James B. Conant told him that Harvard University "wouldn't touch it with a ten-foot pole", but the University of Chicago's Vice President, Emery T. Filbey, took a different view, and instructed Compton to accept. When University President Robert Hutchins returned, he greeted Compton with "I see, Arthur, that while I was gone you doubled the size of my university".
## Design
The fundamental design decisions in building a reactor are the choice of fuel, coolant and neutron moderator. The choice of fuel was straightforward; only natural uranium was available. The decision that the reactor would use graphite as a neutron moderator caused little debate. Although with heavy water as moderator the number of neutrons produced for every one absorbed (known as k factor) was 10 percent more than in the purest graphite, heavy water would be unavailable in sufficient quantities for at least a year. This left the choice of coolant, over which there was much discussion. A limiting factor was that the fuel slugs would be clad in aluminum, so the operating temperature of the reactor could not exceed 200 °C (392 °F). The theoretical physicists in Wigner's group at the Metallurgical Laboratory developed several designs. In November 1942, the DuPont engineers chose helium gas as the coolant for the production plant, mainly on the basis that it did not absorb neutrons, but also because it was inert, which removed the issue of corrosion.
Not everyone agreed with the decision to use helium. Szilard, in particular, was an early proponent of using liquid bismuth; but the major opponent was Wigner, who argued forcefully in favor of a water-cooled reactor design. He realized that since water absorbed neutrons, k would be reduced by about 3 percent, but had sufficient confidence in his calculations that the water-cooled reactor would still be able to achieve criticality. From an engineering perspective, a water-cooled design was straightforward to design and build, while helium posed technological problems. Wigner's team produced a preliminary report on water cooling, designated CE-140 in April 1942, followed by a more detailed one, CE-197, titled "On a Plant with Water Cooling", in July 1942.
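Wigner's argument can be sketched as simple arithmetic. The baseline multiplication factor below is an assumed illustrative value — the source gives only the relative penalties (about 3 percent for water cooling, about 10 percent in heavy water's favor), not absolute numbers:

```python
# Back-of-the-envelope sketch of the coolant/moderator trade-off.
# A chain reaction is self-sustaining only if the multiplication factor k > 1.
k_graphite_lattice = 1.07          # ASSUMED baseline k for the dry graphite/uranium lattice
water_absorption_penalty = 0.03    # water cooling cuts k by about 3% (per the source)

k_water_cooled = k_graphite_lattice * (1 - water_absorption_penalty)
print(f"water-cooled k ≈ {k_water_cooled:.3f}")   # still above 1.0

# Heavy water would have yielded ~10% more neutrons per absorption than
# the purest graphite, but was unavailable in quantity:
k_heavy_water = k_graphite_lattice * 1.10
print(f"hypothetical heavy-water k ≈ {k_heavy_water:.3f}")
```

Under these assumed numbers the water-cooled lattice remains critical, which is the substance of Wigner's confidence in his calculations.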
Fermi's Chicago Pile-1 reactor, constructed under the west viewing stands of the original Stagg Field at the University of Chicago, "went critical" on December 2, 1942. This graphite-moderated reactor only generated up to 200 W, but it demonstrated that k was higher than anticipated. This not only removed most of the objections to air-cooled and water-cooled reactor designs, but also greatly simplified other aspects of the design. Wigner's team submitted blueprints of a water-cooled reactor to DuPont in January 1943. By this time, the concerns of DuPont's engineers about the corrosiveness of water had been overcome by the mounting difficulties of using helium, and all work on helium was terminated in February. At the same time, air cooling was chosen for the reactor at the pilot plant. Since it would be of a quite different design from the production reactors, the X-10 Graphite Reactor lost its value as a prototype, but its value as a working pilot facility remained, providing plutonium needed for research. It was hoped that problems would be found in time to deal with them in the production plants. The semiworks would also be used for training, and for developing procedures.
## Construction
Although the design of the reactor was not yet complete, DuPont began construction of the plutonium semiworks on February 2, 1943, on an isolated 112-acre (45.3 ha) site in the Bethel Valley, about 10 miles (16 km) southwest of Oak Ridge, officially known as the X-10 area. The site included research laboratories, a chemical separation plant, a waste storage area, a training facility for Hanford staff, and administrative and support facilities that included a laundry, cafeteria, first aid center, and fire station. Because of the subsequent decision to construct water-cooled reactors at Hanford, only the chemical separation plant operated as a true pilot. The semiworks eventually became known as the Clinton Laboratories, and was operated by the University of Chicago as part of the Metallurgical Project.
Construction work on the reactor had to wait until DuPont had completed the design. Excavation commenced on April 27, 1943. A large pocket of soft clay was soon discovered, necessitating additional foundations. Further delays occurred due to wartime difficulties in procuring building materials. There was an acute shortage of both common and skilled labor; the contractor had only three-quarters of the required workforce, and there was high turnover and absenteeism, mainly the result of poor accommodations and difficulties in commuting. The township of Oak Ridge was still under construction, and barracks were built to house workers. Special arrangements with individual workers increased their morale and reduced turnover. Finally, there was unusually heavy rainfall, with 9.3 inches (240 mm) falling in July 1943, more than twice the average of 4.3 inches (110 mm).
Some 700 short tons (640 t) of graphite blocks were purchased from National Carbon. The construction crews began stacking them in September 1943. Cast uranium billets came from Metal Hydrides, Mallinckrodt and other suppliers. These were extruded into cylindrical slugs, and then canned. The fuel slugs were canned to protect the uranium metal from corrosion that would occur if it came into contact with water, and to prevent the venting of gaseous radioactive fission products that might be formed when they were irradiated. Aluminum was chosen as it transmitted heat well but did not absorb too many neutrons. Alcoa started canning on June 14, 1943. General Electric and the Metallurgical Laboratory developed a new welding technique to seal the cans airtight, and the equipment for this was installed in the production line at Alcoa in October 1943.
Construction commenced on the pilot separation plant before a chemical process for separating plutonium from uranium had been selected. Not until May 1943 did DuPont managers decide to use the bismuth phosphate process in preference to one using lanthanum fluoride. The bismuth phosphate process was devised by Stanley G. Thompson at the University of California. The process exploited two oxidation states of plutonium: a tetravalent (+4) state and a hexavalent (+6) state, with different chemical properties. Bismuth phosphate (BiPO
<sub>4</sub>) was similar in its crystalline structure to plutonium phosphate, and plutonium would be carried with bismuth phosphate in a solution while other elements, including uranium, would be precipitated. The plutonium could be switched from being in solution to being precipitated by toggling its oxidation state. The plant consisted of six cells, separated from each other and the control room by thick concrete walls. The equipment was operated from the control room by remote control due to the radioactivity produced by fission products. Work was completed on November 26, 1943, but the plant could not operate until the reactor started producing irradiated uranium slugs.
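The carrier cycle described above can be sketched schematically. The per-cycle recovery and decontamination fractions below are invented purely for illustration — the source does not give them — but the structure (precipitate with the carrier in the +4 state, then oxidize to +6 and discard the carrier) follows the text:

```python
# Schematic sketch of one bismuth phosphate carrier cycle.
# All numeric fractions are ASSUMED for illustration, not historical values.
def carrier_cycle(pu, contaminants):
    # Reduce Pu to +4: it co-precipitates with BiPO4; uranium and most
    # fission products stay in the discarded solution.
    pu_carried = pu * 0.95                 # assumed per-cycle plutonium recovery
    contam_carried = contaminants * 0.1    # assumed fraction dragged along
    # Redissolve, oxidize Pu to +6, and precipitate BiPO4 again: this time
    # the plutonium stays in solution and the carrier (with more of the
    # contaminants) is discarded.
    return pu_carried, contam_carried * 0.1

pu, contam = 1.0, 1.0
for cycle in range(3):
    pu, contam = carrier_cycle(pu, contam)
    print(f"cycle {cycle + 1}: Pu retained {pu:.2f}, contaminant fraction {contam:.1e}")
```

The point of the sketch is that repeating the cycle multiplies the decontamination factors while losing only a small fraction of the plutonium each pass.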
## Operation
The X-10 Graphite Reactor was the world's second artificial nuclear reactor after Chicago Pile-1, and was the first reactor designed and built for continuous operation. It consisted of a huge block, 24 feet (7.3 m) long on each side, of nuclear graphite cubes, weighing around 1,500 short tons (1,400 t), that acted as a moderator. They were surrounded by seven feet (2.1 m) of high-density concrete as a radiation shield. In all, the reactor was 38 feet (12 m) wide, 47 feet (14 m) deep and 32 feet (9.8 m) high. There were 36 horizontal rows of 35 holes. Behind each was a metal channel into which uranium fuel slugs could be inserted. An elevator provided access to those higher up. Only 800 (\~64%) of the channels were ever used.
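The channel figures above are internally consistent, as a quick check shows (no assumptions beyond the numbers quoted in the text):

```python
# Sanity check of the loading-face figures:
# 36 horizontal rows of 35 holes, of which only about 800 were ever used.
rows, holes_per_row = 36, 35
total_channels = rows * holes_per_row      # 1,260 channels in all
used_channels = 800

print(total_channels)
print(f"{used_channels / total_channels:.1%}")  # ~63.5%, the ~64% in the text
```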
The reactor was controlled by seven neutron-absorbing rods. Three 8-foot (2.4 m) cadmium-clad steel rods penetrated the reactor vertically and formed the scram system: they were suspended from steel cables wound around a drum and held in place by an electromagnetic clutch, so that if power was lost they would drop into the reactor and halt the reaction. The other four rods were made of boron steel and penetrated the reactor horizontally from the north side. Two of them, known as "shim" rods, were hydraulically controlled, with sand-filled hydraulic accumulators available in the event of a power failure; the other two were driven by electric motors.
The cooling system consisted of three electric fans running at 55,000 cubic feet per minute (1,600 m<sup>3</sup>/min). Because it was cooled using outside air, the reactor could be run at a higher power level on cold days. After going through the reactor, the air was filtered to remove radioactive particles larger than 0.00004 inches (0.0010 mm) in diameter. This took care of over 99 percent of the radioactive particles. It was then vented through a 200-foot (61 m) chimney. The reactor was operated from a control room in the southeast corner on the second floor.
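A rough heat balance illustrates why an air-cooled reactor runs better on cold days. Only the fan flow (55,000 cfm) and the eventual 4,000 kW power level come from the text; the air properties and the allowable outlet temperature below are assumptions for an order-of-magnitude sketch:

```python
# Order-of-magnitude heat balance for the air-cooled core.
flow_m3_per_min = 1600.0        # fan flow stated in the text (55,000 cfm)
air_density = 1.2               # kg/m^3, ASSUMED for ambient air
cp_air = 1005.0                 # J/(kg*K), specific heat of air

mass_flow = flow_m3_per_min / 60.0 * air_density   # ~32 kg/s of air

power_w = 4_000_000.0           # the 4,000 kW power level reached in July 1944
delta_t = power_w / (mass_flow * cp_air)
print(f"air temperature rise ≈ {delta_t:.0f} K")

# With a fixed maximum outlet temperature (the aluminum cladding limited the
# reactor to 200 °C), a colder intake leaves more allowable temperature rise:
t_outlet_max = 150.0            # ASSUMED allowable outlet temperature, °C
for t_intake in (30.0, 0.0):    # a warm day vs a cold day
    max_power_kw = mass_flow * cp_air * (t_outlet_max - t_intake) / 1000.0
    print(f"intake {t_intake:>4.0f} °C -> max power ≈ {max_power_kw:.0f} kW")
```

Under these assumptions the same fans support several hundred kilowatts more on a freezing day than on a warm one, which is the effect the text describes.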
In September 1942, Compton asked a physicist, Martin D. Whitaker, to form a skeleton operating staff for X-10. Whitaker became the inaugural director of the Clinton Laboratories, as the semiworks became officially known in April 1943. The first permanent operating staff arrived from the Metallurgical Laboratory in Chicago in April 1943, by which time DuPont had begun transferring its technicians to the site. They were augmented by one hundred technicians in uniform from the Army's Special Engineer Detachment. By March 1944, there were some 1,500 people working at X-10.
Supervised by Compton, Whitaker, and Fermi, the reactor went critical on November 4, 1943, with about 30 short tons (27 t) of uranium. A week later the load was increased to 36 short tons (33 t), raising its power generation to 500 kW, and by the end of the month the first 500 mg of plutonium was created. The reactor normally operated around the clock, with 10-hour weekly shutdowns for refueling. During startup, the safety rods and one shim rod were completely removed. The other shim rod was inserted at a predetermined position. When the desired power level was reached, the reactor was controlled by adjusting the partly inserted shim rod.
The first batch of canned slugs to be irradiated was received on December 20, 1943, allowing the first plutonium to be produced in early 1944. The slugs used pure metallic natural uranium, in air-tight aluminum cans 4.1 inches (100 mm) long and 1 inch (25 mm) in diameter. Each channel was loaded with between 24 and 54 fuel slugs. The reactor went critical with 30 short tons (27 t) of slugs, but in its later life was operated with as much as 54 short tons (49 t). To load a channel, the radiation-absorbing shield plug was removed, and the slugs inserted manually in the front (east) end with long rods. To unload them, they were pushed all the way through to the far (west) end, where they dropped onto a neoprene slab and slid down a chute into a 20-foot-deep (6.1 m) pool of water that acted as a radiation shield. Following weeks of underwater storage to allow for decay in radioactivity, the slugs were delivered to the chemical separation building.
By February 1944, the reactor was irradiating a ton of uranium every three days. Over the next five months, the efficiency of the separation process was improved, with the percentage of plutonium recovered increasing from 40 to 90 percent. Modifications over time raised the reactor's power to 4,000 kW in July 1944. The effect of the neutron poison xenon-135, one of many fission products produced from the uranium fuel, was not detected during the early operation of the X-10 Graphite Reactor. Xenon-135 subsequently caused problems with the startup of the Hanford B reactor that nearly halted the plutonium project.
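The xenon-135 effect can be illustrated with a minimal decay-chain simulation. Fission yields iodine-135 (half-life about 6.6 hours), which decays into the strong neutron absorber xenon-135 (half-life about 9.1 hours); at power, neutron flux also burns the xenon off, but after a shutdown the iodine keeps feeding xenon with nothing removing it, so the poison peaks hours later. The half-lives are standard values; the production and burn-off rates below are normalized, assumed units:

```python
# Minimal Euler-integration sketch of iodine-135 -> xenon-135 buildup.
# Rates are in normalized, ASSUMED units; only the half-lives are physical.
import math

lam_i = math.log(2) / 6.6       # iodine-135 decay constant, per hour
lam_xe = math.log(2) / 9.1      # xenon-135 decay constant, per hour
burnoff = 0.35                  # ASSUMED xenon burn-off rate at power, per hour

def simulate(hours_at_power, hours_after_scram, dt=0.01):
    iodine, xenon, history = 0.0, 0.0, []
    steps_on = int(hours_at_power / dt)
    for step in range(steps_on + int(hours_after_scram / dt)):
        at_power = step < steps_on
        prod = 1.0 if at_power else 0.0            # iodine production from fission
        removal = lam_xe + (burnoff if at_power else 0.0)
        d_i = prod - lam_i * iodine
        d_xe = lam_i * iodine - removal * xenon
        iodine += d_i * dt
        xenon += d_xe * dt
        history.append(xenon)
    return history

h = simulate(hours_at_power=100, hours_after_scram=30)
at_scram = h[int(100 / 0.01) - 1]
peak = max(h)
print(f"xenon rises to {peak / at_scram:.1f}x its at-power level after shutdown")
```

In this toy model the xenon concentration roughly doubles in the hours after shutdown before decaying away — the qualitative behavior that caught the Hanford B reactor's operators by surprise.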
The X-10 semiworks operated as a plutonium production plant until January 1945, when it was turned over to research activities. By this time, 299 batches of irradiated slugs had been processed. A radioisotope building, a steam plant, and other structures were added in April 1946 to support the laboratory's peacetime educational and research missions. All work was completed by December 1946, adding another \$1,009,000 to the cost of construction at X-10, and bringing the total cost to \$13,041,000. Operational costs added another \$22,250,000.
X-10 supplied the Los Alamos Laboratory with the first significant samples of plutonium. Studies of these by Emilio G. Segrè and his P-5 Group at Los Alamos revealed that it contained impurities in the form of the isotope plutonium-240, which has a far higher spontaneous fission rate than plutonium-239. This meant that it would be highly likely that a plutonium gun-type nuclear weapon would predetonate and blow itself apart during the initial formation of a critical mass. The Los Alamos Laboratory was thus forced to turn its development efforts to creating an implosion-type nuclear weapon—a far more difficult feat.
The X-10 chemical separation plant also verified the bismuth phosphate process that was used in the full-scale separation facilities at Hanford. Finally, the reactor and chemical separation plant provided invaluable experience for engineers, technicians, reactor operators, and safety officials who then moved on to the Hanford site.
## Peacetime use
After the war ended, the graphite reactor became the first facility in the world to produce radioactive isotopes for peacetime use. On August 2, 1946, Oak Ridge National Laboratory director Eugene Wigner presented a small container of carbon-14 to the director of the Barnard Free Skin and Cancer Hospital, for medical use at the hospital in St. Louis, Missouri. Subsequent shipments of radioisotopes, primarily iodine-131, phosphorus-32, carbon-14, and molybdenum-99/technetium-99m, were for scientific, medical, industrial and agricultural uses.
In August 1948, the reactor was used to produce the first electricity derived from nuclear power. Uranium slugs within an aluminum tube were irradiated within the reactor core. Water was circulated through the tube by means of an automatic feedwater system to generate steam. This steam was fed to a model steam engine, a Jensen Steam Engines \#50, which drove a small generator that powered a single bulb. The engine and generator are on display at the reactor loading face, just below the staircase leading to the loading platform.
The X-10 Graphite Reactor was shut down on November 4, 1963, after twenty years of use. It was designated a National Historic Landmark on December 21, 1965, and added to the National Register of Historic Places on October 15, 1966. In 1969 the American Society for Metals listed it as a landmark for its contributions to the advancement of materials science and technology, and in 2008 it was designated as a National Historic Chemical Landmark by the American Chemical Society. The control room and reactor face are accessible to the public during scheduled tours offered through the American Museum of Science and Energy.
## Similar reactors
The Brookhaven National Laboratory (BNL) Graphite Research Reactor was the first nuclear reactor to be constructed in the United States following World War II. Construction, led by Lyle Benjamin Borst, began in 1947, and the reactor reached criticality for the first time on August 22, 1950. The reactor consisted of a 700-short-ton (640 t), 25-foot (7.6 m) cube of graphite fueled by natural uranium. Its primary mission was applied nuclear research in medicine, biology, chemistry, physics and nuclear engineering. One of the most significant developments at this facility was the production of molybdenum-99/technetium-99m, used today in tens of millions of medical diagnostic procedures annually, making it the most commonly used medical radioisotope. The BNL Graphite Research Reactor was shut down in 1969 and fully decommissioned in 2012.
When Britain began planning to build nuclear reactors to produce plutonium for weapons in 1946, it was decided to build a pair of air-cooled graphite reactors similar to the X-10 Graphite Reactor at Windscale. Natural uranium was used because enriched uranium was not available; similarly, graphite was chosen as the neutron moderator because beryllia was toxic and hard to manufacture, and heavy water was unavailable. Use of water as a coolant was considered, but there were concerns about the possibility of a catastrophic nuclear meltdown in the densely populated British Isles if the cooling system failed. Helium was again the preferred choice as a coolant gas, but the main source of it was the United States, and under the 1946 McMahon Act, the United States would not supply it for nuclear weapons production, so, in the end, air cooling was chosen. Construction began in September 1947, and the two reactors became operational in October 1950 and June 1951. Both were decommissioned after the disastrous Windscale fire in October 1957. They would be the last major air-cooled plutonium-producing reactors; the UK's follow-on Magnox and AGR designs used carbon dioxide instead.
As of 2016, another reactor of similar design to the X-10 Graphite Reactor is still in operation, the Belgian BR-1 reactor of the SCK•CEN, located in Mol, Belgium. Financed through the Belgian uranium export tax, and built with the help of British experts, the 4 MW research reactor went critical for the first time on May 11, 1956. It is used for scientific purposes, such as neutron activation analysis, neutron physics experiments, calibration of nuclear measurement devices and the production of neutron transmutation doped silicon.
# York City F.C.

Association football club in York, England
York City Football Club is a professional association football club based in the city of York, North Yorkshire, England. The team will compete in the National League, the fifth level of the English football league system, in the 2023–24 season.
Founded in 1922, the club played seven seasons in non-League football before joining the Football League. York played in the Third Division North and Fourth Division until 1959, when they were promoted for the first time. York achieved their best run in the FA Cup in 1954–55, when they met Newcastle United in the semi-final. They fluctuated between the Third and Fourth Divisions, before spending two seasons in the Second Division in the 1970s. York first played at Wembley Stadium in 1993, when they won the Third Division play-off final. At the end of 2003–04, they lost their Football League status after being relegated from the Third Division. The 2011–12 FA Trophy was the first national knockout competition won by York, and they returned to the Football League that season before being relegated back into non-League football in 2016.
York are nicknamed the Minstermen, after York Minster, and the team traditionally play in red kits. They played at Fulfordgate from 1922 to 1932, when they moved to Bootham Crescent, their home for 88 years. This ground had been subject to numerous improvements over the years, but the club lost ownership of it when it was transferred to a holding company in 1999. York bought it back five years later, but the terms of the loan used to do so necessitated a move to a new ground. They moved into their current ground, the York Community Stadium, in 2021. York have had rivalries with numerous clubs, but their traditional rivals are Hull City and Scarborough. The club's record appearance holder is Barry Jackson, who made 539 appearances, while their leading scorer is Norman Wilkinson, with 143 goals.
## History
### 1922–1946: Foundation and establishment in Football League
The club was founded with the formation of the York City Association Football and Athletic Club Limited in May 1922 and subsequently gained admission to the Midland League. York ranked in 19th place in 1922–23 and 1923–24, and entered the FA Cup for the first time in the latter. York played in the Midland League for seven seasons, achieving a highest finish of sixth, in 1924–25 and 1926–27. They surpassed the qualifying rounds of the FA Cup for the first time in 1926–27, when they were beaten 2–1 by Second Division club Grimsby Town in the second round. The club made its first serious attempt for election to the Football League in May 1927, but this was unsuccessful as Barrow and Accrington Stanley were re-elected. However, the club was successful two years later, being elected to the Football League in June 1929 to replace Ashington in the Third Division North.
York won 2–0 against Wigan Borough in their first match in the Football League, and finished 1929–30 sixth in the Third Division North. Three years later, York only avoided having to seek re-election after winning the last match of 1932–33. In the 1937–38 FA Cup, they eliminated First Division teams West Bromwich Albion and Middlesbrough, and drew 0–0 at home to Huddersfield Town in the sixth round, before losing the replay 2–1 at Leeds Road. York had been challenging for promotion in 1937–38 before faltering in the closing weeks, and in the following season only avoided having to apply for re-election with victory in the penultimate match. They participated in the regional competitions organised by the Football League upon the outbreak of the Second World War in September 1939. York played in wartime competitions for seven seasons, and in 1942 won the Combined Counties Cup.
### 1946–1981: FA Cup run, promotion and relegations
Peacetime football resumed in 1946–47 and York finished the next three seasons in midtable. However, they were forced to apply for re-election for the first time after finishing bottom of the Third Division North in 1949–50. York pursued promotion in 1952–53, before finishing fourth with 53 points, both new club records in the Football League. The club's longest cup run came when they reached the semi-final of the 1954–55 FA Cup, a campaign in which Arthur Bottom scored eight goals. In the semi-final, York drew 1–1 with Newcastle United at Hillsborough, before being beaten 2–0 at Roker Park in the replay. This meant York had become the first third-tier club to play in an FA Cup semi-final replay. With a 13th-place finish in 1957–58, York became founder members of the Fourth Division, while the clubs finishing in the top half of the North and South sections formed the new Third Division.
York only missed out on the runners-up spot in 1958–59 on goal average, and were promoted for the first time in third place. However, they were relegated from the Third Division after just one season in 1959–60. York's best run in the League Cup came in 1961–62, the competition's second season, after reaching the fifth round. They were beaten 2–1 by divisional rivals Rochdale. York had to apply for re-election for the second time after finishing 22nd in 1963–64, but achieved a second promotion the next season, again in third place in the Fourth Division. York were again relegated after one season, finishing bottom of the Third Division in 1965–66. The club was forced to apply for re-election in three successive seasons, from 1966–67 to 1968–69, after finishing in the bottom four of the Fourth Division in each of those seasons. York's record of earning promotion every six years was maintained in 1970–71, with a fourth-place finish in the Fourth Division.
York avoided relegation from the Third Division in 1971–72 and 1972–73, albeit only on goal average in both seasons. After these two seasons they hit form in 1973–74, when "three up, three down" was introduced to the top three divisions. After being among the leaders most of the season, York were promoted to the Second Division for the first time in third place. The club's highest-ever league placing was achieved in mid October 1974 when York were fifth in the Second Division, and they finished 1974–75 in 15th place. York finished in 21st place the following season, and were relegated back to the Third Division. York dropped further still, being relegated in 1976–77 after finishing bottom of the Third Division. The 1977–78 season culminated in the club being forced to apply for re-election for the sixth time, after ranking third from bottom in the Fourth Division. Two midtable finishes followed before York made their seventh application for re-election, after they finished bottom of the Fourth Division in 1980–81.
### 1981–2004: Further promotions and relegation from Football League
In 1981–82, York endured a club-record run of 12 home matches without victory, but only missed out on promotion in 1982–83 due to their poor away form in the second half of the season. York won the Fourth Division championship with 101 points in 1983–84, becoming the first Football League team to achieve a three-figure points total in a season. In January 1985, York recorded a 1–0 home victory over First Division Arsenal in the fourth round of the 1984–85 FA Cup, courtesy of an 89th-minute penalty scored by Keith Houchen. They proceeded to draw 1–1 at home with European Cup holders Liverpool in February 1985, but lost 7–0 in the replay at Anfield; York's record cup defeat. The teams met again in the following season's FA Cup, and after another 1–1 home draw, Liverpool won 3–1 in the replay after extra time at Anfield. Their finish of seventh in the Third Division in 1985–86 marked the fifth consecutive season York had improved their end-of-season league ranking.
York only avoided relegation with a draw in the last match of 1986–87, but did go down the following season after finishing second from bottom in the Third Division. In 1992–93, York ended a five-year spell in the Third Division by gaining promotion to the Second Division via the play-offs. Crewe Alexandra were beaten in the play-off final at Wembley Stadium, with a 5–3 penalty shoot-out victory following a 1–1 extra time draw. York reached the Second Division play-offs at the first attempt, but lost 1–0 on aggregate to Stockport County in the semi-final. York recorded a 4–3 aggregate victory in the 1995–96 League Cup second round over the eventual Premier League and FA Cup double winners Manchester United. This included a 3–0 win in the first leg at Old Trafford against a strong United team that included some younger players, and a more experienced United team was unable to overcome the deficit in the second leg, York losing 3–1. They then beat Everton in the second round of the following season's League Cup; they drew the first leg 1–1 at Goodison Park, but won the second leg 3–2 at home.
York were relegated from the Second Division in 1998–99, after dropping into 21st place on the last day of the season. In December 2001, long-serving chairman Douglas Craig put the club and its ground up for sale for £4.5 million, before announcing that the club would resign from the Football League if a buyer was not found. Motor racing driver John Batchelor took over the club in March 2002, and by December the club had gone into administration. The Supporters' Trust (ST) bought the club in March 2003 after an offer of £100,000 as payment for £160,000 owed in tax was accepted by the Inland Revenue. Batchelor left having diverted almost all of the £400,000 received from a sponsorship deal with Persimmon to his racing team, and having failed to deliver on his promise of having ST members on the board. York failed to win any of their final 20 league fixtures in 2003–04 and finished bottom of the Third Division. This meant the club was relegated to the Football Conference, ending 75 years of Football League membership.
### 2004–present: Return to and relegation from Football League
York only avoided relegation late into their first Conference National season in 2004–05, before reaching the play-off semi-final in 2006–07, when they were beaten 2–1 on aggregate by Morecambe. Having only escaped relegation towards the end of 2008–09, York participated in the 2009 FA Trophy final, and were defeated 2–0 by Stevenage Borough at Wembley Stadium. They reached the 2010 Conference Premier play-off final at Wembley Stadium, but were beaten 3–1 by Oxford United. York won their first national knockout competition two years later, after they beat Newport County 2–0 in the 2012 FA Trophy final at Wembley Stadium. A week later they earned promotion to League Two after they beat Luton Town 2–1 at Wembley Stadium in the 2012 Conference Premier play-off final, marking the club's return to the Football League after an eight-year absence.
York only secured survival from relegation late into 2012–13, their first season back in the Football League. They made the League Two play-offs the following season, and were beaten 1–0 on aggregate by Fleetwood Town in the semi-final. However, York were relegated to the National League four years after returning to the Football League, with a bottom-place finish in League Two in 2015–16. York were further relegated to the National League North for the first time in 2016–17; however, they ended the season with a 3–2 win over Macclesfield Town at Wembley Stadium in the 2017 FA Trophy final. The club was promoted back to the National League at the end of the 2021–22 season via the play-offs, with a 2–0 victory over Boston United in the final. The ST purchased JM Packaging's 75% share of the club in July 2022 to regain its 100% shareholding, before transferring 51% of those shares to businessman Glen Henderson, who took over as chairman of the club.
## Club identity
York are nicknamed "the Minstermen", in reference to York Minster. It is believed to have been coined by a journalist who came to watch the team during a successful cup run, and was only first used officially in literature in 1972. Before this, York were known as "the Robins", because of the team's red shirts. They were billed "the Happy Wanderers", after a popular song, at the time of their run in the 1954–55 FA Cup.
For most of the club's history, York have worn red shirts. However, in the club's first season, 1922–23, the kit comprised maroon shirts, white shorts and black socks. Maroon and white striped shirts were worn for three years in the mid 1920s, before the maroon shirts returned. In 1933, York changed their maroon jerseys to chocolate and cream stripes, a reference to the city's association with the confectionery industry. After four years they changed their colours to what were described as "distinctive red shirts", with the official explanation that the striped jerseys clashed with opponents too often. York continued to don red shirts before a two-year spell of wearing all-white kits from 1967 to 1969.
York resumed wearing maroon shirts with white shorts in 1969. To mark their promotion to the Second Division in 1974, a bold white "Y" was added to the shirts, which became known as the "Y-fronts". Red shirts returned in 1978, along with the introduction of navy blue shorts. In 2004, the club dropped navy from the kits and instead used plain red and white, until 2008 when a kit mostly of navy was introduced. For 2007–08, the club brought in a third kit, which comprised light blue shirts and socks, with maroon shorts. A kit with purple shirts was introduced for a one-off appearance in the 2009 FA Trophy final. Red shirts returned in 2010, and have been worn with red, navy blue, light blue and white shorts.
York adopted the city's coat of arms as their crest upon the club's formation, although it only featured on the shirts from 1950 to 1951. In 1959, a second crest was introduced, in the form of a shield that contained York Minster, the White Rose of York and a robin. This crest never appeared on the shirts, but from 1969 to 1973 they bore the letters "YCFC" running upwards from left to right, and from 1974 to 1978 the "Y-fronts" shirts included a stylised badge in which the "Y" and "C" were combined. The shirts bore a new crest in 1978, which depicted Bootham Bar, two heraldic lions and the club name in all-white, and in 1983 this was updated into a coloured version.
When Batchelor took over the club in 2002, the crest was replaced by one signifying the club's new name of "York City Soccer Club" and held a chequered flag motif. After Batchelor's one-year period at the club, the name reverted to "York City Football Club" and a new logo was introduced. It was selected following a supporters' vote held by the club, and the successful design was made by Michael Elgie. The badge features five lions, four of which are navy blue and are placed on a white "Y" shaped background. The rest of the background is red with the fifth lion in white, placed between the top part of the "Y".
Tables of kit suppliers and shirt sponsors appear below:
## Grounds
### Fulfordgate
York's first ground was Fulfordgate, which was located on Heslington Lane, Fulford in the south-east of York. With the ground not ready, York played their first two home matches at Mille Crux, Haxby Road, before they took to the field at Fulfordgate for a 4–1 win over Mansfield Town on 20 September 1922. Fulfordgate was gradually improved; terracing replaced banking behind one of the goals, the covered Popular Stand was extended to house 1,000 supporters, and a small seated stand was erected. By the time of York's election to the Football League in 1929, the ground's capacity was estimated at 17,000. However, attendances declined in York's second and third Football League seasons, and the directors blamed this on the ground's location. In April 1932, York's shareholders voted to move to Bootham Crescent, which had been vacated by York Cricket Club, on a 21-year lease. This site was located near the city centre, and had a significantly higher population living nearby than Fulfordgate.
### Bootham Crescent
Bootham Crescent was renovated over the summer of 1932; the Main and Popular Stands were built and terraces were banked up behind the goals. The ground was officially opened on 31 August 1932, for York's 2–2 draw with Stockport County in the Third Division North. It was played before 8,106 supporters, and York's Tom Mitchell scored the first goal at the ground. There were teething problems in Bootham Crescent's early years: attendances were not higher than at Fulfordgate in its first four seasons, and there were questions over the quality of the pitch. In March 1938 the ground's record attendance was set when 28,123 people watched York play Huddersfield Town in the FA Cup. The ground endured slight damage during the Second World War, when bombs were dropped on houses along the Shipton Street End. Improvements were made shortly after the war ended, including the completion of the concreting of the banking at the Grosvenor Road End.
With the club's finances in a strong position, York purchased Bootham Crescent for £4,075 in September 1948. Over the late 1940s and early 1950s, concreting was completed on the terracing in the Popular Stand and the Shipton Street End. The Main Stand was extended towards Shipton Street over the summer of 1955, and a year later a concrete wall was built at the Grosvenor Road End, as a safety precaution and as a support for additional banking and terracing. The ground was fitted with floodlights in 1959, which were officially switched on for a friendly against Newcastle United. The floodlights were updated and improved in 1980, and were officially switched on for a friendly with Grimsby Town. A gymnasium was built at the Grosvenor Road End in 1981, and two years later new offices for the manager, secretary, matchday and lottery manager were built, along with a vice-presidents' lounge.
During the early 1980s, the rear of the Grosvenor Road End was cordoned off as cracks had appeared in the rear wall, and this section of the ground was later segregated and allocated to away supporters. Extensive improvements were made over the mid 1980s, including new turnstiles, refurbished dressing rooms, a new referees' changing room and physiotherapist's treatment room, hospitality boxes built onto the Main Stand, and strengthened crash barriers. The David Longhurst Stand was constructed over the summer of 1991, and was named after the York player who collapsed and died from heart failure in a match a year earlier. It provided covered accommodation for supporters in what was previously the Shipton Street End, and was officially opened for a friendly match against Leeds United. In June 1995, new floodlights were installed, which were twice as powerful as the original floodlights.
In July 1999, York ceased ownership of Bootham Crescent when their real property assets were transferred to a holding company called Bootham Crescent Holdings. Craig announced the ground would close by 30 June 2002, and under Batchelor York's lease was replaced with one expiring in June 2003. In March 2003, York extended the lease to May 2004, and proceeded with plans to move to Huntington Stadium under the ownership of the Supporters' Trust. The club instead bought Bootham Crescent in February 2004, using a £2 million loan from the Football Stadia Improvement Fund (FSIF).
The ground was renamed KitKat Crescent in January 2005, as part of a sponsorship deal in which Nestlé made a donation to the club, although the ground was still commonly referred to as Bootham Crescent. The deal expired in January 2010, when Nestlé ended all their sponsorship arrangements with the club. There had not been any major investment in the ground since the 1990s, and it faced problems with holes in the Main Stand roof, crumbling in the Grosvenor Road End, drainage problems and toilet conditions.
### York Community Stadium
Per the terms of the FSIF loan, the club was required to have identified a site for a new stadium by 2007, and have detailed planning permission by 2009, to avoid financial penalties. York failed to formally identify a site by the end of 2007, and by March 2008 plans had ground to a halt. In May 2008, City of York Council announced its commitment to building a community stadium, for use by York and the city's rugby league club, York City Knights. In July 2010, the option of building an all-seater stadium at Monks Cross in Huntington, on the site of Huntington Stadium, was chosen by the council. In August 2014, the council named GLL as the preferred bidder to deliver an 8,000 all-seater stadium, a leisure complex and a community hub. Construction started in December 2017, and after a number of delays, was completed in December 2020. The club officially moved into the stadium in January 2021, with the first match being a 3–1 defeat to AFC Fylde on 16 February, which was played behind closed doors because of the COVID-19 pandemic in the United Kingdom. The stadium holds an all-seated capacity of 8,500.
## Supporters and rivalries
The club has a number of domestic supporters' groups, including the East Riding Minstermen, Harrogate Minstermen, York Minstermen, York City South and the Supporters' Trust. The now-disbanded group Jorvik Reds, who were primarily inspired by the continental ultras movement, were known for staging pre-match displays. The York Nomad Society is the hooligan firm associated with the club.
For home matches, the club produces a 60-page official match programme, entitled The Citizen. York have been the subject of a number of independent supporters' fanzines, including Terrace Talk, In The City, New Frontiers, Johnny Ward's Eyes, Ginner's Left Foot, RaBTaT and Y Front. The club mascot is a lion named Yorkie the Lion and he is known for performing comic antics before matches. John Sentamu, the Archbishop of York, became the club patron for 2007–08, having become a regular spectator at home matches as a season ticket holder.
The 2003 Football Fans Census revealed that no other team's supporters considered York to be among their club's main rivals. Traditionally, York's two main rivalries have been with Hull City and Scarborough. While York fans saw Hull as their main rival, this was not reciprocated by the East Yorkshire club, who saw Leeds United as their main rival. York also had a rivalry with Halifax Town, who were the club located closest to York when the two played in the Conference. A rivalry with Luton Town developed during the club's final years in the Conference as both clubs met regularly in crucial matches, accompanied by a series of incidents involving crowd trouble, contentious transfers, and complaints about the behaviour of directors.
## Records and statistics
The record for the most appearances for York is held by Barry Jackson, who played 539 matches in all competitions. Jackson also holds the record for the most league appearances for the club, with 428. Norman Wilkinson is the club's top goalscorer with 143 goals in all competitions, which includes 127 in the league and 16 in the FA Cup. Six players, Keith Walwyn, Billy Fenton, Alf Patrick, Paul Aimson, Arthur Bottom and Tom Fenoughty, have also scored more than 100 goals for the club.
The first player to be capped at international level while playing for York was Eamon Dunphy, when he made his debut for the Republic of Ireland against Spain on 10 November 1965. The most capped player is Peter Scott, who earned seven caps for Northern Ireland while at the club. The first York player to score in an international match was Anthony Straker, who scored for Grenada against Haiti on 4 September 2015.
York's largest victory was a 9–1 win over Southport in the Third Division North in 1957, while the heaviest loss was 12–0 to Chester City in 1936 in the same division. Their widest victory margin in the FA Cup is by six goals, which was achieved five times. These were 7–1 wins over Horsforth in 1924, Stockton Malleable in 1927 and Stockton in 1928, and 6–0 wins over South Shields in 1968 and Rushall Olympic in 2007. York's record defeat in the FA Cup was 7–0 to Liverpool in 1985.
The club's highest attendance at their former Fulfordgate ground was 12,721 against Sheffield United in the FA Cup on 14 January 1931, while the lowest was 1,500 against Maltby Main on 23 September 1925 in the same competition. Their highest attendance at Bootham Crescent was 28,123, for an FA Cup match against Huddersfield Town on 5 March 1938; the lowest was 608 against Mansfield Town in the Conference League Cup on 4 November 2008.
The highest transfer fee received for a York player is £950,000 from Sheffield Wednesday for Richard Cresswell on 25 March 1999, while the most expensive player bought is Adrian Randall, who cost £140,000 from Burnley on 28 December 1995. The youngest player to play for the club is Reg Stockill, who was aged 15 years and 281 days on his debut against Wigan Borough in the Third Division North on 29 August 1929. The oldest player is Paul Musselwhite, who played his last match aged 43 years and 127 days against Forest Green Rovers in the Conference on 28 April 2012.
## Players
### First-team squad
Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality. Squad correct as of 28 August 2023.
### Former players
### Clubmen of the Year
## Club officials
Ownership
- 394 Sports (51%) / York City Supporters' Society (49%)
Board
- Co-chairs: Julie-Anne Uggla and Matthew Uggla
- Chief executive: Alastair Smith
- Marketing director: Mike Brown
Management and backroom staff
- Caretaker manager: Tony McMahon
- Goalkeeping coach: Joe Stead
- Lead sports therapist: Lewis Bulmer
- Head of recruitment: David Stockdale
- Sports scientist: Paddy McLaughlin
### Former managers
## Honours
York City's honours include the following:
League
- Third Division (level 3)
- Promoted: 1973–74
- Fourth Division / Third Division (level 4)
- Champions: 1983–84
- Promoted: 1958–59, 1964–65, 1970–71
- Play-off winners: 1993
- Conference Premier (level 5)
- Play-off winners: 2012
- National League North (level 6)
- Play-off winners: 2022
Cup
- FA Trophy
- Winners: 2011–12, 2016–17
- Runners-up: 2008–09
|
934,404 |
Giant eland
| 1,169,194,023 |
An open-forest and savanna antelope of the family Bovidae
|
[
"Bovids of Africa",
"Mammals described in 1847",
"Mammals of Cameroon",
"Mammals of Chad",
"Mammals of South Sudan",
"Mammals of West Africa",
"Mammals of the Central African Republic",
"Taurotragus",
"Taxa named by John Edward Gray",
"Taxobox binomials not recognized by IUCN"
] |
The giant eland (Taurotragus derbianus), also known as the Lord Derby's eland and greater eland, is an open-forest and savanna antelope. A species of the family Bovidae and genus Taurotragus, it was described in 1847 by John Edward Gray. The giant eland is the largest species of antelope, with a body length ranging from 220–290 cm (87–114 in). There are two subspecies: T. d. derbianus and T. d. gigas.
The giant eland is a herbivore, eating grasses, foliage and branches. They usually form small herds consisting of 15–25 members, both males and females. Giant elands are not territorial, and have large home ranges. They are naturally alert and wary, which makes them difficult to approach and observe. They can run at up to 70 km/h (43 mph) and use this speed as a defence against predators. Mating occurs throughout the year but peaks in the wet season. They mostly inhabit broad-leafed savannas, woodlands and glades.
The giant eland is native to Cameroon, Central African Republic, Chad, Democratic Republic of the Congo, Guinea, Mali, Senegal, and South Sudan. It is no longer present in The Gambia, Ghana, Ivory Coast, and Togo. It can also be found in the Jos wildlife park in Nigeria, Guinea-Bissau, and Uganda. The subspecies have been listed with different conservation statuses by the International Union for Conservation of Nature (IUCN).
## Etymology
The scientific name of the giant eland is Taurotragus derbianus, derived from three words: tauros, tragos, and derbianus. Tauros is Greek for a bull or bullock. Tragos is Greek for a male goat, and refers to the tuft of hair that grows in the eland's ear which resembles a goat's beard.
The giant eland is also called "Lord Derby's eland" in honour of Edward Smith-Stanley, 13th Earl of Derby. It was due to his efforts that the giant eland was first introduced to England between 1835 and 1851. Lord Derby sent botanist Joseph Burke to collect animals, either alive or dead, from South Africa for his museum and menagerie. The first elands introduced in England were a pair of common elands, and what would later be identified as a giant eland bull. The details were recorded in Smith-Stanley's privately printed work, Gleanings from the Menagerie at Knowsley Hall. The Latin name indicates that it "belonged to" (given by the suffix -anus) Derby, hence derbianus.
Although the giant eland is somewhat larger than the common eland, the epithet 'giant' actually refers to its large horns. The name 'eland' is Dutch for "elk" or "moose". It has a Baltic source similar to the Lithuanian élnis, which means "deer". It was borrowed earlier as ellan (French) in the 1610s or elend (German).
## Taxonomy
The giant eland was first described in 1847 by John Edward Gray, a British zoologist, who called it Boselaphus derbianus. At that time, it was also called the 'black-necked eland' and Gingi-ganga.
The giant eland is placed in the genus Taurotragus of the family Bovidae. Giant elands are sometimes considered part of the genus Tragelaphus on the basis of molecular phylogenetics, but are usually categorized as Taurotragus, along with the common eland (T. oryx). Together with the bongo, the giant eland and common eland are the only antelopes in the tribe Tragelaphini to be given a generic name other than Tragelaphus. Although some authors, like Theodor Haltenorth, regarded the giant eland as conspecific with the common eland, they are usually considered two distinct species.
Two subspecies of giant eland have been recognized:
## Description
The giant eland is a spiral-horned antelope. Despite its common name, this species broadly overlaps in size with the common eland (Taurotragus oryx). However, the giant eland is somewhat larger on average than the common eland and is thus the largest species of antelope in the world. They are typically between 219 and 291 cm (7.19 and 9.55 ft) in head-and-body length and stand approximately 128 to 181 cm (4.20 to 5.94 ft) at the shoulder. Giant elands exhibit sexual dimorphism, as males are larger than females. The males weigh 400 to 1,200 kg (880 to 2,650 lb) and females weigh 300 to 600 kg (660 to 1,320 lb). The tail is long, having a dark tuft of hair, and averages 91 cm (36 in) in length. The life expectancy of giant elands is up to 25 years.
The smooth coat is reddish-brown to chestnut, usually darker in males than females, with 8–12 well-defined vertical white stripes on the torso. The colour of the male's coat darkens with age. According to zoologist Jakob Bro-Jørgensen, the colour of the male's coat can reflect the levels of androgen, a male hormone, which is highest during rutting. Comparing the subspecies, T. d. derbianus is characterised by 15 body stripes, smaller size, and a rufous colour, while T. d. gigas is larger, a sandy colour, and has 12 body stripes.
A crest of short black hair extends down the neck to the middle of the back, and is particularly prominent on the shoulders. The slender legs are slightly lighter on their inner surfaces, with black and white markings just above the hooves. There are large black spots on the upper forelegs. The bridge of the nose is charcoal black, and there is a thin, indistinct tan-coloured line, which is the chevron, between the eyes. The lips are white, as are several dots along the jawline. A pendulous dewlap, larger in males than females, originates from between the jowls and hangs to the upper chest when they reach sexual maturity, with a fringe of hair on its edge. The large ears of the giant eland serve as signaling devices. Giant elands have comparatively longer legs than the common eland, as well as much brighter black and white markings on the legs and pasterns.
Both sexes have tightly spiraled, 'V'-shaped horns. They can be up to 124 cm (4.07 ft) long on males and 67 cm (2.20 ft) on females. Males have horns that are thicker at the ends, longer, and more divergent than those of females. These features of the horns suggest that the giant eland evolved from an ancestor with true display horns.
### Parasites
Fecal studies of the western giant eland revealed the presence of a newly found species Eimeria derbani, of genus Eimeria, which consists of Apicomplexan parasites. The sporulation lasted for two days at a temperature of 23 °C (73 °F). The species has been differentiated from E. canna and E. triffittae, which parasitize the common eland (T. oryx). The giant eland is also parasitised by Carmyerius spatiosus (a trematode species), Taenia crocutae and T. hyaennae (two tapeworm species).
## Genetics and evolution
The giant eland has 31 male chromosomes and 32 female chromosomes. In a 2008 phylogenomic study of spiral-horned antelopes, chromosomal similarities were observed between cattle (Bos taurus) and eight species of spiral-horned antelopes, namely: nyala (Tragelaphus angasii), lesser kudu (T. imberbis), bongo (T. eurycerus), bushbuck (T. scriptus), greater kudu (T. strepsiceros), sitatunga (T. spekei), giant eland and common eland (T. oryx). Chromosomes involved in centric fusions in these species were analysed using a complete set of cattle painting probes generated by laser microdissection. The study confirmed the presence of the chromosome translocation known as Robertsonian translocation (1;29), a widespread evolutionary marker common to all known tragelaphid species.
An accidental mating between a male giant eland and a female kudu produced a male offspring, but it was azoospermic. Analysis showed that it completely lacked germ cells, which produce gametes. Still, the hybrid had a strong male scent and exhibited male behaviour. Chromosomal examination showed that chromosomes 1, 3, 5, 9, and 11 differed from the parental karyotypes. Notable mixed inherited traits were pointed ears like the eland's, but a bit widened like kudu's. The tail was half the length of that of an eland with a tuft of hair at the end as in kudu.
Previous genetic studies of African savanna ungulates revealed the presence of a long-standing Pleistocene refugium in eastern and southern Africa, which also includes the giant eland. The common eland and giant eland have been estimated to have diverged about 1.6 million years ago.
## Habitat and distribution
Giant elands live in the broad-leafed savanna, woodlands, and glades of central and western Africa, which correspond to the two subspecies. They also live in forests as well as on the fringes of deserts. The giant elands can also live in deserts, as they produce very dry dung. They are found in South Sudan and Central African Republic into northern Cameroon and southern Chad.
They inhabit places near hilly or rocky landscapes and those with water sources nearby. Science author Jonathan Kingdon had thought the giant elands lived only in woodlands of Isoberlinia doka, an African hardwood tree. The giant eland is adapted to these broad-leafed, deciduous Isoberlinia woodlands. Recent studies proved that they also inhabit woodlands with trees of the genera Terminalia, Combretum, and Afzelia.
In the past, giant elands occurred throughout the relatively narrow belt of savanna woodland that extends across West and Central Africa from Senegal to the Nile. Today they are conserved in national parks and reserves, and occur mostly in Senegal. The western giant eland is largely restricted to Niokolo-Koba National Park in Senegal. The eastern giant eland is found in several reserves, for example in Bénoué National Park, Faro National Park and Bouba Njida National Park in Cameroon and in Manovo-Gounda St. Floris National Park in the Central African Republic. They are also kept in captivity.
## Ecology and behaviour
Primarily nocturnal, giant elands have large home ranges and seasonal migration patterns. They form separate groups of males and of females and juveniles. Adult males mainly remain alone, and often spend time with females for an hour to a week. A gregarious species, giant eland herds usually consist of 15–25 animals (sometimes even more) and do not disband during the wet season, suggesting that social rather than ecological factors are responsible for herding. During the day, herds often rest in sheltered areas. As many other animals do, giant elands scrape mineral lick sites with the help of horns to loosen soil.
Giant elands are alert and wary, making them difficult to approach and observe or to hunt. If a bull senses danger, he will give deep-throated barks while leaving the herd, repeating the process until the whole herd is aware of the danger. Giant elands can move quickly, running at over 70 km/h (43 mph), and despite their size are exceptional jumpers, easily clearing heights of 1.5 m (4.9 ft). Their primary predators are the lion, Nile crocodile and spotted hyena, while young, sick, and occasionally adult individuals may be vulnerable to leopards, cheetahs and African wild dogs. Their large size makes them a substantial meal for predators. However, they are not easily taken by any predator, especially the heavier and larger horned bulls, which can be a dangerous adversary even for a lion pride.
### Diet
Primarily a herbivore, the giant eland eats grasses and foliage, as well as other parts of a plant. In the rainy season, they browse in herds and feed on grasses. They can eat coarse, dry grass and weeds if nothing else is available. They eat fruits too, such as plums. A study in South Africa showed that an eland's diet consists of 75% shrubs and 25% grasses, with highly varying proportions. They often use their long horns to break off branches.
As they need a regular intake of water in their diet, they prefer living in places with a nearby water source. However, some adaptations help them to survive even when water is scarce by maintaining a sufficient quantity of it in their bodies. They produce very dry dung compared to domestic cattle. In deserts, they can get their required water from the moisture of succulent plants. Another way in which they conserve water is by resting in the day and feeding at night, so that they minimize the water quantity required to cool themselves.
Several studies have investigated the eland's diet. A study of giant elands in the Bandia Natural reserve in Senegal revealed that the most important and most preferred plants were various species of Acacia, Terminalia and Combretum, along with Azadirachta indica, Daniellia oliveri, Gymnosporia senegalensis, Philenoptera laxiflora (syn. Lonchocarpus laxiflorus), Prosopis africana, Pterocarpus erinaceus, Saba senegalensis and pods of Piliostigma thonningii. Another study in Sudan showed that western giant elands preferred Cassia tora, which was the most abundant legume in the region.
In 2010, histological analysis of the feces of western giant elands was done in the Niokolo-Koba National Park and in the Bandia National Reserve. In both studies, leaves, shoots of woody plants, and fruits were found to be the three major components. The other components that appeared in minor proportions were forbs and grasses, generally below five percent of the mean fecal volume. They were seen eating most foliage from Boscia angustifolia, Grewia bicolor, Hymenocardia acida, and Ziziphus mauritiana, and the fruits of Acacia and Strychnos spinosa. In the Bandia Reserve, differences in diet were marked among age classes. The conclusions were that in the dry season the eland was a pure browser, consuming grasses in small amounts.
### Reproduction
Mating occurs throughout the year, but peaks in the wet season. Females reach sexual maturity at about two years, and males at four to five years. A female can remain in estrus for three days, and the estrous cycle is 21–26 days long. As in all antelopes, mating occurs at a time of food abundance. In some areas distinct breeding seasons exist. In southern Africa, females have been seen giving birth from August to October, and are joined by the males from late October to January. In Zambia calves are born in July and August.
Fights occur for dominance, in which the bulls lock horns and try to twist the necks of their opponents. As an act during rut, the males rub their foreheads in fresh urine or mud. They also use their horns to thresh and throw loose earth on themselves. The horns of older males get worn out due to rubbing them on tree barks. Expressions of anger are not typically observed. Dominant males may mate with multiple females. The courtship is brief, consisting of a penetration and one ejaculatory thrust.
After the courtship, the gestational period begins, which lasts nine months. The delivery usually takes place in the night, after which the mother ingests the afterbirth. Generally one calf is delivered, and it remains with its mother for six months. Lactation can last for four to five months. After the first six months the young eland might join a group of other juveniles.
A Senegalese study of the suckling behaviour of giant eland and common eland calves aged about one to five months determined that suckling bouts increased with the age of the calves. No other change occurred in the farmed common eland calves, but among the giant eland calves, males were found to suckle more than females, and shorter suckling bouts were observed in primiparous mothers than in multiparous ones. The results suggest that Derby elands that lived in their natural habitat adjusted their maternal behaviour so as to be able to readily maintain a vigilant lookout for predators and other similar risks. In contrast, the farmed common elands behaved as in the conditions of captivity, without predators.
## Populations
The eastern giant eland ranged from Nigeria, through Cameroon, Chad, the Central African Republic, and the Democratic Republic of the Congo (formerly Zaire) to Sudan and Uganda in 1980. But the rinderpest outbreak (1983–1984) caused a devastating 60–80% decline in the populations. The eastern giant eland is still found in extensive areas, though it has a decreasing population trend. Because of this, it is listed as 'Vulnerable' by the IUCN. Large parts of its habitat are uninhabited and not expected to be occupied by human settlement, particularly in northern and eastern Central African Republic and south-western Sudan, where their population has notably increased. According to Rod East, 15,000 eastern giant elands existed as of 1999, of which 12,500 were in the Central African Republic. The remaining areas are often disturbed by wars and conflicts—activities that can lead to a rapid decline in the eastern giant eland's numbers if not controlled.
The western giant eland is in a more dangerous situation, being listed as 'Critically Endangered' by the IUCN. Today they mostly occur in Senegal. In 1990, populations were about 1000, of which 700 to 800 were found in the Niokolo-Koba National Park and the rest in the region around the Falémé River. As of 2008, a population of less than 200 individuals occur there, and only a few elands exist in neighboring countries.
A study of the long-term conservation strategy of the western giant eland was done in the Bandia and Fathala reserves, using demographic and pedigree data based on continuous monitoring of reproduction during 2000 to 2009. In 2009, the semi-captive population was 54 individuals (26 males, 28 females). The female breeding probability was 84%, and the annual population growth was 1.36. As the population grew, the elands were divided into five groups for observation. Although the mean inbreeding level reached 0.119, a potential gene diversity (GD) of 92% was retained. The authors concluded that with the introduction of new founders, the GD could be greatly improved in the next 100 years, and suggested that with proper management of the semi-captive population, the numbers of the western giant eland could be increased.
## Interaction with humans
### Threats and conservation
The major threats to the western giant eland population are overhunting for its rich meat and habitat destruction caused by the expansion of human and livestock populations. The eastern giant eland population is also declining for similar reasons, with natural causes like continued droughts and competition from domestic animals contributing to the reduction in numbers. Populations of the eastern giant eland had already gone down due to the rinderpest attacks. The situation was worse during World War II and other civil wars and political conflicts that harmed their natural habitats.
The giant eland is already extirpated in The Gambia, Ghana, Ivory Coast, and Togo. The western giant eland was once reported in Togo, but is believed to have been confused with the bongo (Tragelaphus eurycerus). In 1970, it was reported to have been eliminated in Uganda during military operations. Its presence is uncertain in Guinea-Bissau and Nigeria.
Today the western giant eland is conserved in the Niokolo-Koba National Park and the Faheme Hunting Zone in Senegal. Field studies have proved that the Niokolo-Koba National Park is ecologically suitable for the giant eland. As observed in the 2000 census of the park, the number of deaths in a decade was only 90 to 150.
The eastern giant eland is conserved in the Faro National Park, Bénoué National Park, Bouba Njida National Park, Bamingui-Bangoran National Park and Manovo-Gounda St. Floris National Park. They are bred in captivity in the Bandia Reserve and Fathala Reserve in Senegal, and at White Oak Conservation in Yulee, Florida, United States. Eland born at White Oak have been sent to other countries, including Costa Rica and South Africa, to initiate breeding programs.
### Uses
Giant elands give large quantities of tender meat and high-quality hides even if fed a low-quality diet. These are game animals and are also hunted for trophies. Their milk is richer in proteins and milkfat than that of dairy cows, which may explain the quick growth of eland calves. Eland's milk has about triple the fat content and twice the protein of a dairy cow's milk. Its docility and profitable characteristics have made it a target of domestication in Africa and Russia, and have also made it a target of hunting.
Many people prefer to tame and raise eland rather than cattle due to their numerous benefits. Elands can survive on scarce water, which is a great advantage over domestic cattle. They can also eat coarse grasses, and can even manage to ingest some poisonous plants that can prove fatal for cattle. They are also immune to some diseases to which cattle may succumb.
|
51,375,515 |
Long Island Tercentenary half dollar
| 1,154,024,836 |
Commemorative half dollar in 1936
|
[
"1936 establishments in the United States",
"Early United States commemorative coins",
"Fifty-cent coins",
"Flatlands, Brooklyn",
"Long Island",
"Native Americans on coins",
"New York (state) historical anniversaries",
"Ships on coins",
"Tricentennial anniversaries"
] |
The Long Island Tercentenary half dollar was a commemorative half dollar struck by the United States Bureau of the Mint in 1936. The obverse depicts a male Dutch settler and an Algonquian tribesman, and the reverse shows a Dutch sailing ship. It was designed by Howard Weinman, the son of Mercury dime designer Adolph A. Weinman.
The Long Island Tercentenary Committee wanted a coin to mark the 300th anniversary of the first European settlement there, at modern Flatlands, Brooklyn, New York City. The authorizing bill passed through Congress without opposition, though it was amended in the Senate to add protections against past commemorative coin abuses, such as low mintages or an assortment of varieties. On April 13, 1936, the bill became law with the signature of President Franklin D. Roosevelt.
The coins were not struck until August of that year, too late for the anniversary celebrations, which had been held in May. The coins were placed on sale to the public, and four-fifths of the 100,000 coins sent to the Tercentenary Committee were sold, a result deemed successful given the large issue and a lack of advertising. The remainder was sent back to the Philadelphia Mint for redemption and melting. The half dollar catalogs for up to the low hundreds of dollars.
## Background and inception
The first European known to have sighted Long Island, now part of New York State, was Henry Hudson in 1609. At the time of what was later deemed its discovery, 13 tribes of Native Americans inhabited the island. The first European settlement, on Jamaica Bay, was by the Dutch. The first deed for land on Long Island was dated June 16, 1636, for land conveyed to two Dutch colonists, Andries Hudde and Wolphert Gerretse. The transfer was facilitated by the Canarsee Sachem Penhawitz, and under Hudde and Gerretse the land became the Achtervelt farm and then the Town of New Amersfoort, and later modern Flatlands, Brooklyn. The Dutch called the island as a whole Lange Eylandt; after the British took possession of the area in the 1660s, they attempted to rename it Nassau, but this never became popularly used.
In 1936, commemorative coins were not sold by the government—Congress, in authorizing legislation, usually designated an organization that had the exclusive right to purchase them at face value and vend them to the public at a premium. In the case of the Long Island half dollar, the responsible group was the Long Island Tercentenary Committee, acting through either its president or its secretary. That committee was formed to organize the anniversary celebrations to take place on Long Island.
## Legislation
The political influence of the members of the Tercentenary Committee was sufficient to get a bill into Congress. Introduced into the House of Representatives by John J. Delaney of New York on February 20, 1936, the bill called for a minimum of 100,000 half dollars to be struck (no maximum was stated). The bill was referred to the Committee on Coinage, Weights, and Measures. That committee reported back on February 28, 1936, through Andrew Somers of New York, recommending passage. Somers was the committee chair; both he and Delaney represented Brooklyn. John J. Cochran of Missouri brought the bill to the House floor on March 6, saying he was doing so on behalf of Somers and Delaney, and on his motion the bill passed without debate or opposition.
In the Senate, the bill was referred to the Committee on Banking and Currency; it was one of several commemorative coin bills to be considered on March 11, 1936, by a subcommittee led by Colorado's Alva B. Adams. Senator Adams had heard of the commemorative coin abuses of the mid-1930s, with issuers increasing the number of coins needed for a complete set by having them issued at different mints with different mint marks; authorizing legislation placed no prohibition on this. Lyman W. Hoffecker, a Texas coin dealer and official of the American Numismatic Association, testified and told the subcommittee that some issues, like the Oregon Trail Memorial half dollar, first struck in 1926, had been issued over the course of years with different dates and mint marks. Other issues had been entirely bought up by single dealers, and some low-mintage varieties of commemorative coins were selling at high prices. The many varieties and inflated prices for some issues that resulted from these practices angered coin collectors trying to keep their collections current.
On March 26, the committee, through Senator Adams, issued a report recommending the bill pass once amended. That amendment required that the coins be struck at only one mint, that they only be issued for a year and bear the date of authorization (1936) regardless of when coined. A minimum of 5,000 and a maximum of 100,000 were to be issued. Adams recommended that these provisions appear in future commemorative coin bills. The Senate considered the bill on March 27, the last in a series of six commemorative coin bills being considered by that body, and like the others, the Long Island bill was amended and passed without debate or dissent.
As the two houses had passed different versions, the bill returned to the House of Representatives, where, on March 30, Cochran asked that the House agree to the Senate amendment. Bertrand H. Snell of New York requested an explanation of the Senate amendment; he was told by Cochran that it was a strengthening of the language to ensure there was no expense to the federal government. The House agreed to the amendment and passed the bill without dissent. It was passed into law, authorizing 100,000 half dollars, with the signature of President Franklin D. Roosevelt on April 13, 1936. The provision that the coins only be struck at a single mint and the one requiring that all coins bear the same date were firsts for commemorative coin legislation.
## Preparation
At the recommendation of the federal Commission of Fine Arts (CFA), the Tercentenary Committee engaged sculptor Howard Kenneth Weinman, the son of sculptor Adolph Alexander Weinman. The CFA was responsible for making recommendations on the artistic merit of public artworks, including coins. The elder Weinman was known for designing the Mercury dime and Walking Liberty half dollar and wrote to CFA secretary H.R. Caemmerer on April 2, 1936, relating that Howard Weinman had been hired, and asking for details of the procedure for commemorative coin approval. Caemmerer replied the following day, stating that the designs should be sent to the Philadelphia Mint once the authorization bill had been given final approval.
On April 19, Howard Weinman wrote to Caemmerer, stating that due to the Tercentenary Committee having gotten off to a late start, only preliminary sketches had been made, and asking at what stage the designs needed to be submitted for approval. Caemmerer replied on the 21st, stating that for purposes of CFA approval, it would be best to send copies of the photographs of the completed plaster model to himself, and to Lee Lawrie, sculptor-member of the CFA. Caemmerer also suggested that Howard Weinman consult his father as to the procedure for submission to the Mint, as Adolph Weinman had done it many times. By May, Howard Weinman had completed his models. Lawrie had a few minor suggestions, but was greatly pleased with the work. The CFA concurred on the 26th, having some additional suggestions, such as placing HALF DOLLAR under the ship on the reverse (something not adopted).
After the CFA granted preliminary approval, Adolph Weinman met with the Director of the Mint, Nellie Tayloe Ross, and with the Assistant Director, Mary Margaret O'Reilly, to come to terms on the recommended changes. For example, to ensure greater clarity, the legend IN GOD WE TRUST, appearing incuse, graven into the surface beneath the ship, was to be engraved on the master die directly by John R. Sinnock, the Chief Engraver. When Howard Weinman wrote to Caemmerer on June 22, he stated that he was working in haste, so that the coins would be available as quickly as possible. The Commission gave its approval; Howard Weinman's models were reduced to coin-sized hubs by the Medallic Art Company of New York City.
## Design
The obverse of the half dollar depicts jugate busts of a Dutch settler and a member of the Algonquin tribe of Native Americans. Howard Weinman wrote of this, "I shall try to infer by the harmonious balance of the heads the peaceful settlement of the island by the Dutch". Texas coin dealer B. Max Mehl described the obverse in 1937 as "conjoined portraits of two rather tough looking gentlemen, but so far I have been unable to ascertain just who they are or who they are supposed to represent". Other critics have compared the two heads, with their lantern jaws and prominent noses, to two boxers about to square off. Also present on the obverse are some of the inscriptions required by law, LIBERTY and E PLURIBUS UNUM.
The reverse depicts a Dutch three-masted ship sailing to the right. The design resembles the depiction of Henry Hudson's ship Halve Maen on the 1935 Hudson Sesquicentennial half dollar but is more stylized. In the waves the ship rides over is the text, IN GOD WE TRUST, with the name of the country and the denomination of the coin surrounding the scene, together with the legend, LONG ISLAND TERCENTENARY.
David Bullowa, in his 1938 volume on commemorative coins, noted that the designs had generally been criticized, as a number of previous commemoratives had borne busts in a similar manner to the Long Island piece, and others had depicted ships. Art historian Cornelius Vermeule, in his volume on American coins and medals, took a mixed view of the Long Island half dollar, "The Dutch pioneer looks like a character out of Shakespeare (a peasant part), and the Indian could easily play professional football any Sunday afternoon across the United States. Otherwise, beyond those cliches brought about in an effort to modernize traditionally ideal subjects, the ship has a correct amount of simplicity, and the lettering seems to fade into the background in a satisfying fashion."
## Distribution
A total of 100,053 Long Island Tercentenary half dollars were struck at the Philadelphia Mint during August 1936, with 53 pieces to be retained at the mint to be available for inspection and testing at the 1937 meeting of the annual Assay Commission. The issuance of the half dollar made the Weinmans the second parent and child to have both designed U.S. coins, the first having been Chief Engravers William Barber (1869–1879) and Charles Barber (1880–1917) of the U.S. Mint. Advance sales accounted for almost 19,000 coins. By the time of issue, the celebrations on Long Island had passed, having been held under the auspices of the Tercentenary Committee in May. Arlie Slabaugh wrote in his book on commemoratives, "Even so the Long Island Tercentenary Committee did a surprisingly good job of selling these through local banks". After the coins were delivered from the mint to the National City Bank in Brooklyn, they were sold to the public at various places for \$1 each. The office of the Brooklyn Eagle made 50,000 coins available. In addition, 25,000 coins were offered for sale in Queens, 15,000 in Nassau County and 10,000 in Suffolk County. They were for sale at Brooklyn department stores. Despite arriving late, the coins sold relatively well, with 81,826 coins out of 100,000 disposed of despite almost no advertising.
In August 1936, examples of the new half dollar were presented by the Tercentenary Commission to President Roosevelt. Sales continued through the first few months of 1937. As was the norm with other early commemoratives, the remaining unsold coins were returned to the mint for melting. Unlike other commemorative coins of the 1930s, there were no complaints about the manner of distribution, as anyone who wanted one could buy one; nor was there any profiteering. The coin was purchased both by the coin collecting community and by residents of Long Island.
## Collecting
As the coins sold well, the Long Island Tercentenary half dollar is often considered one of the more common early commemoratives. However, few coins survive in gem condition. Problems commonly encountered include wear or bag marks (abrasions) on the high points of the coin, such as on the cheek of the Dutch settler on the obverse and the sails of the ship on the reverse. One reason for this is that the coin design, especially on the reverse, is relatively flat, thus making it prone to bag marks. Other pieces were handled carelessly while in the hands of the public. Marty Rubenstein, a local coin dealer, stated, "Long Islands don't generally come nice."
The Long Island Tercentenary half dollar sold at retail for about \$1.25 in uncirculated condition in 1940. It thereafter increased in value, selling for about \$4 by 1955, and \$140 by 1985. The deluxe edition of R. S. Yeoman's A Guide Book of United States Coins, published in 2018, lists the coin for between \$85 and \$450, depending on condition. An exceptional specimen sold for \$9,988 in 2015. Harry Miller, a Patchogue, Long Island, coin dealer, stated in 2002, "I find most collectors on Long Island want to have one even if they don't specialize in commemoratives".
|
5,930,321 |
Ninja Gaiden (NES video game)
| 1,169,284,621 |
1988 video game
|
[
"1988 video games",
"Java platform games",
"Mobile games",
"Ninja Gaiden games",
"Nintendo Entertainment System games",
"Nintendo Switch Online games",
"Platform games",
"Retrofuturistic video games",
"Side-scrolling video games",
"Single-player video games",
"Tecmo games",
"TurboGrafx-16 games",
"Video games developed in Japan",
"Video games scored by Keiji Yamagishi",
"Video games set in 1988",
"Video games set in South America",
"Video games set in the 1980s",
"Video games set in the United States",
"Virtual Console games for Nintendo 3DS",
"Virtual Console games for Wii",
"Virtual Console games for Wii U"
] |
Ninja Gaiden, released in Japan as Ninja Ryūkenden and as Shadow Warriors in Europe, is an action-platform video game developed and published by Tecmo for the Nintendo Entertainment System. Its development and release coincided with the beat 'em up arcade version of the same name. It was released in December 1988 in Japan, in March 1989 in North America, and in August 1991 in Europe. It has been ported to several other platforms, including the PC Engine, the Super NES, and mobile phones.
Set in a retro-futuristic version of 1988, the story follows a ninja named Ryu Hayabusa as he journeys to America to avenge his murdered father. There, he learns that a person named "the Jaquio" plans to take control of the world by unleashing an ancient demon through the power contained in two statues. Featuring side-scrolling platform gameplay similar to Castlevania, players control Ryu through six "Acts" that comprise 20 levels; they encounter enemies that must be dispatched with Ryu's katana and other secondary weapons.
Ninja Gaiden has an elaborate story told through anime-like cinematic cutscenes. It received extensive coverage and won several awards from video gaming magazines, while criticism focused on its high difficulty, particularly in the later levels. Director Hideo Yoshizawa named Ninja Gaiden as his most commercially successful project. The game continued to receive acclaim from print and online publications, being cited as one of the greatest video games of all time. It was novelized as part of the Worlds of Power game adaptations written by Seth Godin and Peter Lerangis. A manga-styled comic book, Ninja Gaiden '88, was published by Dark Horse Comics and continued the narrative of the five original games.
## Plot
Ninja Gaiden features a ninja named Ryu Hayabusa who seeks revenge for the death of his father and gradually finds himself involved in a sinister plot that threatens the entire world. The story opens with Ryu's father Ken seemingly killed in a duel by an unknown assailant. After the duel, Ryu finds a letter written by Ken which tells him to find an archeologist named Walter Smith in America. Before Ryu can find Walter, he is shot and kidnapped by a mysterious young woman; she hands him a demonic-looking statue before releasing him. Ryu then finds Walter who tells him of the demon statues he and Ken had found in the Amazon ruins. Walter tells Ryu of an evil demon named Jashin, that "SHINOBI" defeated, whose power was confined into "Light" and "Shadow" demon statues. Ryu shows Walter the "Shadow" demon statue given to him by the woman, but during their conversation, a masked figure, named Basaquer, suddenly breaks into the cabin and steals the Shadow statue. Ryu gives chase, defeats the masked figure, and retrieves the statue; but when he returns he finds that Walter is dying, and the Light statue is missing. Right after Walter dies, three armed men confront Ryu and tell him to come with them.
Ryu is taken to an interrogation room, where he meets Foster, head of the Special Auxiliary Unit of the Central Intelligence Agency. Foster tells him about a more-than-2000-year-old temple Walter discovered in some ruins in the Amazon. He continues saying one day Walter mysteriously sealed the ruins, and nobody has since ventured near them. Foster explains to Ryu they have been monitoring the activity of someone named Guardia de Mieux, also known as "the Jaquio", who recently moved into the temple where the body of the demon was confined. Using the statues, the Jaquio plans to awaken Jashin and use it to destroy the world. Foster asks Ryu to go to the temple and eliminate him. After making it into the temple, Ryu discovers the Jaquio is holding captive the girl who handed him the "Shadow" statue earlier. He orders Ryu to give up the demon statue after threatening the girl's life. Ryu is then dropped from sight through a trapdoor and into a catacomb.
After fighting his way back to the top of the temple, Ryu encounters Bloody Malth, whom he defeats. As he is dying, Malth reveals that he was the one who dueled with Ryu's father, that his father is still alive, and Ryu will meet him as he presses onward. When he reaches the temple's inner chambers, he discovers his father was not killed, but was possessed by an evil figure instead. He destroys the evil figure, which releases Ken from its hold. Jaquio, enraged by Ken's release from his possession, shows himself; he tries to kill Ryu immediately with a fiery projectile, but Ken throws himself in front of Ryu and takes the hit. Jaquio is killed by Ryu during the ensuing fight, but then a lunar eclipse occurs, causing the demon statues to transform into Jashin. After Ryu defeats the demon, Ken tells him he does not have much longer to live because of Jaquio's attack. He tells Ryu to leave him behind in the temple while it collapses, and to take the young woman with him. Afterwards, Foster, communicating via satellite, orders the girl to kill Ryu and steal the demon statues; she chooses to be with Ryu instead of carrying out the order. The two kiss, and the girl tells Ryu her name, Irene Lew; they watch as the sun rises.
## Gameplay
Ninja Gaiden is a side-scrolling platform game in which the player takes control of the player character, Ryu Hayabusa, and guides him through six "Acts" that comprise 20 levels. A life meter represents Ryu's physical strength, which decreases when he is hit by an enemy or projectile. A "life" is lost when the life meter is depleted entirely, when Ryu falls off the screen, or when the timer runs out. A game over screen appears when all lives are lost; however, the player may restart the level where this occurred by continuing. At the end of every Act, the player fights a boss; bosses have life meters depleted by player attacks. When its life meter is depleted entirely, a boss is defeated. Each boss is one of the "Malice Four"—evil underlings of the Jaquio, the game's main antagonist. The Malice Four consist of Barbarian, Bomberhead, Basaquer, and their leader Bloody Malth.
Players attack enemies by thrusting at them with Ryu's Dragon Sword—a katana-like sword passed down by the Hayabusa clan for generations. They can also use secondary weapons that consume Ryu's "spiritual strength". These include throwing stars, "windmill throwing stars", which cut through enemies and return like boomerangs, a series of twirling fireballs named "the art of the fire wheel", and a mid-air slashing technique called the "jump & slash". When Ryu's spiritual strength meter is too low, the player cannot use secondary weapons. Players can replenish Ryu's spiritual strength by collecting red and blue "spiritual strength" items found in lamps and lanterns. Other items found along the way include hourglasses that freeze all enemies and projectiles for five seconds, bonus point containers, potions that restore six units of physical strength, "invincible fire wheels" that make Ryu temporarily invincible to attacks and 1-ups.
Ryu can jump on and off ladders and walls, and by using the directional pad, he can climb up or down ladders. Ryu can spring off walls by holding the directional pad in the opposite direction he is facing and pressing the jump button. He cannot attack while on walls or ladders. Players can use this technique to get Ryu to climb up spaces between walls and columns by holding down the jump button and alternating between left and right on the directional pad. He can also climb a single wall vertically by springing off it and then quickly pressing the directional pad back towards the wall.
## Development
Tecmo first announced the Famicom version of the game in the January 15, 1988, issue of Family Computer Magazine under the title Ninja Gaiden (which would later be used for the game's American version). The game was released in Japan on December 9, 1988, under its final title Ninja Ryūkenden, which roughly translates to Legend of the Dragon Sword. It was developed and released around the same time as the beat 'em up arcade version of the same name; neither of the games were ports of each other but were parallel projects developed by different teams. According to developer Masato Kato (listed as "Runmaru" in the game's credits), the term "ninja" was gaining popularity in North America, prompting Tecmo to develop a ninja-related game for the NES at the same time the arcade version of Ninja Gaiden was being developed. Hideo Yoshizawa (listed as "Sakurazaki") developed and directed the NES version. Ninja Gaiden was Masato Kato's first full-time project as a video game designer, and he contributed the game's graphics, animations and instruction manual illustrations.
Drawing inspiration from the Mario series, Yoshizawa kept the same title but changed everything else; it became a platform game as opposed to a beat 'em up such as Double Dragon; the gameplay was modeled after Konami's Castlevania, with Ryu being equipped with a katana-like Dragon Sword, shurikens, and ninpo techniques such as fire wheels. In designing the protagonist Ryu Hayabusa, the development team wanted him to be unique from other ninjas. They designed him with a ninja vest to place emphasis on his muscles, and they furnished him with a cowl that arched outward. They originally wanted to equip Ryu with sensors and a helmet with an inside monitor to check his surroundings, but that idea was scrapped. According to Kato, they used specific locations and environments to justify the need for having a ninja for a main character. A further concern, according to Yoshizawa, was to appeal to the gameplay-oriented expectations of Ninja Gaiden's target audience, mainly represented by experienced players who appreciated challenging game design. He recalled that during development, Tecmo adhered to "the philosophy that the user would throw a game away if it wasn't hard enough". As a result, Yoshizawa decided to give the game an overall high level of difficulty.
Yoshizawa placed greater emphasis on the story, unlike the arcade version, and wrote and designed a plot that included over 20 minutes of cinematic cutscenes—the first time an NES game contained such sequences. Yoshizawa stated that the adoption of this presentational style came from his earlier aspiration for a career in commercial filmmaking, which led him to seek an opportunity "to put in a movie somehow". His idea was to reverse the then-prevailing trend wherein the narrative aspects of contemporary NES games were undervalued by consumers with the inclusion of an interesting plot that could engage those players. Tecmo called the cutscene system "Tecmo Theater" where the game reveals the storyline between Acts through the use of animated sequences. They are used at the beginning of each Act to introduce new characters such as Irene Lew, Walter Smith, and the Jaquio. This feature uses techniques such as close-ups, alternate camera angles, differing background music, and sound effects to make the game more enjoyable for players. Unlike earlier titles such as Final Fantasy, the cutscenes consisted of large anime art on the top half of the screen with dialogue on the bottom half. This made the artistic style more reminiscent of other manga titles such as Lupin III and Golgo 13. Dimitri Criona, Tecmo USA's director of sales and marketing, said that console games had an advantage over arcade games in that they allowed the creation of a longer game and the inclusion of cutscenes, which Tecmo trademarked as "cinema screens". He noted console games required a different reward structure than arcade games. The game contains a feature that was originally a glitch but was left in the final game intentionally, according to Masato Kato; having lost to any of the game's last three bosses, the player is sent back to the beginning of the sixth act.
Translating the game's text from Japanese to English required reprogramming the game, and different companies handled this process in different ways. Tecmo's Japanese writers wrote rough translations in English and then faxed them to the American division. According to Criona, the American division would "edit it and put it back together, telling the story in a context that an American English speaker would understand. This would go back and forth several times". Moreover, the game's text was stored in picture files instead of raw computer text. Because of the NES's hardware limitations, the English text needed to be very clear and concise to fall within those limitations; many times, different words with the same meaning but with fewer characters had to be used. All symbols and objects were scrutinized by Nintendo of America, who had specific rules on what could be included for North American releases; for instance, any Satanic, Christian, or other religious, sexual, or drug-related references were not allowed.
## Marketing and release
Since the game's title was deemed too difficult for English audiences to read, it was renamed when it was released in Western markets. In early 1988 advertisements from Nintendo Fun Club News, Tecmo used Ninja Dragon as a tentative title for the U.S. release. They decided to use the title Ninja Gaiden (its original working title) when the game was released in the U.S. in March 1989. The title literally means "Ninja Side-Story", but the game was not intended as a spin-off of any prior work. According to an interview with developer Masato Kato, when deciding how to translate "Ryukenden" into English, the staff chose Ninja Gaiden "because it sounded cool". In Europe, the game was scheduled to be released in September 1990, but was delayed until September 1991. It was retitled as Shadow Warriors—just as Teenage Mutant Ninja Turtles was renamed Teenage Mutant Hero Turtles—as ninjas were considered a taboo subject in Europe. It was one of many ninja-related video games around the time, such as The Legend of Kage, Ninja Warriors, and Shinobi.
Upon Ninja Gaiden's North American release, Nintendo of America, whose play-testers liked the game and gave it high ratings, decided to help with its marketing. Nintendo's house organ Nintendo Power featured it prominently, giving the game strong publicity in 1989 and 1990; according to Criona, it did not take a lot of effort to market the game through the magazine, nor did Tecmo or Nintendo do much else to promote it. Ninja Gaiden received preview coverage in the January–February 1989 issue of Nintendo Power in its "Pak Watch" section. It "got the highest marks of any title ... [the magazine's staff had] seen in a long time" and was expected to quickly reach No. 1 on their "Player's Poll". The preview compared Ryu's ability to climb and spring off walls to the gameplay in Metroid. The game was featured on the cover of the magazine's March–April 1989 issue and was referenced in the following issue in a Howard and Nester comic strip. It was one of the featured games in both the March–April and May–June 1989 issues; both included a walkthrough up to the fifth Act, a review, and a plot overview. Underlining the game's difficulty, it appeared in several issues in the magazine's "Counselor's Corner" and "Classified Information" help sections.
The game was unveiled at the 1989 International Winter Consumer Electronics Show in Las Vegas. Its display featured a demo of the game and a live person dressed as a ninja. Tecmo predicted that the game would be the top-selling, third-party title for the NES.
Demand for the game eventually exceeded its supply. While Tecmo anticipated the game would be a hit, according to Kohler they did not realize at the time the impact it would have on the video game industry "with its groundbreaking use of cinematics". Yoshizawa would go on to direct the sequel Ninja Gaiden II: The Dark Sword of Chaos (1990) and remained as an executive producer for Ninja Gaiden III: The Ancient Ship of Doom (1991), while Masato Kato took over directing the game design.
### Ports
A PC Engine port of Ninja Ryūkenden was produced in 1992, published by Hudson Soft and released only in Japan. It features more colorful and detailed graphics, along with difficulty and gameplay tweaks and a different soundtrack. This version also supports three different language settings with Japanese, English and Chinese as the available options. However, the English translation used in this version differs from the one used in the earlier NES version.
Ninja Gaiden appeared as a remake of the Ninja Gaiden Trilogy compilation for the Super NES in 1995. Some reviewers appreciated the redrawn graphics and music in this version, but others found them to be an inadequate effort. Electronic Gaming Monthly reviewers compared it unfavorably to another updated NES remake, Mega Man: The Wily Wars; they called the version "an exact port-over with no noticeable enhancements in graphics, sound and play control". Along with the other two games in the Ninja Gaiden trilogy, the SNES version was featured as an unlockable game in the 2004 Xbox Ninja Gaiden game.
The NES version was released on Wii's Virtual Console on April 10, 2007, in Japan and on May 14 in North America. Europeans, Australians, and New Zealanders were able to purchase the game as part of "Hanabi Festival" on September 21. The PC Engine version was released for Virtual Console in Japan on April 21, 2009. The NES version was also released for the Nintendo 3DS Virtual Console, with an original release date set for November 8, 2012, but was delayed until December 13. The NES version was released on Wii U's Virtual Console in 2014. The game was also re-released as part of the NES Classic Edition dedicated console in November 2016, and for the Nintendo Switch in December 2018 as part of the NES: Nintendo Switch Online service.
## Related media
In July 1990 Scholastic Corporation published a novelization of Ninja Gaiden under the Worlds of Power series of NES game adaptations, created and packaged by Seth Godin under the pseudonym F. X. Nine. Godin and Peter Lerangis, under the pseudonym A. L. Singer, wrote the novelization. As with the other Worlds of Power books, the amount of violence present in the video game was toned down in the novelization, because Godin and Scholastic were concerned that some of the material in the video game was inappropriate for a young audience. The novel did not adhere strictly to the game's storyline; for instance, the ending was changed so that Ryu's father survived. Godin believed the revised ending was consistent with the Worlds of Power character. As real-life fathers, Godin and Lerangis were reluctant to leave Ryu fatherless. On the book's cover, otherwise a copy of the North American box art, the kunai held in Ryu's front hand was airbrushed out, leaving him prodding the air with an empty fist.
Pony Canyon released a soundtrack CD, Ninja Ryukenden: Tecmo GSM-1, in February 1989. The first half of the CD starts with an arranged medley of the game's music. It continues with enhanced versions of the game's music which used stereophonic sound and additional PCM channels. The rest of the CD features music from the arcade version. In 2017, Brave Wave Productions released a vinyl box set, Ninja Gaiden- the Definitive Soundtrack, mastered by original composer Keiji Yamagishi.
## Reception
### Critical reception
The game debuted at No. 3 on Nintendo Power's Top 30 list for July–August 1989, behind Zelda II: The Adventure of Link and Super Mario Bros. 2; it stayed at No. 3 in the September–October 1989 issue. The Nintendo Power Awards '89 featured the game as one of the top games that year. It was nominated for Best Graphics and Sound, Best Challenge, Best Theme, Fun, Best Character (Ryu Hayabusa), Best Ending, and Best Overall; and it won for Best Challenge and Best Ending. In its preview of Ninja Gaiden II: The Dark Sword of Chaos, the magazine said that "the colorful, detailed and dynamic cinema scenes of the original Ninja Gaiden set a standard for action game narration that has since been widely emulated. These cinema scenes made Ninja Gaiden play almost like a movie."
Reflecting on his career as a game designer, Yoshizawa considered Ninja Gaiden, along with Klonoa: Door to Phantomile, his proudest accomplishment, explaining that the title enjoyed the best sales performance out of all of his projects. Beyond press coverage by Nintendo Power, the game received strong reviews and publicity from other video gaming magazines upon its release. In a review from VideoGames & Computer Entertainment, the presentation and gameplay were compared to Castlevania, while the cinematic cutscenes were compared favorably to Karateka and other computer games by Cinemaware. The review praised the game's animation in these cutscenes and noted Tecmo's usage of close-ups and body movements. The reviewer said that while the cutscenes were not fluid, they were effective and entertaining and provided important information about what the player was supposed to do. He appreciated that the game had unlimited continues, which slightly offset its difficulty, but he criticized it for having over-detailed background graphics, especially in the indoor levels, saying that some bottomless pits and items in these levels become slightly camouflaged. From July to October 1989, the game was listed at No. 1 on Electronic Gaming Monthly's Top Ten Video Games list; it fell to No. 2 on the list behind Mega Man 2 in the November issue. In their Best and Worst of 1989, it received awards for Best Game of the Year for the NES and Best Ending in a Video Game for all consoles. The staff said that Ninja Gaiden "proved to be an instant winner" with its cinematic cutscenes and unique gameplay. They added that the game's climax was better than some movies' climaxes at the time and that it established continuity for a sequel, which would be released the following year. Later in June 1994, the magazine ranked it at No. 4 on a special list of the Top Ten Most Difficult Games of all time for all consoles.
The July 1990 pilot issue of UK magazine Mean Machines featured Ninja Gaiden on the cover; the magazine was distributed as part of the July 1990 issue of Computer and Video Games. In its review, Julian Rignall compared the game to its beat 'em up arcade counterpart, which was titled Shadow Warriors. He noted the game has great graphics that feature diverse backgrounds and character sprites; he especially praised its use of cartoon-like animation sequences between Acts where the game's plot unfolds. He enjoyed the game's difficulty especially with the bosses, but he noted the game will seem tough at first until players become accustomed to the controls. He criticized the game for its sound, which he said did not fit with the graphics and was "racy" but added "what's there is atmospheric and suits the action". He highly recommended the game to fans of the beat 'em up and combat genres.
Mean Machines reviewed the game again (the NES version now officially titled Shadow Warriors in Europe) in its July 1991 issue. In the review, Matt Regan and Paul Glancey praised its detailed and animated character sprites and its difficulty level. The game's high standards of gameplay, sound, and overall depth impressed Regan; he noted the game's frustrating difficulty but pointed out that it has unlimited continues. Glancey compared the game to the 1990 NES version of Batman with its similar wall-jumping mechanics; he said that its graphics were not as well-developed as Batman's but were still satisfactory. He praised its detailed sprites and their animations along with the "Tecmo Theater" concept, noting that the cutscenes "help supply a lot of atmosphere". He said it is one of the best arcade-style games on the NES as well as the best ninja-related game on the system.
The Japanese magazine Famitsu gave it a score of 28 out of 40. The game received some praise and criticism in the August 1991 issue of German magazine Power Play. The review praised the game for its attention to detail and challenge and noted players need to master certain gameplay skills to move on. Criticisms included a "lack of variety" and dullness in gameplay which was compared to a "visit to the tax office". The PC Engine version was briefly mentioned in the December 1991 issue of Electronic Gaming Monthly as part of a review of games that had been released outside the U.S. They noted the faithful translation from the NES version as well as the revamped and more detailed graphics, saying "PC Engine owners should not miss this one!"
### Awards
- US Arcade game of the year: 1990
### Legacy
In 2004, Tecmo began releasing low-priced episodic installments of Ninja Gaiden for AT&T and Verizon mobile phones on both BREW and Java platforms. Tecmo's official English-language mobile games website advertised it for a future release along with a mobile version of Tecmo Bowl. The company planned to release the entire game throughout 2004 in a series of four installments, similar to what Upstart Games did when they ported the NES version of Castlevania to mobile phones. The port featured the same visuals and soundtrack as the NES version. Each installment was to consist of several levels of gameplay at a time. The first installment, titled Ninja Gaiden Episode I: Destiny, was released on July 15, 2004; it included only the first Act from the NES version but added two new levels. The second installment was planned to be released in North America and was previewed by GameSpot in September 2004, but it, along with the third and fourth installments, was never released.
The mobile phone port of Ninja Gaiden was met with some praise and criticism. IGN's Levi Buchanan and GameSpot's Damon Brown praised the port for its accurate translation from the NES to mobile phones, saying the gameplay, graphics, and cinematic cutscenes remain true to the NES version. They praised the game's controls, despite the omission of the ability to duck so that pressing "down" on the phone's directional pad could be used for secondary weapons; Brown said the port had better controls than most other mobile phone games at the time. They both criticized the port for its lack of sound quality, but Buchanan said this was not Tecmo's fault. In a preview of the port, GameSpot's Avery Score pointed to generally inferior American-made handsets as the reason for the sound's shortcomings.
Retro Gamer took a look back at Ninja Gaiden in its March 2004 issue, when the Xbox remake was released. They said the game broke the mold of conventional video game titles by including a plot with cinematic cutscenes added between gameplay segments, noting that using cinematics for a game's introduction, plot, and ending was a new concept which "naturally impressed the gaming public". The article noted the game's high level of difficulty, saying the game "threw up an immense challenge even for the veteran gamer, and almost dared you to complete it mentally and physically intact". Chris Kohler, in his 2004 book Power-Up: How Japanese Video Games Gave the World an Extra Life, said that, while it was not as far-reaching as Tecmo Bowl, "it ended up revolutionizing video games with its courageous, unique, and trailblazing use of cinema scenes".
Upon its release on the Virtual Console, Ninja Gaiden was met with high praise, especially for its elaborate story, amount of narrative, and use of anime-like cinematic sequences. Some critics have bemoaned its gameplay for being too similar to Castlevania; similarities include identical displays on the top of the screen, items contained in breakable lanterns, and a nearly identical "secondary weapons" feature. A 1UP.com review noted that the two games have different dynamics and that several actions possible in Ninja Gaiden would be impossible in Castlevania. Contemporary reviews have considered the game "groundbreaking" for its pioneering use of stylized cutscenes, high quality music, and dark atmosphere. One review said that the game makes up for its high difficulty level with good gameplay. IGN said that it is one of the best platform games of all time.
Reviewers have criticized the game for its high and unforgiving difficulty level especially late in the game, and it has been considered an example of "Nintendo Hard" video games. A review by 1UP.com referred to the later levels as an "unfair display of intentional cheapness". In his review of the Virtual Console version, GameSpot's Alex Navarro said "the game will beat you to a pulp" and that it "assaults you time and time again with its punishing difficulty, insidiously placed enemies, and rage-inducing boss fights". According to his review, the game starts easy, but the difficulty begins to increase halfway through the second Act and continues through the sixth Act; Navarro describes the sixth Act's difficulty as "one of the bottom levels of gaming hell". IGN said that the game was one of the most difficult video games of all time, setting the trend for the rest of the series; however, they pointed out that its difficulty and graphics are "defining characteristics [that] have carried over through the years into modern day [Ninja] Gaiden sequels". ScrewAttack listed the game as the seventh hardest title in the NES library.
Over fifteen years after its creation, Ninja Gaiden has maintained its position as one of the most popular games for the NES. A 2006 Joystiq reader poll, with over 12,000 votes, listed the game at No. 10 on a list of top NES games. Another reader poll from GameSpot listed the game at No. 10 in its top 10 NES games list. It was No. 17 on IGN's Top 100 NES Games list. In August 2001, in its 100th issue, Game Informer listed the game at No. 93 in their Top 100 Games of All Time list. In 2006 Electronic Gaming Monthly featured a follow-up to their The 200 Greatest Videogames of Their Time, where readers wrote in and discussed games they felt were ignored on the list; the game was listed at No. 16 of the top 25 games discussed. At the end of 2005, Nintendo Power ran a serial feature titled The Top 200 Nintendo Games Ever. The list, which included games for all Nintendo systems, placed the game at No. 89. In August 2008, the same magazine ranked it the tenth best NES game of all time; they praised the gameplay and described the cinematic cutscenes as revolutionary for their time. The game's music received honorable mention on IGN's list of Best 8-Bit Soundtracks. IGN featured its introduction at No. 53 on its Top 100 Video Game Moments list; it was also listed as the second best video game cutscene of all time in Complex magazine.
Nintendo Power honored the game in its November 2010 issue, which celebrated the 25th anniversary of the NES. The magazine listed its box art, which depicts a ninja with a burning city in the background, as one of its favorite designs in the NES library. The magazine's Editor-in-Chief Chris Slate was equally impressed by the game's box art. He also reminisced about the game's high level of difficulty with its re-spawning enemies and enemy birds that knocked players into pits, saying this game "may have taught me how to curse". He further praised gameplay features such as clinging on walls and using ninpo techniques, and he noted the game's cinematic cutscenes, including the ominous opening sequence that featured two ninjas who launch into the air at each other clashing their swords in the moonlight. He said that "Ninja Gaiden was about as cool as an 8-bit game could be, especially for ninja-crazed kids of the '80s who, like me, had worn out their VHS copies of Enter the Ninja". In a July 2011 issue, Retro Gamer listed the game's opening as one of the most popular at the time. The magazine noted how its use of cutscenes, animations, and overall presentation put the game above most other action titles at the time. While it lauded the controls and gameplay elements, as with other reviews, it criticized the difficulty, calling it "one of the most challenging games on the console". It noted how defeated enemies re-spawn in certain spots, how enemies are placed on the edges of platforms, and the structure of the final level.
## See also
- Ninja Gaiden series
|
5,542,862 |
Ontario Highway 71
| 1,168,188,923 |
Ontario provincial highway
|
[
"1937 establishments in Ontario",
"Ontario provincial highways",
"Roads in Kenora District",
"Roads in Rainy River District",
"Trans-Canada Highway",
"Transport in Fort Frances",
"Transport in Kenora"
] |
King's Highway 71, commonly referred to as Highway 71, is a provincially maintained highway in the Canadian province of Ontario. The 194-kilometre-long (121 mi) route begins at the Fort Frances–International Falls International Bridge in Fort Frances, continuing from US Highway 53 (US 53) and US 71 in Minnesota, and travels west concurrently with Highway 11 for 40 kilometres (25 mi) to Chapple. At that point, Highway 11 continues west while Highway 71 branches north and travels 154 kilometres (96 mi) to a junction with Highway 17 just east of Kenora. Highway 71 forms a branch of the Trans-Canada Highway for its entire length, with the exception of the extremely short segment south of Highway 11 in Fort Frances.
The current routing of Highway 71 was created out of a route renumbering that took place on April 1, 1960, to extend Highway 11 from Thunder Bay to Rainy River. The portion of the highway that is concurrent with Highway 11 follows the Cloverleaf Trail, constructed by the end of the 1880s and improved over the next several decades. The portion between Highway 11 and Highway 17 follows the Heenan Highway, constructed to connect the Rainy River region with Kenora and the remainder of Ontario's road network; before its opening, the area was accessible only from across the United States border. Both highways were incorporated into the provincial highway system in 1937 following the merger of the Department of Highways (DHO) and the Department of Northern Development.
## Route description
Highway 71 connects the Rainy River region with the Trans-Canada Highway near Kenora. The first 65 kilometres (40 mi) of the highway traverses the largest pocket of arable land in northern Ontario. Following that, the route suddenly enters the Canadian Shield, where the land is unsuitable for agricultural development.
The highway begins at the international bridge in Fort Frances; within the United States, the road continues south as US 53 and US 71 in Minnesota. From the bridge, it proceeds along Central Avenue, encountering Highway 11 one block north. The two routes travel north concurrently to 3rd Street West, where both turn west. At the Fort Frances Cemetery, the route branches southwest and exits Fort Frances after splitting with the Colonization Road (Highway 602). It follows the old Cloverleaf Trail west through Devlin, where it intersects Highway 613, and Emo, where it merges with the Colonization Road. Approximately six kilometres (3.7 mi) west of Emo, in the Manitou Rapids First Nations Reserve, Highway 71 branches north, while Highway 11 continues west to Rainy River.
North of the Manitou Rapids Reserve, Highway 71 runs through a large swath of land mostly occupied by horse and cattle ranches. It intersects Highway 600 and Highway 615, both of which have historical connections to Highway 71. The highway passes through Finland and enters the Boreal Forest, descending into the Canadian Shield over the course of a kilometre and a half (approximately one mile). From this point to its northern terminus, the highway crosses through rugged and isolated terrain, curving around lakes, rivers and mountains on its northward journey. It passes through the community of Caliper Lake before crossing between Rainy River District and Kenora District midway between there and Nestor Falls.
North of Nestor Falls, the highway travels along the eastern shore of Lake of the Woods, providing access to the community of Crow Lake on the Sabaskong Bay 35D reserve of the Ojibways of Onigaming First Nation between Lake of the Woods and Kakagi Lake, as well as to the Whitefish Bay 32A reserve of the Naotkamegwanning First Nation immediately southeast of Sioux Narrows. Here the route crosses the Sioux Narrows Bridge, the last part of the highway to be constructed and a formidable engineering obstacle in the 1930s. North of Sioux Narrows, the highway meanders northward through an uninhabited region, zigzagging among the numerous lakes that dot Kenora District and crossing the Black River. It provides access to Eagle–Dogtooth and Rushing River Provincial Parks several kilometres south of its northern terminus at Highway 17, four kilometres (2.5 mi) east of the split with Highway 17A and 20 kilometres (12 mi) east of downtown Kenora.
## History
Highway 71 was created out of a renumbering of several highways in the Rainy River District during the late 1950s as Highway 11 was extended west of Thunder Bay. The history of the route is tied to the two major highways in Rainy River District: the Cloverleaf Trail and the Heenan Highway.
The Cloverleaf Trail, the older of the two roads, was initially developed as the Rainy River colonization road. A line was blazed as early as 1875, possibly as part of the Dawson Trail, and improved in 1885 into a trail. This initial trail followed the Rainy River west from Fort Frances to Lake of the Woods; Highway 602 now follows the road between Fort Frances and Emo. In 1911, James Arthur Mathieu was elected as a Member of Provincial Parliament (MPP) in the Rainy River riding. As a lumber merchant, Mathieu promoted improved road access in the region. Between 1911 and 1915, he oversaw construction of the gravel Cloverleaf Trail between Fort Frances and Rainy River.
The Heenan Highway would become the first Canadian link to the Rainy River area; before its opening in the mid-1930s, the only way to drive to the area was via the United States. In 1922, Kenora MPP Peter Heenan and Dr. McTaggart approached the government to lobby for construction of a road between Nestor Falls and Kenora. Nestor Falls was the northernmost point accessible by road from the Rainy River area. Heenan would become the Minister of Lands and Forests in Mitch Hepburn's cabinet. This provided the impetus for construction to begin in 1934. Unlike the Cloverleaf Trail, the Fort Frances – Kenora Highway, as it was known prior to its opening, was constructed through the rugged terrain of the Canadian Shield. Rocks, forests, lakes, muskeg, and insects served as major hindrances during construction of the 100-kilometre-long (62 mi) highway, which progressed from both ends. By late 1935, the only remaining gap in the road was the Sioux Narrows Bridge. Construction on this bridge was underway by March 1936; it was rapidly assembled using old-growth Douglas fir from British Columbia (BC) as the main structural members. These timbers were cut in BC, shipped east, and fitted together on-site like a jigsaw puzzle. The bridge was finished on June 15, 1936, completing the link between Fort Frances and Kenora.
On July 1, 1936, Premier Mitch Hepburn attended a ceremony in front of the Rainy Lake Hotel in Fort Frances. On a rainy afternoon, at 5:30 p.m., Peter Heenan handed Hepburn a pair of scissors with which to cut the ribbon crossing the road and declare the highway open. Hepburn, addressing the crowd that was gathered, asked "What would you say if we call it the Heenan Highway, what would you think of that?". The crowd cheered and Hepburn cut the ribbon.
The Cloverleaf Trail and the Heenan Highway were assumed by the DHO shortly after its merger with the Department of Northern Development. Following the merger, the DHO began assigning trunk roads throughout northern Ontario as part of the provincial highway network. Highway 71 was assigned on September 1, 1937, along the Cloverleaf Trail. The portion of the Heenan Highway lying within Kenora District was designated as Highway 70 on the same day. The portion within Rainy River District was designated as Highway 70 on September 29.
The original route of Highway 70 split in two south of Finland; Highway 70 turned east to Off Lake Corner, then south to Emo, while Highway 70A turned west to Black Hawk then south to Barwick. The northern end of the highway was also concurrent with Highway 17 for 21.7 kilometres (13.5 mi) into Kenora, and the southern end concurrent with Highway 71 for 37.0 kilometres (23.0 mi) between Emo and Fort Frances. During 1952, the highway was extended south from its split to Highway 71, midway between Barwick and Emo. By 1953, the new road was opened and informally designated as the new route of Highway 70. The old routes were decommissioned on February 8, and the new route designated several weeks later on March 10, 1954. Both forks were later redesignated as Highway 600 and Highway 615.
Throughout the mid- to late 1950s, a new highway was constructed west from Thunder Bay towards Fort Frances. Initially this road was designated as Highway 120. In 1959, it was instead decided to make this new link a westward extension of Highway 11; a major renumbering took place on April 1, 1960: Highway 11 was established between Rainy River and Fort Frances, Highway 71 was truncated west of the Highway 70 junction, and the entirety of Highway 70 was renumbered as Highway 71. This established the current routing of the highway.
Although now rebuilt as a steel structure, the original Sioux Narrows Bridge was considered to be the longest single span wooden bridge in the world, at 64 metres (210 ft). The original bridge remained in place until 2003, when an engineering inspection revealed that 78% of the structure had failed. A temporary bridge was erected while a new structure was built. The new bridge was completed in November 2007, incorporating the old timber truss as a decorative element. A ribbon cutting ceremony to dedicate the bridge was held on July 1, 2008, 72 years after the original dedication by Mitch Hepburn.
## Major intersections
|
850,604 |
Eric Brewer (ice hockey)
| 1,169,533,804 |
Professional ice hockey player
|
[
"1979 births",
"Anaheim Ducks players",
"Canadian ice hockey defencemen",
"Edmonton Oilers players",
"Ice hockey people from British Columbia",
"Ice hockey players at the 2002 Winter Olympics",
"Living people",
"Lowell Lock Monsters players",
"Medalists at the 2002 Winter Olympics",
"National Hockey League All-Stars",
"National Hockey League first-round draft picks",
"New York Islanders draft picks",
"New York Islanders players",
"Olympic gold medalists for Canada",
"Olympic ice hockey players for Canada",
"Olympic medalists in ice hockey",
"People from the Thompson-Nicola Regional District",
"Prince George Cougars players",
"Sportspeople from Vernon, British Columbia",
"St. Louis Blues players",
"Tampa Bay Lightning players",
"Toronto Maple Leafs players"
] |
Eric Peter Brewer (born April 17, 1979) is a Canadian former professional ice hockey defenceman who played sixteen seasons in the National Hockey League (NHL), last suiting up for the Toronto Maple Leafs. He is an NHL All-Star and Olympic gold medalist.
He began his career as a distinguished junior ice hockey player, named to the Western Hockey League (WHL) West Second All-Star Team and the Western Conference roster for the 1998 WHL All-Star Game (although he missed the game due to injury). Drafted in the first round, fifth overall, by the New York Islanders in the 1997 NHL Entry Draft, Brewer spent parts of his sixteen-year NHL career with the Islanders, the Edmonton Oilers, the St. Louis Blues, the Tampa Bay Lightning, the Anaheim Ducks and the Maple Leafs, and captained the Blues for two years. He also suited up for the Prince George Cougars of the WHL and the Lowell Lock Monsters of the American Hockey League (AHL). In 1999, Brewer was selected for the Prince George Cougars' all-time team in a Canadian Hockey League promotion.
Brewer represented Canada at eight International Ice Hockey Federation-sanctioned events, winning three Ice Hockey World Championships gold medals and one World Cup of Hockey gold medal. He won his Olympic gold medal during the 2002 Winter Olympics. For this accomplishment, he was inducted into the BC Sports Hall of Fame with his British Columbian teammates in 2003.
## Personal life
Brewer was born on April 17, 1979, in Vernon, British Columbia, to Anna and Frank Brewer. He was raised in Ashcroft, British Columbia, and began playing ice hockey in the Ashcroft Minor Hockey program. When he was fourteen, his family moved to Kamloops, British Columbia, where he attended junior and senior high school. Brewer excelled with the Kamloops Bantam AAA Jardine Blazers of the British Columbia Amateur Hockey Association (BCAHA). In 1995, Brewer was exposed to BCAHA Best Ever, a program designed to find and develop players and coaches for play in international competition. As a young hockey player, Brewer looked up to NHL stars Scott Niedermayer and Jeremy Roenick as role models.
In mid-2004, Brewer married Rebecca Flann, whom he met while playing junior hockey with the Prince George Cougars; they live in Vancouver, British Columbia. The Brewers have two daughters. Brewer's sister, Kristi, played for the University of British Columbia Thunderbirds women's ice hockey team.
Brewer is involved in numerous charitable organizations. During the 2004–05 NHL lockout, Brewer participated in several charity hockey games, playing in the four-game Ryan Smyth and Friends All-Star Charity Tour, the three-game Brad May and Friends Hockey Challenge, as well as the Our Game to Give charity hockey game held at Ivor Wynne Stadium in Hamilton, Ontario. During off-seasons, Brewer has participated in numerous charity golf tournaments, including the Burn Fund Golf Tournament in Prince George and the Recchi-Doan Charity Classic in Kamloops.
In 2014, Brewer became a part owner of the Prince George Cougars, joining a group of investors that includes fellow Cougars alumnus Dan Hamhuis.
## Playing career
### Prince George Cougars
Brewer was drafted in the sixth round, 81st overall, by the Prince George Cougars in the 1994 WHL Bantam Draft. After being drafted, he played one final season with the Jardine Blazers, recording 38 points in only 40 games. The following year, Brewer began his WHL career with the Cougars, playing 63 games in the 1995–96 season. Brewer finished his rookie WHL season with fourteen points, including four goals, and was named the Cougars' Rookie of the Year.
In his sophomore season, Brewer became a leader on the Cougars' blue line. He was named to play for Team Orr in the 1997 CHL Top Prospects Game in February 1997 at Maple Leaf Gardens. He doubled his point total from the previous season, finishing with 29 points in 71 games played. Brewer followed his regular season by helping the Cougars go on a playoff run. After clinching the last spot in the West Division with a losing record, the Cougars defeated the number-one seed Portland Winter Hawks in the conference quarterfinals and the third-ranked Spokane Chiefs in the conference semifinals before finally losing to the second-ranked Seattle Thunderbirds in the Western Conference final. Brewer finished this run with six points in the Cougars' fifteen games.
Brewer's final season with Prince George was his best, statistically, in the WHL. After representing Canada at the 1998 World Junior Ice Hockey Championships, he was named to the Western Conference team for the WHL All-Star Game in Regina, Saskatchewan, which he missed, as well as much of the season, due to injury. However, Brewer finished the year with 33 points in only 34 games, a near one point-per-game average, and was named to the WHL West Second All-Star Team. Brewer was the highest ranked defenceman at sixth overall among North American skaters heading into the 1997 NHL Entry Draft. He was drafted fifth overall by the New York Islanders in June 1997.
### New York Islanders
Just over a year after being drafted, Brewer signed his first professional contract with his draft team, the New York Islanders, in August 1998. Entering the NHL, Brewer was regarded as a future Norris Trophy candidate, and as a result, his contract was an entry-level three-year, \$2.775-million deal complemented by a \$1-million signing bonus, the highest base salary available for a rookie. Brewer made his NHL debut on October 10, 1998, against the Pittsburgh Penguins, and on November 5, Brewer scored his first career goal against the Carolina Hurricanes' Trevor Kidd. Throughout his rookie season, Brewer was considered an integral part of the Islanders' defence, and, along with Zdeno Chára, Kenny Jönsson and Roberto Luongo, was considered untouchable by management at the 1999 NHL trade deadline. Brewer finished his rookie season with eleven points in 63 games.
After playing just three games of the 1999–2000 NHL season, Brewer was assigned to the Islanders' AHL affiliate, the Lowell Lock Monsters. It was speculated that the reason behind this move was laziness by Brewer, who was benched during the final thirty minutes by head coach Butch Goring after losing a race for the puck against Mike Knuble in the Islanders' October 11, 1999, loss to the New York Rangers. Brewer also took a bad penalty earlier in the game, putting the Islanders down two men. After a two-week, five-game stint with the Lock Monsters, Brewer was subsequently recalled by the Islanders. After playing 26 games with the Islanders in which he only recorded two assists, Brewer was reassigned to the Lock Monsters on January 8, 2000, for the remainder of the season. Shortly after joining the Lock Monsters, Brewer suffered a sprained knee and missed the next two-and-a-half months of the season. Brewer went on to play 25 games for the Lock Monsters, recording two goals and two assists. He also participated in his first professional playoffs, as the Lock Monsters swept the Saint John Flames in three games in the first round, before being swept themselves in four games in the Eastern Conference semifinals by the Providence Bruins.
### Edmonton Oilers
At the 2000 NHL Entry Draft, the Islanders traded Brewer, Josh Green and their second round selection (Brad Winchester) in the same draft to the Edmonton Oilers for Roman Hamrlík. Although surprised to be traded, Brewer was excited at the prospect of playing for the Oilers, who saw Brewer as a top-four defenceman. However, Brewer's Oiler career began on a sour note as he suffered a bruised left hip and tailbone in his first game with the team. Brewer missed the next four games before returning to the lineup. Brewer scored his first goal as an Oiler on November 7, 2000, against the New York Rangers. Brewer finished his first Oiler season with career highs in goals, assists and points, as well as the best plus/minus rating on the Oilers team, a plus-15. Further, Brewer gained his first NHL playoff experience, a quarterfinal series versus the Dallas Stars. Brewer had six points, but the Oilers were eliminated four games to two by the Stars.
The Oilers re-signed Brewer, who was a free agent, to a one-year, \$907,500 contract in August 2001. In his second season with the Oilers, Brewer was assigned to play against the opposing teams' best offensive players by Oilers head coach Craig MacTavish. Brewer began to play more minutes in games, typically placing among the NHL leaders in average minutes played per game. With this enhanced role on the team, Brewer finished his season with new career highs in assists and points for the second consecutive season, and matched his career high in goals. Although his single year contract expired, his role on the Oilers had become more important and Brewer expected a large raise for his third season with the Oilers.
After a long holdout that lasted until the beginning of Oilers training camp, Brewer signed a two-year, \$4-million contract in September 2002. At the midpoint of the 2002–03 NHL season, Brewer was named to his first NHL All-Star Game, dressing for the Western Conference in the fifty-third edition of the game. He finished with career highs for assists and points and set a career high for goals for the third consecutive season. He appeared in his second NHL playoffs, another quarterfinal series against the Dallas Stars in which the Oilers were once again eliminated four games to two. Brewer finished the playoffs with four points in the Oilers' six games.
In his fourth season with the Oilers, Brewer continued his role as a top defenceman. On November 22, 2003, Brewer was among the participants in the historic 2003 Heritage Classic ice hockey game against the Montreal Canadiens at Commonwealth Stadium in Edmonton. Brewer scored the Oilers' first goal of the game in a 4–3 loss in front of a then record crowd of 57,167. Later in the season, in a game on January 29, 2004, against the Chicago Blackhawks, Brewer recorded his one-hundredth career point. Relied upon to play against opposing teams' best offensive players, Brewer finished the season with an average ice time of 24:39 per game, fourteenth in the league. In the final year of his two-year contract, Brewer finished the season with point totals matching those from his 2001–02 season, a slight fall from the career highs set in his third season with the Oilers.
With the Oilers unwilling to pay what he was expecting, Brewer decided to go to salary arbitration to get a new contract. However, on August 4, 2004, Brewer signed a one-year, \$2.65 million contract with the Oilers, avoiding his arbitration hearing set for only a few days later. Brewer was unable to play out his new contract due to the 2004–05 NHL lockout.
### St. Louis Blues
In August 2005, following the lockout, the Oilers traded Brewer, Jeff Woywitka and Doug Lynch to the St. Louis Blues in exchange for defenceman Chris Pronger. At the time of the trade, Brewer was a restricted free agent, so on August 15, 2005, Brewer accepted the Blues' qualifying offer, signing a one-year, \$2-million contract. Brewer's first season with the Blues was a particularly poor one. After playing the first 18 games of the season, Brewer separated his shoulder on November 16, 2005, in a 2–0 victory over the Columbus Blue Jackets. Brewer missed ten games before being activated from the injured reserve list, returning to the St. Louis line-up for a game on December 17, 2005, against the Philadelphia Flyers. Less than a month later, in a game on January 13, 2006, against the Atlanta Thrashers, Brewer collided with the Thrashers' centre Karl Stewart and dislocated his left shoulder, which ended his season. In just 32 games, Brewer finished his season with nine points, six of them goals, two shy of his career best of eight set in the 2002–03 season. Despite his limited play, the Blues re-signed Brewer to a one-year, \$2.014 million contract for the 2006–07 season.
Brewer's second season with the Blues began as a disappointment. By the first half of December 2006, Brewer had amassed only six points and a plus-minus rating of –11, and was often described by both the media and Blues fans as "the worst player on the ice". Brewer was often involved in trade rumours, as he was set to become an unrestricted free agent following the completion of the season. Brewer believed his performance was the result of having played only 32 NHL games since the 2003–04 season. However, after the firing of head coach Mike Kitchen on December 11, 2006, Brewer began playing much better under new head coach Andy Murray. Over the next nineteen games, Brewer turned his –11 rating into a +2 and became an integral part of the Blues' defence. His turnaround was rewarded on February 24, 2007, when, rather than being traded as previously rumoured, Brewer signed a four-year, \$17-million contract extension with the Blues. Brewer continued his turnaround through the end of the season, finishing the year with six goals and 23 assists for 29 points, tying his career high for points set in the 2002–03 season and setting a new career high for assists.
In his third season with the Blues, Brewer continued to do well under Andy Murray. Brewer evolved into one of the top two-way defencemen in the NHL, drawing comparisons to former first overall draft pick Chris Phillips of the Ottawa Senators. His play and leadership abilities were recognized when, on February 8, 2008, Brewer was named the nineteenth captain in the history of the St. Louis Blues, filling the vacancy created when former Blues captain Dallas Drake had his contract bought out following the 2006–07 season. On February 17, 2008, in a game against the Columbus Blue Jackets, Brewer set a career high for points in a game with four, all assists, eclipsing his previous best of three set on January 16, 2007. Brewer finished the season with only one goal in 77 games played, his lowest goal total since the 1999–2000 season, although he added 21 assists, three short of his career high. At the completion of the season, Brewer underwent reconstructive surgery on his right shoulder to repair damage suffered in a fight during the Blues' season-opening game against the Phoenix Coyotes on October 4, 2007.
Despite Brewer's end-of-season shoulder surgery, he was able to join the Blues for his fourth season with the team in time for their season-opening game against the Nashville Predators, where he led the Blues with 24:43 of ice time in a 5–2 victory. Eight days later, Brewer played in his 600th career NHL game, a 4–3 shootout victory against the Chicago Blackhawks. Prior to the Christmas break, Brewer underwent season-ending back surgery. The surgery ended his season after only 28 games played and six points, his lowest games-played and point totals since his sophomore season with the New York Islanders. Brewer subsequently underwent two more surgeries that off-season: a second back surgery in April and a knee surgery in August.
Unlike the previous season, Brewer's off-season surgeries delayed the start of his fifth season with the Blues until the team's eleventh game of the season, a 2–0 loss to the Phoenix Coyotes. Brewer had missed the Blues' previous 64 games before that return, which came earlier than expected given his rehabilitation from back surgeries that had been considered career-threatening. Brewer's health largely remained stable throughout the season, and he finished the year with 15 points in 59 games played, including tying his career high in goals with eight.
### Tampa Bay Lightning
On February 18, 2011, Brewer was traded to the Tampa Bay Lightning for Brock Beukeboom and Tampa Bay's third-round selection in the 2011 NHL Entry Draft. In 22 games with Tampa Bay, he notched a goal and an assist, and led the Lightning in average ice time per game with 21:34. At the end of the 2010–11 season, Brewer had recorded a career-high nine goals and amassed 81 penalty minutes, the second-highest total of his NHL career. He also appeared in a career-high 18 post-season contests with the Lightning in the 2011 playoffs, registering three goals and 17 points and helping the Bolts to their first playoff appearance in four seasons. During the 2011 playoffs, he set a post-season career high with three points in Game 2 of the Eastern Conference Quarterfinals against the Pittsburgh Penguins and ranked third among all post-season skaters with 51 blocked shots. On June 24, 2011, Brewer signed a four-year, \$15.4 million contract extension with the Lightning.
### Anaheim Ducks
On November 28, 2014, while in the final year of his contract with the Lightning, Brewer was traded to the Anaheim Ducks in exchange for a third-round pick in the 2015 NHL Entry Draft. At the time, Brewer had appeared in 17 games with the Lightning during the 2014–15 NHL season and had four assists on the year. He finished his Lightning career having played in 246 games, scoring ten goals and 46 assists. The next night, Brewer played his first game with the Ducks, recording 15:07 of ice time in a 6–4 loss to the San Jose Sharks. On December 3, 2014, after Brewer had played only two games for the Ducks, the team announced that he was expected to miss four to six weeks after breaking a bone in his foot while blocking a shot. Brewer returned to the Ducks' lineup two months later, on February 3, 2015, against the Carolina Hurricanes, and scored his first goal for Anaheim on February 8, 2015, against his former teammates in a 5–3 loss to the Tampa Bay Lightning.
### Toronto Maple Leafs
On March 2, 2015, the day of the NHL trade deadline, the Ducks traded Brewer and a fifth-round pick in the 2016 NHL Entry Draft to the Toronto Maple Leafs in exchange for defenceman Korbinian Holzer, ending his brief nine-game stint with Anaheim. Brewer's first game as a member of the Maple Leafs came on March 5, 2015, against the Tampa Bay Lightning. On March 21, 2015, Brewer became the 300th NHL player to appear in 1,000 career games, in a 5–3 loss to the Ottawa Senators. Two days later, the Maple Leafs honoured Brewer's achievement with a ceremony prior to their game against the Minnesota Wild, presenting him with a silver stick, a Rolex watch, and a \$10,000 charitable donation in his name. He scored his first goal with the club two games later, the game-winning overtime goal in a 4–3 victory over the Ottawa Senators on March 28, 2015. After missing the playoffs with the Maple Leafs, Brewer finished the 2014–15 NHL season with two goals and three assists in 18 games with Toronto. Across the entire 2014–15 season with the Lightning, Ducks, and Maple Leafs, Brewer scored three goals and eight assists in 44 games played, his fewest games since his injury-plagued 2008–09 season with the St. Louis Blues.
Ahead of the 2015–16 NHL season, Brewer was not re-signed by the Maple Leafs and became an unrestricted free agent on July 1, 2015. He spent the summer training in hopes of signing an NHL contract, but realized by mid-August that he was unlikely to find a team interested in his services, and chose not to extend his career by playing in Europe.
## International play
Throughout his career, Brewer represented Canada at various international ice hockey tournaments. He first competed internationally as a member of Team Pacific at the 1995 World U-17 Hockey Challenge in Moncton, New Brunswick. Three years later, he represented Canada as a member of the national junior team at the 1998 World Junior Ice Hockey Championships, where he was named an alternate captain. Canada posted its worst-ever showing at that tournament, an eighth-place finish that included a loss to Kazakhstan, a harsh introduction to IIHF competition for Brewer. Although eligible for the 1999 edition of the tournament, Brewer was unable to play due to NHL commitments with the New York Islanders.
Brewer made his debut with the Canadian national men's team on April 24, 2001, when he joined Canada for the 2001 Men's World Ice Hockey Championships in Nuremberg, Cologne, and Hanover, Germany. Later that year, on July 24, 2001, Brewer was invited to the orientation camp for the Canadian team for the 2002 Winter Olympics in Salt Lake City, Utah. Five months later, on December 12, 2001, Brewer was named to the final Canadian roster for the tournament. In the opening game of the tournament against Sweden, Brewer scored Canada's second goal of the game in a 5–2 loss, while in the semi-finals of the tournament, Brewer scored the game-winning goal against Belarus in a 7–1 victory, helping send Canada to the gold medal game against the host United States. Canada would go on to defeat the Americans by a score of 5–2, winning their first Olympic gold medal in fifty years.
Shortly after his Olympic experience, Brewer was named to the Canadian roster for the 2002 Men's World Ice Hockey Championships in Gothenburg, Karlstad and Jönköping, Sweden, his second consecutive Ice Hockey World Championships. He represented Canada once again the following year, when on April 22, 2003, Brewer was named to the Canadian roster for the 2003 Men's World Ice Hockey Championships. In the tournament quarterfinals versus Germany, Brewer scored the game-winning goal 37 seconds into overtime to give Canada a 3–2 victory. Canada would go on to win their first Ice Hockey World Championships gold medal since the 1997 tournament, defeating Sweden 3–2 in overtime in the final. Brewer once again participated for Canada at the 2004 Men's World Ice Hockey Championships, his fourth consecutive Ice Hockey World Championships, where he helped Canada win its second consecutive championship after defeating Sweden 5–3 in the gold medal game.
On May 15, 2004, Brewer was named to the Canadian roster for the 2004 World Cup of Hockey. In the semifinal, Brewer scored Canada's first goal in a 4–3 overtime victory against the Czech Republic. Canada went on to win the tournament on home ice in Toronto, defeating Finland 3–2 in the final. Just under a year after his World Cup appearance, Brewer was named to the orientation camp for the Canadian team for the 2006 Winter Olympics in Turin, Italy, held from August 15 to 20, 2005, in Vancouver and Kelowna, British Columbia. Following the camp, on October 18, 2005, Brewer was named to the preliminary 81-man Canadian roster for the tournament, but when the final roster was announced on December 21, 2005, he was not among the 26 players listed. It was nearly three years before Brewer next suited up for his country: on April 3, 2007, he was among the first five players named to play for Canada at the 2007 IIHF World Championship in Moscow and Mytishchi, Russia. For the tournament, Brewer was named the team's only permanent alternate captain and helped the team to its third gold medal at the event in five years.
## Career statistics
### Regular season and playoffs
### International
## Awards and achievements
### Junior
### NHL
## Transactions
- June 21, 1997 – Drafted in the first round, fifth overall by the New York Islanders in the 1997 NHL Entry Draft.
- June 24, 2000 – Traded by the New York Islanders with Josh Green and the Islanders' second round selection (Brad Winchester) in the 2000 NHL Entry Draft to the Edmonton Oilers for Roman Hamrlík.
- August 2, 2005 – Traded by the Edmonton Oilers with Jeff Woywitka and Doug Lynch to the St. Louis Blues for Chris Pronger.
- February 18, 2011 – Traded by the St. Louis Blues to the Tampa Bay Lightning for Brock Beukeboom and the Blues' third round selection (Jordan Binnington) in the 2011 NHL Entry Draft.
- November 28, 2014 – Traded by the Tampa Bay Lightning to the Anaheim Ducks for a third round selection in the 2015 NHL Entry Draft.
- March 2, 2015 – Traded by the Anaheim Ducks to the Toronto Maple Leafs, along with a fifth round selection in the 2016 NHL Entry Draft, in exchange for Korbinian Holzer.
## See also
- List of NHL players with 1000 games played