Dachau, Bavaria
Dachau (German pronunciation: [ˈdaxaʊ]) is a town in the Upper Bavaria district of Bavaria, a state in the southern part of Germany. It is a major district town (a Große Kreisstadt) of the administrative region of Upper Bavaria, about 20 kilometres (12 miles) north-west of Munich. It is now a popular residential area for people working in Munich, with roughly 45,000 inhabitants. The historic centre of town, with its 18th-century castle, is situated on an elevation and visible over a great distance.

Dachau was founded in the 9th century. It was home to many artists during the late 19th and early 20th centuries; the well-known author and editor Ludwig Thoma lived here for two years. The town is known for its proximity to the Dachau concentration camp, operated by Nazi Germany between 1933 and 1945, in which tens of thousands of prisoners died.

Etymology

The origin of the name is not known. It may have originated with the Celts who lived there before the Germanic peoples arrived. An alternative idea is that it comes from the Old High German words daha, meaning clay, and ouwe, meaning water-meadow (land subject to flooding).

History

As the Amper River would divert into backwaters in several places, there were many fords making it possible to cross the river. The oldest findings of human presence here date back to the Stone Age; the most noteworthy were discovered near Feldgeding, in the adjoining municipality of Bergkirchen. Around 1000 B.C. the Celts arrived in this area and settled. The name "Dachau" originated in the Celtic Dahauua, which roughly translates to "loamy meadow" and also alludes to the loamy soil of the surrounding hills. Some theories hold that the name of the river Amper may derive from the Celtic word for "water". Around the turn of the first millennium, the Romans conquered the area and incorporated it into the province of Rhaetia. A Roman trade road between Salzburg and today's Augsburg is said to have run through Dachau; remains of this old route are found along the Amper marshlands.

The first known documentation of Dachau is a medieval deed issued by the noblewoman Erchana of Dahauua to the prince-bishop of Freising, both descendants of the lineage of the Aribonids. With this deed, dated to August 15, 805 A.D. (the Feast of the Assumption of the Blessed Virgin Mary), she donated her entire property in Dachau, including five so-called Colonenhöfe and some serfs and bondsmen, to devolve to the Bishop of the Diocese of Freising after her death.

During much of the 12th century, Dachau was the primary residence of a smaller branch of the House of Wittelsbach, led by Otto I, Count of Scheyern-Dachau. When Conrad III died in 1182, Duke Otto I of Bavaria purchased the land and granted it market rights, which were then affirmed between 1270 and 1280 by Duke Ludwig II der Strenge (the Strict).

In 1467 Sigismund, Duke of Bavaria, resigned and then kept only Bavaria-Dachau as his domain until his death in 1501.

Between 1546 and 1577, the House of Wittelsbach had the Dachau Palace erected in the Renaissance style. From June 1715 to autumn 1717, Joseph Effner remodeled the palace to suit contemporary taste. At the beginning of the 19th century, the castle's north, east and south wings had to be demolished due to their state of disrepair. The west wing, which houses the dance hall and offers a superb view of the gardens, still remains today. On the first floor, visitors can admire the original Renaissance carved-wood coffered ceiling.

During the second half of the 19th century, the town began to attract landscape artists. The Dachau art colony, which flourished between 1890 and 1914, brought the town recognition as one of the most important artists' colonies in Germany, alongside Worpswede.

In 1933, the Dachau concentration camp was built east of the town by the SS of Nazi Germany and operated until 1945. It was the first of what became many Nazi concentration camps. The Nazis killed 14,100 prisoners in the camp itself and almost another 10,000 in its sub-camps.

Geography

Dachau is 20 km (12 mi) northwest of Munich. It lies 483 metres above sea level on the river Amper, with a boundary demarcated by lateral moraines formed during the last ice age and by the Amper glacial valley. It is also close to a large marshy area called the Dachauer Moos. The highest elevation of the district is the so-called "Schlossberg"; the lowest point is near the neighborhood of Prittlbach, at the border with the neighboring community of Hebertshausen. The bordering communities are Bergkirchen to the west, Schwabhausen to the northwest, Röhrmoos to the north, Hebertshausen to the northeast, and Karlsfeld to the south. To the east, the district of Dachau borders the district of Munich at the community of Oberschleißheim.

Local administrative divisions

The city is divided into three zones:

Since 1972, the former municipality of Pellheim, along with the communities of Pellheim proper, Pullhausen, Assenhausen, Lohfeld, and Viehgarten, has been incorporated into Dachau.

Bodies of water

Running from the west, the river Amper passes south of Dachau's old town, changes direction to the northeast at the former paper mill, and continues through Prittlbach into Hebertshausen.

Coming from Karlsfeld, the Würm crosses Dachau-East and merges into the Amper just outside the district limit of Hebertshausen.

The Gröbenbach, which has its source south of Puchheim, enters town from the south and merges into the Amper at several points near the festival grounds.

The Mühlbach, a man-made canal, is diverted from the Amper at the electrical power plant, runs parallel to it, and flows back into it after passing the paper mill. The name derives from the many mills that formerly stood along the canal, taking advantage of the drop in elevation between the Mühlbach and the Amper. West of the so-called Festwiese runs another canal, called the Lodererbach.

Parts of the Schleißheimer Canal still remain in town today. This canal was built in the mid-eighteenth century as part of the northern Munich canal system, to which the Nymphenburger Canal also belongs. It functioned as a transport route between Dachau and Schleißheim; the building material recovered from the demolition of three wings of Dachau castle was transported to Schleißheim this way. Through neglect and through deliberate cultivation by the town of Dachau, the canal is now recognizable as such only between Frühlingstraße and the Pollnbach. Outside the city limits, the original canal continues on to Schloss Schleißheim.

Within the city boundaries, in Dachau-Süd (South), there is also a small lake called the Stadtweiher.

Transport

The town is served by the Munich S-Bahn (line S2) and by Deutsche Bahn via Dachau railway station, located in the south of the town; the station also adjoins the central bus terminal. At Dachau the S2 splits into two branches, one towards Petershausen and one towards Altomünster; both carry the S2 designation but show different destinations. The Altomünster branch is also served by Dachau Stadt railway station, which is much smaller than the main station.

There are five bus lines operated by Stadtwerke Dachau: 719, 720, 722, 724 and 726. There is no tram service.

Dachau has a well-developed road infrastructure for regional transportation. The city is connected to Bundesautobahn 8 via Fürstenfeldbruck; the autobahn runs towards Munich-Pasing in one direction and terminates in Karlsruhe to the west. Dachau is connected to Bundesautobahn 92 via the Oberschleißheim connector east of the town, and to Bundesautobahn 99 via Karlsfeld to the south. Bundesstraße 471 (via Rothschwaige) connects Dachau westbound to the neighboring city of Fürstenfeldbruck and eastbound to towns such as Oberschleißheim. Bundesstraße 304 starts in the south of the city and connects towns to the south, as far as the German-Austrian border. Additionally, several Staatsstraßen connect Dachau with surrounding towns and villages.

Sights

City of Dachau

Twin towns – sister cities

Dachau is twinned with:

Dachau also cooperates with:
[ { "paragraph_id": 0, "text": "Dachau (German pronunciation: [ˈdaxaʊ]) is a town in the Upper Bavaria district of Bavaria, a state in the southern part of Germany. It is a major district town—a Große Kreisstadt—of the administrative region of Upper Bavaria, about 20 kilometres (12 miles) north-west of Munich. It is now a popular residential area for people working in Munich, with roughly 45,000 inhabitants. The historic centre of town with its 18th-century castle is situated on an elevation and visible over a great distance.", "title": "" }, { "paragraph_id": 1, "text": "Dachau was founded in the 9th century. It was home to many artists during the late 19th and early 20th centuries; well-known author and editor Ludwig Thoma lived here for two years. The town is known for its proximity to the Dachau concentration camp, operated by Nazi Germany between 1933 and 1945, in which tens of thousands of prisoners died.", "title": "" }, { "paragraph_id": 2, "text": "The origin of the name is not known. It may have originated with the Celts who lived there before the Germans came. An alternative idea is that it comes from the Old High German word daha meaning clay, and ouwe, water overflown land.", "title": "Etymology" }, { "paragraph_id": 3, "text": "As the Amper River would divert into backwaters in several places, there were many fords making it possible to cross the river. The oldest findings of human presence here date back to the Stone Age. The most noteworthy findings were discovered near Feldgeding in the adjoining municipality Bergkirchen. Around 1000 B.C. the Celts arrived in this area and settled. The name “Dachau” originated in the Celtic Dahauua, which roughly translates to “loamy meadow” and also alludes to the loamy soil of the surrounding hills. Some theories assume the name “Amper” river may derive from the Celtic word for “water”. Approximately at the turn of the first millennium the Romans conquered the area and incorporated it into the province of Rhaetia. A Roman trade road between Salzburg and today's Augsburg is said to have run through Dachau. Remains of this old route are found along the Amper marshlands.", "title": "History" }, { "paragraph_id": 4, "text": "The first known documentation of Dachau occurs in a medieval deed issued by the Noble Erchana of Dahauua to the prince-bishop of Freising, both descendants of the lineage of the Aribonids. With this deed, dated to August 15, 805 A.D. (the Feast of the Assumption of the Blessed Virgin Mary), she donated her entire property in Dachau, including five so-called Colonenhöfe and some serfs and bondsman, to devolve to the Bishop of the Diocese of Freising after her death.", "title": "History" }, { "paragraph_id": 5, "text": "During much of the 12th century, Dachau was the primary residence of a smaller branch from the House of Wittelsbach led by Otto I, Count of Scheyern-Dauchau. When Conrad III died in 1182, Duke Otto I of Bavaria purchased the land and granted it market rights, that were then affirmed between 1270 and 1280 by Duke Ludwig II der Strenge (the Strict).", "title": "History" }, { "paragraph_id": 6, "text": "In 1467 Sigismund, Duke of Bavaria resigned and then kept only Bavaria-Dachau as his domain until his death in 1501.", "title": "History" }, { "paragraph_id": 7, "text": "Between 1546 and 1577, the House of Wittelsbach had the Dachau Palace erected in the Renaissance style. 
From June 1715 to Autumn 1717, Joseph Effner remodeled the palace to suit the contemporary taste in style.", "title": "History" }, { "paragraph_id": 8, "text": "At the beginning of the 19th century, the castle's north-, east- and south-wing had to be demolished due to their state of disrepair. The west-wing housing the dance hall with a superb view of the enchanting gardens, still remains today. On the first floor the original renaissance wood carved, coffered ceiling can be admired by visitors.", "title": "History" }, { "paragraph_id": 9, "text": "During the second half of the 19th century, the town began to attract landscape artists. The Dachau art colony, which flourished between 1890 and 1914, brought the town recognition as one of the most important artist's colonies in Germany beside Worpswede.", "title": "History" }, { "paragraph_id": 10, "text": "In 1933, the Dachau concentration camp was built east of the city by the SS of Nazi Germany and operated until 1945. It was the first of what became many Nazi concentration camps. 14,100 prisoners were killed in the camp by the Nazis and almost another 10,000 in its sub-camps.", "title": "History" }, { "paragraph_id": 11, "text": "Dachau is 20 km (12 mi) northwest of Munich. It is 483 meters above sea level by the river Amper, with a boundary demarcated by lateral moraines formed during the last ice age and the Amper glacial valley. It is also close to a large marshy area called Dachauer Moos. Highest elevation of the district is the so-called \"Schlossberg\", the lowest point is near the neighborhood of Prittlbach, at the border to the next community of Hebertshausen. The bordering communities are Bergkirchen to the west, Schwabhausen to the northwest, Röhrmoos to the north, Hebertshausen to the northeast, and Karlsfeld to the south. To the east the greater district Dachau borders on the greater district of Munich with the community of Oberschleißheim.", "title": "Geography" }, { "paragraph_id": 12, "text": "The city is divided into 3 zones:", "title": "Local administrative divisions" }, { "paragraph_id": 13, "text": "Since 1972, the former municipality of Pellheim, along with the communities of Pellheim proper, Pullhausen, Assenhausen, Lohfeld, and Viehgarten, have been incorporated into Dachau.", "title": "Local administrative divisions" }, { "paragraph_id": 14, "text": "Running from the west, the river Amper runs south of Dachau's old town, changes its direction at the former paper milling plant to the northeast and continues through Prittlbach into Hebertshausen.", "title": "Bodies of water" }, { "paragraph_id": 15, "text": "Coming from Karlsfeld, the Würm crosses Dachau-East and merges into the river Amper just outside the district limit of Hebertshausen.", "title": "Bodies of water" }, { "paragraph_id": 16, "text": "The Gröbenbach, which has its source south of Puchheim, runs through town coming from the south and merges into the Amper river at several locations near the festival grounds.", "title": "Bodies of water" }, { "paragraph_id": 17, "text": "The Mühlbach, a man made canal, is diverted from the river Amper at the electrical power plant and runs parallel and flows back into it after passing the paper mill. The name derives from the frequent mills in former times along the canal which took advantage of the decline between Mühlbach and Amper. 
West of the so-called Festwiese runs another canal, called Lodererbach.", "title": "Bodies of water" }, { "paragraph_id": 18, "text": "In town there are still parts of the Schleißheimer canal remaining today. This canal was built in the mid-eighteenth century as part of the northern Munich canal system to which the Nymphenburger Canal belongs as well. It functioned as a transportation route between Dachau and Schleißheim. The building material recovered from the demolition of three wings of the Dachau castle was transported to Schleißheim this way.", "title": "Bodies of water" }, { "paragraph_id": 19, "text": "By allowing it to run to seed and through deliberate cultivation by the town of Dachau the canal is only still recognizable as such between Frühlingstraße and the Pollnbach. Outside the city limit the original canal continues on to Schloss Schleißheim.", "title": "Bodies of water" }, { "paragraph_id": 20, "text": "Within the city boundaries, in Dachau Süd (South), there is also a small lake called Stadtweiher.", "title": "Bodies of water" }, { "paragraph_id": 21, "text": "The city is served by Munich S-Bahn (S2) and Deutsche Bahn via Dachau railway station located in the South of the town. The station is also annexed to the central bus terminal. In Dachau the line S2 is split in two directions: Petershausen and Altomünster. Both lines are named S2 but with different direction names. The offshoot to Altomünster is also served by Dachau Stadt Railway Station which is much smaller than the main railway station. There are five bus lines which are operated by Stadtwerke Dachau: 719, 720, 722, 724 and 726. There is no tramway transport.", "title": "Transport" }, { "paragraph_id": 22, "text": "Dachau has a well-developed road infrastructure for regional transportation. The city is connected to Bundesautobahn 8 (via Fürstenfeldbruck) with Munich-Pasing southbound, and westbound terminating in Karlsruhe. Dachau is connected to Bundesautobahn 92 via Oberschleißheim connector which is located east of Dachau. Bundesautobahn 99 is connected with Dachau via Karlsfeld which is located south of Dachau. Bundesstraße No. 471 (via Rothschwaige) connects eastbound towns such as the neighboring city Fürstenfeldbruck and westbound towns such as Oberschleißheim. Bundesstraße No. 304 starts in the south of the city and connects southbound towns until the German-Austrian border. Additionally, several Staatsstraßen connect Dachau with surrounding towns and villages.", "title": "Transport" }, { "paragraph_id": 23, "text": "City of Dachau", "title": "Sights" }, { "paragraph_id": 24, "text": "Dachau is twinned with:", "title": "Twin-towns – sister-cities" }, { "paragraph_id": 25, "text": "Dachau also cooperates with:", "title": "Twin-towns – sister-cities" } ]
Dachau is a town in the Upper Bavaria district of Bavaria, a state in the southern part of Germany. It is a major district town—a Große Kreisstadt—of the administrative region of Upper Bavaria, about 20 kilometres north-west of Munich. It is now a popular residential area for people working in Munich, with roughly 45,000 inhabitants. The historic centre of town with its 18th-century castle is situated on an elevation and visible over a great distance. Dachau was founded in the 9th century. It was home to many artists during the late 19th and early 20th centuries; well-known author and editor Ludwig Thoma lived here for two years. The town is known for its proximity to the Dachau concentration camp, operated by Nazi Germany between 1933 and 1945, in which tens of thousands of prisoners died.
2023-05-21T08:19:33Z
[ "Template:Clear", "Template:See also", "Template:Reflist", "Template:Wikivoyage-inline", "Template:Convert", "Template:Cite web", "Template:Cities and towns in Dachau (district)", "Template:Wide image", "Template:Multiple image", "Template:In lang", "Template:Infobox German location", "Template:Flagicon", "Template:Commons category", "Template:Curlie", "Template:Authority control", "Template:IPA-de" ]
https://en.wikipedia.org/wiki/Dachau,_Bavaria
9,032
Drosophila
Drosophila (/drəˈsɒfɪlə, drɒ-, droʊ-/) is a genus of flies, belonging to the family Drosophilidae, whose members are often called "small fruit flies" or (less frequently) pomace flies, vinegar flies, or wine flies, a reference to the characteristic of many species to linger around overripe or rotting fruit. They should not be confused with the Tephritidae, a related family which are also called fruit flies (sometimes "true fruit flies"); tephritids feed primarily on unripe or ripe fruit, and many species are regarded as destructive agricultural pests, especially the Mediterranean fruit fly.

One species of Drosophila in particular, D. melanogaster, has been heavily used in genetics research and is a common model organism in developmental biology. The terms "fruit fly" and "Drosophila" are often used synonymously with D. melanogaster in modern biological literature. The entire genus, however, contains more than 1,500 species and is very diverse in appearance, behavior, and breeding habitat.

Etymology

The term "Drosophila", meaning "dew-loving", is a modern scientific Latin adaptation from the Greek words δρόσος, drósos, "dew", and φιλία, philía, "lover".

Morphology

Drosophila species are small flies, typically pale yellow to reddish brown to black, with red eyes. When the eyes (essentially a film of lenses) are removed, the brain is revealed. Drosophila brain structure and function develop and age significantly from the larval to the adult stage, and these developing brain structures make the flies prime candidates for neurogenetic research. Many species, including the noted Hawaiian picture-wings, have distinct black patterns on the wings. The plumose (feathery) arista, the bristling of the head and thorax, and the wing venation are characters used to diagnose the family. Most are small, about 2–4 millimetres (0.079–0.157 in) long, but some, especially many of the Hawaiian species, are larger than a house fly.

Evolution

Environmental challenge by natural toxins helped prepare Drosophila to detoxify DDT, by shaping the glutathione S-transferase mechanism that metabolizes both.

The Drosophila genome is subject to a high degree of selection, especially unusually widespread negative selection compared with other taxa. A majority of the genome is under selection of some sort, and a supermajority of this occurs in non-coding DNA.

Effective population size has been credibly suggested to correlate positively with the effect size of both negative and positive selection. Recombination is likely to be a significant source of diversity; there is evidence that crossover is positively correlated with polymorphism in Drosophila populations.

Biology

Drosophila species are found all around the world, with more species in the tropical regions. Drosophila made their way to the Hawaiian Islands and radiated into over 800 species. They can be found in deserts, tropical rainforest, cities, swamps, and alpine zones. Some northern species hibernate. The northern species D. montana is the best cold-adapted, and is primarily found at high altitudes. Most species breed in various kinds of decaying plant and fungal material, including fruit, bark, slime fluxes, flowers, and mushrooms. Fruit-breeding Drosophila species are attracted to various products of fermentation, especially ethanol and methanol. The fruits they exploit include those with a high pectin concentration, an indicator of how much alcohol will be produced during fermentation; citrus, morinda, apples, pears, plums, and apricots belong to this category.
The larvae of at least one species, D. suzukii, can also feed in fresh fruit and can sometimes be a pest. A few species have switched to being parasites or predators. Many species can be attracted to baits of fermented bananas or mushrooms, but others are not attracted to any kind of bait. Males may congregate at patches of suitable breeding substrate to compete for the females, or form leks, conducting courtship in an area separate from breeding sites.

Several Drosophila species, including D. melanogaster, D. immigrans, and D. simulans, are closely associated with humans and are often referred to as domestic species. These and other species (D. subobscura, and Zaprionus indianus from a related genus) have been accidentally introduced around the world by human activities such as fruit transports.

Males of this genus are known to have the longest sperm cells of any studied organism on Earth; one species, Drosophila bifurca, has sperm cells that are 58 mm (2.3 in) long. The cells mostly consist of a long, thread-like tail, and are delivered to the females in tangled coils. The other members of the genus also make relatively few giant sperm cells, with that of D. bifurca being the longest. D. melanogaster sperm cells are a more modest 1.8 mm long, although this is still about 35 times longer than a human sperm. Several species in the D. melanogaster species group are known to mate by traumatic insemination.

Drosophila species vary widely in their reproductive capacity. Those such as D. melanogaster that breed in large, relatively rare resources have ovaries that mature 10–20 eggs at a time, so that they can be laid together at one site. Others that breed in more abundant but less nutritious substrates, such as leaves, may lay only one egg per day. The eggs have one or more respiratory filaments near the anterior end; the tips of these extend above the surface and allow oxygen to reach the embryo. Larvae feed not on the vegetable matter itself, but on the yeasts and microorganisms present on the decaying breeding substrate. Development time varies widely between species (from 7 to more than 60 days) and depends on environmental factors such as temperature, breeding substrate, and crowding.

Fruit flies lay eggs in response to environmental cycles. Eggs laid at a time (e.g., night) at which the likelihood of survival is greater yield more larvae than eggs laid at other times (e.g., day). Ceteris paribus, the habit of laying eggs at this advantageous time would yield more surviving offspring, and more grandchildren, than the habit of laying eggs at other times. This differential reproductive success would cause D. melanogaster to adapt to environmental cycles, because the behavior carries a major reproductive advantage. Their median lifespan is 35–45 days.

The following discussion of courtship is based on Drosophila simulans and Drosophila melanogaster. The courtship behavior of male Drosophila serves to attract females, which respond according to their perception of the behavior displayed by the male. Male and female Drosophila use a variety of sensory cues to initiate and assess the courtship readiness of a potential mate. These cues include positioning, pheromone secretion, following the female, making tapping sounds with the legs, singing, wing spreading, creating wing vibrations, genitalia licking, bending the abdomen, attempted copulation, and the copulatory act itself.

The songs of Drosophila melanogaster and Drosophila simulans have been studied extensively. These luring songs are sinusoidal in nature and vary within and between species. The courtship behavior of Drosophila melanogaster has also been assessed for sex-related genes, which have been implicated in courtship behavior in both the male and the female. Recent experiments explore the roles of fruitless (fru) and doublesex (dsx), a group of sex-behaviour-linked genes. The fruitless (fru) gene in Drosophila helps regulate the network for male courtship behavior; when this gene is mutated, altered same-sex sexual behavior is observed in males. Male Drosophila carrying the fru mutation direct their courtship towards other males, as opposed to typical courtship, which is directed towards females. Loss of the fru mutation restores typical courtship behavior.

A novel class of pheromones was found to be conserved across the subgenus Drosophila in 11 desert-dwelling species. These pheromones are triacylglycerides secreted exclusively by males from the ejaculatory bulb and transferred to females during mating. Their function is to make the females unattractive to subsequent suitors and thus inhibit courtship by other males.

The following discussion of mating systems is based on Drosophila serrata, Drosophila pseudoobscura, Drosophila melanogaster, and Drosophila neotestacea. Polyandry is a prominent mating system among Drosophila, and mating with multiple partners has been a beneficial strategy for females. The benefits include both pre-copulatory and post-copulatory advantages. Pre-copulatory strategies are the behaviours associated with mate choice and the genetic contributions, such as the production of gametes, exhibited by both male and female Drosophila regarding mate choice. Post-copulatory strategies include sperm competition, mating frequency, and sex-ratio meiotic drive. These lists are not exhaustive. The number of mating partners of polyandrous Drosophila pseudoobscura in North America varies; there is a connection between the number of times a female chooses to mate and chromosomal variants of the third chromosome, and the presence of the inverted polymorphism is believed to be why females re-mate. The stability of these polymorphisms may be related to sex-ratio meiotic drive. For Drosophila subobscura, however, the main mating system is monandry, which is not normally seen in Drosophila.

The following discussion of sperm competition is based on Drosophila melanogaster, Drosophila simulans, and Drosophila mauritiana. Sperm competition is a process that polyandrous Drosophila females use to increase the fitness of their offspring. The female has two sperm-storage organs, the spermathecae and the seminal receptacle, which allow her to choose the sperm that will be used to inseminate her eggs; some species of Drosophila, however, have evolved to use only one or the other. Females have little control when it comes to cryptic female choice. Through cryptic choice, one of several post-copulatory mechanisms, females can detect and expel sperm, which reduces the possibility of inbreeding. Manier et al. 2013 categorized the post-copulatory sexual selection of Drosophila melanogaster, Drosophila simulans, and Drosophila mauritiana into three stages: insemination, sperm storage, and fertilizable sperm. Among these species there are variations at each stage that play a role in natural selection, and this sperm competition has been found to be a driving force in the establishment of reproductive isolation during speciation.

Parthenogenesis does not occur in D. melanogaster, but in the gyn-f9 mutant, gynogenesis occurs at low frequency. The natural populations of D. mangebeirai are entirely female, making it the only obligate parthenogenetic species of Drosophila. Parthenogenesis is facultative in D. parthenogenetica and D. mercatorum.

D. melanogaster is a popular experimental animal because it is easily cultured en masse out of the wild, has a short generation time, and mutant animals are readily obtainable. In 1906, Thomas Hunt Morgan began his work on D. melanogaster, and in 1910 he reported his first finding, a white-eyed mutant, to the academic community. He had been in search of a model organism to study genetic heredity and required a species that could randomly acquire genetic mutations that would visibly manifest as morphological changes in the adult animal. His work on Drosophila earned him the 1933 Nobel Prize in Physiology or Medicine for identifying chromosomes as the vector of inheritance for genes. This and other Drosophila species are widely used in studies of genetics, embryogenesis, chronobiology, speciation, neurobiology, and other areas.

However, some species of Drosophila are difficult to culture in the laboratory, often because they breed on a single specific host in the wild. For some, culturing is possible with particular recipes for rearing media, or by introducing chemicals such as sterols that are found in the natural host; for others, it is (so far) impossible. In some cases, the larvae can develop on normal Drosophila lab medium, but the female will not lay eggs; for these it is often simply a matter of putting in a small piece of the natural host to receive the eggs. The Drosophila Species Stock Center, located at Cornell University in Ithaca, New York, maintains cultures of hundreds of species for researchers.

Drosophila is considered one of the most valuable genetic model organisms; both adults and embryos are used as experimental models. Drosophila is a prime candidate for genetic research because human and fruit fly genes are so similar that disease-producing genes in humans can be linked to their counterparts in flies. The fly has approximately 15,500 genes on its four chromosomes, whereas humans have about 22,000 genes among their 23 chromosomes; the density of genes per chromosome is thus higher in Drosophila than in the human genome. The low and manageable number of chromosomes makes Drosophila species easier to study. These flies also carry genetic information and pass down traits through the generations, much as humans do. The traits can be studied through different Drosophila lineages, and the findings applied to deduce genetic trends in humans. Research conducted on Drosophila has helped determine the ground rules for the transmission of genes in many organisms. Drosophila is also a useful in vivo tool for analyzing Alzheimer's disease. Rhomboid proteases were first detected in Drosophila but were then found to be highly conserved across eukaryotes, mitochondria, and bacteria. Melanin's ability to protect DNA against ionizing radiation has been most extensively demonstrated in Drosophila, including in the formative study by Hopwood et al. 1985.
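The gene-density comparison above is simple arithmetic; the following minimal Python sketch makes it explicit, using only the approximate gene and chromosome counts quoted in the preceding paragraph.

    # Back-of-the-envelope check of the gene-density claim above, using the
    # approximate counts quoted in the text (they are rounded figures).
    fly_genes, fly_chromosomes = 15_500, 4
    human_genes, human_chromosomes = 22_000, 23

    fly_density = fly_genes / fly_chromosomes        # ~3,875 genes per chromosome
    human_density = human_genes / human_chromosomes  # ~957 genes per chromosome

    print(f"Drosophila: {fly_density:.0f} genes/chromosome")
    print(f"Human:      {human_density:.0f} genes/chromosome")
    print(f"Drosophila density is {fly_density / human_density:.1f}x the human figure")

On these figures, Drosophila packs roughly four times as many genes onto each chromosome as the human genome does, which is the sense in which its gene density is "higher".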
Like other animals, Drosophila is associated with various bacteria in its gut. The fly gut microbiota, or microbiome, seems to have a central influence on Drosophila fitness and life-history characteristics, and the gut microbiota of Drosophila is an active field of current research.

Drosophila species also harbour vertically transmitted endosymbionts such as Wolbachia and Spiroplasma. These endosymbionts can act as reproductive manipulators, causing for example the cytoplasmic incompatibility induced by Wolbachia or the male killing induced by the D. melanogaster Spiroplasma poulsonii strain named MSRO. The male-killing factor of the MSRO strain was discovered in 2018, solving a decades-old mystery about the cause of male killing; it represents the first bacterial factor shown to affect eukaryotic cells in a sex-specific fashion and the first mechanism identified for male-killing phenotypes. Alternatively, endosymbionts may protect their hosts from infection: Wolbachia in Drosophila can reduce viral loads upon infection and is being explored as a means of controlling viral diseases (e.g. dengue fever) by transferring these Wolbachia to disease-vector mosquitoes. The S. poulsonii strain of Drosophila neotestacea protects its host from parasitic wasps and nematodes using toxins that preferentially attack the parasites rather than the host.

Since Drosophila is one of the most used model organisms, it has been employed extensively in genetics; however, the effect that abiotic factors such as temperature have on the Drosophila microbiome has only recently attracted great interest. Certain variations in temperature have an impact on the microbiome: higher temperatures (31 °C) lead to an increase of Acetobacter populations in the gut microbiome of Drosophila melanogaster compared with lower temperatures (13 °C). At low temperatures (13 °C), the flies were more cold-resistant and also had the highest concentration of Wolbachia.

The gut microbiome can also be transplanted between organisms. Drosophila melanogaster became more cold-tolerant when given the gut microbiota of D. melanogaster reared at low temperatures, indicating that the gut microbiome is linked to physiological processes.

Moreover, the microbiome plays a role in aggression, immunity, egg-laying preferences, locomotion, and metabolism. Aggression in turn plays a role, to a certain degree, during courtship: germ-free flies were observed to be less competitive than wild-type males. The microbiome of Drosophila is also known to promote aggression through octopamine (OA) signalling, and it has been shown to affect these fruit flies' social interactions, specifically the aggressive behaviour seen during courtship and mating.

Drosophila species are prey for many generalist predators, such as robber flies. In Hawaii, the introduction of yellowjackets from the mainland United States has led to the decline of many of the larger species. The larvae are preyed on by other fly larvae, staphylinid beetles, and ants.

As with many eukaryotes, this genus is known to express SNAREs, and as in several other taxa, the components of the SNARE complex are somewhat substitutable: although the loss of SNAP-25, a component of neuronal SNAREs, is lethal, SNAP-24 can fully replace it. For another example, an R-SNARE not normally found in synapses can substitute for synaptobrevin.

The Spätzle protein is a ligand of Toll. In addition to melanin's more commonly known roles in the exoskeleton and in neurochemistry, melanization is one step in the immune response to some pathogens. Dudzic et al. 2019 additionally found a large number of shared serine protease messengers between the Spätzle/Toll pathway and melanization, and a large amount of crosstalk between these pathways.

Systematics

The genus Drosophila as currently defined is paraphyletic (see below) and contains 1,450 described species, while the total number of species is estimated in the thousands. The majority of the species are members of two subgenera: Drosophila (about 1,100 species) and Sophophora (including D. (S.) melanogaster; around 330 species). The Hawaiian species of Drosophila (estimated to number more than 500, with roughly 380 described) are sometimes recognized as a separate genus or subgenus, Idiomyia, but this is not widely accepted. About 250 species are part of the genus Scaptomyza, which arose from the Hawaiian Drosophila and later recolonized continental areas.

Evidence from phylogenetic studies suggests these genera arose from within the genus Drosophila:

Several of the subgeneric and generic names are based on anagrams of Drosophila, including Dorsilopha, Lordiphosa, Siphlodora, Phloridosa, and Psilodorha.

Genetics

Drosophila species are extensively used as model organisms in genetics (including population genetics), cell biology, biochemistry, and especially developmental biology; extensive efforts have therefore been made to sequence drosophilid genomes. The genomes of these species have been fully sequenced:

The data have been used for many purposes, including evolutionary genome comparisons. D. simulans and D. sechellia are sister species and produce viable offspring when crossed, while D. melanogaster and D. simulans produce infertile hybrid offspring. The Drosophila genome is often compared with the genomes of more distantly related species such as the honeybee Apis mellifera or the mosquito Anopheles gambiae. The modENCODE consortium is currently sequencing eight more Drosophila genomes, and even more genomes are being sequenced by the i5K consortium. Curated data are available at FlyBase.

The Drosophila 12 Genomes Consortium – led by Andrew G. Clark, Michael Eisen, Douglas Smith, Casey Bergman, Brian Oliver, Therese Ann Markow, Thomas Kaufman, Manolis Kellis, William Gelbart, Venky Iyer, Daniel Pollard, Timothy Sackton, Amanda Larracuente, Nadia Singh, and including Wojciech Makalowski, Mohamed Noor, Temple F. Smith, Craig Venter, Peter Keightley, and Leonid Boguslavsky among its contributors – presented ten new genomes and combined them with the previously released genomes of D. melanogaster and D. pseudoobscura to analyse the evolutionary history and common genomic structure of the genus. This included the discovery of transposable elements (TEs) and the illumination of their evolutionary history. Bartolomé et al. 2009 found that at least one-third of the TEs in D. melanogaster, D. simulans, and D. yakuba had been acquired by horizontal transfer, at an average rate of 0.035 horizontal-transfer events per TE family per million years. Horizontally transferred TEs also track other relatedness metrics: transfer events between D. melanogaster and D. simulans are twice as common as events between either of those species and D. yakuba.
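To make the quoted transfer rate concrete, the short Python sketch below converts it into an expected event count. The number of TE families and the timescale are hypothetical inputs chosen purely for illustration, not values reported by Bartolomé et al. 2009.

    # Expected horizontal-transfer (HT) events implied by the quoted rate of
    # 0.035 HT events per TE family per million years. The family count and
    # timescale below are hypothetical, illustrative assumptions.
    ht_rate_per_family_per_myr = 0.035   # rate quoted in the text
    n_te_families = 100                  # hypothetical number of TE families compared
    timescale_myr = 10                   # hypothetical divergence time (million years)

    expected_ht_events = ht_rate_per_family_per_myr * n_te_families * timescale_myr
    print(f"Expected HT events: {expected_ht_events:.0f}")  # 0.035 * 100 * 10 = 35

Under these assumed inputs, the rate implies on the order of a few dozen transfer events, which illustrates why horizontal transfer can account for a substantial fraction of TEs over evolutionary timescales.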
[ { "paragraph_id": 0, "text": "Drosophila (/drəˈsɒfɪlə, drɒ-, droʊ-/) is a genus of flies, belonging to the family Drosophilidae, whose members are often called \"small fruit flies\" or (less frequently) pomace flies, vinegar flies, or wine flies, a reference to the characteristic of many species to linger around overripe or rotting fruit. They should not be confused with the Tephritidae, a related family, which are also called fruit flies (sometimes referred to as \"true fruit flies\"); tephritids feed primarily on unripe or ripe fruit, with many species being regarded as destructive agricultural pests, especially the Mediterranean fruit fly.", "title": "" }, { "paragraph_id": 1, "text": "One species of Drosophila in particular, D. melanogaster, has been heavily used in research in genetics and is a common model organism in developmental biology. The terms \"fruit fly\" and \"Drosophila\" are often used synonymously with D. melanogaster in modern biological literature. The entire genus, however, contains more than 1,500 species and is very diverse in appearance, behavior, and breeding habitat.", "title": "" }, { "paragraph_id": 2, "text": "The term \"Drosophila\", meaning \"dew-loving\", is a modern scientific Latin adaptation from Greek words δρόσος, drósos, \"dew\", and φιλία, philía, \"lover\".", "title": "Etymology" }, { "paragraph_id": 3, "text": "Drosophila species are small flies, typically pale yellow to reddish brown to black, with red eyes. When the eyes (essentially a film of lenses) are removed, the brain is revealed. Drosophila brain structure and function develop and age significantly from larval to adult stage. Developing brain structures make these flies a prime candidate for neuro-genetic research. Many species, including the noted Hawaiian picture-wings, have distinct black patterns on the wings. The plumose (feathery) arista, bristling of the head and thorax, and wing venation are characters used to diagnose the family. Most are small, about 2–4 millimetres (0.079–0.157 in) long, but some, especially many of the Hawaiian species, are larger than a house fly.", "title": "Morphology" }, { "paragraph_id": 4, "text": "Environmental challenge by natural toxins helped to prepare Drosophilae to detox DDT, by shaping the glutathione S-transferase mechanism that metabolizes both.", "title": "Evolution" }, { "paragraph_id": 5, "text": "The Drosophila genome is subject to a high degree of selection, especially unusually widespread negative selection compared to other taxa. A majority of the genome is under selection of some sort, and a supermajority of this is occurring in non-coding DNA.", "title": "Evolution" }, { "paragraph_id": 6, "text": "Effective population size has been credibly suggested to positively correlate with the effect size of both negative and positive selection. Recombination is likely to be a significant source of diversity. There is evidence that crossover is positively correlated with polymorphism in D. populations.", "title": "Evolution" }, { "paragraph_id": 7, "text": "Drosophila species are found all around the world, with more species in the tropical regions. Drosophila made their way to the Hawaiian Islands and radiated into over 800 species. They can be found in deserts, tropical rainforest, cities, swamps, and alpine zones. Some northern species hibernate. The northern species D. montana is the best cold-adapted, and is primarily found at high altitudes. 
Most species breed in various kinds of decaying plant and fungal material, including fruit, bark, slime fluxes, flowers, and mushrooms. Drosophila species that are fruit-breeding are attracted to various products of fermentation, especially ethanol and methanol. Fruits exploited by Drosophila species include those with a high pectin concentration, which is an indicator of how much alcohol will be produced during fermentation. Citrus, morinda, apples, pears, plums, and apricots belong into this category.", "title": "Biology" }, { "paragraph_id": 8, "text": "The larvae of at least one species, D. suzukii, can also feed in fresh fruit and can sometimes be a pest. A few species have switched to being parasites or predators. Many species can be attracted to baits of fermented bananas or mushrooms, but others are not attracted to any kind of baits. Males may congregate at patches of suitable breeding substrate to compete for the females, or form leks, conducting courtship in an area separate from breeding sites.", "title": "Biology" }, { "paragraph_id": 9, "text": "Several Drosophila species, including Drosophila melanogaster, D. immigrans, and D. simulans, are closely associated with humans, and are often referred to as domestic species. These and other species (D. subobscura, and from a related genus Zaprionus indianus) have been accidentally introduced around the world by human activities such as fruit transports.", "title": "Biology" }, { "paragraph_id": 10, "text": "Males of this genus are known to have the longest sperm cells of any studied organism on Earth, including one species, Drosophila bifurca, that has sperm cells that are 58 mm (2.3 in) long. The cells mostly consist of a long, thread-like tail, and are delivered to the females in tangled coils. The other members of the genus Drosophila also make relatively few giant sperm cells, with that of D. bifurca being the longest. D. melanogaster sperm cells are a more modest 1.8 mm long, although this is still about 35 times longer than a human sperm. Several species in the D. melanogaster species group are known to mate by traumatic insemination.", "title": "Biology" }, { "paragraph_id": 11, "text": "Drosophila species vary widely in their reproductive capacity. Those such as D. melanogaster that breed in large, relatively rare resources have ovaries that mature 10–20 eggs at a time, so that they can be laid together on one site. Others that breed in more-abundant but less nutritious substrates, such as leaves, may only lay one egg per day. The eggs have one or more respiratory filaments near the anterior end; the tips of these extend above the surface and allow oxygen to reach the embryo. Larvae feed not on the vegetable matter itself, but on the yeasts and microorganisms present on the decaying breeding substrate. Development time varies widely between species (between 7 and more than 60 days) and depends on the environmental factors such as temperature, breeding substrate, and crowding.", "title": "Biology" }, { "paragraph_id": 12, "text": "Fruit flies lay eggs in response to environmental cycles. Eggs laid at a time (e.g., night) during which likelihood of survival is greater than in eggs laid at other times (e.g., day) yield more larvae than eggs that were laid at those times. Ceteris paribus, the habit of laying eggs at this 'advantageous' time would yield more surviving offspring, and more grandchildren, than the habit of laying eggs during other times. This differential reproductive success would cause D. 
melanogaster to adapt to environmental cycles, because this behavior has a major reproductive advantage.", "title": "Biology" }, { "paragraph_id": 13, "text": "Their median lifespan is 35–45 days.", "title": "Biology" }, { "paragraph_id": 14, "text": "The following section is based on the following Drosophila species: Drosophila simulans and Drosophila melanogaster.", "title": "Biology" }, { "paragraph_id": 15, "text": "Courtship behavior of male Drosophila is an attractive behaviour. Females respond via their perception of the behavior portrayed by the male. Male and female Drosophila use a variety of sensory cues to initiate and assess courtship readiness of a potential mate. The cues include the following behaviours: positioning, pheromone secretion, following females, making tapping sounds with legs, singing, wing spreading, creating wing vibrations, genitalia licking, bending the stomach, attempt to copulate, and the copulatory act itself. The songs of Drosophila melanogaster and Drosophila simulans have been studied extensively. These luring songs are sinusoidal in nature and varies within and between species.", "title": "Biology" }, { "paragraph_id": 16, "text": "The courtship behavior of Drosophila melanogaster has also been assessed for sex-related genes, which have been implicated in courtship behavior in both the male and female. Recent experiments explore the role of fruitless (fru) and doublesex (dsx), a group of sex-behaviour linked genes.", "title": "Biology" }, { "paragraph_id": 17, "text": "The fruitless (fru) gene in Drosophila helps regulate the network for male courtship behavior; when a mutation to this gene occurs altered same sex sexual behavior in males is observed. Male Drosophila with the fru mutation direct their courtship towards other males as opposed to typical courtship, which would be directed towards females. Loss of the fru mutation leads back to the typical courtship behavior.", "title": "Biology" }, { "paragraph_id": 18, "text": "A novel class of pheromones was found to be conserved across the subgenus Drosophila in 11 desert dwelling species. These pheromones are triacylglycerides that are secreted exclusively by males from their ejaculatory bulb and transferred to females during mating. The function of the pheromones is to make the females unattractive to subsequent suitors and thus inhibit courtship by other males.", "title": "Biology" }, { "paragraph_id": 19, "text": "The following section is based on the following Drosophila species: Drosophila serrata, Drosophila pseudoobscura, Drosophila melanogaster, and Drosophila neotestacea. Polyandry is a prominent mating system among Drosophila. Females mating with multiple sex partners has been a beneficial mating strategy for Drosophila. The benefits include both pre and post copulatory mating. Pre-copulatory strategies are the behaviours associated with mate choice and the genetic contributions, such as production of gametes, that are exhibited by both male and female Drosophila regarding mate choice. Post copulatory strategies include sperm competition, mating frequency, and sex-ratio meiotic drive.", "title": "Biology" }, { "paragraph_id": 20, "text": "These lists are not inclusive. Polyandry among the Drosophila pseudoobscura in North America vary in their number of mating partners. There is a connection between the number of time females choose to mate and chromosomal variants of the third chromosome. It is believed that the presence of the inverted polymorphism is why re-mating by females occurs. 
The stability of these polymorphisms may be related to the sex-ratio meiotic drive.", "title": "Biology" }, { "paragraph_id": 21, "text": "However, for Drosophila subobscura, the main mating system is monandry, not normally seen in Drosophila.", "title": "Biology" }, { "paragraph_id": 22, "text": "The following section is based on the following Drosophila species: Drosophila melanogaster, Drosophila simulans, and Drosophila mauritiana. Sperm competition is a process that polyandrous Drosophila females use to increase the fitness of their offspring. The female Drosophila has two sperm storage organs, the spermathecae and seminal receptacle, that allows her to choose the sperm that will be used to inseminate her eggs. However, some species of Drosophila have evolved to only use one or the other. Females have little control when it comes to cryptic female choice. Female Drosophila through cryptic choice, one of several post-copulatory mechanisms, which allows for the detection and expelling of sperm that reduces inbreeding possibilities. Manier et al. 2013 has categorized the post copulatory sexual selection of Drosophila melanogaster, Drosophila simulans, and Drosophila mauritiana into the following three stages: insemination, sperm storage, and fertilizable sperm. Among the preceding species there are variations at each stage that play a role in the natural selection process. This sperm competition has been found to be a driving force in the establishment of reproductive isolation during speciation.", "title": "Biology" }, { "paragraph_id": 23, "text": "Parthenogenesis does not occur in D. melanogaster, but in the gyn-f9 mutant, gynogenesis occurs at low frequency. The natural populations of D. mangebeirai are entirely female, making it the only obligate parthenogenetic species of Drosophila. Parthenogenesis is facultative in parthenogenetica and mercatorum.", "title": "Biology" }, { "paragraph_id": 24, "text": "D. melanogaster is a popular experimental animal because it is easily cultured en masse out of the wild, has a short generation time, and mutant animals are readily obtainable. In 1906, Thomas Hunt Morgan began his work on D. melanogaster and reported his first finding of a white eyed mutant in 1910 to the academic community. He was in search of a model organism to study genetic heredity and required a species that could randomly acquire genetic mutation that would visibly manifest as morphological changes in the adult animal. His work on Drosophila earned him the 1933 Nobel Prize in Medicine for identifying chromosomes as the vector of inheritance for genes. This and other Drosophila species are widely used in studies of genetics, embryogenesis, chronobiology, speciation, neurobiology, and other areas.", "title": "Biology" }, { "paragraph_id": 25, "text": "However, some species of Drosophila are difficult to culture in the laboratory, often because they breed on a single specific host in the wild. For some, it can be done with particular recipes for rearing media, or by introducing chemicals such as sterols that are found in the natural host; for others, it is (so far) impossible. 
In some cases, the larvae can develop on normal Drosophila lab medium, but the female will not lay eggs; for these it is often simply a matter of putting in a small piece of the natural host to receive the eggs.", "title": "Biology" }, { "paragraph_id": 26, "text": "The Drosophila Species Stock Center located at Cornell University in Ithaca, New York, maintains cultures of hundreds of species for researchers.", "title": "Biology" }, { "paragraph_id": 27, "text": "Drosophila is considered one of the most valuable genetic model organisms; both adults and embryos are experimental models. Drosophila is a prime candidate for genetic research because the relationship between human and fruit fly genes is very close. Human and fruit fly genes are so similar, that disease-producing genes in humans can be linked to those in flies. The fly has approximately 15,500 genes on its four chromosomes, whereas humans have about 22,000 genes among their 23 chromosomes. Thus the density of genes per chromosome in Drosophila is higher than the human genome. Low and manageable number of chromosomes make Drosophila species easier to study. These flies also carry genetic information and pass down traits throughout generations, much like their human counterparts. The traits can then be studied through different Drosophila lineages and the findings can be applied to deduce genetic trends in humans. Research conducted on Drosophila help determine the ground rules for transmission of genes in many organisms. Drosophila is a useful in vivo tool to analyze Alzheimer's disease. Rhomboid proteases were first detected in Drosophila but then found to be highly conserved across eukaryotes, mitochondria, and bacteria. Melanin's ability to protect DNA against ionizing radiation has been most extensively demonstrated in Drosophila, including in the formative study by Hopwood et al 1985.", "title": "Biology" }, { "paragraph_id": 28, "text": "Like other animals, Drosophila is associated with various bacteria in its gut. The fly gut microbiota or microbiome seems to have a central influence on Drosophila fitness and life history characteristics. The microbiota in the gut of Drosophila represents an active current research field.", "title": "Biology" }, { "paragraph_id": 29, "text": "Drosophila species also harbour vertically transmitted endosymbionts, such as Wolbachia and Spiroplasma. These endosymbionts can act as reproductive manipulators, such as cytoplasmic incompatibility induced by Wolbachia or male-killing induced by the D. melanogaster Spiroplasma poulsonii (named MSRO). The male-killing factor of the D. melanogaster MSRO strain was discovered in 2018, solving a decades-old mystery of the cause of male-killing. This represents the first bacterial factor that affects eukaryotic cells in a sex-specific fashion, and is the first mechanism identified for male-killing phenotypes. Alternatively, they may protect theirs hosts from infection. Drosophila Wolbachia can reduce viral loads upon infection, and is explored as a mechanism of controlling viral diseases (e.g. Dengue fever) by transferring these Wolbachia to disease-vector mosquitoes. The S. poulsonii strain of Drosophila neotestacea protects its host from parasitic wasps and nematodes using toxins that preferentially attack the parasites instead of the host.", "title": "Biology" }, { "paragraph_id": 30, "text": "Since the Drosophila species is one of the most used model organisms, it was vastly used in genetics. 
However, the effect that abiotic factors such as temperature have on the microbiome of Drosophila species has recently been of great interest. Certain variations in temperature have an impact on the microbiome. It was observed that higher temperatures (31 °C) led to an increase in Acetobacter populations in the gut microbiome of Drosophila melanogaster as compared to lower temperatures (13 °C). At low temperatures (13 °C), the flies were more cold-resistant and also had the highest concentration of Wolbachia.", "title": "Biology" }, { "paragraph_id": 31, "text": "The microbiome in the gut can also be transplanted among organisms. It was found that Drosophila melanogaster became more cold-tolerant when given the gut microbiota of flies that had been reared at low temperatures. This showed that the gut microbiome is linked to physiological processes.", "title": "Biology" }, { "paragraph_id": 32, "text": "Moreover, the microbiome plays a role in aggression, immunity, egg-laying preferences, locomotion and metabolism. As for aggression, the microbiome plays a role to a certain degree during courtship: germ-free males were observed to be less competitive than wild-type males. The microbiome of Drosophila species is also known to promote aggression through octopamine (OA) signalling. The microbiome has thus been shown to impact these fruit flies' social interactions, specifically the aggressive behaviour seen during courtship and mating.", "title": "Biology" }, { "paragraph_id": 33, "text": "Drosophila species are prey for many generalist predators, such as robber flies. In Hawaii, the introduction of yellowjackets from the mainland United States has led to the decline of many of the larger species. The larvae are preyed on by other fly larvae, staphylinid beetles, and ants.", "title": "Biology" }, { "paragraph_id": 34, "text": "As with many eukaryotes, this genus is known to express SNAREs, and as with several others, the components of the SNARE complex are known to be somewhat substitutable: although the loss of SNAP-25, a component of neuronal SNAREs, is lethal, SNAP-24 can fully replace it. For another example, an R-SNARE not normally found in synapses can substitute for synaptobrevin.", "title": "Biology" }, { "paragraph_id": 35, "text": "The Spätzle protein is a ligand of Toll. In addition to melanin's more commonly known roles in the exoskeleton and in neurochemistry, melanization is one step in the immune responses to some pathogens. Dudzic et al. 2019 additionally find a large number of shared serine protease messengers between the Spätzle/Toll and melanization pathways and a large amount of crosstalk between them.", "title": "Biology" }, { "paragraph_id": 36, "text": "The genus Drosophila as currently defined is paraphyletic (see below) and contains 1,450 described species, while the total number of species is estimated at thousands. The majority of the species are members of two subgenera: Drosophila (about 1,100 species) and Sophophora (including D. (S.) melanogaster; around 330 species).", "title": "Systematics" }, { "paragraph_id": 37, "text": "The Hawaiian species of Drosophila (estimated to be more than 500, with roughly 380 species described) are sometimes recognized as a separate genus or subgenus, Idiomyia, but this is not widely accepted. 
About 250 species are part of the genus Scaptomyza, which arose from the Hawaiian Drosophila and later recolonized continental areas.", "title": "Systematics" }, { "paragraph_id": 38, "text": "Evidence from phylogenetic studies suggests these genera arose from within the genus Drosophila:", "title": "Systematics" }, { "paragraph_id": 39, "text": "Several of the subgeneric and generic names are based on anagrams of Drosophila, including Dorsilopha, Lordiphosa, Siphlodora, Phloridosa, and Psilodorha.", "title": "Systematics" }, { "paragraph_id": 40, "text": "Drosophila species are extensively used as model organisms in genetics (including population genetics), cell biology, biochemistry, and especially developmental biology. Therefore, extensive efforts are made to sequence drosophilid genomes. The genomes of these species have been fully sequenced:", "title": "Genetics" }, { "paragraph_id": 41, "text": "The data have been used for many purposes, including evolutionary genome comparisons. D. simulans and D. sechellia are sister species, and provide viable offspring when crossed, while D. melanogaster and D. simulans produce infertile hybrid offspring. The Drosophila genome is often compared with the genomes of more distantly related species such as the honeybee Apis mellifera or the mosquito Anopheles gambiae.", "title": "Genetics" }, { "paragraph_id": 42, "text": "The modENCODE consortium is currently sequencing eight more Drosophila genomes, and even more genomes are being sequenced by the i5K consortium.", "title": "Genetics" }, { "paragraph_id": 43, "text": "Curated data are available at FlyBase.", "title": "Genetics" }, { "paragraph_id": 44, "text": "The Drosophila 12 Genomes Consortium – led by Andrew G. Clark, Michael Eisen, Douglas Smith, Casey Bergman, Brian Oliver, Therese Ann Markow, Thomas Kaufman, Manolis Kellis, William Gelbart, Venky Iyer, Daniel Pollard, Timothy Sackton, Amanda Larracuente, Nadia Singh, and including Wojciech Makalowski, Mohamed Noor, Temple F. Smith, Craig Venter, Peter Keightley, and Leonid Boguslavsky among its contributors – presents ten new genomes and combines those with previously released genomes for D. melanogaster and D. pseudoobscura to analyse the evolutionary history and common genomic structure of the genus. This includes the discovery of transposable elements (TEs) and illumination of their evolutionary history. Bartolomé et al. 2009 find that at least one-third of the TEs in D. melanogaster, D. simulans, and D. yakuba have been acquired by horizontal transfer, at an average rate of 0.035 horizontal-transfer events per TE family per million years. Bartolomé also finds that horizontally transferred TEs track other relatedness metrics, with transfer events between D. melanogaster and D. simulans being twice as common as events between either of those species and D. yakuba.", "title": "Genetics" } ]
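A minimal sketch of the gene-density comparison referenced in paragraph 27 above, assuming nothing beyond the approximate figures quoted there (about 15,500 genes on 4 chromosomes for the fly, about 22,000 genes on 23 chromosomes for humans); the helper name genes_per_chromosome is ours, purely for illustration.

# Minimal illustrative sketch (not from the article): works out the
# per-chromosome gene density comparison using the approximate figures
# quoted in paragraph 27. The gene and chromosome counts are rounded
# estimates, not exact values.

def genes_per_chromosome(gene_count: int, chromosome_count: int) -> float:
    """Average number of genes per chromosome."""
    return gene_count / chromosome_count

fly = genes_per_chromosome(15_500, 4)     # 15,500 / 4  = 3875.0
human = genes_per_chromosome(22_000, 23)  # 22,000 / 23 ~= 956.5

print(f"Drosophila: ~{fly:,.0f} genes per chromosome")
print(f"Human:      ~{human:,.0f} genes per chromosome")
print(f"Drosophila chromosomes are ~{fly / human:.1f}x gene-denser on average")

Running this prints roughly 3,875 genes per chromosome for Drosophila against roughly 957 for humans, about a fourfold difference, which is the arithmetic behind the claim in paragraph 27.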
Drosophila is a genus of flies, belonging to the family Drosophilidae, whose members are often called "small fruit flies" or pomace flies, vinegar flies, or wine flies, a reference to the characteristic of many species to linger around overripe or rotting fruit. They should not be confused with the Tephritidae, a related family, which are also called fruit flies; tephritids feed primarily on unripe or ripe fruit, with many species being regarded as destructive agricultural pests, especially the Mediterranean fruit fly. One species of Drosophila in particular, D. melanogaster, has been heavily used in research in genetics and is a common model organism in developmental biology. The terms "fruit fly" and "Drosophila" are often used synonymously with D. melanogaster in modern biological literature. The entire genus, however, contains more than 1,500 species and is very diverse in appearance, behavior, and breeding habitat.
2001-12-29T11:50:12Z
2023-12-22T13:39:40Z
[ "Template:About", "Template:Expand section", "Template:Multiple image", "Template:Small", "Template:Pp-move-indef", "Template:IPAc-en", "Template:Citation needed", "Template:Fraction", "Template:Reflist", "Template:Cite web", "Template:Cite journal", "Template:Lay source", "Template:Taxonbar", "Template:Visible anchor", "Template:Cite book", "Template:Cite encyclopedia", "Template:Page needed", "Template:Portal bar", "Template:Convert", "Template:Rp", "Template:Cladogram", "Template:Wikispecies", "Template:Automatic taxobox", "Template:Clarify", "Template:Frac", "Template:Refn", "Template:Commons category", "Template:Short description", "Template:Lang", "Template:Model Organisms", "Template:Endash", "Template:Cite conference", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Drosophila
9,033
Dictatorship
A dictatorship is an autocratic form of government which is characterized by a leader, or a group of leaders, who hold governmental powers with few to no limitations. Politics in a dictatorship are controlled by a dictator, and they are facilitated through an inner circle of elites that includes advisers, generals, and other high-ranking officials. The dictator maintains control by influencing and appeasing the inner circle and repressing any opposition, which may include rival political parties, armed resistance, or disloyal members of the dictator's inner circle. Dictatorships can be formed by a military coup that overthrows the previous government through force, or they can be formed by a self-coup in which elected leaders make their rule permanent. Dictatorships are authoritarian or totalitarian, and they can be classified as military dictatorships, one-party dictatorships, personalist dictatorships, or absolute monarchies. The use of the term "dictatorship" emerged in the Roman Republic, referring to "a temporary grant of absolute power to a leader to handle some emergency." The earliest military dictatorships developed in the post-classical era, particularly in Shogun-era Japan and in England under Cromwell. Modern dictatorships first developed in the 19th century; these included Bonapartism in Europe and caudillos in Latin America. The 20th century saw the rise of fascist and communist dictatorships in Europe; fascism was eradicated in the aftermath of World War II in 1945, while communism spread to other continents, maintaining prominence until the end of the Cold War in 1991. The 20th century also saw the rise of personalist dictatorships in Africa and military dictatorships in Latin America, both of which became prominent in the 1960s and 1970s. The period following the collapse of the Soviet Union witnessed a sporadic rise in democracies across the world, despite several dictatorships persisting into the 21st century, particularly in Africa and Asia. During the early 21st century, democratic governments came to outnumber authoritarian states by 98 to 80. The second decade was marked by a "democratic recession", following the 2008 global financial crisis, which drastically reduced the appeal of the Western model across the world. By 2019, the number of authoritarian governments had again surpassed that of democracies, 92 to 87. Dictatorships often attempt to present a democratic facade, frequently holding elections in order to establish their legitimacy or provide incentives to members of the ruling party, but these elections are not competitive for the opposition. Stability in a dictatorship is maintained through coercion and political repression, which involves the restriction of access to information, the tracking of the political opposition, and acts of violence. Dictatorships that fail to repress the opposition are susceptible to collapse through a coup or a revolution. The power structures of dictatorships vary, and different definitions of dictatorship consider different elements of this structure. Political scientists such as Juan José Linz and Samuel P. Huntington identify key attributes that define the power structure of a dictatorship, including a single leader or a small group of leaders, the exercise of power with few limitations, limited political pluralism, and limited mass mobilization. The dictator exercises most or total power over the government and society, but sometimes elites are necessary to carry out the dictator's rule. 
They form an inner circle, making up a class of elites that hold a degree of power within the dictatorship and receive benefits in exchange for their support. They may be military officers, party members, or friends or family of the dictator. Elites are also the primary political threats of a dictator, as they can leverage their power to influence or overthrow the dictatorship. The inner circle's support is necessary for a dictator's orders to be carried out, causing elites to serve as a check on the dictator's power. To enact policy, a dictator must either appease the regime's elites or attempt to replace them. Elites must also compete to wield more power than one another, but the amount of power held by elites also depends on their unity. Factions or divisions among the elites will mitigate their ability to bargain with the dictator, resulting in the dictator having more unrestrained power. A unified inner circle has the capacity to overthrow a dictator, and the dictator must make greater concessions to the inner circle to stay in power. This is particularly true when the inner circle is made up of military officers that have the resources to carry out a military coup. The opposition to a dictatorship represents all of the factions that are not part of the dictatorship and anyone that does not support the regime. Organized opposition is a threat to the stability of a dictatorship, as it seeks to undermine public support for the dictator and calls for regime change. A dictator may address the opposition by repressing it through force, modifying laws to restrict its power, or appeasing it with limited benefits. The opposition can be an external group, or it can also include current and former members of the dictator's inner circle. Totalitarianism is a variation of dictatorship characterized by the presence of a single political party and more specifically, by a powerful leader who imposes personal and political prominence. Power is enforced through a steadfast collaboration between the government and a highly developed ideology. A totalitarian government has "total control of mass communications and social and economic organizations". Political philosopher Hannah Arendt describes totalitarianism as a new and extreme form of dictatorship composed of "atomized, isolated individuals" in which ideology plays a leading role in defining how the entire society should be organized. Political scientist Juan José Linz identifies a spectrum of political systems with democracies and totalitarian regimes separated by authoritarian regimes with varied classifications of hybrid systems. He describes totalitarian regimes as exercising control over politics and political mobilization rather than merely suppressing it. A dictatorship is formed when a specific group seizes power, with the composition of this group affecting how power is seized and how the eventual dictatorship will rule. The group may be military or political, it may be organized or disorganized, and it may disproportionately represent a certain demographic. After power is seized, the group must determine what positions its members will hold in the new government and how this government will operate, sometimes resulting in disagreements that split the group. Members of the group will typically make up the elites in a dictator's inner circle at the beginning of a new dictatorship, though the dictator may remove them as a means to gain additional power. 
Unless they have undertaken a self-coup, those seizing power typically have little governmental experience and do not have a detailed policy plan in advance. If the dictator has not seized power through a political party, then a party may be formed as a mechanism to reward supporters and to concentrate power in the hands of political allies instead of militant allies. Parties formed after the seizure of power often have little influence and only exist to serve the dictator. Most dictatorships are formed through military means or through a political party. Nearly half of dictatorships start as a military coup, though others have been started by foreign intervention, elected officials ending competitive elections, insurgent takeovers, popular uprisings by citizens, or legal maneuvering by autocratic elites to take power within their government. Between 1946 and 2010, 42% of dictatorships began by overthrowing a different dictatorship, and 26% began after achieving independence from a foreign government. Many others developed following a period of warlordism. A classification of dictatorships, which began with political scientist Barbara Geddes in 1999, focuses on where power lies. Under this system, there are three types of dictatorships. Military dictatorships are controlled by military officers, one-party dictatorships are controlled by the leadership of a political party, and personalist dictatorships are controlled by a single individual. In some circumstances, monarchies are also considered dictatorships if the monarchs hold a significant amount of political power. Hybrid dictatorships are regimes that have a combination of these classifications. Military dictatorships are regimes in which military officers hold power, determine who will lead the country, and exercise influence over policy. They are most common in developing nations in Africa, Asia, and Latin America. They are often unstable, and the average duration of a military dictatorship is only five years, but they are often followed by additional military coups and military dictatorships. While common in the 20th century, the prominence of military dictatorships declined in the 1970s and 1980s. Military dictatorships are typically formed by a military coup in which senior officers use the military to overthrow the government. In democracies, the threat of a military coup is associated with the period immediately after a democracy's creation but prior to large-scale military reforms. In oligarchies, the threat of a military coup comes from the strength of the military weighed against the concessions made to the military. Other factors associated with military coups include extensive natural resources, limited use of the military internationally, and use of the military as an oppressive force domestically. Military coups do not necessarily result in military dictatorships, as power may then be passed to an individual or the military may allow democratic elections to take place. Military dictatorships often have traits in common due to the shared background of military dictators. These dictators may view themselves as impartial in their oversight of a country due to their nonpartisan status, and they may view themselves as "guardians of the state". The predominance of violent force in military training manifests in an acceptance of violence as a political tool and the ability to organize violence on a large scale. 
Military dictators may also be less trusting or diplomatic and underestimate the use of bargaining and compromise in politics. One-party dictatorships are governments in which a single political party dominates politics. Single-party dictatorships are one-party states in which only the party in power is legalized, sometimes along with minor allied parties, and all opposition parties are banned. Dominant-party dictatorships or electoral authoritarian dictatorships are one-party dictatorships in which opposition parties are nominally legal but cannot meaningfully influence government. Single-party dictatorships were most common during the Cold War, with dominant-party dictatorships becoming more common after the fall of the Soviet Union. Ruling parties in one-party dictatorships are distinct from political parties that were created to serve a dictator in that the ruling party in a one-party dictatorship permeates every level of society. One-party dictatorships are more stable than other forms of authoritarian rule, as they are less susceptible to insurgency and see higher economic growth. Ruling parties allow a dictatorship to more broadly influence the populace and facilitate political agreement between party elites. Between 1950 and 2016, one-party dictatorships made up 57% of authoritarian regimes in the world, and one-party dictatorships have continued to expand more quickly than other forms of dictatorship in the latter half of the 20th century. Due to the structure of their leadership, one-party dictatorships are significantly less likely to face civil conflict, insurgency, or terrorism than other forms of dictatorship. The use of ruling parties also provides more legitimacy to its leadership and elites than other forms of dictatorship and facilitates a peaceful transfer of power at the end of a dictator's rule. One-party dictatorships became prominent in Asia and Eastern Europe during the Cold War as communist governments were installed in several countries. One-party rule also developed in several countries in Africa during decolonization in the 1960s and 1970s, some of which produced authoritarian regimes. A ruling party in a one-party dictatorship may rule under any ideology or it may have no guiding ideology. Marxist one-party states are sometimes distinguished from other one-party states, but they function similarly. When a one-party dictatorship develops gradually through legal means, it can result in conflict between the party organization and the state apparatus and civil service, as the party rules in parallel and increasingly appoints its own members to positions of power. Parties that take power through violence are often able to implement larger changes in a shorter period of time. Personalist dictatorships are regimes in which all of the power lies in the hands of a single individual. They differ from other forms of dictatorships in that the dictator has greater access to key political positions and the government's treasury, and they are more commonly subject to the discretion of the dictator. Personalist dictators may be members of the military or leaders of a political party, but neither the military nor the party exercises power independently from the dictator. In personalist dictatorships, the elite corps are usually made up of close friends or family members of the dictator, who typically handpicks these individuals to serve their posts. 
These dictatorships often emerge either from loosely organized seizures of power, giving the leader opportunity to consolidate power, or from democratically elected leaders in countries with weak institutions, giving the leader opportunity to change the constitution. Personalist dictatorships are more common in Sub-Saharan Africa due to less established institutions in the region. Personalist dictators typically favor loyalty over competence in their governments and have a general distrust of intelligentsia. Elites in personalist dictatorships often do not have a professional political career and are unqualified for the positions they are given. A personalist dictator will manage these appointees by segmenting the government so that they cannot collaborate. The result is that such regimes have no internal checks and balances, and are thus unrestrained when exerting repression on their people, making radical shifts in foreign policy, or starting wars with other countries. Due to the lack of accountability and the smaller group of elites, personalist dictatorships are more prone to corruption and more repressive than other forms of dictatorship. Personalist dictatorships often collapse with the death of the dictator. They are more likely to end in violence and less likely to democratize than other forms of dictatorship. Personalist dictatorships fit the classic stereotype of authoritarian rule. Within a personalist regime, an issue called "the dictator's dilemma" arises. This idea references the regime's heavy reliance on repression of the public in order to stay in power, which creates incentives for constituents to falsify their preferences and thus prevents the dictator from knowing the genuine popular beliefs or the regime's realistic measure of societal support. As a result of authoritarian politics, a series of major issues may ensue: preference falsification, internal politics, data scarcity, and restriction of the media are just a few examples of the dangers of a personalistic authoritarian regime. When it comes to polling and elections, however, a dictator can use their power to override private preferences. Many personalist regimes will use open ballots to protect their regimes and implement heavy security measures and censorship for those whose personal preferences do not align with the values of the leader. The shift in the power relation between the dictator and their inner circle has severe consequences for the behavior of such regimes as a whole. Personalist regimes diverge from other regimes when it comes to their longevity, methods of breakdown, levels of corruption, and proneness to conflicts. On average, they last twice as long as military dictatorships, but not as long as one-party dictatorships. Personalist dictatorships also experience growth differently, as they often lack the institutions or qualified leadership to sustain an economy. An absolute monarchy is a monarchy in which the monarch rules without legal limitations. This makes it distinct from constitutional monarchy and ceremonial monarchy. In an absolute monarchy, power is limited to the royal family, and legitimacy is established by historical factors. Monarchies may be dynastic, in which the royal family serves as a ruling institution similar to a political party in a one-party state, or they may be non-dynastic, in which the monarch rules independently of the royal family as a personalist dictator. 
Monarchies allow for strict rules of succession that produce a peaceful transfer of power on the monarch's death, but this can also result in succession disputes if multiple members of the royal family claim a right to succeed. In the modern era, absolute monarchies are most common in the Middle East. Dictatorship is historically associated with the Ancient Greek concept of tyranny, and several ancient Greek rulers have been described as "tyrants" who are comparable to modern dictators. The concept of "dictator" was first developed during the Roman Republic. A Roman dictator was a special magistrate who was temporarily appointed by the consul during times of crisis and granted total executive authority. The role of dictator was created for instances when a single leader was needed to command and restore stability. At least 85 such dictators were chosen over the course of the Roman Republic, the last of whom was appointed to wage the Second Punic War. The dictatorship was revived 120 years later by Sulla after his crushing of a populist movement, and 33 years after that by Julius Caesar. Caesar subverted the tradition of temporary dictatorships when he was made dictator perpetuo, or a dictator for life, which led to the creation of the Roman Empire. The rule of a dictator was not necessarily considered tyrannical in Ancient Rome, though it has been described in some accounts as a "temporary tyranny" or an "elective tyranny". Asia saw several military dictatorships during the post-classical era. Korea experienced military dictatorships under the rule of Yeon Gaesomun in the 7th century and under the rule of the Goryeo military regime in the 12th and 13th centuries. Shoguns were de facto military dictators in Japan beginning in 1185 and continuing for over six hundred years. During the Lê dynasty of Vietnam between the 16th and 18th centuries, the country was under de facto military rule by two rival military families: the Trịnh lords in the north and the Nguyễn lords in the south. In Europe, the Commonwealth of England under Oliver Cromwell, formed in 1649 after the Second English Civil War, has been described as a military dictatorship by its contemporary opponents and by some modern academics. Maximilien Robespierre has been similarly described as a dictator while he controlled the National Convention in France and carried out the Reign of Terror in 1793 and 1794. Dictatorship developed as a major form of government in the 19th century, though the concept was not universally seen pejoratively at the time, with both a tyrannical concept and a quasi-constitutional concept of dictatorship understood to exist. In Europe it was often thought of in terms of Bonapartism and Caesarism, with the former describing the military rule of Napoleon and the latter describing the imperial rule of Napoleon III in the vein of Julius Caesar. The Spanish American wars of independence took place in the early 19th century, creating many new Latin American governments. Many of these governments fell under the control of caudillos, or personalist dictators. Most caudillos came from a military background, and their rule was typically associated with pageantry and glamor. Caudillos were often nominally constrained by a constitution, but the caudillo had the power to draft a new constitution as he wished. Many are noted for their cruelty, while others are honored as national heroes. 
In the period between World War I and World War II, several dictatorships were established in Europe through coups carried out by far-left and far-right movements. The aftermath of World War I resulted in a major shift in European politics, establishing new governments, facilitating internal change in older governments, and redrawing the boundaries between countries, allowing opportunities for these movements to seize power. The societal upheaval caused by World War I and the unstable peace it produced further contributed to instability that benefited extremist movements and rallied support for their causes. Far-left and far-right dictatorships used similar methods to maintain power, including cults of personality, concentration camps, forced labour, mass murder, and genocide. The first communist state was created by Vladimir Lenin and the Bolsheviks with the establishment of Soviet Russia during the Russian Revolution in 1917. The government was described as a dictatorship of the proletariat in which power was exercised by soviets. The Bolsheviks consolidated power by 1922, forming the Soviet Union. Lenin was followed by Joseph Stalin in 1924, who consolidated total power and implemented totalitarian rule by 1929. The Russian Revolution inspired a wave of left-wing revolutionary movements in Europe between 1917 and 1923, but none saw the same level of success. At the same time, nationalist movements grew throughout Europe. These movements were a response to what they perceived as decadence and societal decay due to the changing social norms and race relations brought about by liberalism. Fascism developed in Europe as a rejection of liberalism, socialism, and modernism, and the first fascist political parties formed in the 1920s. Italian dictator Benito Mussolini seized power in 1922, and began implementing reforms in 1925 to create the first fascist dictatorship. These reforms incorporated totalitarianism, fealty to the state, expansionism, corporatism, and anti-communism. Adolf Hitler and the Nazi Party created a second fascist dictatorship in Germany in 1933, obtaining absolute power through a combination of electoral victory, violence, and emergency powers. Other nationalist movements in Europe established dictatorships based on the fascist model. During World War II, Italy and Germany occupied several countries in Europe, imposing fascist puppet states upon many of the countries that they invaded. After being defeated in World War II, the far-right dictatorships of Europe collapsed, with the exceptions of Spain and Portugal. The Soviet Union occupied the nationalist dictatorships in the east and replaced them with communist dictatorships, while other countries established liberal democratic governments in the Western Bloc. Dictatorships in Latin America persisted into the 20th century, and further military coups established new regimes, often in the name of nationalism. After a brief period of democratization, Latin America underwent a rapid transition toward dictatorship in the 1930s. Populist movements were strengthened following the economic turmoil of the Great Depression, producing populist dictatorships in several Latin American countries. European fascism was imported to Latin America as well, and the Vargas Era of Brazil was heavily influenced by the corporatism practiced in fascist Italy. The decolonisation of Africa prompted the creation of new governments, many of which became dictatorships in the 1960s and 1970s. 
Early African dictatorships were primarily personalist socialist dictatorships, in which a single socialist leader would take power instead of a ruling party. As the Cold War went on, the Soviet Union increased its influence in Africa, and Marxist–Leninist dictatorships developed in several African countries. Military coups were also a common occurrence after decolonisation, with 14 African countries experiencing at least three successful military coups between 1959 and 2001. These new African governments were marked by severe instability, which provided opportunities for regime change and made fair elections a rare occurrence on the continent. This instability in turn required rulers to become increasingly authoritarian to stay in power, further propagating dictatorship in Africa. The Chinese Civil War ended in 1949, splitting the country between the Republic of China under Chiang Kai-shek and the People's Republic of China under Mao Zedong. Mao established the People's Republic of China as a one-party communist state under his governing ideology of Maoism. While the People's Republic of China was initially aligned with the Soviet Union, relations between the two countries deteriorated as the Soviet Union underwent de-Stalinization in the late 1950s. Mao consolidated his control of the People's Republic of China with the Cultural Revolution in the 1960s, which involved the destruction of all elements of capitalism and traditionalism in China. Deng Xiaoping took power as the de facto leader of China after Mao's death and implemented reforms to restore stability following the Cultural Revolution and reestablish free market economics. Chiang Kai-shek continued to rule as dictator of the Nationalist government's rump state in Taiwan until his death in 1975. Marxist and nationalist movements became popular in Southeast Asia as a response to colonial control and the subsequent Japanese occupation of the region, with both ideologies facilitating the creation of dictatorships after World War II. Communist dictatorships in the region aligned with China following the latter's establishment as a communist state. A similar phenomenon took place in Korea, where Kim Il Sung created a Soviet-backed communist dictatorship in North Korea and Syngman Rhee created a US-backed nationalist dictatorship in South Korea. The Middle East was decolonized during the Cold War, and many nationalist movements gained strength post-independence. These nationalist movements supported non-alignment, keeping most Middle Eastern dictatorships out of the American and Soviet spheres of influence. These movements supported pan-Arab Nasserism during most of the Cold War, but they were largely replaced by Islamic nationalism by the 1980s. Several Middle Eastern countries were the subject of military coups in the 1950s and 1960s, including Iraq, Syria, North Yemen, and South Yemen. A 1953 coup overseen by the American and British governments restored Mohammad Reza Pahlavi as the absolute monarch of Iran; he was in turn overthrown during the Iranian Revolution of 1979, which established Ruhollah Khomeini as the Supreme Leader of Iran under an Islamist government. During World War II, many countries of Central and Eastern Europe had been occupied by the Soviet Union. When the war ended, these countries were incorporated into the Soviet sphere of influence, and the Soviet Union exercised control over their governments. Josip Broz Tito declared a communist government in Yugoslavia during World War II, which was initially aligned with the Soviet Union. 
The relations between the countries were strained by Soviet attempts to influence Yugoslavia, leading to the Tito–Stalin split in 1948. Albania was established as a communist dictatorship under Enver Hoxha in 1944. It was initially aligned with Yugoslavia, but its alignment shifted throughout the Cold War between Yugoslavia, the Soviet Union, and China. The stability of the Soviet Union weakened in the 1980s. The Soviet economy became unsustainable, and communist governments lost the support of intellectuals and their population in general. Communism was abandoned by the countries of Central and Eastern Europe through a series of revolutions in 1989, and the Soviet Union itself was dissolved in 1991. Military dictatorships remained prominent in Latin America during the Cold War, though the number of coups declined starting in the 1980s. Between 1967 and 1991, 12 Latin American countries underwent at least one military coup, with Haiti and Honduras experiencing three and Bolivia experiencing eight. A one-party communist dictatorship was formed in Cuba when a US-backed dictatorship was overthrown in the Cuban Revolution, creating the only Soviet-backed dictatorship in the western hemisphere. To maintain power, Chilean dictator Augusto Pinochet organized Operation Condor with other South American dictators to facilitate cooperation between their respective intelligence agencies and secret police organizations. The nature of dictatorship changed in much of the world at the onset of the 21st century. Between the 1990s and the 2000s, most dictators moved away from being "larger-than-life figures" who controlled the populace through terror and isolated themselves from the global community. This was replaced by a trend of developing a positive public image to maintain support among the populace and moderating rhetoric to integrate with the global community. In contrast to the overtly repressive nature of 20th century dictatorships, authoritarian strongmen of the 21st century are sometimes labelled "spin dictators": rulers who attempt to monopolise power through authoritarian upgrading, appealing to democratic sentiments while covertly pursuing repressive measures such as embracing modern technology, manipulating information content, regulating cyberspace, and slandering dissidents. On the other hand, a handful of dictators like Bashar al-Assad and Kim Jong Un rule with deadly repression, violence, and state terrorism to establish extensive securitization through fear, in line with many 20th century dictatorships. The development of the internet and digital communication in the 21st century has prompted dictatorships to shift from traditional means of control to digital ones, including the use of artificial intelligence to analyze mass communications, internet censorship to restrict the flow of information, and troll farms to manipulate public opinion. 21st century dictatorships regularly hold sham elections with massive approval ratings to seek public legitimacy and maintain the autocrat's image as a popular figure loved by the masses. The manipulated election results are often weaponized as propaganda tools in information warfare, to galvanize supporters of the dictatorships against dissidents as well as to manufacture compliance of the masses by publicising falsified figures. Another objective is to portray the dictator as the guardian figure who unifies the country, without whom its security disintegrates and chaos ensues. 
North Korea is the only country in East Asia to be ruled by a hereditary dictatorship in the 21st century: rule passed from Kim Il-sung to his son Kim Jong-il upon the former's death in 1994, and then to his grandson Kim Jong-un in 2011. Dictatorship in Europe largely ended after the fall of the Soviet Union in 1991 and the liberalization of most communist states. Belarus under the rule of Alexander Lukashenko has been described as "the last European dictatorship", though the rule of Vladimir Putin in Russia has also been described as a dictatorship. Latin America saw a period of liberalization similar to that of Europe at the end of the Cold War, with Cuba being the only Latin American country that did not experience any degree of liberalization between 1992 and 2010. The countries of Central Asia did not liberalize after the fall of the Soviet Union, instead becoming dictatorships led by former elites of the Communist Party and later by successive dictators. These countries maintain parliaments and human rights organizations, but these remain under the control of the countries' respective dictators. The Middle East and Northern Africa did not undergo liberalization during the third wave of democratisation, and most countries in this region remain dictatorships in the 21st century. Dictatorships in the Middle East and Northern Africa are either illiberal republics in which a president holds power through unfair elections, or they are absolute monarchies in which power is inherited. Iraq, Israel, Lebanon, and Palestine are the only democratic nations in the region, with Israel being the only nation in this region that affords broad political liberties to its citizens. Most dictatorships exist in countries with high levels of poverty. Poverty has a destabilizing effect on government, causing democracy to fail and regimes to fall more often. The form of government does not correlate with the amount of economic growth, and dictatorships on average grow at the same rate as democracies, though dictatorships have been found to have larger fluctuations. Dictators are more likely to make long-term investments in the country's economy if they feel secure in their power. Exceptions to the pattern of poverty in dictatorships include oil-rich Middle Eastern dictatorships and the East Asian Tigers during their periods of dictatorship. The type of economy in a dictatorship can affect how it functions. Economies based on natural resources allow dictators more power, as they can easily extract rents without strengthening or cooperating with other institutions. More complex economies require additional cooperation between the dictator and other groups. The economic focus of a dictatorship often depends on the strength of the opposition, as a weaker opposition allows a dictator to extract additional wealth from the economy through corruption. Several factors determine the stability of a dictatorship, and a dictatorship must maintain some degree of popular support to prevent resistance groups from growing. This may be ensured through incentives, such as distribution of financial resources or promises of security, or it may be through repression, in which failing to support the regime is punished. Stability can be weakened when opposition groups grow and unify or when elites are not loyal to the regime. One-party dictatorships are generally more stable and last longer than military or personalist dictatorships. A dictatorship may fall because of a military coup, foreign intervention, negotiation, or popular revolution. 
A military coup is often carried out when a regime is threatening the country's stability or during periods of societal unrest. Foreign intervention takes place when another country seeks to topple a regime by invading the country or supporting the opposition. A dictator may negotiate the end of a regime if it has lost legitimacy or if a violent removal seems likely. Revolution takes place when the opposition group grows large enough that elites in the regime cannot suppress it or choose not to. Negotiated removals are more likely to end in democracy, while removals by force are more likely to result in a new dictatorial regime. A dictator who has concentrated significant power is more likely to be exiled, imprisoned, or killed after ouster, and accordingly more likely to refuse negotiation and cling to power. Dictatorships are typically more aggressive than democracies when in conflict with other nations, as dictators do not have to fear the electoral costs of war. Military dictatorships are more prone to conflict due to the inherent military strength associated with such a regime, and personalist dictatorships are more prone to conflict due to the weakness of the institutions that would otherwise check the dictator's power. In the 21st century, dictatorships have moved toward greater integration with the global community and increasingly attempt to present themselves as democratic. Dictatorships are often recipients of foreign aid on the condition that they make advances toward democratization. A study found that dictatorships that engage in oil drilling are more likely to remain in power: 70.63% of dictators who engage in oil drilling were still in power after five years of dictatorship, while only 59.92% of non-oil-producing dictators survived the first five years. Most dictatorships hold elections to maintain legitimacy and stability, but these elections are typically uncompetitive and the opposition is not permitted to win. Elections allow a dictatorship to exercise some control over the opposition by setting the terms under which the opposition challenges the regime. Elections are also used to control elites within the dictatorship by requiring them to compete with one another and incentivizing them to build support with the populace, allowing the most popular and most competent elites to be promoted in the regime. Elections also support the legitimacy of a dictatorship by presenting the image of a democracy, establishing plausible deniability of its status as a dictatorship for both the populace and foreign governments. Should a dictatorship fail, elections also permit dictators and elites to accept defeat without fearing violent recourse. Dictatorships may influence the results of an election through electoral fraud, intimidation or bribing of candidates and voters, use of state resources such as media control, manipulation of electoral laws, restricting who may run as a candidate, or disenfranchising demographics that may oppose the dictatorship. In the 20th century, most dictatorships held elections in which voters could only choose to support the dictatorship, with only one-quarter of partisan dictatorships permitting opposition candidates to participate. Since the end of the Cold War, more dictatorships have established "semi-competitive" elections in which the opposition is allowed to participate but not allowed to win, with approximately two-thirds of dictatorships permitting opposition candidates in 2018. 
Opposition parties in dictatorships may be restricted by preventing them from campaigning, banning more popular opposition parties, preventing opposition members from forming a party, or requiring that candidates be a member of the ruling party. Dictatorships may hold semi-competitive elections to qualify for foreign aid, to demonstrate a dictator's control over the government, or to incentivize the party to expand its information-gathering capacity, particularly at the local level. Semi-competitive elections also have the effect of incentivizing members of the ruling party to provide better treatment of citizens so they will be chosen as party nominees due to their popularity. In a dictatorship, violence is used to coerce or repress all opposition to the dictator's rule, and the strength of a dictatorship depends on its use of violence. This violence is frequently exercised through institutions such as military or police forces. The use of violence by a dictator is frequently most severe during the first few years of a dictatorship, because the regime has not yet solidified its rule and more detailed information for targeted coercion is not yet available. As the dictatorship becomes more established, it moves away from violence by resorting to the use of other coercive measures, such as restricting people's access to information and tracking the political opposition. Dictators are incentivized to avoid the use of violence once a reputation of violence is established, as it damages the dictatorship's other institutions and poses a threat to the dictator's rule should government forces become disloyal. Institutions that coerce the opposition through the use of violence may serve different roles or they may be used to counterbalance one another in order to prevent one institution from becoming too powerful. Secret police are used to gather information about specific political opponents and carry out targeted acts of violence against them, paramilitary forces defend the regime from coups, and formal militaries defend the dictatorship during foreign invasions and major civil conflicts. Terrorism is less common in dictatorships. Allowing the opposition to have representation in the regime, such as through a legislature, further reduces the likelihood of terrorist attacks in a dictatorship. Military and one-party dictatorships are more likely to experience terrorism than personalist dictatorships, as these regimes are under more pressure to undergo institutional change in response to terrorism.
[ { "paragraph_id": 0, "text": "A dictatorship is an autocratic form of government which is characterized by a leader, or a group of leaders, who hold governmental powers with few to no limitations. Politics in a dictatorship are controlled by a dictator, and they are facilitated through an inner circle of elites that includes advisers, generals, and other high-ranking officials. The dictator maintains control by influencing and appeasing the inner circle and repressing any opposition, which may include rival political parties, armed resistance, or disloyal members of the dictator's inner circle. Dictatorships can be formed by a military coup that overthrows the previous government through force or they can be formed by a self-coup in which elected leaders make their rule permanent. Dictatorships are authoritarian or totalitarian and they can be classified as military dictatorships, one-party dictatorships, personalist dictatorships, or absolute monarchies.", "title": "" }, { "paragraph_id": 1, "text": "The use of the term \"dictatorship\" emerged in the Roman Republic, referring to \"a temporary grant of absolute power to a leader to handle some emergency.\" The earliest military dictatorships developed in the post-classical era, particularly in Shogun-era Japan and in England under Cromwell. Modern dictatorships first developed in the 19th century, which included Bonapartism in Europe and caudillos in Latin America. The 20th century saw the rise of fascist and communist dictatorships in Europe; fascism was eradicated in the aftermath of World War II in 1945, while communism spread to other continents, maintaining prominence until the end of the Cold War in 1991. The 20th century also saw the rise of personalist dictatorships in Africa and military dictatorships in Latin America, both of which became prominent in the 1960s and 1970s.", "title": "" }, { "paragraph_id": 2, "text": "The period following the collapse of the Soviet Union witnessed a sporadic rise in democracies across the world, despite several dictatorships persisting into the 21st century, particularly in Africa and Asia. During the early 21st century, democratic governments came to outnumber authoritarian states by 98 to 80. The second decade was marked by a \"democratic recession\", following the 2008 global financial crisis which drastically reduced the appeal of the Western model across the world. By 2019, the number of authoritarian governments had again surmounted that of democracies by 92 to 87.", "title": "" }, { "paragraph_id": 3, "text": "Dictatorships often attempt to portray a democratic facade, frequently holding elections in order to establish their legitimacy or provide incentives to members of the ruling party, but these elections are not competitive for the opposition. Stability in a dictatorship is maintained through coercion and political repression, which involves the restriction of access to information, the tracking of the political opposition, and acts of violence. Dictatorships that fail to repress the opposition are susceptible to collapse through a coup or a revolution.", "title": "" }, { "paragraph_id": 4, "text": "The power structures of dictatorships vary, and different definitions of dictatorship consider different elements of this structure. Political scientists such as Juan José Linz and Samuel P. 
Huntington identify key attributes that define the power structure of a dictatorship, including a single leader or a small group of leaders, the exercise of power with few limitations, limited political pluralism, and limited mass mobilization.", "title": "Structure" }, { "paragraph_id": 5, "text": "The dictator exercises most or total power over the government and society, but sometimes elites are necessary to carry out the dictator's rule. They form an inner circle, making up a class of elites that hold a degree of power within the dictatorship and receive benefits in exchange for their support. They may be military officers, party members, or friends or family of the dictator. Elites are also the primary political threats of a dictator, as they can leverage their power to influence or overthrow the dictatorship. The inner circle's support is necessary for a dictator's orders to be carried out, causing elites to serve as a check on the dictator's power. To enact policy, a dictator must either appease the regime's elites or attempt to replace them. Elites must also compete to wield more power than one another, but the amount of power held by elites also depends on their unity. Factions or divisions among the elites will mitigate their ability to bargain with the dictator, resulting in the dictator having more unrestrained power. A unified inner circle has the capacity to overthrow a dictator, and the dictator must make greater concessions to the inner circle to stay in power. This is particularly true when the inner circle is made up of military officers that have the resources to carry out a military coup.", "title": "Structure" }, { "paragraph_id": 6, "text": "The opposition to a dictatorship represents all of the factions that are not part of the dictatorship and anyone that does not support the regime. Organized opposition is a threat to the stability of a dictatorship, as it seeks to undermine public support for the dictator and calls for regime change. A dictator may address the opposition by repressing it through force, modifying laws to restrict its power, or appeasing it with limited benefits. The opposition can be an external group, or it can also include current and former members of the dictator's inner circle.", "title": "Structure" }, { "paragraph_id": 7, "text": "Totalitarianism is a variation of dictatorship characterized by the presence of a single political party and more specifically, by a powerful leader who imposes personal and political prominence. Power is enforced through a steadfast collaboration between the government and a highly developed ideology. A totalitarian government has \"total control of mass communications and social and economic organizations\". Political philosopher Hannah Arendt describes totalitarianism as a new and extreme form of dictatorship composed of \"atomized, isolated individuals\" in which ideology plays a leading role in defining how the entire society should be organized. Political scientist Juan José Linz identifies a spectrum of political systems with democracies and totalitarian regimes separated by authoritarian regimes with varied classifications of hybrid systems. He describes totalitarian regimes as exercising control over politics and political mobilization rather than merely suppressing it.", "title": "Structure" }, { "paragraph_id": 8, "text": "A dictatorship is formed when a specific group seizes power, with the composition of this group affecting how power is seized and how the eventual dictatorship will rule. 
The group may be military or political, it may be organized or disorganized, and it may disproportionately represent a certain demographic. After power is seized, the group must determine what positions its members will hold in the new government and how this government will operate, sometimes resulting in disagreements that split the group. Members of the group will typically make up the elites in a dictator's inner circle at the beginning of a new dictatorship, though the dictator may remove them as a means to gain additional power.", "title": "Formation" }, { "paragraph_id": 9, "text": "Unless they have undertaken a self-coup, those seizing power typically have little governmental experience and do not have a detailed policy plan in advance. If the dictator has not seized power through a political party, then a party may be formed as a mechanism to reward supporters and to concentrate power in the hands of political allies instead of militant allies. Parties formed after the seizure of power often have little influence and only exist to serve the dictator.", "title": "Formation" }, { "paragraph_id": 10, "text": "Most dictatorships are formed through military means or through a political party. Nearly half of dictatorships start as a military coup, though others have been started by foreign intervention, elected officials ending competitive elections, insurgent takeovers, popular uprisings by citizens, or legal maneuvering by autocratic elites to take power within their government. Between 1946 and 2010, 42% of dictatorships began by overthrowing a different dictatorship, and 26% began after achieving independence from a foreign government. Many others developed following a period of warlordism.", "title": "Formation" }, { "paragraph_id": 11, "text": "A classification of dictatorships, which began with political scientist Barbara Geddes in 1999, focuses on where power lies. Under this system, there are three types of dictatorships. Military dictatorships are controlled by military officers, one-party dictatorships are controlled by the leadership of a political party, and personalist dictatorships are controlled by a single individual. In some circumstances, monarchies are also considered dictatorships if the monarchs hold a significant amount of political power. Hybrid dictatorships are regimes that have a combination of these classifications.", "title": "Types of dictatorships" }, { "paragraph_id": 12, "text": "Military dictatorships are regimes in which military officers hold power, determine who will lead the country, and exercise influence over policy. They are most common in developing nations in Africa, Asia, and Latin America. They are often unstable, and the average duration of a military dictatorship is only five years, but they are often followed by additional military coups and military dictatorships. While common in the 20th century, the prominence of military dictatorships declined in the 1970s and 1980s.", "title": "Types of dictatorships" }, { "paragraph_id": 13, "text": "Military dictatorships are typically formed by a military coup in which senior officers use the military to overthrow the government. In democracies, the threat of a military coup is associated with the period immediately after a democracy's creation but prior to large-scale military reforms. In oligarchies, the threat of a military coup comes from the strength of the military weighed against the concessions made to the military. 
Other factors associated with military coups include extensive natural resources, limited use of the military internationally, and use of the military as an oppressive force domestically. Military coups do not necessarily result in military dictatorships, as power may then be passed to an individual or the military may allow democratic elections to take place.", "title": "Types of dictatorships" }, { "paragraph_id": 14, "text": "Military dictatorships often have traits in common due to the shared background of military dictators. These dictators may view themselves as impartial in their oversight of a country due to their nonpartisan status, and they may view themselves as \"guardians of the state\". The predominance of violent force in military training manifests in an acceptance of violence as a political tool and the ability to organize violence on a large scale. Military dictators may also be less trusting or diplomatic and underestimate the use of bargaining and compromise in politics.", "title": "Types of dictatorships" }, { "paragraph_id": 15, "text": "One-party dictatorships are governments in which a single political party dominates politics. Single-party dictatorships are one-party states in which only the party in power is legalized, sometimes along with minor allied parties, and all opposition parties are banned. Dominant-party dictatorships or electoral authoritarian dictatorships are one-party dictatorships in which opposition parties are nominally legal but cannot meaningfully influence government. Single-party dictatorships were most common during the Cold War, with dominant-party dictatorships becoming more common after the fall of the Soviet Union. Ruling parties in one-party dictatorships are distinct from political parties that were created to serve a dictator in that the ruling party in a one-party dictatorship permeates every level of society.", "title": "Types of dictatorships" }, { "paragraph_id": 16, "text": "One-party dictatorships are more stable than other forms of authoritarian rule, as they are less susceptible to insurgency and see higher economic growth. Ruling parties allow a dictatorship to more broadly influence the populace and facilitate political agreement between party elites. Between 1950 and 2016, one-party dictatorships made up 57% of authoritarian regimes in the world, and one-party dictatorships continued to expand more quickly than other forms of dictatorship through the latter half of the 20th century. Due to the structure of their leadership, one-party dictatorships are significantly less likely to face civil conflict, insurgency, or terrorism than other forms of dictatorship. The use of a ruling party also provides more legitimacy to the regime's leadership and elites than other forms of dictatorship do, and facilitates a peaceful transfer of power at the end of a dictator's rule.", "title": "Types of dictatorships" }, { "paragraph_id": 17, "text": "One-party dictatorships became prominent in Asia and Eastern Europe during the Cold War as communist governments were installed in several countries. One-party rule also developed in several countries in Africa during decolonization in the 1960s and 1970s, some of which produced authoritarian regimes. A ruling party in a one-party dictatorship may rule under any ideology or it may have no guiding ideology. Marxist one-party states are sometimes distinguished from other one-party states, but they function similarly.
When a one-party dictatorship develops gradually through legal means, it can result in conflict between the party organization and the state apparatus and civil service, as the party rules in parallel and increasingly appoints its own members to positions of power. Parties that take power through violence are often able to implement larger changes in a shorter period of time.", "title": "Types of dictatorships" }, { "paragraph_id": 18, "text": "Personalist dictatorships are regimes in which all of the power lies in the hands of a single individual. They differ from other forms of dictatorship in that the dictator has greater access to key political positions and the government's treasury, which are more commonly subject to the discretion of the dictator. Personalist dictators may be members of the military or leaders of a political party, but neither the military nor the party exercises power independently from the dictator. In personalist dictatorships, the elite corps are usually made up of close friends or family members of the dictator, who typically handpicks these individuals to serve their posts. These dictatorships often emerge either from loosely organized seizures of power, giving the leader opportunity to consolidate power, or from democratically elected leaders in countries with weak institutions, giving the leader opportunity to change the constitution. Personalist dictatorships are more common in Sub-Saharan Africa due to less established institutions in the region.", "title": "Types of dictatorships" }, { "paragraph_id": 19, "text": "Personalist dictators typically favor loyalty over competence in their governments and have a general distrust of the intelligentsia. Elites in personalist dictatorships often do not have a professional political career and are unqualified for the positions they are given. A personalist dictator will manage these appointees by segmenting the government so that they cannot collaborate. The result is that such regimes have no internal checks and balances, and are thus unrestrained when exerting repression on their people, making radical shifts in foreign policy, or starting wars with other countries. Due to the lack of accountability and the smaller group of elites, personalist dictatorships are more prone to corruption and more repressive than other forms of dictatorship. Personalist dictatorships often collapse with the death of the dictator. They are more likely to end in violence and less likely to democratize than other forms of dictatorship.", "title": "Types of dictatorships" }, { "paragraph_id": 20, "text": "Personalist dictatorships fit the classic stereotype of authoritarian rule. Within a personalist regime, an issue known as the \"dictator's dilemma\" arises: because the regime relies heavily on repression of the public to stay in power, constituents have an incentive to falsify their preferences, leaving the dictator unable to know the public's genuine beliefs or realistically gauge societal support. As a result, a personalist regime may face a series of major problems, such as preference falsification, internal politics, data scarcity, and restriction of the media. When it comes to polling and elections, however, a dictator can use their power to override private preferences.
Many personalist regimes install open ballots to protect the regime and impose heavy security measures and censorship on those whose personal preferences do not align with the leader's values.", "title": "Types of dictatorships" }, { "paragraph_id": 21, "text": "The shift in the power relation between the dictator and their inner circle has severe consequences for the behavior of such regimes as a whole. Personalist regimes diverge from other regimes when it comes to their longevity, methods of breakdown, levels of corruption, and proneness to conflicts. On average, they last twice as long as military dictatorships, but not as long as one-party dictatorships. Personalist dictatorships also experience growth differently, as they often lack the institutions or qualified leadership to sustain an economy.", "title": "Types of dictatorships" }, { "paragraph_id": 22, "text": "An absolute monarchy is a monarchy in which the monarch rules without legal limitations. This makes it distinct from constitutional monarchy and ceremonial monarchy. In an absolute monarchy, power is limited to the royal family, and legitimacy is established by historical factors. Monarchies may be dynastic, in which the royal family serves as a ruling institution similar to a political party in a one-party state, or they may be non-dynastic, in which the monarch rules independently of the royal family as a personalist dictator. Monarchies allow for strict rules of succession that produce a peaceful transfer of power on the monarch's death, but this can also result in succession disputes if multiple members of the royal family claim a right to succeed. In the modern era, absolute monarchies are most common in the Middle East.", "title": "Types of dictatorships" }, { "paragraph_id": 23, "text": "Dictatorship is historically associated with the Ancient Greek concept of tyranny, and several ancient Greek rulers have been described as \"tyrants\" comparable to modern dictators. The concept of \"dictator\" was first developed during the Roman Republic. A Roman dictator was a special magistrate who was temporarily appointed by the consul during times of crisis and granted total executive authority. The role of dictator was created for instances when a single leader was needed to command and restore stability. At least 85 such dictators were chosen over the course of the Roman Republic, the last of whom was chosen to wage the Second Punic War. The dictatorship was revived 120 years later by Sulla after his crushing of a populist movement, and 33 years after that by Julius Caesar. Caesar subverted the tradition of temporary dictatorships when he was made dictator perpetuo, or a dictator for life, which led to the creation of the Roman Empire. The rule of a dictator was not necessarily considered tyrannical in Ancient Rome, though it has been described in some accounts as a \"temporary tyranny\" or an \"elective tyranny\".", "title": "History" }, { "paragraph_id": 24, "text": "Asia saw several military dictatorships during the post-classical era. Korea experienced military dictatorships under the rule of Yeon Gaesomun in the 7th century and under the rule of the Goryeo military regime in the 12th and 13th centuries. Shoguns were de facto military dictators in Japan beginning in 1185 and continuing for over six hundred years.
During the Lê dynasty of Vietnam between the 16th and 18th centuries, the country was under de facto military rule by two rival military families: the Trịnh lords in the north and the Nguyễn lords in the south. In Europe, the Commonwealth of England under Oliver Cromwell, formed in 1649 after the Second English Civil War, has been described as a military dictatorship by its contemporary opponents and by some modern academics. Maximilien Robespierre has been similarly described as a dictator while he controlled the National Convention in France and carried out the Reign of Terror in 1793 and 1794.", "title": "History" }, { "paragraph_id": 25, "text": "Dictatorship developed as a major form of government in the 19th century, though the concept was not universally seen pejoratively at the time, with both a tyrannical concept and a quasi-constitutional concept of dictatorship understood to exist. In Europe it was often thought of in terms of Bonapartism and Caesarism, with the former describing the military rule of Napoleon and the latter describing the imperial rule of Napoleon III in the vein of Julius Caesar. The Spanish American wars of independence took place in the early-19th century, creating many new Latin American governments. Many of these governments fell under the control of caudillos, or personalist dictators. Most caudillos came from a military background, and their rule was typically associated with pageantry and glamor. Caudillos were often nominally constrained by a constitution, but the caudillo had the power to draft a new constitution as he wished. Many are noted for their cruelty, while others are honored as national heroes.", "title": "History" }, { "paragraph_id": 26, "text": "In the time between World War I and World War II, several dictatorships were established in Europe through coups which were carried out by far-left and far-right movements. The aftermath of World War I resulted in a major shift in European politics, establishing new governments, facilitating internal change in older governments, and redrawing the boundaries between countries, allowing opportunities for these movements to seize power. The societal upheaval caused by World War I and the unstable peace it produced further contributed to instability that benefited extremist movements and rallied support for their causes. Far-left and far-right dictatorships used similar methods to maintain power, including cult of personality, concentration camps, forced labour, mass murder, and genocide.", "title": "History" }, { "paragraph_id": 27, "text": "The first communist state was created by Vladimir Lenin and the Bolsheviks with the establishment of Soviet Russia during the Russian Revolution in 1917. The government was described as a dictatorship of the proletariat in which power was exercised by soviets. The Bolsheviks consolidated power by 1922, forming the Soviet Union. Lenin was followed by Joseph Stalin in 1924, who consolidated total power and implemented totalitarian rule by 1929. The Russian Revolution inspired a wave of left-wing revolutionary movements in Europe between 1917 and 1923, but none saw the same level of success.", "title": "History" }, { "paragraph_id": 28, "text": "At the same time, nationalist movements grew throughout Europe. These movements were a response to what they perceived as decadence and societal decay due to the changing social norms and race relations brought about by liberalism. 
Fascism developed in Europe as a rejection of liberalism, socialism, and modernism, and the first fascist political parties formed in the 1920s. Italian dictator Benito Mussolini seized power in 1922 and began implementing reforms in 1925 to create the first fascist dictatorship. These reforms incorporated totalitarianism, fealty to the state, expansionism, corporatism, and anti-communism.", "title": "History" }, { "paragraph_id": 29, "text": "Adolf Hitler and the Nazi Party created a second fascist dictatorship in Germany in 1933, obtaining absolute power through a combination of electoral victory, violence, and emergency powers. Other nationalist movements in Europe established dictatorships based on the fascist model. During World War II, Italy and Germany occupied several countries in Europe, imposing fascist puppet states upon many of the countries that they invaded. After being defeated in World War II, the far-right dictatorships of Europe collapsed, with the exceptions of Spain and Portugal. The Soviet Union occupied the nationalist dictatorships of the east and replaced them with communist dictatorships, while the countries of the Western Bloc established liberal democratic governments.", "title": "History" }, { "paragraph_id": 30, "text": "Dictatorships in Latin America persisted into the 20th century, and further military coups established new regimes, often in the name of nationalism. After a brief period of democratization, Latin America underwent a rapid transition toward dictatorship in the 1930s. Populist movements were strengthened following the economic turmoil of the Great Depression, producing populist dictatorships in several Latin American countries. European fascism was imported to Latin America as well, and the Vargas Era of Brazil was heavily influenced by the corporatism practiced in fascist Italy.", "title": "History" }, { "paragraph_id": 31, "text": "The decolonisation of Africa prompted the creation of new governments, many of which became dictatorships in the 1960s and 1970s. Early African dictatorships were primarily personalist socialist dictatorships, in which a single socialist leader would take power instead of a ruling party. As the Cold War went on, the Soviet Union increased its influence in Africa, and Marxist–Leninist dictatorships developed in several African countries. Military coups were also a common occurrence after decolonisation, with 14 African countries experiencing at least three successful military coups between 1959 and 2001. These new African governments were marked by severe instability, which provided opportunities for regime change and made fair elections a rare occurrence on the continent. This instability in turn required rulers to become increasingly authoritarian to stay in power, further propagating dictatorship in Africa.", "title": "History" }, { "paragraph_id": 32, "text": "The Chinese Civil War ended in 1949, dividing China between the Republic of China under Chiang Kai-shek and the People's Republic of China under Mao Zedong. Mao established the People's Republic of China as a one-party communist state under his governing ideology of Maoism. While the People's Republic of China was initially aligned with the Soviet Union, relations between the two countries deteriorated as the Soviet Union underwent de-Stalinization in the late 1950s. Mao consolidated his control of the People's Republic of China with the Cultural Revolution in the 1960s, which involved the destruction of all elements of capitalism and traditionalism in China.
Deng Xiaoping took power as the de facto leader of China after Mao's death and implemented reforms to restore stability following the Cultural Revolution and reestablish free market economics. Chiang Kai-shek continued to rule as dictator of the Nationalist government's rump state in Taiwan until his death in 1975.", "title": "History" }, { "paragraph_id": 33, "text": "Marxist and nationalist movements became popular in Southeast Asia as a response to colonial control and the subsequent Japanese occupation of Southeast Asia, with both ideologies facilitating the creation of dictatorships after World War II. Communist dictatorships in the region aligned with China following the latter's establishment as a communist state. A similar phenomenon took place in Korea, where Kim Il Sung created a Soviet-backed communist dictatorship in North Korea and Syngman Rhee created a US-backed nationalist dictatorship in South Korea.", "title": "History" }, { "paragraph_id": 34, "text": "The Middle East was decolonized during the Cold War, and many nationalist movements gained strength post-independence. These nationalist movements supported non-alignment, keeping most Middle Eastern dictatorships out of the American and Soviet spheres of influence. These movements supported pan-Arab Nasserism during most of the Cold War, but they were largely replaced by Islamic nationalism by the 1980s. Several Middle Eastern countries were the subject of military coups in the 1950s and 1960s, including Iraq, Syria, North Yemen, and South Yemen. A 1953 coup overseen by the American and British governments restored Mohammad Reza Pahlavi as the absolute monarch of Iran, who in turn was overthrown during the Iranian Revolution of 1979, which established Ruhollah Khomeini as the Supreme Leader of Iran under an Islamist government.", "title": "History" }, { "paragraph_id": 35, "text": "During World War II, many countries of Central and Eastern Europe were occupied by the Soviet Union. When the war ended, these countries were incorporated into the Soviet sphere of influence, and the Soviet Union exercised control over their governments. Josip Broz Tito declared a communist government in Yugoslavia during World War II, which was initially aligned with the Soviet Union. Relations between the two countries were strained by Soviet attempts to influence Yugoslavia, leading to the Tito–Stalin split in 1948. Albania was established as a communist dictatorship under Enver Hoxha in 1944. It was initially aligned with Yugoslavia, but its alignment shifted throughout the Cold War between Yugoslavia, the Soviet Union, and China. The stability of the Soviet Union weakened in the 1980s. The Soviet economy became unsustainable, and communist governments lost the support of intellectuals and their population in general. In 1989, communism was abandoned by the countries of Central and Eastern Europe through a series of revolutions, and the Soviet Union itself was dissolved in 1991.", "title": "History" }, { "paragraph_id": 36, "text": "Military dictatorships remained prominent in Latin America during the Cold War, though the number of coups declined starting in the 1980s. Between 1967 and 1991, 12 Latin American countries underwent at least one military coup, with Haiti and Honduras experiencing three and Bolivia experiencing eight. A one-party communist dictatorship was formed in Cuba when a US-backed dictatorship was overthrown in the Cuban Revolution, creating the only Soviet-backed dictatorship in the western hemisphere.
To maintain power, Chilean dictator Augusto Pinochet organized Operation Condor with other South American dictators to facilitate cooperation between their respective intelligence agencies and secret police organizations.", "title": "History" }, { "paragraph_id": 37, "text": "The nature of dictatorship changed in much of the world at the onset of the 21st century. Between the 1990s and the 2000s, most dictators moved away from being \"larger-than-life figures\" who controlled the populace through terror and isolated themselves from the global community. This was replaced by a trend of developing a positive public image to maintain support among the populace and moderating rhetoric to integrate with the global community. In contrast to the overtly repressive nature of 20th century dictatorships, authoritarian strongmen of the 21st century are sometimes labelled \"spin dictators\": rulers who attempt to monopolise power through authoritarian upgrading, appealing to democratic sentiments while covertly pursuing repressive measures, such as embracing modern technology, manipulating information content, regulating cyberspace, and slandering dissidents. On the other hand, a handful of dictators like Bashar al-Assad and Kim Jong Un rule with deadly repression, violence and state terrorism to establish extensive securitization through fear, in line with many 20th century dictatorships.", "title": "History" }, { "paragraph_id": 38, "text": "The development of the internet and digital communication in the 21st century has prompted dictatorships to shift from traditional means of control to digital ones, including the use of artificial intelligence to analyze mass communications, internet censorship to restrict the flow of information, and troll farms to manipulate public opinion. 21st century dictatorships regularly hold sham elections with massive approval ratings to seek public legitimacy and maintain the autocrat's image as a popular figure loved by the masses. The manipulated election results are often weaponized as propaganda tools in information warfare, galvanizing supporters of the dictatorship against dissidents and manufacturing the compliance of the masses through the publicising of falsified figures. Another objective is to portray the dictator as a guardian figure who unifies the country, without whom its security disintegrates and chaos ensues. North Korea is the only country in East Asia ruled by a hereditary dictatorship: on the death of Kim Il-sung in 1994, power passed to his son Kim Jong-il, and then to his grandson Kim Jong-un in 2011.", "title": "History" }, { "paragraph_id": 39, "text": "Dictatorship in Europe largely ended after the fall of the Soviet Union in 1991 and the liberalization of most communist states. Belarus under the rule of Alexander Lukashenko has been described as \"the last European dictatorship\", though the rule of Vladimir Putin in Russia has also been described as a dictatorship. Latin America saw a period of liberalization similar to that of Europe at the end of the Cold War, with Cuba being the only Latin American country that did not experience any degree of liberalization between 1992 and 2010. The countries of Central Asia did not liberalize after the fall of the Soviet Union, instead becoming dictatorships led by former elites of the Communist Party and later by successive dictators.
These countries maintain parliaments and human rights organizations, but these institutions remain under the control of the countries' respective dictators.", "title": "History" }, { "paragraph_id": 40, "text": "The Middle East and Northern Africa did not undergo liberalization during the third wave of democratisation, and most countries in this region remain dictatorships in the 21st century. Dictatorships in the Middle East and Northern Africa are either illiberal republics in which a president holds power through unfair elections, or absolute monarchies in which power is inherited. Iraq, Israel, Lebanon, and Palestine are the only democratic nations in the region, with Israel being the only nation in this region that affords broad political liberties to its citizens.", "title": "History" }, { "paragraph_id": 41, "text": "Most dictatorships exist in countries with high levels of poverty. Poverty has a destabilizing effect on government, causing democracy to fail and regimes to fall more often. The form of government does not correlate with the amount of economic growth, and dictatorships on average grow at the same rate as democracies, though dictatorships have been found to have larger fluctuations. Dictators are more likely to make long-term investments in the country's economy if they feel secure in their power. Exceptions to the pattern of poverty in dictatorships include oil-rich Middle Eastern dictatorships and the East Asian Tigers during their periods of dictatorship.", "title": "Economics" }, { "paragraph_id": 42, "text": "The type of economy in a dictatorship can affect how it functions. Economies based on natural resources allow dictators more power, as they can easily extract rents without strengthening or cooperating with other institutions. More complex economies require additional cooperation between the dictator and other groups. The economic focus of a dictatorship often depends on the strength of the opposition, as a weaker opposition allows a dictator to extract additional wealth from the economy through corruption.", "title": "Economics" }, { "paragraph_id": 43, "text": "Several factors determine the stability of a dictatorship, which must maintain some degree of popular support to prevent resistance groups from growing. This may be ensured through incentives, such as distribution of financial resources or promises of security, or through repression, in which failing to support the regime is punished. Stability can be weakened when opposition groups grow and unify or when elites are not loyal to the regime. One-party dictatorships are generally more stable and last longer than military or personalist dictatorships.", "title": "Legitimacy and stability" }, { "paragraph_id": 44, "text": "A dictatorship may fall because of a military coup, foreign intervention, negotiation, or popular revolution. A military coup is often carried out when a regime is threatening the country's stability or during periods of societal unrest. Foreign intervention takes place when another country seeks to topple a regime by invading the country or supporting the opposition. A dictator may negotiate the end of a regime if it has lost legitimacy or if a violent removal seems likely. Revolution takes place when the opposition group grows large enough that elites in the regime cannot suppress it or choose not to. Negotiated removals are more likely to end in democracy, while removals by force are more likely to result in a new dictatorial regime.
A dictator who has concentrated significant power is more likely to be exiled, imprisoned, or killed after ouster, and is accordingly more likely to refuse negotiation and cling to power.", "title": "Legitimacy and stability" }, { "paragraph_id": 45, "text": "Dictatorships are typically more aggressive than democracies when in conflict with other nations, as dictators do not have to fear the electoral costs of war. Military dictatorships are more prone to conflict due to the inherent military strength associated with such a regime, and personalist dictatorships are more prone to conflict due to the weakness of the institutions that would otherwise check the dictator's power. In the 21st century, dictatorships have moved toward greater integration with the global community and increasingly attempt to present themselves as democratic. Dictatorships are often recipients of foreign aid on the condition that they make advances toward democratization. A study found that dictatorships that engage in oil drilling are more likely to remain in power: 70.63% of the dictators who engaged in oil drilling were still in power after five years, while only 59.92% of non-oil-producing dictators survived their first five years.", "title": "Legitimacy and stability" }, { "paragraph_id": 46, "text": "Most dictatorships hold elections to maintain legitimacy and stability, but these elections are typically uncompetitive and the opposition is not permitted to win. Elections allow a dictatorship to exercise some control over the opposition by setting the terms under which the opposition challenges the regime. Elections are also used to control elites within the dictatorship by requiring them to compete with one another and incentivizing them to build support with the populace, allowing the most popular and most competent elites to be promoted in the regime. Elections also support the legitimacy of a dictatorship by presenting the image of a democracy, establishing plausible deniability of its status as a dictatorship for both the populace and foreign governments. Should a dictatorship fail, elections also permit dictators and elites to accept defeat without fearing violent recourse. Dictatorships may influence the results of an election through electoral fraud, intimidation or bribing of candidates and voters, use of state resources such as media control, manipulation of electoral laws, restricting who may run as a candidate, or disenfranchising demographics that may oppose the dictatorship.", "title": "Legitimacy and stability" }, { "paragraph_id": 47, "text": "In the 20th century, most dictatorships held elections in which voters could only choose to support the dictatorship, with only one-quarter of partisan dictatorships permitting opposition candidates to participate. Since the end of the Cold War, more dictatorships have established \"semi-competitive\" elections in which opposition is allowed to participate in elections but is not allowed to win, with approximately two-thirds of dictatorships permitting opposition candidates in 2018. Opposition parties in dictatorships may be restricted by preventing them from campaigning, banning more popular opposition parties, preventing opposition members from forming a party, or requiring that candidates be a member of the ruling party. Dictatorships may hold semi-competitive elections to qualify for foreign aid, to demonstrate a dictator's control over the government, or to incentivize the party to expand its information-gathering capacity, particularly at the local level.
Semi-competitive elections also have the effect of incentivizing members of the ruling party to provide better treatment of citizens so they will be chosen as party nominees due to their popularity.", "title": "Legitimacy and stability" }, { "paragraph_id": 48, "text": "In a dictatorship, violence is used to coerce or repress all opposition to the dictator's rule, and the strength of a dictatorship depends on its use of violence. This violence is frequently exercised through institutions such as military or police forces. The use of violence by a dictator is frequently most severe during the first few years of a dictatorship, because the regime has not yet solidified its rule and more detailed information for targeted coercion is not yet available. As the dictatorship becomes more established, it moves away from violence by resorting to the use of other coercive measures, such as restricting people's access to information and tracking the political opposition. Dictators are incentivized to avoid the use of violence once a reputation of violence is established, as it damages the dictatorship's other institutions and poses a threat to the dictator's rule should government forces become disloyal.", "title": "Legitimacy and stability" }, { "paragraph_id": 49, "text": "Institutions that coerce the opposition through the use of violence may serve different roles or they may be used to counterbalance one another in order to prevent one institution from becoming too powerful. Secret police are used to gather information about specific political opponents and carry out targeted acts of violence against them, paramilitary forces defend the regime from coups, and formal militaries defend the dictatorship during foreign invasions and major civil conflicts.", "title": "Legitimacy and stability" }, { "paragraph_id": 50, "text": "Terrorism is less common in dictatorships. Allowing the opposition to have representation in the regime, such as through a legislature, further reduces the likelihood of terrorist attacks in a dictatorship. Military and one-party dictatorships are more likely to experience terrorism than personalist dictatorships, as these regimes are under more pressure to undergo institutional change in response to terrorism.", "title": "Legitimacy and stability" } ]
A dictatorship is an autocratic form of government which is characterized by a leader, or a group of leaders, who hold governmental powers with few to no limitations. Politics in a dictatorship are controlled by a dictator, and they are facilitated through an inner circle of elites that includes advisers, generals, and other high-ranking officials. The dictator maintains control by influencing and appeasing the inner circle and repressing any opposition, which may include rival political parties, armed resistance, or disloyal members of the dictator's inner circle. Dictatorships can be formed by a military coup that overthrows the previous government through force, or by a self-coup in which elected leaders make their rule permanent. Dictatorships are authoritarian or totalitarian, and they can be classified as military dictatorships, one-party dictatorships, personalist dictatorships, or absolute monarchies. The use of the term "dictatorship" emerged in the Roman Republic, referring to "a temporary grant of absolute power to a leader to handle some emergency." The earliest military dictatorships developed in the post-classical era, particularly in Shogun-era Japan and in England under Cromwell. Modern dictatorships first developed in the 19th century, which included Bonapartism in Europe and caudillos in Latin America. The 20th century saw the rise of fascist and communist dictatorships in Europe; fascism was eradicated in the aftermath of World War II in 1945, while communism spread to other continents, maintaining prominence until the end of the Cold War in 1991. The 20th century also saw the rise of personalist dictatorships in Africa and military dictatorships in Latin America, both of which became prominent in the 1960s and 1970s. The period following the collapse of the Soviet Union witnessed a sporadic rise in democracies across the world, although several dictatorships persisted into the 21st century, particularly in Africa and Asia. During the early 21st century, democratic governments came to outnumber authoritarian states by 98 to 80. The second decade of the century was marked by a "democratic recession" following the 2008 global financial crisis, which drastically reduced the appeal of the Western model across the world. By 2019, the number of authoritarian governments had again surpassed that of democracies, by 92 to 87. Dictatorships often attempt to present a democratic facade, frequently holding elections in order to establish their legitimacy or to provide incentives to members of the ruling party, but these elections are not competitive for the opposition. Stability in a dictatorship is maintained through coercion and political repression, which involves the restriction of access to information, the tracking of the political opposition, and acts of violence. Dictatorships that fail to repress the opposition are susceptible to collapse through a coup or a revolution.
2001-12-29T12:43:00Z
2023-12-21T16:10:01Z
[ "Template:Good article", "Template:See also", "Template:Reflist", "Template:Cite book", "Template:Cite news", "Template:Authority control", "Template:Main", "Template:Div col end", "Template:Citation", "Template:Cite magazine", "Template:Authoritarian types of rule", "Template:Div col", "Template:Cite web", "Template:Cite report", "Template:Short description", "Template:Use dmy dates", "Template:Basic forms of government", "Template:Sfn", "Template:Further", "Template:Lang", "Template:Cite journal", "Template:Wikiquote" ]
https://en.wikipedia.org/wiki/Dictatorship
9,039
Django Reinhardt
Jean Reinhardt (23 January 1910 – 16 May 1953), known by his Romani nickname Django (French: [dʒãŋɡo ʁɛjnaʁt] or [dʒɑ̃ɡo ʁenɑʁt]), was a Romani-French jazz guitarist and composer. He was one of the first major jazz talents to emerge in Europe and has been hailed as one of its most significant exponents. With violinist Stéphane Grappelli, Reinhardt formed the Paris-based Quintette du Hot Club de France in 1934. The group was among the first to play jazz that featured the guitar as a lead instrument. Reinhardt recorded in France with many visiting American musicians, including Coleman Hawkins and Benny Carter, and briefly toured the United States with Duke Ellington's orchestra in 1946. He died suddenly of a stroke in 1953 at the age of 43. Reinhardt's most popular compositions have become standards within gypsy jazz, including "Minor Swing", "Daphne", "Belleville", "Djangology", "Swing '42", and "Nuages". Jazz guitarist Frank Vignola says that nearly every major popular-music guitarist in the world has been influenced by Reinhardt. Over the last few decades, annual Django festivals have been held throughout Europe and the U.S., and a biography has been written about his life. In February 2017, the Berlin International Film Festival held the world premiere of the French film Django. Reinhardt was born on 23 January 1910 in Liberchies, Pont-à-Celles, Belgium, into a French/Belgian family of Manouche Romani descent. His father, Jean Eugene Weiss, a French Alsatian domiciled in Paris with his wife, went by the name Jean-Baptiste Reinhardt, his wife's surname, to avoid French military conscription. His mother, Laurence Reinhardt, was a dancer. The birth certificate refers to "Jean Reinhart, son of Jean Baptiste Reinhart, artist, and Laurence Reinhart, housewife, domiciled in Paris". A number of authors have repeated the claim that Reinhardt's nickname, Django, is Romani for "I awake"; however, it may also simply have been a diminutive, or local Walloon version, of "Jean". Reinhardt spent most of his youth in Romani encampments close to Paris, where he started playing the violin, banjo and guitar. He became adept at stealing chickens. His father reportedly played music in a family band comprising himself and seven brothers; a surviving photograph shows this band including his father on piano. Reinhardt was attracted to music at an early age, first playing the violin. At the age of 12, he received a banjo-guitar as a gift. He quickly taught himself to play, mimicking the fingerings of musicians he watched, who would have included local virtuoso players of the day such as Jean "Poulette" Castro and Auguste "Gusti" Malha, as well as his uncle Guiligou, who played violin, banjo and guitar. Reinhardt was able to make a living playing music by the time he was 15, busking in cafés, often with his brother Joseph. At this time, he had not started playing jazz, although he had probably heard and been intrigued by the version of jazz played by American expatriate bands like Billy Arnold's. He received little formal education and acquired the rudiments of literacy only in adult life. At the age of 17, Reinhardt married Florine "Bella" Mayer, a girl from the same Romani settlement, according to Romani custom (although not an official marriage under French law). The following year he recorded for the first time.
On these recordings, made in 1928, Reinhardt plays the "banjo" (actually the banjo-guitar) accompanying the accordionists Maurice Alexander, Jean Vaissade and Victor Marceau, and the singer Maurice Chaumel. His name was now drawing international attention, such as from British bandleader Jack Hylton, who came to France just to hear him play. Hylton offered him a job on the spot, and Reinhardt accepted. Before he had a chance to start with the band, however, Reinhardt nearly died. On the night of 2 November 1928, Reinhardt was going to bed in the wagon that he and his wife shared in the caravan. He knocked over a candle, which ignited the extremely flammable celluloid that his wife used to make artificial flowers. The wagon was quickly engulfed in flames. The couple escaped, but Reinhardt suffered extensive burns over half his body. During his 18-month hospitalization, doctors recommended amputation of his badly damaged right leg. Reinhardt refused the surgery and was eventually able to walk with the aid of a cane. More crucial to his music, the fourth finger (ring finger) and fifth finger (little) of Reinhardt's left hand were badly burned. Doctors believed that he would never play guitar again. During many months of recuperation, Reinhardt taught himself to play again using primarily the index and third fingers of his left hand by making use of a new six-string steel-strung acoustic guitar that was bought for him by his brother, Joseph Reinhardt, who was also an accomplished guitarist. While he never regained the use of those two fingers, Reinhardt regained his musical mastery by focusing on his left index and middle fingers, using the two injured fingers only for chord work. Within a year of the fire, in 1929, Bella Mayer gave birth to their son, Henri "Lousson" Reinhardt. Soon thereafter, the couple split up. The son eventually took the surname of his mother's new husband. As Lousson Baumgartner, the son himself became an accomplished musician who went on to record with his biological father. After parting from his wife and son, Reinhardt traveled throughout France, getting occasional jobs playing music at small clubs. He had no specific goals, living a hand-to-mouth existence, spending his earnings as quickly as he made them. Accompanying him on his travels was his new girlfriend, Sophie Ziegler. Nicknamed "Naguine," she was a distant cousin. In the years after the fire, Reinhardt was rehabilitating and experimenting on the guitar that his brother had given him. After having played a broad spectrum of music, he was introduced to American jazz by an acquaintance, Émile Savitry, whose record collection included such musical luminaries as Louis Armstrong, Duke Ellington, Joe Venuti, Eddie Lang, and Lonnie Johnson. (The swinging sound of Venuti's jazz violin and Eddie Lang's virtuoso guitar-playing anticipated the more famous sound of Reinhardt and Grappelli's later ensemble.) Hearing their music triggered in Reinhardt a vision and goal of becoming a jazz professional. While developing his interest in jazz, Reinhardt met Stéphane Grappelli, a young violinist with similar musical interests. In 1928, Grappelli had been a member of the orchestra at the Ambassador Hotel while bandleader Paul Whiteman and Joe Venuti were performing there. In early 1934 both Reinhardt and Grappelli were members of Louis Vola's band. 
From 1934 until the outbreak of World War II in 1939, Reinhardt and Grappelli worked together as the principal soloists of their newly formed quintet, the Quintette du Hot Club de France, in Paris. It became the most accomplished and innovative European jazz group of the period. Reinhardt's brother Joseph and Roger Chaput also played on guitar, and Louis Vola was on bass. The Quintette was one of the few well-known jazz ensembles composed only of stringed instruments. In Paris on 14 March 1933, Reinhardt recorded two takes each of "Parce que je vous aime" and "Si, j'aime Suzy", vocal numbers with extensive guitar fills and guitar support. He used three guitarists along with an accordion lead, violin, and bass. In August 1934, he made other recordings with more than one guitar (Joseph Reinhardt, Roger Chaput, and Reinhardt), including the first recording by the Quintette. In both years the great majority of their recordings featured a wide variety of horns, often in multiples, piano, and other instruments, but the all-string instrumentation is the one most often adopted by emulators of the Hot Club sound. Decca Records in the United States released three records of Quintette tunes with Reinhardt on guitar, and one other, credited to "Stephane Grappelli & His Hot 4 with Django Reinhardt", in 1935. Reinhardt also played and recorded with many American jazz musicians, such as Adelaide Hall, Coleman Hawkins, Benny Carter, and Rex Stewart (who later stayed in Paris). He participated in a jam session and radio performance with Louis Armstrong. Later in his career, Reinhardt played with Dizzy Gillespie in France. Also in the neighborhood was the artistic salon R-26, at which Reinhardt and Grappelli performed regularly as they developed their unique musical style. In 1938, Reinhardt's quintet played to thousands at an all-star show held in London's Kilburn State auditorium. While playing, he noticed American film actor Eddie Cantor in the front row. When their set ended, Cantor rose to his feet, then went up on stage and kissed Reinhardt's hand, paying no heed to the audience. A few weeks later the quintet played at the London Palladium. When World War II broke out, the original quintet was on tour in the United Kingdom. Reinhardt returned to Paris at once, leaving his wife in the UK. Grappelli remained in the United Kingdom for the duration of the war. Reinhardt re-formed the quintet, with Hubert Rostaing on clarinet replacing Grappelli. While he tried to continue with his music, the Nazi occupation presented Reinhardt with a potentially catastrophic obstacle, as he was a Romani jazz musician. Beginning in 1933, all German Romani were barred from living in cities, herded into settlement camps, and routinely sterilized. Romani men were required to wear a brown Gypsy ID triangle sewn on their chest, similar to the pink triangle that homosexuals wore, and much like the yellow Star of David that Jews subsequently had to wear. During the war, Romani were systematically killed in concentration camps. In France, they were used as slave labour on farms and in factories. During the Holocaust an estimated 600,000 to 1.5 million Romani throughout Europe were killed. Hitler and Joseph Goebbels viewed jazz as un-German counterculture. Nonetheless, Goebbels stopped short of a complete ban on jazz, which now had many fans in Germany and elsewhere.
Official policy towards jazz was much less strict in occupied France, according to author Andy Fry, with jazz music frequently played on both Radio France, the official station of Vichy France, and Radio Paris, which was controlled by the Germans. A new generation of French jazz enthusiasts, the Zazous, had arisen and swollen the ranks of the Hot Club. In addition to the increased interest, many American musicians based in Paris during the thirties had returned to the US at the beginning of the war, leaving more work for French musicians. Reinhardt was the most famous jazz musician in Europe at the time, working steadily during the early war years and earning a great deal of money, yet always under threat. Reinhardt expanded his musical horizons during this period. Using an early amplification system, he was able to work in more of a big-band format, in large ensembles with horn sections. He also experimented with classical composition, writing a Mass for the Gypsies and a symphony. Since he did not read music, Reinhardt worked with an assistant to notate what he was improvising. His modernist piece "Rythme Futur" was also intended to be acceptable to the Nazis. In 1943, Reinhardt married his long-term partner Sophie "Naguine" Ziegler in Salbris. They had a son, Babik Reinhardt, who became a respected guitarist. That same year, the tide of war turned against the Germans, and the situation in Paris darkened considerably. Severe rationing was in place, and members of Reinhardt's circle were being captured by the Nazis or joining the resistance. Reinhardt's first attempt at escape from Occupied France led to capture. Fortunately for him, a jazz-loving German, Luftwaffe officer Dietrich Schulz-Köhn, allowed him to return to Paris. Reinhardt made a second attempt a few days later, but was stopped in the middle of the night by Swiss border guards, who forced him to return to Paris again. One of his tunes, 1940's "Nuages", became an unofficial anthem in Paris to signify hope for liberation. During a concert at the Salle Pleyel, the popularity of the tune was such that the crowd made him replay it three times in a row. The single sold over 100,000 copies. Biographer Michael Dregni wrote that "in this graceful and eloquent melody, Django evoked the woes of the war that weighed on people's souls—and then transcended it all." Unlike the estimated 600,000 Romani people who were interned and killed in the Porajmos, the Romani Holocaust, Reinhardt survived the war. After the war, Reinhardt rejoined Grappelli in the UK. In the autumn of 1946, he made his first tour in the United States, debuting at Cleveland Music Hall as a special guest soloist with Duke Ellington and His Orchestra. He played with many musicians and composers, such as Maury Deutsch. At the end of the tour, Reinhardt played two nights at Carnegie Hall in New York City; he received a great ovation and took six curtain calls on the first night. Despite his pride in touring with Ellington (one of two letters to Grappelli relates his excitement), he was not fully integrated into the band. He played a few tunes at the end of the show, backed by Ellington, with no special arrangements written for him. After the tour, Reinhardt secured an engagement at Café Society Uptown, where he played four solos a day, backed by the resident band. These performances drew large audiences. Having failed to bring his usual Selmer Modèle Jazz, he played on a borrowed electric guitar, which he felt hampered the delicacy of his style.
He had been promised jobs in California, but they failed to develop. Tired of waiting, Reinhardt returned to France in February 1947. After his return, Reinhardt appeared to find it difficult to adjust. He sometimes showed up for scheduled concerts without a guitar or amplifier, or wandered off to the park or beach. On a few occasions he refused to get out of bed. Reinhardt developed a reputation among his band, fans, and managers as extremely unreliable. He skipped sold-out concerts to "walk to the beach" or "smell the dew." During this period he continued to attend the R-26 artistic salon in Montmartre, improvising with his devoted collaborator, Stéphane Grappelli. In Rome in 1949, Reinhardt recruited three Italian jazz players (on bass, piano, and snare drum) and recorded over 60 tunes in an Italian studio. He reunited with Grappelli and used his acoustic Selmer-Maccaferri. The recording was issued for the first time in the late 1950s. Back in Paris, in June 1950, Reinhardt was invited to join an entourage to welcome the return of Benny Goodman. He also attended a reception for Goodman, who, after the war ended, had asked Reinhardt to join him in the U.S. Goodman repeated his invitation and, out of politeness, Reinhardt accepted. However, Reinhardt later had second thoughts about what role he could play alongside Goodman, who was the "King of Swing", and remained in France. In 1951, Reinhardt retired to Samois-sur-Seine, near Fontainebleau, where he lived until his death. He continued to play in Paris jazz clubs and began playing electric guitar. (He often used a Selmer fitted with an electric pickup, despite his initial hesitation about the instrument.) In his final recordings, made with his Nouvelle Quintette in the last few months of his life, he had begun moving in a new musical direction, in which he assimilated the vocabulary of bebop and fused it with his own melodic style. On 16 May 1953, while walking home from Fontainebleau–Avon station after playing in a Paris club, he collapsed outside his house from a brain hemorrhage. It was a Saturday, and it took a full day for a doctor to arrive. Reinhardt was declared dead on arrival at the hospital in Fontainebleau, at the age of 43. Reinhardt developed his initial musical approach via tutoring by relatives and exposure to other gypsy guitar players of the day, then playing the banjo-guitar alongside accordionists in the world of the Paris bal musette. He played mainly with a plectrum for maximum volume and attack (particularly in the 1920s and early 1930s, when amplification in venues was minimal or non-existent), although he could also play fingerstyle on occasion, as evidenced by some recorded introductions and solos. Following his accident in 1928, in which his left hand was severely burned, he was left with the use of only his first two fingers. As a result, he developed a completely new left-hand technique and started performing on guitar accompanying popular singers of the day, before discovering jazz and presenting his new hybrid style of gypsy approach plus jazz to the outside world via the Quintette du Hot Club de France.
Despite his left-hand handicap, Reinhardt was able to recapture (in modified form) and then surpass his previous level of proficiency on the guitar, by now his main instrument, not only as a lead instrumental voice but also as a driving and harmonically interesting rhythm player. His virtuosity, incorporating many gypsy-derived influences, was matched with a superb sense of melodic invention as well as general musicality in his choice of notes, timing, dynamics, and use of the maximum tonal range of an instrument previously thought of by many critics as potentially limited in expression. Playing completely by ear (he could neither read nor write music), he roamed freely across the full range of the fretboard, giving full flight to his musical imagination, and could play with ease in any key. Guitarists, particularly in Britain and the United States, could scarcely believe what they heard on the records that the Quintette was making; guitarist, gypsy jazz enthusiast and educator Ian Cruickshank writes: It wasn't until 1938, and the Quintet's first tour of England, that guitarists [in the U.K.] were able to witness Django's amazing abilities. His hugely innovative technique included, on a grand scale, such unheard of devices as melodies played in octaves, tremolo chords with shifting notes that sounded like whole horn sections, a complete array of natural and artificial harmonics, highly charged dissonances, super-fast chromatic runs from the open bass strings to the highest notes on the 1st string, an unbelievably flexible and driving right-hand, two and three octave arpeggios, advanced and unconventional chords and a use of the flattened fifth that predated be-bop by a decade. Add to all this Django's staggering harmonic and melodic concept, huge sound, pulsating swing, sense of humour and sheer speed of execution, and it is little wonder that guitar players were knocked sideways upon their first encounter with this full-blown genius. Because of his damaged left hand (his ring and pinky fingers helped little in his playing), Reinhardt had to modify both his chordal and melodic approach extensively. For chords he developed a novel system based largely around 3-note chords, each of which could serve as the equivalent of several conventional chords in different inversions; for the treble notes he could employ his ring and little fingers to fret the relevant high strings even though he could not articulate these fingers independently, while in some chords he also employed his left-hand thumb on the lowest string. Within his rapid melodic runs he frequently incorporated arpeggios, which could be played using two notes per string (played with his two "good" fingers, his index and middle fingers) while shifting up or down the fingerboard, as opposed to the more conventional "box" approach of moving across strings within a single fretboard position. He also produced some of his characteristic "effects" by moving a fixed shape (such as a diminished chord) rapidly up and down the fretboard, resulting in what one writer has called "intervallic cycling of melodic motifs and chords". The only known synchronised (sound and vision) footage of Reinhardt in performance, playing an instrumental version of the song "J'Attendrai" for the short jazz film Le Jazz Hot in 1938–39, offers an unsurpassed insight into these techniques in use (copies are available on YouTube and elsewhere).
Hugues Panassié, in his 1942 book The Real Jazz, wrote:

First of all, his instrumental technique is vastly superior to that of all other jazz guitarists. This technique permits him to play with an inconceivable velocity and makes his instrument completely versatile. Though his virtuosity is stupefying, it is no less so than his creative invention. In his solos [...] his melodic ideas are sparkling and ravishing, and their abundance scarcely gives the listener time to catch his breath. Django's ability to bend his guitar to the most fantastic audacities, combined with his expressive inflections and vibrato, is no less wonderful; one feels an extraordinary flame burning through every note.

Writing in 1945, Billy Neil and E. Gates stated that Reinhardt

set new standards by an almost incredible and hitherto unthought-of technique ... His ideas have a freshness and spontaneity that are at once fascinating and alluring ... [Nevertheless] The characteristics of Reinhardt's music are primarily emotional. His relative association of experience, reinforced by a profound rational knowledge of his instrument; the guitar's possibilities and limitations; his love for music and the expression of it—all are a necessary adjunct to the means of expressing these emotions.

Django-style enthusiast John Jorgenson has been quoted as saying:

Django's guitar playing always has so much personality in it, and seems to contain such joy and feeling that it is infectious. He also pushes himself to the edge nearly all the time, and rides a wave of inspiration that sometimes gets dangerous. Even the few times he does not quite make his ideas flow out flawlessly it is still so exciting that mistakes don't matter! Django's seemingly never-ending bag of licks, tricks and colors always keep the song interesting, and his intensity level is rarely met by any guitarist. Django's technique was not only phenomenal, but it was personal and unique to him due to his handicap. It is very difficult to achieve the same tone, articulation and clarity using all 5 left hand fingers. It is possible to get closer with only 2 fingers, but again is quite challenging. Probably the thing about this music that makes it always challenging and exciting to play is that Django raised the bar so high, that it is like chasing genius to get close to his level of playing.

In his later style (c. 1946 onwards) Reinhardt began to incorporate more bebop influences in his compositions and improvisations, and fitted a Stimer electric pickup to his acoustic guitar. With amplification his playing became more linear and "horn like": the amplified instrument offered longer sustain and could be heard in quiet passages, so he relied less on the gypsy "bag of tricks" he had developed for his acoustic style. Some of his late recordings also feature a supporting group very different from his "classic", pre-war Quintette sound. These "electric period" recordings have in general received less popular re-release and critical analysis than his pre-war releases (a body that also extends through 1940 to 1945, when Grappelli was absent, and which included some of his most famous compositions, such as "Nuages"), but they are a fascinating area of Reinhardt's work to study and have begun to be revived by players such as the Rosenberg Trio (with their 2010 release "Djangologists") and Biréli Lagrène.
Wayne Jefferies, in his article "Django's Forgotten Era", writes:

Early in 1951, armed with his amplified Maccaferri – which he used to the very end – he put together a new band of the best young modern musicians in Paris; including Hubert Fol, an altoist in the Charlie Parker mould. Although Django was twenty years older than the rest of the band, he was completely in command of the modern style. Whilst his solos became less chordal and his lines more Christian-like, he retained his originality. I believe he should be rated much more highly as a be-bop guitarist. His infallible technique, his daring, 'on the edge' improvisations coupled with his vastly advanced harmonic sense, took him to musical heights that Christian and many other Bop musicians never came near. The live cuts from Club St. Germain in February 1951 are a revelation. Django is on top form; full of new ideas that are executed with amazing fluidity, cutting angular lines that always retain that ferocious swing.

Reinhardt's first son, Lousson (a.k.a. Henri Baumgartner), played jazz in a mostly bebop style in the 1950s and 1960s. He followed the Romani lifestyle and was relatively little recorded. Reinhardt's second son, Babik, became a guitarist in a more contemporary jazz style and recorded a number of albums before his death in 2001. After Reinhardt died, his younger brother Joseph at first swore to abandon music, but he was persuaded to perform and record again. Joseph's son Markus Reinhardt is a violinist in the Romani style.

A third generation of direct descendants has followed as musicians: David Reinhardt, Reinhardt's grandson (by his son Babik), leads his own trio, and Dallas Baumgartner, a great-grandson by Lousson, is a guitarist who travels with the Romani and keeps a low public profile. A distant relative, the violinist Schnuckenack Reinhardt, became known in Germany as a performer of gypsy music and gypsy jazz up to his death in 2006, and helped keep Reinhardt's legacy alive in the decades following his death.

Reinhardt is regarded as one of the greatest guitar players of all time, and the first important European jazz musician to make a major contribution to jazz guitar. During his career he wrote nearly 100 songs, according to jazz guitarist Frank Vignola. After he adopted a Selmer guitar in the mid-1930s, his style took on new volume and expressiveness. Because of his physical disability, he played mainly using his index and middle fingers, and invented a distinctive style of jazz guitar.

For about a decade after Reinhardt's death, interest in his musical style was minimal. In the fifties, bebop superseded swing in jazz, rock and roll took off, and electric instruments became dominant in popular music. Since the mid-sixties there has been a revival of interest in Reinhardt's music, a revival that has extended into the 21st century, with annual festivals and periodic tribute concerts. His devotees included classical guitarist Julian Bream and country guitarist Chet Atkins, who considered him one of the ten greatest guitarists of the twentieth century.

Jazz guitarists in the U.S., such as Charlie Byrd and Wes Montgomery, were influenced by his style; Byrd (1925–1999) said that Reinhardt was his primary influence. Guitarist Mike Peters notes that "the word 'genius' is bantered about too much. But in jazz, Louis Armstrong was a genius, Duke Ellington was another one, and Reinhardt was also."
David Grisman adds, "As far as I'm concerned, no one since has come anywhere close to Django Reinhardt as an improviser or technician." The popularity of gypsy jazz has generated an increasing number of festivals, such as the Festival Django Reinhardt held every last weekend of June since 1983 in Samois-sur-Seine (France), and since 2017 in nearby Fontainebleau; the various DjangoFests held throughout Europe and the US; and "Django in June", an annual camp for Gypsy jazz musicians and aficionados held at Smith College in Massachusetts. Woody Allen's film Sweet and Lowdown (1999), the story of a Django Reinhardt-like character, mentions Reinhardt and includes actual recordings in the film. In February 2017, the Berlin International Film Festival held the world premiere of Django, a French film directed by Etienne Comar. The movie covers Django's escape from Nazi-occupied Paris in 1943 and the fact that even under "constant danger, flight and the atrocities committed against his family", he continued composing and performing. Reinhardt's music was re-recorded for the film by the Dutch jazz band Rosenberg Trio with lead guitarist Stochelo Rosenberg. The documentary film, Djangomania! was released in 2005. The hour-long film was directed and written by Jamie Kastner, who traveled throughout the world to show the influence of Django's music in various countries. In 1984 the Kool Jazz Festival, held in Carnegie Hall and Avery Fisher Hall, was dedicated entirely to Reinhardt. Performers included Grappelli, Benny Carter, and Mike Peters with his group of seven musicians. The festival was organized by George Wein. Reinhardt is celebrated annually in the village of Liberchies, his birthplace. Numerous musicians have written and recorded tributes to Reinhardt. The jazz standard "Django" (1954) was composed by John Lewis of the Modern Jazz Quartet in honour of Reinhardt. The Allman Brothers Band song "Jessica" was written by Dickey Betts in tribute to Reinhardt. American country music artists Willie Nelson and Merle Haggard named their sixth and final collaborative studio album "Django and Jimmie". It was released on 2 June 2015, by Legacy Recordings. The album contains the song "Django and Jimmie" which is a tribute to musicians Django Reinhardt and Jimmie Rodgers. Ramelton, Co. Donegal, Ireland, each year hosts a festival in tribute to Django called "Django sur Lennon" or "Django on the Lennon" the Lennon being the name of the local river that runs through the village. In coincidence with the 110th anniversary in 2020 of Django's birth, a graphic novel depicting his youth years was published under the title Django Main de Feu, by writer Salva Rubio and artist Efa through Belgian publisher Dupuis. On 23 January 2010, Google Doodle celebrated Django Reinhard’s 100th Birthday. The instant I heard Django, I flipped. I chose his style because it spoke to me. He was too far ahead of his time. He was something else. French recording artist, Serge Krief Many guitar players and other musicians have expressed admiration for Reinhardt or have cited him as a major influence. Jeff Beck described Reinhardt as "by far the most astonishing guitar player ever" and "quite superhuman". Grateful Dead's Jerry Garcia and Black Sabbath's Tony Iommi, both of whom lost fingers in accidents, were inspired by Reinhardt's example of becoming an accomplished guitar player despite his injuries. Garcia was quoted in June 1985 in Frets Magazine: His technique is awesome! 
Even today, nobody has really come to the state that he was playing at. As good as players are, they haven't gotten to where he is. There's a lot of guys that play fast and a lot of guys that play clean, and the guitar has come a long way as far as speed and clarity go, but nobody plays with the whole fullness of expression that Django has. I mean, the combination of incredible speed – all the speed you could possibly want – but also the thing of every note have a specific personality. You don't hear it. I really haven't heard it anywhere but with Django.

Denny Laine and Jimmy McCulloch, members of Paul McCartney's band Wings, have mentioned him as an inspiration.

"Django is still one of my main influences, I think, for lyricism. He can make me cry when I hear him." – Toots Thielemans

Andrew Latimer, of the band Camel, has stated that he was influenced by Reinhardt.

Willie Nelson has been a lifelong Reinhardt fan, stating in his memoir, "This was a man who changed my musical life by giving me a whole new perspective on the guitar and, on an even more profound level, on my relationship with sound...During my formative years, as I listened to Django's records, especially songs like 'Nuages' that I would play for the rest of my life, I studied his technique. Even more, I studied his gentleness. I love the human sound he gave his acoustic guitar."

Jimmy Page of Led Zeppelin: "Django Reinhardt was fantastic. He must have been playing all the time to be that good."

Reinhardt recorded over 900 sides in his recording career, from 1928 to 1953, the majority for the then-prevalent 78-RPM records, with the remainder as acetates, transcription discs, private and off-air recordings (of radio broadcasts), and part of a film soundtrack. Only one session (eight tracks), recorded in March 1953 for Norman Granz in the then-new LP format, was ever made specifically for album release, but Reinhardt died before the album could be issued. In his earliest recordings Reinhardt played banjo (or, more accurately, banjo-guitar) accompanying accordionists and singers on dances and popular tunes of the day, with no jazz content, whereas in the last recordings before his death he played amplified guitar in the bebop idiom with a pool of younger, more modern French musicians.

A full chronological listing of his lifetime recorded output, and an index of individual tunes, are available from the sources cited here. A few fragments of film performance (without original sound) also survive, as does one complete performance with sound, of the tune "J'Attendrai" performed with the Quintet in 1938 for the short film Le Jazz Hot.

Since his death, Reinhardt's music has been released on many compilations. Intégrale Django Reinhardt, volumes 1–20 (40 CDs), released by the French company Frémeaux from 2002 to 2005, tried to include every known track on which he played.

A small number of waltzes composed by Reinhardt in his youth were never recorded by the composer, but were retained in the repertoire of his associates and several are still played today. They came to light via recordings by Matelo Ferret in 1960 (the waltzes "Montagne Sainte-Genevieve", "Gagoug", "Chez Jacquet" and "Choti"; Disques Vogue (F)EPL7740) and 1961 ("Djalamichto" and "En Verdine"; Disques Vogue (F)EPL7829).
The first four are now available on Matelo's CD Tziganskaïa and Other Rare Recordings, released by Hot Club Records (subsequently reissued as Tziganskaïa: The Django Reinhardt Waltzes); "Chez Jacquet" was also recorded by Baro Ferret in 1966. The names "Gagoug" and "Choti" were reportedly conferred by Reinhardt's widow Naguine on request from Matelo, who had learned the tunes without names. Reinhardt also worked on composing a Mass for use by the gypsies, which was not completed although an 8-minute extract exists, played by the organist Léo Chauliac for Reinhardt's benefit, via a 1944 radio broadcast; this can be found on the CD release "Gipsy Jazz School" and also on volume 12 of the "Intégrale Django Reinhardt" CD compilation.
2001-12-30T05:12:15Z
2023-12-13T23:10:00Z
https://en.wikipedia.org/wiki/Django_Reinhardt
9,041
Digit
Digit may refer to:
2023-07-16T17:42:56Z
https://en.wikipedia.org/wiki/Digit
9,048
Dana Plato
Dana Michelle Plato (née Strain; November 7, 1964 – May 8, 1999) was an American actress. An influential teen idol of the late 1970s and early 1980s, she was best known for playing the role of Kimberly Drummond on the NBC/ABC sitcom Diff'rent Strokes (1978–1986).

Plato was born to a teen mother and was adopted as an infant. She was raised in the San Fernando Valley and was an accomplished figure skater before she turned to acting. Her acting career began with numerous commercial appearances, and her television debut came at the age of 10 with a brief appearance on the television series The Six Million Dollar Man (1975). Plato subsequently appeared in the horror films Exorcist II: The Heretic (1977) and Return to Boggy Creek (1977). Plato's breakthrough feature was the Academy Award-winning film California Suite (1978), in which she played Jenny Warren. She earned widespread recognition and acclaim for playing Kimberly Drummond on Diff'rent Strokes, a role that also earned her nominations for a Young Artist Award for Best Young Actress in a Comedy Series and two TV Land Awards for Best Quintessential Non-Traditional Family. Following Diff'rent Strokes, she worked sporadically in independent films and B movies.

Plato was married twice; she had a child in 1984 during her marriage to guitarist Lanny Lambert. Plato struggled with substance abuse for most of her life. She was arrested in 1991 for robbing a video store, and again the following year for forging a drug prescription. On May 8, 1999, at age 34, Plato was found dead in her motor home from an overdose of prescription drugs. Her death was initially considered accidental, but was later ruled a suicide. Her personal life, in retrospect, has been described as a "tragedy".

Dana Plato was born Dana Michelle Strain on November 7, 1964, in Maywood, California, to Linda Strain, a teenager who was already caring for an 18-month-old child. In June 1965, the seven-month-old Dana was adopted by Dean Plato, who owned a trucking company, and his wife Florine "Kay" Plato. She was raised in the San Fernando Valley. When she was three, her adoptive parents divorced and she lived with her mother. At a very young age, Plato began attending auditions with her mother, and by the age of seven she had appeared in over 100 television commercials. Plato was also an accomplished figure skater.

During her years on Diff'rent Strokes, Plato struggled with drug and alcohol problems; she admitted to drinking alcohol, using cannabis and cocaine, and overdosing on diazepam at the age of 14. In 1995, during an appearance on The Marilyn Kagen Show alongside co-star Todd Bridges, she spoke of her childhood with her mother, stating: "My mother made sure that I was normal. The only thing that she did, the mistake she made, was that she kept me in a plastic bubble. So, I didn't learn about reality and life skills." Kagen suggested that Plato may have been used as a free meal ticket, which Plato denied, explaining that her mother acted as she did so that Plato would not become a prima donna.

Plato made her television acting debut at the age of 10, with a brief appearance on the ABC television show The Six Million Dollar Man. She then starred in the 1975 made-for-television film Beyond the Bermuda Triangle.
Plato made her film debut at the age of 13, appearing as Sandra Phalor in the horror film Exorcist II: The Heretic (1977), for which she was uncredited, and also starred as Evie Joe in the horror film Return to Boggy Creek in the same year; both films were received negatively by critics. Better received was the family-comedy film California Suite (1978), in which Plato played Jenny Warren; the film was also a commercial success, and earned accolades from the Academy Awards and the Golden Globe Awards. When Plato made a brief appearance on The Gong Show, she was spotted by a producer who helped cast her as Kimberly Drummond—the older sister of adopted brothers Arnold and Willis Jackson—on the NBC sitcom Diff'rent Strokes. The series debuted in 1978 and became an immediate hit. Plato appeared regularly on the show throughout its run, notably top-billed for four years. She was nominated for a Young Artist Award for her work on the program, and also was part of two TV Land Award nominations given to its cast. In 1984, following the birth of her son Tyler, Plato was dismissed from her starring role due to both her pregnancy and struggles in her personal life, which producers felt would negatively impact their "wholesome family comedy". She made a one-episode appearance in season 8, episode 12 of The Love Boat. Thereafter, Plato made recurring appearances on Diff'rent Strokes from 1985 to 1986, the show's end; her final appearance, in the season 8 episode that aired on January 17, 1986, showed her character suffering from bulimia. CBC News described her performance in the episode as a "series highpoint". In 1981, Plato appeared in the television special A Step in Time, which earned her a second Young Artist Award nomination. In 1983, she starred in the television film High School U.S.A. as Cara Ames, alongside Diff'rent Strokes co-star Todd Bridges, who played Otto Lipton. Although the film met with a mixed response from critics and viewers alike, it gained popularity at the time of its premiere, particularly for its cast. Plato attempted to establish herself as a serious actress, but found it difficult to achieve success. She had breast implants and modeled for a June 1989 Playboy pictorial. She also started taking roles in such B movies as Bikini Beach Race (1989) and Lethal Cowboy (1992). In 1990, she made a brief attempt at a musical career, sponsored by producer Howie Rice. She recorded six tracks with songwriter/producer Daniel Liston Keller at Paramount Studios in Hollywood, California, but the recordings were shelved and not released. In 1992, Plato starred in the video game Night Trap, becoming one of the first celebrities to appear in a video game. She was eager to work on the game, and Rob Fulop—one of the designers of Night Trap—said that he and Plato had enjoyed working together. She made little effort to hide the fact that the project was a step down compared to her previous career ventures. The game was a moderate success, but is considered a pioneering title because it was the first to use live actors. Night Trap received mixed to negative reviews upon release, and in retrospect has continued to polarize critics and audiences. It is best remembered for the controversy it created over the violence and sexuality that, along with that surrounding Mortal Kombat, eventually led to the creation of the Entertainment Software Rating Board (ESRB).
Toward the end of her career, Plato chose roles that were erotic; she appeared nude in Prime Suspect (1989) and Compelling Evidence (1995), and in the softcore erotic drama Different Strokes: The Story of Jack and Jill...and Jill (1998), the title of which was changed after filming in order to tie it to Plato's past. In the same year, following her appearance in the film, Plato appeared in a cover story of the lesbian lifestyle magazine Girlfriends. Plato's last works include Desperation Boulevard (1998), in which she appears as herself in a story seemingly based on her life; Silent Scream (1999), in which she appears as Emma Jones; and Pacino Is Missing (2002), which was released after her death, in which she appears as an attorney. In December 1983, Plato moved in with her boyfriend, rock guitarist Lanny Lambert. The couple married on April 24, 1984, and their only child, Tyler Edward Lambert, was born on July 2, 1984. When it was revealed that she was pregnant, she was written out of Diff'rent Strokes. Her co-star Conrad Bain revealed that she was happy about her baby, stating in an interview with People magazine: "She deliberately got pregnant while doing the series, when I spoke to her about it, she was enthusiastic about having done that... [saying that] 'When I get the baby, I will never be alone again.'" Plato separated from Lambert in January 1988, the same week her mother died of scleroderma. In desperation, she signed over power of attorney to an accountant who disappeared with the majority of her money, leaving her with less than $150,000. She claimed the accountant was never found nor prosecuted despite an exhaustive search, and that he had also stolen more than $11 million from other clients. During her March 1990 divorce, Plato lost custody of her son to Lambert and was given visitation rights. She then became engaged to Fred Potts, a filmmaker, but the romance ended. She was later married to actor and producer Scott Atkins (Scotty Gelt) in Vancouver for one month, but the marriage was annulled. Before her death, Plato was engaged to her manager Robert Menchaca, six years her junior, with whom she lived in a motor home in Navarre, Florida. On February 28, 1991, Plato entered a Las Vegas video store, produced a pellet gun, and demanded the money in the cash register. After she left with the money, the clerk called 9-1-1 and said, "I've just been robbed by the girl who played Kimberly on Diff'rent Strokes." Approximately fifteen minutes after the robbery, Plato returned to the scene and was immediately arrested. She had stolen $164. Entertainer Wayne Newton posted her $13,000 bail, and Plato was given five years' probation. She subsequently became a subject of the national debate surrounding troubled child stars, particularly given the difficulties of her Diff'rent Strokes co-stars Todd Bridges and Gary Coleman. In January 1992, Plato was arrested a second time, for forging a prescription for diazepam. She served thirty days in jail for violating the terms of her probation and immediately entered a drug rehabilitation program. Plato later moved to Las Vegas, Nevada, where she struggled with poverty and unemployment. At one point she worked at a dry-cleaning store, where customers reported being impressed by her lack of airs. On May 7, 1999, the day before she died, Plato appeared on The Howard Stern Show. She spoke about her life, discussing her financial problems and past run-ins with the law.
She admitted to being a recovering alcoholic and drug addict, but claimed she had been sober for more than ten years by that point and was not using any drugs, with the exception of prescribed painkillers due to the recent extraction of her wisdom teeth. Many callers to the show insulted Plato and questioned her sobriety, which angered and provoked her, and she defiantly offered to take a drug test on the air. Some callers, as well as host Howard Stern, came to Plato's defense, though Stern also referred to himself as "an enabler" and sarcastically offered Plato drugs. Although she allowed a hair to be cut for the test, Stern later claimed she asked for it back after the interview. On May 8, 1999, Plato and Menchaca were returning to California and stopped at Menchaca's mother's home in Moore, Oklahoma, for a Mother's Day visit. Later in the visit, Plato said that she felt unwell and took a few doses of a hydrocodone/acetaminophen painkiller (Lortab), along with the muscle-relaxant carisoprodol (Soma), and went to lie down with Menchaca inside her Winnebago motor home, which was parked outside the house. Upon waking up, Menchaca and the family discovered that Plato had died in her sleep – initially assumed to be an accidental overdose but later ruled a suicide based on Plato's long history of substance use. Some of Plato's friends, including her former Diff'rent Strokes co-star Todd Bridges, have publicly disagreed with the ruling. Plato's body was cremated and her ashes were scattered over the Pacific Ocean. In 2000, Fox broadcast a television movie based on Plato, titled After Diff'rent Strokes: When the Laughter Stopped. The film was focused on her life and work after the show, including her death. It featured actors who at the time were unknown, as well as Bridges, who made a cameo appearance. In 2006, NBC aired the television film Behind the Camera: The Unauthorized Story of Diff'rent Strokes, which was based on the lives of the child stars who had worked on the show. Bridges and Coleman appear at the end of the film standing near Plato's grave. On May 6, 2010, two days before the eleventh anniversary of Plato's death, her son Tyler died by suicide from a self-inflicted gunshot wound to the head. He was 25 years old. On November 7, 2019, on what would have been Plato's 55th birthday, Bridges commented on Twitter about their friendship, leaving a tribute to Plato: "You were the one person I could always talk to. You were one of my best friends. I will never forget you and love you forever. HAPPY BIRTHDAY Dana Plato R.I.P you are free my friend."
[ { "paragraph_id": 0, "text": "Dana Michelle Plato (née Strain; November 7, 1964 – May 8, 1999) was an American actress. An influential teen idol of the late 1970s and early 1980s, she was best known for playing the role of Kimberly Drummond on the NBC/ABC sitcom Diff'rent Strokes (1978–1986).", "title": "" }, { "paragraph_id": 1, "text": "Plato was born to a teen mother and was adopted as an infant. She was raised in the San Fernando Valley and was an accomplished figure skater before acting. Her acting career began with numerous commercial appearances, and her television debut came at the age of 10 with a brief appearance on the television series The Six Million Dollar Man (1975). Plato subsequently appeared in the horror films Exorcist II: The Heretic (1977) and Return to Boggy Creek (1977).", "title": "" }, { "paragraph_id": 2, "text": "Plato's breakthrough feature was the Academy Award-winning film California Suite (1978), in which she played Jenny Warren. She earned widespread recognition and acclaim for playing Kimberly Drummond on Diff'rent Strokes. The role also earned Plato nominations for a Young Artist Award for Best Young Actress in a Comedy Series and two TV Land Awards for Best Quintessential Non-Traditional Family. Following Diff'rent Strokes, she worked sporadically in independent films and B movies. Plato was married twice; she had a child in 1984 during her marriage to guitarist Lanny Lambert.", "title": "" }, { "paragraph_id": 3, "text": "Plato struggled with substance abuse for most of her life. She was arrested in 1991 for robbing a video store, and again the following year for forging a drug prescription. On May 8, 1999, at age 34, Plato was found dead in her motor home from an overdose of prescription drugs. Her death was initially considered accidental, but later ruled a suicide. Her personal life, in retrospect, has been described as a \"tragedy\".", "title": "" }, { "paragraph_id": 4, "text": "Dana Plato was born Dana Michelle Strain on November 7, 1964, in Maywood, California, to Linda Strain, a teenager who was already caring for an 18-month-old child. In June 1965, the seven-month-old Dana was adopted by Dean Plato, who owned a trucking company, and his wife Florine \"Kay\" Plato. She was raised in the San Fernando Valley. When she was three, her adoptive parents divorced and she lived with her mother.", "title": "Early life" }, { "paragraph_id": 5, "text": "At a very young age, Plato began attending auditions with her mother, and at seven years old had appeared in over 100 television commercials. Plato was also an accomplished figure skater. During her years on Diff'rent Strokes, Plato struggled with drug and alcohol problems; she admitted to drinking alcohol, using cannabis and cocaine, and suffering an overdose of diazepam when she was aged 14.", "title": "Early life" }, { "paragraph_id": 6, "text": "In 1995, during an appearance on The Marilyn Kagen Show alongside co-star Todd Bridges, she spoke of her childhood with her mother, stating: \"My mother made sure that I was normal. The only thing that she did, the mistake she made, was that she kept me in a plastic bubble. 
So, I didn't learn about reality and life skills.\" Kagen suggested that Plato may have been used as a free meal ticket, which Plato denied, explaining that her mother had raised her that way so that she would not become a prima donna.", "title": "Early life" }, { "paragraph_id": 7, "text": "Plato made her television acting debut at the age of 10 with a brief appearance on the ABC television show The Six Million Dollar Man. She then starred in the 1975 made-for-television film Beyond the Bermuda Triangle. Plato made her film debut at the age of 13, appearing as Sandra Phalor in the horror film Exorcist II: The Heretic (1977), for which she was uncredited, and also starred as Evie Joe in the horror film Return to Boggy Creek in the same year; both films were received negatively by critics. Better received was the family-comedy film California Suite (1978), in which Plato played Jenny Warren; the film was also a commercial success, and earned accolades from the Academy Awards and the Golden Globe Awards.", "title": "Career" }, { "paragraph_id": 8, "text": "When Plato made a brief appearance on The Gong Show, she was spotted by a producer who helped cast her as Kimberly Drummond—the older sister of adopted brothers Arnold and Willis Jackson—on the NBC sitcom Diff'rent Strokes. The series debuted in 1978 and became an immediate hit. Plato appeared regularly on the show throughout its run, notably top-billed for four years. She was nominated for a Young Artist Award for her work on the program, and also was part of two TV Land Award nominations given to its cast. In 1984, following the birth of her son Tyler, Plato was dismissed from her starring role due to both her pregnancy and struggles in her personal life, which producers felt would negatively impact their \"wholesome family comedy\". She made a one-episode appearance in season 8, episode 12 of The Love Boat. Thereafter, Plato made recurring appearances on Diff'rent Strokes from 1985 to 1986, the show's end; her final appearance, in the season 8 episode that aired on January 17, 1986, showed her character suffering from bulimia. CBC News described her performance in the episode as a \"series highpoint\".", "title": "Career" }, { "paragraph_id": 9, "text": "In 1981, Plato appeared in the television special A Step in Time, which earned her a second Young Artist Award nomination. In 1983, she starred in the television film High School U.S.A. as Cara Ames, alongside Diff'rent Strokes co-star Todd Bridges, who played Otto Lipton. Although the film met with a mixed response from critics and viewers alike, it gained popularity at the time of its premiere, particularly for its cast. Plato attempted to establish herself as a serious actress, but found it difficult to achieve success. She had breast implants and modeled for a June 1989 Playboy pictorial. She also started taking roles in such B movies as Bikini Beach Race (1989) and Lethal Cowboy (1992). In 1990, she made a brief attempt at a musical career, sponsored by producer Howie Rice. She recorded six tracks with songwriter/producer Daniel Liston Keller at Paramount Studios in Hollywood, California, but the recordings were shelved and not released.", "title": "Career" }, { "paragraph_id": 10, "text": "In 1992, Plato starred in the video game Night Trap, becoming one of the first celebrities to appear in a video game. She was eager to work on the game, and Rob Fulop—one of the designers of Night Trap—said that he and Plato had enjoyed working together.
She made little effort to hide the fact that the project was a step down compared to her previous career ventures. The game was a moderate success, but is considered a pioneering title because it was the first to use live actors. Night Trap received mixed to negative reviews upon release, and in retrospect has continued to polarize critics and audiences. It is best remembered for the controversy it created over the violence and sexuality that, along with that surrounding Mortal Kombat, eventually led to the creation of the Entertainment Software Rating Board (ESRB).", "title": "Career" }, { "paragraph_id": 11, "text": "Toward the end of her career, Plato chose roles that were erotic; she appeared nude in Prime Suspect (1989) and Compelling Evidence (1995), and in the softcore erotic drama Different Strokes: The Story of Jack and Jill...and Jill (1998), the title of which was changed after filming in order to tie it to Plato's past. In the same year, following her appearance in the film, Plato appeared in a cover story of the lesbian lifestyle magazine Girlfriends.", "title": "Career" }, { "paragraph_id": 12, "text": "Plato's last works include Desperation Boulevard (1998), in which she appears as herself in a story seemingly based on her life; Silent Scream (1999), in which she appears as Emma Jones; and Pacino Is Missing (2002), which was released after her death, in which she appears as an attorney.", "title": "Career" }, { "paragraph_id": 13, "text": "In December 1983, Plato moved in with her boyfriend, rock guitarist Lanny Lambert. The couple married on April 24, 1984, and their only child, Tyler Edward Lambert, was born on July 2, 1984. When it was revealed that she was pregnant, she was written out of Diff'rent Strokes. Her co-star Conrad Bain revealed that she was happy about her baby, stating in an interview with People magazine: \"She deliberately got pregnant while doing the series, when I spoke to her about it, she was enthusiastic about having done that... [saying that] 'When I get the baby, I will never be alone again.'\"", "title": "Personal life" }, { "paragraph_id": 14, "text": "Plato separated from Lambert in January 1988, the same week her mother died of scleroderma. In desperation, she signed over power of attorney to an accountant who disappeared with the majority of her money, leaving her with less than $150,000. She claimed the accountant was never found nor prosecuted despite an exhaustive search, and that he had also stolen more than $11 million from other clients. During her March 1990 divorce, Plato lost custody of her son to Lambert and was given visitation rights. She then became engaged to Fred Potts, a filmmaker, but the romance ended. She was later married to actor and producer Scott Atkins (Scotty Gelt) in Vancouver for one month, but the marriage was annulled. Before her death, Plato was engaged to her manager Robert Menchaca, six years her junior, with whom she lived in a motor home in Navarre, Florida.", "title": "Personal life" }, { "paragraph_id": 15, "text": "On February 28, 1991, Plato entered a Las Vegas video store, produced a pellet gun, and demanded the money in the cash register. After she left with the money, the clerk called 9-1-1 and said, \"I've just been robbed by the girl who played Kimberly on Diff'rent Strokes.\" Approximately fifteen minutes after the robbery, Plato returned to the scene and was immediately arrested. She had stolen $164. Entertainer Wayne Newton posted her $13,000 bail, and Plato was given five years' probation.
She subsequently became a subject of the national debate surrounding troubled child stars, particularly given the difficulties of her Diff'rent Strokes co-stars Todd Bridges and Gary Coleman.", "title": "Personal life" }, { "paragraph_id": 16, "text": "In January 1992, Plato was arrested a second time, for forging a prescription for diazepam. She served thirty days in jail for violating the terms of her probation and immediately entered a drug rehabilitation program. Plato later moved to Las Vegas, Nevada, where she struggled with poverty and unemployment. At one point she worked at a dry-cleaning store, where customers reported being impressed by her lack of airs.", "title": "Personal life" }, { "paragraph_id": 17, "text": "On May 7, 1999, the day before she died, Plato appeared on The Howard Stern Show. She spoke about her life, discussing her financial problems and past run-ins with the law. She admitted to being a recovering alcoholic and drug addict, but claimed she had been sober for more than ten years by that point and was not using any drugs, with the exception of prescribed painkillers due to the recent extraction of her wisdom teeth. Many callers to the show insulted Plato and questioned her sobriety, which angered and provoked her, and she defiantly offered to take a drug test on the air. Some callers, as well as host Howard Stern, came to Plato's defense, though Stern also referred to himself as \"an enabler\" and sarcastically offered Plato drugs. Although she allowed a hair to be cut for the test, Stern later claimed she asked for it back after the interview.", "title": "Personal life" }, { "paragraph_id": 18, "text": "On May 8, 1999, Plato and Menchaca were returning to California and stopped at Menchaca's mother's home in Moore, Oklahoma, for a Mother's Day visit. Later in the visit, Plato said that she felt unwell and took a few doses of a hydrocodone/acetaminophen painkiller (Lortab), along with the muscle-relaxant carisoprodol (Soma), and went to lie down with Menchaca inside her Winnebago motor home, which was parked outside the house. Upon waking up, Menchaca and the family discovered that Plato had died in her sleep – initially assumed to be an accidental overdose but later ruled a suicide based on Plato's long history of substance use. Some of Plato's friends, including her former Diff'rent Strokes co-star Todd Bridges, have publicly disagreed with the ruling. Plato's body was cremated and her ashes were scattered over the Pacific Ocean.", "title": "Death" }, { "paragraph_id": 19, "text": "In 2000, Fox broadcast a television movie based on Plato, titled After Diff'rent Strokes: When the Laughter Stopped. The film was focused on her life and work after the show, including her death. It featured actors who at the time were unknown, as well as Bridges, who made a cameo appearance. In 2006, NBC aired the television film Behind the Camera: The Unauthorized Story of Diff'rent Strokes, which was based on the lives of the child stars who had worked on the show. Bridges and Coleman appear at the end of the film standing near Plato's grave.", "title": "Death" }, { "paragraph_id": 20, "text": "On May 6, 2010, two days before the eleventh anniversary of Plato's death, her son Tyler died by suicide from a self-inflicted gunshot wound to the head.
He was 25 years old.", "title": "Death" }, { "paragraph_id": 21, "text": "On November 7, 2019, on what would have been Plato's 55th birthday, Bridges commented on Twitter about their friendship, leaving a tribute to Plato:", "title": "Death" }, { "paragraph_id": 22, "text": "\"You were the one person I could always talk to. You were one of my best friends. I will never forget you and love you forever. HAPPY BIRTHDAY Dana Plato R.I.P you are free my friend.\"", "title": "Death" } ]
Dana Michelle Plato was an American actress. An influential teen idol of the late 1970s and early 1980s, she was best known for playing the role of Kimberly Drummond on the NBC/ABC sitcom Diff'rent Strokes (1978–1986). Plato was born to a teen mother and was adopted as an infant. She was raised in the San Fernando Valley and was an accomplished figure skater before acting. Her acting career began with numerous commercial appearances, and her television debut came at the age of 10 with a brief appearance on the television series The Six Million Dollar Man (1975). Plato subsequently appeared in the horror films Exorcist II: The Heretic (1977) and Return to Boggy Creek (1977). Plato's breakthrough feature was the Academy Award-winning film California Suite (1978), in which she played Jenny Warren. She earned widespread recognition and acclaim for playing Kimberly Drummond on Diff'rent Strokes. The role also earned Plato nominations for a Young Artist Award for Best Young Actress in a Comedy Series and two TV Land Awards for Best Quintessential Non-Traditional Family. Following Diff'rent Strokes, she worked sporadically in independent films and B movies. Plato was married twice; she had a child in 1984 during her marriage to guitarist Lanny Lambert. Plato struggled with substance abuse for most of her life. She was arrested in 1991 for robbing a video store, and again the following year for forging a drug prescription. On May 8, 1999, at age 34, Plato was found dead in her motor home from an overdose of prescription drugs. Her death was initially considered accidental, but later ruled a suicide. Her personal life, in retrospect, has been described as a "tragedy".
2002-02-25T15:51:15Z
2023-12-18T03:02:09Z
[ "Template:Portal", "Template:Cite web", "Template:Cite journal", "Template:Dead link", "Template:Cite book", "Template:Cite episode", "Template:Use mdy dates", "Template:Nom", "Template:TCMDb name", "Template:Short description", "Template:Use American English", "Template:Cite news", "Template:IMDb name", "Template:Authority control", "Template:Infobox person", "Template:Blockquote", "Template:Reflist", "Template:Cite magazine", "Template:AllMovie name" ]
https://en.wikipedia.org/wiki/Dana_Plato
9,051
Drop kick
A drop kick is a type of kick in various codes of football. It involves a player intentionally dropping the ball and then kicking it either (different sports have different definitions) 'as it rises from the first bounce' (rugby) or 'as, or immediately after, it touches the ground' (gridiron football). Drop kicks are used as a method of restarting play and scoring points in rugby union and rugby league. Also, association football goalkeepers often return the ball to play with drop kicks. The kick was once in wide use in both Australian rules football and gridiron football, but is rarely used anymore by either sport. The drop kick technique in rugby codes is usually to hold the ball with one end pointing downwards in two hands above the kicking leg. The ball is dropped onto the ground in front of the kicking foot, which makes contact at the moment or fractionally after the ball touches the ground, called the half-volley. The kicking foot usually makes contact with the ball slightly on the instep. In a rugby union kick-off, or drop out, the kicker usually aims to kick the ball high but not a great distance, and so usually strikes the ball after it has started to bounce off the ground, so the contact is made close to the bottom of the ball. In rugby league, drop kicks are mandatory to restart play from the goal line (called a goal line drop-out) after the defending team is tackled or knocks on in the in-goal area or the defending team causes the ball to go dead or into touch-in-goal. Drop kicks are also mandatory to restart play from the 20 metre line after an unsuccessful penalty goal attempt goes dead or into touch-in-goal and to score a drop goal (sometimes known as a field goal) in open play, which is worth one point. Drop kicks are optional for a penalty kick to score a penalty goal (this being done rarely, as place kicks are generally used) and when kicking for touch (the sideline) from a penalty, although the option of a punt kick is usually taken instead. In rugby union, a drop kick is used for the kick-off and restarts and to score a drop goal (sometimes called a field goal). Originally, it was one of only two ways to score points, along with the place kick. Drop kicks are mandatory from the centre spot to start a half (a kick-off), from the centre spot to restart the game after points have been scored, to restart play from the 22-metre line (called a drop-out) after the ball is touched down or made dead in the in-goal area by the defending team when the attacking team kicked or took the ball into the in-goal area, and to score a drop goal (sometimes called a field goal) in open play, which is worth three points. Drop kicks are optional for a conversion kick after a try has been scored. The usage of drop kicks in rugby sevens is the same as in rugby union, except that drop kicks are used for all conversion attempts and for penalty kicks, both of which must be taken within 40 seconds of the try being scored or the award of the penalty. In both American and Canadian football, one method of scoring a field goal, fair-catch kick (American only), or extra point is by drop-kicking the football through the goal, although the technique is very rarely used in modern play. It contrasts with the punt, wherein the player kicks the ball without letting it hit the ground first, and the place kick, wherein the player kicks a stationary ball off the ground: "from placement". 
A drop kick is significantly more difficult; as Jim Thorpe once explained, "I regard the place kick as almost two to one safer than the drop kick in attempting a goal from the field." The drop kick was often used in early football as a surprise tactic. The ball was snapped or lateraled to a back, who faked a run or pass, then drop-kicked a field goal attempt. This method of scoring worked well in the 1920s and early 1930s, when the ball was rounder at the ends, similar to a modern rugby ball. Early football stars Thorpe, Charles Brickley, Frank Hudson, Paddy Driscoll, and Al Bloodgood were skilled drop-kickers; Driscoll in 1925 and Bloodgood in 1926 hold a tied NFL record of four drop kicked field goals in a single game. Driscoll's 55-yard drop kick in 1924 stood as the unofficial record for field goal range until Bert Rechichar kicked a 56-yard field goal (by placekick) in 1953. The ball was made more pointed at the ends in 1934; its creation is generally credited to Shorty Ray, a college football official at the time, and later the NFL's head of officiating. This made passing the ball easier, as was its intent, but made the drop kick obsolete, as the more pointed ball did not bounce up from the ground reliably. The drop kick was supplanted by the place kick, which cannot be attempted out of a formation generally used as a running or passing set. While it remains in the rules, the drop kick is seldom seen, and as explained below, is rarely effective when attempted. In Canadian football, there are no formal restrictions on the circumstances under which a drop or a place kick can be attempted. Before the NFL–AFL merger, the last successful drop kick in the NFL was executed in 1941 on an extra point by Ray McLean of the Chicago Bears, coming late in their 37–9 victory over the New York Giants in the NFL Championship Game at Chicago's Wrigley Field on December 21. It was the final point of the game, with the outcome already decided, and followed a fumble recovery and run for the final touchdown with under two minutes remaining. The last drop kick for a field goal in the NFL was more than four years earlier, a nine-yarder by player-coach Dutch Clark of the Detroit Lions in 1937. It was the initial score in a 16–7 home win over the Chicago Cardinals on September 19. Though it was not part of the NFL at the time, the All-America Football Conference (AAFC) saw its last successful drop kick in 1948, when Joe Vetrano of the San Francisco 49ers drop kicked an extra point after a muffed snap in a 31–28 home loss to the undefeated Cleveland Browns on November 28. To date, the only successful drop kick in the NFL since 1941 was by Doug Flutie, the backup quarterback of the New England Patriots, against the Miami Dolphins on January 1, 2006, for an extra point after a touchdown. Flutie had estimated "an 80 percent chance" of making the drop kick, which was called to give Flutie, 43 at the time, the opportunity to make a historic kick in his final NFL game; the drop kick was his last play in the NFL. Dallas Cowboys punter Mat McBriar attempted a maneuver similar to a drop kick during the 2010 Thanksgiving Day game after a botched punt attempt, but the ball bounced several times before the kick and the sequence of events is officially recorded as a fumble, followed by an illegal kick, with the fumble being recovered by the New Orleans Saints 29 yards downfield from the spot of the kick. The Saints declined the illegal kick penalty. 
Patriots kicker Stephen Gostkowski attempted an onside drop kick on a free kick after a safety against the Pittsburgh Steelers on October 30, 2011; it went out of bounds. Saints quarterback Drew Brees, a former teammate of Flutie's, attempted a drop kick on an extra point late in the fourth quarter of the 2012 Pro Bowl, but it fell short. On December 20, 2015, Buffalo Bills punter Colton Schmidt executed what is believed to be an unintentional drop kick after a botched punt against the Washington Redskins; because the Redskins recovered the kick, it was treated as a punt (and not as a field goal attempt, which would have pushed the ball back to the spot of the kick). Seattle Seahawks punter Michael Dickson drop kicked a kickoff from the 50-yard-line on September 17, 2018, against the Bears. The kick landed inside the five-yard-line and was returned to a spot less far out than a touchback would have been automatically returned to, making it a successful strategy. Dickson made an onside drop kick attempt at the end of the same game, which was unsuccessful (recovered by the Bears). Seahawks head coach Pete Carroll noted that he considered Dickson the team's backup kicker and would kick field goals and extra point attempts with the drop kick should there be an injury to placekicker Sebastian Janikowski. Following an injury to Janikowski, Dickson attempted several drop kickoffs on January 5, 2019, against the Dallas Cowboys, including an onside kick which was received normally as a fair catch. The drop kick became the subject of controversy in 2019, after Justin Tucker of the Baltimore Ravens used the maneuver on a kickoff late in a game against the Kansas City Chiefs. The drop kick was intended to force the Chiefs to fair catch the ball, preventing them from running out the clock. As 2:01 was showing on the game clock and a fair-caught kickoff does not run any time off the clock, it would force the Chiefs to run a play before the two-minute warning. Several weeks after the kick, league offices claimed the maneuver was illegal. Ravens head coach John Harbaugh disputed this, noting that they had cleared it with the NFL before using the drop kick and were not penalized by the in-game officials. The NFL's statement claimed that the ball was not kicked immediately after the bounce. Tucker made his approach and dropped the ball to the ground. He did not like the bounce, picked the ball up, retreated for a second approach, and dropped the ball a second time before kicking it. The NFL's statement suggested a false start should have been called on Tucker for not kicking the ball on the first drop. An article on CBS Sports stated that the NFL had made a midseason rule change banning the drop kick, but no statement from the NFL has ever confirmed this. It was later clarified that Tucker's drop kick action was illegal because he did not kick the ball "immediately" after the ball touched the ground. Rather, Tucker threw the ball upwards, allowed it to drop to the ground, then kicked the ball as it was falling from its apex after bouncing. San Francisco 49ers kicker Robbie Gould attempted an onside drop kick against the Philadelphia Eagles on October 4, 2020; the recovery was unsuccessful. Five weeks later, the Bears also attempted an unsuccessful onside drop kick against the Tennessee Titans on November 8. The last successful drop kick extra point in the NCAA was by Jason Millgan of Hartwick College on December 11, 1998, against St. Lawrence University.
Frosty Peters of Montana State College made 17 drop kicks in one game in 1924. On October 24, 2020, Iowa State University backup punter Corey Dunn attempted a surprise onside drop kickoff against Oklahoma State University. It was nearly successful, as the Cowboys failed to field the kick cleanly, but Oklahoma State recovered the ball in the ensuing scrum. In the Canadian game, the drop kick can be attempted at any time by either team. Any player on the kicking team behind the kicker, including the kicker, can recover the kick. When a drop kick goes out of bounds, possession on the next scrimmage goes to the non-kicking team. On September 8, 1974, Tom Wilkinson, quarterback for the Edmonton Eskimos, unsuccessfully attempted a drop kick field goal in the final seconds of a 24–2 romp over the Winnipeg Blue Bombers. During one game in 1993, Hamilton Tiger-Cats wide receiver Earl Winfield was unable to field a punt properly; in frustration, he kicked the ball out of bounds. The kick was considered a drop kick and led to a change of possession, with the punting team, Winnipeg, regaining possession of the ball. In the former AFL (North American Arena Football League), a drop-kicked extra point was worth two points, rather than one point, while a drop-kicked field goal counted for four points rather than three. The most recent conversion of a drop kick was by Geoff Boyer of the Pittsburgh Power on June 16, 2012; it was the first successful conversion in the AFL since 1997. In 2018, Maine Mammoths kicker Henry Nell converted a drop kick as a PAT against the Massachusetts Pirates in the National Arena League. In 2022, Salina Liberty kicker Jimmy Allen successfully converted three drop kick PAT attempts against the Topeka Tropics in a Champions Indoor Football game. Allen also converted a drop kick PAT for the Iowa Barnstormers of the IFL during a 2016 game against the Colorado Crush. Once the preferred method of conveying the ball over long distances, the drop kick has been superseded by the drop punt as a more accurate means of delivering the ball to a fellow player. Drop kicks were last regularly used in the 1970s, and by that time mostly for kicking in after a behind and very rarely in general play. AFL historian and statistician Col Hutchison believes that Sam Newman was the last player to kick a set-shot goal with a drop kick, in 1980, although goals in general play from a drop kick do occur on rare occasions, including subsequent goals by players such as Alastair Lynch and Darren Bewick. Hutchison says drop kicks were phased out of the game by Norm Smith in defence due to their risky nature; Ron Barassi, a player Smith coached, took this on board for his own coaching career, banning it for all but Barry Cable, who, according to Hutchison, was a "magnificent disposer of the ball".
[ { "paragraph_id": 0, "text": "A drop kick is a type of kick in various codes of football. It involves a player intentionally dropping the ball and then kicking it either (different sports have different definitions) 'as it rises from the first bounce' (rugby) or 'as, or immediately after, it touches the ground' (gridiron football).", "title": "" }, { "paragraph_id": 1, "text": "Drop kicks are used as a method of restarting play and scoring points in rugby union and rugby league. Also, association football goalkeepers often return the ball to play with drop kicks. The kick was once in wide use in both Australian rules football and gridiron football, but is rarely used anymore by either sport.", "title": "" }, { "paragraph_id": 2, "text": "The drop kick technique in rugby codes is usually to hold the ball with one end pointing downwards in two hands above the kicking leg. The ball is dropped onto the ground in front of the kicking foot, which makes contact at the moment or fractionally after the ball touches the ground, called the half-volley. The kicking foot usually makes contact with the ball slightly on the instep.", "title": "Rugby" }, { "paragraph_id": 3, "text": "In a rugby union kick-off, or drop out, the kicker usually aims to kick the ball high but not a great distance, and so usually strikes the ball after it has started to bounce off the ground, so the contact is made close to the bottom of the ball.", "title": "Rugby" }, { "paragraph_id": 4, "text": "In rugby league, drop kicks are mandatory to restart play from the goal line (called a goal line drop-out) after the defending team is tackled or knocks on in the in-goal area or the defending team causes the ball to go dead or into touch-in-goal. Drop kicks are also mandatory to restart play from the 20 metre line after an unsuccessful penalty goal attempt goes dead or into touch-in-goal and to score a drop goal (sometimes known as a field goal) in open play, which is worth one point.", "title": "Rugby" }, { "paragraph_id": 5, "text": "Drop kicks are optional for a penalty kick to score a penalty goal (this being done rarely, as place kicks are generally used) and when kicking for touch (the sideline) from a penalty, although the option of a punt kick is usually taken instead.", "title": "Rugby" }, { "paragraph_id": 6, "text": "In rugby union, a drop kick is used for the kick-off and restarts and to score a drop goal (sometimes called a field goal). 
Originally, it was one of only two ways to score points, along with the place kick.", "title": "Rugby" }, { "paragraph_id": 7, "text": "Drop kicks are mandatory from the centre spot to start a half (a kick-off), from the centre spot to restart the game after points have been scored, to restart play from the 22-metre line (called a drop-out) after the ball is touched down or made dead in the in-goal area by the defending team when the attacking team kicked or took the ball into the in-goal area, and to score a drop goal (sometimes called a field goal) in open play, which is worth three points.", "title": "Rugby" }, { "paragraph_id": 8, "text": "Drop kicks are optional for a conversion kick after a try has been scored.", "title": "Rugby" }, { "paragraph_id": 9, "text": "The usage of drop kicks in rugby sevens is the same as in rugby union, except that drop kicks are used for all conversion attempts and for penalty kicks, both of which must be taken within 40 seconds of the try being scored or the award of the penalty.", "title": "Rugby" }, { "paragraph_id": 10, "text": "In both American and Canadian football, one method of scoring a field goal, fair-catch kick (American only), or extra point is by drop-kicking the football through the goal, although the technique is very rarely used in modern play.", "title": "Gridiron football" }, { "paragraph_id": 11, "text": "It contrasts with the punt, wherein the player kicks the ball without letting it hit the ground first, and the place kick, wherein the player kicks a stationary ball off the ground: \"from placement\". A drop kick is significantly more difficult; as Jim Thorpe once explained, \"I regard the place kick as almost two to one safer than the drop kick in attempting a goal from the field.\"", "title": "Gridiron football" }, { "paragraph_id": 12, "text": "The drop kick was often used in early football as a surprise tactic. The ball was snapped or lateraled to a back, who faked a run or pass, then drop-kicked a field goal attempt. This method of scoring worked well in the 1920s and early 1930s, when the ball was rounder at the ends, similar to a modern rugby ball.", "title": "Gridiron football" }, { "paragraph_id": 13, "text": "Early football stars Thorpe, Charles Brickley, Frank Hudson, Paddy Driscoll, and Al Bloodgood were skilled drop-kickers; Driscoll in 1925 and Bloodgood in 1926 hold a tied NFL record of four drop kicked field goals in a single game. Driscoll's 55-yard drop kick in 1924 stood as the unofficial record for field goal range until Bert Rechichar kicked a 56-yard field goal (by placekick) in 1953.", "title": "Gridiron football" }, { "paragraph_id": 14, "text": "The ball was made more pointed at the ends in 1934; its creation is generally credited to Shorty Ray, a college football official at the time, and later the NFL's head of officiating. This made passing the ball easier, as was its intent, but made the drop kick obsolete, as the more pointed ball did not bounce up from the ground reliably. The drop kick was supplanted by the place kick, which cannot be attempted out of a formation generally used as a running or passing set. 
While it remains in the rules, the drop kick is seldom seen, and as explained below, is rarely effective when attempted.", "title": "Gridiron football" }, { "paragraph_id": 15, "text": "In Canadian football, there are no formal restrictions on the circumstances under which a drop or a place kick can be attempted.", "title": "Gridiron football" }, { "paragraph_id": 16, "text": "Before the NFL–AFL merger, the last successful drop kick in the NFL was executed in 1941 on an extra point by Ray McLean of the Chicago Bears, coming late in their 37–9 victory over the New York Giants in the NFL Championship Game at Chicago's Wrigley Field on December 21. It was the final point of the game, with the outcome already decided, and followed a fumble recovery and run for the final touchdown with under two minutes remaining. The last drop kick for a field goal in the NFL was more than four years earlier, a nine-yarder by player-coach Dutch Clark of the Detroit Lions in 1937. It was the initial score in a 16–7 home win over the Chicago Cardinals on September 19. Though it was not part of the NFL at the time, the All-America Football Conference (AAFC) saw its last successful drop kick in 1948, when Joe Vetrano of the San Francisco 49ers drop kicked an extra point after a muffed snap in a 31–28 home loss to the undefeated Cleveland Browns on November 28.", "title": "Gridiron football" }, { "paragraph_id": 17, "text": "To date, the only successful drop kick in the NFL since 1941 was by Doug Flutie, the backup quarterback of the New England Patriots, against the Miami Dolphins on January 1, 2006, for an extra point after a touchdown. Flutie had estimated \"an 80 percent chance\" of making the drop kick, which was called to give Flutie, 43 at the time, the opportunity to make a historic kick in his final NFL game; the drop kick was his last play in the NFL.", "title": "Gridiron football" }, { "paragraph_id": 18, "text": "Dallas Cowboys punter Mat McBriar attempted a maneuver similar to a drop kick during the 2010 Thanksgiving Day game after a botched punt attempt, but the ball bounced several times before the kick and the sequence of events is officially recorded as a fumble, followed by an illegal kick, with the fumble being recovered by the New Orleans Saints 29 yards downfield from the spot of the kick. The Saints declined the illegal kick penalty.", "title": "Gridiron football" }, { "paragraph_id": 19, "text": "Patriots kicker Stephen Gostkowski attempted an onside drop kick on a free kick after a safety against the Pittsburgh Steelers on October 30, 2011; it went out of bounds.", "title": "Gridiron football" }, { "paragraph_id": 20, "text": "Saints quarterback Drew Brees, a former teammate of Flutie's, attempted a drop kick on an extra point late in the fourth quarter of the 2012 Pro Bowl, but it fell short. On December 20, 2015, Buffalo Bills punter Colton Schmidt executed what is believed to be an unintentional drop kick after a botched punt against the Washington Redskins; because the Redskins recovered the kick, it was treated as a punt (and not as a field goal attempt, which would have pushed the ball back to the spot of the kick).", "title": "Gridiron football" }, { "paragraph_id": 21, "text": "Seattle Seahawks punter Michael Dickson drop kicked a kickoff from the 50-yard-line on September 17, 2018, against the Bears. The kick landed inside the five-yard-line and was returned to a spot less far out than a touchback would have been automatically returned to, making it a successful strategy. 
Dickson made an onside drop kick attempt at the end of the same game, which was unsuccessful (recovered by the Bears). Seahawks head coach Pete Carroll noted that he considered Dickson the team's backup kicker and would kick field goals and extra point attempts with the drop kick should there be an injury to placekicker Sebastian Janikowski. Following an injury to Janikowski, Dickson attempted several drop kickoffs on January 5, 2019, against the Dallas Cowboys, including an onside kick which was received normally as a fair catch.", "title": "Gridiron football" }, { "paragraph_id": 22, "text": "The drop kick became the subject of controversy in 2019, after Justin Tucker of the Baltimore Ravens used the maneuver on a kickoff late in a game against the Kansas City Chiefs. The drop kick was intended to force the Chiefs to fair catch the ball, preventing them from running out the clock. As 2:01 was showing on the game clock and a fair-caught kickoff does not run any time off the clock, it would force the Chiefs to run a play before the two-minute warning. Several weeks after the kick, league offices claimed the maneuver was illegal. Ravens head coach John Harbaugh disputed this, noting that they had cleared it with the NFL before using the drop kick and were not penalized by the in-game officials. The NFL's statement claimed that the ball was not kicked immediately after the bounce. Tucker made his approach and dropped the ball to the ground. He did not like the bounce, picked the ball up, retreated for a second approach, and dropped the ball a second time before kicking it. The NFL's statement suggested a false start should have been called on Tucker for not kicking the ball on the first drop. An article on CBS Sports stated that the NFL had made a midseason rule change banning the drop kick, but no statement from the NFL has ever confirmed this. It was later clarified that Tucker's drop kick action was illegal because he did not kick the ball \"immediately\" after the ball touched the ground. Rather, Tucker threw the ball upwards, allowed it to drop to the ground, then kicked the ball as it was falling from its apex after bouncing.", "title": "Gridiron football" }, { "paragraph_id": 23, "text": "San Francisco 49ers kicker Robbie Gould attempted an onside drop kick against the Philadelphia Eagles on October 4, 2020; the recovery was unsuccessful. Five weeks later, the Bears also attempted an unsuccessful onside drop kick against the Tennessee Titans on November 8.", "title": "Gridiron football" }, { "paragraph_id": 24, "text": "The last successful drop kick extra point in the NCAA was by Jason Millgan of Hartwick College on December 11, 1998, against St. Lawrence University. Frosty Peters of Montana State College made 17 drop kicks in one game in 1924.", "title": "Gridiron football" }, { "paragraph_id": 25, "text": "On October 24, 2020, Iowa State University backup punter Corey Dunn attempted a surprise onside drop kickoff against Oklahoma State University. It was nearly successful, as the Cowboys failed to field the kick cleanly, but Oklahoma State recovered the ball in the ensuing scrum.", "title": "Gridiron football" }, { "paragraph_id": 26, "text": "In the Canadian game, the drop kick can be attempted at any time by either team. Any player on the kicking team behind the kicker, including the kicker, can recover the kick.
When a drop kick goes out of bounds, possession on the next scrimmage goes to the non-kicking team.", "title": "Gridiron football" }, { "paragraph_id": 27, "text": "On September 8, 1974, Tom Wilkinson, quarterback for the Edmonton Eskimos, unsuccessfully attempted a drop kick field goal in the final seconds of a 24–2 romp over the Winnipeg Blue Bombers.", "title": "Gridiron football" }, { "paragraph_id": 28, "text": "During one game in 1993, Hamilton Tiger-Cats wide receiver Earl Winfield was unable to field a punt properly; in frustration, he kicked the ball out of bounds. The kick was considered a drop kick and led to a change of possession, with the punting team, Winnipeg, regaining possession of the ball.", "title": "Gridiron football" }, { "paragraph_id": 29, "text": "In the former AFL (North American Arena Football League), a drop-kicked extra point was worth two points, rather than one point, while a drop-kicked field goal counted for four points rather than three. The most recent conversion of a drop kick was by Geoff Boyer of the Pittsburgh Power on June 16, 2012; it was the first successful conversion in the AFL since 1997. In 2018, Maine Mammoths kicker Henry Nell converted a drop kick as a PAT against the Massachusetts Pirates in the National Arena League.", "title": "Gridiron football" }, { "paragraph_id": 30, "text": "In 2022, Salina Liberty kicker Jimmy Allen successfully converted three drop kick PAT attempts against the Topeka Tropics in a Champions Indoor Football game. Allen also converted a drop kick PAT for the Iowa Barnstormers of the IFL during a 2016 game against the Colorado Crush.", "title": "Gridiron football" }, { "paragraph_id": 31, "text": "Once the preferred method of conveying the ball over long distances, the drop kick has been superseded by the drop punt as a more accurate means of delivering the ball to a fellow player. Drop kicks were last regularly used in the 1970s, and by that time mostly for kicking in after a behind and very rarely in general play. AFL historian and statistician Col Hutchison believes that Sam Newman was the last player to kick a set-shot goal with a drop kick, in 1980, although goals in general play from a drop kick do occur on rare occasions, including subsequent goals by players such as Alastair Lynch and Darren Bewick. Hutchison says drop kicks were phased out of the game by Norm Smith in defence due to their risky nature; Ron Barassi, a player Smith coached, took this on board for his own coaching career, banning it for all but Barry Cable, who, according to Hutchison, was a \"magnificent disposer of the ball\".", "title": "Australian rules football" } ]
A drop kick is a type of kick in various codes of football. It involves a player intentionally dropping the ball and then kicking it either 'as it rises from the first bounce' (rugby) or 'as, or immediately after, it touches the ground'. Drop kicks are used as a method of restarting play and scoring points in rugby union and rugby league. Also, association football goalkeepers often return the ball to play with drop kicks. The kick was once in wide use in both Australian rules football and gridiron football, but is rarely used anymore by either sport.
2001-12-31T21:43:58Z
2023-12-22T02:59:10Z
[ "Template:Webarchive", "Template:Main", "Template:Cite web", "Template:Cite book", "Template:Cite journal", "Template:Short description", "Template:About", "Template:Australian rules football terminology", "Template:Reflist", "Template:Cite news", "Template:Gridiron football plays", "Template:See also", "Template:Nfly", "Template:Unreferenced section", "Template:Open access" ]
https://en.wikipedia.org/wiki/Drop_kick
9,053
Diaeresis
Diaeresis (dieresis, diæresis, diëresis) may refer to:
[ { "paragraph_id": 0, "text": "Diaeresis (dieresis, diæresis, diëresis) may refer to:", "title": "" } ]
Diaeresis may refer to: Diaeresis (prosody), pronunciation of vowels in a diphthong separately, or the division made in a line of poetry when the end of a foot coincides with the end of a word; Diaeresis (linguistics), or hiatus, the separation of adjacent vowels into syllables, not separated by a consonant or pause and not merged into a diphthong; Diaeresis (diacritic), a diacritic consisting of two side-by-side dots that marks disyllabicity; Diaeresis (computing), the two-dot diacritic used in Unicode
2001-09-13T17:37:30Z
2023-11-09T19:45:02Z
[ "Template:Distinguish", "Template:Wiktionary", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Diaeresis
9,055
Derry
Derry, officially Londonderry, is the largest city in County Londonderry, the second-largest in Northern Ireland and the fifth-largest on the island of Ireland. The old walled city lies on the west bank of the River Foyle, which is spanned by two road bridges and one footbridge. The city now covers both banks (Cityside on the west and Waterside on the east). The population of the city was 85,279 at the 2021 census, while the Derry Urban Area had a population of 105,066 in 2011. The district administered by Derry City and Strabane District Council contains both Londonderry Port and City of Derry Airport. Derry is close to the border with County Donegal, with which it has had a close link for many centuries. The person traditionally seen as the founder of the original Derry is Saint Colmcille, a holy man from Tír Chonaill, the old name for almost all of modern County Donegal, of which the west bank of the Foyle was a part before 1610. In 2013, Derry was the inaugural UK City of Culture, having been awarded the title in 2010. Despite the official name, the city is also commonly known as Derry, which is an anglicisation of the Irish Daire or Doire, and translates as 'oak-grove/oak-wood'. The name derives from the settlement's earliest references, Daire Calgaich ('oak-grove of Calgach'). The name was changed from Derry in 1613 during the Plantation of Ulster to reflect the establishment of the city by the London guilds. Derry has been used in the names of the local government district and council since 1984, when the council changed its name from Londonderry City Council to Derry City Council. This also changed the name of the district, which had been created in 1973 and included both the city and surrounding rural areas. In the 2015 local government reform, the district was merged with the Strabane district to form the Derry City and Strabane district, with the councils likewise merged. According to the city's Royal Charter of 10 April 1662, the official name is Londonderry. This was reaffirmed in a High Court decision in 2007. The 2007 court case arose because Derry City Council wanted clarification on whether the 1984 name change of the council and district had changed the official name of the city and what the procedure would be to effect a name change. The court clarified that Londonderry remained the official name and that the correct procedure to change the name would be via a petition to the Privy Council. Derry City Council afterwards began this process, and was involved in conducting an equality impact assessment (EQIA). Firstly, it held an opinion poll of district residents in 2009, which reported that 75% of Catholics and 77% of Nationalists found the proposed change acceptable, compared to 6% of Protestants and 8% of Unionists. The EQIA then held two consultative forums, and solicited comments from the general public on whether or not the city should have its name changed to Derry. A total of 12,136 comments were received, of which 3,108 were broadly in favour of the proposal, and 9,028 opposed it. On 23 July 2015, the council voted in favour of a motion to change the official name of the city to Derry and to write to Mark H. Durkan, the Northern Irish Minister for the Environment, to ask how the change could be effected.
The name Derry is preferred by nationalists and is broadly used throughout Northern Ireland's Catholic community, as well as that of the Republic of Ireland, whereas many unionists prefer Londonderry; however, in everyday conversation Derry is used by most Protestant residents of the city. Linguist Kevin McCafferty argues that "It is not, strictly speaking, correct that Northern Ireland Catholics call it Derry, while Protestants use the Londonderry form, although this pattern has become more common locally since the mid-1980s, when the city council changed its name by dropping the prefix". In McCafferty's survey of language use in the city, "only very few interviewees—all Protestants—use the official form".
Apart from the name of the local council, the city is usually known as Londonderry in official use within the UK. In the Republic of Ireland, the city and county are almost always referred to as Derry, on maps, in the media and in conversation. In April 2009, however, the Republic of Ireland's Minister for Foreign Affairs, Micheál Martin, announced that Irish passport holders who were born there could record either Derry or Londonderry as their place of birth. Whereas official road signs in the Republic use the name Derry, those in Northern Ireland bear Londonderry (sometimes abbreviated to L'derry), although some of these have been defaced, with the reference to London obscured. Usage varies among local organisations, with both names being used. Examples are City of Derry Airport, City of Derry Rugby Club, Derry City FC and the Protestant Apprentice Boys of Derry, as opposed to Londonderry Port, Londonderry YMCA Rugby Club and Londonderry Chamber of Commerce. The bishopric has always remained that of Derry, both in the (Protestant, formerly established) Church of Ireland (now combined with the bishopric of Raphoe) and in the Roman Catholic Church. Most companies within the city choose local area names such as Pennyburn, Rosemount or Foyle (from the River Foyle) to avoid alienating the other community. Londonderry railway station is often referred to as Waterside railway station within the city, but is called Derry/Londonderry at other stations. The council changed the name of the local government district covering the city to Derry on 7 May 1984, consequently renaming itself Derry City Council. This did not change the name of the city, although the city is coterminous with the district, and in law the city council is also the Corporation of Londonderry or, more formally, the Mayor, Aldermen and Citizens of the City of Londonderry. The form Londonderry is used for the post town by the Royal Mail; however, use of Derry will still ensure delivery.
The city is also nicknamed "the Maiden City" because its walls were never breached despite being besieged on three separate occasions in the 17th century, the most notable being the Siege of Derry of 1688–1689. It was also nicknamed "Stroke City" by local broadcaster Gerry Anderson, owing to the politically correct use by some of the dual name Derry/Londonderry (which has itself been used by BBC Television). A later addition to the landscape has been the erection of several large stone columns on main roads into the city welcoming drivers, euphemistically, to 'the Walled City'.
Derry is a common place name in Ireland, with at least six towns bearing that name and at least a further 79 places. The word Derry often forms part of the place name, for example, Derrybeg, Derryboy, Derrylea and Derrymore.
Londonderry, Yorkshire, near the Yorkshire Dales, was named for the Marquesses of Londonderry, as is Londonderry Island off Tierra del Fuego in Chile. In the United States, twin towns in New Hampshire called Derry and Londonderry lie not far from Londonderry, Vermont, with additional namesakes in Derry, Pennsylvania, and Londonderry, Ohio, and in Canada in Londonderry, Nova Scotia, and Londonderry, Edmonton, Alberta. There is also Londonderry, New South Wales, and the associated Londonderry electorate.
Derry is the only remaining completely intact walled city in Ireland, and one of the finest examples of a walled city in Europe. The walls constitute the largest monument in State care in Northern Ireland and, as part of the last walled city to be built in Europe, are among the most complete and spectacular anywhere.
The Walls were built in 1613–1619 by The Honourable The Irish Society as defences for early 17th-century settlers from England and Scotland. The Walls, which are approximately one mile (1.5 kilometres) in circumference and which vary in height and width between 3.7 and 10.7 metres (12 and 35 feet), are completely intact and form a walkway around the inner city. They provide a unique promenade from which to view the layout of the original town, which still preserves its Renaissance-style street plan. The four original gates to the Walled City are Bishop's Gate, Ferryquay Gate, Butcher Gate and Shipquay Gate. Three further gates were added later – Magazine Gate, Castle Gate and New Gate – making seven gates in total. The architect was Peter Benson, a London-born builder, who was rewarded with several grants of land. Derry is one of the few cities in Europe that never saw its fortifications breached, withstanding several sieges, including the famous Siege of Derry in 1689, which lasted 105 days; hence the city's nickname, the Maiden City.
Derry is one of the oldest continuously inhabited places in Ireland. The earliest historical references date to the 6th century, when a monastery was founded there by St Columba or Colmcille, a famous saint from what is now County Donegal, but people had been living in the vicinity for thousands of years before that.
Before leaving Ireland to spread Christianity elsewhere, Colmcille founded a monastery at Derry (then called Doire Calgach) on the west bank of the Foyle. According to oral and documented history, the site was granted to Colmcille by a local king. The monastery then remained in the hands of the federation of Columban churches, who regarded Colmcille as their spiritual mentor. The year 546 is often cited as the date the original settlement was founded; however, it is now accepted by historians that this was an erroneous date assigned by medieval chroniclers. It is accepted that between the 6th and the 11th centuries Derry was known primarily as a monastic settlement.
The town became strategically more significant during the Tudor conquest of Ireland and came under frequent attack. During O'Doherty's Rebellion in 1608 it was attacked by Sir Cahir O'Doherty, Irish chieftain of Inishowen, who burnt much of the town and killed the governor George Paulet. The soldier and statesman Sir Henry Docwra made vigorous efforts to develop the town, earning the reputation of being "the founder of Derry", but he was accused of failing to prevent the O'Doherty attack and returned to England.
What became the City of Derry was part of the relatively new County Donegal up until 1610.
In that year, the west bank of the future city was transferred by the English Crown to The Honourable The Irish Society and was combined with County Coleraine, part of County Antrim and a large portion of County Tyrone to form County Londonderry. Planters organised by London livery companies through The Honourable The Irish Society arrived in the 17th century as part of the Plantation of Ulster, and rebuilt the town with high walls to defend it from Irish insurgents who opposed the plantation. The aim was to settle Ulster with a population supportive of the Crown. The town was then renamed "Londonderry".
This was the first planned city in Ireland: it was begun in 1613, with the walls being completed in 1619, at a cost of £10,757. The central diamond within a walled city with four gates was thought to be a good design for defence. The grid pattern chosen was subsequently much copied in the colonies of British North America. The charter initially defined the city as extending three Irish miles (about 6.1 km) from the centre.
The modern city preserves the 17th-century layout of four main streets radiating from a central Diamond to four gateways – Bishop's Gate, Ferryquay Gate, Shipquay Gate and Butcher's Gate. The city's oldest surviving building was also constructed at this time: the 1633 Plantation Gothic cathedral of St Columb. In the porch of the cathedral is a stone that records completion with the inscription: "If stones could speake, then London's prayse should sound, Who built this church and cittie from the grounde."
During the 1640s, the city suffered in the Wars of the Three Kingdoms, which began with the Irish Rebellion of 1641, when the Gaelic Irish insurgents made a failed attack on the city. In 1649 the city and its garrison, which supported the republican Parliament in London, were besieged by Scottish Presbyterian forces loyal to King Charles I. The Parliamentarians besieged in Derry were relieved by a strange alliance of Roundhead troops under George Monck and the Irish Catholic general Owen Roe O'Neill. These temporary allies were soon fighting each other again, however, after the landing in Ireland of the New Model Army in 1649. The war in Ulster was finally brought to an end when the Parliamentarians crushed the Irish Catholic Ulster army at the Battle of Scarrifholis, near Letterkenny in nearby County Donegal, in 1650.
During the Glorious Revolution, only Derry and nearby Enniskillen had a Protestant garrison by November 1688. An army of around 1,200 men, mostly "Redshanks" (Highlanders), under Alexander MacDonnell, 3rd Earl of Antrim, was slowly organised (they set out in the week William of Orange landed in England). When they arrived on 7 December 1688 the gates were closed against them and the Siege of Derry began. In April 1689, King James came to the city and summoned it to surrender. The King was rebuffed, and the siege lasted until the end of July with the arrival of a relief ship.
The city was rebuilt in the 18th century, and many of its fine Georgian-style houses still survive. The city's first bridge across the River Foyle was built in 1790. During the 18th and 19th centuries, the port became an important embarkation point for Irish emigrants setting out for North America. Also during the 19th century, it became a destination for migrants fleeing areas more severely affected by the Great Famine. One of the most notable shipping lines was the McCorkell Line, operated by Wm. McCorkell & Co. Ltd. from 1778.
The McCorkell Line's most famous ship was the Minnehaha, known as the "Green Yacht from Derry".
During World War I, the city contributed over 5,000 men to the British Army from Catholic and Protestant families.
During the Irish War of Independence, the area was rocked by sectarian violence, partly prompted by the guerrilla war raging between the Irish Republican Army and British forces, but also influenced by economic and social pressures. By mid-1920 there was severe sectarian rioting in the city. Many people died, and many Catholics and Protestants were expelled from their homes during this communal unrest. After a week's violence, a truce was negotiated by local politicians on both unionist and republican sides. In 1921, following the Anglo-Irish Treaty and the Partition of Ireland, the city unexpectedly became a 'border city', separated from much of its traditional economic hinterland in County Donegal.
During World War II, the city played an important part in the Battle of the Atlantic. Ships from the Royal Navy, the Royal Canadian Navy, and other Allied navies were stationed in the city, and the United States military established a base. Over 20,000 Royal Navy, 10,000 Royal Canadian Navy, and 6,000 United States Navy personnel were stationed in the city during the war. The establishment of the American presence in the city was the result of a secret agreement between the Americans and the British before the Americans entered the war. It was the first American naval base in Europe and the terminal for American convoys en route to Europe. The reason for such a high degree of military and naval activity was self-evident: Derry was the United Kingdom's westernmost port, and indeed the westernmost Allied port in Europe. Derry was thus a crucial jumping-off point, together with Glasgow and Liverpool, for the shipping convoys that ran between Europe and North America. The large numbers of military personnel in Derry substantially altered the character of the city, bringing outside colour to the local area, as well as a degree of cosmopolitan and economic buoyancy during these years. Several airfields were built in the outlying regions of the city at this time: Maydown, Eglinton and Ballykelly. RAF Eglinton went on to become City of Derry Airport.
The city contributed a significant number of men to the war effort throughout the services, most notably the 500 men in the 9th (Londonderry) Heavy Anti-Aircraft Regiment, known as the 'Derry Boys'. This regiment served in North Africa, the Sudan, Italy and mainland UK. Many others served in the Merchant Navy, taking part in the convoys that supplied the UK and Russia during the war. The border location of the city, and the influx of trade from the military convoys, allowed significant smuggling operations to develop in the city.
At the conclusion of the Second World War, some 60 U-boats of the German Kriegsmarine eventually ended up in the city's harbour at Lisahally after their surrender. The initial surrender was attended by Admiral Sir Max Horton, Commander-in-Chief of the Western Approaches, and Sir Basil Brooke, third Prime Minister of Northern Ireland.
The city languished after the Second World War, with unemployment and development stagnating. A large campaign, led by the University for Derry Committee, to have Northern Ireland's second university located in the city ended in failure.
Derry was a focal point for the nascent civil rights movement in Northern Ireland.
Catholics were discriminated against under the Unionist government in Northern Ireland, both politically and economically. In the late 1960s the city became the flashpoint of disputes about institutional gerrymandering. Political scientist John Whyte explains: All the accusations of gerrymandering, practically all the complaints about housing and regional policy, and a disproportionate amount of the charges about public and private employment come from this area. The area – which consisted of Counties Tyrone and Fermanagh, Londonderry County Borough, and portions of Counties Londonderry and Armagh – had less than a quarter of the total population of Northern Ireland yet generated not far short of three-quarters of the complaints of discrimination... The unionist government must bear its share of responsibility. It put through the original gerrymander which underpinned so many of the subsequent malpractices, and then, despite repeated protests, did nothing to stop those malpractices continuing. The most serious charge against the Northern Ireland government is not that it was directly responsible for widespread discrimination, but that it allowed discrimination on such a scale over a substantial segment of Northern Ireland.
A civil rights demonstration in 1968 led by the Northern Ireland Civil Rights Association was banned by the Government and forcibly blocked by the Royal Ulster Constabulary. The events that followed the August 1969 Apprentice Boys parade resulted in the Battle of the Bogside, in which Catholic rioters fought the police; the widespread civil disorder that followed across Northern Ireland is often dated as the starting point of the Troubles. On Sunday 30 January 1972, 13 unarmed civilians were shot dead by British paratroopers during a civil rights march in the Bogside area. Another 13 were wounded and one further man later died of his wounds. This event came to be known as Bloody Sunday.
The conflict which became known as the Troubles is widely regarded as having started in Derry with the Battle of the Bogside. The Civil Rights Movement had also been very active in the city. In the early 1970s, the city was heavily militarised and there was widespread civil unrest. Several districts in the city constructed barricades to control access and prevent the forces of the state from entering. Violence eased towards the end of the Troubles in the late 1980s and early 1990s. Irish journalist Ed Moloney claims in A Secret History of the IRA that republican leaders there negotiated a de facto ceasefire in the city as early as 1991. Whether or not this is true, the city did see less bloodshed by this time than Belfast or other localities.
The city was visited by an orca in November 1977, at the height of the Troubles; he was dubbed Dopey Dick by the thousands who came from miles around to see him.
From 1613 the city was governed by the Londonderry Corporation. In 1898 this became Londonderry County Borough Council, which governed until 1969, when administration passed to the unelected Londonderry Development Commission. In 1973 a new district council with boundaries extending to the rural south-west was established under the name Londonderry City Council, renamed Derry City Council in 1984, consisting of five electoral areas: Cityside, Northland, Rural, Shantallow and Waterside. The council of 30 members was re-elected every four years. The council merged with Strabane District Council in April 2015 under local government reorganisation to become Derry City and Strabane District Council.
The councillors elected in 2019 for the city are:
The devices on the city's arms are a skeleton and a three-towered castle on a black field, with the "chief" or top third of the shield showing the arms of the City of London: a red cross and sword on white. In the centre of the cross is a gold harp. In unofficial use the harp sometimes appears above the arms as a crest. The arms were confirmed by Daniel Molyneux, the Ulster King of Arms, in 1613, following the town's incorporation. Molyneux's notes state that the original arms of Derry were "the picture of death (or a skeleton) sitting on a mossie ston and in the dexter point a castle". To this design he added, at the request of the new mayor, "a chief, the armes of London". Molyneux goes on to state that the skeleton is symbolic of Derry's ruin at the hands of the Irish rebel Cahir O'Doherty, and that the silver castle represents its renewal through the efforts of the London guilds: "[Derry] hath since bene (as it were) raysed from the dead by the worthy undertakinge of the Ho'ble Cittie of London, in memorie whereof it is hence forth called and knowen by the name of London Derrie."
Local legend offers different theories as to the origin of the skeleton. One identifies it as Walter de Burgh, who was starved to death in the Earl of Ulster's dungeons in 1332. Another identifies it as Cahir O'Doherty himself, who was killed in a skirmish near Kilmacrennan in 1608 (but was popularly believed to have wasted away while sequestered in his castle at Buncrana). In the days of gerrymandering and anti-Catholic discrimination, Derry's Catholics often joked darkly that the skeleton was a Catholic waiting for a job and a council house. However, a report commissioned by the city council in 1979 established that there was no basis for any of the popular theories, and that the skeleton "[is] purely symbolic and does not refer to any identifiable person".
The 1613 arms depicted a harp in the centre of the cross, but this was omitted from later depictions of the city arms, and from the 1952 letters patent confirming the arms to the Londonderry Corporation. In 2002 Derry City Council applied to the College of Arms to have the harp restored, and Garter and Norroy & Ulster Kings of Arms issued letters patent to that effect in 2003, having accepted the 17th-century evidence. The motto attached to the coat of arms reads in Latin, "Vita, Veritas, Victoria", which translates into English as "Life, Truth, Victory".
Derry is characterised by its distinctively hilly topography. The River Foyle forms a deep valley as it flows through the city, making Derry a place of very steep streets and sudden, startling views. The original walled city of Londonderry lies on a hill on the west bank of the River Foyle. In the past, the river branched and enclosed this wooded hill as an island; over the centuries, however, the western branch of the river dried up and became a low-lying and boggy district that is now called the Bogside.
Today, modern Derry extends considerably north and west of the city walls and east of the river. The half of the city on the west of the Foyle is known as the Cityside and the area east is called the Waterside. The Cityside and Waterside are connected by the Craigavon Bridge and Foyle Bridge, and by a footbridge in the centre of the city called the Peace Bridge. The district also extends into rural areas to the southeast of the city.
This much larger city, however, remains characterised by the often extremely steep hills that form much of its terrain on both sides of the river. A notable exception lies on the northeastern edge of the city, on the shores of Lough Foyle, where large expanses of sea and mudflats were reclaimed in the middle of the 19th century. Today, these sloblands are protected from the sea by miles of sea walls and dikes. The area is an internationally important bird sanctuary, ranked among the top 30 wetland sites in the UK. Other important nature reserves lie at Ness Country Park, 10 miles (16 kilometres) east of Derry, and at Prehen Wood, within the city's south-eastern suburbs.
Derry has, like most of Ireland, a temperate maritime climate (Cfb) according to the Köppen climate classification system. The nearest official Met Office weather station for which climate data is available is Carmoney, just west of City of Derry Airport and about five miles (eight kilometres) northeast of the city centre. However, observations there ceased in 2004, and the nearest weather station is currently Ballykelly, some 12 miles (19 kilometres) to the east-northeast. Typically, 27 nights of the year will report an air frost at Ballykelly, and at least 1 mm of precipitation will be reported on 170 days (1981–2010 averages). The lowest temperature recorded at Carmoney was −11.0 °C (12.2 °F) on 27 December 1995.
Derry Urban Area (DUA), including the city and the neighbouring settlements of Culmore, Newbuildings and Strathfoyle, is classified as a city by the Northern Ireland Statistics and Research Agency (NISRA) since its population exceeds 75,000. The mid-2006 population estimate for the wider Derry City Council area was 107,300. Population growth in 2005/06 was driven by natural change, with net out-migration of approximately 100 people. The city was one of the few in Ireland to experience an increase in population during the Great Famine, as migrants came to it from other, more heavily affected areas.
On census day (27 March 2011) there were 105,066 people living in Derry Urban Area. Of these, 27% were aged under 16 years and 14% were aged 60 and over; 49% of the population were male and 51% were female; 75% were from a Roman Catholic background and 23% (up three percentage points from 2001) were from a Protestant background.
On census day (21 March 2021) there were 85,279 people living in Derry City; of these, 77.88% (66,413) were from a Catholic background, 16.98% (14,481) were from a Protestant or other Christian (including Christian-related) background, 1.24% had another religious background and 3.9% had no religion. 60.73% of individuals identified as Irish only, 13.18% as British only and 16.12% as Northern Irish only.
Concerns have been raised by both communities over the increasingly divided nature of the city. There were about 17,000 Protestants on the west bank of the River Foyle in 1971. The proportion declined rapidly during the 1970s; the 2011 census recorded 3,169 Protestants on the west bank, compared to 54,976 Catholics, and it is feared that the city could become permanently divided. However, concerted efforts have been made by the local community, church and political leaders from both traditions to redress the problem. A conference to bring together key actors and promote tolerance was held in October 2006. Ken Good, the Church of Ireland Bishop of Derry and Raphoe, said he was happy living on the cityside. "I feel part of it.
It is my city and I want to encourage other Protestants to feel exactly the same", he said. Support for Protestants in the district has been strong from the SDLP politician Helen Quigley, who formerly served as mayor of Derry. She made inclusion and tolerance key themes of her mayoralty. Cllr Quigley said it was time for "everyone to take a stand to stop the scourge of sectarian and other assaults in the city."
The economy of the district was based significantly on the textile industry until relatively recently. For many years women were commonly the sole wage earners, working in the shirt factories, while the men in comparison had high levels of unemployment; this led to significant male emigration. The history of shirt making in the city dates to 1831; it is said to have been started by William Scott and his family, who first exported shirts to Glasgow. Within 50 years, shirt making in the city was the most prolific in the UK, with garments being exported all over the world. The industry was so well known that it received a mention in Das Kapital by Karl Marx, in a discussion of the factory system: The shirt factory of Messrs. Tillie at Londonderry, which employs 1,000 operatives in the factory itself, and 9,000 people spread up and down the country and working in their own houses. The industry reached its peak in the 1920s, employing around 18,000 people. In modern times, however, the textile industry declined, due largely to lower Asian wages.
A long-term foreign employer in the area is Du Pont, which has been based at Maydown since 1958; the site was its first European production facility. Neoprene was originally manufactured at Maydown, subsequently followed by Hypalon. More recently, Lycra and Kevlar production units were active. Thanks to worldwide demand for Kevlar, which is made at the plant, the facility undertook a £40 million upgrade to expand its global Kevlar production.
As of 2002, the three largest private-sector employers were American firms. Economic successes have included call centres and a large investment by Seagate, which has operated a factory in the Springtown Industrial Estate since 1993. As of 2019, Seagate was employing approximately 1,400 people in Derry. A controversial new employer in the area was Raytheon Systems Limited, a software division of the American defence contractor, which was set up in Derry in 1999. Although some of the local people welcomed the jobs boost, others in the area objected to the jobs being provided by a firm heavily involved in the arms trade. Following four years of protest by the Foyle Ethical Investment Campaign, in 2004 Derry City Council passed a motion declaring the district "a 'no-go' area for the arms trade", and in 2006 its offices were briefly occupied by anti-war protestors who became known as the Raytheon 9. In 2009, the company announced that it was not renewing its lease when it expired in 2010 and was looking for a new location for its operations.
Other significant multinational employers in the region include Firstsource of India, INVISTA, Stream International, Perfecseal, NTL, Northbrook Technology of the United States, Arntz Belting and Invision Software of Germany, and Homeloan Management of the UK. Major local business employers include Desmonds, Northern Ireland's largest privately owned company, manufacturing and sourcing garments; E&I Engineering; St Brendan's Irish Cream Liqueur; and McCambridge Duffy, one of the largest insolvency practices in the UK.
Even though the city provides cheap labour by Western European standards, critics have noted that the grants offered by the Northern Ireland Industrial Development Board have helped land jobs for the area that last only as long as the funding lasts. This was reflected in questions to the Parliamentary Under-Secretary of State for Northern Ireland, Richard Needham, in 1990. It was noted that it cost £30,000 to create one job in an American firm in Northern Ireland. Critics of investment decisions affecting the district often point to the decision to build a new university building in nearby (predominantly Protestant) Coleraine rather than developing the Ulster University Magee Campus. Another major government decision affecting the city was the creation of the new town of Craigavon outside Belfast, which again was detrimental to the development of the city. Even in October 2005, there was perceived bias against the comparatively impoverished North West of the province, with a major civil service job contract going to Belfast. Mark Durkan, the Social Democratic and Labour Party (SDLP) leader and Member of Parliament (MP) for Foyle, was quoted in the Belfast Telegraph as saying: The fact is there has been consistent under-investment in the North West and a reluctance on the part of the Civil Service to see or support anything west of the Bann, except when it comes to rate increases, then they treat us equally.
In July 2005, the Irish Minister for Finance, Brian Cowen, called for a joint task force to drive economic growth in the cross-border region. This would have implications for Counties Londonderry, Tyrone and Donegal across the border.
The city is the north west's foremost shopping district, housing two large shopping centres along with numerous shop-packed streets serving much of the greater county, as well as Tyrone and Donegal. The city centre has two main shopping centres: the Foyleside Shopping Centre, which has 45 stores and 1,430 parking spaces, and the Richmond Centre, which has 39 retail units. The Quayside Shopping Centre also serves the cityside, and there is also the Lisnagelvin Shopping Centre on the Waterside. These centres, as well as locally run businesses, feature numerous national and international stores. Crescent Link Retail Park, located in the Waterside, has several chain stores and has become the second-largest retail park in Northern Ireland (second only to Sprucefield in Lisburn). Plans have also been approved for Derry's first Asda store, which will be located at the retail park, sharing a unit with Homebase. Sainsbury's also applied for planning permission for a store at Crescent Link, but Environment Minister Alex Attwood turned it down.
Until the store's closure in March 2016, the city was also home to the world's oldest independent department store, Austins. Established in 1830, Austins predated Jenners of Edinburgh by 5 years, Harrods of London by 15 years and Macy's of New York by 25 years. The store's five-storey Edwardian building is located within the walled city in the area known as The Diamond.
Derry is renowned for its architecture, which can be primarily ascribed to the formal planning of the historic walled city of Derry at the core of the modern city. This is centred on the Diamond, with a collection of late Georgian, Victorian and Edwardian buildings maintaining the gridlines of the main thoroughfares (Shipquay Street, Ferryquay Street, Butcher Street and Bishop Street) to the City Gates.
St Columb's Cathedral does not follow the grid pattern, reinforcing its civic status. This Church of Ireland cathedral was the first post-Reformation cathedral built for an Anglican church. The construction of the Roman Catholic St Eugene's Cathedral in the Bogside in the 19th century was another major architectural addition to the city. The Townscape Heritage Initiative has funded restoration works to key listed buildings and other older structures.
In the three centuries since their construction, the city walls have been adapted to meet the needs of a changing city. The best example of this adaptation is the insertion of three additional gates – Castle Gate, New Gate and Magazine Gate – into the walls in the course of the 19th century. Today, the fortifications form a continuous promenade around the city centre, complete with cannon, avenues of mature trees and views across Derry. Historic buildings within the city walls include St Augustine's Church, which sits on the city walls close to the site of the original monastic settlement; the copper-domed Austin's department store, which claims to be the oldest such store in the world; and the imposing Greek Revival Courthouse on Bishop Street. The red-brick late-Victorian Guildhall, also crowned by a copper dome, stands just beyond Shipquay Gate and close to the riverfront.
There are many museums and sites of interest in and around the city, including the Foyle Valley Railway Centre, the Amelia Earhart Centre and Wildlife Sanctuary, the Apprentice Boys Memorial Hall, Ballyoan Cemetery, the Bogside, numerous murals by the Bogside Artists, Derry Craft Village, Free Derry Corner, O'Doherty Tower (now home to part of the Tower Museum), the Harbour Museum, the Museum of Free Derry, the Chapter House Museum, the Workhouse Museum, the Nerve Centre, St Columb's Park and Leisure Centre, Creggan Country Park, Brooke Park, the Millennium Forum, the Void Gallery, and the Foyle and Craigavon bridges. Attractions include museums, a vibrant shopping centre and trips to the Giant's Causeway, which is approximately 50 miles (80 kilometres) away, though poorly connected by public transport. Lonely Planet named Derry the fourth-best city in the world to see in 2013.
On 25 June 2011, the Peace Bridge opened. It is a cycle and footbridge that runs from the Guildhall in the city centre to Ebrington Square and St Columb's Park on the far side of the River Foyle. It was funded jointly by the Department for Social Development (NI) and the Department of the Environment, Community and Local Government, along with matching funding, totalling £14 million, from the SEUPB Peace III programme. Future projects include the Walled City Signature Project, which intends to ensure that the city's walls become a world-class tourist experience.
The transport network is built out of a complex array of old and modern roads and railways throughout the city and county. The city's road network makes use of two bridges to cross the River Foyle, the Craigavon Bridge and the Foyle Bridge, the longest bridge in Ireland. Derry also serves as a major transport hub for travel throughout nearby County Donegal. Despite being the second city of Northern Ireland (and the second-largest city in all of Ulster), its road and rail links to other cities are below par for its standing. Many business leaders claim that government investment in the city and its infrastructure has been badly lacking.
Some have stated that this is due to its outlying border location, whilst others have cited a sectarian bias against the region west of the River Bann due to its high proportion of Catholics. There is no direct motorway link with Dublin or Belfast. The rail link to Belfast has been downgraded over the years, so that at present it is not a viable alternative to the roads for industry to rely on. As of 2008, there were plans for £1 billion worth of transport infrastructure investment in and around the district. Planned upgrades to the A5 Dublin road, agreed as part of the Good Friday Agreement and St Andrews talks, fell through when the government of the Republic of Ireland reneged on its funding, citing the post-2008 economic downturn.
Most public transport in Northern Ireland is operated by the subsidiaries of Translink. Originally the city's internal bus network was run by Ulsterbus, which still provides the city's connections with other towns in Northern Ireland. The city's buses are now run by Ulsterbus Foyle, just as Translink Metro now provides the bus service in Belfast. The Ulsterbus Foyle network offers 13 routes across the city into the suburban areas, in addition to an Easibus link which connects to the Waterside and Drumahoe, and a free Rail Link Bus which runs from the Waterside Railway Station to the city centre. All buses leave from the Foyle Street Bus Station in the city centre.
Long-distance buses depart from Foyle Street Bus Station to destinations throughout Ireland. Buses are operated by both Ulsterbus and Bus Éireann on cross-border routes. Lough Swilly formerly operated buses to County Donegal, but the company entered liquidation and is no longer in operation. There is a half-hourly service to Belfast every day, called the Maiden City Flyer, which is the flagship Goldline Express route. There are hourly services to Strabane, Omagh, Coleraine, Letterkenny and Buncrana, and up to twelve services a day to Dublin. There is a daily service to Sligo, Galway, Shannon Airport and Limerick. TFI Local Link provides additional cross-border public transport routes, with routes 244 (Moville/Derry), 245 (Greencastle/Derry), 288 (Ballybofey/Derry), 952 (Carndonagh/Derry), 957 (Shrove/Derry, via Moville) and 1426 (Stranorlar/Derry) all serving the city. The private coach operator Patrick Gallagher Coaches also runs two routes during the week that serve the city: the first goes from Crolly in County Donegal to Belfast (to the Leonardo Hotel in Belfast city centre, formerly Jurys Inn), and the second runs from County Donegal to the city.
City of Derry Airport, the council-owned airport near Eglinton, has grown during the early 21st century, with new investment in extending the runway and plans to redevelop the terminal. The A2 (a dual carriageway) from Maydown to Eglinton serves the airport. City of Derry Airport is the main regional airport for County Donegal, County Londonderry and west County Tyrone, as well as Derry City itself. The airport is served by Loganair and Ryanair, with scheduled flights to Glasgow Airport, Edinburgh Airport, Manchester Airport, Liverpool John Lennon Airport and London Stansted all year round, and a summer schedule to Mallorca with TUI Airways.
The city is served by a single rail link terminating at Derry~Londonderry railway station in the Waterside, which is subsidised, alongside much of Northern Ireland's railways, by Northern Ireland Railways (NIR).
The link primarily provides passenger services from the city to Belfast, via several stops that include Coleraine, Ballymoney and Antrim, and connections to links with other parts of Northern Ireland. The route is the only rail link remaining in use; most of the lines developed in the mid-19th century fell into decline towards the mid-20th century from competition by new road networks. The original rail network that served the city included four different railways that, between them, linked the city with much of the province of Ulster, plus a harbour railway network that linked the other four lines, and a tramway on the City side of the Foyle. Usage of the rail link between Derry and Belfast remains questionable for commuters, as the journey time of over two hours makes it slower centre-to-centre than the 100-minute Ulsterbus Goldline Express service.
Several railways began operation around the city of Derry in the middle of the 19th century. The companies that set up links helped to provide key connections from the city to other towns and cities across Ireland, for the transportation of passengers and freight. The lines that were constructed featured a mixture of Irish gauge and narrow gauge railways, and the companies that operated them included the Great Northern Railway (GNR), the Belfast and Northern Counties Railway (B&NCR) and the Londonderry and Lough Swilly Railway (L&LSR), together with the harbour lines of the Londonderry Port and Harbour Commissioners (LPHC).
In 1900, the 3 ft (914 mm) gauge Donegal Railway was extended to the city from Strabane, with construction establishing the Londonderry Victoria Road railway terminus and creating a junction with the LPHC railway. The LPHC line was altered to dual gauge, which allowed 3 ft (914 mm) gauge traffic between the Donegal Railway and the L&LSR as well as Irish gauge traffic between the GNR and the B&NCR. By 1905, the government of the United Kingdom had offered subsidies to both the L&LSR and the Donegal Railway to build extensions to their railway networks into remote parts of County Donegal, which soon made Derry (alongside Strabane) a key rail hub for the county and surrounding regions. In 1906 the Northern Counties Committee (NCC, successor to the B&NCR) and the GNR jointly took over the Donegal Railway, making it the County Donegal Railways Joint Committee (CDRJC).
Alongside the railways, the city was served by a standard gauge (1,435 mm or 4 ft 8½ in) tramway, the City of Derry Tramways. The tramway was opened in 1897 and consisted of horse trams that operated along a single line, 1½ miles (2.5 kilometres) long, which ran along the City side of the Foyle parallel to the LPHC's line on that side of the river. The line was never converted to electrically operated trams and was closed in 1919.
In 1922, the partition of Ireland caused dramatic disruption to the city's rail links, except for the NCC route to Coleraine. The creation of an international frontier with County Donegal changed trade patterns to the detriment of the railways affected by the partition, placing border posts on every line to and from Derry and causing great delays to trains and disruption to timekeeping from customs inspections – the L&LSR faced inspections between Pennyburn and Bridge End; the CDRJC faced inspections beyond Strabane; and the GNR line faced inspections between Derry and Strabane.
Customs agreements negotiated over the next few years between Britain and Ireland enabled GNR trains to travel to and from Derry – such trains would be allowed to pass without inspection through the Free State, unless they served local stations on the west bank of the Foyle – while goods transported by all railways between different parts of the Free State would be allowed to pass through Northern Ireland under customs bond. Despite these agreements, local passenger and goods traffic continued to be delayed by customs examinations.
The decline of most of Derry's rail links took place after the Second World War, due to increasing competition from road links. The L&LSR closed its line in 1953, followed by the CDRJC in 1954. The Ulster Transport Authority, which took over the NCC in 1949 and the GNR's lines in Northern Ireland in 1958, took control of the LPHC railway and closed it in 1962; it then shut down the former GNR line to Derry in 1965, two years after the submission of the Benson Report to the Northern Ireland Government. This left the former Londonderry and Coleraine Railway (L&CR) line to Coleraine as the sole railway link for the city, providing a passenger service to Belfast, alongside CIÉ freight services to Donegal. By the 1990s, the service had begun to deteriorate. In 2008, the Department for Regional Development announced plans to relay the track between Derry and Coleraine – the plan, scheduled for completion by 2013, included adding a passing loop to increase traffic capacity and increasing the number of trains with two additional diesel multiple units. Additional phases of the plan included improvements to existing stations along the line and the restoration of the former Victoria Road terminus building, to prepare for the relocation of the city's current terminus station to the site, all for completion by late 2019. Costing around £86 million, the improvements were aimed at reducing the journey time to Belfast by 30 minutes and allowing commuter trains to arrive before 9 a.m. for the first time.
The largest road investment in the north west's history took place during 2010, with the building of the 'A2 Broadbridge Maydown to City of Derry Airport dualling' project and the announcement of the 'A6 Londonderry to Dungiven Dualling Scheme', with the intention of reducing the travel time to Belfast. The latter project brings a dual-carriageway link between Northern Ireland's two largest cities one step closer. The project is costing £320 million and is expected to be completed in 2016. In October 2006 the Government of Ireland announced that it was to invest €1 billion in Northern Ireland, with the planned projects including the 'A5 Western Transport Corridor': the complete upgrade of the A5 Derry – Omagh – Aughnacloy (– Dublin) road, around 90 kilometres (55 miles) long, to dual-carriageway standard. In June 2008 Conor Murphy, Minister for Regional Development, announced that there would be a study into the feasibility of connecting the A5 and A6. Should it proceed, the scheme would most likely run from Drumahoe to south of Prehen along the south-east of the city.
Londonderry Port at Lisahally is the United Kingdom's most westerly port and has capacity for 30,000-ton vessels. The Londonderry Port and Harbour Commissioners (LPHC) announced record turnover, record profits and record tonnage figures for the year ended March 2008. The figures are the result of a significant capital expenditure programme of about £22 million over the period 2000 to 2007.
Tonnage handled by LPHC increased by almost 65% between 2000 and 2007. The port gave vital Allied service in the longest-running campaign of the Second World War, the Battle of the Atlantic, and saw the surrender of the German U-boat fleet at Lisahally on 8 May 1945.
The tidal River Foyle is navigable from the coast at Derry to approximately 10 miles (16 km) inland. In 1796, the Strabane Canal was opened, continuing the navigation a further 4 miles (6 km) southwards to Strabane. The canal was closed in 1962.
Derry is home to the Magee Campus of Ulster University, formerly Magee College. However, the Lockwood Committee's 1960s decision to locate Northern Ireland's second university in Coleraine rather than Derry contributed to the formation of the civil rights movement that ultimately led to the Troubles. Derry was the town more closely associated with higher learning, with Magee College already more than a century old by that time. In the mid-1980s an attempt was made to address this by making Magee College a campus of the Ulster University, but this failed to stifle calls for the establishment of an independent university in Derry. As of 2021, the Magee campus reportedly accommodated approximately 4,400 students, out of a total Ulster University student population of approximately 24,000, of whom some 15,000 were at the Belfast campus. The North West Regional College is also based in the city and accommodates over 10,000 student enrolments annually.
One of the two oldest secondary schools in Northern Ireland, Foyle College, is located in Derry. It was founded in 1616 by the Merchant Taylors. Other secondary schools include St Columb's College, Oakgrove Integrated College, St Cecilia's College, St Mary's College, St Joseph's Boys' School, Lisneal College, Thornhill College, Lumen Christi College and St Brigid's College. There are also numerous primary schools.
The city is home to many sports clubs and teams. Both association football and Gaelic football are popular in the area. In association football, the city's most prominent clubs include Derry City, who play in the national league of the Republic of Ireland; Institute, of the NIFL Championship; and Maiden City and Trojans, both of the Northern Ireland Intermediate League. In addition to these clubs, which all play in national leagues, other clubs are based in the city. The local football league governed by the IFA is the North-West Junior League, which contains many clubs from the city, such as BBOB (Boys Brigade Old Boys) and Lincoln Courts. The city's other junior league is the Derry and District League, in which teams from the city and surrounding areas participate, including Don Boscos and Creggan Swifts. The Foyle Cup youth soccer tournament is held annually in the city. It has attracted many notable teams in the past, including Werder Bremen, IFK Göteborg and Ferencváros.
In Gaelic football, Derry GAA are the county team and play in the Gaelic Athletic Association's National Football League, Ulster Senior Football Championship and All-Ireland Senior Football Championship. They also field hurling teams in the equivalent tournaments. There are many Gaelic games clubs in and around the city, for example Na Magha CLG, Steelstown GAC, Doire Colmcille CLG, Seán Dolans GAC, Na Piarsaigh CLG Doire Trasna and Slaughtmanus GAC.
There are many boxing clubs, the best known being the Ring Amateur Boxing Club, which is based on the cityside and associated with boxers Charlie Nash and John Duddy.
Rochester's Amateur Boxing Club is based in the city's Waterside area. Rugby union is also quite popular in the city, with the City of Derry Rugby Club situated not far from the city centre. City of Derry won both the Ulster Towns Cup and the Ulster Junior Cup in 2009. Londonderry YMCA RFC is another rugby club, based in the village of Drumahoe on the outskirts of the city. The city's only basketball club is North Star Basketball Club, which has teams in the Basketball Northern Ireland senior and junior leagues. Cricket is also played in the city, particularly in the Waterside. The city is home to two cricket clubs, Brigade Cricket Club and Glendermott Cricket Club, both of whom play in the North West Senior League. There are two golf clubs situated in the city, City of Derry Golf Club and Foyle International Golf Centre.
Artists and writers associated with the city and surrounding countryside include the Nobel Prize-winning poet Seamus Heaney, poet Seamus Deane, playwright Brian Friel, writer and music critic Nik Cohn, artist Willie Doherty, socio-political commentator and activist Eamonn McCann, and bands such as The Undertones. The large political gable-wall murals of the Bogside Artists, Free Derry Corner, the Foyle Film Festival, the Derry Walls, St Eugene's and St Columb's Cathedrals and the annual Halloween street carnival are popular tourist attractions. In 2010, Derry was named the UK's tenth 'most musical' city by PRS for Music.
In May 2013 a perpetual Peace Flame Monument was unveiled by Martin Luther King III and the Presbyterian minister Rev. David Latimer. The flame was lit by children from both traditions in the city and is one of only 15 such flames across the world.
The local newspapers, the Derry Journal (known as the Londonderry Journal until 1880) and the Londonderry Sentinel, reflect the divided history of the city: the Journal was founded in 1772 and is Ireland's second-oldest newspaper; the Sentinel was formed in 1829, when new owners of the Journal embraced Catholic emancipation and the editor left the paper to set up the Sentinel. Numerous radio stations can be received; the largest stations based in the city are BBC Radio Foyle and the commercial station Q102.9. There was a locally based television station, C9TV, one of only two local or 'restricted' television services in Northern Ireland, which ceased broadcasting in 2007.
The city's nightlife is mainly focused on the weekends, with several bars and clubs providing "student nights" during the weekdays. Waterloo Street and Strand Road provide the main venues. Waterloo Street, a steep street lined with both Irish traditional and modern pubs, frequently has live rock and traditional music at night.
Notable people who were born or have lived in Derry include:
The following people and military units have received the Freedom of the City of Derry.
[ { "paragraph_id": 0, "text": "Derry, officially Londonderry, is the largest city in County Londonderry, the second-largest in Northern Ireland and the fifth-largest on the island of Ireland. The old walled city lies on the west bank of the River Foyle, which is spanned by two road bridges and one footbridge. The city now covers both banks (Cityside on the west and Waterside on the east).", "title": "" }, { "paragraph_id": 1, "text": "The population of the city was 85,279 at the 2021 census, while the Derry Urban Area had a population of 105,066 in 2011. The district administered by Derry City and Strabane District Council contains both Londonderry Port and City of Derry Airport. Derry is close to the border with County Donegal, with which it has had a close link for many centuries. The person traditionally seen as the founder of the original Derry is Saint Colmcille, a holy man from Tír Chonaill, the old name for almost all of modern County Donegal, of which the west bank of the Foyle was a part before 1610.", "title": "" }, { "paragraph_id": 2, "text": "In 2013, Derry was the inaugural UK City of Culture, having been awarded the title in 2010.", "title": "" }, { "paragraph_id": 3, "text": "Despite the official name, the city is also commonly known as Derry, which is an anglicisation of the Irish Daire or Doire, and translates as 'oak-grove/oak-wood'. The name derives from the settlement's earliest references, Daire Calgaich ('oak-grove of Calgach'). The name was changed from Derry in 1613 during the Plantation of Ulster to reflect the establishment of the city by the London guilds.", "title": "Name" }, { "paragraph_id": 4, "text": "Derry has been used in the names of the local government district and council since 1984, when the council changed its name from Londonderry City Council to Derry City Council. This also changed the name of the district, which had been created in 1973 and included both the city and surrounding rural areas. In the 2015 local government reform, the district was merged with the Strabane district to form the Derry City and Strabane district, with the councils likewise merged.", "title": "Name" }, { "paragraph_id": 5, "text": "According to the city's Royal Charter of 10 April 1662, the official name is Londonderry. This was reaffirmed in a High Court decision in 2007.", "title": "Name" }, { "paragraph_id": 6, "text": "The 2007 court case arose because Derry City Council wanted clarification on whether the 1984 name change of the council and district had changed the official name of the city and what the procedure would be to effect a name change. The court clarified that Londonderry remained the official name and that the correct procedure to change the name would be via a petition to the Privy Council. Derry City Council afterward began this process, and was involved in conducting an equality impact assessment report (EQIA). Firstly it held an opinion poll of district residents in 2009, which reported that 75% of Catholics and 77% of Nationalists found the proposed change acceptable, compared to 6% of Protestants and 8% of Unionists. The EQIA then held two consultative forums, and solicited comments from the general public on whether or not the city should have its name changed to Derry. A total of 12,136 comments were received, of which 3,108 were broadly in favour of the proposal, and 9,028 opposed it. On 23 July 2015, the council voted in favour of a motion to change the official name of the city to Derry and to write to Mark H. 
Durkan, the Northern Irish Minister for the Environment, to ask how the change could be effected.", "title": "Name" }, { "paragraph_id": 7, "text": "The name Derry is preferred by nationalists and it is broadly used throughout Northern Ireland's Catholic community, as well as that of the Republic of Ireland, whereas many unionists prefer Londonderry; however, in everyday conversation Derry is used by most Protestant residents of the city. Linguist Kevin McCafferty argues that \"It is not, strictly speaking, correct that Northern Ireland Catholics call it Derry, while Protestants use the Londonderry form, although this pattern has become more common locally since the mid-1980s, when the city council changed its name by dropping the prefix\". In McCafferty's survey of language use in the city, \"only very few interviewees—all Protestants—use the official form\".", "title": "Name" }, { "paragraph_id": 8, "text": "Apart from the name of the local council, the city is usually known as Londonderry in official use within the UK. In the Republic of Ireland, the city and county are almost always referred to as Derry, on maps, in the media and in conversation. In April 2009, however, the Republic of Ireland's Minister for Foreign Affairs, Micheál Martin, announced that Irish passport holders who were born there could record either Derry or Londonderry as their place of birth. Whereas official road signs in the Republic use the name Derry, those in Northern Ireland bear Londonderry (sometimes abbreviated to L'derry), although some of these have been defaced with the reference to London obscured. Usage varies among local organisations, with both names being used. Examples are City of Derry Airport, City of Derry Rugby Club, Derry City FC and the Protestant Apprentice Boys of Derry, as opposed to Londonderry Port, Londonderry YMCA Rugby Club and Londonderry Chamber of Commerce. The bishopric has always remained that of Derry, both in the (Protestant, formerly-established) Church of Ireland (now combined with the bishopric of Raphoe), and in the Roman Catholic Church. Most companies within the city choose local area names such as Pennyburn, Rosemount or Foyle from the River Foyle to avoid alienating the other community. Londonderry railway station is often referred to as Waterside railway station within the city, but is called Derry/Londonderry at other stations. The council changed the name of the local government district covering the city to Derry on 7 May 1984, consequently renaming itself Derry City Council. This did not change the name of the city, although the city is coterminous with the district, and in law, the city council is also the Corporation of Londonderry or, more formally, the Mayor, Aldermen and Citizens of the City of Londonderry. The form Londonderry is used for the post town by the Royal Mail; however, use of Derry will still ensure delivery.", "title": "Name" }, { "paragraph_id": 9, "text": "The city is also nicknamed \"the Maiden City\" by virtue of the fact that its walls were never breached despite being besieged on three separate occasions in the 17th century, the most notable being the Siege of Derry of 1688–1689. It was also nicknamed \"Stroke City\" by local broadcaster Gerry Anderson, owing to the politically correct use by some of the dual name Derry/Londonderry (which has itself been used by BBC Television). 
A later addition to the landscape has been the erection of several large stone columns on main roads into the city welcoming drivers, euphemistically, to 'the Walled City'.", "title": "Name" }, { "paragraph_id": 10, "text": "Derry is a common place name in Ireland, with at least six towns bearing that name and at least a further 79 places. The word Derry often forms part of the place name, for example, Derrybeg, Derryboy, Derrylea and Derrymore.", "title": "Name" }, { "paragraph_id": 11, "text": "Londonderry, Yorkshire, near the Yorkshire Dales, was named for the Marquesses of Londonderry, as is Londonderry Island off Tierra del Fuego in Chile. In the United States, twin towns in New Hampshire called Derry and Londonderry lie not far from Londonderry, Vermont, with additional namesakes in Derry, Pennsylvania, Londonderry, Ohio, and in Canada Londonderry, Nova Scotia and Londonderry, Edmonton, Alberta. There is also Londonderry, New South Wales and the associated Londonderry electorate.", "title": "Name" }, { "paragraph_id": 12, "text": "", "title": "City walls" }, { "paragraph_id": 13, "text": "Derry is the only remaining completely intact walled city in Ireland, and one of the finest examples of a walled city in Europe. The walls constitute the largest monument in State care in Northern Ireland and, as part of the last walled city to be built in Europe, stand as the most complete and spectacular.", "title": "City walls" }, { "paragraph_id": 14, "text": "The Walls were built in 1613–1619 by The Honourable The Irish Society as defences for early 17th-century settlers from England and Scotland. The Walls, which are approximately one mile (1.5 kilometres) in circumference and which vary in height and width between 3.7 and 10.7 metres (12 and 35 feet), are completely intact and form a walkway around the inner city. They provide a unique promenade to view the layout of the original town which still preserves its Renaissance-style street plan. The four original gates to the Walled City are Bishop's Gate, Ferryquay Gate, Butcher Gate and Shipquay Gate. Three further gates were added later, Magazine Gate, Castle Gate and New Gate, making seven gates in total. The architect was Peter Benson, a London-born builder, who was rewarded with several grants of land.", "title": "City walls" }, { "paragraph_id": 15, "text": "It is one of the few cities in Europe that never saw its fortifications breached, withstanding several sieges, including the famous Siege of Derry in 1689 which lasted 105 days; hence the city's nickname, The Maiden City.", "title": "City walls" }, { "paragraph_id": 16, "text": "Derry is one of the oldest continuously inhabited places in Ireland. The earliest historical references date to the 6th century when a monastery was founded there by St Columba or Colmcille, a famous saint from what is now County Donegal, but for thousands of years before that people had been living in the vicinity.", "title": "History" }, { "paragraph_id": 17, "text": "Before leaving Ireland to spread Christianity elsewhere, Colmcille founded a monastery at Derry (which was then called Doire Calgach), on the west bank of the Foyle. According to oral and documented history, the site was granted to Colmcille by a local king. The monastery then remained in the hands of the federation of Columban churches who regarded Colmcille as their spiritual mentor. The year 546 is often referred to as the date that the original settlement was founded. 
However, it is now accepted by historians that this was an erroneous date assigned by medieval chroniclers. It is accepted that between the 6th century and the 11th century, Derry was known primarily as a monastic settlement.", "title": "History" }, { "paragraph_id": 18, "text": "The town became strategically more significant during the Tudor conquest of Ireland and came under frequent attack. During O'Doherty's Rebellion in 1608 it was attacked by Sir Cahir O'Doherty, Irish chieftain of Inishowen, who burnt much of the town and killed the governor George Paulet. The soldier and statesman Sir Henry Docwra made vigorous efforts to develop the town, earning the reputation of being \"the founder of Derry\"; but he was accused of failing to prevent the O'Doherty attack, and returned to England.", "title": "History" }, { "paragraph_id": 19, "text": "What became the City of Derry was part of the relatively new County Donegal up until 1610. In that year, the west bank of the future city was transferred by the English Crown to The Honourable The Irish Society and was combined with County Coleraine, part of County Antrim and a large portion of County Tyrone to form County Londonderry. Planters organised by London livery companies through The Honourable The Irish Society arrived in the 17th century as part of the Plantation of Ulster, and rebuilt the town with high walls to defend it from Irish insurgents who opposed the plantation. The aim was to settle Ulster with a population supportive of the Crown. It was then renamed \"Londonderry\".", "title": "History" }, { "paragraph_id": 20, "text": "This city was the first planned city in Ireland: it was begun in 1613, with the walls being completed in 1619, at a cost of £10,757. The central diamond within a walled city with four gates was thought to be a good design for defence. The grid pattern chosen was subsequently much copied in the colonies of British North America. The charter initially defined the city as extending three Irish miles (about 6.1 km) from the centre.", "title": "History" }, { "paragraph_id": 21, "text": "The modern city preserves the 17th-century layout of four main streets radiating from a central Diamond to four gateways – Bishop's Gate, Ferryquay Gate, Shipquay Gate and Butcher's Gate. The city's oldest surviving building was also constructed at this time: the 1633 Plantation Gothic cathedral of St Columb. In the porch of the cathedral is a stone that records completion with the inscription: \"If stones could speake, then London's prayse should sound, Who built this church and cittie from the grounde.\"", "title": "History" }, { "paragraph_id": 22, "text": "During the 1640s, the city suffered in the Wars of the Three Kingdoms, which began with the Irish Rebellion of 1641, when the Gaelic Irish insurgents made a failed attack on the city. In 1649 the city and its garrison, which supported the republican Parliament in London, were besieged by Scottish Presbyterian forces loyal to King Charles I. The Parliamentarians besieged in Derry were relieved by a strange alliance of Roundhead troops under George Monck and the Irish Catholic general Owen Roe O'Neill. These temporary allies were soon fighting each other again however, after the landing in Ireland of the New Model Army in 1649. 
The war in Ulster was finally brought to an end when the Parliamentarians crushed the Irish Catholic Ulster army at the Battle of Scarrifholis, near Letterkenny in nearby County Donegal, in 1650.", "title": "History" }, { "paragraph_id": 23, "text": "During the Glorious Revolution, only Derry and nearby Enniskillen had a Protestant garrison by November 1688. An army of around 1,200 men, mostly \"Redshanks\" (Highlanders), under Alexander MacDonnell, 3rd Earl of Antrim, was slowly organised (they set out in the week William of Orange landed in England). When they arrived on 7 December 1688 the gates were closed against them and the Siege of Derry began. In April 1689, King James came to the city and summoned it to surrender. The King was rebuffed and the siege lasted until the end of July with the arrival of a relief ship.", "title": "History" }, { "paragraph_id": 24, "text": "The city was rebuilt in the 18th century with many of its fine Georgian-style houses still surviving. The city's first bridge across the River Foyle was built in 1790. During the 18th and 19th centuries, the port became an important embarkation point for Irish emigrants setting out for North America.", "title": "History" }, { "paragraph_id": 25, "text": "Also during the 19th century, it became a destination for migrants fleeing areas more severely affected by the Great Famine. One of the most notable shipping lines was the McCorkell Line operated by Wm. McCorkell & Co. Ltd. from 1778. The line's most famous ship was the Minnehaha, which was known as the \"Green Yacht from Derry\".", "title": "History" }, { "paragraph_id": 26, "text": "During World War I, the city contributed over 5,000 men to the British Army from Catholic and Protestant families.", "title": "History" }, { "paragraph_id": 27, "text": "During the Irish War of Independence, the area was rocked by sectarian violence, partly prompted by the guerilla war raging between the Irish Republican Army and British forces, but also influenced by economic and social pressures. By mid-1920 there was severe sectarian rioting in the city. Many people died and, in addition, many Catholics and Protestants were expelled from their homes during this communal unrest. After a week's violence, a truce was negotiated by local politicians on both unionist and republican sides.", "title": "History" }, { "paragraph_id": 28, "text": "In 1921, following the Anglo-Irish Treaty and the Partition of Ireland, it unexpectedly became a 'border city', separated from much of its traditional economic hinterland in County Donegal.", "title": "History" }, { "paragraph_id": 29, "text": "During World War II, the city played an important part in the Battle of the Atlantic. Ships from the Royal Navy, the Royal Canadian Navy, and other Allied navies were stationed in the city and the United States military established a base. Over 20,000 Royal Navy, 10,000 Royal Canadian Navy, and 6,000 United States Navy personnel were stationed in the city during the war. The establishment of the American presence in the city was the result of a secret agreement between the Americans and the British before the Americans entered the war. 
It was the first American naval base in Europe and the terminal for American convoys en route to Europe.", "title": "History" }, { "paragraph_id": 30, "text": "The reason for such a high degree of military and naval activity was self-evident: Derry was the United Kingdom's westernmost port; indeed, the city was the westernmost Allied port in Europe: thus, Derry was a crucial jumping-off point, together with Glasgow and Liverpool, for the shipping convoys that ran between Europe and North America. The large numbers of military personnel in Derry substantially altered the character of the city, bringing in some outside colour to the local area, as well as some cosmopolitan and economic buoyancy during these years. Several airfields were built in the outlying regions of the city at this time: Maydown, Eglinton and Ballykelly. RAF Eglinton went on to become City of Derry Airport.", "title": "History" }, { "paragraph_id": 31, "text": "The city contributed a significant number of men to the war effort throughout the services, most notably the 500 men in the 9th (Londonderry) Heavy Anti-Aircraft Regiment, known as the 'Derry Boys'. This regiment served in North Africa, the Sudan, Italy and mainland UK. Many others served in the Merchant Navy, taking part in the convoys that supplied the UK and Russia during the war.", "title": "History" }, { "paragraph_id": 32, "text": "The border location of the city and the influx of trade from the military convoys allowed for significant smuggling operations to develop in the city.", "title": "History" }, { "paragraph_id": 33, "text": "At the conclusion of the Second World War, some 60 U-boats of the German Kriegsmarine eventually ended up in the city's harbour at Lisahally after their surrender. The initial surrender was attended by Admiral Sir Max Horton, Commander-in-Chief of the Western Approaches, and Sir Basil Brooke, third Prime Minister of Northern Ireland.", "title": "History" }, { "paragraph_id": 34, "text": "The city languished after the Second World War, with unemployment and development stagnating. A large campaign, led by the University for Derry Committee, to have Northern Ireland's second university located in the city, ended in failure.", "title": "History" }, { "paragraph_id": 35, "text": "Derry was a focal point for the nascent civil rights movement in Northern Ireland.", "title": "History" }, { "paragraph_id": 36, "text": "Catholics were discriminated against under Unionist government in Northern Ireland, both politically and economically. In the late 1960s the city became the flashpoint of disputes about institutional gerrymandering. Political scientist John Whyte explains that:", "title": "History" }, { "paragraph_id": 37, "text": "All the accusations of gerrymandering, practically all the complaints about housing and regional policy, and a disproportionate amount of the charges about public and private employment come from this area. The area – which consisted of Counties Tyrone and Fermanagh, Londonderry County Borough, and portions of Counties Londonderry and Armagh – had less than a quarter of the total population of Northern Ireland yet generated not far short of three-quarters of the complaints of discrimination...The unionist government must bear its share of responsibility. It put through the original gerrymander which underpinned so many of the subsequent malpractices, and then, despite repeated protests, did nothing to stop those malpractices continuing. 
The most serious charge against the Northern Ireland government is not that it was directly responsible for widespread discrimination, but that it allowed discrimination on such a scale over a substantial segment of Northern Ireland.", "title": "History" }, { "paragraph_id": 38, "text": "A civil rights demonstration in 1968 led by the Northern Ireland Civil Rights Association was banned by the Government and blocked using force by the Royal Ulster Constabulary. The events that followed the August 1969 Apprentice Boys parade resulted in the Battle of the Bogside, when Catholic rioters fought the police; this led to widespread civil disorder in Northern Ireland and is often dated as the starting point of the Troubles.", "title": "History" }, { "paragraph_id": 39, "text": "On Sunday 30 January 1972, 13 unarmed civilians were shot dead by British paratroopers during a civil rights march in the Bogside area. Another 13 were wounded and one further man later died of his wounds. This event came to be known as Bloody Sunday.", "title": "History" }, { "paragraph_id": 40, "text": "The conflict which became known as the Troubles is widely regarded as having started in Derry with the Battle of the Bogside. The Civil Rights Movement had also been very active in the city. In the early 1970s, the city was heavily militarised and there was widespread civil unrest. Several districts in the city constructed barricades to control access and prevent the forces of the state from entering.", "title": "History" }, { "paragraph_id": 41, "text": "Violence eased towards the end of the Troubles in the late 1980s and early 1990s. Irish journalist Ed Moloney claims in The Secret History of the IRA that republican leaders there negotiated a de facto ceasefire in the city as early as 1991. Whether this is true or not, the city did see less bloodshed by this time than Belfast or other localities.", "title": "History" }, { "paragraph_id": 42, "text": "The city was visited by an orca in November 1977 at the height of the Troubles; he was dubbed Dopey Dick by the thousands who came from miles around to see him.", "title": "History" }, { "paragraph_id": 43, "text": "From 1613 the city was governed by the Londonderry Corporation. In 1898 this became Londonderry County Borough Council, until 1969 when administration passed to the unelected Londonderry Development Commission. In 1973 a new district council with boundaries extending to the rural south-west was established under the name Londonderry City Council, renamed Derry City Council in 1984 and consisting of five electoral areas: Cityside, Northland, Rural, Shantallow and Waterside. The council of 30 members was re-elected every four years. The council merged with Strabane District Council in April 2015 under local government reorganisation to become Derry and Strabane District Council.", "title": "Governance" }, { "paragraph_id": 44, "text": "The councillors elected in 2019 for the city are:", "title": "Governance" }, { "paragraph_id": 45, "text": "The devices on the city's arms are a skeleton and a three-towered castle on a black field, with the \"chief\" or top third of the shield showing the arms of the City of London: a red cross and sword on white. In the centre of the cross is a gold harp. In unofficial use the harp sometimes appears above the arms as a crest.", "title": "Governance" }, { "paragraph_id": 46, "text": "The arms were confirmed by Daniel Molyneux, the Ulster King of Arms, in 1613, following the town's incorporation. 
Molyneux's notes state that the original arms of Derry were \"the picture of death (or a skeleton) sitting on a mossie ston and in the dexter point a castle\". To this design he added, at the request of the new mayor, \"a chief, the armes of London\". Molyneux goes on to state that the skeleton is symbolic of Derry's ruin at the hands of the Irish rebel Cahir O'Doherty, and that the silver castle represents its renewal through the efforts of the London guilds: \"[Derry] hath since bene (as it were) raysed from the dead by the worthy undertakinge of the Ho'ble Cittie of London, in memorie whereof it is hence forth called and knowen by the name of London Derrie.\"", "title": "Governance" }, { "paragraph_id": 47, "text": "Local legend offers different theories as to the origin of the skeleton. One identifies it as Walter de Burgh, who was starved to death in the Earl of Ulster's dungeons in 1332. Another identifies it as Cahir O'Doherty himself, who was killed in a skirmish near Kilmacrennan in 1608 (but was popularly believed to have wasted away while sequestered in his castle at Buncrana). In the days of gerrymandering and anti-Catholic discrimination, Derry's Catholics often claimed in dark wit that the skeleton was a Catholic waiting for a job and a council house. However, a report commissioned by the city council in 1979 established that there was no basis for any of the popular theories, and that the skeleton \"[is] purely symbolic and does not refer to any identifiable person\".", "title": "Governance" }, { "paragraph_id": 48, "text": "The 1613 arms depicted a harp in the centre of the cross, but this was omitted from later depictions of the city arms, and in the 1952 letters patent confirming the arms to the Londonderry Corporation. In 2002 Derry City Council applied to the College of Arms to have the harp restored, and Garter and Norroy & Ulster Kings of Arms issued letters patent to that effect in 2003, having accepted the 17th-century evidence.", "title": "Governance" }, { "paragraph_id": 49, "text": "The motto attached to the coat of arms reads in Latin, \"Vita, Veritas, Victoria\". This translates into English as \"Life, Truth, Victory\".", "title": "Governance" }, { "paragraph_id": 50, "text": "Derry is characterised by its distinctively hilly topography. The River Foyle forms a deep valley as it flows through the city, making Derry a place of very steep streets and sudden, startling views. The original walled city of Londonderry lies on a hill on the west bank of the River Foyle. In the past, the river branched and enclosed this wooded hill as an island; over the centuries, however, the western branch of the river dried up and became a low-lying and boggy district that is now called the Bogside.", "title": "Geography" }, { "paragraph_id": 51, "text": "Today, modern Derry extends considerably north and west of the city walls and east of the river. The half of the city on the west of the Foyle is known as the Cityside and the area east is called the Waterside. The Cityside and Waterside are connected by the Craigavon Bridge and Foyle Bridge, and by a footbridge in the centre of the city called Peace Bridge. The district also extends into rural areas to the southeast of the city.", "title": "Geography" }, { "paragraph_id": 52, "text": "This much larger city, however, remains characterised by the often extremely steep hills that form much of its terrain on both sides of the river. 
A notable exception to this lies on the northeastern edge of the city, on the shores of Lough Foyle, where large expanses of sea and mudflats were reclaimed in the middle of the 19th century. Today, these sloblands are protected from the sea by miles of sea walls and dikes. The area is an internationally important bird sanctuary, ranked among the top 30 wetland sites in the UK.", "title": "Geography" }, { "paragraph_id": 53, "text": "Other important nature reserves lie at Ness Country Park, 10 miles (16 kilometres) east of Derry; and at Prehen Wood, within the city's south-eastern suburbs.", "title": "Geography" }, { "paragraph_id": 54, "text": "Derry has, like most of Ireland, a temperate maritime climate (Cfb) according to the Köppen climate classification system. The nearest official Met Office weather station for which climate data is available is Carmoney, just west of City of Derry Airport and about five miles (eight kilometres) northeast of the city centre. However, observations ceased in 2004 and the nearest weather station is currently at Ballykelly, about 12 miles (19 kilometres) to the east-northeast. Typically, 27 nights of the year will report an air frost at Ballykelly, and at least 1 mm of precipitation will be reported on 170 days (1981–2010 averages).", "title": "Geography" }, { "paragraph_id": 55, "text": "The lowest temperature recorded at Carmoney was −11.0 °C (12.2 °F) on 27 December 1995.", "title": "Geography" }, { "paragraph_id": 56, "text": "Derry Urban Area (DUA), including the city and the neighbouring settlements of Culmore, Newbuildings and Strathfoyle, is classified as a city by the Northern Ireland Statistics and Research Agency (NISRA) since its population exceeds 75,000. The mid-2006 population estimate for the wider Derry City Council area was 107,300. Population growth in 2005/06 was driven by natural change, with net out-migration of approximately 100 people.", "title": "Demography" }, { "paragraph_id": 57, "text": "The city was one of the few in Ireland to experience an increase in population during the Great Famine as migrants came to it from other, more heavily affected areas.", "title": "Demography" }, { "paragraph_id": 58, "text": "On census day (27 March 2011) there were 105,066 people living in Derry Urban Area. Of these, 27% were aged under 16 years and 14% were aged 60 and over; 49% of the population were male and 51% were female; 75% were from a Roman Catholic background and 23% (up three per cent from 2001) were from a Protestant background.", "title": "Demography" }, { "paragraph_id": 59, "text": "On census day (21 March 2021) there were 85,279 people living in Derry City and of these 77.88% (66,413) were from a Catholic background, 16.98% (14,481) were from a Protestant and other Christian (including Christian-related) background, 1.24% had another religious background and 3.9% had no religion. 60.73% of individuals identified as Irish only, 13.18% as British only and 16.12% as Northern Irish only.", "title": "Demography" }, { "paragraph_id": 60, "text": "Concerns have been raised by both communities over the increasingly divided nature of the city. There were about 17,000 Protestants on the west bank of the River Foyle in 1971. 
The proportion rapidly declined during the 1970s; the 2011 census recorded 3,169 Protestants on the west bank, compared to 54,976 Catholics, and it is feared that the city could become permanently divided.", "title": "Demography" }, { "paragraph_id": 61, "text": "However, concerted efforts have been made by the local community, church and political leaders from both traditions to redress the problem. A conference to bring together key actors and promote tolerance was held in October 2006. Ken Good, the Church of Ireland Bishop of Derry and Raphoe, said he was happy living on the cityside. \"I feel part of it. It is my city and I want to encourage other Protestants to feel exactly the same\", he said.", "title": "Demography" }, { "paragraph_id": 62, "text": "Support for Protestants in the district has been strong from the SDLP politician Helen Quigley, who formerly served as the mayor of Derry. She made inclusion and tolerance key themes of her mayoralty. Cllr. Quigley said it was time for \"everyone to take a stand to stop the scourge of sectarian and other assaults in the city.\"", "title": "Demography" }, { "paragraph_id": 63, "text": "The economy of the district was based significantly on the textile industry until relatively recently. For many years women were commonly the sole wage earners, working in the shirt factories, while the men by comparison had high levels of unemployment. This led to significant male emigration. Shirt making in the city dates to 1831, when it is said to have been started by William Scott and his family, who first exported shirts to Glasgow. Within 50 years, shirt making in the city was the most prolific in the UK, with garments exported all over the world. The industry was so well known that it received a mention in Das Kapital, where Karl Marx discusses the factory system:", "title": "Economy" }, { "paragraph_id": 64, "text": "The shirt factory of Messrs. Tille at Londonderry, which employs 1,000 operatives in the factory itself, and 9,000 people spread up and down the country and working in their own houses.", "title": "Economy" }, { "paragraph_id": 65, "text": "The industry reached its peak in the 1920s, employing around 18,000 people. In modern times, however, the textile industry declined due largely to lower Asian wages.", "title": "Economy" }, { "paragraph_id": 66, "text": "A long-term foreign employer in the area is Du Pont, which has been based at Maydown since 1958, its first European production facility. Originally Neoprene was manufactured at Maydown, subsequently followed by Hypalon. More recently Lycra and Kevlar production units were active. Thanks to a worldwide demand for Kevlar, which is made at the plant, the facility undertook a £40 million upgrade to expand its global Kevlar production.", "title": "Economy" }, { "paragraph_id": 67, "text": "As of 2002, the three largest private-sector employers were American firms. Economic successes have included call centres and a large investment by Seagate, which has operated a factory in the Springtown Industrial Estate since 1993. As of 2019, Seagate was employing approximately 1,400 people in Derry.", "title": "Economy" }, { "paragraph_id": 68, "text": "A controversial new employer in the area was Raytheon Systems Limited, a software division of the American defence contractor, which was set up in Derry in 1999. Although some of the local people welcomed the jobs boost, others in the area objected to the jobs being provided by a firm involved heavily in the arms trade. 
Following four years of protest by the Foyle Ethical Investment Campaign, in 2004 Derry City Council passed a motion declaring the district \"a 'no-go' area for the arms trade\", and in 2006 its offices were briefly occupied by anti-war protestors who became known as the Raytheon 9. In 2009, the company announced that it was not renewing its lease when it expired in 2010 and was looking for a new location for its operations.", "title": "Economy" }, { "paragraph_id": 69, "text": "Other significant multinational employers in the region include Firstsource of India, INVISTA, Stream International, Perfecseal, NTL, Northbrook Technology of the United States, Arntz Belting and Invision Software of Germany, and Homeloan Management of the UK. Major local business employers include Desmonds, Northern Ireland's largest privately owned company, manufacturing and sourcing garments, E&I Engineering, St. Brendan's Irish Cream Liqueur and McCambridge Duffy, one of the largest insolvency practices in the UK.", "title": "Economy" }, { "paragraph_id": 70, "text": "Even though the city provides cheap labour by Western European standards, critics have noted that the grants offered by the Northern Ireland Industrial Development Board have helped land jobs for the area that last only as long as the funding. This was reflected in questions to the Parliamentary Under-Secretary of State for Northern Ireland, Richard Needham, in 1990. It was noted that it cost £30,000 to create one job in an American firm in Northern Ireland.", "title": "Economy" }, { "paragraph_id": 71, "text": "Critics of investment decisions affecting the district often point to the decision to build a new university building in nearby (predominantly Protestant) Coleraine rather than developing the Ulster University Magee Campus. Another major government decision affecting the city was the decision to create the new town of Craigavon outside Belfast, which again was detrimental to the development of the city. Even in October 2005, there was perceived bias against the comparatively impoverished North West of the province, with a major civil service job contract going to Belfast. Mark Durkan, the Social Democratic and Labour Party (SDLP) leader and Member of Parliament (MP) for Foyle, was quoted in the Belfast Telegraph as saying:", "title": "Economy" }, { "paragraph_id": 72, "text": "The fact is there has been consistent under-investment in the North West and a reluctance on the part of the Civil Service to see or support anything west of the Bann, except when it comes to rate increases, then they treat us equally.", "title": "Economy" }, { "paragraph_id": 73, "text": "In July 2005, the Irish Minister for Finance, Brian Cowen, called for a joint task force to drive economic growth in the cross-border region. This would have implications for Counties Londonderry, Tyrone, and Donegal across the border.", "title": "Economy" }, { "paragraph_id": 74, "text": "The city is the north west's foremost shopping district, housing two large shopping centres along with numerous shop-packed streets serving much of the greater county, as well as Tyrone and Donegal.", "title": "Economy" }, { "paragraph_id": 75, "text": "The city centre has two main shopping centres: the Foyleside Shopping Centre, which has 45 stores and 1,430 parking spaces, and the Richmond Centre, which has 39 retail units. The Quayside Shopping Centre also serves the cityside and there is also Lisnagelvin Shopping Centre on the Waterside. 
These centres, as well as locally run businesses, feature numerous national and international stores. Crescent Link Retail Park, located in the Waterside, has several chain stores and has become the second largest retail park in Northern Ireland (second only to Sprucefield in Lisburn). Plans have also been approved for Derry's first Asda store, which will be located at the retail park sharing a unit with Homebase. Sainsbury's also applied for planning permission for a store at Crescent Link, but Environment Minister Alex Attwood turned it down.", "title": "Economy" }, { "paragraph_id": 76, "text": "Until the store's closure in March 2016, the city was also home to the world's oldest independent department store, Austins. Established in 1830, Austins predates Jenners of Edinburgh by 5 years, Harrods of London by 15 years and Macy's of New York by 25 years. The store's five-storey Edwardian building is located within the walled city in the area known as The Diamond.", "title": "Economy" }, { "paragraph_id": 77, "text": "Derry is renowned for its architecture. This can be ascribed primarily to the formal planning of the historic walled city of Derry at the core of the modern city. This is centred on the Diamond with a collection of late Georgian, Victorian and Edwardian buildings maintaining the gridlines of the main thoroughfares (Shipquay Street, Ferryquay Street, Butcher Street and Bishop Street) to the City Gates. St Columb's Cathedral does not follow the grid pattern, reinforcing its civic status. This Church of Ireland cathedral was the first post-Reformation cathedral built for an Anglican church. The construction of the Roman Catholic St Eugene's Cathedral in the Bogside in the 19th century was another major architectural addition to the city. The Townscape Heritage Initiative has funded restoration works to key listed buildings and other older structures.", "title": "Landmarks" }, { "paragraph_id": 78, "text": "In the three centuries since their construction, the city walls have been adapted to meet the needs of a changing city. The best example of this adaptation is the insertion of three additional gates – Castle Gate, New Gate and Magazine Gate – into the walls in the course of the 19th century. Today, the fortifications form a continuous promenade around the city centre, complete with cannon, avenues of mature trees and views across Derry. Historic buildings within the city walls include St Augustine's Church, which sits on the city walls close to the site of the original monastic settlement; the copper-domed Austin's department store, which claims to be the oldest such store in the world; and the imposing Greek Revival Courthouse on Bishop Street. The red-brick late-Victorian Guildhall, also crowned by a copper dome, stands just beyond Shipquay Gate and close to the riverfront.", "title": "Landmarks" }, { "paragraph_id": 79, "text": "There are many museums and sites of interest in and around the city, including the Foyle Valley Railway Centre, the Amelia Earhart Centre and Wildlife Sanctuary, the Apprentice Boys Memorial Hall, Ballyoan Cemetery, The Bogside, numerous murals by the Bogside Artists, Derry Craft Village, Free Derry Corner, O'Doherty Tower (now home to part of the Tower Museum), the Harbour Museum, the Museum of Free Derry, Chapter House Museum, the Workhouse Museum, the Nerve Centre, St. 
Columb's Park and Leisure Centre, Creggan Country Park, Brooke Park, The Millennium Forum, the Void Gallery, and the Foyle and Craigavon bridges.", "title": "Landmarks" }, { "paragraph_id": 80, "text": "Attractions include museums, a vibrant shopping centre and trips to the Giant's Causeway, which is approximately 50 miles (80 kilometres) away, though poorly connected by public transport. Lonely Planet called Derry the fourth-best city in the world to see in 2013.", "title": "Landmarks" }, { "paragraph_id": 81, "text": "On 25 June 2011, the Peace Bridge opened. It is a cycle and footbridge that runs from near the Guildhall in Derry city centre to Ebrington Square and St Columb's Park on the far side of the River Foyle. It was funded jointly by the Department for Social Development (NI) and the Department of the Environment, Community and Local Government, along with matching funding, totalling £14 million, from the SEUPB Peace III programme.", "title": "Landmarks" }, { "paragraph_id": 82, "text": "Future projects include the Walled City Signature Project, which intends to ensure that the city's walls become a world-class tourist experience.", "title": "Landmarks" }, { "paragraph_id": 83, "text": "The transport network is built out of a complex array of old and modern roads and railways throughout the city and county. The city's road network also makes use of two bridges to cross the River Foyle, the Craigavon Bridge and the Foyle Bridge, the longest bridge in Ireland. Derry also serves as a major transport hub for travel throughout nearby County Donegal.", "title": "Transport" }, { "paragraph_id": 84, "text": "In spite of being the second city of Northern Ireland (and the second-largest city in all of Ulster), road and rail links to other cities are below par for its standing. Many business leaders claim that government investment in the city and infrastructure has been badly lacking. Some have stated that this is due to its outlying border location whilst others have cited a sectarian bias against the region west of the River Bann due to its high proportion of Catholics. There is no direct motorway link with Dublin or Belfast. The rail link to Belfast has been downgraded over the years so that, presently, it is not a viable alternative to the roads for industry to rely on. As of 2008, there were plans for £1 billion worth of transport infrastructure investment in and around the district. Planned upgrades to the A5 Dublin road agreed as part of the Good Friday Agreement and St Andrews Talks fell through when the government of the Republic of Ireland reneged on its funding, citing the post-2008 economic downturn.", "title": "Transport" }, { "paragraph_id": 85, "text": "Most public transport in Northern Ireland is operated by the subsidiaries of Translink. Originally the city's internal bus network was run by Ulsterbus, which still provides the city's connections with other towns in Northern Ireland. The city's buses are now run by Ulsterbus Foyle, just as Translink Metro now provides the bus service in Belfast. The Ulsterbus Foyle network offers 13 routes across the city into the suburban areas, excluding an Easibus link which connects to the Waterside and Drumahoe, and a free Rail Link Bus runs from the Waterside Railway Station to the city centre. All buses leave from the Foyle Street Bus Station in the city centre.", "title": "Transport" }, { "paragraph_id": 86, "text": "Long-distance buses depart from Foyle Street Bus Station to destinations throughout Ireland. 
Buses are operated by both Ulsterbus and Bus Éireann on cross-border routes. Lough Swilly formerly operated buses to County Donegal, but the company entered liquidation and is no longer in operation. There is a half-hourly service to Belfast every day, called the Maiden City Flyer, which is the Goldline Express flagship route. There are hourly services to Strabane, Omagh, Coleraine, Letterkenny and Buncrana, and up to twelve services a day to Dublin. There is a daily service to Sligo, Galway, Shannon Airport and Limerick.", "title": "Transport" }, { "paragraph_id": 87, "text": "TFI Local Link provides additional cross-border public transport routes, with route 244 (Moville/Derry), 245 (Greencastle/Derry), 288 (Ballybofey/Derry), 952 (Carndonagh/Derry), 957 (Shrove/Derry, via Moville) and 1426 (Stranorlar/Derry) all serving the city.", "title": "Transport" }, { "paragraph_id": 88, "text": "The private coach operator Patrick Gallagher Coaches also runs two routes during the week that serve the city. The first goes from Crolly in County Donegal to Belfast (to the Leonardo Hotel in Belfast city centre, formerly Jurys Inn), and the second runs from County Donegal to the city.", "title": "Transport" }, { "paragraph_id": 89, "text": "City of Derry Airport, the council-owned airport near Eglinton, has grown during the early 21st century, with new investment in extending the runway and plans to redevelop the terminal.", "title": "Transport" }, { "paragraph_id": 90, "text": "The A2 (a dual carriageway) from Maydown to Eglinton serves the airport. City of Derry Airport is the main regional airport for County Donegal, County Londonderry and west County Tyrone as well as Derry City itself.", "title": "Transport" }, { "paragraph_id": 91, "text": "The airport is served by Loganair and Ryanair with scheduled flights to Glasgow Airport, Edinburgh Airport, Manchester Airport, Liverpool John Lennon Airport and London Stansted all year round, with a summer schedule to Mallorca with TUI Airways.", "title": "Transport" }, { "paragraph_id": 92, "text": "The city is served by a single rail link terminating at Derry~Londonderry railway station in Waterside that is subsidised, alongside much of Northern Ireland's railways, by Northern Ireland Railways (N.I.R.). The link primarily provides passenger services from the city to Belfast, via several stops that include Coleraine, Ballymoney, and Antrim, and connections to links with other parts of Northern Ireland. The route itself is the only remaining rail link used by trains; most of the lines developed in the mid-19th century fell into decline towards the mid-20th century from competition by new road networks. The original rail network that served the city included four different railways that, between them, linked the city with much of the province of Ulster, plus a harbour railway network that linked the other four lines, and a tramway on the City side of the Foyle. Usage of the rail link between Derry and Belfast remains questionable for commuters, due to the journey time of over two hours, which makes it slower centre-to-centre than the 100-minute Ulsterbus Goldline Express service.", "title": "Transport" }, { "paragraph_id": 93, "text": "Several railways began operation around the city of Derry in the middle of the 19th century. The companies that set them up provided key links between the city and other towns and cities across Ireland, for the transportation of passengers and freight. 
The lines that were constructed featured a mixture of Irish gauge and narrow gauge railways, and companies that operated them included:", "title": "Transport" }, { "paragraph_id": 94, "text": "In 1900, the 3 ft (914 mm) gauge Donegal Railway was extended to the city from Strabane, with construction establishing the Londonderry Victoria Road railway terminus and creating a junction with the LPHC railway. The LPHC line was altered to dual gauge, which allowed 3 ft (914 mm) gauge traffic between the Donegal Railway and L&LSR as well as Irish gauge traffic between the GNR and B&NCR. By 1905, the government of the United Kingdom had offered subsidies to both the L&LSR and the Donegal Railway to build extensions to their railway networks into remote parts of County Donegal, which soon developed Derry (alongside Strabane) into a key rail hub for the county and surrounding regions. In 1906 the Northern Counties Committee (NCC, successor to the B&NCR) and the GNR jointly took over the Donegal Railway, making it the County Donegal Railways Joint Committee (CDRJC).", "title": "Transport" }, { "paragraph_id": 95, "text": "Alongside the railways, the city was served by a standard gauge (1,435 mm (4 ft 8½ in)) tramway, the City of Derry Tramways. The tramway was opened in 1897 and consisted of horse trams that operated along a single line, 1½ miles (2.5 kilometres) long, which ran along the City side of the Foyle parallel to the LPHC's line on that side of the river. The line was never converted to electrically operated trams and was closed in 1919.", "title": "Transport" }, { "paragraph_id": 96, "text": "In 1922, the partition of Ireland caused dramatic disruption to the city's rail links, except for the NCC route to Coleraine. The creation of an international frontier with County Donegal changed trade patterns to the detriment of the railways affected by the partition, placing border posts on every line to and from Derry, causing great delays to trains and disrupting timekeeping from customs inspections: the L&LSR faced inspections between Pennyburn and Bridge End; the CDRJC faced inspections beyond Strabane; and the GNR line faced inspections between Derry and Strabane. Customs agreements negotiated over the next few years between Britain and Ireland enabled GNR trains to travel to and from Derry – such trains would be allowed to pass without inspection through the Free State, unless they served local stations on the west bank of the Foyle – while goods transported by all railways between different parts of the Free State would be allowed to pass through Northern Ireland under customs bond. Despite these agreements, local passenger and goods traffic continued to be delayed by customs examinations.", "title": "Transport" }, { "paragraph_id": 97, "text": "The decline of most of Derry's rail links took place after the Second World War, due to increasing competition from road links. The L&LSR closed its line in 1953, followed by the CDRJC in 1954. The Ulster Transport Authority, which took over the NCC in 1949 and the GNR's lines in Northern Ireland in 1958, took control of the LPHC railway and closed it in 1962, then shut down the former GNR line to Derry in 1965, two years after the submission of the Benson Report to the Northern Ireland Government. This left the former L&CR line to Coleraine as the sole railway link for the city, providing a passenger service to Belfast, alongside CIÉ freight services to Donegal. 
By the 1990s, the service began to deteriorate.", "title": "Transport" }, { "paragraph_id": 98, "text": "In 2008, the Department for Regional Development announced plans to relay the track between Derry and Coleraine – the plan, aimed at completion by 2013, included adding a passing loop to increase traffic capacity, and increasing the number of trains with two additional diesel multiple units. Additional phases of the plan also included improvements to existing stations along the line, and the restoration of the former Victoria Road terminus building to prepare for the relocation of the city's current terminus station to the site, all for completion by late 2019. Costing around £86 million, the improvements were aimed at reducing the journey time to Belfast by 30 minutes and allowing commuter trains to arrive before 9 a.m. for the first time.", "title": "Transport" }, { "paragraph_id": 99, "text": "The largest road investment in the north west's history took place during 2010, with the building of the 'A2 Broadbridge Maydown to City of Derry Airport dualling' project and the announcement of the 'A6 Londonderry to Dungiven Dualling Scheme', intended to reduce the travel time to Belfast. The latter project brings a dual-carriageway link between Northern Ireland's two largest cities one step closer. The project is costing £320 million and is expected to be completed in 2016.", "title": "Transport" }, { "paragraph_id": 100, "text": "In October 2006 the Government of Ireland announced that it was to invest €1 billion in Northern Ireland, with the planned projects including 'the A5 Western Transport Corridor', the complete upgrade of the A5 Derry – Omagh – Aughnacloy (– Dublin) road, around 90 kilometres (55 miles) long, to dual carriageway standard.", "title": "Transport" }, { "paragraph_id": 101, "text": "In June 2008 Conor Murphy, Minister for Regional Development, announced that there would be a study into the feasibility of connecting the A5 and A6. Should it proceed, the scheme would most likely run from Drumahoe to south of Prehen along the south east of the city.", "title": "Transport" }, { "paragraph_id": 102, "text": "Londonderry Port at Lisahally is the United Kingdom's most westerly port and has capacity for 30,000-ton vessels. The Londonderry Port and Harbour Commissioners (LPHC) announced record turnover, record profits and record tonnage figures for the year ended March 2008. The figures are the result of a significant capital expenditure programme for the period 2000 to 2007 of about £22 million. Tonnage handled by LPHC increased by almost 65% between 2000 and 2007.", "title": "Transport" }, { "paragraph_id": 103, "text": "The port gave vital Allied service in the longest-running campaign of the Second World War, the Battle of the Atlantic, and saw the surrender of the German U-boat fleet at Lisahally on 8 May 1945.", "title": "Transport" }, { "paragraph_id": 104, "text": "The tidal River Foyle is navigable from the coast at Derry to approximately 10 miles (16 km) inland. In 1796, the Strabane Canal was opened, continuing the navigation a further 4 miles (6 km) southwards to Strabane. The canal was closed in 1962.", "title": "Transport" }, { "paragraph_id": 105, "text": "Derry is home to the Magee Campus of Ulster University, formerly Magee College. However, the Lockwood Committee's 1960s decision to locate Northern Ireland's second university in Coleraine rather than Derry helped contribute to the formation of the civil rights movement that ultimately led to the Troubles. 
Derry was the town more closely associated with higher learning, with Magee College already more than a century old by that time. In the mid-1980s an attempt was made to address this by making Magee College a campus of the Ulster University, but this failed to stifle calls for the establishment of an independent university in Derry. As of 2021, the Magee campus reportedly accommodated approximately 4,400 students, out of a total Ulster University student population of approximately 24,000, of whom some 15,000 were at the Belfast campus.", "title": "Education" }, { "paragraph_id": 106, "text": "The North West Regional College is also based in the city, and accommodates over 10,000 student enrolments annually.", "title": "Education" }, { "paragraph_id": 107, "text": "One of the two oldest secondary schools in Northern Ireland, Foyle College, is located in Derry. It was founded in 1616 by the Merchant Taylors. Other secondary schools include St. Columb's College, Oakgrove Integrated College, St Cecilia's College, St Mary's College, St. Joseph's Boys' School, Lisneal College, Thornhill College, Lumen Christi College and St. Brigid's College. There are also numerous primary schools.", "title": "Education" }, { "paragraph_id": 108, "text": "The city is home to sports clubs and teams. Both association football and Gaelic football are popular in the area.", "title": "Sports" }, { "paragraph_id": 109, "text": "In association football, the city's most prominent clubs include Derry City, who play in the national league of the Republic of Ireland; Institute, of the NIFL Championship; as well as Maiden City and Trojans, both of the Northern Ireland Intermediate League. In addition to these clubs, which all play in national leagues, other clubs are based in the city. The local football league governed by the IFA is the North-West Junior League, which contains many clubs from the city, such as BBOB (Boys Brigade Old Boys) and Lincoln Courts. The city's other junior league is the Derry and District League, in which teams from the city and surrounding areas participate, including Don Boscos and Creggan Swifts. The Foyle Cup youth soccer tournament is held annually in the city. It has attracted many notable teams in the past, including Werder Bremen, IFK Göteborg and Ferencváros.", "title": "Sports" }, { "paragraph_id": 110, "text": "In Gaelic football Derry GAA are the county team and play in the Gaelic Athletic Association's National Football League, Ulster Senior Football Championship and All-Ireland Senior Football Championship. They also field hurling teams in the equivalent tournaments. There are many Gaelic games clubs in and around the city, for example Na Magha CLG, Steelstown GAC, Doire Colmcille CLG, Seán Dolans GAC, Na Piarsaigh CLG Doire Trasna and Slaughtmanus GAC.", "title": "Sports" }, { "paragraph_id": 111, "text": "There are many boxing clubs, the best known being the Ring Amateur Boxing Club, which is based on the cityside and is associated with boxers Charlie Nash and John Duddy. Rochester's Amateur Boxing Club is based in the city's Waterside area.", "title": "Sports" }, { "paragraph_id": 112, "text": "Rugby union is also quite popular in the city, with the City of Derry Rugby Club situated not far from the city centre. City of Derry won both the Ulster Towns Cup and the Ulster Junior Cup in 2009. 
Londonderry YMCA RFC is another rugby club and is based in the village of Drumahoe, on the outskirts of the city.", "title": "Sports" }, { "paragraph_id": 113, "text": "The city's only basketball club is North Star Basketball Club, which has teams in the Basketball Northern Ireland senior and junior leagues.", "title": "Sports" }, { "paragraph_id": 114, "text": "Cricket is also played in the city, particularly in the Waterside. The city is home to two cricket clubs, Brigade Cricket Club and Glendermott Cricket Club, both of whom play in the North West Senior League.", "title": "Sports" }, { "paragraph_id": 115, "text": "There are two golf clubs situated in the city, City of Derry Golf Club and Foyle International Golf Centre.", "title": "Sports" }, { "paragraph_id": 116, "text": "Artists and writers associated with the city and surrounding countryside include the Nobel Prize-winning poet Seamus Heaney, poet Seamus Deane, playwright Brian Friel, writer and music critic Nik Cohn, artist Willie Doherty, socio-political commentator and activist Eamonn McCann and bands such as The Undertones. The large political gable-wall murals of the Bogside Artists, Free Derry Corner, the Foyle Film Festival, the Derry Walls, St Eugene's and St Columb's Cathedrals and the annual Halloween street carnival are popular tourist attractions. In 2010, Derry was named the UK's tenth 'most musical' city by PRS for Music.", "title": "Culture" }, { "paragraph_id": 117, "text": "In May 2013 a perpetual Peace Flame Monument was unveiled by Martin Luther King III and Presbyterian minister Rev. David Latimer. The flame was lit by children from both traditions in the city and is one of only 15 such flames across the world.", "title": "Culture" }, { "paragraph_id": 118, "text": "The local newspapers, the Derry Journal (known as the Londonderry Journal until 1880) and the Londonderry Sentinel, reflect the divided history of the city: the Journal was founded in 1772 and is Ireland's second oldest newspaper; the Sentinel newspaper was formed in 1829 when new owners of the Journal embraced Catholic emancipation, and the editor left the paper to set up the Sentinel.", "title": "Culture" }, { "paragraph_id": 119, "text": "Numerous radio stations can be received in the city; the largest stations based there are BBC Radio Foyle and the commercial station Q102.9.", "title": "Culture" }, { "paragraph_id": 120, "text": "There was a locally based television station, C9TV, one of only two local or 'restricted' television services in Northern Ireland, which ceased broadcasts in 2007.", "title": "Culture" }, { "paragraph_id": 121, "text": "The city's nightlife is mainly focused on the weekends, with several bars and clubs providing \"student nights\" during the weekdays. Waterloo Street and Strand Road provide the main venues. Waterloo Street, a steep street lined with both Irish traditional and modern pubs, frequently has live rock and traditional music at night.", "title": "Culture" }, { "paragraph_id": 122, "text": "Notable people who were born or have lived in Derry include:", "title": "Notable people" }, { "paragraph_id": 123, "text": "The following people and military units have received the Freedom of the City of Derry.", "title": "Freedom of the City" } ]
Derry, officially Londonderry, is the largest city in County Londonderry, the second-largest in Northern Ireland and the fifth-largest on the island of Ireland. The old walled city lies on the west bank of the River Foyle, which is spanned by two road bridges and one footbridge. The city now covers both banks. The population of the city was 85,279 at the 2021 census, while the Derry Urban Area had a population of 105,066 in 2011. The district administered by Derry City and Strabane District Council contains both Londonderry Port and City of Derry Airport. Derry is close to the border with County Donegal, with which it has had a close link for many centuries. The person traditionally seen as the founder of the original Derry is Saint Colmcille, a holy man from Tír Chonaill, the old name for almost all of modern County Donegal, of which the west bank of the Foyle was a part before 1610. In 2013, Derry was the inaugural UK City of Culture, having been awarded the title in 2010.
2002-01-01T10:46:15Z
2023-12-31T00:00:04Z
[ "Template:Incomplete list", "Template:Main", "Template:Notelist", "Template:ODNB", "Template:Citequote", "Template:Portal", "Template:Cite book", "Template:Cite magazine", "Template:Blacklisted-links", "Template:Bar box", "Template:Hugman", "Template:IrishCities", "Template:Commons", "Template:Circa", "Template:RailGauge", "Template:Col-begin", "Template:Dead link", "Template:Use British English", "Template:Cite journal", "Template:Cite report", "Template:Lang", "Template:Anchor", "Template:Cite news", "Template:Wikivoyage", "Template:Use DMY dates", "Template:Update inline", "Template:Reflist", "Template:Harvp", "Template:Infobox UK place", "Template:Efn", "Template:Pp", "Template:Weather box", "Template:Div col", "Template:County Londonderry", "Template:Redirect", "Template:Convert", "Template:Failed verification", "Template:Poemquote", "Template:Webarchive", "Template:UK City of Culture", "Template:Authority control", "Template:Short description", "Template:Pp-move", "Template:Rws", "Template:Col-end", "Template:Div col end", "Template:Cite web", "Template:UK cities", "Template:Blockquote", "Template:Party name with colour", "Template:Col-2", "Template:Cite Sports-Reference", "Template:Citation needed" ]
https://en.wikipedia.org/wiki/Derry
9,058
European influence in Afghanistan
European influence in Afghanistan has been present in the country since the Victorian era, when the competing imperial powers of Britain and Russia contested for control over Afghanistan as part of the Great Game. After the decline of the Durrani dynasty in 1823, Dost Mohammad Khan established the Barakzai dynasty. Dost Mohammad achieved prominence among his brothers through clever use of the support of his mother's Qizilbash tribesmen and his own youthful apprenticeship under his brother, Fateh Khan. However, in the same year, the Afghans lost their former stronghold of Peshawar to the Sikh Khalsa Army of Ranjit Singh at the Battle of Nowshera. The Afghan forces in the battle were supported by Azim Khan, half-brother of Dost Mohammad. In 1834 Dost Mohammad defeated an invasion by the former ruler, Shuja Shah Durrani, but his absence from Kabul gave the Sikhs the opportunity to expand westward. Ranjit Singh's forces moved from Peshawar into territory ruled directly by Kabul. In 1837 Dost Mohammad's forces, under the command of his son Akbar Khan, defeated the Sikhs at the Battle of Jamrud, a post fifteen kilometres west of Peshawar. This was a Pyrrhic victory, however, and they failed to fully dislodge the Sikhs from Jamrud. The Afghan leader did not follow up this triumph by retaking Peshawar, but instead wrote to Lord Auckland, the new British governor-general in British India, for help in dealing with the Sikhs. The letter marked the beginning of British influence in Afghanistan, and of the subsequent Anglo-Russian struggle known as the Great Game. The British became the major European power in the Indian subcontinent after the 1763 Treaty of Paris and began to show interest in Afghanistan as early as their 1809 treaty with Shuja Shah Durrani. It was the threat of the expanding Russian Empire pushing for an advantage in the Afghan region that placed pressure on British India, in what became known as the Great Game. The Great Game set in motion the confrontation of the British and Russian empires, whose spheres of influence moved steadily closer to one another until they met in Afghanistan. It also involved repeated attempts by the British to establish a puppet government in Kabul. The remainder of the 19th century saw greater European involvement in Afghanistan and her surrounding territories and heightened conflict among the ambitious local rulers as Afghanistan's fate played out globally. The débâcle of the Afghan civil war left a vacuum in the Hindu Kush area that concerned the British, who were well aware of the many times in history it had been employed as an invasion route to South Asia. In the early decades of the 19th century, it became clear to the British that the major threat to their interests in India would not come from the fragmented Afghan empire, the Iranians, or the French, but from the Russians, who had already begun a steady advance southward from the Caucasus, winning decisive wars against the Ottomans and Persians. At the same time, the Russians feared the possibility of a permanent British foothold in Central Asia as the British expanded northward, incorporating the Punjab, Sindh, and Kashmir into their empire; much of this territory would later become Pakistan. The British viewed Russia's absorption of the Caucasus, the Kyrgyz and Turkmen lands, the Khanate of Khiva, and the Emirate of Bukhara with equal suspicion, as a threat to their interests in the Indian subcontinent.
In addition to this rivalry between Britain and Russia, there were two specific reasons for British concern over Russia's intentions. First was the Russian influence at the Iranian court, which prompted the Russians to support Iran in its attempt to take Herat, historically the western gateway to Afghanistan and northern India. In 1837 Iran advanced on Herat with the support and advice of Russian officers. The second immediate reason was the presence in Kabul in 1837 of a Russian agent, Yan Vitkevich, who was ostensibly there, as was the British agent Alexander Burnes, for commercial discussions. The British demanded that Dost Mohammad sever all contact with the Iranians and Russians, remove Vitkevich from Kabul, surrender all claims to Peshawar, and respect Peshawar's independence as well as that of Kandahar, which was under the control of his brothers at the time. In return, the British government intimated that it would ask Ranjit Singh to reconcile with the Afghans. When Auckland refused to put the agreement in writing, Dost Mohammad suspended negotiations with the British and began negotiations with Vitkevich. In 1838 Auckland, Ranjit Singh, and Shuja signed an agreement stating that Shuja would regain control of Kabul and Kandahar with the help of the British and Sikhs; he would accept Sikh rule of the former Afghan provinces already controlled by Ranjit Singh, and Herat would remain independent. In practice, the plan replaced Dost Mohammad with a British figurehead whose autonomy would be similar to that of the princes who ruled over the princely states in British India. It soon became apparent to the British that Sikh participation, advancing toward Kabul through the Khyber Pass while Shuja and the British advanced through Kandahar, would not be forthcoming. Auckland's plan in the spring of 1838 was for the Sikhs to place Shuja on the Afghan throne, with British support. By the end of the summer, however, the plan had changed: now the British alone would impose the pliant Shuja Shah. As a prelude to his invasion plans, the Governor-General of India Lord Auckland issued the Simla Manifesto in October 1838, setting forth the necessary reasons for British intervention in Afghanistan. The manifesto stated that in order to ensure the welfare of India, the British must have a trustworthy ally on India's western frontier. The British claim that their troops were merely supporting Shah Shuja's small army in retaking what was once his throne fooled no one. Although the Simla Manifesto stated that British troops would be withdrawn as soon as Shuja was installed in Kabul, Shuja's rule depended entirely on British support to suppress rebellion and on British funds to buy the support of tribal chiefs. The British denied that they were invading Afghanistan, instead claiming they were supporting its legitimate Shuja government "against foreign interference and factious opposition". In November 1841 insurrection and massacre flared up in Kabul. The British vacillated and disagreed and were beleaguered in their inadequate cantonments. Cut off as they were by winter and insurgent tribes from any hope of relief, the British negotiated with the most influential sirdars. Mohammad Akbar Khan, son of the captive Dost Mohammad, arrived in Kabul and became the effective leader of the sirdars. At a conference with them, Sir William Macnaghten was killed, but in spite of this the sirdars' demands were agreed to by the British, and they withdrew.
During the withdrawal they were attacked by Ghilzai tribesmen, and in running battles through the snowbound passes nearly the entire column of 4,500 troops and 12,000 camp followers was killed. Of the British only one, Dr. William Brydon, reached Jalalabad, while a few others were captured. Afghan forces loyal to Akbar Khan besieged the remaining British contingents at Kandahar, Ghazni and Jalalabad. Ghazni fell, but the other garrisons held out, and with the help of reinforcements from India their besiegers were defeated. While preparations were under way for a renewed advance on Kabul, the new Governor-General Lord Ellenborough ordered British forces to leave Afghanistan after securing the release of the prisoners from Kabul and taking reprisals. The forces from Kandahar and Jalalabad again defeated Akbar Khan, retook and sacked Ghazni and Kabul, and rescued the prisoners before withdrawing through the Khyber Pass. After months of chaos in Kabul, Mohammad Akbar Khan secured local control, and in April 1843 his father Dost Mohammad, who had been released by the British, returned to the throne in Afghanistan. In the following decade, Dost Mohammad concentrated his efforts on reconquering Mazari Sharif, Konduz, Badakhshan, and Kandahar. Mohammad Akbar Khan died in 1845. During the Second Anglo-Sikh War (1848–49), Dost Mohammad's last effort to take Peshawar failed. By 1854 the British wanted to resume relations with Dost Mohammad, whom they had essentially ignored in the intervening twelve years. The 1855 Treaty of Peshawar reopened diplomatic relations, proclaimed respect for each side's territorial integrity, and pledged both sides as friends of each other's friends and enemies of each other's enemies. In 1857 an addendum to the 1855 treaty permitted a British military mission to establish a presence in Kandahar (but not Kabul) during a conflict with the Persians, who had attacked Herat in 1856. During the Indian Rebellion of 1857, some British officials suggested restoring Peshawar to Dost Mohammad in return for his support against the rebellious sepoys of the Bengal Army, but this view was rejected by British political officers on the North West frontier, who believed that Dost Mohammad would see this as a sign of weakness and turn against the British. In 1863 Dost Mohammad retook Herat with British acquiescence. A few months later, he died. Sher Ali Khan, his third son and proclaimed successor, did not manage to recapture Kabul from his older brother, Mohammad Afzal (whose troops were led by his son, Abdur Rahman), until 1868, after which Abdur Rahman retreated across the Amu Darya and bided his time. In the years immediately following the First Anglo-Afghan War, and especially after the Indian Rebellion of 1857 against the British in India, Liberal Party governments in London took a political view of Afghanistan as a buffer state. By the time Sher Ali had established control in Kabul in 1868, he found the British ready to support his regime with arms and funds, but nothing more. Over the next ten years, relations between the Afghan ruler and Britain deteriorated steadily. The Afghan ruler was worried about the southward encroachment of Russia, which by 1873 had taken over the lands of the khan, or ruler, of Khiva. Sher Ali sent an envoy seeking British advice and support.
The previous year the British had signed an agreement with the Russians in which the latter agreed to respect the northern boundaries of Afghanistan and to view the territories of the Afghan Emir as outside their sphere of influence. The British, however, refused to give any assurances to the disappointed Sher Ali. After tension between Russia and Britain in Europe ended with the June 1878 Congress of Berlin, Russia turned its attention to Central Asia. That same summer, Russia sent an uninvited diplomatic mission to Kabul. Sher Ali tried, but failed, to keep them out. Russian envoys arrived in Kabul on 22 July 1878, and on 14 August the British demanded that Sher Ali accept a British mission too. The amir not only refused to receive a British mission but threatened to stop it if it were dispatched. Lord Lytton, the viceroy, ordered a diplomatic mission to set out for Kabul in September 1878, but the mission was turned back as it approached the eastern entrance of the Khyber Pass, triggering the Second Anglo-Afghan War. A British force of about 40,000 fighting men was distributed into military columns which penetrated Afghanistan at three different points. An alarmed Sher Ali attempted to appeal in person to the Tsar for assistance, but, unable to do so, he returned to Mazari Sharif, where he died on 21 February 1879. With British forces occupying much of the country, Sher Ali's son and successor, Mohammad Yaqub Khan, signed the Treaty of Gandamak in May 1879 in order to put a quick end to the conflict. According to this agreement, and in return for an annual subsidy and vague assurances of assistance in case of foreign aggression, Yaqub relinquished control of Afghan foreign affairs to the British. British representatives were installed in Kabul and other locations, British control was extended to the Khyber and Michni Passes, and Afghanistan ceded various frontier areas and Quetta to Britain. The British forces then withdrew. Soon afterwards, an uprising in Kabul led to the killing of Britain's Resident in Kabul, Sir Pierre Cavagnari, along with his guards and staff, on 3 September 1879, provoking the second phase of the Second Afghan War. Major General Sir Frederick Roberts led the Kabul Field Force over the Shutargardan Pass into central Afghanistan, defeated the Afghan Army at Char Asiab on 6 October 1879 and occupied Kabul. Ghazi Mohammad Jan Khan Wardak staged an uprising and attacked British forces near Kabul in the Siege of the Sherpur Cantonment in December 1879, but his defeat there resulted in the collapse of this rebellion. Yaqub Khan, suspected of complicity in the killings of Cavagnari and his staff, was obliged to abdicate. The British considered a number of possible political settlements, including partitioning Afghanistan between multiple rulers or placing Yaqub's brother Ayub Khan on the throne, but ultimately decided to install his cousin Abdur Rahman Khan as emir instead. Ayub Khan, who had been serving as governor of Herat, rose in revolt, defeated a British detachment at the Battle of Maiwand in July 1880 and besieged Kandahar. Roberts then led the main British force from Kabul and decisively defeated Ayub Khan in September at the Battle of Kandahar, bringing his rebellion to an end. Abdur Rahman had confirmed the Treaty of Gandamak, leaving the British in control of the territories ceded by Yaqub Khan and ensuring British control of Afghanistan's foreign policy in exchange for protection and a subsidy.
Abandoning the provocative policy of maintaining a British resident in Kabul, but having achieved all their other objectives, the British withdrew. As far as British interests were concerned, Abdur Rahman answered their prayers: a forceful, intelligent leader capable of welding his divided people into a state, and one willing to accept limitations to his power imposed by British control of his country's foreign affairs and the British buffer state policy. His twenty-one-year reign was marked by efforts to modernize and establish control of the kingdom, whose boundaries were delineated by the two empires bordering it. Abdur Rahman turned his considerable energies to what evolved into the creation of the modern state of Afghanistan. He achieved this consolidation of Afghanistan in three ways. He suppressed various rebellions and followed up his victories with harsh punishment, execution, and deportation. He broke the stronghold of Pashtun tribes by forcibly transplanting them. He transplanted his most powerful Pashtun enemies, the Ghilzai, and other tribes from southern and south-central Afghanistan to areas north of the Hindu Kush with predominantly non-Pashtun populations. The last non-Muslim Afghans of Kafiristan north of Kabul were forcefully converted to Islam. Finally, he created a system of provincial governorates different from old tribal boundaries. Provincial governors had a great deal of power in local matters, and an army was placed at their disposal to enforce tax collection and suppress dissent. Abdur Rahman kept a close eye on these governors, however, by creating an effective intelligence system. During his reign, tribal organization began to be eroded as provincial government officials allowed land to change hands outside the traditional clan and tribal limits. The Pashtuns battled and conquered the Uzbeks and forced them into the status of a ruled people who were discriminated against. Out of anti-Russian strategic interests, the British assisted the Afghan conquest of the Uzbek Khanates, giving weapons to the Afghans and supporting the Afghan government's colonization of northern Afghanistan by Pashtuns, which involved sending large numbers of Pashtun colonists onto Uzbek land. In addition to forging a nation from the splintered regions making up Afghanistan, Abdur Rahman tried to modernize his kingdom by forging a regular army and the first institutionalized bureaucracy. Despite his distinctly authoritarian personality, Abdur Rahman called for a loya jirga, an assemblage of royal princes, important notables, and religious leaders. According to his autobiography, Abdur Rahman had three goals: subjugating the tribes, extending government control through a strong, visible army, and reinforcing the power of the ruler and the royal family. During his visit to Rawalpindi in 1885, the Amir requested the Viceroy of India to depute a Muslim envoy to Kabul who was of noble birth and of ruling family background. Mirza Atta Ullah Khan, Sardar Bahadur, son of Khan Bahadur Mirza Fakir Ullah Khan (Saman Burj, Wazirabad), a direct descendant of the Jarral Rajput Rajas of Rajauri, was selected and approved by the Amir to be the British Envoy to Kabul. Abdur Rahman also paid attention to technological advance. He brought foreign physicians, engineers (especially for mining), geologists, and printers to Afghanistan. He imported European machinery and encouraged the establishment of small factories to manufacture soap, candles, and leather goods.
He sought European technical advice on communications, transport, and irrigation. Local Afghan tribes strongly resisted this modernization. Workmen making roads had to be protected by the army against local warriors. Nonetheless, despite these sweeping internal policies, Abdur Rahman's foreign policy was completely in foreign hands. The first important frontier dispute was the Panjdeh crisis of 1885, precipitated by Russian encroachment into Central Asia. Having seized the Merv (now Mary) Oasis by 1884, Russian forces were directly adjacent to Afghanistan. Claims to the Panjdeh Oasis were in dispute, with the Russians keen to take over all the region's Turkoman domains. After battling Afghan forces in the spring of 1885, the Russians seized the oasis. Russian and British troops were quickly alerted, but the two powers reached a compromise; Russia remained in possession of the oasis, and Britain believed it could keep the Russians from advancing any farther. Without an Afghan say in the matter, the Joint Anglo-Russian Boundary Commission agreed that the Russians would relinquish the farthest territory captured in their advance but retain Panjdeh. This agreement on these border sections delineated for Afghanistan a permanent northern frontier at the Amu Darya, but also involved the loss of much territory, especially around Panjdeh. The second section of Afghan border demarcated during Abdur Rahman's reign was in the Wakhan. The British insisted that Abdur Rahman accept sovereignty over this remote region, where unruly Kyrgyz held sway; he had no choice but to accept Britain's compromise. In 1895 and 1896, another Joint Anglo-Russian Boundary Commission agreed on the frontier boundary to the far northeast of Afghanistan, which bordered Chinese territory (although the Chinese did not formally accept this as a boundary between the two countries until 1964). For Abdur Rahman, delineating the boundary with India (through the Pashtun area) was far more significant, and it was during his reign that the Durand Line was drawn. Under pressure, Abdur Rahman agreed in 1893 to accept a mission headed by the British Indian foreign secretary, Sir Mortimer Durand, to define the limits of British and Afghan control in the Pashtun territories. Boundary limits were agreed on by Durand and Abdur Rahman before the end of 1893, but there is some question about the degree to which Abdur Rahman willingly ceded certain regions. There were indications that he regarded the Durand Line as a delimitation of separate areas of political responsibility, not a permanent international frontier, and that he did not explicitly cede control over certain parts (such as Kurram and Chitral) that were already in British control under the Treaty of Gandamak. The Durand Line cut through tribes and bore little relation to the realities of demography or military strategy. The line laid the foundation not for peace between the border regions, but for heated disagreement between the governments of Afghanistan and British India, and later, Afghanistan and Pakistan, over what came to be known as the issue of Pashtunistan or 'Land of the Pashtuns' (see Siege of Malakand). The clearest manifestation that Abdur Rahman had established control in Afghanistan was the peaceful succession of his eldest son, Habibullah Khan, to the throne on his father's death in October 1901.
Although Abdur Rahman had fathered many children, he groomed Habibullah to succeed him, and he made it difficult for his other sons to contest the succession by keeping power from them and sequestering them in Kabul under his control. Habibullah Khan, Abdur Rahman Khan's eldest son and the child of a slave mother, kept a close watch on the palace intrigues revolving around his father's more distinguished wife (a granddaughter of Dost Mohammad), who sought the throne for her own son. Although made secure in his position as ruler by virtue of support from the army created by his father, Habibullah was not as domineering as Abdur Rahman. Consequently, the influence of religious leaders as well as that of Mahmud Tarzi, a cousin of the king, increased during his reign. Mahmud Tarzi, a highly educated, well-traveled poet and journalist, founded an Afghan nationalist newspaper with Habibullah's agreement, and until 1919 he used the newspaper as a platform for rebutting clerical criticism of Western-influenced changes in government and society, for espousing full Afghan independence, and for other reforms. Tarzi's passionate Afghan nationalism influenced a future generation of Asian reformers. The boundary with Iran was firmly delineated in 1904, replacing the ambiguous line made by a British commission in 1872. Agreement could not be reached, however, on sharing the waters of the Helmand River. Like all foreign policy developments of this period affecting Afghanistan, the conclusion of the "Great Game" between Russia and Britain occurred without the Afghan ruler's participation. The 1907 Anglo-Russian Convention (the Convention of St. Petersburg) not only divided the region into separate areas of Russian and British influence but also established foundations for Afghan neutrality. The convention provided for Russian acquiescence that Afghanistan was now outside Russia's sphere of influence, and for Russia to consult directly with Britain on matters relating to Russian-Afghan relations. Britain, for its part, would not occupy or annex Afghan territory, or interfere in Afghanistan's internal affairs. During World War I, Afghanistan remained neutral despite pressure to support Turkey when its sultan proclaimed his nation's participation in what it considered a holy war. Habibullah did, however, entertain an Indo-German–Turkish mission in Kabul in 1915 that had as its titular head the Indian nationalist Mahendra Pratap and was led by Oskar Niedermayer and the German legate Werner Otto von Hentig. After much procrastination, he won an agreement from the Central Powers for a huge payment and arms provision in exchange for attacking British India. But the crafty Afghan ruler clearly viewed the war as an opportunity to play one side off against the other, for he also offered the British his resistance to a Central Powers attack on India in exchange for an end to British control of Afghan foreign policy. Amanullah's ten-year reign initiated a period of dramatic change in Afghanistan in both foreign and domestic politics. Amanullah declared full independence and sparked the Third Anglo-Afghan War. He altered foreign policy in his new relations with external powers and transformed domestic politics with his social, political, and economic reforms. Although his reign ended abruptly, he achieved some notable successes, and his efforts failed as much due to the centrifugal forces of tribal Afghanistan and the machinations of Russia and Britain as to any political folly on his part.
Amanullah came to power just as the entente between Russia and Britain broke down following the Russian Revolution of 1917. Once again Afghanistan provided a stage on which the great powers played out their schemes against one another. Keen to modernise his country and remove all foreign influence, Amanullah sought to shore up his power base. Amidst intrigue in the Afghan court, and political and civil unrest in India, he sought to divert attention from the internal divisions of Afghanistan and unite all factions behind him by attacking the British. Using the civil unrest in India as an excuse to move troops to the Durand Line, Afghan troops crossed the border at the western end of the Khyber Pass on 3 May 1919 and occupied the village of Bagh, the scene of an earlier uprising in April. In response, the Indian government ordered a full mobilisation and on 6 May 1919 declared war. For the British, the war came at a time when they were still recovering from the First World War. The troops that were stationed in India were mainly reserves and Territorials, who were awaiting demobilisation and keen to return to Britain, whilst the few regular regiments that were available were tired and depleted from five years of fighting. Afghan forces achieved success in the initial days of the war, taking the British and Indians by surprise in two main thrusts as the Afghan regular army was joined by large numbers of Pashtun tribesmen from both sides of the border. A series of skirmishes then followed as the British and Indians recovered from their initial surprise. As a counterbalance to deficiencies in manpower and morale, the British had a considerable advantage in terms of equipment, possessing machine guns, armoured cars, motor transport, wireless communications and aircraft, and it was the latter that would prove decisive. British forces deployed air power in the region for the first time, and the King's home was directly targeted in what was the first case of aerial bombardment in Afghanistan's history. The attacks played a key role in forcing an armistice but brought an angry rebuke from King Amanullah. He wrote: "It is a matter of great regret that the throwing of bombs by zeppelins on London was denounced as a most savage act and the bombardment of places of worship and sacred spots was considered a most abominable operation. While we now see with our own eyes that such operations were a habit which is prevalent among all civilized people of the west". The fighting concluded in August 1919, and Britain virtually dictated the terms of the Anglo-Afghan Treaty of 1919, a temporary armistice that provided, on one somewhat ambiguous interpretation, for Afghan self-determination in foreign affairs. Before final negotiations were concluded in 1921, however, Afghanistan had already begun to establish its own foreign policy without repercussions, including diplomatic relations with the new government in the Soviet Union in 1919. During the 1920s, Afghanistan established diplomatic relations with most major countries. On 20 February 1919, Habibullah Khan was assassinated on a hunting trip. He had not named a successor, but left his third son, Amanullah Khan, in charge in Kabul. Amanullah did have an older brother, Nasrullah Khan, but because Amanullah controlled both the national treasury and the army, he was well situated to seize power. The army's support allowed Amanullah to suppress other claims and imprison those relatives who would not swear loyalty to him.
Within a few months, the new amir had gained the allegiance of most tribal leaders and established control over the cities. Amanullah Khan's reforms were heavily influenced by Europe. This came through the influence of Mahmud Tarzi, who was both Amanullah Khan's father-in-law and Foreign Minister. Mahmud Tarzi, a highly educated, well-traveled poet, journalist, and diplomat, was a key figure who brought Western dress and etiquette to Afghanistan. He also fought for progressive reforms such as women's rights, educational rights, and freedom of the press. All of these influences, brought by Tarzi and others, were welcomed by Amanullah Khan. In 1926, Amanullah ended the Emirate of Afghanistan and proclaimed the Kingdom of Afghanistan with himself as king. In 1927 and 1928, King Amanullah Khan and his wife Soraya Tarzi visited Europe. On this trip they were honored and feted; in 1928 the King and Queen of Afghanistan received honorary degrees from the University of Oxford. This was an era when other Muslim nations, like Turkey and Egypt, were also on the path to modernization. King Amanullah was so impressed with the social progress of Europe that he tried to implement similar reforms right away; this met with heavy resistance from conservative society and eventually led to his downfall. Amanullah enjoyed early popularity within Afghanistan, and he used his power to modernize the country. Amanullah created new cosmopolitan schools for both boys and girls in the region and overturned centuries-old traditions such as strict dress codes for women. He created a new capital city and increased trade with Europe and Asia. He also advanced a modernist constitution that incorporated equal rights and individual freedoms. This rapid modernization, though, created a backlash and a reactionary uprising known as the Khost rebellion, which was suppressed in 1925. After Amanullah travelled to Europe in late 1927, opposition to his rule increased. An uprising in Jalalabad culminated in a march to the capital, and much of the army deserted rather than resist. On 14 January 1929, Amanullah abdicated in favor of his brother, King Inayatullah Khan. On 17 January, Inayatullah abdicated and Habibullah Kalakani became the next ruler of Afghanistan and restored the emirate. However, his rule was short-lived and, on 17 October 1929, Habibullah Kalakani was overthrown and replaced by King Nadir Khan. After his abdication in 1929, Amanullah went into temporary exile in India. When he attempted to return to Afghanistan, he had little support from the people. From India, the ex-king traveled to Europe and settled in Italy, and later in Switzerland. Meanwhile, Nadir Khan made sure his return to Afghanistan was impossible by engaging in a propaganda war. Nadir Khan accused Amanullah Khan of kufr for his pro-Western policies. In 1933, after the assassination of Nadir Khan, Mohammed Zahir Shah became king. In 1940, the Afghan legation in Berlin asked whether, if Germany won the Second World War, the Reich would give Afghanistan all of British India up to the Indus River. Ernst von Weizsäcker, the State Secretary at the Auswärtiges Amt, wrote to the German minister in Kabul on 3 October 1940: "The Afghan minister called on me on September 30 and conveyed greetings from his minister president, as well as their good wishes for a favourable outcome of the war.
He inquired whether German aims in Asia coincided with Afghan hopes; he alluded to the oppression of Arab countries and referred to the 15m Afghans (Pashtuns, mainly in the North West Frontier province) who were forced to suffer on Indian territory. My statement that Germany's goal was the liberation of the peoples of the region referred to, who were under the British yoke, was received with satisfaction by the Afghan minister. He stated that justice for Afghanistan would be created only when the country's frontier had been extended to the Indus; this would also apply if India should secede from Britain. The Afghan remarked that Afghanistan had given proof of her loyal attitude by vigorously resisting English pressure to break off relations with Germany." No Afghan government ever accepted the Durand Line, which divided the ethnically Pashtun population between the North-West Frontier Province of the British Indian Empire (modern north-western Pakistan) and Afghanistan, and it was the hope of Kabul that if Germany won the war, then all of the Pashtun people might be united into one realm.
[ { "paragraph_id": 0, "text": "European influence in Afghanistan has been present in the country since the Victorian era, when the competing imperial powers of Britain and Russia contested for control over Afghanistan as part of the Great Game.", "title": "" }, { "paragraph_id": 1, "text": "After the decline of the Durrani dynasty in 1823, Dost Mohammad Khan established the Barakzai dynasty. Dost Mohammad achieved prominence among his brothers through clever use of the support of his mother's Qizilbash tribesmen and his own youthful apprenticeship under his brother, Fateh Khan. However, in the same year, the Afghans lost their former stronghold of Peshawar to the Sikh Khalsa Army of Ranjit Singh at the Battle of Nowshera. The Afghan forces in the battle were supported by Azim Khan, half-brother of Dost Mohammad.", "title": "Rise of Dost Mohammad Khan" }, { "paragraph_id": 2, "text": "In 1834 Dost Mohammad defeated an invasion by the former ruler, Shuja Shah Durrani, but his absence from Kabul gave the Sikhs the opportunity to expand westward. Ranjit Singh's forces moved from Peshawar into territory ruled directly by Kabul. In 1836 Dost Mohammad's forces, under the command of his son Akbar Khan, defeated the Sikhs at the Battle of Jamrud, a post fifteen kilometres west of Peshawar. This was a pyrrhic victory and they failed to fully dislodge the Sikhs from Jamrud. The Afghan leader did not follow up this triumph by retaking Peshawar, however, but instead contacted Lord Auckland, the new British governor-general in British India, for help in dealing with the Sikhs. The letter marked the beginning of British influence in Afghanistan, and the subsequent Anglo-Russian struggle known as the Great Game.", "title": "Rise of Dost Mohammad Khan" }, { "paragraph_id": 3, "text": "The British became the major European power in the Indian subcontinent after the 1763 Treaty of Paris and began to show interest in Afghanistan as early as their 1809 treaty with Shuja Shah Durrani. It was the threat of the expanding Russian Empire beginning to push for an advantage in the Afghanistan region that placed pressure on British India, in what became known as the Great Game. The Great Game set in motion the confrontation of the British and Russian empires, whose spheres of influence moved steadily closer to one another until they met in Afghanistan. It also involved repeated attempts by the British to establish a puppet government in Kabul. The remainder of the 19th century saw greater European involvement in Afghanistan and her surrounding territories and heightened conflict among the ambitious local rulers as Afghanistan's fate played out globally.", "title": "The Great Game" }, { "paragraph_id": 4, "text": "The débâcle of the Afghan civil war left a vacuum in the Hindu Kush area that concerned the British, who were well aware of the many times in history it had been employed as an invasion route to South Asia. 
In the early decades of the 19th century, it became clear to the British that the major threat to their interests in India would not come from the fragmented Afghan empire, the Iranians, or the French, but from the Russians, who had already begun a steady advance southward from the Caucasus winning decisive wars against the Ottomans and Persians.", "title": "The Great Game" }, { "paragraph_id": 5, "text": "At the same time, the Russians feared the possibility a permanent British foothold in Central Asia as the British expanded northward, incorporating the Punjab, Sindh, and Kashmir into their empire; later to become Pakistan. The British viewed Russia's absorption of the Caucasus, the Kyrgyz and Turkmen lands, the Khanate of Khiva, and the Emirate of Bukhara with equal suspicion as a threat to their interests in the Indian subcontinent.", "title": "The Great Game" }, { "paragraph_id": 6, "text": "In addition to this rivalry between Britain and Russia, there were two specific reasons for British concern over Russia's intentions. First was the Russian influence at the Iranian court, which prompted the Russians to support Iran in its attempt to take Herat, historically the western gateway to Afghanistan and northern India. In 1837 Iran advanced on Herat with the support and advice of Russian officers. The second immediate reason was the presence in Kabul in 1837 of a Russian agent, Yan Vitkevich, who was ostensibly there, as was the British agent Alexander Burnes, for commercial discussions.", "title": "The Great Game" }, { "paragraph_id": 7, "text": "The British demanded that Dost Mohammad sever all contact with the Iranians and Russians, remove Vitkevich from Kabul, surrender all claims to Peshawar, and respect Peshawar's independence as well as that of Kandahar, which was under the control of his brothers at the time. In return, the British government intimated that it would ask Ranjit Singh to reconcile with the Afghans. When Auckland refused to put the agreement in writing, Dost Mohammad suspended negotiations the British and began negotiations with Vitkevich.", "title": "The Great Game" }, { "paragraph_id": 8, "text": "In 1838 Auckland, Ranjit Singh, and Shuja signed an agreement stating that Shuja would regain control of Kabul and Kandahar with the help of the British and Sikhs; he would accept Sikh rule of the former Afghan provinces already controlled by Ranjit Singh, and that Herat would remain independent. In practice, the plan replaced Dost Mohammad with a British figurehead whose autonomy would be similar to the princes who ruled over the princely states in British India.", "title": "The Great Game" }, { "paragraph_id": 9, "text": "It soon became apparent to the British that Sikh participation, advancing toward Kabul through the Khyber Pass while Shuja and the British advanced through Kandahar, would not be forthcoming. Auckland's plan in the spring of 1838 was for the Sikhs to place Shuja on the Afghan throne, with British support. By the end of the summer however, the plan had changed; now the British alone would impose the pliant Shuja Shah.", "title": "The Great Game" }, { "paragraph_id": 10, "text": "As a prelude to his invasion plans, the Governor-General of India Lord Auckland issued the Simla Manifesto in October 1838, setting forth the necessary reasons for British intervention in Afghanistan. The manifesto stated that in order to ensure the welfare of India, the British must have a trustworthy ally on India's western frontier. 
The British claim that their troops were merely supporting Shah Shujah's small army in retaking what was once his throne fooled no one. Although the Simla Manifesto stated that British troops would be withdrawn as soon as Shuja was installed in Kabul, Shuja's rule depended entirely on British support to suppress rebellion and on British funds to buy the support of tribal chiefs. The British denied that they were invading Afghanistan, instead claiming they were supporting its legitimate Shuja government \"against foreign interference and factious opposition\".", "title": "First Anglo-Afghan War, 1838–1842" }, { "paragraph_id": 11, "text": "In November 1841 insurrection and massacre flared up in Kabul. The British vacillated and disagreed and were beleaguered in their inadequate cantonments. The British negotiated with the most influential sirdars, cut off as they were by winter and insurgent tribes from any hope of relief. Mohammad Akbar Khan, son of the captive Dost Mohammad, arrived in Kabul and became effective leader of the sirdars. At a conference with them Sir William MacNaghten was killed, but in spite of this, the sirdars' demands were agreed to by the British and they withdrew. During the withdrawal they were attacked by Ghilzai tribesmen and in running battles through the snowbound passes nearly the entire column of 4,500 troops and 12,000 camp followers were killed. Of the British only one, Dr. William Brydon, reached Jalalabad, while a few others were captured.", "title": "First Anglo-Afghan War, 1838–1842" }, { "paragraph_id": 12, "text": "Afghan forces loyal to Akbar Khan besieged the remaining British contingents at Kandahar, Ghazni and Jalalabad. Ghazni fell, but the other garrisons held out, and with the help of reinforcements from India their besiegers were defeated. While preparations were under way for a renewed advance on Kabul, the new Governor-General Lord Ellenborough ordered British forces to leave Afghanistan after securing the release of the prisoners from Kabul and taking reprisals. The forces from Kandahar and Jalalabad again defeated Akbar Khan, retook and sacked Ghazni and Kabul, rescuing the prisoners before withdrawing through the Khyber Pass.", "title": "First Anglo-Afghan War, 1838–1842" }, { "paragraph_id": 13, "text": "After months of chaos in Kabul, Mohammad Akbar Khan secured local control and in April 1843 his father Dost Mohammad, who had been released by the British, returned to the throne in Afghanistan. In the following decade, Dost Mohammad concentrated his efforts on reconquering Mazari Sharif, Konduz, Badakhshan, and Kandahar. Mohammad Akbar Khan died in 1845. During the Second Anglo-Sikh War (1848–49), Dost Mohammad's last effort to take Peshawar failed.", "title": "Mid-nineteenth century" }, { "paragraph_id": 14, "text": "By 1854 the British wanted to resume relations with Dost Mohammad, whom they had essentially ignored in the intervening twelve years. The 1855 Treaty of Peshawar reopened diplomatic relations, proclaimed respect for each side's territorial integrity, and pledged both sides as friends of each other's friends and enemies of each other's enemies.", "title": "Mid-nineteenth century" }, { "paragraph_id": 15, "text": "In 1857 an addendum to the 1855 treaty permitted a British military mission to become a presence in Kandahar (but not Kabul) during a conflict with the Persians, who had attacked Herat in 1856. 
During the Indian Rebellion of 1857, some British officials suggested restoring Peshawar to Dost Mohammad, in return for his support against the rebellious sepoys of the Bengal Army, but this view was rejected by British political officers on the North West frontier, who believed that Dost Mohammad would see this as a sign of weakness and turn against the British.", "title": "Mid-nineteenth century" }, { "paragraph_id": 16, "text": "In 1863 Dost Mohammad retook Herat with British acquiescence. A few months later, he died. Sher Ali Khan, his third son, and proclaimed successor, failed to recapture Kabul from his older brother, Mohammad Afzal (whose troops were led by his son, Abdur Rahman) until 1868, after which Abdur Rahman retreated across the Amu Darya and bided his time.", "title": "Mid-nineteenth century" }, { "paragraph_id": 17, "text": "In the years immediately following the First Anglo-Afghan War, and especially after the Indian Rebellion of 1857 against the British in India, Liberal Party governments in London took a political view of Afghanistan as a buffer state. By the time Sher Ali had established control in Kabul in 1868, he found the British ready to support his regime with arms and funds, but nothing more. Over the next ten years, relations between the Afghan ruler and Britain deteriorated steadily. The Afghan ruler was worried about the southward encroachment of Russia, which by 1873 had taken over the lands of the khan, or ruler, of Khiva. Sher Ali sent an envoy seeking British advice and support. The previous year the British had signed an agreement with the Russians in which the latter agreed to respect the northern boundaries of Afghanistan and to view the territories of the Afghan Emir as outside their sphere of influence. The British, however, refused to give any assurances to the disappointed Sher Ali.", "title": "Mid-nineteenth century" }, { "paragraph_id": 18, "text": "After tension between Russia and Britain in Europe ended with the June 1878 Congress of Berlin, Russia turned its attention to Central Asia. That same summer, Russia sent an uninvited diplomatic mission to Kabul. Sher Ali tried, but failed, to keep them out. Russian envoys arrived in Kabul on 22 July 1878 and on 14 August, the British demanded that Sher Ali accept a British mission too.", "title": "Second Anglo-Afghan War, 1878–1880" }, { "paragraph_id": 19, "text": "The amir not only refused to receive a British mission but threatened to stop it if it were dispatched. Lord Lytton, the viceroy, ordered a diplomatic mission to set out for Kabul in September 1878 but the mission was turned back as it approached the eastern entrance of the Khyber Pass, triggering the Second Anglo-Afghan War. A British force of about 40,000 fighting men was distributed into military columns which penetrated Afghanistan at three different points. An alarmed Sher Ali attempted to appeal in person to the Tsar for assistance, but unable to do so, he returned to Mazari Sharif, where he died on 21 February 1879.", "title": "Second Anglo-Afghan War, 1878–1880" }, { "paragraph_id": 20, "text": "With British forces occupying much of the country, Sher Ali's son and successor, Mohammad Yaqub Khan, signed the Treaty of Gandamak in May 1879 in order to put a quick end to the conflict. According to this agreement and in return for an annual subsidy and vague assurances of assistance in case of foreign aggression, Yaqub relinquished control of Afghan foreign affairs to the British. 
British representatives were installed in Kabul and other locations, British control was extended to the Khyber and Michni Passes, and Afghanistan ceded various frontier areas and Quetta to Britain. The British forces then withdrew. Soon afterwards, an uprising in Kabul led to the killings of Britain's Resident in Kabul, Sir Pierre Cavagnari and his guards and staff on 3 September 1879, provoking the second phase of the Second Afghan War. Major General Sir Frederick Roberts led the Kabul Field Force over the Shutargardan Pass into central Afghanistan, defeated the Afghan Army at Char Asiab on 6 October 1879 and occupied Kabul. Ghazi Mohammad Jan Khan Wardak staged an uprising and attacked British forces near Kabul in the Siege of the Sherpur Cantonment in December 1879, but his defeat there resulted in the collapse of this rebellion.", "title": "Second Anglo-Afghan War, 1878–1880" }, { "paragraph_id": 21, "text": "Yaqub Khan, suspected of complicity in the killings of Cavagnari and his staff, was obliged to abdicate. The British considered a number of possible political settlements, including partitioning Afghanistan between multiple rulers or placing Yaqub's brother Ayub Khan on the throne, but ultimately decided to install his cousin Abdur Rahman Khan as emir instead. Ayub Khan, who had been serving as governor of Herat, rose in revolt, defeated a British detachment at the Battle of Maiwand in July 1880 and besieged Kandahar. Roberts then led the main British force from Kabul and decisively defeated Ayub Khan in September at the Battle of Kandahar, bringing his rebellion to an end. Abdur Rahman had confirmed the Treaty of Gandamak, leaving the British in control of the territories ceded by Yaqub Khan and ensuring British control of Afghanistan's foreign policy in exchange for protection and a subsidy. Abandoning the provocative policy of maintaining a British resident in Kabul, but having achieved all their other objectives, the British withdrew.", "title": "Second Anglo-Afghan War, 1878–1880" }, { "paragraph_id": 22, "text": "As far as British interests were concerned, Abdur Rahman answered their prayers: a forceful, intelligent leader capable of welding his divided people into a state; and he was willing to accept limitations to his power imposed by British control of his country's foreign affairs and the British buffer state policy. His twenty-one-year reign was marked by efforts to modernize and establish control of the kingdom, whose boundaries were delineated by the two empires bordering it. Abdur Rahman turned his considerable energies to what evolved into the creation of the modern state of Afghanistan.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 23, "text": "He achieved this consolidation of Afghanistan in three ways. He suppressed various rebellions and followed up his victories with harsh punishment, execution, and deportation. He broke the stronghold of Pashtun tribes by forcibly transplanting them. He transplanted his most powerful Pashtun enemies, the Ghilzai, and other tribes from southern and south-central Afghanistan to areas north of the Hindu Kush with predominantly non-Pashtun populations. The last non-Muslim Afghans of Kafiristan north of Kabul were forcefully converted to Islam. Finally, he created a system of provincial governorates different from old tribal boundaries. Provincial governors had a great deal of power in local matters, and an army was placed at their disposal to enforce tax collection and suppress dissent. 
Abdur Rahman kept a close eye on these governors, however, by creating an effective intelligence system. During his reign, tribal organization began to be eroded as provincial government officials allowed land to change hands outside the traditional clan and tribal limits.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 24, "text": "The Pashtuns battled and conquered the Uzbeks and forced them into the status of ruled people who were discriminated against. Out of anti-Russian strategic interests, the British assisted the Afghan conquest of the Uzbek Khanates, giving weapons to the Afghans and supporting the Afghan government's colonization of northern Afghanistan by Pashtuns, which involved sending massive amounts of Pashtun colonists onto Uzbek land.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 25, "text": "In addition to forging a nation from the splintered regions making up Afghanistan, Abdur Rahman tried to modernize his kingdom by forging a regular army and the first institutionalized bureaucracy. Despite his distinctly authoritarian personality, Abdur Rahman called for a loya jirga, an assemblage of royal princes, important notables, and religious leaders. According to his autobiography, Abdur Rahman had three goals: subjugating the tribes, extending government control through a strong, visible army, and reinforcing the power of the ruler and the royal family.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 26, "text": "During his visit to Rawalpindi in 1885, the Amir requested the Viceroy of India to depute a Muslim Envoy to Kabul who was noble birth and of ruling family background. Mirza Atta Ullah Khan, Sardar Bahadur s/o Khan Bahadur Mirza Fakir Ullah Khan (Saman Burj Wazirabad), a direct descendant of Jarral Rajput Rajas of Rajauri, was selected and approved by the Amir to be the British Envoy to Kabul.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 27, "text": "Abdur Rahman also paid attention to technological advance. He brought foreign physicians, engineers (especially for mining), geologists, and printers to Afghanistan. He imported European machinery and encouraged the establishment of small factories to manufacture soap, candles, and leather goods. He sought European technical advice on communications, transport, and irrigation. Local Afghan tribes strongly resisted this modernization. Workmen making roads had to be protected by the army against local warriors. Nonetheless, despite these sweeping internal policies, Abdur Rahman's foreign policy was completely in foreign hands.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 28, "text": "The first important frontier dispute was the Panjdeh crisis of 1885, precipitated by Russian encroachment into Central Asia. Having seized the Merv (now Mary) Oasis by 1884, Russian forces were directly adjacent to Afghanistan. Claims to the Panjdeh Oasis were in debate, with the Russians keen to take over all the region's Turkoman domains. After battling Afghan forces in the spring of 1885, the Russians seized the oasis. Russian and British troops were quickly alerted, but the two powers reached a compromise; Russia was in possession of the oasis, and Britain believed it could keep the Russians from advancing any farther. Without an Afghan say in the matter, the Joint Anglo-Russian Boundary Commission agreed that the Russians would relinquish the farthest territory captured in their advance but retain Panjdeh. 
This agreement on these border sections delineated for Afghanistan a permanent northern frontier at the Amu Darya, but also involved the loss of much territory, especially around Panjdeh.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 29, "text": "The second section of Afghan border demarcated during Abdur Rahman's reign was in the Wakhan. The British insisted that Abdur Rahman accept sovereignty over this remote region, where unruly Kyrgyz held sway; he had no choice but to accept Britain's compromise. In 1895 and 1896, another Joint Anglo-Russian Boundary Commission agreed on the frontier boundary to the far northeast of Afghanistan, which bordered Chinese territory (although the Chinese did not formally accept this as a boundary between the two countries until 1964.)", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 30, "text": "For Abdur Rahman, delineating the boundary with India (through the Pashtun area) was far more significant, and it was during his reign that the Durand Line was drawn. Under pressure, Abdur Rahman agreed in 1893 to accept a mission headed by the British Indian foreign secretary, Sir Mortimer Durand, to define the limits of British and Afghan control in the Pashtun territories. Boundary limits were agreed on by Durand and Abdur Rahman before the end of 1893, but there is some question about the degree to which Abdur Rahman willingly ceded certain regions. There were indications that he regarded the Durand Line as a delimitation of separate areas of political responsibility, not a permanent international frontier, and that he did not explicitly cede control over certain parts (such as Kurram and Chitral) that were already in British control under the Treaty of Gandamak.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 31, "text": "The Durand Line cut through tribes and bore little relation to the realities of demography or military strategy. The line laid the foundation not for peace between the border regions, but for heated disagreement between the governments of Afghanistan and British India, and later, Afghanistan and Pakistan over what came to be known as the issue of Pashtunistan or 'Land of the Pashtuns'. (See Siege of Malakand).", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 32, "text": "The clearest manifestation that Abdur Rahman had established control in Afghanistan was the peaceful succession of his eldest son, Habibullah Khan, to the throne on his father's death in October 1901. Although Abdur Rahman had fathered many children, he groomed Habibullah to succeed him, and he made it difficult for his other sons to contest the succession by keeping power from them and sequestering them in Kabul under his control.", "title": "The Iron Amir, 1880–1901" }, { "paragraph_id": 33, "text": "Habibullah Khan, Abdur Rahman Khan's eldest son and child of a slave mother, kept a close watch on the palace intrigues revolving around his father's more distinguished wife (a granddaughter of Dost Mohammad), who sought the throne for her own son. Although made secure in his position as ruler by virtue of support from the army which was created by his father, Habibullah was not as domineering as Abdur Rahman. 
Consequently, the influence of religious leaders as well as that of Mahmud Tarzi, a cousin of the king, increased during his reign.", "title": "Habibullah Khan, 1901–1919" }, { "paragraph_id": 34, "text": "Mahmud Tarzi, a highly educated, well-traveled poet and journalist, founded an Afghan nationalist newspaper with Habibullah's agreement, and until 1919 he used the newspaper as a platform for rebutting clerical criticism of Western-influenced changes in government and society, for espousing full Afghan independence, and for other reforms. Tarzi's passionate Afghan nationalism influenced a future generation of Asian reformers.", "title": "Habibullah Khan, 1901–1919" }, { "paragraph_id": 35, "text": "The boundary with Iran was firmly delineated in 1904, replacing the ambiguous line made by a British commission in 1872. Agreement could not be reached, however, on sharing the waters of the Helmand River.", "title": "Habibullah Khan, 1901–1919" }, { "paragraph_id": 36, "text": "Like all foreign policy developments of this period affecting Afghanistan, the conclusion of the \"Great Game\" between Russia and Britain occurred without the Afghan ruler's participation. The 1907 Anglo-Russian Convention (the Convention of St. Petersburg) not only divided the region into separate areas of Russian and British influence but also established foundations for Afghan neutrality. The convention provided for Russian acquiescence that Afghanistan was now outside this sphere of influence, and for Russia to consult directly with Britain on matters relating to Russian-Afghan relations. Britain, for its part, would not occupy or annex Afghan territory, or interfere in Afghanistan's internal affairs.", "title": "Habibullah Khan, 1901–1919" }, { "paragraph_id": 37, "text": "During World War I, Afghanistan remained neutral despite pressure to support Turkey when its sultan proclaimed his nation's participation in what it considered a holy war. Habibullah did, however, entertain an Indo-German–Turkish mission in Kabul in 1915 that had as its titular head the Indian nationalist Mahendra Pratap and was led by Oskar Niedermayer and the German legate Werner Otto von Hentig. After much procrastination, he won an agreement from the Central Powers for a huge payment and arms provision in exchange for attacking British India. But the crafty Afghan ruler clearly viewed the war as an opportunity to play one side off against the other, for he also offered the British to resist a Central Powers attack on India in exchange for an end to British control of Afghan foreign policy.", "title": "Habibullah Khan, 1901–1919" }, { "paragraph_id": 38, "text": "Amanullah's ten years of reign initiated a period of dramatic change in Afghanistan in both foreign and domestic politics. Amanullah declared full independence and sparked the Third Anglo-Afghan War. Amanullah altered foreign policy in his new relations with external powers and transformed domestic politics with his social, political, and economic reforms. Although his reign ended abruptly, he achieved some notable successes, and his efforts failed as much due to the centrifugal forces of tribal Afghanistan and the machinations of Russia and Britain as to any political folly on his part.", "title": "Third Anglo-Afghan War and Independence" }, { "paragraph_id": 39, "text": "Amanullah came to power just as the entente between Russia and Britain broke down following the Russian Revolution of 1917. 
Once again Afghanistan provided a stage on which the great powers played out their schemes against one another. Keen to modernise his country and remove all foreign influence, Amanullah sought to shore up his power base. Amidst intrigue in the Afghan court and political and civil unrest in India, he sought to divert attention from the internal divisions of Afghanistan and unite all factions behind him by attacking the British.", "title": "Third Anglo-Afghan War and Independence" }, { "paragraph_id": 40, "text": "Using the civil unrest in India as an excuse to move troops to the Durand Line, Afghan troops crossed the border at the western end of the Khyber Pass on 3 May 1919 and occupied the village of Bagh, the scene of an earlier uprising in April. In response, the Indian government ordered a full mobilisation and on 6 May 1919 declared war. For the British, the war had come at a time when they were still recovering from the First World War. The troops that were stationed in India were mainly reserves and Territorials, who were awaiting demobilisation and keen to return to Britain, whilst the few regular regiments that were available were tired and depleted from five years of fighting.", "title": "Third Anglo-Afghan War and Independence" }, { "paragraph_id": 41, "text": "Afghan forces achieved success in the initial days of the war, taking the British and Indians by surprise in two main thrusts as the Afghan regular army was joined by large numbers of Pashtun tribesmen from both sides of the border. A series of skirmishes then followed as the British and Indians recovered from their initial surprise. As a counterbalance to deficiencies in manpower and morale, the British had a considerable advantage in terms of equipment, possessing machine guns, armoured cars, motor transport, wireless communications and aircraft, and it was the latter that would prove decisive.", "title": "Third Anglo-Afghan War and Independence" }, { "paragraph_id": 42, "text": "British forces deployed air forces for the first time in the region, and the King's home was directly targeted in what was the first case of aerial bombardment in Afghanistan's history. The attacks played a key role in forcing an armistice but brought an angry rebuke from King Amanullah. He wrote: \"It is a matter of great regret that the throwing of bombs by zeppelins on London was denounced as a most savage act and the bombardment of places of worship and sacred spots was considered a most abominable operation. While we now see with our own eyes that such operations were a habit which is prevalent among all civilized people of the west\".", "title": "Third Anglo-Afghan War and Independence" }, { "paragraph_id": 43, "text": "The fighting concluded in August 1919 and Britain virtually dictated the terms of the Anglo-Afghan Treaty of 1919, a temporary armistice that provided, on one somewhat ambiguous interpretation, for Afghan self-determination in foreign affairs. Before final negotiations were concluded in 1921, however, Afghanistan had already begun to establish its own foreign policy without repercussions, including diplomatic relations with the new government in the Soviet Union in 1919. During the 1920s, Afghanistan established diplomatic relations with most major countries.", "title": "Third Anglo-Afghan War and Independence" }, { "paragraph_id": 44, "text": "On 20 February 1919, Habibullah Khan was assassinated on a hunting trip. He had not declared a successor, but left his third son, Amanullah Khan, in charge in Kabul. 
Amanullah did have an older brother, Nasrullah Khan, but because Amanullah controlled both the national treasury and the army, he was well situated to seize power. The army's support allowed Amanullah to suppress other claims and imprison those relatives who would not swear loyalty to him. Within a few months, the new amir had gained the allegiance of most tribal leaders and established control over the cities.", "title": "Amanullah Khan, 1919–1929" }, { "paragraph_id": 45, "text": "Amanullah Khan's reforms were heavily influenced by Europe. This came through the influence of Mahmud Tarzi, who was both Amanullah Khan's father-in-law and Foreign Minister. Mahmud Tarzi, a highly educated, well-traveled poet, journalist, and diplomat, was a key figure who brought Western dress and etiquette to Afghanistan. He also fought for progressive reforms such as women's rights, educational rights, and freedom of the press. All of these influences, brought by Tarzi and others, were welcomed by Amanullah Khan.", "title": "Amanullah Khan, 1919–1929" }, { "paragraph_id": 46, "text": "In 1926, Amanullah ended the Emirate of Afghanistan and proclaimed the Kingdom of Afghanistan with himself as king. In 1927 and 1928, King Amanullah Khan and his wife Soraya Tarzi visited Europe. On this trip they were honored and feted. In fact, in 1928 the King and Queen of Afghanistan received honorary degrees from the University of Oxford. This was an era when other Muslim nations, like Turkey and Egypt, were also on the path to modernization. King Amanullah was so impressed with the social progress of Europe that he tried to implement similar reforms right away; this met with heavy resistance from the conservative society and eventually led to his downfall.", "title": "Amanullah Khan, 1919–1929" }, { "paragraph_id": 47, "text": "Amanullah enjoyed early popularity within Afghanistan and he used his power to modernize the country. Amanullah created new cosmopolitan schools for both boys and girls in the region and overturned centuries-old traditions such as strict dress codes for women. He created a new capital city and increased trade with Europe and Asia. He also advanced a modernist constitution that incorporated equal rights and individual freedoms. This rapid modernization, though, created a backlash and a reactionary uprising known as the Khost rebellion, which was suppressed in 1925.", "title": "Amanullah Khan, 1919–1929" }, { "paragraph_id": 48, "text": "After Amanullah travelled to Europe in late 1927, opposition to his rule increased. An uprising in Jalalabad culminated in a march to the capital, and much of the army deserted rather than resist. On 14 January 1929, Amanullah abdicated in favor of his brother, King Inayatullah Khan. On 17 January, Inayatullah abdicated and Habibullah Kalakani became the next ruler of Afghanistan and restored the emirate. However, his rule was short-lived and, on 17 October 1929, Habibullah Kalakani was overthrown and replaced by King Nadir Khan.", "title": "Amanullah Khan, 1919–1929" }, { "paragraph_id": 49, "text": "After his abdication in 1929, Amanullah went into temporary exile in India. When he attempted to return to Afghanistan, he had little support from the people. From India, the ex-king traveled to Europe and settled in Italy, and later in Switzerland. Meanwhile, Nadir Khan made sure his return to Afghanistan was impossible by engaging in a propaganda war. 
Nadir Khan accused Amanullah Khan of kufr for his pro-Western policies.", "title": "Amanullah Khan, 1919–1929" }, { "paragraph_id": 50, "text": "In 1933, after the assassination of Nadir Khan, Mohammed Zahir Shah became king.", "title": "Mohammed Zahir Shah, 1933–1973" }, { "paragraph_id": 51, "text": "In 1940, the Afghan legation in Berlin asked whether, if Germany won the Second World War, the Reich would give Afghanistan all of British India up to the Indus River. Ernst von Weizsäcker, the State Secretary at the Auswärtiges Amt, wrote to the German minister in Kabul on 3 October 1940:", "title": "Mohammed Zahir Shah, 1933–1973" }, { "paragraph_id": 52, "text": "\"The Afghan minister called on me on September 30 and conveyed greetings from his minister president, as well as their good wishes for a favourable outcome of the war. He inquired whether German aims in Asia coincided with Afghan hopes; he alluded to the oppression of Arab countries and referred to the 15m Afghans (Pashtuns, mainly in the North West Frontier province) who were forced to suffer on Indian territory. My statement that Germany's goal was the liberation of the peoples of the region referred to, who were under the British yoke, was received with satisfaction by the Afghan minister. He stated that justice for Afghanistan would be created only when the country's frontier had been extended to the Indus; this would also apply if India should secede from Britain. The Afghan remarked that Afghanistan had given proof of her loyal attitude by vigorously resisting English pressure to break off relations with Germany.\"", "title": "Mohammed Zahir Shah, 1933–1973" }, { "paragraph_id": 53, "text": "No Afghan government ever accepted the Durand Line, which divided the ethnically Pashtun population into the North-West Frontier Province of the British Indian Empire (modern north-western Pakistan) and Afghanistan, and it was the hope of Kabul that if Germany won the war, then all of the Pashtun people might be united into one realm.", "title": "Mohammed Zahir Shah, 1933–1973" } ]
European influence in Afghanistan has been present in the country since the Victorian era, when the competing imperial powers of Britain and Russia competed for control over Afghanistan as part of the Great Game.
2002-01-01T17:14:26Z
2023-10-27T14:15:21Z
[ "Template:See also", "Template:Cite book", "Template:Usurped", "Template:Short description", "Template:History of Afghanistan", "Template:Fact", "Template:Refimprove", "Template:Cite web", "Template:ISBN", "Template:Reflist", "Template:Anglo-Afghan War", "Template:Afghanistan topics", "Template:Main", "Template:Wikisource", "Template:Unclear style", "Template:British colonial campaigns", "Template:Use dmy dates", "Template:Expand section", "Template:Cite news" ]
https://en.wikipedia.org/wiki/European_influence_in_Afghanistan
9,059
Dementia praecox
Dementia praecox (meaning a "premature dementia" or "precocious madness") is a disused psychiatric diagnosis that originally designated a chronic, deteriorating psychotic disorder characterized by rapid cognitive disintegration, usually beginning in the late teens or early adulthood. Over the years, the term dementia praecox was gradually replaced by the term schizophrenia, which initially had a meaning that included what is today considered the autism spectrum. The term dementia praecox was first used by German psychiatrist Heinrich Schüle in 1880. It was also used in 1891 by Arnold Pick (1851–1924), a professor of psychiatry at Charles University in Prague. In a brief clinical report, he described a person with a psychotic disorder resembling "hebephrenia" (an adolescent-onset psychotic condition). German psychiatrist Emil Kraepelin (1856–1926) popularised the term dementia praecox in his first detailed textbook descriptions of a condition that eventually became a different disease concept later relabeled as schizophrenia. Kraepelin reduced the complex psychiatric taxonomies of the nineteenth century by dividing them into two classes: manic-depressive psychosis and dementia praecox. This division, commonly referred to as the Kraepelinian dichotomy, had a fundamental impact on twentieth-century psychiatry, though it has also been questioned. The primary disturbance in dementia praecox was seen to be a disruption in cognitive or mental functioning in attention, memory, and goal-directed behaviour. Kraepelin contrasted this with manic-depressive psychosis, now termed bipolar disorder, and also with other forms of mood disorder, including major depressive disorder. He eventually concluded that it was not possible to distinguish his categories on the basis of cross-sectional symptoms. Kraepelin viewed dementia praecox as a progressively deteriorating disease from which no one recovered. However, by 1913, and more explicitly by 1920, Kraepelin admitted that while there may be a residual cognitive defect in most cases, the prognosis was not as uniformly dire as he had stated in the 1890s. Still, he regarded it as a specific disease concept that implied incurable, inexplicable madness. The history of dementia praecox is really that of psychiatry as a whole. Dementia is an ancient term which has been in use since at least the time of Lucretius in 50 BC, when it meant "being out of one's mind". Until the seventeenth century, dementia referred to states of cognitive and behavioural deterioration leading to psychosocial incompetence. This condition could be innate or acquired, and the concept had no reference to a necessarily irreversible condition. This popular notion of psychosocial incapacity forms the basis for the idea of legal incapacity. By the eighteenth century, when the term entered European medical discourse, clinical concepts were added to the vernacular understanding such that dementia was now associated with intellectual deficits arising from any cause and at any age. By the end of the nineteenth century, the modern 'cognitive paradigm' of dementia was taking root. This holds that dementia is understood in terms of criteria relating to aetiology, age and course, which excludes former members of the family of the demented, such as adults with acquired head trauma or children with cognitive deficits. 
Moreover, it was now understood as an irreversible condition and a particular emphasis was placed on memory loss in regard to the deterioration of intellectual functions. The term démence précoce was used in passing to describe the characteristics of a subset of young mental patients by the French physician Bénédict Augustin Morel in 1852 in the first volume of his Études cliniques, and the term was used more frequently in his textbook Traité des maladies mentales, which was published in 1860. Morel, whose name will be forever associated with the religiously inspired concept of degeneration theory in psychiatry, used the term in a descriptive sense and not to define a specific and novel diagnostic category. It was applied as a means of setting apart a group of young men and women with "stupor". As such their condition was characterised by a certain torpor, enervation, and disorder of the will and was related to the diagnostic category of melancholia. He did not conceptualise their state as irreversible and thus his use of the term dementia was equivalent to that formed in the eighteenth century as outlined above. While some have sought to interpret, if in a qualified fashion, the use by Morel of the term démence précoce as amounting to the discovery of schizophrenia, others have argued convincingly that Morel's descriptive use of the term should not be considered in any sense as a precursor to Kraepelin's dementia praecox disease concept. This is because their concepts of dementia differed significantly from each other: Kraepelin employed the more modern sense of the word, whereas Morel was not describing a diagnostic category. Indeed, until the advent of Pick and Kraepelin, Morel's term had vanished without a trace and there is little evidence to suggest that either Pick or indeed Kraepelin were even aware of Morel's use of the term until long after they had published their own disease concepts bearing the same name. As Eugène Minkowski stated, "An abyss separates Morel's démence précoce from that of Kraepelin." Morel described several psychotic disorders that ended in dementia, and as a result he may be regarded as the first alienist or psychiatrist to develop a diagnostic system based on presumed outcome rather than on the current presentation of signs and symptoms. Morel, however, did not conduct any long-term or quantitative research on the course and outcome of dementia praecox (Kraepelin would be the first in history to do that) so this prognosis was based on speculation. It is impossible to discern whether the condition briefly described by Morel was equivalent to the disorder later called dementia praecox by Pick and Kraepelin. Psychiatric nosology in the nineteenth century was chaotic and characterised by a conflicting mosaic of contradictory systems. Psychiatric disease categories were based upon short-term and cross-sectional observations of patients from which were derived the putative characteristic signs and symptoms of a given disease concept. 
While these approaches had a diachronic aspect, they lacked a conception of mental illness that encompassed a coherent notion of change over time in terms of the natural course of the illness and based upon an empirical observation of changing symptomatology. In 1863, the Danzig-based psychiatrist Karl Ludwig Kahlbaum (1828–1899) published his text on psychiatric nosology Die Gruppierung der psychischen Krankheiten (The Classification of Psychiatric Diseases). Although with the passage of time this work would prove profoundly influential, when it was published it was almost completely ignored by German academia despite the sophisticated and intelligent disease classification system which it proposed. In this book Kahlbaum categorized certain typical forms of psychosis (vesania typica) as a single coherent type based upon their shared progressive nature which betrayed, he argued, an ongoing degenerative disease process. For Kahlbaum the disease process of vesania typica was distinguished by the passage of the patient through clearly defined disease phases: a melancholic stage; a manic stage; a confusional stage; and finally a demented stage. In 1866, Kahlbaum became the director of a private psychiatric clinic in Görlitz (Prussia, today Saxony, a small town near Dresden). He was accompanied by his younger assistant, Ewald Hecker (1843–1909), and during a ten-year collaboration they conducted a series of research studies on young psychotic patients that would become a major influence on the development of modern psychiatry. Together Kahlbaum and Hecker were the first to describe and name such syndromes as dysthymia, cyclothymia, paranoia, catatonia, and hebephrenia. Perhaps their most lasting contribution to psychiatry was the introduction of the "clinical method" from medicine to the study of mental diseases, a method which is now known as psychopathology. When the element of time was added to the concept of diagnosis, a diagnosis became more than just a description of a collection of symptoms: a diagnosis was now also defined by prognosis (course and outcome). An additional feature of the clinical method was that the characteristic symptoms that define syndromes should be described without any prior assumption of brain pathology (although such links would be made later as scientific knowledge progressed). Karl Kahlbaum made an appeal for the adoption of the clinical method in psychiatry in his 1874 book on catatonia. Without Kahlbaum and Hecker there would be no dementia praecox. Upon his appointment to a full professorship in psychiatry at the University of Dorpat (now Tartu, Estonia) in 1886, Kraepelin gave an inaugural address to the faculty outlining his research programme for the years ahead. 
Understanding that objective diagnostic methods must be based on scientific practice, Kraepelin had been conducting psychological and drug experiments on patients and normal subjects for some time when, in 1891, he left Dorpat and took up a position as professor and director of the psychiatric clinic at Heidelberg University. There he established a research program based on Kahlbaum's proposal for a more exact qualitative clinical approach, and his own innovation: a quantitative approach involving meticulous collection of data over time on each new patient admitted to the clinic (rather than only the interesting cases, as had been the habit until then). Kraepelin believed that by thoroughly describing all of the clinic's new patients on index cards, which he had been using since 1887, researcher bias could be eliminated from the investigation process. He described the method in his posthumously published memoir: ... after the first thorough examination of a new patient, each of us had to throw in a note [in a "diagnosis box"] with his diagnosis written on it. After a while, the notes were taken out of the box, the diagnoses were listed, and the case was closed, the final interpretation of the disease was added to the original diagnosis. In this way, we were able to see what kind of mistakes had been made and were able to follow-up the reasons for the wrong original diagnosis. The fourth edition of his textbook, Psychiatrie, published in 1893, two years after his arrival at Heidelberg, contained some impressions of the patterns Kraepelin had begun to find in his index cards. Prognosis (course and outcome) began to feature alongside signs and symptoms in the description of syndromes, and he added a class of psychotic disorders designated "psychic degenerative processes", three of which were borrowed from Kahlbaum and Hecker: dementia paranoides (a degenerative type of Kahlbaum's paranoia, with sudden onset), catatonia (per Kahlbaum, 1874) and dementia praecox (Hecker's hebephrenia of 1871). Kraepelin continued to equate dementia praecox with hebephrenia for the next six years. In the March 1896 fifth edition of Psychiatrie, Kraepelin expressed confidence that his clinical method, involving analysis of both qualitative and quantitative data derived from long term observation of patients, would produce reliable diagnoses including prognosis: What convinced me of the superiority of the clinical method of diagnosis (followed here) over the traditional one, was the certainty with which we could predict (in conjunction with our new concept of disease) the future course of events. Thanks to it the student can now find his way more easily in the difficult subject of psychiatry. In this edition dementia praecox is still essentially hebephrenia, and it, dementia paranoides and catatonia are described as distinct psychotic disorders among the "metabolic disorders leading to dementia". In the 1899 (6th) edition of Psychiatrie, Kraepelin established a paradigm for psychiatry that would dominate the following century, sorting most of the recognized forms of insanity into two major categories: dementia praecox and manic-depressive illness. Dementia praecox was characterized by disordered intellectual functioning, whereas manic-depressive illness was principally a disorder of affect or mood; and the former featured constant deterioration, virtually no recoveries and a poor outcome, while the latter featured periods of exacerbation followed by periods of remission, and many complete recoveries. 
The class, dementia praecox, comprised the paranoid, catatonic and hebephrenic psychotic disorders, and these forms were found in the Diagnostic and Statistical Manual of Mental Disorders until the fifth edition was released, in May 2013. These terms, however, are still found in general psychiatric nomenclature. In the seventh, 1904, edition of Psychiatrie, Kraepelin accepted the possibility that a small number of patients may recover from dementia praecox. Eugen Bleuler reported in 1908 that in many cases there was no inevitable progressive decline, there was temporary remission in some cases, and there were even cases of near recovery with the retention of some residual defect. In the eighth edition of Kraepelin's textbook, published in four volumes between 1909 and 1915, he described eleven forms of dementia, and dementia praecox was classed as one of the "endogenous dementias". Modifying his previous more gloomy prognosis in line with Bleuler's observations, Kraepelin reported that about 26% of his patients experienced partial remission of symptoms. Kraepelin died while working on the ninth edition of Psychiatrie with Johannes Lange (1891–1938), who finished it and brought it to publication in 1927. Though his work and that of his research associates had revealed a role for heredity, Kraepelin realized nothing could be said with certainty about the aetiology of dementia praecox, and he left out speculation regarding brain disease or neuropathology in his diagnostic descriptions. Nevertheless, from the 1896 edition onwards Kraepelin made clear his belief that poisoning of the brain, "auto-intoxication," probably by sex hormones, may underlie dementia praecox – a theory also entertained by Eugen Bleuler. Both theorists insisted dementia praecox is a biological disorder, not the product of psychological trauma. Thus, rather than a disease of hereditary degeneration or of structural brain pathology, Kraepelin believed dementia praecox was due to a systemic or "whole body" disease process, probably metabolic, which gradually affected many of the tissues and organs of the body before affecting the brain in a final, decisive cascade. Kraepelin, recognizing dementia praecox in Chinese, Japanese, Tamil and Malay patients, suggested in the eighth edition of Psychiatrie that, "we must therefore seek the real cause of dementia praecox in conditions which are spread all over the world, which thus do not lie in race or in climate, in food or in any other general circumstance of life..." Kraepelin had experimented with hypnosis but found it wanting, and disapproved of Freud's and Jung's introduction, based on no evidence, of psychogenic assumptions to the interpretation and treatment of mental illness. He argued that, without knowing the underlying cause of dementia praecox or manic-depressive illness, there could be no disease-specific treatment, and recommended the use of long baths and the occasional use of drugs such as opiates and barbiturates for the amelioration of distress, as well as occupational activities, where suitable, for all institutionalized patients. Based on his theory that dementia praecox is the product of autointoxication emanating from the sex glands, Kraepelin experimented, without success, with injections of thyroid, gonad and other glandular extracts. Kraepelin noted the dissemination of his new disease concept when in 1899 he enumerated the term's appearance in almost twenty articles in the German-language medical press. 
In the early years of the twentieth century the twin pillars of the Kraepelinian dichotomy, dementia praecox and manic depressive psychosis, were assiduously adopted in clinical and research contexts among the Germanic psychiatric community. German-language psychiatric concepts were always introduced much faster in America (than, say, Britain) where émigré German, Swiss and Austrian physicians essentially created American psychiatry. Swiss-émigré Adolf Meyer (1866–1950), arguably the most influential psychiatrist in America for the first half of the 20th century, published the first critique of dementia praecox in an 1896 book review of the 5th edition of Kraepelin's textbook. But it was not until 1900 and 1901 that the first three American publications regarding dementia praecox appeared, one of which was a translation of a few sections of Kraepelin's 6th edition of 1899 on dementia praecox. Adolf Meyer was the first to apply the new diagnostic term in America. He used it at the Worcester Lunatic Hospital in Massachusetts in the fall of 1896. He was also the first to apply Eugen Bleuler's term "schizophrenia" (in the form of "schizophrenic reaction") in 1913 at the Henry Phipps Psychiatric Clinic of the Johns Hopkins Hospital. The dissemination of Kraepelin's disease concept to the Anglophone world was facilitated in 1902 when Ross Diefendorf, a lecturer in psychiatry at Yale, published an adapted version of the sixth edition of the Lehrbuch der Psychiatrie. This was republished in 1904, and a new version, based on the seventh edition of Kraepelin's Lehrbuch, appeared in 1907 and was reissued in 1912. Both dementia praecox (in its three classic forms) and "manic-depressive psychosis" gained wider popularity in the larger institutions in the eastern United States after being included in the official nomenclature of diseases and conditions for record-keeping at Bellevue Hospital in New York City in 1903. The term lived on due to its promotion in the publications of the National Committee on Mental Hygiene (founded in 1909) and the Eugenics Record Office (1910). But perhaps the most important reason for the longevity of Kraepelin's term was its inclusion in 1918 as an official diagnostic category in the uniform system adopted for comparative statistical record-keeping in all American mental institutions, The Statistical Manual for the Use of Institutions for the Insane. Its many revisions served as the official diagnostic classification scheme in America until 1952 when the first edition of the Diagnostic and Statistical Manual: Mental Disorders, or DSM-I, appeared. Dementia praecox disappeared from official psychiatry with the publication of DSM-I, replaced by the Bleuler/Meyer hybridization, "schizophrenic reaction". Schizophrenia was mentioned as an alternate term for dementia praecox in the 1918 Statistical Manual. In both clinical work and research, between 1918 and 1952, five different terms were used interchangeably: dementia praecox, schizophrenia, dementia praecox (schizophrenia), schizophrenia (dementia praecox) and schizophrenic reaction. This made the psychiatric literature of the time confusing since, in a strict sense, Kraepelin's disease was not Bleuler's disease. They were defined differently, had different population parameters, and different concepts of prognosis. The reception of dementia praecox as an accepted diagnosis in British psychiatry came more slowly, perhaps only taking hold around the time of World War I. 
There was substantial opposition to the use of the term "dementia" as misleading, partly due to findings of remission and recovery. Some argued that existing diagnoses such as "delusional insanity" or "adolescent insanity" were better or more clearly defined. In France a psychiatric tradition regarding the psychotic disorders predated Kraepelin, and the French never fully adopted Kraepelin's classification system. Instead the French maintained an independent classification system throughout the 20th century. From 1980, when DSM-III totally reshaped psychiatric diagnosis, French psychiatry finally began to alter its views of diagnosis to converge with the North American system. Kraepelin thus finally conquered France via America. Due to the influence of alienists such as Adolf Meyer, August Hoch, George Kirby, Charles Macphie Campbell, Smith Ely Jelliffe and William Alanson White, psychogenic theories of dementia praecox dominated the American scene by 1911. In 1925 Bleuler's schizophrenia rose in prominence as an alternative to Kraepelin's dementia praecox. When Freudian perspectives became influential in American psychiatry in the 1920s, schizophrenia became an attractive alternative concept. Bleuler corresponded with Freud and was connected to Freud's psychoanalytic movement, and the inclusion of Freudian interpretations of the symptoms of schizophrenia in his publications on the subject, as well as those of C.G. Jung, eased the adoption of his broader version of dementia praecox (schizophrenia) in America over Kraepelin's narrower and prognostically more negative one. The term "schizophrenia" was first applied by American alienists and neurologists in private practice by 1909 and officially in institutional settings in 1913, but it took many years to catch on. It is first mentioned in The New York Times in 1925. Until 1952 the terms dementia praecox and schizophrenia were used interchangeably in American psychiatry, with occasional use of the hybrid terms "dementia praecox (schizophrenia)" or "schizophrenia (dementia praecox)". Editions of the Diagnostic and Statistical Manual of Mental Disorders since the first in 1952 reflected views of schizophrenia as "reactions" or "psychogenic" (DSM-I), or as manifesting Freudian notions of "defense mechanisms" (as in DSM-II of 1968, in which the symptoms of schizophrenia were interpreted as "psychologically self-protected"). The diagnostic criteria were vague, minimal and wide, including either concepts that no longer exist or that are now labeled as personality disorders (for example, schizotypal personality disorder). There was also no mention of the dire prognosis Kraepelin had made. Schizophrenia seemed to be more prevalent and more psychogenic and more treatable than either Kraepelin or Bleuler would have allowed. As a direct result of the effort to construct Research Diagnostic Criteria in the 1970s that were independent of any clinical diagnostic manual, Kraepelin's idea that categories of mental disorder should reflect discrete and specific disease entities with a biological basis began to return to prominence. Vague dimensional approaches based on symptoms—so highly favored by the Meyerians and psychoanalysts—were overthrown. For research purposes, the definition of schizophrenia returned to the narrow range allowed by Kraepelin's dementia praecox concept. Furthermore, after 1980 the disorder was a progressively deteriorating one once again, with the notion that recovery, if it happened at all, was rare. 
This revision of schizophrenia became the basis of the diagnostic criteria in DSM-III (1980). Some of the psychiatrists who worked to bring about this revision referred to themselves as the "neo-Kraepelinians".
[ { "paragraph_id": 0, "text": "Dementia praecox (meaning a \"premature dementia\" or \"precocious madness\") is a disused psychiatric diagnosis that originally designated a chronic, deteriorating psychotic disorder characterized by rapid cognitive disintegration, usually beginning in the late teens or early adulthood. Over the years, the term dementia praecox was gradually replaced by the term schizophrenia, which initially had a meaning that included what is today considered the autism spectrum.", "title": "" }, { "paragraph_id": 1, "text": "The term dementia praecox was first used by German psychiatrist Heinrich Schüle in 1880.", "title": "" }, { "paragraph_id": 2, "text": "It was also used in 1891 by Arnold Pick (1851–1924), a professor of psychiatry at Charles University in Prague. In a brief clinical report, he described a person with a psychotic disorder resembling \"hebephrenia\" (an adolescent-onset psychotic condition).", "title": "" }, { "paragraph_id": 3, "text": "German psychiatrist Emil Kraepelin (1856–1926) popularised the term dementia praecox in his first detailed textbook descriptions of a condition that eventually became a different disease concept later relabeled as schizophrenia. Kraepelin reduced the complex psychiatric taxonomies of the nineteenth century by dividing them into two classes: manic-depressive psychosis and dementia praecox. This division, commonly referred to as the Kraepelinian dichotomy, had a fundamental impact on twentieth-century psychiatry, though it has also been questioned.", "title": "" }, { "paragraph_id": 4, "text": "The primary disturbance in dementia praecox was seen to be a disruption in cognitive or mental functioning in attention, memory, and goal-directed behaviour. Kraepelin contrasted this with manic-depressive psychosis, now termed bipolar disorder, and also with other forms of mood disorder, including major depressive disorder. He eventually concluded that it was not possible to distinguish his categories on the basis of cross-sectional symptoms.", "title": "" }, { "paragraph_id": 5, "text": "Kraepelin viewed dementia praecox as a progressively deteriorating disease from which no one recovered. However, by 1913, and more explicitly by 1920, Kraepelin admitted that while there may be a residual cognitive defect in most cases, the prognosis was not as uniformly dire as he had stated in the 1890s. Still, he regarded it as a specific disease concept that implied incurable, inexplicable madness.", "title": "" }, { "paragraph_id": 6, "text": "The history of dementia praecox is really that of psychiatry as a whole.", "title": "History" }, { "paragraph_id": 7, "text": "Dementia is an ancient term which has been in use since at least the time of Lucretius in 50 BC where it meant \"being out of one's mind\". Until the seventeenth century, dementia referred to states of cognitive and behavioural deterioration leading to psychosocial incompetence. This condition could be innate or acquired, and the concept had no reference to a necessarily irreversible condition. It is the concept in this popular notion of psychosocial incapacity that forms the basis for the idea of legal incapacity. By the eighteenth century, at the period when the term entered into European medical discourse, clinical concepts were added to the vernacular understanding such that dementia was now associated with intellectual deficits arising from any cause and at any age. By the end of the nineteenth century, the modern 'cognitive paradigm' of dementia was taking root. 
This holds that dementia is understood in terms of criteria relating to aetiology, age and course, which excludes former members of the family of the demented, such as adults with acquired head trauma or children with cognitive deficits. Moreover, it was now understood as an irreversible condition and a particular emphasis was placed on memory loss in regard to the deterioration of intellectual functions.", "title": "History" }, { "paragraph_id": 8, "text": "The term démence précoce was used in passing to describe the characteristics of a subset of young mental patients by the French physician Bénédict Augustin Morel in 1852 in the first volume of his Études cliniques, and the term was used more frequently in his textbook Traité des maladies mentales, which was published in 1860. Morel, whose name will be forever associated with the religiously inspired concept of degeneration theory in psychiatry, used the term in a descriptive sense and not to define a specific and novel diagnostic category. It was applied as a means of setting apart a group of young men and women with \"stupor\". As such their condition was characterised by a certain torpor, enervation, and disorder of the will and was related to the diagnostic category of melancholia. He did not conceptualise their state as irreversible and thus his use of the term dementia was equivalent to that formed in the eighteenth century as outlined above.", "title": "History" }, { "paragraph_id": 9, "text": "While some have sought to interpret, if in a qualified fashion, the use by Morel of the term démence précoce as amounting to the discovery of schizophrenia, others have argued convincingly that Morel's descriptive use of the term should not be considered in any sense as a precursor to Kraepelin's dementia praecox disease concept. This is because their concepts of dementia differed significantly from each other: Kraepelin employed the more modern sense of the word, whereas Morel was not describing a diagnostic category. Indeed, until the advent of Pick and Kraepelin, Morel's term had vanished without a trace and there is little evidence to suggest that either Pick or indeed Kraepelin were even aware of Morel's use of the term until long after they had published their own disease concepts bearing the same name. As Eugène Minkowski stated, \"An abyss separates Morel's démence précoce from that of Kraepelin.\"", "title": "History" }, { "paragraph_id": 10, "text": "Morel described several psychotic disorders that ended in dementia, and as a result he may be regarded as the first alienist or psychiatrist to develop a diagnostic system based on presumed outcome rather than on the current presentation of signs and symptoms. Morel, however, did not conduct any long-term or quantitative research on the course and outcome of dementia praecox (Kraepelin would be the first in history to do that) so this prognosis was based on speculation. It is impossible to discern whether the condition briefly described by Morel was equivalent to the disorder later called dementia praecox by Pick and Kraepelin.", "title": "History" }, { "paragraph_id": 11, "text": "Psychiatric nosology in the nineteenth century was chaotic and characterised by a conflicting mosaic of contradictory systems. Psychiatric disease categories were based upon short-term and cross-sectional observations of patients from which were derived the putative characteristic signs and symptoms of a given disease concept. 
The dominant psychiatric paradigms which gave a semblance of order to this fragmentary picture were Morelian degeneration theory and the concept of \"unitary psychosis\" (Einheitspsychose). This latter notion, derived from the Belgian psychiatrist Joseph Guislain (1797–1860), held that the variety of symptoms attributed to mental illness were manifestations of a single underlying disease process. While these approaches had a diachronic aspect, they lacked a conception of mental illness that encompassed a coherent notion of change over time in terms of the natural course of the illness and based upon an empirical observation of changing symptomatology.", "title": "History" }, { "paragraph_id": 12, "text": "In 1863, the Danzig-based psychiatrist Karl Ludwig Kahlbaum (1828–1899) published his text on psychiatric nosology Die Gruppierung der psychischen Krankheiten (The Classification of Psychiatric Diseases). Although with the passage of time this work would prove profoundly influential, when it was published it was almost completely ignored by German academia despite the sophisticated and intelligent disease classification system which it proposed. In this book Kahlbaum categorized certain typical forms of psychosis (vesania typica) as a single coherent type based upon their shared progressive nature which betrayed, he argued, an ongoing degenerative disease process. For Kahlbaum the disease process of vesania typica was distinguished by the passage of the patient through clearly defined disease phases: a melancholic stage; a manic stage; a confusional stage; and finally a demented stage.", "title": "History" }, { "paragraph_id": 13, "text": "In 1866, Kahlbaum became the director of a private psychiatric clinic in Görlitz (Prussia, today Saxony, a small town near Dresden). He was accompanied by his younger assistant, Ewald Hecker (1843–1909), and during a ten-year collaboration they conducted a series of research studies on young psychotic patients that would become a major influence on the development of modern psychiatry.", "title": "History" }, { "paragraph_id": 14, "text": "Together Kahlbaum and Hecker were the first to describe and name such syndromes as dysthymia, cyclothymia, paranoia, catatonia, and hebephrenia. Perhaps their most lasting contribution to psychiatry was the introduction of the \"clinical method\" from medicine to the study of mental diseases, a method which is now known as psychopathology.", "title": "History" }, { "paragraph_id": 15, "text": "When the element of time was added to the concept of diagnosis, a diagnosis became more than just a description of a collection of symptoms: a diagnosis was now also defined by prognosis (course and outcome). An additional feature of the clinical method was that the characteristic symptoms that define syndromes should be described without any prior assumption of brain pathology (although such links would be made later as scientific knowledge progressed). Karl Kahlbaum made an appeal for the adoption of the clinical method in psychiatry in his 1874 book on catatonia. Without Kahlbaum and Hecker there would be no dementia praecox.", "title": "History" }, { "paragraph_id": 16, "text": "Upon his appointment to a full professorship in psychiatry at the University of Dorpat (now Tartu, Estonia) in 1886, Kraepelin gave an inaugural address to the faculty outlining his research programme for the years ahead. 
Attacking the \"brain mythology\" of Meynert and the positions of Griesinger and Gudden, Kraepelin advocated that the ideas of Kahlbaum, who was then a marginal and little known figure in psychiatry, should be followed. Therefore, he argued, a research programme into the nature of psychiatric illness should look at a large number of patients over time to discover the course which mental disease could take. It has also been suggested that Kraepelin's decision to accept the Dorpat post was informed by the fact that there he could hope to gain experience with chronic patients and this, it was presumed, would facilitate the longitudinal study of mental illness.", "title": "History" }, { "paragraph_id": 17, "text": "Understanding that objective diagnostic methods must be based on scientific practice, Kraepelin had been conducting psychological and drug experiments on patients and normal subjects for some time when, in 1891, he left Dorpat and took up a position as professor and director of the psychiatric clinic at Heidelberg University. There he established a research program based on Kahlbaum's proposal for a more exact qualitative clinical approach, and his own innovation: a quantitative approach involving meticulous collection of data over time on each new patient admitted to the clinic (rather than only the interesting cases, as had been the habit until then).", "title": "History" }, { "paragraph_id": 18, "text": "Kraepelin believed that by thoroughly describing all of the clinic's new patients on index cards, which he had been using since 1887, researcher bias could be eliminated from the investigation process. He described the method in his posthumously published memoir:", "title": "History" }, { "paragraph_id": 19, "text": "... after the first thorough examination of a new patient, each of us had to throw in a note [in a \"diagnosis box\"] with his diagnosis written on it. After a while, the notes were taken out of the box, the diagnoses were listed, and the case was closed, the final interpretation of the disease was added to the original diagnosis. In this way, we were able to see what kind of mistakes had been made and were able to follow-up the reasons for the wrong original diagnosis.", "title": "History" }, { "paragraph_id": 20, "text": "The fourth edition of his textbook, Psychiatrie, published in 1893, two years after his arrival at Heidelberg, contained some impressions of the patterns Kraepelin had begun to find in his index cards. Prognosis (course and outcome) began to feature alongside signs and symptoms in the description of syndromes, and he added a class of psychotic disorders designated \"psychic degenerative processes\", three of which were borrowed from Kahlbaum and Hecker: dementia paranoides (a degenerative type of Kahlbaum's paranoia, with sudden onset), catatonia (per Kahlbaum, 1874) and dementia praecox, (Hecker's hebephrenia of 1871). 
Kraepelin continued to equate dementia praecox with hebephrenia for the next six years.", "title": "History" }, { "paragraph_id": 21, "text": "In the March 1896 fifth edition of Psychiatrie, Kraepelin expressed confidence that his clinical method, involving analysis of both qualitative and quantitative data derived from long term observation of patients, would produce reliable diagnoses including prognosis:", "title": "History" }, { "paragraph_id": 22, "text": "What convinced me of the superiority of the clinical method of diagnosis (followed here) over the traditional one, was the certainty with which we could predict (in conjunction with our new concept of disease) the future course of events. Thanks to it the student can now find his way more easily in the difficult subject of psychiatry.", "title": "History" }, { "paragraph_id": 23, "text": "In this edition dementia praecox is still essentially hebephrenia, and it, dementia paranoides and catatonia are described as distinct psychotic disorders among the \"metabolic disorders leading to dementia\".", "title": "History" }, { "paragraph_id": 24, "text": "In the 1899 (6th) edition of Psychiatrie, Kraepelin established a paradigm for psychiatry that would dominate the following century, sorting most of the recognized forms of insanity into two major categories: dementia praecox and manic-depressive illness. Dementia praecox was characterized by disordered intellectual functioning, whereas manic-depressive illness was principally a disorder of affect or mood; and the former featured constant deterioration, virtually no recoveries and a poor outcome, while the latter featured periods of exacerbation followed by periods of remission, and many complete recoveries. The class, dementia praecox, comprised the paranoid, catatonic and hebephrenic psychotic disorders, and these forms were found in the Diagnostic and Statistical Manual of Mental Disorders until the fifth edition was released, in May 2013. These terms, however, are still found in general psychiatric nomenclature.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 25, "text": "In the seventh, 1904, edition of Psychiatrie, Kraepelin accepted the possibility that a small number of patients may recover from dementia praecox. Eugen Bleuler reported in 1908 that in many cases there was no inevitable progressive decline, there was temporary remission in some cases, and there were even cases of near recovery with the retention of some residual defect. In the eighth edition of Kraepelin's textbook, published in four volumes between 1909 and 1915, he described eleven forms of dementia, and dementia praecox was classed as one of the \"endogenous dementias\". Modifying his previous more gloomy prognosis in line with Bleuler's observations, Kraepelin reported that about 26% of his patients experienced partial remission of symptoms. Kraepelin died while working on the ninth edition of Psychiatrie with Johannes Lange (1891–1938), who finished it and brought it to publication in 1927.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 26, "text": "Though his work and that of his research associates had revealed a role for heredity, Kraepelin realized nothing could be said with certainty about the aetiology of dementia praecox, and he left out speculation regarding brain disease or neuropathology in his diagnostic descriptions. 
Nevertheless, from the 1896 edition onwards Kraepelin made clear his belief that poisoning of the brain, \"auto-intoxication,\" probably by sex hormones, may underlie dementia praecox – a theory also entertained by Eugen Bleuler. Both theorists insisted dementia praecox is a biological disorder, not the product of psychological trauma. Thus, rather than a disease of hereditary degeneration or of structural brain pathology, Kraepelin believed dementia praecox was due to a systemic or \"whole body\" disease process, probably metabolic, which gradually affected many of the tissues and organs of the body before affecting the brain in a final, decisive cascade. Kraepelin, recognizing dementia praecox in Chinese, Japanese, Tamil and Malay patients, suggested in the eighth edition of Psychiatrie that, \"we must therefore seek the real cause of dementia praecox in conditions which are spread all over the world, which thus do not lie in race or in climate, in food or in any other general circumstance of life...\"", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 27, "text": "Kraepelin had experimented with hypnosis but found it wanting, and disapproved of Freud's and Jung's introduction, based on no evidence, of psychogenic assumptions to the interpretation and treatment of mental illness. He argued that, without knowing the underlying cause of dementia praecox or manic-depressive illness, there could be no disease-specific treatment, and recommended the use of long baths and the occasional use of drugs such as opiates and barbiturates for the amelioration of distress, as well as occupational activities, where suitable, for all institutionalized patients. Based on his theory that dementia praecox is the product of autointoxication emanating from the sex glands, Kraepelin experimented, without success, with injections of thyroid, gonad and other glandular extracts.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 28, "text": "Kraepelin noted the dissemination of his new disease concept when in 1899 he enumerated the term's appearance in almost twenty articles in the German-language medical press. In the early years of the twentieth century the twin pillars of the Kraepelinian dichotomy, dementia praecox and manic depressive psychosis, were assiduously adopted in clinical and research contexts among the Germanic psychiatric community. German-language psychiatric concepts were always introduced much faster in America (than, say, Britain) where émigré German, Swiss and Austrian physicians essentially created American psychiatry. Swiss-émigré Adolf Meyer (1866–1950), arguably the most influential psychiatrist in America for the first half of the 20th century, published the first critique of dementia praecox in an 1896 book review of the 5th edition of Kraepelin's textbook. But it was not until 1900 and 1901 that the first three American publications regarding dementia praecox appeared, one of which was a translation of a few sections of Kraepelin's 6th edition of 1899 on dementia praecox.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 29, "text": "Adolf Meyer was the first to apply the new diagnostic term in America. He used it at the Worcester Lunatic Hospital in Massachusetts in the fall of 1896. 
He was also the first to apply Eugen Bleuler's term \"schizophrenia\" (in the form of \"schizophrenic reaction\") in 1913 at the Henry Phipps Psychiatric Clinic of the Johns Hopkins Hospital.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 30, "text": "The dissemination of Kraepelin's disease concept to the Anglophone world was facilitated in 1902 when Ross Diefendorf, a lecturer in psychiatry at Yale, published an adapted version of the sixth edition of the Lehrbuch der Psychiatrie. This was republished in 1904, and a new version, based on the seventh edition of Kraepelin's Lehrbuch, appeared in 1907 and was reissued in 1912. Both dementia praecox (in its three classic forms) and \"manic-depressive psychosis\" gained wider popularity in the larger institutions in the eastern United States after being included in the official nomenclature of diseases and conditions for record-keeping at Bellevue Hospital in New York City in 1903. The term lived on due to its promotion in the publications of the National Committee on Mental Hygiene (founded in 1909) and the Eugenics Record Office (1910). But perhaps the most important reason for the longevity of Kraepelin's term was its inclusion in 1918 as an official diagnostic category in the uniform system adopted for comparative statistical record-keeping in all American mental institutions, The Statistical Manual for the Use of Institutions for the Insane. Its many revisions served as the official diagnostic classification scheme in America until 1952 when the first edition of the Diagnostic and Statistical Manual: Mental Disorders, or DSM-I, appeared. Dementia praecox disappeared from official psychiatry with the publication of DSM-I, replaced by the Bleuler/Meyer hybridization, \"schizophrenic reaction\".", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 31, "text": "Schizophrenia was mentioned as an alternate term for dementia praecox in the 1918 Statistical Manual. In both clinical work and research, between 1918 and 1952, five different terms were used interchangeably: dementia praecox, schizophrenia, dementia praecox (schizophrenia), schizophrenia (dementia praecox) and schizophrenic reaction. This made the psychiatric literature of the time confusing since, in a strict sense, Kraepelin's disease was not Bleuler's disease. They were defined differently, had different population parameters, and different concepts of prognosis.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 32, "text": "The reception of dementia praecox as an accepted diagnosis in British psychiatry came more slowly, perhaps only taking hold around the time of World War I. There was substantial opposition to the use of the term \"dementia\" as misleading, partly due to findings of remission and recovery. Some argued that existing diagnoses such as \"delusional insanity\" or \"adolescent insanity\" were better or more clearly defined. In France a psychiatric tradition regarding the psychotic disorders predated Kraepelin, and the French never fully adopted Kraepelin's classification system. Instead the French maintained an independent classification system throughout the 20th century. From 1980, when DSM-III totally reshaped psychiatric diagnosis, French psychiatry finally began to alter its views of diagnosis to converge with the North American system. 
Kraepelin thus finally conquered France via America.", "title": "Kraepelin's influence on the next century" }, { "paragraph_id": 33, "text": "Due to the influence of alienists such as Adolf Meyer, August Hoch, George Kirby, Charles Macphie Campbell, Smith Ely Jelliffe and William Alanson White, psychogenic theories of dementia praecox dominated the American scene by 1911. In 1925 Bleuler's schizophrenia rose in prominence as an alternative to Kraepelin's dementia praecox. When Freudian perspectives became influential in American psychiatry in the 1920s schizophrenia became an attractive alternative concept. Bleuler corresponded with Freud and was connected to Freud's psychoanalytic movement, and the inclusion of Freudian interpretations of the symptoms of schizophrenia in his publications on the subject, as well as those of C.G. Jung, eased the adoption of his broader version of dementia praecox (schizophrenia) in America over Kraepelin's narrower and prognostically more negative one.", "title": "From dementia praecox to schizophrenia" }, { "paragraph_id": 34, "text": "The term \"schizophrenia\" was first applied by American alienists and neurologists in private practice by 1909 and officially in institutional settings in 1913, but it took many years to catch on. It is first mentioned in The New York Times in 1925. Until 1952 the terms dementia praecox and schizophrenia were used interchangeably in American psychiatry, with occasional use of the hybrid terms \"dementia praecox (schizophrenia)\" or \"schizophrenia (dementia praecox)\".", "title": "From dementia praecox to schizophrenia" }, { "paragraph_id": 35, "text": "Editions of the Diagnostic and Statistical Manual of Mental Disorders since the first in 1952 had reflected views of schizophrenia as \"reactions\" or \"psychogenic\" (DSM-I), or as manifesting Freudian notions of \"defense mechanisms\" (as in DSM-II of 1969 in which the symptoms of schizophrenia were interpreted as \"psychologically self-protected\"). The diagnostic criteria were vague, minimal and wide, including either concepts that no longer exist or that are now labeled as personality disorders (for example, schizotypal personality disorder). There was also no mention of the dire prognosis Kraepelin had made. Schizophrenia seemed to be more prevalent and more psychogenic and more treatable than either Kraepelin or Bleuler would have allowed.", "title": "Diagnostic manuals" }, { "paragraph_id": 36, "text": "As a direct result of the effort to construct Research Diagnostic Criteria in the 1970s that were independent of any clinical diagnostic manual, Kraepelin's idea that categories of mental disorder should reflect discrete and specific disease entities with a biological basis began to return to prominence. Vague dimensional approaches based on symptoms—so highly favored by the Meyerians and psychoanalysts—were overthrown. For research purposes, the definition of schizophrenia returned to the narrow range allowed by Kraepelin's dementia praecox concept. Furthermore, after 1980 the disorder was a progressively deteriorating one once again, with the notion that recovery, if it happened at all, was rare. This revision of schizophrenia became the basis of the diagnostic criteria in DSM-III (1980). Some of the psychiatrists who worked to bring about this revision referred to themselves as the \"neo-Kraepelinians\".", "title": "Conclusions" }, { "paragraph_id": 37, "text": "", "title": "External links" } ]
Dementia praecox is a disused psychiatric diagnosis that originally designated a chronic, deteriorating psychotic disorder characterized by rapid cognitive disintegration, usually beginning in the late teens or early adulthood. Over the years, the term dementia praecox was gradually replaced by the term schizophrenia, which initially had a meaning that included what is today considered the autism spectrum. The term dementia praecox was first used by German psychiatrist Heinrich Schüle in 1880. It was also used in 1891 by Arnold Pick (1851–1924), a professor of psychiatry at Charles University in Prague. In a brief clinical report, he described a person with a psychotic disorder resembling "hebephrenia". German psychiatrist Emil Kraepelin (1856–1926) popularised the term dementia praecox in his first detailed textbook descriptions of a condition that eventually became a different disease concept later relabeled as schizophrenia. Kraepelin reduced the complex psychiatric taxonomies of the nineteenth century by dividing them into two classes: manic-depressive psychosis and dementia praecox. This division, commonly referred to as the Kraepelinian dichotomy, had a fundamental impact on twentieth-century psychiatry, though it has also been questioned. The primary disturbance in dementia praecox was seen to be a disruption in cognitive or mental functioning in attention, memory, and goal-directed behaviour. Kraepelin contrasted this with manic-depressive psychosis, now termed bipolar disorder, and also with other forms of mood disorder, including major depressive disorder. He eventually concluded that it was not possible to distinguish his categories on the basis of cross-sectional symptoms. Kraepelin viewed dementia praecox as a progressively deteriorating disease from which no one recovered. However, by 1913, and more explicitly by 1920, Kraepelin admitted that while there may be a residual cognitive defect in most cases, the prognosis was not as uniformly dire as he had stated in the 1890s. Still, he regarded it as a specific disease concept that implied incurable, inexplicable madness.
Dolphin
A dolphin is an aquatic mammal within the infraorder Cetacea. Dolphin species belong to the families Delphinidae (the oceanic dolphins), Platanistidae (the Indian river dolphins), Iniidae (the New World river dolphins), Pontoporiidae (the brackish dolphins), and the possibly extinct Lipotidae (baiji or Chinese river dolphin). There are 40 extant species named as dolphins. Dolphins range in size from the 1.7-metre-long (5 ft 7 in) and 50-kilogram (110-pound) Maui's dolphin to the 9.5 m (31 ft) and 10-tonne (11-short-ton) orca. Various species of dolphins exhibit sexual dimorphism in which the males are larger than the females. They have streamlined bodies and two limbs that are modified into flippers. Though not quite as flexible as seals, they are faster; some dolphins can briefly travel at speeds of 29 kilometres per hour (18 mph) or leap about 9 metres (30 ft). Dolphins use their conical teeth to capture fast-moving prey. They have well-developed hearing which is adapted for both air and water; it is so well developed that some can survive even if they are blind. Some species are well adapted for diving to great depths. They have a layer of fat, or blubber, under the skin to keep warm in the cold water.

Dolphins are widespread. Most species prefer the warm waters of the tropic zones, but some, such as the right whale dolphin, prefer colder climates. Dolphins feed largely on fish and squid, but a few, such as the orca, feed on large mammals such as seals. Male dolphins typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months, and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively long period of time. Dolphins produce a variety of vocalizations, usually in the form of clicks and whistles.

Dolphins are sometimes hunted in places such as Japan, in an activity known as dolphin drive hunting. Besides drive hunting, they also face threats from bycatch, habitat loss, and marine pollution. Dolphins have been depicted in various cultures worldwide. Dolphins are sometimes kept in captivity and trained to perform tricks. The most common dolphin species in captivity is the bottlenose dolphin, while there are around 60 orcas in captivity.

The name is originally from Greek δελφίς (delphís), "dolphin", which was related to the Greek δελφύς (delphus), "womb". The animal's name can therefore be interpreted as meaning "a 'fish' with a womb". The name was transmitted via the Latin delphinus (the romanization of the later Greek δελφῖνος – delphinos), which in Medieval Latin became dolfinus and in Old French daulphin, which reintroduced the ph into the word "dolphin". The term mereswine (that is, "sea pig") has also historically been used.

The term 'dolphin' can be used to refer to most species in the family Delphinidae (oceanic dolphins) and the river dolphin families Iniidae (South American river dolphins), Pontoporiidae (La Plata dolphin), Lipotidae (Yangtze river dolphin) and Platanistidae (Ganges river dolphin and Indus river dolphin). The term has often been misused in the US, mainly in the fishing industry, where all small cetaceans (dolphins and porpoises) are considered porpoises, while the fish dorado is called dolphin fish. In common usage the term 'whale' is used only for the larger cetacean species, while the smaller ones with a beaked or longer nose are considered 'dolphins'.
The name 'dolphin' is used casually as a synonym for bottlenose dolphin, the most common and familiar species of dolphin. There are six species of dolphins commonly thought of as whales, collectively known as blackfish: the orca, the melon-headed whale, the pygmy killer whale, the false killer whale, and the two species of pilot whales, all of which are classified under the family Delphinidae and qualify as dolphins. Although the terms 'dolphin' and 'porpoise' are sometimes used interchangeably, 'porpoise' usually refers to the Phocoenidae family, which have a shorter beak and spade-shaped teeth and differ in their behavior. A group of dolphins is called a "school" or a "pod". Male dolphins are called "bulls", females "cows" and young dolphins "calves".

In 1933, three hybrid dolphins beached off the Irish coast; they were hybrids between Risso's and bottlenose dolphins. This mating was later repeated in captivity, producing a hybrid calf. In captivity, a bottlenose and a rough-toothed dolphin produced hybrid offspring. A common-bottlenose hybrid lives at SeaWorld California. Other dolphin hybrids live in captivity around the world or have been reported in the wild, such as a bottlenose-Atlantic spotted hybrid. The best known hybrid is the wolphin, a fertile false killer whale-bottlenose dolphin hybrid. Two wolphins currently live at the Sea Life Park in Hawaii; the first was born in 1985 from a male false killer whale and a female bottlenose. Wolphins have also been observed in the wild.

Dolphins are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are related to Indohyus, an extinct chevrotain-like ungulate, from which they split approximately 48 million years ago. The primitive cetaceans, or archaeocetes, first took to the sea approximately 49 million years ago and became fully aquatic by 5–10 million years later. Archaeoceti is a parvorder comprising ancient whales. These ancient whales are the predecessors of modern whales, stretching back to the first ancestors that spent their lives near (rarely in) the water. The archaeocetes range from near fully terrestrial, to semi-aquatic, to fully aquatic, but what defines an archaeocete is the presence of visible legs or asymmetrical teeth. Their features became adapted for living in the marine environment. Major anatomical changes include the hearing set-up that channeled vibrations from the jaw to the earbone, which occurred with Ambulocetus 49 million years ago; a streamlining of the body and the growth of flukes on the tail, which occurred around 43 million years ago with Protocetus; the migration of the nasal openings toward the top of the cranium and the modification of the forelimbs into flippers, which occurred with Basilosaurus 35 million years ago; and the shrinking and eventual disappearance of the hind limbs, which took place with the first odontocetes and mysticetes 34 million years ago. The modern dolphin skeleton has two small, rod-shaped pelvic bones thought to be vestigial hind limbs. In October 2006, an unusual bottlenose dolphin was captured in Japan; it had small fins on each side of its genital slit, which scientists believe to be an unusually pronounced development of these vestigial hind limbs.

Today, the closest living relatives of cetaceans are the hippopotamuses; these share a semi-aquatic ancestor that branched off from other artiodactyls some 60 million years ago.
Around 40 million years ago, a common ancestor of the two branched into cetaceans and anthracotheres; the anthracotheres became extinct in the early Pleistocene, around two-and-a-half million years ago, eventually leaving only one surviving lineage: the two species of hippo.

Dolphins have torpedo-shaped bodies with generally non-flexible necks, limbs modified into flippers, a tail fin, and bulbous heads. Dolphin skulls have small eye orbits, long snouts, and eyes placed on the sides of the head; they lack external ear flaps. Dolphins range in size from the 1.7 m (5 ft 7 in) long and 50 kg (110 lb) Maui's dolphin to the 9.5 m (31 ft 2 in) and 10 t (11 short tons) orca. Overall, they tend to be dwarfed by other cetartiodactyls. Several species have female-biased sexual dimorphism, with the females being larger than the males. Dolphins have conical teeth, as opposed to porpoises' spade-shaped teeth. These conical teeth are used to catch swift prey such as fish, squid or large mammals, such as seals.

Breathing involves expelling stale air from the blowhole in an upward blast, which may be visible in cold air, followed by inhaling fresh air into the lungs. Dolphins have rather small, indistinct spouts.

All dolphins have a thick layer of blubber, its thickness varying with climate. This blubber can aid buoyancy, offer some protection (predators would have a hard time getting through a thick layer of fat) and store energy for leaner times, but its primary use is insulation against the harsh climate. Calves are generally born with a thin layer of blubber, which develops at different paces depending on the habitat.

Dolphins have a two-chambered stomach, with fundic and pyloric chambers, similar in structure to that of terrestrial carnivores. Dolphins' reproductive organs are located inside the body, with genital slits on the ventral (belly) side. Males have two slits, one concealing the penis and one further behind for the anus. Females have one genital slit, housing the vagina and the anus, with a mammary slit on either side.

The integumentary system is an organ system consisting mostly of skin, hair, nails and exocrine glands. The skin of dolphins is specialized to satisfy specific requirements, including protection, fat storage, heat regulation, and sensory perception. It is made up of two parts: the epidermis and the blubber, the latter consisting of two layers, the dermis and the subcutis. The dolphin's skin has a smooth, rubbery texture and is without hair and glands, except mammary glands. At birth, a newborn dolphin has hairs lined up in a single band on both sides of the rostrum (snout); this band usually has a total length of 16–17 cm. Dolphins are part of the infraorder Cetacea, whose epidermis is characterized by the lack of keratin and by a prominent intertwining of epidermal rete pegs and long dermal papillae. The epidermal rete pegs are the epithelial extensions that project into the underlying connective tissue in both skin and mucous membranes. The dermal papillae are finger-like projections that help adhesion between the epidermal and dermal layers, as well as providing a larger surface area to nourish the epidermal layer. The thickness of a dolphin's epidermis varies, depending on species and age. Blubber is found within the dermis and subcutis layer.
The dermis blends gradually into the adipose (fat) layer: fat may extend up to the border of the epidermis, and collagen fiber bundles extend throughout the whole subcutaneous blubber, the fat found under the skin. The thickness of the subcutaneous blubber depends on the dolphin's health, development, location, reproductive state, and how well it feeds. This fat is thickest on the dolphin's back and belly. Most of the dolphin's body fat is accumulated in a thick layer of blubber. Blubber differs from fat in that, in addition to fat cells, it contains a fibrous network of connective tissue. The blubber functions to streamline the body and to form specialized locomotor structures such as the dorsal fin, propulsive fluke blades and caudal keels.

Many nerve endings resembling small, onion-like configurations are present in the superficial portion of the dermis. Mechanoreceptors are found within the interlocks of the epidermis with dermal ridges, and nerve fibers in the dermis extend to the epidermis. These nerve endings are known to be highly proprioceptive, which explains sensory perception. Proprioception, also known as kinesthesia, is the body's ability to sense its location, movements and actions. Dolphins are sensitive to vibrations and small pressure changes.

Blood vessels and nerve endings can be found within the dermis, and there is a plexus of parallel running arteries and veins in the dorsal fin, fluke, and flippers. The blubber manipulates the blood vessels to help the dolphin stay warm. When the temperature drops, the blubber constricts the blood vessels to reduce blood flow in the dolphin. This allows the dolphin to spend less energy heating its own body, ultimately keeping the animal warmer without burning energy as quickly. To release heat, the heat must pass the blubber layer. There are thermal windows that lack blubber, are not fully insulated and are somewhat thin and highly vascularized, including the dorsal fin, flukes, and flippers. These thermal windows are a good way for dolphins to get rid of excess heat if overheating. Additionally, in order to conserve heat, dolphins use countercurrent heat exchange: blood flows in different directions so that heat can transfer across membranes. Heat from warm blood leaving the heart warms the cold blood headed back to the heart from the extremities, meaning that the heart always has warm blood and less heat is lost to the water through those thermal windows.

Dolphins have two pectoral flippers, containing four digits, a boneless dorsal fin for stability, and a tail fin for propulsion. Although dolphins do not possess external hind limbs, some possess discrete rudimentary appendages, which may contain feet and digits. Dolphins are fast swimmers compared to seals, which typically cruise at 9–28 km/h (5.6–17.4 mph); the orca can travel at speeds up to 55.5 km/h (34.5 mph). The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility, which means most dolphins are unable to turn their heads; river dolphins, however, have non-fused neck vertebrae and can turn their heads up to 90°. Dolphins swim by moving their tail fin and rear body vertically, while their flippers are mainly used for steering. Some species leap out of the water, which may allow them to travel faster. Their skeletal anatomy allows them to be fast swimmers.
Most species have a dorsal fin that helps prevent them from involuntarily spinning in the water.

Some dolphins are adapted for diving to great depths. In addition to their streamlined bodies, some can selectively slow their heart rate to conserve oxygen. Some can also re-route blood from tissue tolerant of water pressure to the heart, brain and other organs. Their hemoglobin and myoglobin store oxygen in body tissues, and they have twice as much myoglobin as hemoglobin.

A dolphin's ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In dolphins, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, dolphins receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater. Dolphins send out high-frequency clicks from an organ known as a melon. This melon consists of fat, and the skull of any such creature containing a melon will have a large depression. This allows dolphins to use echolocation for orientation. Though most dolphins do not have hair, they do have hair follicles that may perform some sensory function. Beyond locating an object, echolocation also provides the animal with an idea of the object's shape and size, though how exactly this works is not yet understood. The small hairs on the rostrum of the boto (river dolphins of South America) are believed to function as a tactile sense, possibly to compensate for the boto's poor eyesight.

A dolphin's eye is relatively small for its body size, yet dolphins retain a good degree of eyesight. In addition, the eyes are placed on the sides of the head, so dolphin vision consists of two fields, rather than the binocular view humans have. When dolphins surface, their lens and cornea correct the nearsightedness that results from the water's refraction of light. Their eyes contain both rod and cone cells, meaning they can see in both dim and bright light, but they have far more rod cells than cone cells. They lack short-wavelength-sensitive visual pigments in their cone cells, indicating a more limited capacity for color vision than most mammals. Most dolphins have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum (eye tissue behind the retina); these adaptations allow large amounts of light to pass through the eye and, therefore, give a very clear image of the surrounding area. They also have glands on the eyelids and outer corneal layer that act as protection for the cornea.

The olfactory lobes and nerve are absent in dolphins, suggesting that they have no sense of smell. Dolphins are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether; however, some have preferences for different kinds of fish, indicating some ability to taste.

Dolphins are known to teach, learn, cooperate, scheme, and grieve. The neocortex of many species is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgment, and theory of mind.
Cetacean spindle neurons are found in areas of the brain that are analogous to where they are found in humans, suggesting that they perform a similar function.

Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalization quotient that can be used as another indication of animal intelligence (a worked example is sketched in the aside at the end of this passage). Orcas have the second largest brain mass of any animal on earth, next to the sperm whale. The brain to body mass ratio in some dolphins is second only to humans.

Self-awareness is seen, by some, to be a sign of highly developed, abstract thinking. Self-awareness, though not well-defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Research in this field has suggested that cetaceans, among others, possess self-awareness. The most widely used test for self-awareness in animals is the mirror test, in which a mirror is introduced to an animal, and the animal is then marked with a temporary dye. If the animal then goes to the mirror in order to view the mark, it has exhibited strong evidence of self-awareness.

Some disagree with these findings, arguing that the results of these tests are open to human interpretation and susceptible to the Clever Hans effect. The test is much less definitive than when used for primates, because primates can touch the mark or the mirror, while cetaceans cannot, making their alleged self-recognition behavior less certain. Skeptics argue that behaviors said to identify self-awareness resemble existing social behaviors, and so researchers could be misinterpreting social responses to another individual as self-awareness. The researchers counter-argue that the behaviors shown are evidence of self-awareness, as they are very different from normal responses to another individual. Whereas apes can merely touch the mark on themselves with their fingers, cetaceans show less definitive behavior of self-awareness; they can only twist and turn themselves to observe the mark.

In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time video of themselves, video of another dolphin and recorded footage. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since passed the mirror test. Some researchers have argued that evidence for self-awareness has not been convincingly demonstrated.
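As a brief aside on the allometric scaling just described: the encephalization quotient compares an animal's actual brain mass with the brain mass predicted from its body mass. The sketch below uses Jerison's classic formulation; the constant k = 0.12, the exponent 2/3, and the rounded example masses are illustrative assumptions, not figures drawn from this article.

\[
\mathrm{EQ} = \frac{M_{\mathrm{brain}}}{k\,M_{\mathrm{body}}^{\alpha}}, \qquad k \approx 0.12, \quad \alpha \approx \tfrac{2}{3}
\]

% Illustrative (assumed) numbers: a bottlenose dolphin with a brain of roughly
% 1,600 g and a body of roughly 200,000 g gives
%   EQ ≈ 1600 / (0.12 × 200000^(2/3)) ≈ 1600 / 410 ≈ 3.9,
% i.e. a brain several times heavier than the allometric baseline would predict,
% consistent with the brain-to-body comparisons made above.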
Dolphins are highly social animals, often living in pods of up to a dozen individuals, though pod sizes and structures vary greatly between species and locations. In places with a high abundance of food, pods can merge temporarily, forming a superpod; such groupings may exceed 1,000 dolphins. Membership in pods is not rigid; interchange is common. They establish strong social bonds, and will stay with injured or ill members, helping them to breathe by bringing them to the surface if needed. This altruism does not appear to be limited to their own species. The dolphin Moko in New Zealand has been observed guiding a female pygmy sperm whale together with her calf out of shallow water where they had stranded several times. They have also been seen protecting swimmers from sharks by swimming circles around the swimmers or charging the sharks to make them go away.

Dolphins communicate using a variety of clicks, whistle-like sounds and other vocalizations. They also use nonverbal communication by means of touch and posturing.

Dolphins also display culture, something long believed to be unique to humans (and possibly other primate species). In May 2005, researchers in Australia discovered Indo-Pacific bottlenose dolphins (Tursiops aduncus) teaching their young to use tools: they cover their snouts with sponges to protect them while foraging. This knowledge is mostly transferred by mothers to daughters, unlike simian primates, where knowledge is generally passed on to both sexes. Using sponges as mouth protection is a learned behavior. Another learned behavior was discovered among river dolphins in Brazil, where some male dolphins use weeds and sticks as part of a sexual display. Forms of care-giving between fellows, and even for members of different species (see Moko (dolphin)), are recorded in various species – such as trying to save weakened fellows, or female pilot whales holding up dead calves for long periods.

Dolphins engage in acts of aggression towards each other. The older a male dolphin is, the more likely his body is to be covered with bite scars. Male dolphins can get into disputes over companions and females. Acts of aggression can become so intense that targeted dolphins sometimes go into exile after losing a fight. Male bottlenose dolphins have been known to engage in infanticide. Dolphins have also been known to kill porpoises (porpicide) for reasons which are not fully understood, as porpoises generally do not share the same diet as dolphins and are therefore not competitors for food supplies. The Cornwall Wildlife Trust records about one such death a year. Possible explanations include misdirected infanticide, misdirected sexual aggression or play behaviour.

Dolphin copulation happens belly to belly; though many species engage in lengthy foreplay, the actual act is usually brief, but may be repeated several times within a short timespan. The gestation period varies with species; for the small tucuxi dolphin, this period is around 11 to 12 months, while for the orca it is around 17 months. Dolphins typically give birth to a single calf, which is, unlike most other mammals, born tail first in most cases. They usually become sexually active at a young age, even before reaching sexual maturity. The age of sexual maturity varies by species and sex.

Dolphins are known to display non-reproductive sexual behavior, engaging in masturbation, stimulation of the genital area of other individuals using the rostrum or flippers, and homosexual contact. Various species of dolphin have been known to engage in sexual behavior, including copulation with dolphins of other species, and occasionally exhibit sexual behavior towards other animals, including humans. Sexual encounters may be violent, with male dolphins sometimes showing aggressive behavior towards both females and other males. Male dolphins may also work together and attempt to herd females in estrus, keeping the females by their side by means of both physical aggression and intimidation, to increase their chances of reproductive success.
Generally, dolphins sleep with only one brain hemisphere in slow-wave sleep at a time, thus maintaining enough consciousness to breathe and to watch for possible predators and other threats. Earlier sleep stages can occur simultaneously in both hemispheres. In captivity, dolphins seemingly enter a fully asleep state where both eyes are closed and there is no response to mild external stimuli. In this case, respiration is automatic; a tail kick reflex keeps the blowhole above the water if necessary. Anesthetized dolphins initially show a tail kick reflex. Though a similar state has been observed with wild sperm whales, it is not known if dolphins in the wild reach this state. The Indus river dolphin has a sleep method that is different from that of other dolphin species. Living in water with strong currents and potentially dangerous floating debris, it must swim continuously to avoid injury. As a result, this species sleeps in very short bursts which last between 4 and 60 seconds.

There are various feeding methods among and within species, some apparently exclusive to a single population. Fish and squid are the main food, but the false killer whale and the orca also feed on other marine mammals. Orcas on occasion also hunt whale species larger than themselves.

Different species of dolphins vary widely in the number of teeth they possess. The orca usually carries 40–56 teeth, while the popular bottlenose dolphin has anywhere from 72 to 116 conical teeth, and its smaller cousin the common dolphin has 188–268 teeth; the number of teeth an individual carries also varies widely within a single species. Hybrids between common and bottlenose dolphins bred in captivity had a number of teeth intermediate between those of their parents.

One common feeding method is herding, where a pod squeezes a school of fish into a small volume, known as a bait ball. Individual members then take turns plowing through the ball, feeding on the stunned fish. Corralling is a method where dolphins chase fish into shallow water to catch them more easily. Orcas and bottlenose dolphins have also been known to drive their prey onto a beach to feed on it, a behaviour known as beach or strand feeding. Some species also whack fish with their flukes, stunning them and sometimes knocking them out of the water.

Reports of cooperative human-dolphin fishing date back to the ancient Roman author and natural philosopher Pliny the Elder. A modern human-dolphin partnership currently operates in Laguna, Santa Catarina, Brazil. Here, dolphins drive fish towards fishermen waiting along the shore and signal the men to cast their nets. The dolphins' reward is the fish that escape the nets.

In Shark Bay, Australia, dolphins catch fish by trapping them in huge conch shells. In "shelling", a dolphin brings the shell to the surface and shakes it, so that fish sheltering within fall into the dolphin's mouth. From 2007 to 2018, in 5,278 encounters with dolphins, researchers observed 19 dolphins shelling 42 times. The behavior spreads mainly within generations, rather than being passed from mother to offspring.

Dolphins are capable of making a broad range of sounds using nasal airsacs located just below the blowhole. Roughly three categories of sounds can be identified: frequency-modulated whistles, burst-pulsed sounds, and clicks.
Dolphins communicate with whistle-like sounds produced by vibrating connective tissue, similar to the way human vocal cords function, and through burst-pulsed sounds, though the nature and extent of that ability are not known. The clicks are directional and are used for echolocation, often occurring in a short series called a click train. The click rate increases when approaching an object of interest (a brief worked example of the underlying echo timing is sketched in the aside at the end of this passage). Dolphin echolocation clicks are amongst the loudest sounds made by marine animals.

Bottlenose dolphins have been found to have signature whistles, a whistle that is unique to a specific individual. These whistles are used by dolphins to communicate with one another by identifying an individual; they can be seen as the dolphin equivalent of a human name. A signature whistle is developed during a dolphin's first year, and the dolphin maintains the same sound throughout its lifetime. In order to obtain each individual whistle sound, dolphins undergo vocal production learning. This consists of an experience with other dolphins that modifies the signal structure of an existing whistle sound; auditory experience thus influences the whistle development of each dolphin. Dolphins are able to address one another by mimicking the whistle of the individual concerned. The signature whistle of a male bottlenose dolphin tends to be similar to that of his mother, while the signature whistle of a female bottlenose dolphin tends to be more distinguishing. Bottlenose dolphins have a strong memory when it comes to these signature whistles: they are able to recognize the signature whistle of an individual they have not encountered for over twenty years. Research on signature whistle usage by other dolphin species is relatively limited, and the work done so far has yielded varied outcomes and inconclusive results.

Because dolphins generally live in groups, communication is necessary. Signal masking is when other similar sounds (conspecific sounds) interfere with the original acoustic sound. In larger groups, individual whistle sounds are less prominent. Dolphins tend to travel in pods ranging from a few individuals to many. Although they travel in these pods, the dolphins do not necessarily swim right next to each other, but rather within the same general vicinity. To avoid losing spread-out pod members, dolphins raise their whistle rates, allowing the group to continue traveling together.
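As an aside on the click trains just described: the ranging side of echolocation follows directly from echo timing. This is a minimal sketch assuming a typical sound speed of about 1,500 m/s in seawater; the numbers are illustrative, not measurements reported in this article.

\[
r = \frac{c\,\Delta t}{2}, \qquad c \approx 1500\ \mathrm{m/s}
\]

% An echo returning Δt = 40 ms after the outgoing click implies a target at
%   r ≈ (1500 × 0.040) / 2 = 30 m.
% The factor of 2 accounts for the sound's two-way travel; shorter delays mean
% closer targets, which is consistent with the observation that click rates
% rise as a dolphin closes on an object of interest.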
Dolphins frequently leap above the water surface, for various reasons. When travelling, jumping can save the dolphin energy, as there is less friction while in the air; this type of travel is known as porpoising. Other reasons include orientation, social displays, fighting, non-verbal communication, entertainment and attempting to dislodge parasites.

Dolphins show various types of playful behavior, often involving objects, self-made bubble rings, other dolphins or other animals. When playing with objects or small animals, common behavior includes carrying the object or animal along using various parts of the body, passing it along to other members of the group or taking it from another member, or throwing it out of the water. Dolphins have also been observed harassing animals in other ways, for example by dragging birds underwater without showing any intent to eat them. Playful behaviour that involves another animal species, with active participation of the other animal, has also been observed. Playful dolphin interactions with humans are the most obvious examples, followed by those with humpback whales and dogs.

Juvenile dolphins off the coast of Western Australia have been observed chasing, capturing, and chewing on blowfish. While some reports state that the dolphins are becoming intoxicated on the tetrodotoxin in the fishes' skin, other reports have characterized this behavior as the normal curiosity and exploration of their environment in which dolphins engage.

Although this behaviour is highly unusual in wild dolphins, several Indo-Pacific bottlenose dolphins (Tursiops aduncus) of the Port River, north of Adelaide, South Australia, have been seen to exhibit "tail-walking". This activity mimics a standing posture, using the tail to run backwards along the water; to perform it, the dolphin "forces the majority of its body vertically out of the water and maintains the position by vigorously pumping its tail". The behaviour started in 1988 with a female named Billie, who had previously been observed swimming and frolicking with racehorses exercising in the Port River in the 1980s. After becoming trapped in a polluted marina further down the coast, she was rescued and spent two weeks recuperating with several captive dolphins at a marine park, where she observed the captive dolphins performing tail-walking. After being returned to the Port River, she continued to perform this trick, and another dolphin, Wave, copied her. Wave, a very active tail-walker, passed on the skill to her daughters, Ripple and Tallula. After Billie's premature death, Wave started tail-walking much more frequently, and other dolphins in the group were observed also performing the behaviour. In 2011, up to 12 dolphins were observed tail-walking, but only females appeared to learn the skill. In October 2021, a dolphin was observed tail-walking over a number of hours. Scientists have found the spread of this behaviour, through up to two generations, surprising, as it brings no apparent advantage and is very energy-consuming. A 2018 study by Mike Bossley et al. suggested: Social learning is the most likely mechanism for the introduction and spread of this unusual behaviour, which has no known adaptive function. These observations demonstrate the potential strength of the capacity for spontaneous imitation in bottlenose dolphins, and help explain the origin and spread of foraging specializations observed in multiple populations of this genus.

Dolphins have few marine enemies. Some species or specific populations have none, making them apex predators. For most of the smaller species of dolphins, only a few of the larger sharks, such as the bull shark, dusky shark, tiger shark and great white shark, are a potential risk, especially for calves. Some of the larger dolphin species, especially orcas, may also prey on smaller dolphins, but this seems rare. Dolphins also suffer from a wide variety of diseases and parasites. The Cetacean morbillivirus in particular has been known to cause regional epizootics, often leaving hundreds of animals of various species dead. Symptoms of infection are often a severe combination of pneumonia, encephalitis and damage to the immune system, which greatly impair the cetacean's ability to swim and stay afloat unassisted. A study at the U.S.
National Marine Mammal Foundation revealed that dolphins, like humans, develop a natural form of type 2 diabetes, which may lead to a better understanding of the disease and new treatments for both humans and dolphins.

Dolphins can tolerate and recover from extreme injuries such as shark bites, although the exact methods used to achieve this are not known. The healing process is rapid, and even very deep wounds do not cause dolphins to hemorrhage to death. Furthermore, even gaping wounds heal in such a way that the animal's body shape is restored, and infection of such large wounds seems rare. A study published in the journal Marine Mammal Science suggests that at least some dolphins survive shark attacks using everything from sophisticated combat moves to teaming up against the shark.

Some dolphin species are at risk of extinction, especially some river dolphin species such as the Amazon river dolphin, and the Ganges and Yangtze river dolphins, which are critically or seriously endangered. A 2006 survey found no individuals of the Yangtze river dolphin; the species now appears to be functionally extinct.

Pesticides, heavy metals, plastics, and other industrial and agricultural pollutants that do not disintegrate rapidly in the environment concentrate in predators such as dolphins. Injuries or deaths due to collisions with boats, especially their propellers, are also common. Various fishing methods, most notably purse seine fishing for tuna and the use of drift and gill nets, unintentionally kill many dolphins. Accidental by-catch in gill nets and incidental captures in antipredator nets that protect marine fish farms are common and pose a risk for mainly local dolphin populations.

In some parts of the world, such as Taiji in Japan and the Faroe Islands, dolphins are traditionally considered food and are killed in harpoon or drive hunts. Dolphin meat is high in mercury and may thus pose a health danger to humans when consumed.

Queensland's shark culling program, which has killed roughly 50,000 sharks since 1962, has also killed thousands of dolphins as bycatch. "Shark control" programs in both Queensland and New South Wales use shark nets and drum lines, which entangle and kill dolphins. Queensland's "shark control" program has killed more than 1,000 dolphins in recent years, and at least 32 dolphins have been killed in Queensland since 2014. A shark culling program in KwaZulu-Natal has killed at least 2,310 dolphins.

Dolphin-safe labels attempt to reassure consumers that fish and other marine products have been caught in a dolphin-friendly way. The earliest campaigns with "dolphin safe" labels were initiated in the 1980s as a result of cooperation between marine activists and the major tuna companies, and involved decreasing incidental dolphin kills by up to 50% by changing the type of nets used to catch tuna. The dolphins are netted only while fishermen are in pursuit of smaller tuna. Albacore are not netted this way, making albacore the only truly dolphin-safe tuna.

Loud underwater noises, such as those resulting from naval sonar use, live firing exercises, and certain offshore construction projects such as wind farms, may be harmful to dolphins, increasing stress, damaging hearing, and causing decompression sickness by forcing them to surface too quickly to escape the noise.

Dolphins and other smaller cetaceans are also hunted in an activity known as dolphin drive hunting. This is accomplished by driving a pod together with boats, usually into a bay or onto a beach.
Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru, and Japan, the most well-known practitioner of this method. By numbers, dolphins are mostly hunted for their meat, though some end up in dolphinariums. Despite the hunt's controversial nature, which has drawn international criticism, and the possible health risk posed by the often polluted meat, thousands of dolphins are caught in drive hunts each year.

Dolphins are marine mammals with broad geographic extent, making them susceptible to climate change in various ways. The most common effect of climate change on dolphins is the increase of water temperatures across the globe. This has caused a large variety of dolphin species to experience range shifts, in which the species move from their typical geographic region to cooler waters. Another side effect of increasing water temperatures is the increase in harmful algal blooms, which has caused a mass die-off of bottlenose dolphins.

In California, the 1982–83 El Niño warming event caused the near-bottom spawning market squid to leave southern California, which caused their predator, the pilot whale, to also leave. As the market squid returned six years later, Risso's dolphins came to feed on the squid. Bottlenose dolphins expanded their range from southern to central California, and stayed even after the warming event subsided.

The Pacific white-sided dolphin has had a decline in population in the southwest Gulf of California, the southern boundary of its distribution. In the 1980s the dolphins were abundant there, with group sizes of up to 200 across the entire cool season. Then, in the 2000s, only two groups were recorded, with sizes of 20 and 30, and only across the central cool season. This decline was not related to a decline of other marine mammals or prey, so it was concluded to have been caused by climate change, as it occurred during a period of warming. Additionally, the Pacific white-sided dolphin had an increase in occurrence on the west coast of Canada from 1984 to 1998.

In the Mediterranean, sea surface temperatures have increased, as well as salinity, upwelling intensity, and sea levels. Because of this, prey resources have been reduced, causing a steep decline in the short-beaked common dolphin's Mediterranean subpopulation, which was deemed endangered in 2003. This species now exists only in the Alboran Sea, due to its high productivity, distinct ecosystem, and differing conditions from the rest of the Mediterranean.

In northwest Europe, many dolphin species have experienced range shifts from the region's typically colder waters. Warm-water dolphins, like the short-beaked common dolphin and striped dolphin, have expanded north of western Britain and into the northern North Sea, even in the winter, which may displace the white-beaked and Atlantic white-sided dolphins of that region. The white-beaked dolphin has shown an increase in the southern North Sea since the 1960s because of this. The rough-toothed dolphin and Atlantic spotted dolphin may also move to northwest Europe. In northwest Scotland, white-beaked dolphins (local to the colder waters of the North Atlantic) have decreased while common dolphins (local to warmer waters) have increased from 1992 to 2003. Additionally, Fraser's dolphin, found in tropical waters, was recorded in the UK for the first time in 1996.
River dolphins are highly affected by climate change as high evaporation rates, increased water temperatures, decreased precipitation, and increased acidification occur. River dolphins typically occur at higher densities in rivers with a low index of freshwater degradation and better water quality. For the Ganges river dolphin specifically, high evaporation rates and increased flooding on the plains may lead to more human regulation of rivers, decreasing the dolphin population.

Warmer waters also reduce dolphin prey, which drives further population declines. In the case of bottlenose dolphins, mullet populations decrease due to increasing water temperatures, which leads to a decrease in the dolphins' health and thus their population. At the Shark Bay World Heritage Area in Western Australia, the local Indo-Pacific bottlenose dolphin population had a significant decline after a marine heatwave in 2011. This heatwave caused a decrease in prey, which led to a decline in dolphin reproductive rates, as female dolphins could not get enough nutrients to sustain a calf. The resulting decrease in fish populations due to warming waters has also influenced humans to see dolphins as fishing competitors or even bait: dusky dolphins are used as bait or killed off because they consume the same fish humans catch and sell for profit. In the central Brazilian Amazon alone, approximately 600 pink river dolphins are killed each year to be used as bait.

Dolphins have long played a role in human culture. In Greek myths, dolphins were seen invariably as helpers of humankind. Dolphins also seem to have been important to the Minoans, judging by artistic evidence from the ruined palace at Knossos. During the 2009 excavations of a major Mycenaean city at Iklaina, a striking fragment of a wall-painting came to light, depicting a ship with three human figures and dolphins. Dolphins are common in Greek mythology, and many coins from ancient Greece have been found which feature a man, a boy or a deity riding on the back of a dolphin. The Ancient Greeks welcomed dolphins; spotting dolphins riding in a ship's wake was considered a good omen. In both ancient and later art, Cupid is often shown riding a dolphin. A dolphin rescued the poet Arion from drowning and carried him safe to land at Cape Matapan, a promontory forming the southernmost point of the Peloponnesus, where there was a temple to Poseidon and a statue of Arion riding the dolphin.

The Greeks reimagined the Phoenician god Melqart as Melikertês (Melicertes) and made him the son of Athamas and Ino. He drowned but was transfigured as the marine deity Palaemon, while his mother became Leucothea (cf. Ino). At Corinth, he was so closely connected with the cult of Poseidon that the Isthmian Games, originally instituted in Poseidon's honor, came to be looked upon as the funeral games of Melicertes. Phalanthus was another legendary character brought safely to shore (in Italy) on the back of a dolphin, according to Pausanias.

Dionysus was once captured by Etruscan pirates who mistook him for a wealthy prince they could ransom. After the ship set sail, Dionysus invoked his divine powers, causing vines to overgrow the ship where the mast and sails had been. He turned the oars into serpents, so terrifying the sailors that they jumped overboard, but Dionysus took pity on them and transformed them into dolphins so that they would spend their lives providing help for those in need.
Dolphins were also the messengers of Poseidon and sometimes ran errands for him. They were sacred to both Aphrodite and Apollo.

"Dolfin" was the name of an aristocratic family in the maritime Republic of Venice, whose most prominent member was the 14th-century Doge Giovanni Dolfin.

In Hindu mythology the Ganges river dolphin is associated with Ganga, the deity of the Ganges river. The dolphin is said to be among the creatures which heralded the goddess' descent from the heavens, and her mount, the Makara, is sometimes depicted as a dolphin.

The boto, a river dolphin that resides in the Amazon River, is believed to be a shapeshifter, or encantado, capable of having children with human women.

There are comparatively few surviving myths of dolphins in Polynesian cultures, in spite of their maritime traditions and the relevance of other marine animals such as sharks and seabirds; unlike these, dolphins are more often perceived as food than as totemic symbols. Dolphins are most clearly represented in Rapa Nui Rongorongo, and in the traditions of the Caroline Islands they are depicted similarly to the boto, being sexually active shapeshifters.

Dolphins are also used as symbols, for instance in heraldry. When heraldry developed in the Middle Ages, little was known about the biology of the dolphin and it was often depicted as a sort of fish. The stylised heraldic dolphin still conventionally follows this tradition, sometimes showing the dolphin skin covered with fish scales. A well-known historical example was the coat of arms of the former province of the Dauphiné in southern France, from which were derived the arms and the title of the Dauphin of France, the heir to the former throne of France (the title literally meaning "The Dolphin of France"). Dolphins are present in the coat of arms of Anguilla and the coat of arms of Romania, and the coat of arms of Barbados has a dolphin supporter. The coat of arms of the town of Poole, Dorset, England, first recorded in 1563, includes a dolphin, which was historically depicted in stylised heraldic form, but which since 1976 has been depicted naturalistically.

The renewed popularity of dolphins in the 1960s resulted in the appearance of many dolphinaria around the world, making dolphins accessible to the public. Criticism and animal welfare laws forced many to close, although hundreds still exist around the world. In the United States, the best known are the SeaWorld marine mammal parks; in the Middle East, the best known are Dolphin Bay at Atlantis, The Palm and the Dubai Dolphinarium.

Various species of dolphins are kept in captivity, more often than not in theme parks known as dolphinariums, such as SeaWorld. Bottlenose dolphins are the most common species kept in dolphinariums, as they are relatively easy to train, have a long lifespan in captivity and have a friendly appearance. Hundreds if not thousands of bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. Other species kept in captivity are spotted dolphins, false killer whales, common dolphins, Commerson's dolphins, and rough-toothed dolphins, but all in much lower numbers than the bottlenose dolphin. There are also fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi in captivity.
An unusual and very rare hybrid dolphin known as a wolphin – a cross between a bottlenose dolphin and a false killer whale – is kept at the Sea Life Park in Hawaii. The number of orcas kept in captivity is very small, especially when compared to the number of bottlenose dolphins, with 60 captive orcas being held in aquaria as of 2017. The orca's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born.

Organizations such as the Mote Marine Laboratory rescue and rehabilitate sick, wounded, stranded or orphaned dolphins, while others, such as Whale and Dolphin Conservation and the Hong Kong Dolphin Conservation Society, work on dolphin conservation and welfare. India has declared the dolphin its national aquatic animal in an attempt to protect the endangered Ganges river dolphin. The Vikramshila Gangetic Dolphin Sanctuary has been created in the Ganges river for the protection of the animals.

There is debate over the welfare of cetaceans in captivity, and welfare can vary greatly depending on the levels of care provided at a particular facility. In the United States, facilities are regularly inspected by federal agencies to ensure that a high standard of welfare is maintained. Additionally, facilities can apply to become accredited by the Association of Zoos and Aquariums (AZA), which (for accreditation) requires "the highest standards of animal care and welfare in the world" to be achieved. Facilities such as SeaWorld and the Georgia Aquarium are accredited by the AZA. Organizations such as World Animal Protection and Whale and Dolphin Conservation campaign against the practice of keeping them in captivity.

In captivity, dolphins often develop pathologies, such as the dorsal fin collapse seen in 60–90% of male orcas. Captives have vastly reduced life expectancies, on average only living into their 20s, although there are examples of orcas living longer, including several over 30 years old, and two captive orcas, Corky II and Lolita, are in their mid-40s. In the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases. Wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild orca behavior. Wild orcas may travel up to 160 kilometres (100 mi) in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress.

Although dolphins generally interact well with humans, some attacks have occurred, most of them resulting in small injuries. Orcas, the largest species of dolphin, have been involved in fatal attacks on humans in captivity. The record-holder for documented fatal orca attacks is a male named Tilikum, who lived at SeaWorld from 1992 until his death in 2017.
Tilikum played a role in the deaths of three people in three separate incidents (1991, 1999 and 2010). His behaviour sparked the production of the documentary Blackfish, which focuses on the consequences of keeping orcas in captivity. There are documented incidents in the wild, too, but none of them fatal.

Fatal attacks from other species are less common, but there is a registered occurrence off the coast of Brazil in 1994, when a man died after being attacked by a bottlenose dolphin named Tião. Tião had suffered harassment by human visitors, including attempts to stick ice cream sticks down his blowhole. Non-fatal incidents occur more frequently, both in the wild and in captivity.

While dolphin attacks occur far less frequently than attacks by other sea animals, such as sharks, some scientists are worried about careless programs of human-dolphin interaction. Andrew J. Read, a biologist at the Duke University Marine Laboratory who studies dolphin attacks, points out that dolphins are large, wild predators, so people should be more careful when they interact with them.

Several scientists who have researched dolphin behaviour have proposed that dolphins' unusually high intelligence in comparison to other animals means they should be seen as non-human persons with their own specific rights, and that it is morally unacceptable to keep them captive for entertainment or to kill them, whether intentionally for consumption or unintentionally as by-catch. Four countries – Chile, Costa Rica, Hungary, and India – have declared dolphins to be "non-human persons" and have banned the capture and import of live dolphins for entertainment.

A number of militaries have employed dolphins for various purposes, from finding mines to rescuing lost or trapped humans. The military use of dolphins drew scrutiny during the Vietnam War, when rumors circulated that the United States Navy was training dolphins to kill Vietnamese divers. The United States Navy denies that dolphins were ever trained for combat, though it still trains dolphins for other tasks as part of the U.S. Navy Marine Mammal Program. The Russian military is believed to have closed its marine mammal program in the early 1990s, and in 2000 the press reported that dolphins trained to kill by the Soviet Navy had been sold to Iran. Militaries are also interested in disguising underwater communications as artificial dolphin clicks.

Dolphins are an increasingly popular choice for animal-assisted therapy for psychological problems and developmental disabilities. For example, a 2005 study found dolphins an effective treatment for mild to moderate depression. That study was criticized on several grounds, including a lack of knowledge of whether dolphins are more effective than common pets. Reviews of this and other published dolphin-assisted therapy (DAT) studies have found important methodological flaws and have concluded that there is no compelling scientific evidence that DAT is a legitimate therapy or that it affords more than fleeting mood improvement.

In some parts of the world, such as Taiji in Japan and the Faroe Islands, dolphins are traditionally considered food and are killed in harpoon or drive hunts. Dolphin meat is consumed in a small number of countries worldwide, including Japan and Peru (where it is referred to as chancho marino, or "sea pork").
While Japan may be the best-known and most controversial example, only a very small minority of the Japanese population has ever sampled dolphin meat. The meat is dense and such a dark shade of red as to appear black, with the fat located in a layer of blubber between the meat and the skin. When dolphin meat is eaten in Japan, it is often cut into thin strips and eaten raw as sashimi, garnished with onion and either horseradish or grated garlic, much as with sashimi of whale or horse meat (basashi). When cooked, dolphin meat is cut into bite-size cubes and then batter-fried or simmered in a miso sauce with vegetables; cooked dolphin meat has a flavor very similar to beef liver.

There have been human health concerns associated with the consumption of dolphin meat in Japan after tests showed that it contained high levels of mercury. There are no known cases of mercury poisoning as a result of consuming dolphin meat, though the government continues to monitor people in areas where dolphin meat consumption is high. The Japanese government recommends that children and pregnant women avoid eating dolphin meat on a regular basis.

Similar concerns exist with the consumption of dolphin meat in the Faroe Islands, where prenatal exposure to methylmercury and PCBs, primarily from the consumption of pilot whale meat, has resulted in neuropsychological deficits among children. The population was exposed to methylmercury largely from contaminated pilot whale meat, which contained very high levels of about 2 mg methylmercury/kg, although Faroe Islanders also eat significant quantities of fish. A study of about 900 Faroese children showed that prenatal exposure to methylmercury resulted in neuropsychological deficits at seven years of age.
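To put the 2 mg/kg contamination figure above in perspective, the following is a minimal back-of-the-envelope sketch in Python. The reference dose used for comparison (the U.S. EPA's methylmercury RfD of 0.1 micrograms per kilogram of body weight per day) is an external benchmark, not taken from this article, and the serving size and body weight are purely illustrative assumptions.

```python
# Rough estimate of methylmercury intake from contaminated pilot whale meat.
# Assumptions (illustrative, not from the article): 200 g serving, 60 kg adult,
# EPA reference dose (RfD) of 0.1 micrograms per kg of body weight per day.

MEHG_CONC_MG_PER_KG = 2.0      # methylmercury in pilot whale meat (figure from the article)
SERVING_G = 200.0              # assumed serving size
BODY_WEIGHT_KG = 60.0          # assumed adult body weight
EPA_RFD_UG_PER_KG_DAY = 0.1    # EPA reference dose for methylmercury (external benchmark)

# Intake per serving: concentration (mg/kg) * serving mass (kg), converted mg -> ug.
intake_ug = MEHG_CONC_MG_PER_KG * (SERVING_G / 1000.0) * 1000.0

# Maximum daily intake implied by the RfD for this body weight.
daily_limit_ug = EPA_RFD_UG_PER_KG_DAY * BODY_WEIGHT_KG

print(f"Methylmercury per serving: {intake_ug:.0f} ug")
print(f"RfD-based daily limit:     {daily_limit_ug:.0f} ug")
print(f"One serving equals about {intake_ug / daily_limit_ug:.0f} days of the reference dose")
```

Under these assumptions a single 200 g serving carries roughly 400 micrograms of methylmercury, about 67 times the RfD-based daily limit for a 60 kg adult, which illustrates why regular consumption, rather than occasional sampling, is the health concern described above.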
[ { "paragraph_id": 0, "text": "A dolphin is an aquatic mammal within the infraorder Cetacea. Dolphin species belong to the families Delphinidae (the oceanic dolphins), Platanistidae (the Indian river dolphins), Iniidae (the New World river dolphins), Pontoporiidae (the brackish dolphins), and possibly extinct Lipotidae (baiji or Chinese river dolphin). There are 40 extant species named as dolphins.", "title": "" }, { "paragraph_id": 1, "text": "Dolphins range in size from the 1.7-metre-long (5 ft 7 in) and 50-kilogram (110-pound) Maui's dolphin to the 9.5 m (31 ft) and 10-tonne (11-short-ton) orca. Various species of dolphins exhibit sexual dimorphism where the males are larger than females. They have streamlined bodies and two limbs that are modified into flippers. Though not quite as flexible as seals, they are faster; some dolphins can briefly travel at speeds of 29 kilometres per hour (18 mph) or leap about 9 metres (30 ft). Dolphins use their conical teeth to capture fast-moving prey. They have well-developed hearing which is adapted for both air and water. It is so well developed that some can survive even if they are blind. Some species are well adapted for diving to great depths. They have a layer of fat, or blubber, under the skin to keep warm in the cold water.", "title": "" }, { "paragraph_id": 2, "text": "Dolphins are widespread. Most species prefer the warm waters of the tropic zones, but some, such as the right whale dolphin, prefer colder climates. Dolphins feed largely on fish and squid, but a few, such as the orca, feed on large mammals such as seals. Male dolphins typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively long period of time. Dolphins produce a variety of vocalizations, usually in the form of clicks and whistles.", "title": "" }, { "paragraph_id": 3, "text": "Dolphins are sometimes hunted in places such as Japan, in an activity known as dolphin drive hunting. Besides drive hunting, they also face threats from bycatch, habitat loss, and marine pollution. Dolphins have been depicted in various cultures worldwide. Dolphins are sometimes kept in captivity and trained to perform tricks. The most common dolphin species in captivity is the bottlenose dolphin, while there are around 60 orcas in captivity.", "title": "" }, { "paragraph_id": 4, "text": "The name is originally from Greek δελφίς (delphís), \"dolphin\", which was related to the Greek δελφύς (delphus), \"womb\". The animal's name can therefore be interpreted as meaning \"a 'fish' with a womb\". The name was transmitted via the Latin delphinus (the romanization of the later Greek δελφῖνος – delphinos), which in Medieval Latin became dolfinus and in Old French daulphin, which reintroduced the ph into the word \"Dolphin\". The term mereswine (that is, \"sea pig\") has also historically been used.", "title": "Etymology" }, { "paragraph_id": 5, "text": "The term 'dolphin' can be used to refer to most species in the family Delphinidae (oceanic dolphins) and the river dolphin families Iniidae (South American river dolphins), Pontoporiidae (La Plata dolphin), Lipotidae (Yangtze river dolphin) and Platanistidae (Ganges river dolphin and Indus river dolphin). 
This term has often been applied in the US, mainly in the fishing industry, to all small cetaceans (dolphins and porpoises) are considered to be porpoises, while the fish dorado is called dolphin fish. In common usage the term 'whale' is used only for the larger cetacean species, while the smaller ones with a beaked or longer nose are considered 'dolphins'. The name 'dolphin' is used casually as a synonym for bottlenose dolphin, the most common and familiar species of dolphin. There are six species of dolphins commonly thought of as whales, collectively known as blackfish: the orca, the melon-headed whale, the pygmy killer whale, the false killer whale, and the two species of pilot whales, all of which are classified under the family Delphinidae and qualify as dolphins. Although the terms 'dolphin' and 'porpoise' are sometimes used interchangeably, 'porpoise' usually refers to the Phocoenidae family, which have a shorter beak and spade-shaped teeth and differ in their behavior.", "title": "Etymology" }, { "paragraph_id": 6, "text": "A group of dolphins is called a \"school\" or a \"pod\". Male dolphins are called \"bulls\", females called \"cows\" and young dolphins are called \"calves\".", "title": "Etymology" }, { "paragraph_id": 7, "text": "In 1933, three hybrid dolphins beached off the Irish coast; they were hybrids between Risso's and bottlenose dolphins. This mating was later repeated in captivity, producing a hybrid calf. In captivity, a bottlenose and a rough-toothed dolphin produced hybrid offspring. A common-bottlenose hybrid lives at SeaWorld California. Other dolphin hybrids live in captivity around the world or have been reported in the wild, such as a bottlenose-Atlantic spotted hybrid. The best known hybrid is the wholphin, a false killer whale-bottlenose dolphin hybrid. The wolphin is a fertile hybrid. Two wolphins currently live at the Sea Life Park in Hawaii; the first was born in 1985 from a male false killer whale and a female bottlenose. Wolphins have also been observed in the wild.", "title": "Hybridization" }, { "paragraph_id": 8, "text": "Dolphins are descendants of land-dwelling mammals of the artiodactyl order (even-toed ungulates). They are related to the Indohyus, an extinct chevrotain-like ungulate, from which they split approximately 48 million years ago.", "title": "Evolution" }, { "paragraph_id": 9, "text": "The primitive cetaceans, or archaeocetes, first took to the sea approximately 49 million years ago and became fully aquatic by 5–10 million years later.", "title": "Evolution" }, { "paragraph_id": 10, "text": "Archaeoceti is a parvorder comprising ancient whales. These ancient whales are the predecessors of modern whales, stretching back to their first ancestor that spent their lives near (rarely in) the water. Likewise, the archaeocetes can be anywhere from near fully terrestrial, to semi-aquatic to fully aquatic, but what defines an archaeocete is the presence of visible legs or asymmetrical teeth. Their features became adapted for living in the marine environment. 
Major anatomical changes include the hearing set-up that channeled vibrations from the jaw to the earbone which occurred with Ambulocetus 49 million years ago, a streamlining of the body and the growth of flukes on the tail which occurred around 43 million years ago with Protocetus, the migration of the nasal openings toward the top of the cranium and the modification of the forelimbs into flippers which occurred with Basilosaurus 35 million years ago, and the shrinking and eventual disappearance of the hind limbs which took place with the first odontocetes and mysticetes 34 million years ago. The modern dolphin skeleton has two small, rod-shaped pelvic bones thought to be vestigial hind limbs. In October 2006, an unusual bottlenose dolphin was captured in Japan; it had small fins on each side of its genital slit, which scientists believe to be an unusually pronounced development of these vestigial hind limbs.", "title": "Evolution" }, { "paragraph_id": 11, "text": "Today, the closest living relatives of cetaceans are the hippopotamuses; these share a semi-aquatic ancestor that branched off from other artiodactyls some 60 million years ago. Around 40 million years ago, a common ancestor between the two branched off into cetacea and anthracotheres; anthracotheres became extinct at the end of the Pleistocene two-and-a-half million years ago, eventually leaving only one surviving lineage: the two species of hippo.", "title": "Evolution" }, { "paragraph_id": 12, "text": "Dolphins have torpedo-shaped bodies with generally non-flexible necks, limbs modified into flippers, a tail fin, and bulbous heads. Dolphin skulls have small eye orbits, long snouts, and eyes placed on the sides of its head; they lack external ear flaps. Dolphins range in size from the 1.7 m (5 ft 7 in) long and 50 kg (110 lb) Maui's dolphin to the 9.5 m (31 ft 2 in) and 10 t (11 short tons) orca. Overall, they tend to be dwarfed by other Cetartiodactyls. Several species have female-biased sexual dimorphism, with the females being larger than the males.", "title": "Anatomy" }, { "paragraph_id": 13, "text": "Dolphins have conical teeth, as opposed to porpoises' spade-shaped teeth. These conical teeth are used to catch swift prey such as fish, squid or large mammals, such as seals.", "title": "Anatomy" }, { "paragraph_id": 14, "text": "Breathing involves expelling stale air from their blowhole, in an upward blast, which may be visible in cold air, followed by inhaling fresh air into the lungs. Dolphins have rather small, unidentifiable spouts.", "title": "Anatomy" }, { "paragraph_id": 15, "text": "All dolphins have a thick layer of blubber, thickness varying on climate. This blubber can help with buoyancy, protection to some extent as predators would have a hard time getting through a thick layer of fat, and energy for leaner times; the primary usage for blubber is insulation from the harsh climate. Calves, generally, are born with a thin layer of blubber, which develops at different paces depending on the habitat.", "title": "Anatomy" }, { "paragraph_id": 16, "text": "Dolphins have a two-chambered stomach that is similar in structure to terrestrial carnivores. They have fundic and pyloric chambers.", "title": "Anatomy" }, { "paragraph_id": 17, "text": "Dolphins' reproductive organs are located inside the body, with genital slits on the ventral (belly) side. Males have two slits, one concealing the penis and one further behind for the anus. 
Females have one genital slit, housing the vagina and the anus, with a mammary slit on either side.", "title": "Anatomy" }, { "paragraph_id": 18, "text": "The integumentary system is an organ system mostly consisted of skin, hair, nails and endocrine glands. The skin of dolphins is very important as it is specialized to satisfy specific requirements. Some of these requirements include protection, fat storage, heat regulation, and sensory perception. The skin of a dolphin is made up of two parts: the epidermis and the blubber, which consists of two layers including the dermis and subcutis. The dolphin's skin is known to have a smooth rubber texture and is without hair and glands, except mammary glands. At birth, a newborn dolphin has hairs lined up in a single band on both sides of the rostrum, which is their jaw, and usually has a total length of 16–17 cm . Dolphins are a part of the species Cetacea. The epidermis of this species is characterized by the lack of keratin and by a prominent intertwine of epidermal rete pegs and long dermal papillae. The epidermal rete pegs are the epithelial extensions that project into the underlying connective tissue in both skin and mucous membranes. The dermal papillae are finger-like projections that help adhesion between the epidermal and dermal layers, as well as providing a larger surface area to nourish the epidermal layer. The thickness of a dolphin's epidermis varies, depending on species and age.", "title": "Anatomy" }, { "paragraph_id": 19, "text": "Blubber is found within the dermis and subcutis layer. The dermis blends gradually with the adipose layer, which is known as fat, because the fat may extend up to the epidermis border and collagen fiber bundles extend throughout the whole subcutaneous blubber which is fat found under the skin. The thickness of the subcutaneous blubber or fat depends on the dolphin's health, development, location, reproductive state, and how well it feeds. This fat is thickest on the dolphin's back and belly. Most of the dolphin's body fat is accumulated in a thick layer of blubber. Blubber differs from fat in that, in addition to fat cells, it contains a fibrous network of connective tissue.", "title": "Anatomy" }, { "paragraph_id": 20, "text": "The blubber functions to streamline the body and to form specialized locomotor structures such as the dorsal fin, propulsive fluke blades and caudal keels. There are many nerve endings that resemble small, onion-like configurations that are present in the superficial portion of the dermis. Mechanoreceptors are found within the interlocks of the epidermis with dermal ridges. There are nerve fibers in the dermis that extend to the epidermis. These nerve endings are known to be highly proprioceptive, which explains sensory perception. Proprioception, which is also known as kinesthesia, is the body's ability to sense its location, movements and actions. Dolphins are sensitive to vibrations and small pressure changes. Blood vessels and nerve endings can be found within the dermis. There is a plexus of parallel running arteries and veins in the dorsal fin, fluke, and flippers. The blubber manipulates the blood vessels to help the dolphin stay warm. When the temperature drops, the blubber constricts the blood vessels to reduce blood flow in the dolphin. This allows the dolphin to spend less energy heating its own body, ultimately keeping the animal warmer without burning energy as quick. In order to release heat, the heat must pass the blubber layer. 
There are thermal windows that lack blubber, are not fully insulated and are somewhat thin and highly vascularized, including the dorsal fin, flukes, and flippers. These thermal windows are a good way for dolphins to get rid of excess heat if overheating. Additionally in order to conserve heat, dolphins use countercurrent heat exchange. Blood flows in different directions in order for heat to transfer across membranes. Heat from warm blood leaving the heart will heat up the cold blood that is headed back to the heart from the extremities, meaning that the heart always has warm blood and it decreases the heat lost to the water in those thermal windows.", "title": "Anatomy" }, { "paragraph_id": 21, "text": "Dolphins have two pectoral flippers, containing four digits, a boneless dorsal fin for stability, and a tail fin for propulsion. Although dolphins do not possess external hind limbs, some possess discrete rudimentary appendages, which may contain feet and digits. Dolphins are fast swimmers in comparison to seals which typically cruise at 9–28 km/h (5.6–17.4 mph); the orca, in comparison, can travel at speeds up to 55.5 km/h (34.5 mph). The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility, which means they are unable to turn their heads. River dolphins have non-fused neck vertebrae and can turn their heads up to 90°. Dolphins swim by moving their tail fin and rear body vertically, while their flippers are mainly used for steering. Some species log out of the water, which may allow them to travel faster. Their skeletal anatomy allows them to be fast swimmers. All species have a dorsal fin to prevent themselves from involuntarily spinning in the water.", "title": "Anatomy" }, { "paragraph_id": 22, "text": "Some dolphins are adapted for diving to great depths. In addition to their streamlined bodies, some can selectively slow their heart rate to conserve oxygen. Some can also re-route blood from tissue tolerant of water pressure to the heart, brain and other organs. Their hemoglobin and myoglobin store oxygen in body tissues, and they have twice as much myoglobin as hemoglobin.", "title": "Anatomy" }, { "paragraph_id": 23, "text": "", "title": "Anatomy" }, { "paragraph_id": 24, "text": "A dolphin ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equalizer between the outside air's low impedance and the cochlear fluid's high impedance. In dolphins, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, dolphins receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater. Dolphins send out high frequency clicks from an organ known as a melon. This melon consists of fat, and the skull of any such creature containing a melon will have a large depression. This allows dolphins to use echolocation for orientation. Though most dolphins do not have hair, they do have hair follicles that may perform some sensory function. Beyond locating an object, echolocation also provides the animal with an idea on an object's shape and size, though how exactly this works is not yet understood. 
The small hairs on the rostrum of the boto (river dolphins of South America) are believed to function as a tactile sense, possibly to compensate for the boto's poor eyesight.", "title": "Anatomy" }, { "paragraph_id": 25, "text": "A dolphin eye is relatively small for its size, yet they do retain a good degree of eyesight. As well as this, the eyes of a dolphin are placed on the sides of its head, so their vision consists of two fields, rather than a binocular view like humans have. When dolphins surface, their lens and cornea correct the nearsightedness that results from the water's refraction of light. Their eyes contain both rod and cone cells, meaning they can see in both dim and bright light, but they have far more rod cells than they do cone cells. They lack short wavelength sensitive visual pigments in their cone cells, indicating a more limited capacity for color vision than most mammals. Most dolphins have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum (eye tissue behind the retina); these adaptations allow for large amounts of light to pass through the eye and, therefore, a very clear image of the surrounding area. They also have glands on the eyelids and outer corneal layer that act as protection for the cornea.", "title": "Anatomy" }, { "paragraph_id": 26, "text": "The olfactory lobes and nerve are absent in dolphins, suggesting that they have no sense of smell.", "title": "Anatomy" }, { "paragraph_id": 27, "text": "Dolphins are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. Some have preferences for different kinds of fish, indicating some ability to taste.", "title": "Anatomy" }, { "paragraph_id": 28, "text": "Dolphins are known to teach, learn, cooperate, scheme, and grieve. The neocortex of many species is home to elongated spindle neurons that, prior to 2007, were known only in hominids. In humans, these cells are involved in social conduct, emotions, judgment, and theory of mind. Cetacean spindle neurons are found in areas of the brain that are analogous to where they are found in humans, suggesting that they perform a similar function.", "title": "Intelligence" }, { "paragraph_id": 29, "text": "Brain size was previously considered a major indicator of the intelligence of an animal. Since most of the brain is used for maintaining bodily functions, greater ratios of brain to body mass may increase the amount of brain mass available for more complex cognitive tasks. Allometric analysis indicates that mammalian brain size scales at approximately the ⅔ or ¾ exponent of the body mass. Comparison of a particular animal's brain size with the expected brain size based on such allometric analysis provides an encephalization quotient that can be used as another indication of animal intelligence. Orcas have the second largest brain mass of any animal on earth, next to the sperm whale. The brain to body mass ratio in some is second only to humans.", "title": "Intelligence" }, { "paragraph_id": 30, "text": "Self-awareness is seen, by some, to be a sign of highly developed, abstract thinking. Self-awareness, though not well-defined scientifically, is believed to be the precursor to more advanced processes like meta-cognitive reasoning (thinking about thinking) that are typical of humans. Research in this field has suggested that cetaceans, among others, possess self-awareness. 
The most widely used test for self-awareness in animals is the mirror test in which a mirror is introduced to an animal, and the animal is then marked with a temporary dye. If the animal then goes to the mirror in order to view the mark, it has exhibited strong evidence of self-awareness.", "title": "Intelligence" }, { "paragraph_id": 31, "text": "Some disagree with these findings, arguing that the results of these tests are open to human interpretation and susceptible to the Clever Hans effect. This test is much less definitive than when used for primates, because primates can touch the mark or the mirror, while cetaceans cannot, making their alleged self-recognition behavior less certain. Skeptics argue that behaviors that are said to identify self-awareness resemble existing social behaviors, and so researchers could be misinterpreting self-awareness for social responses to another individual. The researchers counter-argue that the behaviors shown are evidence of self-awareness, as they are very different from normal responses to another individual. Whereas apes can merely touch the mark on themselves with their fingers, cetaceans show less definitive behavior of self-awareness; they can only twist and turn themselves to observe the mark.", "title": "Intelligence" }, { "paragraph_id": 32, "text": "In 1995, Marten and Psarakos used television to test dolphin self-awareness. They showed dolphins real-time video of themselves, video of another dolphin and recorded footage. They concluded that their evidence suggested self-awareness rather than social behavior. While this particular study has not been repeated since then, dolphins have since passed the mirror test. Some researchers have argued that evidence for self-awareness has not been convincingly demonstrated.", "title": "Intelligence" }, { "paragraph_id": 33, "text": "Dolphins are highly social animals, often living in pods of up to a dozen individuals, though pod sizes and structures vary greatly between species and locations. In places with a high abundance of food, pods can merge temporarily, forming a superpod; such groupings may exceed 1,000 dolphins. Membership in pods is not rigid; interchange is common. They establish strong social bonds, and will stay with injured or ill members, helping them to breathe by bringing them to the surface if needed. This altruism does not appear to be limited to their own species. The dolphin Moko in New Zealand has been observed guiding a female pygmy sperm whale together with her calf out of shallow water where they had stranded several times. They have also been seen protecting swimmers from sharks by swimming circles around the swimmers or charging the sharks to make them go away.", "title": "Behavior" }, { "paragraph_id": 34, "text": "Dolphins communicate using a variety of clicks, whistle-like sounds and other vocalizations. Dolphins also use nonverbal communication by means of touch and posturing.", "title": "Behavior" }, { "paragraph_id": 35, "text": "Dolphins also display culture, something long believed to be unique to humans (and possibly other primate species). In May 2005, a discovery in Australia found Indo-Pacific bottlenose dolphins (Tursiops aduncus) teaching their young to use tools. They cover their snouts with sponges to protect them while foraging. This knowledge is mostly transferred by mothers to daughters, unlike simian primates, where knowledge is generally passed on to both sexes. Using sponges as mouth protection is a learned behavior. 
Another learned behavior was discovered among river dolphins in Brazil, where some male dolphins use weeds and sticks as part of a sexual display.", "title": "Behavior" }, { "paragraph_id": 36, "text": "Forms of care-giving between fellows and even for members of different species(see Moko (dolphin)) are recorded in various species – such as trying to save weakened fellows or female pilot whales holding up dead calves for long periods.", "title": "Behavior" }, { "paragraph_id": 37, "text": "Dolphins engage in acts of aggression towards each other. The older a male dolphin is, the more likely his body is to be covered with bite scars. Male dolphins can get into disputes over companions and females. Acts of aggression can become so intense that targeted dolphins sometimes go into exile after losing a fight.", "title": "Behavior" }, { "paragraph_id": 38, "text": "Male bottlenose dolphins have been known to engage in infanticide. Dolphins have also been known to kill porpoises (porpicide) for reasons which are not fully understood, as porpoises generally do not share the same diet as dolphins and are therefore not competitors for food supplies. The Cornwall Wildlife Trust records about one such death a year. Possible explanations include misdirected infanticide, misdirected sexual aggression or play behaviour.", "title": "Behavior" }, { "paragraph_id": 39, "text": "Dolphin copulation happens belly to belly; though many species engage in lengthy foreplay, the actual act is usually brief, but may be repeated several times within a short timespan. The gestation period varies with species; for the small Tucuxi dolphin, this period is around 11 to 12 months, while for the orca, the gestation period is around 17 months. Typically dolphins give birth to a single calf, which is, unlike most other mammals, born tail first in most cases. They usually become sexually active at a young age, even before reaching sexual maturity. The age of sexual maturity varies by species and sex.", "title": "Behavior" }, { "paragraph_id": 40, "text": "Dolphins are known to display non-reproductive sexual behavior, engaging in masturbation, stimulation of the genital area of other individuals using the rostrum or flippers, and homosexual contact.", "title": "Behavior" }, { "paragraph_id": 41, "text": "Various species of dolphin have been known to engage in sexual behavior, including copulation with dolphins of other species, and occasionally exhibit behave sexual behavior towards other animals, including humans. Sexual encounters may be violent, with male dolphins sometimes showing aggressive behavior towards both females and other males. Male dolphins may also work together and attempt to herd females in estrus, keeping the females by their side by means of both physical aggression and intimidation, to increase their chances of reproductive success.", "title": "Behavior" }, { "paragraph_id": 42, "text": "Generally, dolphins sleep with only one brain hemisphere in slow-wave sleep at a time, thus maintaining enough consciousness to breathe and to watch for possible predators and other threats. Sleep stages earlier in sleep can occur simultaneously in both hemispheres. In captivity, dolphins seemingly enter a fully asleep state where both eyes are closed and there is no response to mild external stimuli. In this case, respiration is automatic; a tail kick reflex keeps the blowhole above the water if necessary. Anesthetized dolphins initially show a tail kick reflex. 
Though a similar state has been observed with wild sperm whales, it is not known if dolphins in the wild reach this state. The Indus river dolphin has a sleep method that is different from that of other dolphin species. Living in water with strong currents and potentially dangerous floating debris, it must swim continuously to avoid injury. As a result, this species sleeps in very short bursts which last between 4 and 60 seconds.", "title": "Behavior" }, { "paragraph_id": 43, "text": "There are various feeding methods among and within species, some apparently exclusive to a single population. Fish and squid are the main food, but the false killer whale and the orca also feed on other marine mammals. Orcas on occasion also hunt whale species larger than themselves. Different breeds of dolphins vary widely in the number of teeth they possess. The orca usually carries 40–56 teeth while the popular bottlenose dolphin has anywhere from 72 to 116 conical teeth and its smaller cousin the common dolphin has 188–268 teeth: the number of teeth that an individual carries varies widely between within a single species. Hybrids between common and bottlenose bred in captivity had a number of teeth intermediate between that of their parents.", "title": "Behavior" }, { "paragraph_id": 44, "text": "One common feeding method is herding, where a pod squeezes a school of fish into a small volume, known as a bait ball. Individual members then take turns plowing through the ball, feeding on the stunned fish. Corralling is a method where dolphins chase fish into shallow water to catch them more easily. Orcas and bottlenose dolphins have also been known to drive their prey onto a beach to feed on it, a behaviour known as beach or strand feeding. Some species also whack fish with their flukes, stunning them and sometimes knocking them out of the water.", "title": "Behavior" }, { "paragraph_id": 45, "text": "Reports of cooperative human-dolphin fishing date back to the ancient Roman author and natural philosopher Pliny the Elder. A modern human-dolphin partnership currently operates in Laguna, Santa Catarina, Brazil. Here, dolphins drive fish towards fishermen waiting along the shore and signal the men to cast their nets. The dolphins' reward is the fish that escape the nets.", "title": "Behavior" }, { "paragraph_id": 46, "text": "In Shark Bay, Australia, dolphins catch fish by trapping them in huge conch shells. In \"shelling\", a dolphin brings the shell to the surface and shakes it, so that fish sheltering within fall into the dolphin's mouth. From 2007 to 2018, in 5,278 encounters with dolphins, researchers observed 19 dolphins shelling 42 times. The behavior spreads mainly within generations, rather than being passed from mother to offspring.", "title": "Behavior" }, { "paragraph_id": 47, "text": "Dolphins are capable of making a broad range of sounds using nasal airsacs located just below the blowhole. Roughly three categories of sounds can be identified: frequency modulated whistles, burst-pulsed sounds, and clicks. Dolphins communicate with whistle-like sounds produced by vibrating connective tissue, similar to the way human vocal cords function, and through burst-pulsed sounds, though the nature and extent of that ability is not known. The clicks are directional and are for echolocation, often occurring in a short series called a click train. The click rate increases when approaching an object of interest. 
Dolphin echolocation clicks are amongst the loudest sounds made by marine animals.", "title": "Behavior" }, { "paragraph_id": 48, "text": "Bottlenose dolphins have been found to have signature whistles, a whistle that is unique to a specific individual. These whistles are used in order for dolphins to communicate with one another by identifying an individual. It can be seen as the dolphin equivalent of a name for humans. These signature whistles are developed during a dolphin's first year; it continues to maintain the same sound throughout its lifetime. In order to obtain each individual whistle sound, dolphins undergo vocal production learning. This consists of an experience with other dolphins that modifies the signal structure of an existing whistle sound. An auditory experience influences the whistle development of each dolphin. Dolphins are able to communicate to one another by addressing another dolphin through mimicking their whistle. The signature whistle of a male bottlenose dolphin tends to be similar to that of his mother, while the signature whistle of a female bottlenose dolphin tends to be more distinguishing. Bottlenose dolphins have a strong memory when it comes to these signature whistles, as they are able to relate to a signature whistle of an individual they have not encountered for over twenty years. Research done on signature whistle usage by other dolphin species is relatively limited. The research on other species done so far has yielded varied outcomes and inconclusive results.", "title": "Behavior" }, { "paragraph_id": 49, "text": "Because dolphins are generally associated in groups, communication is necessary. Signal masking is when other similar sounds (conspecific sounds) interfere with the original acoustic sound. In larger groups, individual whistle sounds are less prominent. Dolphins tend to travel in pods, upon which there are groups of dolphins that range from a few to many. Although they are traveling in these pods, the dolphins do not necessarily swim right next to each other. Rather, they swim within the same general vicinity. In order to prevent losing one of their pod members, there are higher whistle rates. Because their group members were spread out, this was done in order to continue traveling together.", "title": "Behavior" }, { "paragraph_id": 50, "text": "Dolphins frequently leap above the water surface, this being done for various reasons. When travelling, jumping can save the dolphin energy as there is less friction while in the air. This type of travel is known as porpoising. Other reasons include orientation, social displays, fighting, non-verbal communication, entertainment and attempting to dislodge parasites.", "title": "Behavior" }, { "paragraph_id": 51, "text": "Dolphins show various types of playful behavior, often including objects, self-made bubble rings, other dolphins or other animals. When playing with objects or small animals, common behavior includes carrying the object or animal along using various parts of the body, passing it along to other members of the group or taking it from another member, or throwing it out of the water. Dolphins have also been observed harassing animals in other ways, for example by dragging birds underwater without showing any intent to eat them. Playful behaviour that involves another animal species with active participation of the other animal has also been observed. 
Playful dolphin interactions with humans are the most obvious examples, followed by those with humpback whales and dogs.", "title": "Behavior" }, { "paragraph_id": 52, "text": "Juvenile dolphins off the coast of Western Australia have been observed chasing, capturing, and chewing on blowfish. While some reports state that the dolphins are becoming intoxicated on the tetrodotoxin in the fishes' skin, other reports have characterized this behavior as the normal curiosity and exploration of their environment in which dolphins engage.", "title": "Behavior" }, { "paragraph_id": 53, "text": "Although this behaviour is highly unusual in wild dolphins, several Indo-Pacific bottlenose dolphins (Tursiops aduncus) of the Port River, north of Adelaide, South Australia, have been seen to have exhibit \"tail-walking\". This activity mimicks a standing posture, using the tail to run backwards along the water. To perform this movement, the dolphin \"forces the majority of its body vertically out of the water and maintains the position by vigorously pumping its tail\".", "title": "Behavior" }, { "paragraph_id": 54, "text": "This started in 1988 when a female named Billie was rescued after becoming trapped in a polluted marina, and spent two weeks recuperating with captive dolphins. Billie had previously been observed swimming and frolicking with racehorses exercising in the Port River in the 1980s. After becoming trapped in a reedy estuary further down the coast, she was rescued and placed with several captive dolphins at a marine park to recuperate. There she observed the captive dolphins performing tail-walking. After being returned to the Port River, she continued to perform this trick, and another dolphin, Wave, copied her. Wave, a very active tail-walker, passed on the skill to her daughters, Ripple and Tallula.", "title": "Behavior" }, { "paragraph_id": 55, "text": "After Billie's premature death, Wave started tail-walking much more frequently, and other dolphins in the group were observed also performing the behaviour. In 2011, up to 12 dolphins were observed tail-walking, but only females appeared to learn the skill. In October 2021, a dolphin was observed tail-walking over a number of hours.", "title": "Behavior" }, { "paragraph_id": 56, "text": "Scientists have found the spread of this behaviour, through up to two generations, surprising, as it brings no apparent advantage, and is very energy-consuming. A 2018 study by Mike Rossley et al. suggested:", "title": "Behavior" }, { "paragraph_id": 57, "text": "Social learning is the most likely mechanism for the introduction and spread of this unusual behaviour, which has no known adaptive function. These observations demonstrate the potential strength of the capacity for spontaneous imitation in bottlenose dolphins, and help explain the origin and spread of foraging specializations observed in multiple populations of this genus.", "title": "Behavior" }, { "paragraph_id": 58, "text": "Dolphins have few marine enemies. Some species or specific populations have none, making them apex predators. For most of the smaller species of dolphins, only a few of the larger sharks, such as the bull shark, dusky shark, tiger shark and great white shark, are a potential risk, especially for calves. Some of the larger dolphin species, especially orcas, may also prey on smaller dolphins, but this seems rare. Dolphins also suffer from a wide variety of diseases and parasites. 
The Cetacean morbillivirus in particular has been known to cause regional epizootics often leaving hundreds of animals of various species dead. Symptoms of infection are often a severe combination of pneumonia, encephalitis and damage to the immune system, which greatly impair the cetacean's ability to swim and stay afloat unassisted. A study at the U.S. National Marine Mammal Foundation revealed that dolphins, like humans, develop a natural form of type 2 diabetes which may lead to a better understanding of the disease and new treatments for both humans and dolphins.", "title": "Threats" }, { "paragraph_id": 59, "text": "Dolphins can tolerate and recover from extreme injuries such as shark bites although the exact methods used to achieve this are not known. The healing process is rapid and even very deep wounds do not cause dolphins to hemorrhage to death. Furthermore, even gaping wounds restore in such a way that the animal's body shape is restored, and infection of such large wounds seems rare.", "title": "Threats" }, { "paragraph_id": 60, "text": "A study published in the journal Marine Mammal Science suggests that at least some dolphins survive shark attacks using everything from sophisticated combat moves to teaming up against the shark.", "title": "Threats" }, { "paragraph_id": 61, "text": "Some dolphin species are at risk of extinction, especially some river dolphin species such as the Amazon river dolphin, and the Ganges and Yangtze river dolphin, which are critically or seriously endangered. A 2006 survey found no individuals of the Yangtze river dolphin. The species now appears to be functionally extinct.", "title": "Threats" }, { "paragraph_id": 62, "text": "Pesticides, heavy metals, plastics, and other industrial and agricultural pollutants that do not disintegrate rapidly in the environment concentrate in predators such as dolphins. Injuries or deaths due to collisions with boats, especially their propellers, are also common.", "title": "Threats" }, { "paragraph_id": 63, "text": "Various fishing methods, most notably purse seine fishing for tuna and the use of drift and gill nets, unintentionally kill many dolphins. Accidental by-catch in gill nets and incidental captures in antipredator nets that protect marine fish farms are common and pose a risk for mainly local dolphin populations. In some parts of the world, such as Taiji in Japan and the Faroe Islands, dolphins are traditionally considered food and are killed in harpoon or drive hunts. Dolphin meat is high in mercury and may thus pose a health danger to humans when consumed.", "title": "Threats" }, { "paragraph_id": 64, "text": "Queensland's shark culling program, which has killed roughly 50,000 sharks since 1962, has also killed thousands of dolphins as bycatch. \"Shark control\" programs in both Queensland and New South Wales use shark nets and drum lines, which entangle and kill dolphins. Queensland's \"shark control\" program has killed more than 1,000 dolphins in recent years, and at least 32 dolphins have been killed in Queensland since 2014. A shark culling program in KwaZulu-Natal has killed at least 2,310 dolphins.", "title": "Threats" }, { "paragraph_id": 65, "text": "Dolphin safe labels attempt to reassure consumers that fish and other marine products have been caught in a dolphin-friendly way. 
The earliest campaigns with \"dolphin safe\" labels were initiated in the 1980s as a result of cooperation between marine activists and the major tuna companies, and involved decreasing incidental dolphin kills by up to 50% by changing the type of nets used to catch tuna. The dolphins are netted only while fishermen are in pursuit of smaller tuna. Albacore are not netted this way, making albacore the only truly dolphin-safe tuna. Loud underwater noises, such as those resulting from naval sonar use, live firing exercises, and certain offshore construction projects such as wind farms, may be harmful to dolphins, increasing stress, damaging hearing, and causing decompression sickness by forcing them to surface too quickly to escape the noise.", "title": "Threats" }, { "paragraph_id": 66, "text": "Dolphins and other smaller cetaceans are also hunted in an activity known as dolphin drive hunting. This is accomplished by driving a pod together with boats and usually into a bay or onto a beach. Their escape is prevented by closing off the route to the ocean with other boats or nets. Dolphins are hunted this way in several places around the world, including the Solomon Islands, the Faroe Islands, Peru, and Japan, the most well-known practitioner of this method. By numbers, dolphins are mostly hunted for their meat, though some end up in dolphinariums. Despite the controversial nature of the hunt resulting in international criticism, and the possible health risk that the often polluted meat causes, thousands of dolphins are caught in drive hunts each year.", "title": "Threats" }, { "paragraph_id": 67, "text": "Dolphins are marine mammals with broad geographic extent, making them susceptible to climate change in various ways. The most common effect of climate change on dolphins is the increasing water temperatures across the globe. This has caused a large variety of dolphin species to experience range shifts, in which the species move from their typical geographic region to cooler waters. Another side effect of increasing water temperatures is the increase in harmful algae blooms, which has caused a mass die-off of bottlenose dolphins.", "title": "Threats" }, { "paragraph_id": 68, "text": "In California, the 1982-83 El Niño warming event caused the near-bottom spawning market squid to leave southern California, which caused their predator, the pilot whale, to also leave. As the market squid returned six years later, Risso's dolphins came to feed on the squid. Bottlenose dolphins expanded their range from southern to central California, and stayed even after the warming event subsided. The Pacific white-sided dolphin has had a decline in population in the southwest Gulf of California, the southern boundary of their distribution. In the 1980s they were abundant with group sizes up to 200 across the entire cool season. Then, in the 2000s, only two groups were recorded with sizes of 20 and 30, and only across the central cool season. This decline was not related to a decline of other marine mammals or prey, so it was concluded to have been caused by climate change as it occurred during a period of warming. Additionally, the Pacific white-sided dolphin had an increase in occurrence on the west coast of Canada from 1984 to 1998.", "title": "Threats" }, { "paragraph_id": 69, "text": "In the Mediterranean, sea surface temperatures have increased, as well as salinity, upwelling intensity, and sea levels. 
Because of this, prey resources have been reduced causing a steep decline in the short-beaked common dolphin Mediterranean subpopulation, which was deemed endangered in 2003. This species now only exists in the Alboran Sea, due to its high productivity, distinct ecosystem, and differing conditions from the rest of the Mediterranean.", "title": "Threats" }, { "paragraph_id": 70, "text": "In northwest Europe, many dolphin species have experienced range shifts from the region's typically colder waters. Warm water dolphins, like the short-beaked common dolphin and striped dolphin, have expanded north of western Britain and into the northern North Sea, even in the winter, which may displace the white-beaked and Atlantic white-sided dolphin that are in that region. The white-beaked dolphin has shown an increase in the southern North Sea since the 1960s because of this. The rough-toothed dolphin and Atlantic spotted dolphin may move to northwest Europe. In northwest Scotland, white-beaked dolphins (local to the colder waters of the North Atlantic) have decreased while common dolphins (local to warmer waters) have increased from 1992 to 2003. Additionally, Fraser's dolphin, found in tropical waters, was recorded in the UK for the first time in 1996.", "title": "Threats" }, { "paragraph_id": 71, "text": "River dolphins are highly affected by climate change as high evaporation rates, increased water temperatures, decreased precipitation, and increased acidification occur. River dolphins typically have a higher densities when rivers have a lox index of freshwater degradation and better water quality. Specifically looking at the Ganges river dolphin, the high evaporation rates and increased flooding on the plains may lead to more human river regulation, decreasing the dolphin population.", "title": "Threats" }, { "paragraph_id": 72, "text": "As warmer waters lead to a decrease in dolphin prey, this led to other causes of dolphin population decrease. In the case of bottlenose dolphins, mullet populations decrease due to increasing water temperatures, which leads to a decrease in the dolphins' health and thus their population. At the Shark Bay World Heritage Area in Western Australia, the local Indo-Pacific bottlenose dolphin population had a significant decline after a marine heatwave in 2011. This heatwave caused a decrease in prey, which led to a decline in dolphin reproductive rates as female dolphins could not get enough nutrients to sustain a calf. The resultant decrease in fish population due to warming waters has also influenced humans to see dolphins as fishing competitors or even bait. Humans use dusky dolphins as bait or are killed off because they consume the same fish humans eat and sell for profit. In the central Brazilian Amazon alone, approximately 600 pink river dolphins are killed each year to be used as bait.", "title": "Threats" }, { "paragraph_id": 73, "text": "Dolphins have long played a role in human culture.", "title": "Relationships with humans" }, { "paragraph_id": 74, "text": "In Greek myths, dolphins were seen invariably as helpers of humankind. Dolphins also seem to have been important to the Minoans, judging by artistic evidence from the ruined palace at Knossos. During the 2009 excavations of a major Mycenaean city at Iklaina, a striking fragment of a wall-paintings came to light, depicting a ship with three human figures and dolphins. 
Dolphins are common in Greek mythology, and many coins from ancient Greece have been found which feature a man, a boy or a deity riding on the back of a dolphin. The Ancient Greeks welcomed dolphins; spotting dolphins riding in a ship's wake was considered a good omen. In both ancient and later art, Cupid is often shown riding a dolphin. A dolphin rescued the poet Arion from drowning and carried him safe to land, at Cape Matapan, a promontory forming the southernmost point of the Peloponnesus. There was a temple to Poseidon and a statue of Arion riding the dolphin.", "title": "Relationships with humans" }, { "paragraph_id": 75, "text": "The Greeks reimagined the Phoenician god Melqart as Melikertês (Melicertes) and made him the son of Athamas and Ino. He drowned but was transfigured as the marine deity Palaemon, while his mother became Leucothea. (cf Ino.) At Corinth, he was so closely connected with the cult of Poseidon that the Isthmian Games, originally instituted in Poseidon's honor, came to be looked upon as the funeral games of Melicertes. Phalanthus was another legendary character brought safely to shore (in Italy) on the back of a dolphin, according to Pausanias.", "title": "Relationships with humans" }, { "paragraph_id": 76, "text": "Dionysus was once captured by Etruscan pirates who mistook him for a wealthy prince they could ransom. After the ship set sail Dionysus invoked his divine powers, causing vines to overgrow the ship where the mast and sails had been. He turned the oars into serpents, so terrifying the sailors that they jumped overboard, but Dionysus took pity on them and transformed them into dolphins so that they would spend their lives providing help for those in need. Dolphins were also the messengers of Poseidon and sometimes did errands for him as well. Dolphins were sacred to both Aphrodite and Apollo.", "title": "Relationships with humans" }, { "paragraph_id": 77, "text": "\"Dolfin\" was the name of an aristocratic family in the maritime Republic of Venice, whose most prominent member was the 13th-century Doge Giovanni Dolfin.", "title": "Relationships with humans" }, { "paragraph_id": 78, "text": "In Hindu mythology the Ganges river dolphin is associated with Ganga, the deity of the Ganges river. The dolphin is said to be among the creatures which heralded the goddess' descent from the heavens and her mount, the Makara, is sometimes depicted as a dolphin.", "title": "Relationships with humans" }, { "paragraph_id": 79, "text": "The Boto, a species of river dolphin that resides in the Amazon River, are believed to be shapeshifters, or encantados, who are capable of having children with human women.", "title": "Relationships with humans" }, { "paragraph_id": 80, "text": "There are comparatively few surviving myths of dolphins in Polynesian cultures, in spite of their maritime traditions and relevance of other marine animals such as sharks and seabirds; unlike these, they are more often perceived as food than as totemic symbols. Dolphins are most clearly represented in Rapa Nui Rongorongo, and in the traditions of the Caroline Islands they are depicted similarly to the Boto, being sexually active shapeshifters.", "title": "Relationships with humans" }, { "paragraph_id": 81, "text": "Dolphins are also used as symbols, for instance in heraldry. When heraldry developed in the Middle Ages, little was known about the biology of the dolphin and it was often depicted as a sort of fish. 
The stylised heraldic dolphin still conventionally follows this tradition, sometimes showing the dolphin skin covered with fish scales.", "title": "Relationships with humans" }, { "paragraph_id": 82, "text": "A well-known historical example was the coat of arms of the former province of the Dauphiné in southern France, from which were derived the arms and the title of the Dauphin of France, the heir to the former throne of France (the title literally meaning \"The Dolphin of France\").", "title": "Relationships with humans" }, { "paragraph_id": 83, "text": "Dolphins are present in the coat of arms of Anguilla and the coat of arms of Romania, and the coat of arms of Barbados has a dolphin supporter.", "title": "Relationships with humans" }, { "paragraph_id": 84, "text": "The coat of arms of the town of Poole, Dorset, England, first recorded in 1563, includes a dolphin, which was historically depicted in stylised heraldic form, but which since 1976 has been depicted naturalistically.", "title": "Relationships with humans" }, { "paragraph_id": 85, "text": "The renewed popularity of dolphins in the 1960s resulted in the appearance of many dolphinaria around the world, making dolphins accessible to the public. Criticism and animal welfare laws forced many to close, although hundreds still exist around the world. In the United States, the best known are the SeaWorld marine mammal parks. In the Middle East the best known are Dolphin Bay at Atlantis, The Palm and the Dubai Dolphinarium.", "title": "Relationships with humans" }, { "paragraph_id": 86, "text": "Various species of dolphins are kept in captivity. These small cetaceans are more often than not kept in theme parks, such as SeaWorld, commonly known as a dolphinarium. Bottlenose dolphins are the most common species of dolphin kept in dolphinariums as they are relatively easy to train, have a long lifespan in captivity and have a friendly appearance. Hundreds if not thousands of bottlenose dolphins live in captivity across the world, though exact numbers are hard to determine. Other species kept in captivity are spotted dolphins, false killer whales and common dolphins, Commerson's dolphins, as well as rough-toothed dolphins, but all in much lower numbers than the bottlenose dolphin. There are also fewer than ten pilot whales, Amazon river dolphins, Risso's dolphins, spinner dolphins, or tucuxi in captivity. An unusual and very rare hybrid dolphin, known as a wolphin, is kept at the Sea Life Park in Hawaii, which is a cross between a bottlenose dolphin and a false killer whale.", "title": "Relationships with humans" }, { "paragraph_id": 87, "text": "The number of orcas kept in captivity is very small, especially when compared to the number of bottlenose dolphins, with 60 captive orcas being held in aquaria as of 2017. The orca's intelligence, trainability, striking appearance, playfulness in captivity and sheer size have made it a popular exhibit at aquaria and aquatic theme parks. From 1976 to 1997, 55 whales were taken from the wild in Iceland, 19 from Japan, and three from Argentina. These figures exclude animals that died during capture. 
Live captures fell dramatically in the 1990s, and by 1999, about 40% of the 48 animals on display in the world were captive-born.", "title": "Relationships with humans" }, { "paragraph_id": 88, "text": "Organizations such as the Mote Marine Laboratory rescue and rehabilitate sick, wounded, stranded or orphaned dolphins while others, such as the Whale and Dolphin Conservation and Hong Kong Dolphin Conservation Society, work on dolphin conservation and welfare. India has declared the dolphin its national aquatic animal in an attempt to protect the endangered Ganges river dolphin. The Vikramshila Gangetic Dolphin Sanctuary has been created in the Ganges river for the protection of the animals.", "title": "Relationships with humans" }, { "paragraph_id": 89, "text": "There is debate over the welfare of cetaceans in captivity, and welfare can often vary greatly depending on the level of care provided at a particular facility. In the United States, facilities are regularly inspected by federal agencies to ensure that a high standard of welfare is maintained. Additionally, facilities can apply to become accredited by the Association of Zoos and Aquariums (AZA), which requires \"the highest standards of animal care and welfare in the world\" to be achieved for accreditation. Facilities such as SeaWorld and the Georgia Aquarium are accredited by the AZA. Organizations such as World Animal Protection and the Whale and Dolphin Conservation campaign against the practice of keeping cetaceans in captivity. In captivity, they often develop pathologies, such as the dorsal fin collapse seen in 60–90% of male orcas. Captives have vastly reduced life expectancies, on average only living into their 20s, although there are examples of orcas living longer, including several over 30 years old, and two captive orcas, Corky II and Lolita, are in their mid-40s. In the wild, females who survive infancy live 46 years on average, and up to 70–80 years in rare cases. Wild males who survive infancy live 31 years on average, and up to 50–60 years. Captivity usually bears little resemblance to wild habitat, and captive whales' social groups are foreign to those found in the wild. Critics claim captive life is stressful due to these factors and the requirement to perform circus tricks that are not part of wild orca behavior. Wild orcas may travel up to 160 kilometres (100 mi) in a day, and critics say the animals are too big and intelligent to be suitable for captivity. Captives occasionally act aggressively towards themselves, their tankmates, or humans, which critics say is a result of stress.", "title": "Relationships with humans" }, { "paragraph_id": 90, "text": "Although dolphins generally interact well with humans, some attacks have occurred, most of them resulting in small injuries. Orcas, the largest species of dolphin, have been involved in fatal attacks on humans in captivity. The individual responsible for the most documented fatal orca attacks is a male named Tilikum, who lived at SeaWorld from 1992 until his death in 2017. Tilikum played a role in the deaths of three people in three separate incidents (1991, 1999 and 2010). Tilikum's behaviour sparked the production of the documentary Blackfish, which focuses on the consequences of keeping orcas in captivity. 
There are documented incidents in the wild, too, but none of them fatal.", "title": "Relationships with humans" }, { "paragraph_id": 91, "text": "Fatal attacks from other species are less common, but there is a registered occurrence off the coast of Brazil in 1994, when a man died after being attacked by a bottlenose dolphin named Tião. Tião had suffered harassment by human visitors, including attempts to stick ice cream sticks down his blowhole. Non-fatal incidents occur more frequently, both in the wild and in captivity.", "title": "Relationships with humans" }, { "paragraph_id": 92, "text": "While dolphin attacks occur far less frequently than attacks by other sea animals, such as sharks, some scientists are worried about the careless programs of human-dolphin interaction. Dr. Andrew J. Read, a biologist at the Duke University Marine Laboratory who studies dolphin attacks, points out that dolphins are large and wild predators, so people should be more careful when they interact with them.", "title": "Relationships with humans" }, { "paragraph_id": 93, "text": "Several scientists who have researched dolphin behaviour have proposed that dolphins' unusually high intelligence in comparison to other animals means that dolphins should be seen as non-human persons who should have their own specific rights and that it is morally unacceptable to keep them captive for entertainment purposes or to kill them either intentionally for consumption or unintentionally as by-catch. Four countries – Chile, Costa Rica, Hungary, and India – have declared dolphins to be \"non-human persons\" and have banned the capture and import of live dolphins for entertainment.", "title": "Relationships with humans" }, { "paragraph_id": 94, "text": "A number of militaries have employed dolphins for various purposes from finding mines to rescuing lost or trapped humans. The military use of dolphins drew scrutiny during the Vietnam War, when rumors circulated that the United States Navy was training dolphins to kill Vietnamese divers. The United States Navy denies that at any point dolphins were trained for combat. Dolphins are still being trained by the United States Navy for other tasks as part of the U.S. Navy Marine Mammal Program. The Russian military is believed to have closed its marine mammal program in the early 1990s. In 2000 the press reported that dolphins trained to kill by the Soviet Navy had been sold to Iran.", "title": "Relationships with humans" }, { "paragraph_id": 95, "text": "The military is also interested in disguising underwater communications as artificial dolphin clicks.", "title": "Relationships with humans" }, { "paragraph_id": 96, "text": "Dolphins are an increasingly popular choice of animal-assisted therapy for psychological problems and developmental disabilities. For example, a 2005 study found dolphins an effective treatment for mild to moderate depression. This study was criticized on several grounds, including a lack of knowledge on whether dolphins are more effective than common pets. Reviews of this and other published dolphin-assisted therapy (DAT) studies have found important methodological flaws and have concluded that there is no compelling scientific evidence that DAT is a legitimate therapy or that it affords more than fleeting mood improvement.", "title": "Relationships with humans" }, { "paragraph_id": 97, "text": "In some parts of the world, such as Taiji, Japan and the Faroe Islands, dolphins are traditionally considered as food, and are killed in harpoon or drive hunts. 
Dolphin meat is consumed in a small number of countries worldwide, which include Japan and Peru (where it is referred to as chancho marino, or \"sea pork\"). While Japan may be the best-known and most controversial example, only a very small minority of the population has ever sampled it.", "title": "Relationships with humans" }, { "paragraph_id": 98, "text": "Dolphin meat is dense and such a dark shade of red as to appear black. Fat is located in a layer of blubber between the meat and the skin. When dolphin meat is eaten in Japan, it is often cut into thin strips and eaten raw as sashimi, garnished with onion and either horseradish or grated garlic, much as with sashimi of whale or horse meat (basashi). When cooked, dolphin meat is cut into bite-size cubes and then batter-fried or simmered in a miso sauce with vegetables. Cooked dolphin meat has a flavor very similar to beef liver.", "title": "Relationships with humans" }, { "paragraph_id": 99, "text": "There have been human health concerns associated with the consumption of dolphin meat in Japan after tests showed that dolphin meat contained high levels of mercury. There are no known cases of mercury poisoning as a result of consuming dolphin meat, though the government continues to monitor people in areas where dolphin meat consumption is high. The Japanese government recommends that children and pregnant women avoid eating dolphin meat on a regular basis.", "title": "Relationships with humans" }, { "paragraph_id": 100, "text": "Similar concerns exist with the consumption of dolphin meat in the Faroe Islands, where prenatal exposure to methylmercury and PCBs primarily from the consumption of pilot whale meat has resulted in neuropsychological deficits amongst children.", "title": "Relationships with humans" }, { "paragraph_id": 101, "text": "The Faroe Islands population was exposed to methylmercury largely from contaminated pilot whale meat, which contained very high levels of about 2 mg methylmercury/kg. However, the Faroe Islands population also eats significant amounts of fish. A study of about 900 Faroese children showed that prenatal exposure to methylmercury resulted in neuropsychological deficits at 7 years of age.", "title": "Relationships with humans" }, { "paragraph_id": 102, "text": "Conservation, research and news:", "title": "External links" }, { "paragraph_id": 103, "text": "Photos:", "title": "External links" } ]
A dolphin is an aquatic mammal within the infraorder Cetacea. Dolphin species belong to the families Delphinidae, Platanistidae, Iniidae, Pontoporiidae, and possibly extinct Lipotidae. There are 40 extant species named as dolphins. Dolphins range in size from the 1.7-metre-long and 50-kilogram (110-pound) Maui's dolphin to the 9.5 m (31 ft) and 10-tonne (11-short-ton) orca. Various species of dolphins exhibit sexual dimorphism where the males are larger than females. They have streamlined bodies and two limbs that are modified into flippers. Though not quite as flexible as seals, they are faster; some dolphins can briefly travel at speeds of 29 kilometres per hour (18 mph) or leap about 9 metres (30 ft). Dolphins use their conical teeth to capture fast-moving prey. They have well-developed hearing which is adapted for both air and water. It is so well developed that some can survive even if they are blind. Some species are well adapted for diving to great depths. They have a layer of fat, or blubber, under the skin to keep warm in the cold water. Dolphins are widespread. Most species prefer the warm waters of the tropic zones, but some, such as the right whale dolphin, prefer colder climates. Dolphins feed largely on fish and squid, but a few, such as the orca, feed on large mammals such as seals. Male dolphins typically mate with multiple females every year, but females only mate every two to three years. Calves are typically born in the spring and summer months and females bear all the responsibility for raising them. Mothers of some species fast and nurse their young for a relatively long period of time. Dolphins produce a variety of vocalizations, usually in the form of clicks and whistles. Dolphins are sometimes hunted in places such as Japan, in an activity known as dolphin drive hunting. Besides drive hunting, they also face threats from bycatch, habitat loss, and marine pollution. Dolphins have been depicted in various cultures worldwide. Dolphins are sometimes kept in captivity and trained to perform tricks. The most common dolphin species in captivity is the bottlenose dolphin, while there are around 60 orcas in captivity.
2002-01-01T23:26:11Z
2023-12-23T17:18:28Z
[ "Template:Short description", "Template:Anchor", "Template:Cite episode", "Template:Cite conference", "Template:ISBN", "Template:Pp-semi-indef", "Template:Main", "Template:Blockquote", "Template:Cbignore", "Template:Cite newsgroup", "Template:Lang", "Template:See also", "Template:Sister project links", "Template:Heraldry footer", "Template:Authority control", "Template:Pp-move-indef", "Template:Use mdy dates", "Template:Multiple image", "Template:As of", "Template:Cite book", "Template:Cite news", "Template:Clarify", "Template:Quotes", "Template:Cite journal", "Template:Citation needed", "Template:Portal-inline", "Template:Webarchive", "Template:Otheruses", "Template:Convert", "Template:Further", "Template:Cite web", "Template:Cite magazine", "Template:Citation" ]
https://en.wikipedia.org/wiki/Dolphin
9,067
Division ring
In algebra, a division ring, also called a skew field, is a nontrivial ring in which division by nonzero elements is defined. Specifically, it is a nontrivial ring in which every nonzero element a has a multiplicative inverse, that is, an element usually denoted a⁻¹, such that a a⁻¹ = a⁻¹ a = 1. So, (right) division may be defined as a / b = a b⁻¹, but this notation is avoided, as one may have a b⁻¹ ≠ b⁻¹ a. A commutative division ring is a field. Wedderburn's little theorem asserts that all finite division rings are commutative and therefore finite fields. Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". In some languages, such as French, the word equivalent to "field" ("corps") is used for both commutative and noncommutative cases, and the distinction between the two cases is made by adding qualifiers such as "corps commutatif" (commutative field) or "corps gauche" (skew field). All division rings are simple. That is, they have no two-sided ideal besides the zero ideal and the ring itself. All fields are division rings, and every non-field division ring is noncommutative. The best known example is the ring of quaternions. If one allows only rational instead of real coefficients in the construction of the quaternions, one obtains another division ring. In general, if R is a ring and S is a simple module over R, then, by Schur's lemma, the endomorphism ring of S is a division ring; every division ring arises in this fashion from some simple module. Much of linear algebra may be formulated, and remains correct, for modules over a division ring D instead of vector spaces over a field. Doing so, one must specify whether one is considering right or left modules, and some care is needed in properly distinguishing left and right in formulas. In particular, every module has a basis, and Gaussian elimination can be used. So, everything that can be defined with these tools works on division algebras. Matrices and their products are defined similarly. However, a matrix that is left invertible need not be right invertible, and if it is, its right inverse can differ from its left inverse. (See Generalized inverse § One-sided inverse.) Determinants are not defined over noncommutative division algebras, and everything that requires this concept cannot be generalized to noncommutative division algebras. Working in coordinates, elements of a finite-dimensional right module can be represented by column vectors, which can be multiplied on the right by scalars, and on the left by matrices (representing linear maps); for elements of a finite-dimensional left module, row vectors must be used, which can be multiplied on the left by scalars, and on the right by matrices. The dual of a right module is a left module, and vice versa. The transpose of a matrix must be viewed as a matrix over the opposite division ring Dᵒᵖ in order for the rule (AB)ᵀ = BᵀAᵀ to remain valid. Every module over a division ring is free; that is, it has a basis, and all bases of a module have the same number of elements. Linear maps between finite-dimensional modules over a division ring can be described by matrices; the fact that linear maps by definition commute with scalar multiplication is most conveniently represented in notation by writing them on the side of vectors opposite to that of scalars. The Gaussian elimination algorithm remains applicable. 
The column rank of a matrix is the dimension of the right module generated by the columns, and the row rank is the dimension of the left module generated by the rows; the same proof as for the vector space case can be used to show that these ranks are the same and define the rank of a matrix. Division rings are the only rings over which every module is free: a ring R is a division ring if and only if every R-module is free. The center of a division ring is commutative and therefore a field. Every division ring is therefore a division algebra over its center. Division rings can be roughly classified according to whether or not they are finite dimensional or infinite dimensional over their centers. The former are called centrally finite and the latter centrally infinite. Every field is one dimensional over its center. The ring of Hamiltonian quaternions forms a four-dimensional algebra over its center, which is isomorphic to the real numbers. Wedderburn's little theorem: All finite division rings are commutative and therefore finite fields. (Ernst Witt gave a simple proof.) Frobenius theorem: The only finite-dimensional associative division algebras over the reals are the reals themselves, the complex numbers, and the quaternions. Division rings used to be called "fields" in an older usage. In many languages, a word meaning "body" is used for division rings, in some languages designating either commutative or noncommutative division rings, while in others specifically designating commutative division rings (what we now call fields in English). A more complete comparison is found in the article on fields. The name "skew field" has an interesting semantic feature: a modifier (here "skew") widens the scope of the base term (here "field"). Thus a field is a particular type of skew field, and not all skew fields are fields. While division rings and algebras as discussed here are assumed to have associative multiplication, nonassociative division algebras such as the octonions are also of interest. A near-field is an algebraic structure similar to a division ring, except that it has only one of the two distributive laws.
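As a worked illustration of the definitions above, consider the quaternions, the best known noncommutative division ring. The following computation is standard material, stated here in LaTeX for readability rather than taken from any particular source. For a quaternion

\[
q = a + b\,i + c\,j + d\,k, \qquad
\bar{q} = a - b\,i - c\,j - d\,k, \qquad
N(q) = q\,\bar{q} = a^2 + b^2 + c^2 + d^2,
\]

the norm \(N(q)\) is a positive real number whenever \(q \neq 0\), so

\[
q^{-1} = \frac{\bar{q}}{N(q)}
\]

is a two-sided inverse, and every nonzero quaternion is invertible. Noncommutativity is already visible on the unit quaternions, since \(ij = k\) while \(ji = -k\); this is why the notation \(a/b\) is avoided, as \(a\,b^{-1}\) and \(b^{-1}a\) can differ.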
[ { "paragraph_id": 0, "text": "In algebra, a division ring, also called a skew field, is a nontrivial ring in which division by nonzero elements is defined. Specifically, it is a nontrivial ring in which every nonzero element a has a multiplicative inverse, that is, an element usually denoted a, such that a a = a a = 1. So, (right) division may be defined as a / b = a b, but this notation is avoided, as one may have a b ≠ b a.", "title": "" }, { "paragraph_id": 1, "text": "A commutative division ring is a field. Wedderburn's little theorem asserts that all finite division rings are commutative and therefore finite fields.", "title": "" }, { "paragraph_id": 2, "text": "Historically, division rings were sometimes referred to as fields, while fields were called \"commutative fields\". In some languages, such as French, the word equivalent to \"field\" (\"corps\") is used for both commutative and noncommutative cases, and the distinction between the two cases is made by adding qualificatives such as \"corps commutatif\" (commutative field) or \"corps gauche\" (skew field).", "title": "" }, { "paragraph_id": 3, "text": "All division rings are simple. That is, they have no two-sided ideal besides the zero ideal and itself.", "title": "" }, { "paragraph_id": 4, "text": "All fields are division rings, and every non-field division ring is noncommutative. The best known example is the ring of quaternions. If one allows only rational instead of real coefficients in the constructions of the quaternions, one obtains another division ring. In general, if R is a ring and S is a simple module over R, then, by Schur's lemma, the endomorphism ring of S is a division ring; every division ring arises in this fashion from some simple module.", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 5, "text": "Much of linear algebra may be formulated, and remains correct, for modules over a division ring D instead of vector spaces over a field. Doing so, one must specify whether one is considering right or left modules, and some care is needed in properly distinguishing left and right in formulas. In particular, every module has a basis, and Gaussian elimination can be used. So, everything that can be defined with these tools works on division algebras. Matrices and their products are defined similarly. However, a matrix that is left invertible need not to be right invertible, and if it is, its right inverse can differ from its left inverse. (See Generalized inverse § One-sided inverse.)", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 6, "text": "Determinants are not defined over noncommutative division algebras, and everything that requires this concept cannot be generalized to noncommutative division algebras.", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 7, "text": "Working in coordinates, elements of a finite-dimensional right module can be represented by column vectors, which can be multiplied on the right by scalars, and on the left by matrices (representing linear maps); for elements of a finite-dimensional left module, row vectors must be used, which can be multiplied on the left by scalars, and on the right by matrices. The dual of a right module is a left module, and vice versa. 
The transpose of a matrix must be viewed as a matrix over the opposite division ring Dᵒᵖ in order for the rule (AB)ᵀ = BᵀAᵀ to remain valid.", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 8, "text": "Every module over a division ring is free; that is, it has a basis, and all bases of a module have the same number of elements. Linear maps between finite-dimensional modules over a division ring can be described by matrices; the fact that linear maps by definition commute with scalar multiplication is most conveniently represented in notation by writing them on the side of vectors opposite to that of scalars. The Gaussian elimination algorithm remains applicable. The column rank of a matrix is the dimension of the right module generated by the columns, and the row rank is the dimension of the left module generated by the rows; the same proof as for the vector space case can be used to show that these ranks are the same and define the rank of a matrix.", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 9, "text": "Division rings are the only rings over which every module is free: a ring R is a division ring if and only if every R-module is free.", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 10, "text": "The center of a division ring is commutative and therefore a field. Every division ring is therefore a division algebra over its center. Division rings can be roughly classified according to whether or not they are finite dimensional or infinite dimensional over their centers. The former are called centrally finite and the latter centrally infinite. Every field is one dimensional over its center. The ring of Hamiltonian quaternions forms a four-dimensional algebra over its center, which is isomorphic to the real numbers.", "title": "Relation to fields and linear algebra" }, { "paragraph_id": 11, "text": "Wedderburn's little theorem: All finite division rings are commutative and therefore finite fields. (Ernst Witt gave a simple proof.)", "title": "Main theorems" }, { "paragraph_id": 12, "text": "Frobenius theorem: The only finite-dimensional associative division algebras over the reals are the reals themselves, the complex numbers, and the quaternions.", "title": "Main theorems" }, { "paragraph_id": 13, "text": "Division rings used to be called \"fields\" in an older usage. In many languages, a word meaning \"body\" is used for division rings, in some languages designating either commutative or noncommutative division rings, while in others specifically designating commutative division rings (what we now call fields in English). A more complete comparison is found in the article on fields.", "title": "Related notions" }, { "paragraph_id": 14, "text": "The name \"skew field\" has an interesting semantic feature: a modifier (here \"skew\") widens the scope of the base term (here \"field\"). Thus a field is a particular type of skew field, and not all skew fields are fields.", "title": "Related notions" }, { "paragraph_id": 15, "text": "While division rings and algebras as discussed here are assumed to have associative multiplication, nonassociative division algebras such as the octonions are also of interest.", "title": "Related notions" }, { "paragraph_id": 16, "text": "A near-field is an algebraic structure similar to a division ring, except that it has only one of the two distributive laws.", "title": "Related notions" } ]
In algebra, a division ring, also called a skew field, is a nontrivial ring in which division by nonzero elements is defined. Specifically, it is a nontrivial ring in which every nonzero element a has a multiplicative inverse, that is, an element usually denoted a⁻¹, such that a a⁻¹ = a⁻¹ a = 1. So, (right) division may be defined as a / b = a b⁻¹, but this notation is avoided, as one may have a b⁻¹ ≠ b⁻¹ a. A commutative division ring is a field. Wedderburn's little theorem asserts that all finite division rings are commutative and therefore finite fields. Historically, division rings were sometimes referred to as fields, while fields were called "commutative fields". In some languages, such as French, the word equivalent to "field" ("corps") is used for both commutative and noncommutative cases, and the distinction between the two cases is made by adding qualifiers such as "corps commutatif" or "corps gauche". All division rings are simple. That is, they have no two-sided ideal besides the zero ideal and the ring itself.
2002-01-03T03:03:36Z
2023-11-18T18:10:53Z
[ "Template:Math", "Template:Reflist", "Template:Refend", "Template:Refn", "Template:Sfnp", "Template:Nowrap", "Template:Harvp", "Template:Mvar", "Template:Algebraic structures", "Template:Citation needed", "Template:Slink", "Template:Cite book", "Template:Short description", "Template:Refbegin" ]
https://en.wikipedia.org/wiki/Division_ring
9,069
Dia (software)
Dia (/ˈdiːə/) is free and open source general-purpose diagramming software, developed originally by Alexander Larsson. It uses a controlled single document interface (SDI) similar to GIMP and Inkscape. Dia has a modular design with several shape packages available for different needs: flowchart, network diagrams, circuit diagrams, and more. It places no restriction on combining symbols and connectors from different categories in the same diagram. Dia has special objects to help draw entity-relationship models, Unified Modeling Language (UML) diagrams, flowcharts, network diagrams, and simple electrical circuits. It is also possible to add support for new shapes by writing simple XML files, using a subset of Scalable Vector Graphics (SVG) to draw the shape. Dia loads and saves diagrams in a custom XML format which is, by default, gzipped to save space. It can print large diagrams spanning multiple pages and can also be scripted using the Python programming language. Dia can export diagrams to various formats, including: Dia was originally created by Alexander Larsson, but he moved on to work on GNOME and other projects. James Henstridge took over as lead developer, but he also moved on to other projects. He was followed by Cyrille Chepelov, then Lars Ræder Clausen. Dia is currently maintained by Hans Breuer, Steffen Macke and Sameer Sahasrabuddhe. It is written in C, and has an extension system which also supports writing extensions in Python.
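To make the shape mechanism described above concrete: a custom shape is a small XML file combining Dia-specific metadata with an svg: drawing section. The sketch below writes such a file from Python (the same language Dia uses for scripting); the element names and namespaces follow the Dia shape format as commonly documented, but the shape name, icon file, and geometry are invented for illustration and should be checked against the Dia shape documentation before use.

# Minimal sketch of a custom Dia shape file, generated from Python.
# The namespaces follow the commonly documented Dia shape format;
# the shape name, icon and geometry below are hypothetical placeholders.
SHAPE_XML = """<?xml version="1.0" encoding="UTF-8"?>
<shape xmlns="http://www.daa.com.au/~james/dia-shape-ns"
       xmlns:svg="http://www.w3.org/2000/svg">
  <name>Example - Simple Box</name>
  <icon>simple_box.png</icon>
  <connections>
    <point x="0" y="1"/>
    <point x="4" y="1"/>
  </connections>
  <svg:svg>
    <svg:rect x="0" y="0" width="4" height="2"/>
  </svg:svg>
</shape>
"""

# Write the shape definition where Dia can pick it up (path is illustrative).
with open("simple_box.shape", "w", encoding="utf-8") as fh:
    fh.write(SHAPE_XML)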
[ { "paragraph_id": 0, "text": "Dia (/ˈdiːə/) is free and open source general-purpose diagramming software, developed originally by Alexander Larsson. It uses a controlled single document interface (SDI) similar to GIMP and Inkscape.", "title": "" }, { "paragraph_id": 1, "text": "Dia has a modular design with several shape packages available for different needs: flowchart, network diagrams, circuit diagrams, and more. It does not restrict symbols and connectors from various categories from being placed together.", "title": "Features" }, { "paragraph_id": 2, "text": "Dia has special objects to help draw entity-relationship models, Unified Modeling Language (UML) diagrams, flowcharts, network diagrams, and simple electrical circuits. It is also possible to add support for new shapes by writing simple XML files, using a subset of Scalable Vector Graphics (SVG) to draw the shape.", "title": "Features" }, { "paragraph_id": 3, "text": "Dia loads and saves diagrams in a custom XML format which is, by default, gzipped to save space. It can print large diagrams spanning multiple pages and can also be scripted using the Python programming language.", "title": "Features" }, { "paragraph_id": 4, "text": "Dia can export diagrams to various formats, including:", "title": "Exports" }, { "paragraph_id": 5, "text": "Dia was originally created by Alexander Larsson, but he moved on to work on GNOME and other projects. James Henstridge took over as lead developer, but he also moved on to other projects. He was followed by Cyrille Chepelov, then Lars Ræder Clausen.", "title": "Development" }, { "paragraph_id": 6, "text": "Dia is currently maintained by Hans Breuer, Steffen Macke and Sameer Sahasrabuddhe.", "title": "Development" }, { "paragraph_id": 7, "text": "It is written in C, and has an extension system which also supports writing extensions in Python.", "title": "Development" } ]
Dia is free and open source general-purpose diagramming software, developed originally by Alexander Larsson. It uses a controlled single document interface (SDI) similar to GIMP and Inkscape.
2023-06-12T19:24:26Z
[ "Template:Reflist", "Template:Commons category", "Template:Mindmaps", "Template:Short description", "Template:Infobox software", "Template:IPAc-en", "Template:Unreferenced section", "Template:Portal", "Template:Primary sources", "Template:Cite web", "Template:GNOME" ]
https://en.wikipedia.org/wiki/Dia_(software)
9,070
Deep Space 1
Deep Space 1 (DS1) was a NASA technology demonstration spacecraft which flew by an asteroid and a comet. It was part of the New Millennium Program, dedicated to testing advanced technologies. Launched on 24 October 1998, the Deep Space 1 spacecraft carried out a flyby of asteroid 9969 Braille, which was its primary science target. The mission was extended twice to include an encounter with comet 19P/Borrelly and further engineering testing. Problems during its initial stages and with its star tracker led to repeated changes in mission configuration. While the flyby of the asteroid was only a partial success, the encounter with the comet retrieved valuable information. The Deep Space series was continued by the Deep Space 2 probes, which were launched in January 1999 piggybacked on the Mars Polar Lander and were intended to strike the surface of Mars (though contact was lost and the mission failed). Deep Space 1 was the first NASA spacecraft to use ion propulsion rather than the traditional chemical-powered rockets. The purpose of Deep Space 1 was technology development and validation for future missions; 12 technologies were tested: The Autonav system, developed by NASA's Jet Propulsion Laboratory, takes images of known bright asteroids. The asteroids in the inner Solar System move in relation to other bodies at a noticeable, predictable speed. Thus a spacecraft can determine its relative position by tracking such asteroids across the star background, which appears fixed over such timescales. Two or more asteroids let the spacecraft triangulate its position; two or more positions in time let the spacecraft determine its trajectory. Existing spacecraft are tracked by their interactions with the transmitters of the NASA Deep Space Network (DSN), in effect an inverse GPS. However, DSN tracking requires many skilled operators, and the DSN is overburdened by its use as a communications network. The use of Autonav reduces mission cost and DSN demands. The Autonav system can also be used in reverse, tracking the position of bodies relative to the spacecraft. This is used to acquire targets for the scientific instruments. The spacecraft is programmed with the target's coarse location. After initial acquisition, Autonav keeps the subject in frame, even commandeering the spacecraft's attitude control. The next spacecraft to use Autonav was Deep Impact. Primary power for the mission was produced by a new solar array technology, the Solar Concentrator Array with Refractive Linear Element Technology (SCARLET), which uses linear Fresnel lenses made of silicone to concentrate sunlight onto solar cells. ABLE Engineering developed the concentrator technology and built the solar array for DS1, with Entech Inc, who supplied the Fresnel optics, and the NASA Glenn Research Center. The activity was sponsored by the Ballistic Missile Defense Organization, developed originally for the SSI - Conestoga 1620 payload, METEOR. The concentrating lens technology was combined with dual-junction solar cells, which had considerably better performance than the GaAs solar cells that were the state of the art at the time of the mission launch. The SCARLET arrays generated 2.5 kilowatts at 1 AU, with less size and weight than conventional arrays. 
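The geometry behind the Autonav position fix described above can be shown with a toy calculation. The Python sketch below is a deliberately simplified two-dimensional, two-beacon version under ideal assumptions (exact bearing measurements, a stationary observer); the real flight software works from images, orbit models, and many more observations. All names and coordinates are invented for the example.

import numpy as np

def triangulate(p1, p2, theta1, theta2):
    """Estimate a 2D observer position from bearings to two beacons
    of known position.

    The observer s satisfies p_i = s + t_i * d_i with t_i > 0, where
    d_i = (cos theta_i, sin theta_i) is the measured direction from the
    observer toward beacon i. Eliminating s leaves a 2x2 linear system
    in (t1, t2); it is singular when the two bearings are parallel.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # From p1 - t1*d1 = p2 - t2*d2:  t1*d1 - t2*d2 = p1 - p2
    A = np.column_stack([d1, -d2])
    t1, _t2 = np.linalg.solve(A, np.asarray(p1) - np.asarray(p2))
    return np.asarray(p1) - t1 * d1

# Self-check with invented coordinates (illustrative only):
s_true = np.array([1.0, 2.0])
p1 = np.array([10.0, 0.0])
p2 = np.array([0.0, 12.0])
theta1 = np.arctan2(p1[1] - s_true[1], p1[0] - s_true[0])
theta2 = np.arctan2(p2[1] - s_true[1], p2[0] - s_true[0])
print(triangulate(p1, p2, theta1, theta2))  # ~ [1. 2.]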
Although ion engines had been developed at NASA since the late 1950s, with the exception of the SERT missions in the 1960s, the technology had not been demonstrated in flight on United States spacecraft, though hundreds of Hall-effect engines had been used on Soviet and Russian spacecraft. This lack of a performance history in space meant that despite the potential savings in propellant mass, the technology was considered too experimental to be used for high-cost missions. Furthermore, unforeseen side effects of ion propulsion might in some way interfere with typical scientific experiments, such as fields and particle measurements. Therefore, it was a primary mission of the Deep Space 1 demonstration to show long-duration use of an ion thruster on a scientific mission. The NASA Solar Technology Application Readiness (NSTAR) electrostatic ion thruster, developed at NASA Glenn, achieves a specific impulse of 1000–3000 seconds. This is an order of magnitude higher than traditional space propulsion methods, resulting in a mass savings of approximately half. This leads to much cheaper launch vehicles. Although the engine produces just 92 millinewtons (0.33 ozf) thrust at maximal power (2,100 W on DS1), the craft achieved high speeds because ion engines thrust continuously for long periods. The next spacecraft to use NSTAR engines was Dawn, with three redundant units. Remote Agent (RAX), remote intelligent self-repair software developed at NASA's Ames Research Center and the Jet Propulsion Laboratory, was the first artificial-intelligence control system to control a spacecraft without human supervision. Remote Agent successfully demonstrated the ability to plan onboard activities and correctly diagnose and respond to simulated faults in spacecraft components through its built-in REPL environment. Autonomous control will enable future spacecraft to operate at greater distances from Earth and to carry out more sophisticated science-gathering activities in deep space. Components of the Remote Agent software have been used to support other NASA missions. Major components of Remote Agent were a robust planner (EUROPA), a plan-execution system (EXEC) and a model-based diagnostic system (Livingstone). EUROPA was used as a ground-based planner for the Mars Exploration Rovers. EUROPA II was used to support the Phoenix Mars lander and the Mars Science Laboratory. Livingstone2 was flown as an experiment aboard Earth Observing-1 and on an F/A-18 Hornet at NASA's Dryden Flight Research Center. Another method for reducing DSN burdens is the Beacon Monitor experiment. During the long cruise periods of the mission, spacecraft operations are essentially suspended. Instead of data, Deep Space 1 transmitted a carrier signal on a predetermined frequency. Without data decoding, the carrier could be detected by much simpler ground antennas and receivers. If DS1 detected an anomaly, it changed the carrier between four tones, based on urgency. Ground receivers then signal operators to divert DSN resources. This prevented skilled operators and expensive hardware from babysitting an unburdened mission operating nominally. A similar system was used on the New Horizons Pluto probe to keep costs down during its ten-year cruise from Jupiter to Pluto. The Small Deep Space Transponder (SDST) is a compact and lightweight radio-communications system. Aside from using miniaturized components, the SDST is capable of communicating over the Ka band. 
Because this band is higher in frequency than bands currently in use by deep-space missions, the same amount of data can be sent by smaller equipment in space and on the ground. Conversely, existing DSN antennas can split time among more missions. At the time of launch, the DSN had a small number of Ka receivers installed on an experimental basis; Ka operations and missions are increasing. The SDST was later used on other space missions such as the Mars Science Laboratory (the Mars rover Curiosity). Once at a target, DS1 senses the particle environment with the PEPE (Plasma Experiment for Planetary Exploration) instrument. This instrument measured the flux of ions and electrons as a function of their energy and direction. The composition of the ions was determined by using a time-of-flight mass spectrometer. The MICAS (Miniature Integrated Camera And Spectrometer) instrument combined visible light imaging with infrared and ultraviolet spectroscopy to determine chemical composition. All channels share a 10 cm (3.9 in) telescope, which uses a silicon carbide mirror. Both PEPE and MICAS were similar in capabilities to larger instruments or suites of instruments on other spacecraft. They were designed to be smaller and require lower power than those used on previous missions. Prior to launch, Deep Space 1 was intended to visit comet 76P/West–Kohoutek–Ikemura and asteroid 3352 McAuliffe. Because of the delayed launch, the targets were changed to asteroid 9969 Braille (at the time called 1992 KD) and comet 19P/Borrelly, with comet 107P/Wilson–Harrington being added following the early success of the mission. It achieved an impaired flyby of Braille and, due to problems with the star tracker, abandoned targeting Wilson–Harrington in order to maintain its flyby of comet 19P/Borrelly, which was successful. An August 2002 flyby of asteroid 1999 KK1 as another extended mission was considered, but ultimately was not advanced due to cost concerns. During the mission, high quality infrared spectra of Mars were also taken. The ion propulsion engine initially failed after 4.5 minutes of operation. However, it was later restored to action and performed excellently. Early in the mission, material ejected during launch vehicle separation caused the closely spaced ion extraction grids to short-circuit. The contamination was eventually cleared, as the material was eroded by electrical arcing, sublimed by outgassing, or simply allowed to drift out. This was achieved by repeatedly restarting the engine in an engine repair mode, arcing across trapped material. It was thought that the ion engine exhaust might interfere with other spacecraft systems, such as radio communications or the science instruments. The PEPE detectors had a secondary function to monitor such effects from the engine. No interference was found although the flux of ions from the thruster prevented PEPE from observing ions below approximately 20 eV. Another failure was the loss of the star tracker. The star tracker determines spacecraft orientation by comparing the star field to its internal charts. The mission was saved when the MICAS camera was reprogrammed to substitute for the star tracker. Although MICAS is more sensitive, its field-of-view is an order of magnitude smaller, creating a greater information processing burden. Ironically, the star tracker was an off-the-shelf component, expected to be highly reliable. Without a working star tracker, ion thrusting was temporarily suspended. 
The loss of thrust time forced the cancellation of a flyby past comet 107P/Wilson–Harrington. The Autonav system required occasional manual corrections. Most problems were in identifying objects that were too dim, or were difficult to identify because of brighter objects causing diffraction spikes and reflections in the camera, causing Autonav to misidentify targets. The Remote Agent system was presented with three simulated failures on the spacecraft and correctly handled each event. Overall this constituted a successful demonstration of fully autonomous planning, diagnosis, and recovery. The MICAS instrument was a design success, but the ultraviolet channel failed due to an electrical fault. Later in the mission, after the star tracker failure, MICAS assumed this duty as well. This caused continual interruptions in its scientific use during the remaining mission, including the Comet Borrelly encounter. The flyby of the asteroid 9969 Braille was only a partial success. Deep Space 1 was intended to perform the flyby at 56,000 km/h (35,000 mph) at only 240 m (790 ft) from the asteroid. Due to technical difficulties, including a software crash shortly before approach, the craft instead passed Braille at a distance of 26 km (16 mi). This, plus Braille's lower albedo, meant that the asteroid was not bright enough for the Autonav to focus the camera in the right direction, and the picture shoot was delayed by almost an hour. The resulting pictures were disappointingly indistinct. However, the flyby of Comet Borrelly was a great success and returned extremely detailed images of the comet's surface. Such images were of higher resolution than the only previous pictures of a comet -- Halley's Comet, taken by the Giotto spacecraft. The PEPE instrument reported that the comet's solar wind interaction was offset from the nucleus. This is believed to be due to emission of jets, which were not distributed evenly across the comet's surface. Despite having no debris shields, the spacecraft survived the comet passage intact. Once again, the sparse comet jets did not appear to point towards the spacecraft. Deep Space 1 then entered its second extended mission phase, focused on retesting the spacecraft's hardware technologies. The focus of this mission phase was on the ion engine systems. The spacecraft eventually ran out of hydrazine fuel for its attitude control thrusters. The highly efficient ion thruster had a sufficient amount of propellant left to perform attitude control in addition to main propulsion, thus allowing the mission to continue. During late October and early November 1999, during the spacecraft's post-Braille encounter coast phase, Deep Space 1 observed Mars with its MICAS instrument. Although this was a very distant flyby, the instrument did succeed in taking multiple infrared spectra of the planet. Deep Space 1 succeeded in its primary and secondary objectives, returning valuable science data and images. DS1's ion engines were shut down on 18 December 2001 at approximately 20:00:00 UTC, signaling the end of the mission. On-board communications were set to remain in active mode in case the craft should be needed in the future. However, attempts to resume contact in March 2002 were unsuccessful. It remains within the Solar System, in orbit around the Sun.
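The propellant-mass advantage quoted earlier for the NSTAR thruster follows directly from the Tsiolkovsky rocket equation. The comparison below is a back-of-the-envelope illustration with assumed round numbers (a 4 km/s total delta-v and a 300 s chemical specific impulse are stand-ins, not figures from mission documents):

\[
\frac{m_{\text{prop}}}{m_0} \;=\; 1 - e^{-\Delta v/(I_{\text{sp}}\, g_0)},
\qquad g_0 = 9.81\ \text{m/s}^2 .
\]

For \(\Delta v = 4\ \text{km/s}\):

\[
I_{\text{sp}} = 300\ \text{s (chemical):}\quad 1 - e^{-4000/(300 \cdot 9.81)} \approx 0.74,
\]
\[
I_{\text{sp}} = 3000\ \text{s (ion):}\quad 1 - e^{-4000/(3000 \cdot 9.81)} \approx 0.13 .
\]

Under these assumptions, a chemical stage would need roughly three quarters of its initial mass to be propellant, while an ion stage at the top of the NSTAR range would need about an eighth, which illustrates why a tenfold increase in specific impulse translates into large propellant-mass savings.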
[ { "paragraph_id": 0, "text": "Deep Space 1 (DS1) was a NASA technology demonstration spacecraft which flew by an asteroid and a comet. It was part of the New Millennium Program, dedicated to testing advanced technologies.", "title": "" }, { "paragraph_id": 1, "text": "Launched on 24 October 1998, the Deep Space 1 spacecraft carried out a flyby of asteroid 9969 Braille, which was its primary science target. The mission was extended twice to include an encounter with comet 19P/Borrelly and further engineering testing. Problems during its initial stages and with its star tracker led to repeated changes in mission configuration. While the flyby of the asteroid was only a partial success, the encounter with the comet retrieved valuable information.", "title": "" }, { "paragraph_id": 2, "text": "The Deep Space series was continued by the Deep Space 2 probes, which were launched in January 1999 piggybacked on the Mars Polar Lander and were intended to strike the surface of Mars (though contact was lost and the mission failed). Deep Space 1 was the first NASA spacecraft to use ion propulsion rather than the traditional chemical-powered rockets.", "title": "" }, { "paragraph_id": 3, "text": "The purpose of Deep Space 1 was technology development and validation for future missions; 12 technologies were tested:", "title": "Technologies" }, { "paragraph_id": 4, "text": "The Autonav system, developed by NASA's Jet Propulsion Laboratory, takes images of known bright asteroids. The asteroids in the inner Solar System move in relation to other bodies at a noticeable, predictable speed. Thus a spacecraft can determine its relative position by tracking such asteroids across the star background, which appears fixed over such timescales. Two or more asteroids let the spacecraft triangulate its position; two or more positions in time let the spacecraft determine its trajectory. Existing spacecraft are tracked by their interactions with the transmitters of the NASA Deep Space Network (DSN), in effect an inverse GPS. However, DSN tracking requires many skilled operators, and the DSN is overburdened by its use as a communications network. The use of Autonav reduces mission cost and DSN demands.", "title": "Technologies" }, { "paragraph_id": 5, "text": "The Autonav system can also be used in reverse, tracking the position of bodies relative to the spacecraft. This is used to acquire targets for the scientific instruments. The spacecraft is programmed with the target's coarse location. After initial acquisition, Autonav keeps the subject in frame, even commandeering the spacecraft's attitude control. The next spacecraft to use Autonav was Deep Impact.", "title": "Technologies" }, { "paragraph_id": 6, "text": "Primary power for the mission was produced by a new solar array technology, the Solar Concentrator Array with Refractive Linear Element Technology (SCARLET), which uses linear Fresnel lenses made of silicone to concentrate sunlight onto solar cells. ABLE Engineering developed the concentrator technology and built the solar array for DS1, with Entech Inc, who supplied the Fresnel optics, and the NASA Glenn Research Center. The activity was sponsored by the Ballistic Missile Defense Organization, developed originally for the SSI - Conestoga 1620 payload, METEOR. 
The concentrating lens technology was combined with dual-junction solar cells, which had considerably better performance than the GaAs solar cells that were the state of the art at the time of the mission launch.", "title": "Technologies" }, { "paragraph_id": 7, "text": "The SCARLET arrays generated 2.5 kilowatts at 1 AU, with less size and weight than conventional arrays.", "title": "Technologies" }, { "paragraph_id": 8, "text": "Although ion engines had been developed at NASA since the late 1950s, with the exception of the SERT missions in the 1960s, the technology had not been demonstrated in flight on United States spacecraft, though hundreds of Hall-effect engines had been used on Soviet and Russian spacecraft. This lack of a performance history in space meant that despite the potential savings in propellant mass, the technology was considered too experimental to be used for high-cost missions. Furthermore, unforeseen side effects of ion propulsion might in some way interfere with typical scientific experiments, such as fields and particle measurements. Therefore, it was a primary mission of the Deep Space 1 demonstration to show long-duration use of an ion thruster on a scientific mission.", "title": "Technologies" }, { "paragraph_id": 9, "text": "The NASA Solar Technology Application Readiness (NSTAR) electrostatic ion thruster, developed at NASA Glenn, achieves a specific impulse of 1000–3000 seconds. This is an order of magnitude higher than traditional space propulsion methods, resulting in a mass savings of approximately half. This leads to much cheaper launch vehicles. Although the engine produces just 92 millinewtons (0.33 ozf) thrust at maximal power (2,100 W on DS1), the craft achieved high speeds because ion engines thrust continuously for long periods.", "title": "Technologies" }, { "paragraph_id": 10, "text": "The next spacecraft to use NSTAR engines was Dawn, with three redundant units.", "title": "Technologies" }, { "paragraph_id": 11, "text": "Remote Agent (RAX), remote intelligent self-repair software developed at NASA's Ames Research Center and the Jet Propulsion Laboratory, was the first artificial-intelligence control system to control a spacecraft without human supervision. Remote Agent successfully demonstrated the ability to plan onboard activities and correctly diagnose and respond to simulated faults in spacecraft components through its built-in REPL environment. Autonomous control will enable future spacecraft to operate at greater distances from Earth and to carry out more sophisticated science-gathering activities in deep space. Components of the Remote Agent software have been used to support other NASA missions. Major components of Remote Agent were a robust planner (EUROPA), a plan-execution system (EXEC) and a model-based diagnostic system (Livingstone). EUROPA was used as a ground-based planner for the Mars Exploration Rovers. EUROPA II was used to support the Phoenix Mars lander and the Mars Science Laboratory. Livingstone2 was flown as an experiment aboard Earth Observing-1 and on an F/A-18 Hornet at NASA's Dryden Flight Research Center.", "title": "Technologies" }, { "paragraph_id": 12, "text": "Another method for reducing DSN burdens is the Beacon Monitor experiment. During the long cruise periods of the mission, spacecraft operations are essentially suspended. Instead of data, Deep Space 1 transmitted a carrier signal on a predetermined frequency. Without data decoding, the carrier could be detected by much simpler ground antennas and receivers. 
If DS1 detected an anomaly, it changed the carrier between four tones, based on urgency. Ground receivers then signal operators to divert DSN resources. This prevented skilled operators and expensive hardware from babysitting an unburdened mission operating nominally. A similar system was used on the New Horizons Pluto probe to keep costs down during its ten-year cruise from Jupiter to Pluto.", "title": "Technologies" }, { "paragraph_id": 13, "text": "The Small Deep Space Transponder (SDST) is a compact and lightweight radio-communications system. Aside from using miniaturized components, the SDST is capable of communicating over the Ka band. Because this band is higher in frequency than bands currently in use by deep-space missions, the same amount of data can be sent by smaller equipment in space and on the ground. Conversely, existing DSN antennas can split time among more missions. At the time of launch, the DSN had a small number of Ka receivers installed on an experimental basis; Ka operations and missions are increasing.", "title": "Technologies" }, { "paragraph_id": 14, "text": "The SDST was later used on other space missions such as the Mars Science Laboratory (the Mars rover Curiosity).", "title": "Technologies" }, { "paragraph_id": 15, "text": "Once at a target, DS1 senses the particle environment with the PEPE (Plasma Experiment for Planetary Exploration) instrument. This instrument measured the flux of ions and electrons as a function of their energy and direction. The composition of the ions was determined by using a time-of-flight mass spectrometer.", "title": "Technologies" }, { "paragraph_id": 16, "text": "The MICAS (Miniature Integrated Camera And Spectrometer) instrument combined visible light imaging with infrared and ultraviolet spectroscopy to determine chemical composition. All channels share a 10 cm (3.9 in) telescope, which uses a silicon carbide mirror.", "title": "Technologies" }, { "paragraph_id": 17, "text": "Both PEPE and MICAS were similar in capabilities to larger instruments or suites of instruments on other spacecraft. They were designed to be smaller and require lower power than those used on previous missions.", "title": "Technologies" }, { "paragraph_id": 18, "text": "Prior to launch, Deep Space 1 was intended to visit comet 76P/West–Kohoutek–Ikemura and asteroid 3352 McAuliffe. Because of the delayed launch, the targets were changed to asteroid 9969 Braille (at the time called 1992 KD) and comet 19P/Borrelly, with comet 107P/Wilson–Harrington being added following the early success of the mission. It achieved an impaired flyby of Braille and, due to problems with the star tracker, abandoned targeting Wilson–Harrington in order to maintain its flyby of comet 19P/Borrelly, which was successful. An August 2002 flyby of asteroid 1999 KK1 as another extended mission was considered, but ultimately was not advanced due to cost concerns. During the mission, high quality infrared spectra of Mars were also taken.", "title": "Mission overview" }, { "paragraph_id": 19, "text": "The ion propulsion engine initially failed after 4.5 minutes of operation. However, it was later restored to action and performed excellently. Early in the mission, material ejected during launch vehicle separation caused the closely spaced ion extraction grids to short-circuit. The contamination was eventually cleared, as the material was eroded by electrical arcing, sublimed by outgassing, or simply allowed to drift out. 
This was achieved by repeatedly restarting the engine in an engine repair mode, arcing across trapped material.", "title": "Mission overview" }, { "paragraph_id": 20, "text": "It was thought that the ion engine exhaust might interfere with other spacecraft systems, such as radio communications or the science instruments. The PEPE detectors had a secondary function to monitor such effects from the engine. No interference was found although the flux of ions from the thruster prevented PEPE from observing ions below approximately 20 eV.", "title": "Mission overview" }, { "paragraph_id": 21, "text": "Another failure was the loss of the star tracker. The star tracker determines spacecraft orientation by comparing the star field to its internal charts. The mission was saved when the MICAS camera was reprogrammed to substitute for the star tracker. Although MICAS is more sensitive, its field-of-view is an order of magnitude smaller, creating a greater information processing burden. Ironically, the star tracker was an off-the-shelf component, expected to be highly reliable.", "title": "Mission overview" }, { "paragraph_id": 22, "text": "Without a working star tracker, ion thrusting was temporarily suspended. The loss of thrust time forced the cancellation of a flyby past comet 107P/Wilson–Harrington.", "title": "Mission overview" }, { "paragraph_id": 23, "text": "The Autonav system required occasional manual corrections. Most problems were in identifying objects that were too dim, or were difficult to identify because of brighter objects causing diffraction spikes and reflections in the camera, causing Autonav to misidentify targets.", "title": "Mission overview" }, { "paragraph_id": 24, "text": "The Remote Agent system was presented with three simulated failures on the spacecraft and correctly handled each event.", "title": "Mission overview" }, { "paragraph_id": 25, "text": "Overall this constituted a successful demonstration of fully autonomous planning, diagnosis, and recovery.", "title": "Mission overview" }, { "paragraph_id": 26, "text": "The MICAS instrument was a design success, but the ultraviolet channel failed due to an electrical fault. Later in the mission, after the star tracker failure, MICAS assumed this duty as well. This caused continual interruptions in its scientific use during the remaining mission, including the Comet Borrelly encounter.", "title": "Mission overview" }, { "paragraph_id": 27, "text": "The flyby of the asteroid 9969 Braille was only a partial success. Deep Space 1 was intended to perform the flyby at 56,000 km/h (35,000 mph) at only 240 m (790 ft) from the asteroid. Due to technical difficulties, including a software crash shortly before approach, the craft instead passed Braille at a distance of 26 km (16 mi). This, plus Braille's lower albedo, meant that the asteroid was not bright enough for the Autonav to focus the camera in the right direction, and the picture shoot was delayed by almost an hour. The resulting pictures were disappointingly indistinct.", "title": "Mission overview" }, { "paragraph_id": 28, "text": "However, the flyby of Comet Borrelly was a great success and returned extremely detailed images of the comet's surface. Such images were of higher resolution than the only previous pictures of a comet -- Halley's Comet, taken by the Giotto spacecraft. The PEPE instrument reported that the comet's solar wind interaction was offset from the nucleus. 
This is believed to be due to the emission of jets, which were not distributed evenly across the comet's surface.", "title": "Mission overview" }, { "paragraph_id": 29, "text": "Despite having no debris shields, the spacecraft survived the comet passage intact. Once again, the sparse comet jets did not appear to point towards the spacecraft. Deep Space 1 then entered its second extended mission phase, focused on retesting the spacecraft's hardware technologies, above all the ion engine systems. The spacecraft eventually ran out of hydrazine fuel for its attitude control thrusters. The highly efficient ion thruster had sufficient propellant left to perform attitude control in addition to main propulsion, thus allowing the mission to continue.", "title": "Mission overview" }, { "paragraph_id": 30, "text": "During late October and early November 1999, during the spacecraft's post-Braille encounter coast phase, Deep Space 1 observed Mars with its MICAS instrument. Although this was a very distant flyby, the instrument did succeed in taking multiple infrared spectra of the planet.", "title": "Mission overview" }, { "paragraph_id": 31, "text": "Deep Space 1 succeeded in its primary and secondary objectives, returning valuable science data and images. DS1's ion engine was shut down on 18 December 2001 at approximately 20:00:00 UTC, signaling the end of the mission. On-board communications were set to remain in active mode in case the craft should be needed in the future. However, attempts to resume contact in March 2002 were unsuccessful. It remains within the Solar System, in orbit around the Sun.", "title": "Mission overview" }, { "paragraph_id": 32, "text": "", "title": "External links" } ]
Deep Space 1 (DS1) was a NASA technology demonstration spacecraft which flew by an asteroid and a comet. It was part of the New Millennium Program, dedicated to testing advanced technologies. Launched on 24 October 1998, the Deep Space 1 spacecraft carried out a flyby of asteroid 9969 Braille, which was its primary science target. The mission was extended twice to include an encounter with comet 19P/Borrelly and further engineering testing. Problems during its initial stages and with its star tracker led to repeated changes in mission configuration. While the flyby of the asteroid was only a partial success, the encounter with the comet retrieved valuable information. The Deep Space series was continued by the Deep Space 2 probes, which were launched in January 1999 piggybacked on the Mars Polar Lander and were intended to strike the surface of Mars. Deep Space 1 was the first NASA spacecraft to use ion propulsion rather than the traditional chemical-powered rockets.
2002-01-03T21:22:21Z
2023-12-31T06:32:49Z
[ "Template:Multiple image", "Template:Cite journal", "Template:Refimprove", "Template:US$", "Template:Asteroid spacecraft", "Template:Orbital launches in 1998", "Template:Short description", "Template:Convert", "Template:Portal", "Template:Cbignore", "Template:Cite AV media", "Template:Commons category", "Template:Jet Propulsion Laboratory", "Template:Use dmy dates", "Template:Mpl", "Template:Reflist", "Template:Solar System probes", "Template:Infobox spaceflight", "Template:Cite magazine", "Template:Cite book", "Template:Cite web", "Template:Cite conference", "Template:\\", "Template:New Millennium Program", "Template:Use American English", "Template:Italic title", "Template:Main article" ]
https://en.wikipedia.org/wiki/Deep_Space_1
9,071
King David (disambiguation)
David was the second king of the United Kingdom of Israel and Judah. King David may also refer to:
[ { "paragraph_id": 0, "text": "David was the second king of the United Kingdom of Israel and Judah", "title": "" }, { "paragraph_id": 1, "text": "King David may also refer to:", "title": "" } ]
David was the second king of the United Kingdom of Israel and Judah. King David may also refer to:
2002-01-26T19:39:19Z
2023-10-17T11:09:36Z
[ "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/King_David_(disambiguation)
9,072
Jacques-Louis David
Jacques-Louis David (French: [ʒaklwi david]; 30 August 1748 – 29 December 1825) was a French painter in the Neoclassical style, considered to be the preeminent painter of the era. In the 1780s, his cerebral brand of history painting marked a change in taste away from Rococo frivolity toward classical austerity and severity and heightened feeling, harmonizing with the moral climate of the final years of the Ancien Régime. David later became an active supporter of the French Revolution and friend of Maximilien Robespierre (1758–1794), and was effectively a dictator of the arts under the French Republic. Imprisoned after Robespierre's fall from power, he aligned himself with yet another political regime upon his release: that of Napoleon, the First Consul of France. At this time he developed his Empire style, notable for its use of warm Venetian colours. After Napoleon's fall from Imperial power and the Bourbon revival, David exiled himself to Brussels, then in the United Kingdom of the Netherlands, where he remained until his death. David had many pupils, making him the strongest influence in French art of the early 19th century, especially academic Salon painting. Jacques-Louis David was born into a prosperous French family in Paris on 30 August 1748. When he was about nine his father was killed in a duel and his mother left him with his well-off architect uncles. They saw to it that he received an excellent education at the Collège des Quatre-Nations, University of Paris, but he was never a good student—he had a facial tumor that impeded his speech, and he was always preoccupied with drawing. He covered his notebooks with drawings, and he once said, "I was always hiding behind the instructor's chair, drawing for the duration of the class". Soon, he desired to be a painter, but his uncles and mother wanted him to be an architect. He overcame the opposition, and went to learn from François Boucher (1703–1770), the leading painter of the time, who was also a distant relative. Boucher was a Rococo painter, but tastes were changing, and the fashion for Rococo was giving way to a more classical style. Boucher decided that instead of taking over David's tutelage, he would send David to his friend, Joseph-Marie Vien (1716–1809), a painter who embraced the classical reaction to Rococo. There, David attended the Royal Academy, based in what is now the Louvre. Each year the Academy awarded an outstanding student the prestigious Prix de Rome, which funded a 3- to 5-year stay in Rome. Since artists were now revisiting classical styles, the trip provided its winners the opportunity to study the remains of classical antiquity and the works of the Italian Renaissance masters at first hand. Called pensionnaires, they were housed in the French Academy's Rome outpost, which from 1737 to 1793 was the Palazzo Mancini in the Via del Corso. David made three consecutive attempts to win the annual prize (with Minerva Fighting Mars, Diana and Apollo Killing Niobe's Children, and The Death of Seneca), with each failure allegedly contributing to his lifelong grudge against the institution. After his second loss in 1772, David went on a hunger strike, which lasted two and a half days before the faculty encouraged him to continue painting. Confident he now had the support and backing needed to win the prize, he resumed his studies with great zeal—only to fail to win the Prix de Rome again the following year. 
Finally, in 1774, David was awarded the Prix de Rome on the strength of his painting of Erasistratus Discovering the Cause of Antiochus' Disease, a subject set by the judges. In October 1775 he made the journey to Italy with his mentor, Joseph-Marie Vien, who had just been appointed director of the French Academy at Rome. While in Italy, David mostly studied the works of 17th-century masters such as Poussin, Caravaggio, and the Carracci. Although he declared, "the Antique will not seduce me, it lacks animation, it does not move", David filled twelve sketchbooks with drawings that he and his studio used as model books for the rest of his life. He was introduced to the painter Raphael Mengs (1728–1779), who opposed the Rococo tendency to sweeten and trivialize ancient subjects, advocating instead the rigorous study of classical sources and close adherence to ancient models. Mengs' principled, historicizing approach to the representation of classical subjects profoundly influenced David's pre-revolutionary painting, such as The Vestal Virgin, probably from the 1780s. Mengs also introduced David to the theoretical writings on ancient sculpture by Johann Joachim Winckelmann (1717–1768), the German scholar held to be the founder of modern art history. As part of the Prix de Rome, David toured the newly excavated ruins of Pompeii in 1779, which deepened his belief that the persistence of classical culture was an index of its eternal conceptual and formal power. During the trip David also assiduously studied the High Renaissance painters, with Raphael making a profound and lasting impression on the young French artist. Although David's fellow students at the academy found him difficult to get along with, they recognized his genius. David's stay at the French Academy in Rome was extended by a year. In July 1780, he returned to Paris. There, he found people ready to use their influence for him, and he was made an official member of the Royal Academy. He sent the Academy two paintings, and both were included in the Salon of 1781, a high honor. He was praised by his famous contemporary painters, but the administration of the Royal Academy was very hostile to this young upstart. After the Salon, the King granted David lodging in the Louvre, an ancient and much desired privilege of great artists. When M. Pécoul, the contractor of the King's buildings, was making arrangements with David, he asked the artist to marry his daughter, Marguerite Charlotte. This marriage brought him money and eventually four children. David had about 50 of his own pupils and was commissioned by the government to paint "Horace defended by his Father", but he soon decided, "Only in Rome can I paint Romans." His father-in-law provided the money he needed for the trip, and David headed for Rome with his wife, Charlotte, and three of his students, one of whom, Jean-Germain Drouais (1763–1788), was the Prix de Rome winner of that year. In Rome, David painted his famous Oath of the Horatii, 1784. In this piece, the artist references Enlightenment values while alluding to Rousseau's social contract. The republican ideal of the general will became the central focus of the painting, with all three sons positioned in compliance with the father. The Oath between the characters can be read as an act of unification of men to the binding of the state. The issue of gender roles also becomes apparent in this piece, as the women in Horatii greatly contrast the group of brothers. David depicts the father with his back to the women, shutting them out of the oath. 
They also appear to be smaller in scale and physically isolated from the male figures. The masculine virility and discipline displayed by the men's rigid and confident stances are also severely contrasted with the slouching, swooning female softness created in the other half of the composition. Here we see the clear division of male-female attributes that confined the sexes to specific roles under Rousseau's popularized doctrine of "separate spheres". These revolutionary ideals are also apparent in the Distribution of Eagles. While Oath of the Horatii and The Tennis Court Oath stress the importance of masculine self-sacrifice for one's country and patriotism, the Distribution of Eagles would ask for self-sacrifice for one's Emperor (Napoleon) and the importance of battlefield glory. In 1787, David did not become the Director of the French Academy in Rome, a position he dearly wanted. The Count in charge of the appointments said David was too young, but promised to support him in 6 to 12 years. This situation would be one of many that would cause him to lash out at the Academy in years to come. For the Salon of 1787, David exhibited his famous Death of Socrates. "Condemned to death, Socrates, strong, calm and at peace, discusses the immortality of the soul. Surrounded by Crito, his grieving friends and students, he is teaching, philosophizing, and in fact, thanking the God of Health, Asclepius, for the hemlock brew which will ensure a peaceful death... The wife of Socrates can be seen grieving alone outside the chamber, dismissed for her weakness. Plato is depicted as an old man seated at the end of the bed." Critics compared the Socrates with Michelangelo's Sistine Ceiling and Raphael's Stanze, and one, after ten visits to the Salon, described it as "in every sense perfect". Denis Diderot said it looked as if he copied it from some ancient bas-relief. The painting was very much in tune with the political climate at the time. For this painting, David was not honored with a royal "works of encouragement". For his next painting, David created The Lictors Bring to Brutus the Bodies of His Sons. The work had tremendous appeal for the time. Before the opening of the Salon, the French Revolution had begun. The National Assembly had been established, and the Bastille had fallen. The royal court did not want propaganda agitating the people, so all paintings had to be checked before being hung. David's portrait of Lavoisier, who was a chemist and physicist as well as an active member of the Jacobin party, was banned by the authorities for such reasons. When the newspapers reported that the government had not allowed the showing of The Lictors Bring to Brutus the Bodies of His Sons, the people were outraged, and the royals were forced to give in. The painting was hung in the exhibition, protected by art students. The painting depicts Lucius Junius Brutus, the Roman leader, grieving for his sons. Brutus's sons had attempted to overthrow the government and restore the monarchy, so the father ordered their death to maintain the republic. Brutus was the heroic defender of the republic, sacrificing his own family for the good of the republic. On the right, the mother holds her two daughters, and the nurse is seen on the far right, in anguish. Brutus sits on the left, alone, brooding, seemingly dismissing the dead bodies of his sons. He knows that what he did was best for his country, but the tense posture of his feet and toes reveals his inner turmoil. 
The whole painting was a Republican symbol, and obviously had immense meaning during these times in France. It exemplified civic virtue, a value highly regarded during the Revolution. In the beginning, David was a supporter of the Revolution, a friend of Robespierre, and a member of the Jacobin Club. While others were leaving the country for new and greater opportunities, David stayed behind to help destroy the old order; he was a regicide who voted in the National Convention for the execution of Louis XVI. It is uncertain why he did this, as there were many more opportunities for him under the King than under the new order; some people suggest David's love for the classical made him embrace everything about that period, including a republican government. Others believed that they found the key to the artist's revolutionary career in his personality. Undoubtedly, David's artistic sensibility, mercurial temperament, volatile emotions, ardent enthusiasm, and fierce independence might have been expected to help turn him against the established order, but they did not fully explain his devotion to the republican regime. Nor did the vague statements of those who insisted upon his "powerful ambition...and unusual energy of will" actually account for his revolutionary connections. Those who knew him maintained that "generous ardor", high-minded idealism and well-meaning though sometimes fanatical enthusiasm, rather than opportunism and jealousy, motivated his activities during this period. Soon, David turned his critical sights on the Royal Academy of Painting and Sculpture. This attack was probably caused primarily by the hypocrisy of the organization and their personal opposition to his work, as seen in previous episodes in David's life. The Royal Academy was controlled by royalists, who opposed David's attempts at reform, so the National Assembly finally ordered it to make changes to conform to the new constitution. David then began work on something that would later hound him: propaganda for the new republic. David's painting of Brutus was shown during the play Brutus by Voltaire. In 1789, Jacques-Louis David attempted to leave his artistic mark on the historical beginnings of the French Revolution with his painting of The Oath of the Tennis Court. David undertook this task not out of personal political conviction but rather because he was commissioned to do so. The painting was meant to commemorate the event of the same name but was never completed. A meeting of the Estates General was convened in May to address reforms of the monarchy. Dissent arose over whether the three estates would meet separately, as had been tradition, or as one body. The King's acquiescence in the demands of the upper orders led to the deputies of the Third Estate renaming themselves as the National Assembly on 17 June. They were locked out of the meeting hall three days later when they attempted to meet, and forced to reconvene at the royal indoor tennis court. Presided over by Jean-Sylvain Bailly, they made a 'solemn oath never to separate' until a national constitution had been created. In 1789 this event was seen as a symbol of national unity against the ancien régime. Rejecting the current conditions, the oath signified a new transition in human history and ideology. David was enlisted by the Society of Friends of the Constitution, the body that would eventually form the Jacobins, to enshrine this symbolic event. 
This commission is notable in more ways than one, as it eventually led David to become involved in politics when he joined the Jacobins. The picture was meant to be massive in scale; the figures in the foreground were to be life-sized portraits of the participants, including Jean-Sylvain Bailly, the President of the Constituent Assembly. Seeking additional funding, David turned to the Society of Friends of the Constitution. The funding for the project was to come from over three thousand subscribers hoping to receive a print of the image. However, when the funding proved insufficient, the state ended up financing the project. David set out in 1790 to transform the contemporary event into a major historical picture which would appear at the Salon of 1791 as a large pen-and-ink drawing. As in the Oath of the Horatii, David represents the unity of men in the service of a patriotic ideal. The outstretched arms which are prominent in both works betray David's deeply held belief that acts of republican virtue akin to those of the Romans were being played out in France. In what was essentially an act of intellect and reason, David creates an air of drama in this work. The very power of the people appears to be "blowing" through the scene with the stormy weather, in a sense alluding to the storm that would be the revolution. Symbolism in this work of art closely represents the revolutionary events taking place at the time. The figure in the middle is raising his right arm, making the oath that they will never disband until they have reached their goal of creating a "constitution of the realm fixed upon solid foundations". The importance of this symbol is highlighted by the fact that the crowd's arms are angled toward his hand, forming a triangular shape. Additionally, the open space in the top half contrasted to the commotion in the lower half serves to emphasize the magnitude of the Tennis Court Oath. In his attempt to depict political events of the Revolution in "real time", David was venturing down a new and untrodden path in the art world. However, Thomas Crow argues that this path "proved to be less a way forward than a cul-de-sac for history painting". Essentially, the history of the demise of David's The Tennis Court Oath illustrates the difficulty of creating works of art that portray current and controversial political occurrences. Political circumstances in France proved too volatile to allow the completion of the painting. The unity that was to be symbolized in The Tennis Court Oath no longer existed in radicalized 1792. The National Assembly had split between conservatives and radical Jacobins, both vying for political power. By 1792 there was no longer consensus that all the revolutionaries at the tennis court were "heroes". A sizeable number of the heroes of 1789 had become the villains of 1792. In this unstable political climate David's work remained unfinished. With only a few nude figures sketched onto the massive canvas, David abandoned The Oath of the Tennis Court. To have completed it would have been politically unsound. After this incident, when David attempted to make a political statement in his paintings, he returned to the less politically charged use of metaphor to convey his message. When Voltaire died in 1778, the church denied him a church burial, and his body was interred near a monastery. A year later, Voltaire's old friends began a campaign to have his body buried in the Panthéon, as church property had been confiscated by the French Government. 
In 1791, David was appointed to head the organizing committee for the ceremony, a parade through the streets of Paris to the Panthéon. Despite rain and opposition from conservatives due to the amount of money spent, the procession went ahead. Up to 100,000 people watched the "Father of the Revolution" being carried to his resting place. This was the first of many large festivals organized by David for the republic. He went on to organize festivals for martyrs who died fighting royalists. These funerals echoed the religious festivals of the pagan Greeks and Romans and are seen by many as Saturnalian. David incorporated many revolutionary symbols into these theatrical performances and orchestrated ceremonial rituals, in effect radicalizing the applied arts themselves. The most popular symbol for which David was responsible as propaganda minister was drawn from classical Greek images, changing and transforming them to fit contemporary politics. In an elaborate festival held on the anniversary of the revolt that brought the monarchy to its knees, David's Hercules figure was revealed in a procession following the Goddess of Liberty (Marianne). Liberty, the symbol of Enlightenment ideals, was here being overtaken by the Hercules symbol, that of strength and passion for the protection of the Republic against disunity and factionalism. In his speech during the procession, David "explicitly emphasized the opposition between people and monarchy; Hercules was chosen, after all, to make this opposition more evident". The ideals that David linked to his Hercules single-handedly transformed the figure from a sign of the old regime into a powerful new symbol of revolution. "David turned him into the representation of a collective, popular power. He took one of the favorite signs of monarchy and reproduced, elevated, and monumentalized it into the sign of its opposite." Hercules, the image, became something for the revolutionaries to rally around. In June 1791, the King made an ill-fated attempt to flee the country, but was apprehended short of his goal on the Austrian Netherlands border and was forced to return under guard to Paris. Louis XVI had made secret requests to Emperor Leopold II of Austria, Marie-Antoinette's brother, to restore him to his throne. Leopold granted the request, and Austria threatened France if the royal couple were harmed. In reaction, the people arrested the King. This led to an invasion after the trials and executions of Louis and Marie-Antoinette. The Bourbon monarchy was destroyed by the French people in 1792; it would be restored after Napoleon, then destroyed again with the Restoration of the House of Bonaparte. When the new National Convention held its first meeting, David was sitting with his friends Jean-Paul Marat and Robespierre. In the Convention, David soon earned the nickname "ferocious terrorist". Robespierre's agents discovered a secret vault containing the King's correspondence, which proved he was trying to overthrow the government, and demanded his execution. The National Convention held the trial of Louis XVI; David voted for the death of the King, causing his wife, Marguerite Charlotte, a royalist, to divorce him. When Louis XVI was executed on 21 January 1793, another man had already died as well: Louis Michel le Peletier de Saint-Fargeau. Le Peletier was killed on the preceding day by a royal bodyguard in revenge for having voted for the death of the King. David was called upon to organize a funeral, and he painted Le Peletier Assassinated. 
In it, the assassin's sword was seen hanging by a single strand of horsehair above Le Peletier's body, a concept inspired by the proverbial ancient tale of the sword of Damocles, which illustrated the insecurity of power and position. This underscored the courage displayed by Le Peletier and his companions in routing an oppressive king. The sword pierces a piece of paper on which is written "I vote the death of the tyrant", and as a tribute at the bottom right of the picture David placed the inscription "David to Le Peletier. 20 January 1793". The painting was later destroyed by Le Peletier's royalist daughter, and is known only from a drawing, an engraving, and contemporary accounts. Nevertheless, this work was important in David's career because it was the first completed painting of the French Revolution, made in less than three months, and a work through which he initiated the regeneration process that would continue with The Death of Marat, David's masterpiece. On 13 July 1793, David's friend Marat was assassinated by Charlotte Corday with a knife she had hidden in her clothing. She gained entrance to Marat's house on the pretext of presenting him with a list of people who should be executed as enemies of France. Marat thanked her and said that they would be guillotined the next week, upon which Corday immediately stabbed him fatally. She was guillotined shortly thereafter. Corday was of an opposing political party; her name can be seen in the note Marat holds in David's subsequent painting, The Death of Marat. Marat, a member of the National Convention and a journalist, had a skin disease that caused him to itch horribly. The only relief he could get was in his bath, over which he improvised a desk to write his list of suspect counter-revolutionaries who were to be quickly tried and, if convicted, guillotined. David once again organized a spectacular funeral, and Marat was buried in the Panthéon. Marat's body was to be placed upon a Roman bed, his wound displayed and his right arm extended holding the pen which he had used to defend the Republic and its people. This concept was to be complicated by the fact that the corpse had begun to putrefy. Marat's body had to be periodically sprinkled with water and vinegar as the public crowded to see his corpse prior to the funeral on 15 and 16 July. The stench became so bad, however, that the funeral had to be brought forward to the evening of 16 July. The Death of Marat, perhaps David's most famous painting, has been called the Pietà of the revolution. Upon presenting the painting to the Convention, he said "Citizens, the people were again calling for their friend; their desolate voice was heard: David, take up your brushes..., avenge Marat... I heard the voice of the people. I obeyed." David had to work quickly, but the result was a simple and powerful image. The Death of Marat, 1793, became the leading image of the Terror and immortalized both Marat and David in the world of the revolution. This piece stands today as "a moving testimony to what can be achieved when an artist's political convictions are directly manifested in his work". A political martyr was instantly created as David portrayed Marat with all the marks of the real murder, in a fashion which greatly resembles that of Christ or his disciples. The subject, although realistically depicted, remains lifeless in a rather supernatural composition: the surrogate tombstone placed in front of him and the almost holy light cast upon the whole scene allude to an otherworldly existence. 
"Atheists though they were, David and Marat, like so many other fervent social reformers of the modern world, seem to have created a new kind of religion." At the very center of these beliefs, there stood the republic. After the King's execution, war broke out between the new Republic and virtually every major power in Europe. David, as a member of the Committee of General Security, contributed directly to the Reign of Terror. David organized his last festival: the festival of the Supreme Being. Robespierre had realized what a tremendous propaganda tool these festivals were, and he decided to create a new religion, mixing moral ideas with the Republic and based on the ideas of Rousseau. This process had already begun by confiscating church lands and requiring priests to take an oath to the state. The festivals, called fêtes, would be the method of indoctrination. On the appointed day, 20 Prairial by the revolutionary calendar, Robespierre spoke, descended steps, and with a torch presented to him by David, incinerated a cardboard image symbolizing atheism, revealing an image of wisdom underneath. Soon, the war began to go well; French troops marched across the southern half of the Netherlands (which would later become Belgium), and the emergency that had placed the Committee of Public Safety in control was no more. Then plotters seized Robespierre at the National Convention and he was later guillotined, in effect ending the Reign of Terror. As Robespierre was arrested, David yelled to his friend "if you drink hemlock, I shall drink it with you." After this, he supposedly fell ill, and did not attend the evening session because of "stomach pain", which saved him from being guillotined along with Robespierre. David was arrested and placed in prison twice, first from 2 August to 28 December 1794 and then from 29 May to 3 August 1795. Most of the time he served his sentence in the not uncomfortable Palais du Luxembourg in Paris. There he painted his own portrait, showing him much younger than he actually was, as well as that of his jailer. After David's wife visited him in jail, he conceived the idea of telling the story of The rape of the Sabine women. The Sabine Women Enforcing Peace by Running between the Combatants, also called The Intervention of the Sabine Women is said to have been painted to honor his wife, with the theme being love prevailing over conflict. The painting was also seen as a plea for the people to reunite after the bloodshed of the revolution. David conceived a new style for this painting, one which he called the "Pure Greek Style", as opposed to the "Roman style" of his earlier historical paintings. The new style was influenced heavily by the work of art historian Johann Joachim Winckelmann. In David's words, "the most prominent general characteristics of the Greek masterpieces are a noble simplicity and silent greatness in pose as well as in expression." Instead of the muscularity and angularity of the figures of his past works, these were smoother, more feminine, and painterly. This work also brought him to the attention of Napoleon. The story for the painting is as follows: "The Romans have abducted the daughters of their neighbors, the Sabines. To avenge this abduction, the Sabines attacked Rome, although not immediately—since Hersilia, the daughter of Tatius, the leader of the Sabines, had been married to Romulus, the Roman leader, and then had two children by him in the interim. 
Here we see Hersilia between her father and husband as she adjures the warriors on both sides not to take wives away from their husbands or mothers away from their children. The other Sabine Women join in her exhortations." During this time, the martyrs of the Revolution were taken from the Pantheon and buried in common ground, and revolutionary statues were destroyed. When David was finally released to the countryside, France had changed. His former wife had managed to get him released from prison, and he wrote letters to her, telling her that he had never ceased loving her. He remarried her in 1796. Finally, wholly restored to his position, he retreated to his studio, took pupils and, for the most part, retired from politics. In August 1796, David and many other artists signed a petition orchestrated by Quatremère de Quincy which questioned the wisdom of the planned seizure of works of art from Rome. The Director Barras believed that David was "tricked" into signing, although one of David's students recalled that in 1798 his master lamented the fact that masterpieces had been imported from Italy. David's close association with the Committee of Public Safety during the Terror resulted in his signing of the death warrant for Alexandre de Beauharnais, a minor noble. Beauharnais's widow, Joséphine, went on to marry Napoleon Bonaparte and became his empress; David himself depicted their coronation in the Coronation of Napoleon and Josephine, 2 December 1804. David had been an admirer of Napoleon from their first meeting, struck by Bonaparte's classical features. Requesting a sitting from the busy and impatient general, David was able to sketch Napoleon in 1797. David recorded the face of the conqueror of Italy, but the full composition of Napoleon holding the peace treaty with Austria remains unfinished. This was likely a decision by Napoleon himself after considering the current political situation. He may have considered the publicity the portrait would generate to be ill-timed. Bonaparte had high esteem for David, and asked him to accompany him to Egypt in 1798, but David refused, seemingly unwilling to give up the material comfort, safety, and peace of mind he had obtained through the years. Draftsman and engraver Dominique Vivant Denon went to Egypt instead, providing mostly documentary and archaeological work. After Napoleon's successful coup d'état in 1799, as First Consul he commissioned David to commemorate his daring crossing of the Alps. The crossing of the St. Bernard Pass had allowed the French to surprise the Austrian army and win victory at the Battle of Marengo on 14 June 1800. Although Napoleon had crossed the Alps on a mule, he requested that he be portrayed "calm upon a fiery steed". David complied with Napoleon Crossing the Saint-Bernard. After the proclamation of the Empire in 1804, David became the official court painter of the regime. During this period he took students, one of whom was the Belgian painter Pieter van Hanselaere. One of the works David was commissioned to paint was The Coronation of Napoleon (1805–1807). David was permitted to watch the event. He had plans of Notre Dame delivered, and participants in the coronation came to his studio to pose individually, though never the Emperor. (The only time David obtained a sitting from Napoleon had been in 1797.) David did manage to get a private sitting with the Empress Joséphine and Napoleon's sister, Caroline Murat, through the intervention of erstwhile art patron Marshal Joachim Murat, the Emperor's brother-in-law. 
For his background, David had the choir of Notre Dame act as his fill-in characters. Pope Pius VII came to sit for the painting, and actually blessed David. Napoleon came to see the painter, stared at the canvas for an hour, and said "David, I salute you." David had to redo several parts of the painting because of Napoleon's various whims, and for this painting he received twenty-four thousand francs. David was made a Chevalier de la Légion d'honneur in 1803. He was promoted to Officier in 1808 and, in 1815, to Commandant (now Commandeur) de la Légion d'honneur. When the Bourbons returned to power, David figured on the list of proscribed former revolutionaries and Bonapartists for having voted for the execution of the deposed King Louis XVI, and for participating in the death of Louis XVII, the deposed king's son, who was mistreated, starved, and forced into a false confession of incest with his mother, Queen Marie-Antoinette, which contributed to her death sentence. The newly restored Bourbon King, Louis XVIII, however, granted amnesty to David and even offered him the position of court painter. David refused, preferring self-exile in Brussels. There, he trained and influenced Brussels artists such as François-Joseph Navez and Ignace Brice, painted Cupid and Psyche and quietly lived the remainder of his life with his wife (whom he had remarried). During that time, he painted smaller-scale mythological scenes, and portraits of citizens of Brussels and Napoleonic émigrés, such as the Baron Gerard. David created his last great work, Mars Being Disarmed by Venus and the Three Graces, from 1822 to 1824. In December 1823, he wrote: "This is the last picture I want to paint, but I want to surpass myself in it. I will put the date of my seventy-five years on it and afterwards I will never again pick up my brush." The finished painting, evoking painted porcelain because of its limpid coloration, was exhibited first in Brussels, then in Paris, where his former students flocked to view it. The exhibition was profitable, earning 13,000 francs after deducting operating costs, which means that more than 10,000 people visited and viewed the painting. In his later years, David remained in full command of his artistic faculties, even after a stroke in the spring of 1825 disfigured his face and slurred his speech. In June 1825, he resolved to embark on an improved version of his The Anger of Achilles (also known as The Sacrifice of Iphigenia); the earlier version was completed in 1819 and is now in the collection of the Kimbell Art Museum in Fort Worth, Texas. Such was his determination to complete the work that David remarked to friends who visited his studio, "this [painting] is what is killing me"; by October it must already have been well advanced, as his former pupil Gros wrote to congratulate him, having heard reports of the painting's merits. By the time David died, the painting had been completed and the commissioner Ambroise Firmin-Didot brought it back to Paris to include it in the exhibition "Pour les grecs" that he had organized and which opened in Paris in April 1826. When David was leaving a theater, the driver of a carriage struck him; he later died on 29 December 1825. At his death, some of his portraits were auctioned in Paris; they sold for little. The famous Death of Marat was exhibited in a secluded room, to avoid outraging public sensibilities. 
Because he had been a regicide of King Louis XVI, David was not allowed to return to France for burial; his body was buried in Brussels and moved in 1882 to Brussels Cemetery, while some say his heart was buried with his wife at Père Lachaise Cemetery, Paris. The theme of the oath, found in several works such as The Oath of the Tennis Court, The Distribution of the Eagles, and Leonidas at Thermopylae, was perhaps inspired by the rituals of Freemasonry. In 1989, during the "David against David" conference, Albert Boime presented evidence, a document dated 1787, showing the painter's membership in the "La Moderation" Masonic Lodge. Jacques-Louis David's facial abnormalities were traditionally reported to be a consequence of a deep facial sword wound after a fencing incident. These left him with a noticeable asymmetry during facial expression and difficulty in eating and speaking. (He could not pronounce some consonants such as the letter 'r'.) A sword scar on the left side of his face is present in his self-portrait and sculptures and corresponds to some of the buccal branches of the facial nerve. An injury to this nerve and its branches is likely to have resulted in his difficulties with left facial movement. Furthermore, as a result of this injury, he suffered from a growth on his face that biographers and art historians have defined as a benign tumor. This growth, however, may have been a granuloma, or even a post-traumatic neuroma. As historian Simon Schama has pointed out, witty banter and public speaking ability were key aspects of the social culture of 18th-century France, so David's tumor could have been a serious obstacle in his social life. David was sometimes referred to as "David of the Tumor". In addition to his history paintings, David completed a number of privately commissioned portraits. Warren Roberts, among others, has pointed out the contrast between David's "public style" of painting, as shown in his history paintings, and his "private style", as shown in his portraits. His portraits were characterized by a sense of truth and realism. He focused on defining his subjects' features and characters without idealizing them. This is different from the style seen in his historical paintings, in which he idealizes his figures' features and bodies to align with Greek and Roman ideals of beauty. He put a great deal of detail into his portraits, defining smaller features such as hands and fabric. The compositions of his portraits remain simple, with blank backgrounds that allow the viewer to focus on the details of the subject. The portrait he did of his wife (1813) is an example of his typical portrait style. The background is dark and simple without any clues as to the setting, which forces the viewer to focus entirely on her. Her features are un-idealized and truthful to her appearance. There is a great amount of detail that can be seen in his attention to portraying the satin material of the dress she wears, the drapery of the scarf around her, and her hands, which rest in her lap. In the painting of Brutus (1789), the man and his wife are separated, both morally and physically. Paintings such as these, depicting the great strength of patriotic sacrifice, made David a popular hero of the revolution. In the Portrait of Antoine-Laurent Lavoisier and his wife (1788), the man and his wife are tied together in an intimate pose. She leans on his shoulder while he pauses from his work to look up at her. 
David casts them in a soft light, not in the sharp contrast of Brutus or of the Horatii. Also of interest: Lavoisier was a tax collector as well as a famous chemist. Though he spent some of his money trying to clean up swamps and eradicate malaria, he was nonetheless sent to the guillotine during the Reign of Terror as an enemy of the people. David, then a powerful member of the National Convention, stood idly by and watched. Other portraits include paintings of his sister-in-law and her husband, Madame and Monsieur Seriziat. The picture of Monsieur Seriziat depicts a man of wealth, sitting comfortably with his horse-riding equipment. The picture of Madame Seriziat shows her wearing an unadorned white dress, holding her young child's hand as they lean against a bed. David painted these portraits of Madame and Monsieur Seriziat out of gratitude for letting him stay with them after his time in jail. Towards the end of David's life, he painted a portrait of his old friend Abbé Sieyès. Both had been involved in the Revolution, and both had survived the purging of political radicals that followed the Reign of Terror. The shift in David's perspective played an important role in the paintings of his later life, including this one of Sieyès. During the height of the Terror, David was an ardent supporter of radicals such as Robespierre and Marat, and twice offered up his life in their defense. He organized revolutionary festivals and painted portraits of martyrs of the revolution, such as Le Peletier, who was assassinated for voting for the death of the King. David was at times an impassioned speaker in the National Convention. In speaking to the Convention about the young boy named Bara, another martyr of the revolution, David said, "O Bara! O Viala! The blood that you have spread still smokes; it rises toward Heaven and cries for vengeance." After Robespierre was sent to the guillotine, however, David was imprisoned and changed the attitude of his rhetoric. During his imprisonment he wrote many letters, pleading his innocence. In one he wrote, "I am prevented from returning to my atelier, which, alas, I should never have left. I believed that in accepting the most honorable position, but very difficult to fill, that of legislator, that a righteous heart would suffice, but I lacked the second quality, understanding." Later, while explaining his developing "Grecian style" for paintings such as The Intervention of the Sabine Women, David further commented on a shift in attitude: "In all human activity the violent and transitory develops first; repose and profundity appear last. The recognition of these latter qualities requires time; only great masters have them, while their pupils have access only to violent passions." Jacques-Louis David was, in his time, regarded as the leading painter in France, and arguably all of Western Europe; many of the painters honored by the restored Bourbons following the French Revolution had been David's pupils. David's student Antoine-Jean Gros, for example, was made a Baron and honored by Napoleon Bonaparte's court. Another pupil of David's, Jean Auguste Dominique Ingres, became the most important artist of the restored Royal Academy and the figurehead of the Neoclassical school of art, engaging the increasingly popular Romantic school of art that was beginning to challenge Neoclassicism. 
David invested in the training of young artists for the Prix de Rome, which was also a way to pursue his old rivalry with other contemporary painters such as Joseph-Benoît Suvée, who had also started teaching classes. To be one of David's students was considered prestigious and earned them a lifelong reputation. He called on the more advanced students, such as Jérôme-Martin Langlois, to help him paint his large canvases. The musician and artist Therese Emilie Henriette Winkel and the painter Jean Baptiste Vermay also studied with David. Despite David's reputation, he was more fiercely criticized right after his death than at any point during his life. His style came under the most serious criticism for being static, rigid, and uniform throughout all his work; his art was also attacked as cold and lacking warmth. David, however, made his career precisely by challenging what he saw as the earlier rigidity and conformity of the French Royal Academy's approach to art. David's later works also reflect his growth in the development of the Empire style, notable for its dynamism and warm colors. It is likely that much of the criticism of David following his death came from David's opponents; during his lifetime David made a great many enemies with his competitive and arrogant personality as well as his role in the Terror. David sent many people to the guillotine and personally signed the death warrants for King Louis XVI and Marie Antoinette. One significant episode in David's political career that earned him a great deal of contempt was the execution of Emilie Chalgrin. A fellow painter, Carle Vernet, had approached David, who was on the Committee of Public Safety, asking him to intervene on behalf of his sister, Chalgrin. She had been accused of crimes against the Republic, most notably possessing stolen items. David refused to intervene in her favor, and she was executed. Vernet blamed David for her death, and the episode followed David for the rest of his life and after. David has since enjoyed a revival in popular favor; in 1948 his two-hundredth birthday was celebrated with an exhibition at the Musée de l'Orangerie in Paris and at Versailles showing his life's works. Following World War II, Jacques-Louis David was increasingly regarded as a symbol of French national pride and identity, as well as a vital force in the development of European and French art in the modern era. The birth of Romanticism is traditionally credited to the paintings of eighteenth-century French artists such as Jacques-Louis David. There are streets named after David in the French cities of Carcassonne and Montpellier. Danton (Andrzej Wajda, France, 1982) – Historical drama. Many scenes include David as a silent character watching and drawing. The film focuses on the period of the Terror.
[ { "paragraph_id": 0, "text": "Jacques-Louis David (French: [ʒaklwi david]; 30 August 1748 – 29 December 1825) was a French painter in the Neoclassical style, considered to be the preeminent painter of the era. In the 1780s, his cerebral brand of history painting marked a change in taste away from Rococo frivolity toward classical austerity and severity and heightened feeling, harmonizing with the moral climate of the final years of the Ancien Régime.", "title": "" }, { "paragraph_id": 1, "text": "David later became an active supporter of the French Revolution and friend of Maximilien Robespierre (1758–1794), and was effectively a dictator of the arts under the French Republic. Imprisoned after Robespierre's fall from power, he aligned himself with yet another political regime upon his release: that of Napoleon, the First Consul of France. At this time he developed his Empire style, notable for its use of warm Venetian colours. After Napoleon's fall from Imperial power and the Bourbon revival, David exiled himself to Brussels, then in the United Kingdom of the Netherlands, where he remained until his death. David had many pupils, making him the strongest influence in French art of the early 19th century, especially academic Salon painting.", "title": "" }, { "paragraph_id": 2, "text": "Jacques-Louis David was born into a prosperous French family in Paris on 30 August 1748. When he was about nine his father was killed in a duel and his mother left him with his well-off architect uncles. They saw to it that he received an excellent education at the Collège des Quatre-Nations, University of Paris, but he was never a good student—he had a facial tumor that impeded his speech, and he was always preoccupied with drawing. He covered his notebooks with drawings, and he once said, \"I was always hiding behind the instructor's chair, drawing for the duration of the class\". Soon, he desired to be a painter, but his uncles and mother wanted him to be an architect. He overcame the opposition, and went to learn from François Boucher (1703–1770), the leading painter of the time, who was also a distant relative. Boucher was a Rococo painter, but tastes were changing, and the fashion for Rococo was giving way to a more classical style. Boucher decided that instead of taking over David's tutelage, he would send David to his friend, Joseph-Marie Vien (1716–1809), a painter who embraced the classical reaction to Rococo. There, David attended the Royal Academy, based in what is now the Louvre.", "title": "Early life" }, { "paragraph_id": 3, "text": "Each year the Academy awarded an outstanding student the prestigious Prix de Rome, which funded a 3- to 5-year stay in Rome. Since artists were now revisiting classical styles, the trip provided its winners the opportunity to study the remains of classical antiquity and the works of the Italian Renaissance masters at first hand. Called pensionnaire they were housed in the French Academy's Rome outpost, which from the years 1737 to 1793 was the Palazzo Mancini in the Via del Corso. David made three consecutive attempts to win the annual prize, (with Minerva Fighting Mars, Diana and Apollo Killing Niobe's Children and The Death of Seneca) with each failure allegedly contributing to his lifelong grudge against the institution. After his second loss in 1772, David went on a hunger strike, which lasted two and a half days before the faculty encouraged him to continue painting. 
Confident he now had the support and backing needed to win the prize, he resumed his studies with great zeal—only to fail to win the Prix de Rome again the following year. Finally, in 1774, David was awarded the Prix de Rome on the strength of his painting of Erasistratus Discovering the Cause of Antiochus' Disease, a subject set by the judges. In October 1775 he made the journey to Italy with his mentor, Joseph-Marie Vien, who had just been appointed director of the French Academy at Rome.", "title": "Early life" }, { "paragraph_id": 4, "text": "While in Italy, David mostly studied the works of 17th-century masters such as Poussin, Caravaggio, and the Carracci. Although he declared, \"the Antique will not seduce me, it lacks animation, it does not move\", David filled twelve sketchbooks with drawings that he and his studio used as model books for the rest of his life. He was introduced to the painter Raphael Mengs (1728–1779), who opposed the Rococo tendency to sweeten and trivialize ancient subjects, advocating instead the rigorous study of classical sources and close adherence to ancient models. Mengs' principled, historicizing approach to the representation of classical subjects profoundly influenced David's pre-revolutionary painting, such as The Vestal Virgin, probably from the 1780s. Mengs also introduced David to the theoretical writings on ancient sculpture by Johann Joachim Winckelmann (1717–1768), the German scholar held to be the founder of modern art history. As part of the Prix de Rome, David toured the newly excavated ruins of Pompeii in 1779, which deepened his belief that the persistence of classical culture was an index of its eternal conceptual and formal power. During the trip David also assiduously studied the High Renaissance painters, with Raphael making a profound and lasting impression on the young French artist.", "title": "Early life" }, { "paragraph_id": 5, "text": "Although David's fellow students at the academy found him difficult to get along with, they recognized his genius. David's stay at the French Academy in Rome was extended by a year. In July 1780, he returned to Paris. There, he found people ready to use their influence for him, and he was made an official member of the Royal Academy. He sent the Academy two paintings, and both were included in the Salon of 1781, a high honor. He was praised by his famous contemporary painters, but the administration of the Royal Academy was very hostile to this young upstart. After the Salon, the King granted David lodging in the Louvre, an ancient and much desired privilege of great artists. When M. Pécoul, the contractor of the King's buildings, was making arrangements with David, he asked the artist to marry his daughter, Marguerite Charlotte. This marriage brought him money and eventually four children. David had about 50 of his own pupils and was commissioned by the government to paint \"Horace defended by his Father\", but he soon decided, \"Only in Rome can I paint Romans.\" His father-in-law provided the money he needed for the trip, and David headed for Rome with his wife, Charlotte, and three of his students, one of whom, Jean-Germain Drouais (1763–1788), was the Prix de Rome winner of that year.", "title": "Early work" }, { "paragraph_id": 6, "text": "In Rome, David painted his famous Oath of the Horatii, 1784. In this piece, the artist references Enlightenment values while alluding to Rousseau's social contract. 
The republican ideal of the general will became the central focus of the painting, with all three sons positioned in compliance with the father. The Oath between the characters can be read as an act of unification of men to the binding of the state. The issue of gender roles also becomes apparent in this piece, as the women in Horatii greatly contrast the group of brothers. David depicts the father with his back to the women, shutting them out of the oath. They also appear to be smaller in scale and physically isolated from the male figures. The masculine virility and discipline displayed by the men's rigid and confident stances are also severely contrasted with the slouching, swooning female softness created in the other half of the composition. Here we see the clear division of male-female attributes that confined the sexes to specific roles under Rousseau's popularized doctrine of \"separate spheres\".", "title": "Early work" }, { "paragraph_id": 7, "text": "These revolutionary ideals are also apparent in the Distribution of Eagles. While Oath of the Horatii and The Tennis Court Oath stress the importance of masculine self-sacrifice for one's country and patriotism, the Distribution of Eagles would ask for self-sacrifice for one's Emperor (Napoleon) and the importance of battlefield glory.", "title": "Early work" }, { "paragraph_id": 8, "text": "In 1787, David did not become the Director of the French Academy in Rome, a position he dearly wanted. The Count in charge of the appointments said David was too young, but promised to support him in 6 to 12 years. This situation would be one of many that would cause him to lash out at the Academy in years to come.", "title": "Early work" }, { "paragraph_id": 9, "text": "For the Salon of 1787, David exhibited his famous Death of Socrates. \"Condemned to death, Socrates, strong, calm and at peace, discusses the immortality of the soul. Surrounded by Crito, his grieving friends and students, he is teaching, philosophizing, and in fact, thanking the God of Health, Asclepius, for the hemlock brew which will ensure a peaceful death... The wife of Socrates can be seen grieving alone outside the chamber, dismissed for her weakness. Plato is depicted as an old man seated at the end of the bed.\" Critics compared the Socrates with Michelangelo's Sistine Ceiling and Raphael's Stanze, and one, after ten visits to the Salon, described it as \"in every sense perfect\". Denis Diderot said it looked as if he copied it from some ancient bas-relief. The painting was very much in tune with the political climate at the time. For this painting, David was not honored with a royal \"works of encouragement\".", "title": "Early work" }, { "paragraph_id": 10, "text": "For his next painting, David created The Lictors Bring to Brutus the Bodies of His Sons. The work had tremendous appeal for the time. Before the opening of the Salon, the French Revolution had begun. The National Assembly had been established, and the Bastille had fallen. The royal court did not want propaganda agitating the people, so all paintings had to be checked before being hung. David's portrait of Lavoisier, who was a chemist and physicist as well as an active member of the Jacobin party, was banned by the authorities for such reasons. When the newspapers reported that the government had not allowed the showing of The Lictors Bring to Brutus the Bodies of His Sons, the people were outraged, and the royals were forced to give in. The painting was hung in the exhibition, protected by art students. 
The painting depicts Lucius Junius Brutus, the Roman leader, grieving for his sons. Brutus's sons had attempted to overthrow the government and restore the monarchy, so the father ordered their death to maintain the republic. Brutus was the heroic defender of the republic, sacrificing his own family for the good of the republic. On the right, the mother holds her two daughters, and the nurse is seen on the far right, in anguish. Brutus sits on the left, alone, brooding, seemingly dismissing the dead bodies of his sons. He knew that what he did was best for his country, but the tense posture of his feet and toes reveals his inner turmoil. The whole painting was a Republican symbol, and obviously had immense meaning during these times in France. It exemplified civic virtue, a value highly regarded during the Revolution.", "title": "Early work" }, { "paragraph_id": 11, "text": "In the beginning, David was a supporter of the Revolution, a friend of Robespierre, and a member of the Jacobin Club. While others were leaving the country for new and greater opportunities, David stayed behind to help destroy the old order; he was a regicide who voted in the National Convention for the execution of Louis XVI. It is uncertain why he did this, as there were many more opportunities for him under the King than the new order; some people suggest David's love for the classical made him embrace everything about that period, including a republican government.", "title": "The French Revolution" }, { "paragraph_id": 12, "text": "Others believed that they found the key to the artist's revolutionary career in his personality. Undoubtedly, David's artistic sensibility, mercurial temperament, volatile emotions, ardent enthusiasm, and fierce independence might have been expected to help turn him against the established order but they did not fully explain his devotion to the republican regime. Nor did the vague statements of those who insisted upon his \"powerful ambition...and unusual energy of will\" actually account for his revolutionary connections. Those who knew him maintained that \"generous ardor\", high-minded idealism and well-meaning though sometimes fanatical enthusiasm, rather than opportunism and jealousy, motivated his activities during this period.", "title": "The French Revolution" }, { "paragraph_id": 13, "text": "Soon, David turned his critical sights on the Royal Academy of Painting and Sculpture. This attack was probably caused primarily by the hypocrisy of the organization and their personal opposition to his work, as seen in previous episodes in David's life. The Royal Academy was controlled by royalists, who opposed David's attempts at reform, so the National Assembly finally ordered it to make changes to conform to the new constitution.", "title": "The French Revolution" }, { "paragraph_id": 14, "text": "David then began work on something that would later hound him: propaganda for the new republic. David's painting of Brutus was shown during the play Brutus by Voltaire.", "title": "The French Revolution" }, { "paragraph_id": 15, "text": "In 1789, Jacques-Louis David attempted to leave his artistic mark on the historical beginnings of the French Revolution with his painting of The Oath of the Tennis Court. David undertook this task not out of personal political conviction but rather because he was commissioned to do so. The painting was meant to commemorate the event of the same name but was never completed. A meeting of the Estates General was convened in May to address reforms of the monarchy.
Dissent arose over whether the three estates would meet separately, as had been tradition, or as one body. The King's acquiescence in the demands of the upper orders led to the deputies of the Third Estate renaming themselves as the National Assembly on 17 June. They were locked out of the meeting hall three days later when they attempted to meet, and forced to reconvene at the royal indoor tennis court. Presided over by Jean-Sylvain Bailly, they made a 'solemn oath never to separate' until a national constitution had been created. In 1789 this event was seen as a symbol of national unity against the ancien régime. Rejecting the current conditions, the oath signified a new transition in human history and ideology. David was enlisted by the Society of Friends of the Constitution, the body that would eventually form the Jacobins, to enshrine this symbolic event.", "title": "The French Revolution" }, { "paragraph_id": 16, "text": "This instance is notable in more ways than one because it eventually led David to become involved in politics, as he joined the Jacobins. The picture was meant to be massive in scale; the figures in the foreground were to be life-sized portraits of the participants, including Jean-Sylvain Bailly, the President of the Constituent Assembly. Seeking additional funding, David turned to the Society of Friends of the Constitution. The funding for the project was to come from over three thousand subscribers hoping to receive a print of the image. However, when the funding was insufficient, the state ended up financing the project.", "title": "The French Revolution" }, { "paragraph_id": 17, "text": "David set out in 1790 to transform the contemporary event into a major historical picture which would appear at the Salon of 1791 as a large pen-and-ink drawing. As in the Oath of the Horatii, David represents the unity of men in the service of a patriotic ideal. The outstretched arms which are prominent in both works betray David's deeply held belief that acts of republican virtue akin to those of the Romans were being played out in France. In what was essentially an act of intellect and reason, David creates an air of drama in this work. The very power of the people appears to be \"blowing\" through the scene with the stormy weather, in a sense alluding to the storm that would be the revolution.", "title": "The French Revolution" }, { "paragraph_id": 18, "text": "Symbolism in this work of art closely represents the revolutionary events taking place at the time. The figure in the middle is raising his right arm, making the oath that they will never disband until they have reached their goal of creating a \"constitution of the realm fixed upon solid foundations\". The importance of this symbol is highlighted by the fact that the crowd's arms are angled toward his hand, forming a triangular shape. Additionally, the open space in the top half contrasted to the commotion in the lower half serves to emphasize the magnitude of the Tennis Court Oath.", "title": "The French Revolution" }, { "paragraph_id": 19, "text": "In his attempt to depict political events of the Revolution in \"real time\", David was venturing down a new and untrodden path in the art world. However, Thomas Crow argues that this path \"proved to be less a way forward than a cul-de-sac for history painting\". Essentially, the history of the demise of David's The Tennis Court Oath illustrates the difficulty of creating works of art that portray current and controversial political occurrences.
Political circumstances in France proved too volatile to allow the completion of the painting. The unity that was to be symbolized in The Tennis Court Oath no longer existed in radicalized 1792. The National Assembly had split between conservatives and radical Jacobins, both vying for political power. By 1792 there was no longer consensus that all the revolutionaries at the tennis court were \"heroes\". A sizeable number of the heroes of 1789 had become the villains of 1792. In this unstable political climate, David's work remained unfinished. With only a few nude figures sketched onto the massive canvas, David abandoned The Oath of the Tennis Court. To have completed it would have been politically unsound. After this incident, when David attempted to make a political statement in his paintings, he returned to the less politically charged use of metaphor to convey his message.", "title": "The French Revolution" }, { "paragraph_id": 20, "text": "When Voltaire died in 1778, the church denied him a church burial, and his body was interred near a monastery. A year later, Voltaire's old friends began a campaign to have his body buried in the Panthéon, as church property had been confiscated by the French Government. In 1791, David was appointed to head the organizing committee for the ceremony, a parade through the streets of Paris to the Panthéon. Despite rain and opposition from conservatives due to the amount of money spent, the procession went ahead. Up to 100,000 people watched the \"Father of the Revolution\" being carried to his resting place. This was the first of many large festivals organized by David for the republic. He went on to organize festivals for martyrs who died fighting royalists. These funerals echoed the religious festivals of the pagan Greeks and Romans and are seen by many as Saturnalian.", "title": "The French Revolution" }, { "paragraph_id": 21, "text": "David incorporated many revolutionary symbols into these theatrical performances and orchestrated ceremonial rituals, in effect radicalizing the applied arts themselves. The most popular symbol for which David was responsible as propaganda minister was drawn from classical Greek images, changing and transforming them with contemporary politics. In an elaborate festival held on the anniversary of the revolt that brought the monarchy to its knees, David's Hercules figure was revealed in a procession following the Goddess of Liberty (Marianne). Liberty, the symbol of Enlightenment ideals, was here being overturned by the Hercules symbol: that of strength and passion for the protection of the Republic against disunity and factionalism. In his speech during the procession, David \"explicitly emphasized the opposition between people and monarchy; Hercules was chosen, after all, to make this opposition more evident\". The ideals that David linked to his Hercules single-handedly transformed the figure from a sign of the old regime into a powerful new symbol of revolution. \"David turned him into the representation of a collective, popular power. He took one of the favorite signs of monarchy and reproduced, elevated, and monumentalized it into the sign of its opposite.\" Hercules, the image, became, to the revolutionaries, something to rally around.", "title": "The French Revolution" }, { "paragraph_id": 22, "text": "In June 1791, the King made an ill-fated attempt to flee the country, but was apprehended short of his goal on the Austrian Netherlands border and was forced to return under guard to Paris.
Louis XVI had made secret requests to Emperor Leopold II of Austria, Marie-Antoinette's brother, to restore him to his throne. This was granted, and Austria threatened France if the royal couple were hurt. In reaction, the people arrested the King. This led to an invasion after the trials and executions of Louis and Marie-Antoinette. The Bourbon monarchy was destroyed by the French people in 1792—it would be restored after Napoleon, then destroyed again with the Restoration of the House of Bonaparte. When the new National Convention held its first meeting, David was sitting with his friends Jean-Paul Marat and Robespierre. In the convention, David soon earned the nickname \"ferocious terrorist\". Robespierre's agents discovered a secret vault containing the King's correspondence, which proved he was trying to overthrow the government, and demanded his execution. The National Convention held the trial of Louis XVI; David voted for the death of the King, causing his wife, Marguerite Charlotte, a royalist, to divorce him.", "title": "The French Revolution" }, { "paragraph_id": 23, "text": "When Louis XVI was executed on 21 January 1793, another man had already died as well—Louis Michel le Peletier de Saint-Fargeau. Le Peletier was killed on the preceding day by a royal bodyguard in revenge for having voted for the death of the King. David was called upon to organize a funeral, and he painted Le Peletier Assassinated. In it, the assassin's sword was seen hanging by a single strand of horsehair above Le Peletier's body, a concept inspired by the proverbial ancient tale of the sword of Damocles, which illustrated the insecurity of power and position. This underscored the courage displayed by Le Peletier and his companions in routing an oppressive king. The sword pierces a piece of paper on which is written \"I vote the death of the tyrant\", and as a tribute at the bottom right of the picture David placed the inscription \"David to Le Peletier. 20 January 1793\". The painting was later destroyed by Le Peletier's royalist daughter, and is known only from a drawing, an engraving, and contemporary accounts. Nevertheless, this work was important in David's career because it was the first completed painting of the French Revolution, made in less than three months, and a work through which he initiated the regeneration process that would continue with The Death of Marat, David's masterpiece.", "title": "The French Revolution" }, { "paragraph_id": 24, "text": "On 13 July 1793, David's friend Marat was assassinated by Charlotte Corday with a knife she had hidden in her clothing. She gained entrance to Marat's house on the pretense of presenting him a list of people who should be executed as enemies of France. Marat thanked her and said that they would be guillotined next week, whereupon Corday immediately stabbed him fatally. She was guillotined shortly thereafter. Corday was of an opposing political party, whose name can be seen in the note Marat holds in David's subsequent painting, The Death of Marat. Marat, a member of the National Convention and a journalist, had a skin disease that caused him to itch horribly. The only relief he could get was in his bath, over which he improvised a desk to write his list of suspect counter-revolutionaries who were to be quickly tried and, if convicted, guillotined. David once again organized a spectacular funeral, and Marat was buried in the Panthéon.
Marat's body was to be placed upon a Roman bed, his wound displayed and his right arm extended holding the pen which he had used to defend the Republic and its people. This concept was complicated by the fact that the corpse had begun to putrefy. Marat's body had to be periodically sprinkled with water and vinegar as the public crowded to see his corpse prior to the funeral on 15 and 16 July. The stench became so bad, however, that the funeral had to be brought forward to the evening of 16 July.", "title": "The French Revolution" }, { "paragraph_id": 25, "text": "The Death of Marat, perhaps David's most famous painting, has been called the Pietà of the revolution. Upon presenting the painting to the convention, he said, \"Citizens, the people were again calling for their friend; their desolate voice was heard: David, take up your brushes..., avenge Marat... I heard the voice of the people. I obeyed.\" David had to work quickly, but the result was a simple and powerful image.", "title": "The French Revolution" }, { "paragraph_id": 26, "text": "The Death of Marat, 1793, became the leading image of the Terror and immortalized both Marat and David in the world of the revolution. This piece stands today as \"a moving testimony to what can be achieved when an artist's political convictions are directly manifested in his work\". A political martyr was instantly created as David portrayed Marat with all the marks of the real murder, in a fashion which greatly resembles that of Christ or his disciples. The subject, although realistically depicted, remains lifeless in a rather supernatural composition, with the surrogate tombstone placed in front of him and an almost holy light cast upon the whole scene, alluding to an otherworldly existence. \"Atheists though they were, David and Marat, like so many other fervent social reformers of the modern world, seem to have created a new kind of religion.\" At the very center of these beliefs, there stood the republic.", "title": "The French Revolution" }, { "paragraph_id": 27, "text": "After the King's execution, war broke out between the new Republic and virtually every major power in Europe. David, as a member of the Committee of General Security, contributed directly to the Reign of Terror. David organized his last festival: the festival of the Supreme Being. Robespierre had realized what a tremendous propaganda tool these festivals were, and he decided to create a new religion, mixing moral ideas with the Republic and based on the ideas of Rousseau. This process had already begun with the confiscation of church lands and the requirement that priests take an oath to the state. The festivals, called fêtes, would be the method of indoctrination. On the appointed day, 20 Prairial by the revolutionary calendar, Robespierre spoke, descended steps, and with a torch presented to him by David, incinerated a cardboard image symbolizing atheism, revealing an image of wisdom underneath.", "title": "The French Revolution" }, { "paragraph_id": 28, "text": "Soon, the war began to go well; French troops marched across the southern half of the Netherlands (which would later become Belgium), and the emergency that had placed the Committee of Public Safety in control was no more. Then plotters seized Robespierre at the National Convention, and he was later guillotined, in effect ending the Reign of Terror.
As Robespierre was arrested, David yelled to his friend \"if you drink hemlock, I shall drink it with you.\" After this, he supposedly fell ill, and did not attend the evening session because of \"stomach pain\", which saved him from being guillotined along with Robespierre. David was arrested and placed in prison twice, first from 2 August to 28 December 1794 and then from 29 May to 3 August 1795. Most of the time he served his sentence in the not uncomfortable Palais du Luxembourg in Paris. There he painted his own portrait, showing him much younger than he actually was, as well as that of his jailer.", "title": "The French Revolution" }, { "paragraph_id": 29, "text": "After David's wife visited him in jail, he conceived the idea of telling the story of the rape of the Sabine women. The Sabine Women Enforcing Peace by Running between the Combatants, also called The Intervention of the Sabine Women, is said to have been painted to honor his wife, with the theme being love prevailing over conflict. The painting was also seen as a plea for the people to reunite after the bloodshed of the revolution.", "title": "Post-revolution" }, { "paragraph_id": 30, "text": "David conceived a new style for this painting, one which he called the \"Pure Greek Style\", as opposed to the \"Roman style\" of his earlier historical paintings. The new style was influenced heavily by the work of art historian Johann Joachim Winckelmann. In David's words, \"the most prominent general characteristics of the Greek masterpieces are a noble simplicity and silent greatness in pose as well as in expression.\" Instead of the muscularity and angularity of the figures of his past works, these were smoother, more feminine, and painterly.", "title": "Post-revolution" }, { "paragraph_id": 31, "text": "This work also brought him to the attention of Napoleon. The story for the painting is as follows: \"The Romans have abducted the daughters of their neighbors, the Sabines. To avenge this abduction, the Sabines attacked Rome, although not immediately—since Hersilia, the daughter of Tatius, the leader of the Sabines, had been married to Romulus, the Roman leader, and then had two children by him in the interim. Here we see Hersilia between her father and husband as she adjures the warriors on both sides not to take wives away from their husbands or mothers away from their children. The other Sabine Women join in her exhortations.\"", "title": "Post-revolution" }, { "paragraph_id": 32, "text": "During this time, the martyrs of the Revolution were taken from the Pantheon and buried in common ground, and revolutionary statues were destroyed. When David was finally released to the country, France had changed. His wife managed to get him released from prison, and he wrote letters to his former wife, and told her he never ceased loving her. He remarried her in 1796. Finally, wholly restored to his position, he retreated to his studio, took pupils and, for the most part, retired from politics.", "title": "Post-revolution" }, { "paragraph_id": 33, "text": "In August 1796, David and many other artists signed a petition orchestrated by Quatremère de Quincy which questioned the wisdom of the planned seizure of works of art from Rome.
The Director Barras believed that David was \"tricked\" into signing, although one of David's students recalled that in 1798 his master lamented the fact that masterpieces had been imported from Italy.", "title": "Post-revolution" }, { "paragraph_id": 34, "text": "David's close association with the Committee of General Security during the Terror resulted in his signing of the death warrant for Alexandre de Beauharnais, a minor noble. Beauharnais's widow, Joséphine, went on to marry Napoleon Bonaparte and became his empress; David himself depicted their coronation in the Coronation of Napoleon and Josephine, 2 December 1804.", "title": "Napoleon" }, { "paragraph_id": 35, "text": "David had been an admirer of Napoleon from their first meeting, struck by Bonaparte's classical features. Requesting a sitting from the busy and impatient general, David was able to sketch Napoleon in 1797. David recorded the face of the conqueror of Italy, but the full composition of Napoleon holding the peace treaty with Austria remains unfinished. This was likely a decision by Napoleon himself after considering the current political situation. He may have considered the publicity the portrait would bring to be ill-timed. Bonaparte had high esteem for David, and asked him to accompany him to Egypt in 1798, but David refused, seemingly unwilling to give up the material comfort, safety, and peace of mind he had obtained through the years. Draftsman and engraver Dominique Vivant Denon went to Egypt instead, providing mostly documentary and archaeological work.", "title": "Napoleon" }, { "paragraph_id": 36, "text": "After Napoleon's successful coup d'état in 1799, as First Consul he commissioned David to commemorate his daring crossing of the Alps. The crossing of the St. Bernard Pass had allowed the French to surprise the Austrian army and win victory at the Battle of Marengo on 14 June 1800. Although Napoleon had crossed the Alps on a mule, he requested that he be portrayed \"calm upon a fiery steed\". David complied with Napoleon Crossing the Saint-Bernard. After the proclamation of the Empire in 1804, David became the official court painter of the regime. During this period he took students, one of whom was the Belgian painter Pieter van Hanselaere.", "title": "Napoleon" }, { "paragraph_id": 37, "text": "One of the works David was commissioned for was The Coronation of Napoleon (1805–1807). David was permitted to watch the event. He had plans of Notre Dame delivered and participants in the coronation came to his studio to pose individually, though never the Emperor. (The only time David obtained a sitting from Napoleon had been in 1797.) David did manage to get a private sitting with the Empress Joséphine and Napoleon's sister, Caroline Murat, through the intervention of erstwhile art patron Marshal Joachim Murat, the Emperor's brother-in-law. For his background, David had the choir of Notre Dame act as his fill-in characters. Pope Pius VII came to sit for the painting, and actually blessed David. Napoleon came to see the painter, stared at the canvas for an hour and said \"David, I salute you.\" David had to redo several parts of the painting because of Napoleon's various whims, and for this painting, he received twenty-four thousand francs.", "title": "Napoleon" }, { "paragraph_id": 38, "text": "David was made a Chevalier de la Légion d'honneur in 1803. He was promoted to an Officier in 1808.
In 1815, he was promoted to a Commandant (now Commandeur) de la Légion d'honneur.", "title": "Napoleon" }, { "paragraph_id": 39, "text": "On the Bourbons returning to power, David figured in the list of proscribed former revolutionaries and Bonapartists for having voted for the execution of the deposed King Louis XVI, and for participating in the death of Louis XVII, the deposed king's son who was mistreated, starved and forced into a false confession of incest with his mother, Queen Marie-Antoinette, which contributed to her death sentence.", "title": "Exile and death" }, { "paragraph_id": 40, "text": "The newly restored Bourbon King, Louis XVIII, however, granted amnesty to David and even offered him the position of court painter. David refused, preferring self-exile in Brussels. There, he trained and influenced Brussels artists such as François-Joseph Navez and Ignace Brice, painted Cupid and Psyche and quietly lived the remainder of his life with his wife (whom he had remarried). In that time, he painted smaller-scale mythological scenes, and portraits of citizens of Brussels and Napoleonic émigrés, such as the Baron Gerard.", "title": "Exile and death" }, { "paragraph_id": 41, "text": "David created his last great work, Mars Being Disarmed by Venus and the Three Graces, from 1822 to 1824. In December 1823, he wrote: \"This is the last picture I want to paint, but I want to surpass myself in it. I will put the date of my seventy-five years on it and afterwards I will never again pick up my brush.\" The finished painting—evoking painted porcelain because of its limpid coloration—was exhibited first in Brussels, then in Paris, where his former students flocked to view it.", "title": "Exile and death" }, { "paragraph_id": 42, "text": "The exhibition was profitable, earning 13,000 francs after deducting operating costs; thus, more than 10,000 people visited and viewed the painting. In his later years, David remained in full command of his artistic faculties, even after a stroke in the spring of 1825 disfigured his face and slurred his speech. In June 1825, he resolved to embark on an improved version of The Anger of Achilles (also known as the Sacrifice of Iphigenie); the earlier version was completed in 1819 and is now in the collection of the Kimbell Art Museum in Fort Worth, Texas. David remarked to his friends who visited his studio, \"this [painting] is what is killing me\"; such was his determination to complete the work. But by October it must have already been well advanced, as his former pupil Gros wrote to congratulate him, having heard reports of the painting's merits. By the time David died, the painting had been completed and the commissioner Ambroise Firmin-Didot brought it back to Paris to include it in the exhibition \"Pour les grecs\" that he had organised and which opened in Paris in April 1826.", "title": "Exile and death" }, { "paragraph_id": 43, "text": "When David was leaving a theater, the driver of a carriage struck him, and he later died on 29 December 1825. At his death, some portraits were auctioned in Paris; they sold for little. The famous Death of Marat was exhibited in a secluded room, to avoid outraging public sensibilities.
Because he had been a regicide of King Louis XVI, the painter Jacques-Louis David was not allowed to be returned to France for burial; his body was buried in Brussels and moved in 1882 to Brussels Cemetery, while some say his heart was buried with his wife at Père Lachaise Cemetery, Paris.", "title": "Exile and death" }, { "paragraph_id": 44, "text": "The theme of the oath, found in several works such as The Oath of the Tennis Court, The Distribution of the Eagles, and Leonidas at Thermopylae, was perhaps inspired by the rituals of Freemasonry. In 1989, during the \"David against David\" conference, Albert Boime presented evidence, a document dated 1787, showing the painter's membership in the \"La Moderation\" Masonic Lodge.", "title": "Freemasonry" }, { "paragraph_id": 45, "text": "Jacques-Louis David's facial abnormalities were traditionally reported to be a consequence of a deep facial sword wound after a fencing incident. These left him with a noticeable asymmetry during facial expression and resulted in his difficulty in eating or speaking. (He could not pronounce some consonants such as the letter 'r'.) A sword scar on the left side of his face is present in his self-portrait and sculptures and corresponds to some of the buccal branches of the facial nerve. An injury to this nerve and its branches is likely to have resulted in the difficulties with his left facial movement.", "title": "Medical analysis of David's face" }, { "paragraph_id": 46, "text": "Furthermore, as a result of this injury, he suffered from a growth on his face that biographers and art historians have defined as a benign tumor. This, however, may have been a granuloma, or even a post-traumatic neuroma. As historian Simon Schama has pointed out, witty banter and public speaking ability were key aspects of the social culture of 18th-century France, so David's tumor could have been a heavy obstacle in his social life. David was sometimes referred to as \"David of the Tumor\".", "title": "Medical analysis of David's face" }, { "paragraph_id": 47, "text": "In addition to his history paintings, David completed a number of privately commissioned portraits. Warren Roberts, among others, has pointed out the contrast between David's \"public style\" of painting, as shown in his history paintings, and his \"private style\", as shown in his portraits. His portraits were characterized by a sense of truth and realism. He focused on defining his subjects' features and characters without idealizing them. This is different from the style seen in his historical paintings, in which he idealizes his figures' features and bodies to align with Greek and Roman ideals of beauty. He puts a great deal of detail into his portraits, defining smaller features such as hands and fabric. The compositions of his portraits remain simple with blank backgrounds that allow the viewer to focus on the details of the subject.", "title": "Portraiture" }, { "paragraph_id": 48, "text": "The portrait he did of his wife (1813) is an example of his typical portrait style. The background is dark and simple without any clues as to the setting, which forces the viewer to focus entirely on her. Her features are un-idealized and truthful to her appearance.
There is a great amount of detail that can be seen in his attention to portraying the satin material of the dress she wears, the drapery of the scarf around her, and her hands, which rest in her lap.", "title": "Portraiture" }, { "paragraph_id": 49, "text": "In the painting of Brutus (1789), the man and his wife are separated, both morally and physically. Paintings such as these, depicting the great strength of patriotic sacrifice, made David a popular hero of the revolution.", "title": "Portraiture" }, { "paragraph_id": 50, "text": "In the Portrait of Antoine-Laurent Lavoisier and his wife (1788), the man and his wife are tied together in an intimate pose. She leans on his shoulder while he pauses from his work to look up at her. David casts them in a soft light, not in the sharp contrast of Brutus or of the Horatii. Also of interest—Lavoisier was a tax collector, as well as a famous chemist. Though he spent some of his money trying to clean up swamps and eradicate malaria, he was nonetheless sent to the guillotine during the Reign of Terror as an enemy of the people. David, then a powerful member of the National Assembly, stood idly by and watched.", "title": "Portraiture" }, { "paragraph_id": 51, "text": "Other portraits include paintings of his sister-in-law and her husband, Madame and Monsieur Seriziat. The picture of Monsieur Seriziat depicts a man of wealth, sitting comfortably with his horse-riding equipment. The picture of Madame Seriziat shows her wearing an unadorned white dress, holding her young child's hand as they lean against a bed. David painted these portraits of Madame and Monsieur Seriziat out of gratitude for letting him stay with them after he was in jail.", "title": "Portraiture" }, { "paragraph_id": 52, "text": "Towards the end of David's life, he painted a portrait of his old friend Abbé Sieyès. Both had been involved in the Revolution, and both had survived the purging of political radicals that followed the Reign of Terror.", "title": "Portraiture" }, { "paragraph_id": 53, "text": "The shift in David's perspective played an important role in the paintings of David's later life, including this one of Sieyès. During the height of The Terror, David was an ardent supporter of radicals such as Robespierre and Marat, and twice offered up his life in their defense. He organized revolutionary festivals and painted portraits of martyrs of the revolution, such as Lepeletier, who was assassinated for voting for the death of the king. David was an impassioned speaker at times in the National Assembly. In speaking to the Assembly about the young boy named Bara, another martyr of the revolution, David said, \"O Bara! O Viala! The blood that you have spread still smokes; it rises toward Heaven and cries for vengeance.\"", "title": "Shift in attitude" }, { "paragraph_id": 54, "text": "After Robespierre was sent to the guillotine, however, David was imprisoned and changed the attitude of his rhetoric. During his imprisonment he wrote many letters, pleading his innocence. In one he wrote, \"I am prevented from returning to my atelier, which, alas, I should never have left.
I believed that in accepting the most honorable position, but very difficult to fill, that of legislator, that a righteous heart would suffice, but I lacked the second quality, understanding.\"", "title": "Shift in attitude" }, { "paragraph_id": 55, "text": "Later, while explaining his developing \"Grecian style\" for paintings such as The Intervention of the Sabine Women, David further commented on a shift in attitude: \"In all human activity the violent and transitory develops first; repose and profundity appear last. The recognition of these latter qualities requires time; only great masters have them, while their pupils have access only to violent passions.\"", "title": "Shift in attitude" }, { "paragraph_id": 56, "text": "Jacques-Louis David was, in his time, regarded as the leading painter in France, and arguably all of Western Europe; many of the painters honored by the restored Bourbons following the French Revolution had been David's pupils. David's student Antoine-Jean Gros, for example, was made a Baron and honored by Napoleon Bonaparte's court. Another pupil of David's, Jean Auguste Dominique Ingres, became the most important artist of the restored Royal Academy and the figurehead of the Neoclassical school of art, engaging the increasingly popular Romantic school of art that was beginning to challenge Neoclassicism. David invested in the formation of young artists for the Rome Prize, which was also a way to pursue his old rivalry with other contemporary painters such as Joseph-Benoît Suvée, who had also started teaching classes. To be one of David's students was considered prestigious and earned his students a lifetime reputation. He called on the more advanced students, such as Jérôme-Martin Langlois, to help him paint his large canvases. Musician and artist Therese Emilie Henriette Winkel and painter Jean Baptiste Vermay also studied with David.", "title": "Legacy" }, { "paragraph_id": 57, "text": "Despite David's reputation, he was more fiercely criticized right after his death than at any point during his life. His style came under the most serious criticism for being static, rigid, and uniform throughout all his work. David's art was also attacked for being cold and lacking warmth. David, however, made his career precisely by challenging what he saw as the earlier rigidity and conformity of the French Royal Academy's approach to art. David's later works also reflect his growth in the development of the Empire style, notable for its dynamism and warm colors. It is likely that much of the criticism of David following his death came from David's opponents; during his lifetime David made a great many enemies with his competitive and arrogant personality as well as his role in the Terror. David sent many people to the guillotine and personally signed the death warrants for King Louis XVI and Marie Antoinette. One significant episode in David's political career that earned him a great deal of contempt was the execution of Emilie Chalgrin. A fellow painter, Carle Vernet, had approached David, who was on the Committee of General Security, requesting him to intervene on behalf of his sister, Chalgrin. She had been accused of crimes against the Republic, most notably possessing stolen items. David refused to intervene in her favor, and she was executed.
Vernet blamed David for her death, and the episode followed him for the rest of his life and after.", "title": "Legacy" }, { "paragraph_id": 58, "text": "In the last 50 years, David has enjoyed a revival in popular favor, and in 1948 his two-hundredth birthday was celebrated with an exhibition at the Musée de l'Orangerie in Paris and at Versailles showing his life's works. Following World War II, Jacques-Louis David was increasingly regarded as a symbol of French national pride and identity, as well as a vital force in the development of European and French art in the modern era. The birth of Romanticism is traditionally credited to the paintings of eighteenth-century French artists such as Jacques-Louis David.", "title": "Legacy" }, { "paragraph_id": 59, "text": "There are streets named after David in the French cities of Carcassonne and Montpellier.", "title": "Legacy" }, { "paragraph_id": 60, "text": "Danton (Andrzej Wajda, France, 1982) – Historical drama. Many scenes include David as a silent character watching and drawing. The film focuses on the period of the Terror.", "title": "Filmography" } ]
Jacques-Louis David was a French painter in the Neoclassical style, considered to be the preeminent painter of the era. In the 1780s, his cerebral brand of history painting marked a change in taste away from Rococo frivolity toward classical austerity and severity and heightened feeling, harmonizing with the moral climate of the final years of the Ancien Régime. David later became an active supporter of the French Revolution and friend of Maximilien Robespierre (1758–1794), and was effectively a dictator of the arts under the French Republic. Imprisoned after Robespierre's fall from power, he aligned himself with yet another political regime upon his release: that of Napoleon, the First Consul of France. At this time he developed his Empire style, notable for its use of warm Venetian colours. After Napoleon's fall from Imperial power and the Bourbon revival, David exiled himself to Brussels, then in the United Kingdom of the Netherlands, where he remained until his death. David had many pupils, making him the strongest influence in French art of the early 19th century, especially academic Salon painting.
2002-02-25T15:51:15Z
2023-12-04T00:35:57Z
[ "Template:Lang", "Template:Main", "Template:Cite episode", "Template:ISBN", "Template:Commons", "Template:Jacques-Louis David", "Template:Authority control (arts)", "Template:Infobox officeholder", "Template:Page needed", "Template:Harvnb", "Template:Cite book", "Template:Cite web", "Template:Webarchive", "Template:Cite journal", "Template:Short description", "Template:Use dmy dates", "Template:IPA-fr", "Template:Citation needed", "Template:Sfn", "Template:Unreferenced section", "Template:Reflist", "Template:Citation", "Template:C.", "Template:French Revolution navbox" ]
https://en.wikipedia.org/wiki/Jacques-Louis_David
9,074
Design Science License
Design Science License (DSL) is a copyleft license for any type of free content such as text, images, music. Unlike other open source licenses, the DSL was intended to be used on any type of copyrightable work, including documentation and source code. It was the first “generalized copyleft” license. The DSL was written by Michael Stutz. The DSL came out in the 1990s, before the formation of the Creative Commons. Once the Creative Commons arrived, Stutz considered the DSL experiment "over" and no longer recommended its use.
[ { "paragraph_id": 0, "text": "Design Science License (DSL) is a copyleft license for any type of free content such as text, images, music. Unlike other open source licenses, the DSL was intended to be used on any type of copyrightable work, including documentation and source code. It was the first “generalized copyleft” license. The DSL was written by Michael Stutz.", "title": "" }, { "paragraph_id": 1, "text": "The DSL came out in the 1990s, before the formation of the Creative Commons. Once the Creative Commons arrived, Stutz considered the DSL experiment \"over\" and no longer recommended its use.", "title": "" }, { "paragraph_id": 2, "text": "", "title": "External links" } ]
Design Science License (DSL) is a copyleft license for any type of free content such as text, images, music. Unlike other open source licenses, the DSL was intended to be used on any type of copyrightable work, including documentation and source code. It was the first “generalized copyleft” license. The DSL was written by Michael Stutz. The DSL came out in the 1990s, before the formation of the Creative Commons. Once the Creative Commons arrived, Stutz considered the DSL experiment "over" and no longer recommended its use.
2023-05-15T14:24:16Z
[ "Template:Cite book", "Template:Law-stub", "Template:Reflist", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Design_Science_License
9,079
Drum kit
A drum kit (also called a drum set, trap set, or simply drums) is a collection of drums, cymbals, and sometimes other auxiliary percussion instruments set up to be played by one person. The drummer typically holds a pair of matching drumsticks, and uses their feet to operate hi-hat and bass drum pedals. A standard kit usually consists of a snare drum, a bass drum, one or more tom-toms, a hi-hat, and one or more cymbals, such as a ride cymbal and a crash cymbal. The drum kit is a part of the standard rhythm section and is used in many types of popular and traditional music styles, ranging from rock and pop to blues and jazz. Before the development of the classic drum kit, drums and cymbals used in military and orchestral music settings were played separately by different percussionists. In the 1840s, percussionists began to experiment with foot pedals as a way to enable them to play more than one instrument, but these devices would not be mass-produced for another 75 years. By the 1860s, percussionists started combining multiple drums into a kit. The bass drum, snare drum, cymbals, and other percussion instruments were all struck with hand-held drumsticks. Drummers in musical theater appeared in stage shows, where the budget for pit orchestras was often too limited to pay a full team of percussionists. This contributed to the creation of the drum kit by developing techniques and devices that would enable one person to replace multiple percussionists. Double-drumming was developed to enable one person to play both bass and snare drums with sticks, while the cymbals could be played by tapping the foot on a "low-boy". With this approach, the bass drum was usually played on beats one and three (in 4/4 time). While the music was first designed to accompany marching soldiers, this simple and straightforward drumming approach led to the birth of ragtime music, when the simple marching beats became more syncopated. This resulted in a greater swing and dance feel. The drum kit was initially referred to as a "trap set", and from the late 1800s to the 1930s, drummers were referred to as "trap drummers". By the 1870s, drummers were using an overhang pedal. Most drummers in the 1870s preferred to do double-drumming without any pedal to play multiple drums, rather than use an overhang pedal. Companies patented their pedal systems, such as that of drummer Edward "Dee Dee" Chandler of New Orleans in 1904 or 1905. This led to the bass drum being played by percussionists standing and using their feet, hence the term "kick drum". William F. Ludwig Sr. and his brother Theobald founded Ludwig & Ludwig Co. in 1909 and patented the first commercially successful bass drum pedal system. In 1912, drummers replaced sticks with wire brushes and, later, metal fly swatters, as the louder sounds made by using drumsticks could overpower other instruments. By World War I, drum kits were often marching-band-style bass drums with many percussion items around them and suspended from them. Drum kits became a central part of jazz, especially Dixieland. The modern drum kit was developed in the vaudeville era, during the 1920s, in New Orleans. Drummers such as Baby Dodds, Zutty Singleton, and Ray Bauduc took the idea of marching rhythms and combined the bass drum, snare drum, and "traps" – a term used to refer to the percussion instruments associated with immigrant groups, which included miniature cymbals, tom toms, cowbells, and woodblocks. They started incorporating these elements into ragtime, which had been popular for a few decades, creating an approach that evolved into a jazz drumming style.
Budget constraints and space considerations in musical theater pit orchestras led bandleaders to pressure percussionists to cover more percussion parts. Metal consoles were developed to hold Chinese tom-toms, with swing-out stands for snare drums and cymbals. On top of the console was a "contraption" tray (shortened to "trap"), used to hold items like whistles, klaxons, and cowbells. These kits were dubbed "trap kits". Hi-hat stands became available around 1926. In 1918, Baby Dodds, playing on Mississippi River riverboats with Louis Armstrong, modified the military marching setup, experimenting with playing the drum rims instead of woodblocks, hitting cymbals with sticks (which was not yet common), and adding a side cymbal above the bass drum, which became known as the ride cymbal. William Ludwig developed the "sock" or early low-mounted hi-hat after observing Dodds' drumming. Dodds asked Ludwig to raise the newly produced low-hat cymbal nine inches to make them easier to play, thus creating the modern hi-hat cymbal. Dodds was one of the first drummers to play the broken-triplet beat that became the standard rhythm of modern ride cymbal playing. He also popularized the use of Chinese cymbals. Recording technology was crude, which meant loud sounds could distort the recording. To get around this, Dodds used woodblocks and drum rims as quieter alternatives to cymbals and drum skins. In the 1920s, freelance drummers were hired to play at shows, concerts, theaters, and clubs to support dancers and musicians of various genres. Orchestras were hired to accompany silent films, and the drummer was responsible for providing the sound effects. Sheet music from the 1920s shows that the drummer's sets were starting to evolve in size to support the various acts. However, by 1930, films with audio were more popular, and many were accompanied by pre-recorded soundtracks. This technological breakthrough put thousands of drummers who served as sound effects specialists out of work, with some drummers obtaining work as foley artists for those motion-picture sound tracks. Kit drumming, whether accompanying voices and other instruments or performing a drum solo, consists of two elements: grooves, the repeating rhythm patterns that keep time, and fills. A fill is a departure from the repetitive rhythm pattern in a song. A drum fill can be used to "fill in" the space between the end of one verse and the beginning of another verse or chorus. Fills vary from a simple few strokes on a tom or snare to a distinctive rhythm played on the hi-hat, to sequences several bars long that are short virtuosic drum solos. As well as adding interest and variation to the music, fills serve an important function in indicating significant changes of sections in songs and in linking them together. A vocal cue is a short drum fill that introduces a singer's entrance into the piece. A fill ending with a cymbal crash on beat one is often used to lead into a chorus or verse. A drum solo is an instrumental section that highlights the drums. While other instrument solos are typically accompanied by the other rhythm section instruments (e.g., bass guitar and electric guitar), for most drum solos, the band members stop playing so that all focus will be on the drummer. In some drum solos, the other rhythm section instrumentalists may play "punches" at certain points – sudden, loud chords of short duration. Drum solos are common in jazz but are also used in several rock genres, such as heavy metal and progressive rock.
During drum solos, drummers have a degree of creative freedom, allowing them to use complex polyrhythms that would otherwise be unsuitable with an ensemble. In live concerts, drummers may be given extended drum solos, even in genres where drum solos are rare on recordings. Most drummers hold the drumsticks in one of two types of grip: matched grip, in which both hands hold the sticks in the same way, or traditional grip, in which the non-dominant hand holds the stick underhand. The bass drum (also known as the "kick drum") is the lowest-pitched drum and usually provides the beat or timing element with basic pulse patterns. Some drummers may use two or more bass drums or a double pedal on a single bass drum, which enables a drummer to play a double-bass-drum style with only one drum. This saves space in recording/performance areas and reduces time and effort during set-up, take-down, and transportation. Double bass drumming is a technique used in certain genres, including heavy metal and progressive rock. The snare drum provides the backbeat. When applied in this fashion, it supplies strong regular accents played by the non-dominant hand and is the backbone for many fills. Its distinctive sound can be attributed to the bed of stiff metal wires held under tension against the bottom head (known as the snare head). When the top head (known as the batter head) is struck with a drumstick, the snare wires vibrate, creating a snappy, staccato buzzing sound, along with the sound of the stick striking the batter head. Tom-tom drums, or toms for short, are drums without snares and played with sticks (or whatever tools the music style requires) and are the most numerous drums in most kits. They provide the bulk of most drum fills and solos. They include rack toms, mounted above the bass drum or on stands, and floor toms, which stand on their own legs. The smallest and largest drums without snares (octobans and gong drums, respectively) are sometimes considered toms. The naming of common configurations (four-piece, five-piece, etc.) is largely a reflection of the number of toms, as conventionally only the drums are counted, and these configurations all contain one snare and one or more bass drums (though two bass/kick drums are not a standardized usage), the balance usually being made up by toms. Octobans are smaller toms designed for use in a drum kit, extending the tom range upwards in pitch, primarily by their great depth and small diameter. They are also called rocket toms and tube toms. Timbales are tuned much higher than a tom of the same diameter, typically have drum shells made of metal, and are normally played with very light, thin, non-tapered sticks. Timbales are more common in Latin music. They have thin heads and a very different tone than a tom but are used by some drummers/percussionists to extend the tom range upwards. Alternatively, they can be fitted with tom heads and tuned as shallow concert toms. Attack timbales and mini timbales are reduced-diameter timbales designed for drum kit usage, the smaller diameter allowing for thicker heads providing the same pitch and head tension. They are recognizable in genres of the 2010s and more traditional forms of Latin, reggae, and numerous other styles. Gong drums are a rare extension of a drum kit. This single-headed mountable drum appears similar to a bass drum (around 20–24 inches in diameter) but is played with sticks rather than a foot-operated pedal and therefore has the same purpose as a floor tom. Most hand drums cannot be played with drumsticks without risking damage to the head and bearing edge, which is not protected by a metal drum rim. For use in a drum kit, they may be fitted with a metal drum head and played with sticks with care, or played by hand.
In most drum kits and drum/percussion kits, cymbals are as prominent as the drums themselves. The oldest idiophones in music are cymbals, a version of which was used throughout the ancient Near East very early in the Bronze Age period. Cymbals are mostly associated with Turkey and Turkish craftsmanship, where Zildjian has made them since 1623. While most drummers purchase cymbals individually, beginner cymbal packs were brought to market to provide entry-level cymbals for the novice drummer. The kits normally contain four cymbals: one ride, one crash, and a pair of hi-hats. Some contain only three cymbals, using a crash/ride instead of the separate ride and crash. The sizes closely follow those given in Common configurations below. Most drummers extend the normal configuration by adding another crash, a splash, and/or a china/effects cymbal. The ride cymbal is most often used for keeping a constant rhythm pattern, every beat or more often, as the music requires. Development of this ride technique is generally credited to jazz drummer Baby Dodds. Most drummers have a single main ride, located near their dominant hand – within easy playing reach, as it is used regularly – often 20"–22" in diameter, but diameters of 16"–26" are not uncommon. It is usually a medium-heavy- to heavy-weight cymbal whose sound cuts through other instrumental sounds. Some drummers use a swish cymbal, sizzle cymbal, or other exotic or lighter metal rides, as the main or only ride in their kit, particularly for jazz, gospel, or ballad/folk sounds. In the 1960s, Ringo Starr of the Beatles used a sizzle cymbal as a second ride, particularly during guitar solos. Hi-hat cymbals (nicknamed "hats") consist of two cymbals mounted, one upside down, with their bottoms facing each other, on a hollow metal support cylinder with folding support legs that keep the support cylinder vertical. Like the bass drum, the hi-hat has a foot pedal. The bottom cymbal is fixed in place. The top cymbal is mounted on a thin rod, which is inserted into the hollow cymbal stand. The thin rod is connected to a foot pedal. When the foot pedal is pressed down, it causes the thin rod to move down, causing the upper cymbal to move and strike the lower. When the foot is lifted off the pedal, the upper cymbal rises, due to the pedal's spring-loaded mechanism. The hi-hats can be sounded by striking the cymbals with one or two sticks or just by closing and opening the cymbals with the foot pedal. The ability to create rhythms on the hi-hats with the foot alone expands the drummer's ability to create sounds, as the hands are freed up to play on the drums or other cymbals. Different sounds can be created by striking "open hi-hats" (without the pedal depressed, which creates a noisy sound nicknamed "sloppy hats") or a crisp "closed hi-hats" sound (with the pedal pressed down). Hi-hats can also be struck with the pedal partially depressed. A unique effect can be created by striking an open hi-hat (where the two cymbals are apart) and then closing the cymbals with the foot pedal. This effect is widely used in disco and funk. The hi-hat has a similar function to the ride cymbal; the two are rarely played consistently for long periods at the same time, but one or the other is often used to keep what is known as the "ride rhythm" (e.g., eighth or sixteenth notes) in a song. The hi-hats are played by the right stick of a right-handed drummer.
Changing between ride and hi-hat, or between either and a "leaner" sound with neither, is often used to mark a change from one song section to another. Crash cymbals are usually the strongest accent markers within the kit, marking crescendos and climaxes, vocal entries, and major changes of mood, swells, and effects. A crash cymbal is often accompanied by a strong kick on the bass drum pedal, both for musical effect and to support the stroke. It provides a fuller sound and is a commonly taught technique. In jazz, using the smallest kits and at very high volumes, ride cymbals may be played with the technique and sound of a crash cymbal. Some hi-hats will also give a useful crash, particularly thinner hats or those with a severe taper. Alternatively, specialized crash/ride and ride/crash cymbals are designed to combine both functions. All cymbals, other than rides, hi-hats, and crashes/splashes, are usually called effects cymbals when used in a drum kit, though this is a non-classical or colloquial designation that has become standardized. Most extended kits include one or more splash cymbals and at least one china cymbal. Major cymbal makers produce cymbal extension packs consisting of one splash and one china, or more rarely a second crash, a splash, and a china, to match some of their starter packs of ride, crash, and hi-hats. However, any combination of options can be found in the marketplace. Some cymbals may be considered effects in some kits but "basic" in another set of components. Likewise, Ozone crashes have the same purpose as a standard crash cymbal, but are considered to be effects cymbals due to their rarity and the holes cut into them, which provide a darker, more resonant attack. Cymbals, of any type, used to provide an accent, rather than a regular pattern or groove, are known as accent cymbals. While any cymbal can be used to provide an accent, the term is more narrowly applied to cymbals for which the main purpose is to provide an accent. Accent cymbals include chime cymbals, small-bell domed cymbals, and those cymbals with a clear sonorous/oriental chime to them, such as specialized crash, splash, and china cymbals. Low-volume cymbals are a specialty type of cymbal, made to produce about 80% less volume than a typical cymbal. The entire surface of the cymbal is perforated by holes. Drummers use low-volume cymbals to play in small venues or as a way to practice without disturbing others. Other instruments that have regularly been incorporated into drum kits include cowbells, tambourines, and woodblocks; see also Extended kits below. Electronic drums are used for many reasons. Some drummers use electronic drums for playing in small venues, such as coffeehouses or church services, where a very low volume for the band is desired. Since fully electronic drums do not create any acoustic sound (apart from the quiet sound of the stick hitting the sensor pads), all of the drum sounds come from a keyboard amplifier or PA system; as such, the volume of electronic drums can be much lower than that of an acoustic kit. Some use electronic drums as practice instruments because they can be listened to with headphones, which enable a drummer to practice without disturbing others.
Others use electronic drums to take advantage of the huge range of sounds that modern drum modules can produce, from sampled sounds of real drums, cymbals, and percussion instruments (including gongs or tubular bells that would be impractical to take to a small gig) to electronic and synthesized sounds, including non-instrument sounds such as ocean waves.

A fully electronic kit is also easier to soundcheck than acoustic drums, assuming the electronic drum module has levels that the drummer has preset in their practice room. In contrast, when an acoustic kit is soundchecked, most drums and cymbals need to be mic'd, and each mic needs to be tested by the drummer so its level and tone equalization can be adjusted by the sound engineer. Even after all the individual drum and cymbal mics are checked, the engineer needs to listen to the drummer play a standard groove to confirm that the balance between the kit instruments is right. Finally, the engineer needs to set up the monitor mix for the drummer, which the drummer uses to hear their instruments and the instruments and vocals of the rest of the band. With a fully electronic kit, many of these steps can be eliminated.

Drummers' usage of electronic drum equipment can range from adding a single electronic pad to an entire drum kit (e.g., to have access to an instrument that might otherwise be impractical, such as a large gong), to using a mix of acoustic drums/cymbals and electronic pads, to using an acoustic kit in which the drums and cymbals have triggers that can be used to sound electronic drums and other sounds, to having an exclusively electronic kit, which is often set up with rubber or mesh drum pads and rubber "cymbals" in the usual drum kit locations. A fully electronic kit weighs much less and takes up less space to transport than an acoustic kit, and it can be set up more quickly. One disadvantage of a fully electronic kit is that it may not have the same "feel" as an acoustic kit, and the drum sounds, even if they are high-quality samples, may not sound the same as acoustic drums.

Electronic drum pads are the second most widely used type of MIDI performance controller, after electronic keyboards. Drum controllers may be built into drum machines, they may be standalone control surfaces (e.g., rubber drum pads), or they may emulate the look and feel of acoustic percussion instruments. The pads built into drum machines are typically too small and fragile to be played with sticks, so they are usually played with fingers. Dedicated drum pads such as the Roland Octapad or the DrumKAT are playable with hands or sticks and are often built to resemble the general form of acoustic drums. There are also percussion controllers such as the vibraphone-style MalletKAT and Don Buchla's Marimba Lumina.

MIDI triggers can also be installed into acoustic drum and percussion instruments. Pads that trigger a MIDI device can be homemade from a piezoelectric sensor and a practice pad or other piece of foam rubber, which is possible in two ways:
In either case, an electronic control unit (sound module/"brain") with suitable sampled/modeled or synthesized drum sounds, amplification equipment (a PA system, keyboard amp, etc.), and stage monitor speakers are required to hear the electronically produced sounds. See Triggered drum kit.

A trigger pad may contain up to four independent sensors, each capable of sending information describing the timing and dynamic intensity of a stroke to the drum module/brain.
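In software terms, the trigger-to-brain link described above amounts to converting a sensor's peak amplitude into a timed MIDI note whose velocity reflects how hard the pad was struck. The following minimal sketch illustrates the idea in Python using the mido library; the General MIDI snare note (38) and percussion channel (10, written as 9 in zero-indexed code) are standard, but the threshold and scaling constants are illustrative assumptions, not any vendor's specification.

```python
# Minimal sketch of what a drum trigger "brain" does with one piezo hit:
# map the sensor's normalized peak amplitude to a MIDI velocity and emit
# a note-on on the General MIDI percussion channel.
import mido

GM_SNARE = 38  # General MIDI percussion map: acoustic snare

def peak_to_velocity(peak, threshold=0.05, full_scale=1.0):
    """Map a normalized sensor peak (0.0-1.0) to a MIDI velocity (1-127)."""
    if peak < threshold:
        return 0  # below the trigger threshold: ignore (avoids false triggers)
    span = full_scale - threshold
    return max(1, min(127, round(127 * (peak - threshold) / span)))

def fire_trigger(port, peak, note=GM_SNARE):
    velocity = peak_to_velocity(peak)
    if velocity:
        port.send(mido.Message('note_on', channel=9, note=note, velocity=velocity))
        port.send(mido.Message('note_off', channel=9, note=note, velocity=0))

if __name__ == "__main__":
    with mido.open_output() as port:   # default system MIDI output port
        fire_trigger(port, peak=0.8)   # simulate a hard snare hit
```

A real module does considerably more (scan-time windows, crosstalk suppression between a pad's sensors, adjustable velocity curves), but the core of each stroke report is exactly this pairing of timing and intensity.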
A circular drum pad may have only one sensor for triggering, but a 2016-era cymbal-shaped rubber pad will often contain two: one for the body and one for the bell at the center of the cymbal, and perhaps a cymbal choke trigger as well, allowing drummers to produce that effect.

Trigger sensors are most commonly used to replace the acoustic drum sounds, but they can also be used effectively with an acoustic kit to augment or supplement an instrument's sound for the needs of the session or show. For example, in a live performance in a difficult acoustical space, a trigger may be placed on each drum or cymbal and used to trigger a similar sound on a drum module. These sounds are then amplified through a PA system so the audience can hear them, and they can be amplified to any level without the risks of audio feedback or bleed problems associated with microphones and PAs in certain settings.

The sound of the electronic drums and cymbals themselves is heard by the drummer and possibly other musicians in close proximity, but even so, the foldback (audio monitor) system is usually fed from the electronic sounds rather than the live acoustic sounds. The drums can be heavily dampened (made to resonate less or have the sound subdued), and their tuning and quality are less critical in the latter scenario. In this way, much of the atmosphere of the live performance is retained in a large venue, but without some of the problems associated with purely microphone-amplified drums. Triggers and sensors can also be used in conjunction with conventional or built-in microphones: if some components of a kit prove more difficult to mic than others (e.g., an excessively "boomy" low tom), triggers may be used on only the more difficult instruments, balancing out a drummer's or band's sound in the mix.

Trigger pads and drums, on the other hand, when deployed in a conventional set-up, are most commonly used to produce sounds not possible with an acoustic kit, or at least not with what is available. Any sound that can be sampled or recorded can be played when the pad is struck, by assigning the recorded sounds to specific triggers. Recordings or samples of barking dogs, sirens, breaking glass, and stereo recordings of aircraft taking off and landing have all been used. Along with the more obvious electronically generated drum sounds, there are other sounds that (depending on the device used) can also be played or triggered by electronic drums.

Virtual drums are a type of audio software that simulates the sound of a drum kit using synthesized drum sounds or digital samples of acoustic drum sounds. Different drum software products offer a recording function, the ability to select from several acoustically distinctive drum kits (e.g., jazz, rock, metal), and the option to incorporate different songs into the session. Some software can even turn any hard surface into a virtual drum kit using only one microphone; a simplified sketch of the onset detection this relies on appears after this passage.

Hardware is the name given to the metal stands that support the drums, cymbals, and other percussion instruments. Generally, the term also includes the hi-hat pedal and clutch, the bass drum pedal or pedals, and the drum stool. Hardware is carried along with sticks and other accessories in the traps case, and includes:
Many or even all of the stands may be replaced by a drum rack, which is particularly useful for large drum kits.

Drummers often set up their own drum hardware onstage and adjust it to their comfort level.
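The "any hard surface plus one microphone" software mentioned above rests on onset detection: watching the short-time energy of the microphone signal and registering a hit when it jumps well above the running noise floor. The following Python/NumPy sketch shows that core idea with illustrative constants; real virtual drum products add spectral analysis on top of this to tell different strike types apart.

```python
# Toy onset detector: report a "hit" whenever a frame's energy jumps to more
# than `threshold` times the running noise-floor estimate.
import numpy as np

def detect_hits(signal, sr=44100, frame=256, threshold=4.0, refractory=0.05):
    """Return hit times in seconds for a mono float signal."""
    hits, last_hit, floor = [], -1.0, None
    for i in range(0, len(signal) - frame, frame):
        energy = float(np.mean(signal[i:i + frame] ** 2))
        if floor is None:
            floor = energy                       # seed the noise-floor estimate
            continue
        t = i / sr
        if energy > threshold * floor and t - last_hit > refractory:
            hits.append(t)                       # sharp energy jump: a strike
            last_hit = t
        else:
            floor = 0.9 * floor + 0.1 * energy   # track the floor between hits
    return hits

# Example: quiet background noise with a simulated tap half a second in.
sr = 44100
x = 0.001 * np.random.randn(sr)
x[sr // 2 : sr // 2 + 300] += 0.5 * np.random.randn(300)
print(detect_hits(x, sr))   # ~[0.5]
```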
Major bands on tour will often have a drum tech who knows how to set up the drummer's hardware and instruments in the desired location and configuration.

Drum kits are traditionally categorized by the number of drums, ignoring cymbals and other instruments. Snare, tom-tom, and bass drums are always counted; other drums, such as octobans, may or may not be counted.

Traditionally, in America and the United Kingdom, drum sizes are expressed as depth × diameter, both measured in inches; many drum kit manufacturers have more recently been expressing sizes as diameter × depth, still in inches. For example, a hanging tom 12 inches in diameter and 8 inches deep would be described by Tama as 8 inches × 12 inches, but by Pearl as 12 inches × 8 inches, and a standard-diameter Ludwig snare drum 5 inches deep is a 5-inch × 14-inch instrument, while the UK manufacturer Premier lists the same dimensions as a 14-inch × 5-inch snare. The sizes of drums and cymbals given below are typical; many instruments differ slightly or radically from them. Where no size is given, it is because there is too much variety to give a typical size.

A three-piece drum set is the most basic set. A conventional three-piece kit consists of a bass drum, a 14"-diameter snare drum, 12"–14" hi-hats, a single 12"-diameter hanging tom, 8"–9" in depth, and a suspended 14"–18" cymbal, the latter two mounted on the bass drum. These kits were common in the 1950s and 1960s and are still used. It is a common configuration for kits sold through mail order and, with smaller drums and cymbals, for children's kits.

A four-piece kit extends the three-piece by adding one tom, either a second hanging tom mounted on the bass drum (a notable user is Chris Frantz of Talking Heads), often displacing the cymbal, or a floor tom. Normally another cymbal is added as well, so there are separate ride and crash cymbals, either on two stands or with the ride cymbal mounted on the bass drum to the player's right and the crash cymbal on a separate stand. The standard cymbal sizes are 16" for the crash and 18"–20" for the ride, with the 20" ride cymbal the most common.

When a floor tom is added to make a four-piece kit, the floor tom is usually 14" for jazz and 16" otherwise. This configuration is common in jazz and rock. Notable users include Ringo Starr of The Beatles, Mitch Mitchell of the Jimi Hendrix Experience, John Barbata of the Turtles, and various jazz drummers throughout the 20th century, including Art Blakey, Buddy Rich, and Jo Jones. For jazz, which normally emphasizes the use of a ride cymbal for swing patterns, the lack of a second hanging tom in a four-piece kit allows the cymbal to be positioned closer to the drummer, making it easier to play.

If a second hanging tom is used, it is 10" in diameter and 8" deep for fusion, or 13" in diameter and one inch deeper than the 12" tom. Otherwise, a 14"-diameter hanging tom is added to the 12", both being 8" deep. In any case, both toms are most often mounted on the bass drum, with the smaller of the two next to the hi-hats (which are to the left for a right-handed drummer). These kits are particularly useful for smaller venues where space is limited, such as coffeehouses, cafés, hotel lounges, and small pubs.

The five-piece kit is the full-size kit and the most common configuration used across various genres and styles. It adds a third tom to the four-piece kit, making three toms in all.
A fusion kit will normally add a 14" tom, either a floor tom or a hanging tom on a stand to the right of the bass drum; in either case, this makes the tom lineup 10", 12", and 14". Having three toms enables drummers to have high-, middle-, and low-pitched toms, which gives them more options for fills and solos.

Other kits will normally have 12" and 13" hanging toms plus either a 14" hanging tom on a stand, a 14" floor tom, or a 16" floor tom. It is also common to have 10" and 12" hanging toms with a 16" floor tom; this configuration is often called a hybrid setup. The bass drum is most commonly 22" in diameter, but rock kits may use 24", fusion 20", jazz 18", and, in larger bands, up to 26". A second crash cymbal is common, typically an inch or two larger or smaller than the 16" one, with the larger of the two to the right for a right-handed drummer. A big band drummer may use crashes up to 20" and a ride up to 24" or, very rarely, 26". A rock kit may also substitute a larger ride cymbal or larger hi-hats, typically 22" for the ride and 15" for the hats.

Most five-piece kits, except for entry-level ones, also have one or more effects cymbals. Adding cymbals beyond the basic ride, hi-hats, and one-crash configuration requires more stands in addition to the standard drum hardware packs. Because of this, many higher-cost kits for professionals are sold with little or no hardware, to allow the drummer to choose the stands and bass drum pedal they prefer. At the other extreme, many inexpensive entry-level kits are sold as a five-piece set complete with two cymbal stands, most often one straight and one boom, and some even with a standard cymbal pack, a stool, and a pair of 5A drumsticks. In the 2010s, digital kits were often offered in a five-piece configuration, usually with one plastic crash cymbal trigger and one ride cymbal trigger. Fully electronic drums do not produce any acoustic sound beyond the quiet tapping of sticks on the plastic or rubber heads; their trigger pads are wired up to a synth module or sampler.

If the toms are omitted completely, or the bass drum is replaced by a pedal-operated beater on the bottom skin of a floor tom and the hanging toms omitted, the result is a two-piece cocktail drum kit, originally developed for cocktail lounge acts. Such kits are particularly favored in musical genres such as trad jazz, bebop, rockabilly, and jump blues. Some rockabilly kits and beginner kits for very young players omit the hi-hat stand; in rockabilly, this allows the drummer to play standing rather than seated. A very simple jazz kit for informal or amateur jam sessions consists of a bass drum, snare drum, and hi-hat, often with only a single cymbal (normally a ride, with or without sizzlers).

Although these kits may be small with respect to the number of drums used, the drums themselves are most often of normal size, or even larger in the case of the bass drum. Kits using smaller drums, in both smaller and larger configurations, are made for particular uses, such as boutique kits designed to reduce the visual impact of a large kit, kits that need to fit into small spaces in coffeehouses, traveling kits to reduce luggage volume, and junior kits for very young players. Smaller drums also tend to be quieter, again suiting smaller venues, and many of these kits extend this with extra muffling, which allows for quiet or even silent practice.

Common extensions beyond the standard configurations include:
See also other acoustic instruments above.
Another versatile extension becoming increasingly common is the use of some electronic drums in a mainly acoustic kit. Less common extensions, found particularly but not exclusively in very large kits, include:

Sticks are traditionally made from wood (particularly maple, hickory, and oak), but more recently, metal, carbon fiber, and other materials have been used. The prototypical wooden drumstick was primarily designed for use with the snare drum and optimized for playing snare rudiments. Sticks come in a variety of weights and tip designs; a 7N is a common jazz stick with a nylon tip, while a 5B is a common wood-tipped stick, heavier than a 7N but with a similar profile, and a common standard for beginners. Numbers range from 1 (heaviest) to 10 (lightest). The meanings of both numbers and letters vary from manufacturer to manufacturer, and some sticks are not described using this system at all, being known simply as, for example, Smooth Jazz (typically a 7N or 9N) or Speed Rock (typically a 2B or 3B). Many famous drummers endorse sticks made to their particular preference and sold under their signature.

Besides drumsticks, drummers also use brushes and Rutes in jazz and similarly soft music. More rarely, other beaters such as cartwheel mallets (known to kit drummers as "soft sticks") may be used. It is not uncommon for rock drummers to use the "wrong" (butt) end of a stick for a heavier sound; some makers produce tipless sticks with two butt ends.

A stick bag is the standard way for a drummer to bring drumsticks to a live performance. For easy access, the stick bag is commonly mounted on the side of the floor tom, just within reach of the drummer's right hand (for a right-handed drummer).

Drum muffles are types of mutes that can reduce the ring, boomy overtone frequencies, or overall volume of a snare, bass, or tom. Controlling the ring is useful in studio or live settings when unwanted frequencies can clash with other instruments in the mix. There are internal and external muffling devices, which rest on the inside or outside of the drumhead, respectively. Common types of mufflers include muffling rings, gels, duct tape, and improvised methods, such as placing a wallet near the edge of the head. Some drummers muffle the sound of a drum by putting a cloth over the drumhead.

Snare drum and tom-tom
Typical ways to muffle a snare or tom include placing an object on the outer edge of the drumhead: a piece of cloth, a wallet, gel, or fitted rings made of mylar are common choices. External clip-on muffles are also used. Internal mufflers that lie against the inside of the drumhead are often built into a drum, but are generally considered less effective than external muffles, as they stifle the initial tone rather than simply reducing its sustain.

Bass drum
Muffling the bass drum can be achieved with the same techniques as for the snare, but bass drums in a drum kit are more commonly muffled by adding pillows, a sleeping bag, or other soft filling inside the drum, between the heads. Cutting a small hole in the resonant head can also produce a more muffled tone and allows the internally placed muffling to be adjusted. The Evans EQ pad places a pad against the batter head; when the head is struck, the pad moves off it momentarily, then returns to rest against it, reducing the sustain without choking the tone.

Silencers/mutes
Another type of drum muffler is a piece of rubber that fits over the entire drumhead or cymbal.
It interrupts contact between the stick and the head, which dampens the sound; such mutes are typically used in practice settings. Cymbals are usually muted with the fingers or hand, to reduce the length or volume of ringing (e.g., the cymbal choke technique, which is a key part of heavy metal drumming). Cymbals can also be muted with special rubber rings or duct tape.

Historical uses
Muffled drums are often associated with funeral ceremonies as well, such as the funerals of Queen Victoria and John F. Kennedy. The use of muffled drums has been written about by such poets as Henry Wadsworth Longfellow, John Mayne, and Theodore O'Hara. Drums have also been used for therapy and learning purposes, as when an experienced player sits with a number of students and, by the end of the session, has all of them relaxed and playing complex rhythms.

There are various types of stick holder accessories, including bags that can be attached to a drum and angled sheath-style stick holders, which can hold a single pair of sticks.

A sizzler is a metal chain, or combination of chains, that is hung across a cymbal, creating a distinctive metallic sound when the cymbal is struck, similar to that of a sizzle cymbal. Using a sizzler is a non-destructive alternative to drilling holes in a cymbal and fitting metal rivets into them; the chain is removable, so the cymbal is easily returned to its normal sound. Some sizzlers feature pivoting arms that let the chains be quickly lowered onto, or raised from, the cymbal, so the effect can be used for some songs and removed for others.

Three types of protective covers are common for kit drums:
As with all musical instruments, the best protection is provided by a combination of a hard-shelled case and interior padding, such as foam, next to the drums and cymbals.

Microphones ("mics") are used with drum kits to pick up the sound of the drums and cymbals for a sound recording, or to pick up the sound of the drum kit so that it can be amplified through a PA system or sound reinforcement system. While most drummers use microphones and amplification in live shows, so that the sound engineer can adjust the levels of the drums and cymbals, some bands that play quieter genres of music, and in small venues such as coffeehouses, play acoustically, without mics or PA amplification. Small jazz groups, such as jazz quartets or organ trios playing in a small bar, will often use only acoustic drums. If the same small jazz group plays on the main stage of a big jazz festival, however, the drums will be miked so that they can be adjusted in the sound system mix. A middle-ground approach is used by some bands that play in small venues: they do not mic every drum and cymbal, but only the instruments that the sound engineer wants to be able to control in the mix, such as the bass drum and the snare.

In miking a drum kit, dynamic microphones, which can handle high sound-pressure levels, are usually used to close-mic drums, which is predominantly the way to mic drums for live shows. Condenser microphones are used for overheads and room mics, an approach more common in sound recording applications. Close miking of drums may be done using stands, by mounting the microphones on the rims of the drums, or even by using microphones built into the drum itself, which eliminates the need for stands, reduces both clutter and set-up time, and isolates the mics better.
For some styles of music, drummers use electronic effects on drums, such as individual noise gates that mute the attached microphone when the signal is below a threshold volume; a minimal software gate of this kind is sketched after this passage. Gating allows the sound engineer to use a higher overall volume for the drum kit by reducing the number of "active" mics that could produce unwanted feedback at any one time. When a drum kit is entirely miked and amplified through the sound reinforcement system, the drummer or the sound engineer can add other electronic effects to the drum sound, such as reverb or digital delay.

Some drummers arrive at the venue with their drum kit and use the mics and mic stands provided by the venue's sound engineer. Others bring all their own mics, or selected mics (e.g., good-quality snare and bass drum mics), to ensure they have good-quality mics on hand. In bars and nightclubs, the microphones supplied by the venue can sometimes be in substandard condition, due to the heavy use they experience.

Drummers using electronic drums, drum machines, or hybrid acoustic-electric kits (which blend traditional acoustic drums and cymbals with electronic pads) typically use a monitor speaker, keyboard amplifier, or even a small PA system to hear the electronic drum sounds. Even a drummer playing entirely acoustic drums may use a monitor speaker to hear the drums, especially when playing in a loud rock or metal band, where there is substantial onstage volume from large, powerful guitar stacks. Drummers are often given a large speaker cabinet with a 15" subwoofer to help them monitor their bass drum sound, along with a full-range monitor speaker to hear the rest of their kit. Some sound engineers and drummers prefer an electronic vibration system, colloquially known as a "butt shaker" or "throne thumper", to monitor the bass drum, because this lowers the stage volume: the "thump" of each bass drum strike causes a vibration in the drum stool, so the drummer feels each stroke rather than hearing it. In-ear monitors are also popular among drummers, since they double as earplugs.

A number of accessories are designed for the bass drum. The bass drum can take advantage of the bass reflex speaker design, in which a tuned port (a hole fitted with a carefully measured tube) is put in a speaker enclosure to improve the bass response at the lowest frequencies. Bass drumhead patches protect the drumhead from the impact of the felt beater. Bass drum pillows are fabric bags with filling or stuffing that can be used to alter the tone or resonance of the bass drum; a less expensive alternative to a specialized bass drum pillow is an old sleeping bag.

Some drummers wear special drummer's gloves to improve their grip on the sticks when they play. Drumming gloves often have a textured grip surface made of a synthetic or rubber material, with mesh or vents on the parts of the glove not used to hold sticks, to ventilate perspiration. Some drummers wear gloves to prevent blisters.

In some styles or settings, such as country music clubs or churches, small venues, or when a live recording is being made, the drummer may use a transparent Perspex or Plexiglas drum screen (also known as a drum shield) to dampen the onstage volume of the drums. A screen that completely surrounds the drum kit is known as a drum booth.
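The per-microphone noise gate mentioned at the start of this passage can be sketched in a few lines: pass the signal only while its envelope exceeds a threshold, opening quickly so the drum's attack is preserved and closing slowly so the decay is not clipped. This Python/NumPy sketch uses illustrative threshold and attack/release values, not those of any particular hardware or plug-in gate.

```python
# Minimal per-channel noise gate: mute the drum mic's signal whenever its
# envelope falls below a threshold, with fast attack and slow release.
import numpy as np

def noise_gate(x, sr=44100, threshold=0.02, attack_ms=1.0, release_ms=80.0):
    """Return a gated copy of mono signal `x` (floats in [-1, 1])."""
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))    # fast open
    release = np.exp(-1.0 / (sr * release_ms / 1000.0))  # slow close
    envelope, gain, out = 0.0, 0.0, np.empty_like(x)
    for n, sample in enumerate(x):
        envelope = max(abs(sample), envelope * release)  # peak follower
        target = 1.0 if envelope > threshold else 0.0    # open or closed?
        coeff = attack if target > gain else release
        gain = coeff * gain + (1.0 - coeff) * target     # smooth the gain
        out[n] = sample * gain
    return out
```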
In live sound applications, drum shields are used so that the audio engineer can have more control over the volume of the drums the audience hears through the PA system mix, or simply to reduce the overall volume of the drums and, with it, of the band. In some recording studios, foam and fabric baffles are used in addition to, or in place of, clear panels. The drawback of foam and cloth baffle panels is that the drummer cannot see other performers, the record producer, or the audio engineer well.

Drummers often bring a carpet, mats, or rugs to venues to prevent the bass drum and hi-hat stand from "crawling" (moving away) on a slippery surface, which can be caused by the beater's repeated impact against the bass drum head. The carpet also reduces short reverberations (generally, but not always, an advantage) and helps to prevent damage to the flooring or floor coverings. In shows where multiple drummers bring their kits onstage over the night, it is common for drummers to mark the location of their stands and pedals with tape, to allow for quicker positioning of the kit in the drummer's accustomed position. Bass drums and hi-hat stands commonly have retractable spikes, to help them grip surfaces such as carpet, or rubber feet, to remain stationary on hard surfaces.

Drummers use a variety of accessories when practicing. Metronomes and beat counters are used to develop a sense of a steady beat. Drum muffling pads may be used to lessen the volume of drums during practice. A practice pad, held on the lap, on a leg, or mounted on a stand, is used for near-silent practice with drumsticks. A set of practice pads mounted to simulate an entire drum kit is known as a practice kit. In the 2010s, these were largely superseded by electronic drums, which can be listened to with headphones for quiet practice, and by kits with non-sounding mesh heads.

Drummers use a drum key for tuning their drums and adjusting some drum hardware. Besides the basic type of drum key (a T-handled wrench), there are various tuning wrenches and tools. Basic drum keys are divided into three types, which allow for tuning the three types of tuning screws found on drums: square (the most common), slotted, and hexagonal. Ratchet-type wrenches allow high-tension drums to be tuned easily. Spin keys (utilizing a ball joint) allow for rapid head changing. Torque-wrench keys are available, graphically revealing the torque applied to each lug. Tension gauges, or meters, which are set on the head, also help drummers achieve consistent tuning. Drummers can tune drums "by ear" or use a digital drum tuner, which "measures tympanic pressure" on the drumhead to provide accurate tuning; a simplified software approach to matching lug pitches is sketched after this passage.

Drum kit music is either written in music notation (called "drum parts"), learned and played by ear, improvised, or some combination of these methods. Professional session drummers and big-band drummers are often required to read drum parts. Drum parts are most commonly written on a standard five-line staff; as of 2016, a special percussion clef is used, whereas previously the bass clef was used. However, even if the bass clef (or no clef) is used, each line and space is assigned an instrument of the kit rather than a pitch. In jazz, traditional music, folk music, rock music, and pop music, drummers are expected to be able to learn songs by ear (from a recording or from another musician who is playing or singing the song) and improvise. The degree of improvisation differs among styles.
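On the digital drum tuners mentioned above: the quoted device measures head pressure, but a simple software aid can approximate lug-by-lug tuning by picking the dominant spectral peak of a tap recorded near each tension lug and comparing the lugs against one another. The sketch below (Python/NumPy, with an illustrative drumhead frequency band) shows that approach; it is an approximation, not how any commercial tuner is implemented.

```python
# Estimate the dominant frequency of a recorded tap near one tension lug;
# matching pitches across all lugs indicate an evenly seated head.
import numpy as np

def lug_pitch(tap, sr=44100, fmin=40.0, fmax=600.0):
    """Return the strongest spectral peak (Hz) of one lug tap."""
    window = np.hanning(len(tap))                  # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(tap * window))
    freqs = np.fft.rfftfreq(len(tap), 1.0 / sr)
    band = (freqs >= fmin) & (freqs <= fmax)       # plausible drumhead range
    return float(freqs[band][np.argmax(spectrum[band])])

# Synthetic check: a decaying 180 Hz tone is reported as ~180 Hz.
sr = 44100
t = np.arange(sr // 2) / sr
tap = np.sin(2 * np.pi * 180 * t) * np.exp(-6 * t)
print(round(lug_pitch(tap, sr)))   # -> 180
```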
Jazz and jazz fusion drummers may play lengthy improvised solos in every song. In rock music and blues, there are also drum solos in some songs, although they tend to be shorter than those in jazz. Drummers in all popular and traditional music styles are expected to be able to improvise accompaniment parts to songs once they are told the genre or style (e.g., shuffle, ballad, blues).

On early recording media (until about 1925), such as wax cylinders and discs cut with an engraving needle, balancing the sound meant physically repositioning the musicians in the room; drums were often placed far from the recording horn (part of the mechanical transducer) to reduce distortion. In the 2020s, drum parts in many popular music styles are often recorded apart from the other instruments and singers, using multitrack recording techniques. Once the drums are recorded, the other instruments (rhythm guitar, piano, etc.), and then vocals, are added. To ensure that the drum tempo is consistent in this type of recording, the drummer usually plays along with a click track (a type of digital metronome) in headphones. The ability to play accurately along with a click track has become an important skill for professional drummers.

Manufacturers using the traditional American format in their catalogs include these:
Those using the European measures of diameter and depth include these:
[ { "paragraph_id": 0, "text": "A drum kit (also called a drum set, trap set, or simply drums) is a collection of drums, cymbals, and sometimes other auxiliary percussion instruments set up to be played by one person. The drummer typically holds a pair of matching drumsticks, and uses their feet to operate hi-hat and bass drum pedals.", "title": "" }, { "paragraph_id": 1, "text": "A standard kit usually consists of:", "title": "" }, { "paragraph_id": 2, "text": "The drum kit is a part of the standard rhythm section and is used in many types of popular and traditional music styles, ranging from rock and pop to blues and jazz.", "title": "" }, { "paragraph_id": 3, "text": "Before the development of the classic drum kit, drums and cymbals used in military and orchestral music settings were played separately by different percussionists. In the 1840s, percussionists began to experiment with foot pedals as a way to enable them to play more than one instrument, but these devices would not be mass-produced for another 75 years. By the 1860s, percussionists started combining multiple drums into a kit. The bass drum, snare drum, cymbals, and other percussion instruments were all struck with hand-held drumsticks. Drummers in musical theater appeared in stage shows, where the budget for pit orchestras was often limited due to an insufficient amount of money able to purchase a full percussionist team. This contributed to the creation of the drum kit by developing techniques and devices that would enable one person to replace multiple percussionists.", "title": "History" }, { "paragraph_id": 4, "text": "Double-drumming was developed to enable one person to play both bass and snare drums with sticks, while the cymbals could be played by tapping the foot on a \"low-boy\". With this approach, the bass drum was usually played on beats one and three (in 4 time). While the music was first designed to accompany marching soldiers, this simple and straightforward drumming approach led to the birth of ragtime music, when the simple marching beats became more syncopated. This resulted in a greater swing and dance feel. The drum kit was initially referred to as a \"trap set\", and from the late 1800s to the 1930s, drummers were referred to as \"trap drummers\". By the 1870s, drummers were using an overhang pedal. Most drummers in the 1870s preferred to do double-drumming without any pedal to play multiple drums, rather than use an overhang pedal. Companies patented their pedal systems, such as that of drummer Edward \"Dee Dee\" Chandler of New Orleans in 1904 or 1905. This led to the bass drum being played by percussionists standing and using their feet, hence the term \"kick drum\".", "title": "History" }, { "paragraph_id": 5, "text": "William F. Ludwig Sr. and his brother Theobald founded Ludwig & Ludwig Co. in 1909 and patented the first commercially successful bass drum pedal system.", "title": "History" }, { "paragraph_id": 6, "text": "In 1912, drummers replaced sticks with wire brushes and, later, metal fly swatters as the louder sounds made by using drumsticks could overpower other instruments.", "title": "History" }, { "paragraph_id": 7, "text": "By World War I, drum kits were often marching-band-style bass drums with many percussion items around them and suspended from them. Drum kits became a central part of jazz, especially Dixieland. 
The modern drum kit was developed in the vaudeville era, during the 1920s, in New Orleans.", "title": "History" }, { "paragraph_id": 8, "text": "Drummers such as Baby Dodds, Zutty Singleton, and Ray Bauduc took the idea of marching rhythms and combined the bass drum, snare drum, and \"traps\" – a term used to refer to the percussion instruments associated with immigrant groups, which included miniature cymbals, tom toms, cowbells, and woodblocks. They started incorporating these elements into ragtime, which had been popular for a few decades, creating an approach that evolved into a jazz drumming style.", "title": "History" }, { "paragraph_id": 9, "text": "Budget constraints and space considerations in musical theater pit orchestras led bandleaders to pressure percussionists to cover more percussion parts. Metal consoles were developed to hold Chinese tom-toms, with swing-out stands for snare drums and cymbals. On top of the console was a \"contraption\" tray (shortened to \"trap\"), used to hold items like whistles, klaxons, and cowbells. These kits were dubbed \"trap kits\". Hi-hat stands became available around 1926.", "title": "History" }, { "paragraph_id": 10, "text": "In 1918, Baby Dodds, playing on Mississippi River riverboats with Louis Armstrong, modified the military marching setup, experimenting with playing the drum rims instead of woodblocks, hitting cymbals with sticks (which was not yet common), and adding a side cymbal above the bass drum, which became known as the ride cymbal. William Ludwig developed the \"sock\" or early low-mounted hi-hat after observing Dodds' drumming. Dodds asked Ludwig to raise the newly produced low-hat cymbal nine inches to make them easier to play, thus creating the modern hi-hat cymbal. Dodds was one of the first drummers to play the broken-triplet beat that became the standard rhythm of modern ride cymbal playing. He also popularized the use of Chinese cymbals. Recording technology was crude, which meant loud sounds could distort the recording. To get around this, Dodds used woodblocks and drum rims as quieter alternatives to cymbals and drum skins.", "title": "History" }, { "paragraph_id": 11, "text": "In the 1920s, freelance drummers were hired to play at shows, concerts, theaters, and clubs to support dancers and musicians of various genres. Orchestras were hired to accompany silent films, and the drummer was responsible for providing the sound effects. Sheet music from the 1920s shows that the drummer's sets were starting to evolve in size to support the various acts. However, by 1930, films with audio were more popular, and many were accompanied by pre-recorded soundtracks. This technological breakthrough put thousands of drummers who served as sound effects specialists out of work, with some drummers obtaining work as foley artists for those motion-picture sound tracks.", "title": "History" }, { "paragraph_id": 12, "text": "Kit drumming, whether accompanying voices and other instruments or performing a drum solo, consists of two elements:", "title": "Playing" }, { "paragraph_id": 13, "text": "A fill is a departure from the repetitive rhythm pattern in a song. A drum fill can be used to \"fill in\" the space between the end of one verse and the beginning of another verse or chorus. Fills vary from a simple few strokes on a tom or snare to a distinctive rhythm played on the hi-hat, to sequences several bars long that are short virtuosic drum solos. 
As well as adding interest and variation to the music, fills serve an important function in indicating significant changes of sections in songs as well as linking them together. A vocal cue is a short drum fill that introduces a singer's entrance into the piece. A fill ending with a cymbal crash on beat one is often used to lead into a chorus or verse.", "title": "Playing" }, { "paragraph_id": 14, "text": "A drum solo is an instrumental section that highlights the drums. While other instrument solos are typically accompanied by the other rhythm section instruments (e.g., bass guitar and electric guitar), for most drum solos, the band members stop playing so that all focus will be on the drummer. In some drum solos, the other rhythm section instrumentalists may play \"punches\" at certain points – sudden, loud chords of short duration. Drum solos are common in jazz but are also used in several rock genres, such as heavy metal and progressive rock. During drum solos, drummers have a degree of creative freedom, allowing them to use complex polyrhythms that would otherwise be unsuitable with an ensemble. In live concerts, drummers may be given extended drum solos, even in genres where drum solos are rare on recordings.", "title": "Playing" }, { "paragraph_id": 15, "text": "Most drummers hold the drumsticks in one of two types of grip:", "title": "Playing" }, { "paragraph_id": 16, "text": "The bass drum (also known as the \"kick drum\") is the lowest-pitched drum and usually provides the beat or timing element with basic pulse patterns. Some drummers may use two or more bass drums or a double pedal on a single bass drum, which enables a drummer to play a double-bass-drum style with only one drum. This saves space in recording/performance areas and reduces time and effort during set-up, taking down, and transportation. Double bass drumming is a technique used in certain genres, including heavy metal and progressive rock.", "title": "Components" }, { "paragraph_id": 17, "text": "The snare drum provides the backbeat. When applied in this fashion, it supplies strong regular accents played by the non-dominant hand and is the backbone for many fills. Its distinctive sound can be attributed to the bed of stiff metal wires held under tension against the bottom head (known as the snare head). When the top head (known as the batter head) is struck with a drumstick, the snare wires vibrate, creating a snappy, staccato buzzing sound, along with the sound of the stick striking the batter head.", "title": "Components" }, { "paragraph_id": 18, "text": "Tom-tom drums, or toms for short, are drums without snares and played with sticks (or whatever tools the music style requires) and are the most numerous drums in most kits. They provide the bulk of most drum fills and solos.", "title": "Components" }, { "paragraph_id": 19, "text": "They include:", "title": "Components" }, { "paragraph_id": 20, "text": "The smallest and largest drums without snares (octobans and gong drums, respectively) are sometimes considered toms. The naming of common configurations (four-piece, five-piece, etc.) 
is largely a reflection of the number of toms, as conventionally only the drums are counted, and these configurations all contain one snare and one or more bass drums, (though not regularly any standardized use of two bass/kick drums) the balance usually being made up by toms.", "title": "Components" }, { "paragraph_id": 21, "text": "Octobans are smaller toms designed for use in a drum kit, extending the tom range upwards in pitch, primarily by their great depth and small diameter. They are also called rocket toms and tube toms.", "title": "Components" }, { "paragraph_id": 22, "text": "Timbales are tuned much higher than a tom of the same diameter, typically have drum shells made of metal, and are normally played with very light, thin, non-tapered sticks. Timbales are more common in Latin music. They have thin heads and a very different tone than a tom but are used by some drummers/percussionists to extend the tom range upwards. Alternatively, they can be fitted with tom heads and tuned as shallow concert toms.", "title": "Components" }, { "paragraph_id": 23, "text": "Attack timbales and mini timbales are reduced-diameter timbales designed for drum kit usage, the smaller diameter allowing for thicker heads providing the same pitch and head tension. They are recognizable in genres of the 2010s and more traditional forms of Latin, reggae, and numerous other styles.", "title": "Components" }, { "paragraph_id": 24, "text": "Gong drums are a rare extension of a drum kit. This single-headed mountable drum appears similar to a bass drum (around 20–24 inches in diameter) but is played with sticks rather than a foot-operated pedal and therefore has the same purpose as a floor tom.", "title": "Components" }, { "paragraph_id": 25, "text": "Most hand drums cannot be played with drumsticks without risking damage to the head and bearing edge, which is not protected by a metal drum rimm. For use in a drum kit, they may be fitted with a metal drum head and played with sticks with care, or played by hand.", "title": "Components" }, { "paragraph_id": 26, "text": "In most drum kits and drum/percussion kits, cymbals are as prominent as the drums themselves. The oldest idiophones in music are cymbals, a version of which were used throughout the ancient Near East very early in the Bronze Age period. Cymbals are mostly associated with Turkey and Turkish craftsmanship, where Zildjian has made them since 1623.", "title": "Components" }, { "paragraph_id": 27, "text": "While most drummers purchase cymbals individually, beginner cymbal packs were brought to market to provide entry-level cymbals for the novice drummer. The kits normally contain four cymbals: one ride, one crash, and a pair of hi-hats. Some contain only three cymbals, using a crash/ride instead of the separate ride and crash. The sizes closely follow those given in Common configurations below. Most drummers extend the normal configuration by adding another crash, a splash, and/or a china/effects cymbal.", "title": "Components" }, { "paragraph_id": 28, "text": "The ride cymbal is most often used for keeping a constant rhythm pattern, every beat or more often, as the music requires. Development of this ride technique is generally credited to jazz drummer Baby Dodds.", "title": "Components" }, { "paragraph_id": 29, "text": "Most drummers have a single main ride, located near their dominant hand – within easy playing reach, as it is used regularly – often a 20\"–22\" in diameter, but diameters of 16\"–26\" are not uncommon. 
It is usually a medium-heavy- to heavy-weight cymbal whose sound that cuts through other instrumental sounds. Some drummers use a swish cymbal, sizzle cymbal, or other exotic or lighter metal rides, as the main or only ride in their kit, particularly for jazz, gospel, or ballad/folk sounds. In the 1960s, Ringo Starr of the Beatles used a sizzle cymbal as a second ride, particularly during guitar solos.", "title": "Components" }, { "paragraph_id": 30, "text": "Hi-hat cymbals (nicknamed \"hats\") consist of two cymbals mounted, one upside down, with their bottoms facing each other, on a hollow metal support cylinder with folding support legs that keep the support cylinder vertical. Like the bass drum, the hi-hat has a foot pedal. The bottom cymbal is fixed in place. The top cymbal is mounted on a thin rod, which is inserted into the hollow cymbal stand. The thin rod is connected to a foot pedal. When the foot pedal is pressed down, it causes the thin rod to move down, causing the upper cymbal to move and strike the lower. When the foot is lifted off the pedal, the upper cymbal rises, due to the pedal's spring-loaded mechanism. The hi-hats can be sounded by striking the cymbals with one or two sticks or just by closing and opening the cymbals with the foot pedal. The ability to create rhythms on the hi-hats with the foot alone expands the drummer's ability to create sounds, as the hands are freed up to play on the drums or other cymbals. Different sounds can be created by striking \"open hi-hats\" (without the pedal depressed, which creates a noisy sound nicknamed \"sloppy hats\") or a crisp \"closed hi-hats\" sound (with the pedal pressed down). High hats can also be struck with the pedal partially depressed.", "title": "Components" }, { "paragraph_id": 31, "text": "A unique effect can be created by striking an open hi-hat (where the two cymbals are apart) and then closing the cymbals with the foot pedal. This effect is widely used in disco and funk. The hi-hat has a similar function to the ride cymbal; the two are rarely played consistently for long periods at the same time, but one or the other is often used to keep what is known as the \"ride rhythm\" (e.g., eighth or sixteenth notes) in a song. The hi-hats are played by the right stick of a right-handed drummer. Changing between ride and hi-hat, or between either and a \"leaner\" sound with neither, is often used to mark a change from one song section to another.", "title": "Components" }, { "paragraph_id": 32, "text": "Crash cymbals are usually the strongest accent markers within the kit, marking crescendos and climaxes, vocal entries, and major changes of mood, swells, and effects. A crash cymbal is often accompanied by a strong kick on the bass drum pedal, both for musical effect and to support the stroke. It provides a fuller sound and is a commonly taught technique.", "title": "Components" }, { "paragraph_id": 33, "text": "In jazz, using the smallest kits and at very high volumes, ride cymbals may be played with the technique and sound of a crash cymbal. Some hi-hats will also give a useful crash, particularly thinner hats or those with a severe taper. Alternatively, specialized crash/ride and ride/crash cymbals are designed to combine both functions.", "title": "Components" }, { "paragraph_id": 34, "text": "All cymbals, other than rides, hi-hats, and crashes/splashes, are usually called effects cymbals when used in a drum kit, though this is a non-classical or colloquial designation that has become standardized. 
Most extended kits include one or more splash cymbals and at least one china cymbal. Major cymbal makers produce cymbal extension packs consisting of one splash and one china, or more rarely a second crash, a splash, and a china, to match some of their starter packs of ride, crash, and hi-hats. However, any combination of options can be found in the marketplace.", "title": "Components" }, { "paragraph_id": 35, "text": "Some cymbals may be considered effects in some kits but \"basic\" in another set of components. Likewise, Ozone crashes have the same purpose as a standard crash cymbal, but are considered to be effects cymbals due to their rarity, and the holes cut into them, which provide a darker, more resonant attack.", "title": "Components" }, { "paragraph_id": 36, "text": "Cymbals, of any type, used to provide an accent, rather than a regular pattern or groove, are known as accent cymbals. While any cymbal can be used to provide an accent, the term is more narrowly applied to cymbals for which the main purpose is to provide an accent. Accent cymbals include chime cymbals, small-bell domed cymbals, and those cymbals with a clear sonorous/oriental chime to them, such as specialized crash, splash, and china cymbals.", "title": "Components" }, { "paragraph_id": 37, "text": "Low-volume cymbals are a specialty type of cymbal, made to produce about 80% less volume than a typical cymbal. The entire surface of the cymbal is perforated by holes. Drummers use low-volume cymbals to play in small venues or as a way to practice without disturbing others.", "title": "Components" }, { "paragraph_id": 38, "text": "Other instruments that have regularly been incorporated into drum kits include:", "title": "Components" }, { "paragraph_id": 39, "text": "See also Extended kits below.", "title": "Components" }, { "paragraph_id": 40, "text": "Electronic drums are used for many reasons. Some drummers use electronic drums for playing in small venues, such as coffeehouses or church services, where a very low volume for the band is desired. Since fully electronic drums do not create any acoustic sound (apart from the quiet sound of the stick hitting the sensor pads), all of the drum sounds come from a keyboard amplifier or PA system; as such, the volume of electronic drums can be much lower than an acoustic kit. Some use electronic drums as practice instruments because they can be listened to with headphones, which enable a drummer to practice without disturbing others. Others use electronic drums to take advantage of the huge range of sounds that modern drum modules can produce, which range from sampled sounds of real drums, cymbals, and percussion instruments such as gongs or tubular bells that would be impractical to take to a small gig, to electronic and synthesized sounds, including non-instrument sounds such as ocean waves.", "title": "Components" }, { "paragraph_id": 41, "text": "A fully electronic kit is easier to soundcheck than acoustic drums, assuming that the electronic drum module has levels that the drummer has preset in their practice room; in contrast, when an acoustic kit is sound checked, most drums and cymbals need to be mic'd and each mic needs to be tested by the drummer so its level and tone equalization can be adjusted by the sound engineer. Also, even after all the individual drum and cymbal mics are sound checked, the engineer needs to listen to the drummer play a standard groove, to check that the balance between the kit instruments is right. 
Finally, the engineer needs to set up the monitor mix for the drummer, which the drummer uses to hear their instruments and the instruments and vocals of the rest of the band. With a fully electronic kit, many of these steps can be eliminated.", "title": "Components" }, { "paragraph_id": 42, "text": "Drummers' usage of electronic drum equipment can range from adding a single electronic pad to an entire drum kit (e.g., to have access to an instrument that might otherwise be impractical, such as a large gong), to using a mix of acoustic drums/cymbals and electronic pads, to using an acoustic kit in which the drums and cymbals have triggers, which can be used to sound electronic drums and other sounds, to having an exclusively electronic kit, which is often set up with the rubber or mesh drum pads and rubber \"cymbals\" in the usual drum kit locations. A fully electronic kit weighs much less and takes up less space to transport than an acoustic kit and it can be set up more quickly. One of the disadvantages of a fully electronic kit is that it may not have the same \"feel\" as an acoustic kit, and the drum sounds, even if they are high-quality samples, may not sound the same as acoustic drums.", "title": "Components" }, { "paragraph_id": 43, "text": "Electronic drum pads are the second most widely used type of MIDI performance controllers, after electronic keyboards. Drum controllers may be built into drum machines, they may be standalone control surfaces (e.g., rubber drum pads), or they may emulate the look and feel of acoustic percussion instruments. The pads built into drum machines are typically too small and fragile to be played with sticks, so they are usually played with fingers. Dedicated drum pads such as the Roland Octapad or the DrumKAT are playable with hands or sticks and are often built to resemble the general form of acoustic drums. There are also percussion controllers such as the vibraphone-style MalletKAT, and Don Buchla's Marimba Lumina.", "title": "Components" }, { "paragraph_id": 44, "text": "MIDI triggers can also be installed into acoustic drum and percussion instruments. Pads that trigger a MIDI device can be homemade from a piezoelectric sensor and a practice pad or other piece of foam rubber, which is possible in two ways:", "title": "Components" }, { "paragraph_id": 45, "text": "In either case, an electronic control unit (sound module/\"brain\") with suitable sampled/modeled or synthesized drum sounds, amplification equipment (a PA system, keyboard amp, etc.), and stage monitor speakers are required to hear the electronically produced sounds. See Triggered drum kit.", "title": "Components" }, { "paragraph_id": 46, "text": "A trigger pad could contain up to four independent sensors, each of them capable of sending information describing the timing and dynamic intensity of a stroke to the drum module/brain. A circular drum pad may have only one sensor for triggering, but a 2016-era cymbal-shaped rubber pad/cymbal will often contain two; one for the body and one for the bell at the center of the cymbal, and perhaps a cymbal choke trigger, to allow drummers to produce this effect.", "title": "Components" }, { "paragraph_id": 47, "text": "Trigger sensors are most commonly used to replace the acoustic drum sounds, but they can also be used effectively with an acoustic kit to augment or supplement an instrument's sound for the needs of the session or show. 
For example, in a live performance in a difficult acoustical space, a trigger may be placed on each drum or cymbal and used to trigger a similar sound on a drum module. These sounds are then amplified through a PA system so the audience can hear them, and they can be amplified to any level without the risks of audio feedback or bleed problems associated with microphones and PAs in certain settings.", "title": "Components" }, { "paragraph_id": 48, "text": "The sound of electronic drums and cymbals themselves is heard by the drummer and possibly other musicians in close proximity, but, even so, the foldback (audio monitor) system is usually fed from the electronic sounds rather than the live acoustic sounds. The drums can be heavily dampened (made to resonate less or have the sound subdued), and their tuning and quality is less critical in the latter scenario. In this way, much of the atmosphere of the live performance is retained in a large venue, but without some of the problems associated with purely microphone-amplified drums. Triggers and sensors can also be used in conjunction with conventional or built-in microphones. If some components of a kit prove more difficult to mic than others (e.g., an excessively \"boomy\" low tom), triggers may be used on only the more difficult instruments, balancing out a drummer's/band's sound in the mix.", "title": "Components" }, { "paragraph_id": 49, "text": "Trigger pads and drums, on the other hand, when deployed in a conventional set-up, are most commonly used to produce sounds not possible with an acoustic kit, or at least not with what is available. Any sound that can be sampled/recorded can be played when the pad is struck, by assigning the recorded sounds to specific triggers. Recordings or samples of barking dogs, sirens, breaking glass, and stereo recordings of aircraft taking off and landing have all been used. Along with the more obvious electronically generated drums, there are other sounds that (depending on the device used) can also be played/triggered by electronic drums.", "title": "Components" }, { "paragraph_id": 50, "text": "Virtual drums are a type of audio software that simulates the sound of a drum kit using synthesized drum kit sounds or digital samples of acoustic drum sounds. Different drum software products offer a recording function, the ability to select from several acoustically distinctive drum kits (e.g., jazz, rock, metal), as well as the option to incorporate different songs into the session. Some computer software can turn any hard surface into a virtual drum kit using only one microphone.", "title": "Components" }, { "paragraph_id": 51, "text": "Hardware is the name given to the metal stands that support the drums, cymbals, and other percussion instruments. Generally, the term also includes the hi-hat pedal and clutch, and bass drum pedal or pedals, and the drum stool.", "title": "Components" }, { "paragraph_id": 52, "text": "Hardware is carried along with sticks and other accessories in the traps case, and includes:", "title": "Components" }, { "paragraph_id": 53, "text": "Many or even all of the stands may be replaced by a drum rack, which is particularly useful for large drum kits.", "title": "Components" }, { "paragraph_id": 54, "text": "Drummers often set up their own drum hardware onstage and adjust it to their comfort level. 
Major bands on tour will often have a drum tech who knows how to set up the drummer's hardware and instruments in the desired location and with the desired configuration.", "title": "Components" }, { "paragraph_id": 55, "text": "Drum kits are traditionally categorized by the number of drums, ignoring cymbals and other instruments. Snare, tom-tom, and bass drums are always counted; other drums, such as octobans, may or may not be counted.", "title": "Common configurations" }, { "paragraph_id": 56, "text": "Traditionally, in America and the United Kingdom, drum sizes are expressed as depth x diameter, both measured in inches. Many drum kit manufacturers have recently been expressing sizes as diameter x depth, still in inches. For example, a hanging tom 12 inches in diameter and 8 inches deep would be described by Tama as 8 inches × 12 inches, but by Pearl as 12 inches × 8 inches, and a standard diameter Ludwig snare drum 5 inches deep is a 5-inch × 14-inch instrument, while the UK's Premier Manufacturer offers the same dimensions as a 14-inch × 5-inch snare. The sizes of drums and cymbals given below are typical. Many instruments differ slightly or radically from them. Where no size is given, it is because there is too much variety to give a typical size.", "title": "Common configurations" }, { "paragraph_id": 57, "text": "A three-piece drum set is the most basic set. A conventional three-piece kit consists of a bass drum, a 14\"-diameter snare drum, 12\"–14\" hi-hats, a single 12\"-diameter hanging tom, 8\"–9\" in depth, and a suspended 14\"–18\" cymbal, the latter two mounted on the bass drum. These kits were common in the 1950s and 1960s and are still used. It is a common configuration of kits sold through mail order, and, with smaller sized drums and cymbals, of kits for children.", "title": "Common configurations" }, { "paragraph_id": 58, "text": "A four-piece kit extends the three-piece by adding one tom, either a second hanging tom mounted on the bass drum (a notable user is Chris Frantz of Talking Heads) and often displacing the cymbal, or by adding a floor tom. Normally another cymbal is added as well, so there are separate ride and crash cymbals, either on two stands, or the ride cymbal mounted on the bass drum to the player's right and the crash cymbal on a separate stand. The standard cymbal sizes are 16\" for the crash, and 18\"–20\", ride, with the 20\" ride cymbal the most common.", "title": "Common configurations" }, { "paragraph_id": 59, "text": "When a floor tom is added to make a four-piece kit, the floor tom is usually 14\" for jazz, and 16\" otherwise. This configuration is common in jazz and rock. Notable users include Ringo Starr of The Beatles, Mitch Mitchell of the Jimi Hendrix Experience, John Barbata of the Turtles, and various jazz drummers throughout the 20th century, including Art Blakey, Buddy Rich, and Jo Jones. For jazz, which normally emphasizes the use of a ride cymbal for swing patterns, the lack of second hanging tom in a four-piece kit allows the cymbal to be positioned closer to the drummer, making it easier to play.", "title": "Common configurations" }, { "paragraph_id": 60, "text": "If a second hanging tom is used, it is 10\" diameter and 8\" deep for fusion, or 13\" diameter and one inch deeper than for the 12\" diameter tom. Otherwise, a 14\" diameter hanging tom is added to the 12\", both being 8\" deep. 
In any case, both toms are most often mounted on the bass drum with the smaller of the two next to the hi-hats (which are to the left for a right-handed drummer). These kits are particularly useful for smaller venues, where space is limited, such as coffeehouses, cafés, hotel lounges, and small pubs.", "title": "Common configurations" }, { "paragraph_id": 61, "text": "The five-piece kit is the full-size kit and is the most common configuration used across various genres and styles. It adds a third tom to the four-piece kit, making for three toms in all. A fusion kit will normally add a 14\" tom, either a floor tom or a hanging tom on a stand to the right of the bass drum; in either case, making the tom lineup 10\", 12\" and 14\". Having three toms enables drummers to have high-, middle-, and low-pitched toms, which gives them more options for fills and solos.", "title": "Common configurations" }, { "paragraph_id": 62, "text": "Other kits will normally have 12\" and 13\" hanging toms and either a 14\" hanging tom on a stand, a 14\" floor tom, or a 16\" floor tom. It is common to have 10\" and 12\" hanging toms, with a 16\" floor tom. This configuration is often called a hybrid setup. The bass drum is most commonly 22\" in diameter, but rock kits may use 24\", fusion 20\", jazz 18\", and, in larger bands, up to 26\". A second crash cymbal is common, typically an inch or two larger or smaller than the 16\" one, with the larger of the two to the right for a right-handed drummer. A big band drummer may use crashes up to 20\" and a ride up to 24\" or, very rarely, 26\". A rock kit may also substitute a larger ride cymbal or larger hi-hats, typically 22\" for the ride and 15\" for the hats.", "title": "Common configurations" }, { "paragraph_id": 63, "text": "Most five-piece kits, except for entry-level, also have one or more effects cymbals. Adding cymbals beyond the basic ride, hi-hats, and one-crash configuration requires more stands, in addition to the standard drum hardware packs. Because of this, many higher-cost kits for professionals are sold with little or no hardware, to allow the drummer to choose the stands and bass drum pedal they prefer. At the other extreme, many inexpensive, entry-level kits are sold as a five-piece kit complete with two cymbal stands, most often one straight and one boom, and some even with a standard cymbal pack, a stool, and a pair of 5A drum sticks. In the 2010s, digital kits were often offered in a five-piece kit, usually with one plastic crash cymbal trigger and one ride cymbal trigger. Fully electronic drums do not produce any acoustic sound beyond the quiet tapping of sticks on the plastic or rubber heads. Their trigger-pads are wired up to a synth module or sampler.", "title": "Common configurations" }, { "paragraph_id": 64, "text": "If the toms are omitted completely, or the bass drum is replaced by a pedal-operated beater on the bottom skin of a floor tom and the hanging toms omitted, the result is a two-piece cocktail drum kit, originally developed for cocktail lounge acts. Such kits are particularly favored in musical genres such as trad jazz, bebop, rockabilly, and jump blues. Some rockabilly kits and beginner kits for very young players omit the hi-hat stand. In rockabilly, this allows the drummer to play standing rather than seated. 
A very simple jazz kit for informal or amateur jam sessions consists of a bass drum, snare drum, and hi-hat, often with only a single cymbal (normally a ride, with or without sizzlers).", "title": "Common configurations" }, { "paragraph_id": 65, "text": "Although these kits may be small with respect to the number of drums used, the drums themselves are most often of normal size, or even larger in the case of the bass drum. Kits using smaller drums, in both smaller and larger configurations, are for particular uses, such as boutique kits designed to reduce the visual impact of a large kit, kits that need to fit into small spaces in coffeehouses, traveling kits to reduce luggage volume, and junior kits for very young players. Smaller drums also tend to be quieter, again suiting smaller venues, and many of these kits extend this with extra muffling, which allows for quiet or even silent practice.", "title": "Common configurations" }, { "paragraph_id": 66, "text": "Common extensions beyond the standard configurations include:", "title": "Common configurations" }, { "paragraph_id": 67, "text": "See also other acoustic instruments above. Another versatile extension becoming increasingly common is the use of some electronic drums in a mainly acoustic kit.", "title": "Common configurations" }, { "paragraph_id": 68, "text": "Less common extensions found particularly, but not exclusively, in very large kits, include:", "title": "Common configurations" }, { "paragraph_id": 69, "text": "Sticks are traditionally made from wood (particularly maple, hickory, and oak), but more recently, metal, carbon fiber, and other materials have been used for sticks. The prototypical wooden drum stick was primarily designed for use with the snare drum, and optimized for playing snare rudiments. Sticks come in a variety of weights and tip designs; 7N is a common jazz stick with a nylon tip, while a 5B is a common wood tipped stick, heavier than a 7N but with a similar profile, and a common standard for beginners. Numbers range from 1 (heaviest) to 10 (lightest).", "title": "Accessories" }, { "paragraph_id": 70, "text": "The meanings of both numbers and letters vary from manufacturer to manufacturer, and some sticks are not described using this system at all, just being known as Smooth Jazz (typically a 7N or 9N) or Speed Rock (typically a 2B or 3B) for example. Many famous drummers endorse sticks made to their particular preference and sold under their signature.", "title": "Accessories" }, { "paragraph_id": 71, "text": "Besides drumsticks, drummers will also use brushes and Rutes in jazz and similar soft music. More rarely, other beaters such as cartwheel mallets (known to kit drummers as \"soft sticks\") may be used. It is not uncommon for rock drummers to use the \"wrong\" (butt) end of a stick for a heavier sound; some makers produce tipless sticks with two butt ends.", "title": "Accessories" }, { "paragraph_id": 72, "text": "A stick bag is the standard way for a drummer to bring drumsticks to a live performance. For easy access, the stick bag is commonly mounted on the side of the floor tom, just within reach of the drummer's right hand, for a right-handed drummer.", "title": "Accessories" }, { "paragraph_id": 73, "text": "Drum muffles are types of mutes that can reduce the ring, boomy overtone frequencies, or overall volume on a snare, bass, or tom. Controlling the ring is useful in studio or live settings when unwanted frequencies can clash with other instruments in the mix. 
There are internal and external muffling devices which rest on the inside or outside of the drumhead, respectively. Common types of mufflers include muffling rings, gels and duct tape, and improvised methods, such as placing a wallet near the edge of the head. Some drummers muffle the sound of a drum by putting a cloth over the drumhead.", "title": "Accessories" }, { "paragraph_id": 74, "text": "Snare drum and tom-tom Typical ways to muffle a snare or tom include placing an object on the outer edge of the drumhead. A piece of cloth, a wallet, gel, or fitted rings made of mylar are common objects. Also used are external clip-on muffles. Internal mufflers that lie on the inside of the drumhead are often built into a drum, but are generally considered less effective than external muffles, as they stifle the initial tone, rather than simply reducing its sustain.", "title": "Accessories" }, { "paragraph_id": 75, "text": "Bass drum Muffling the bass can be achieved with the same muffling techniques as for the snare, but bass drums in a drum kit are more commonly muffled by adding pillows, a sleeping bag, or other soft filling inside the drum, between the heads. Cutting a small hole in the resonant head can also produce a more muffled tone, and allows the manipulation of internally placed muffling. The Evans EQ pad places a pad against the batterhead and, when struck, the pad moves off the head momentarily, then returns to rest against the head, thus reducing the sustain without choking the tone.", "title": "Accessories" }, { "paragraph_id": 76, "text": "Silencers/mutes Another type of drum muffler is a piece of rubber that fits over the entire drumhead or cymbal. It interrupts contact between the stick and the head, which dampens the sound. They are typically used in practice settings.", "title": "Accessories" }, { "paragraph_id": 77, "text": "Cymbals are usually muted with the fingers or hand, to reduce the length or volume of ringing (e.g., the cymbal choke technique which is a key part of heavy metal drumming). Cymbals can also be muted with special rubber rings or duct tape.", "title": "Accessories" }, { "paragraph_id": 78, "text": "Historical uses Muffled drums are often associated with funeral ceremonies as well, such as the funerals of Queen Victoria and John F. Kennedy. The use of muffled drums has been written about by such poets as Henry Wadsworth Longfellow, John Mayne, and Theodore O'Hara. Drums have also been used for therapy and learning purposes, such as when an experienced player will sit with a number of students and by the end of the session have all of them relaxed and playing complex rhythms.", "title": "Accessories" }, { "paragraph_id": 79, "text": "There are various types of stick holder accessories, including bags that can be attached to a drum and angled sheath-style stick holders, which can hold a single pair of sticks.", "title": "Accessories" }, { "paragraph_id": 80, "text": "A sizzler is a metal chain, or combination of chains, that is hung across a cymbal, creating a distinctive metallic sound when the cymbal is struck, similar to that of a sizzle cymbal. Using a sizzler is the non-destructive alternative to drilling holes in a cymbal and putting metal rivets in the holes. 
Another benefit of using a \"sizzler\" chain is that the chain is removable, with the cymbal being easily returned to its normal sound.", "title": "Accessories" }, { "paragraph_id": 81, "text": "Some sizzlers feature pivoting arms that allow the chains to be quickly lowered onto, or raised from, the cymbal, allowing the effect to be used for some songs and removed for others.", "title": "Accessories" }, { "paragraph_id": 82, "text": "Three types of protective covers are common for kit drums:", "title": "Accessories" }, { "paragraph_id": 83, "text": "As with all musical instruments, the best protection is provided by a combination of a hard-shelled case with interior padding, such as foam, next to the drums and cymbals.", "title": "Accessories" }, { "paragraph_id": 84, "text": "Microphones (\"mics\") are used with drum kits to pick up the sound of the drums and cymbals for a sound recording and/or to pick up the sound of the drum kit so that it can be amplified through a PA system or sound reinforcement system. While most drummers use microphones and amplification in live shows, so that the sound engineer can adjust the levels of the drums and cymbals, some bands that play quieter genres of music and in small venues, such as coffeehouses, play acoustically, without mics or PA amplification. Small jazz groups, such as jazz quartets or organ trios that are playing in a small bar, will often just use acoustic drums. Of course, if the same small jazz groups play on the mainstage of a big jazz festival, the drums will be miced so that they can be adjusted in the sound system mix. A middle-ground approach is used by some bands that play in small venues: they do not mic every drum and cymbal, but only the instruments that the sound engineer wants to be able to control in the mix, such as the bass drum and the snare.", "title": "Accessories" }, { "paragraph_id": 85, "text": "In miking a drum kit, dynamic microphones, which can handle high sound-pressure levels, are usually used to close-mic drums, which is predominantly the way to mic drums for live shows. Condenser microphones are used for overheads and room mics, an approach which is more common with sound recording applications. Close miking of drums may be done using stands or by mounting the microphones on the rims of the drums, or even using microphones built into the drum itself, which eliminates the need for stands for such microphones, reducing both clutter and set-up time, as well as better isolating them.", "title": "Accessories" }, { "paragraph_id": 86, "text": "For some styles of music, drummers use electronic effects on drums, such as individual noise gates that mute the attached microphone when the signal is below a threshold volume. This allows the sound engineer to use a higher overall volume for the drum kit by reducing the number of \"active\" mics which could produce unwanted feedback at any one time. When a drum kit is entirely miked and amplified through the sound reinforcement system, the drummer or the sound engineer can add other electronic effects to the drum sound, such as reverb or digital delay.", "title": "Accessories" }, { "paragraph_id": 87, "text": "Some drummers arrive at the venue with their drum kit and use the mics and mic stands provided by the venue's sound engineer. Other drummers bring all their own mics, or selected mics (e.g., good-quality snare and bass drum mics), to ensure that they have good quality mics on hand. 
In bars and nightclubs, the microphones supplied by the venue can sometimes be in substandard condition, due to the heavy use they experience.", "title": "Accessories" }, { "paragraph_id": 88, "text": "Drummers using electronic drums, drum machines, or hybrid acoustic-electric kits (which blend traditional acoustic drums and cymbals with electronic pads) typically use a monitor speaker, keyboard amplifier, or even a small PA system to hear the electronic drum sounds. Even a drummer playing entirely acoustic drums may use a monitor speaker to hear the drums, especially if playing in a loud rock or metal band, where there is substantial onstage volume from large, powerful guitar stacks. Drummers are often given a large speaker cabinet with a 15\" subwoofer to help them monitor their bass drum sound (along with a full-range monitor speaker to hear the rest of their kit). Some sound engineers and drummers prefer to use an electronic vibration system, colloquially known as a \"butt shaker\" or \"throne thumper\" to monitor the bass drum, because this lowers the stage volume. With a \"butt shaker\", the \"thump\" of each bass drum strike causes a vibration in the drum stool; this way the drummer feels their beat on the posterior, rather than hears it.", "title": "Accessories" }, { "paragraph_id": 89, "text": "In-Ear Monitors are also popular among drummers since they also work as earplugs.", "title": "Accessories" }, { "paragraph_id": 90, "text": "A number of accessories are designed for the bass drum. The bass drum can take advantage of the bass reflex speaker design, in which a tuned port (a hole and a carefully measured tube) are put in a speaker enclosure to improve the bass response at the lowest frequencies. Bass drumhead patches protect the drumhead from the impact of the felt beater. Bass drum pillows are fabric bags with filling or stuffing that can be used to alter the tone or resonance of the bass drum. A less expensive alternative to using a specialized bass drum pillow is to use an old sleeping bag.", "title": "Accessories" }, { "paragraph_id": 91, "text": "Some drummers wear special drummer's gloves to improve their grip on the sticks when they play. Drumming gloves often have a textured grip surface made of a synthetic or rubber material and mesh or vents on the parts of the glove not used to hold sticks, to ventilate perspiration. Some drummers wear gloves to prevent blisters.", "title": "Accessories" }, { "paragraph_id": 92, "text": "In some styles or settings—such as country music clubs or churches, small venues, or when a live recording is being made—the drummer may use a transparent Perspex or Plexiglas drum screen (also known as a drum shield) to dampen the onstage volume of the drums. A screen that completely surrounds the drum kit is known as a drum booth. In live sound applications, drum shields are used so that the audio engineer can have more control over the volume of drums that the audience hears through the PA system mix, or to reduce the overall volume of the drums, as a way to reduce the overall volume of the band. In some recording studios, foam and fabric baffles are used in addition to, or in place of, clear panels. 
The drawback with foam/cloth baffle panels is that the drummer cannot see well other performers, the record producer, or the audio engineer.", "title": "Accessories" }, { "paragraph_id": 93, "text": "Drummers often bring a carpet, mats, or rugs to venues to prevent the bass drum and hi-hat stand from \"crawling\" (moving away) on a slippery surface, which can be caused by the drum head striking the bass drum. The carpet also reduces short reverberations (which is generally but not always an advantage), and helps to prevent damage to the flooring or floor coverings. In shows where multiple drummers will bring their kits onstage over the night, it is common for drummers to mark the location of their stands and pedals with tape, to allow for quicker positioning of a kit to a drummer's accustomed position. Bass drums and hi-hat stands commonly have retractable spikes, to help them grip surfaces such as carpet, or rubber feet, to remain stationary on hard surfaces.", "title": "Accessories" }, { "paragraph_id": 94, "text": "Drummers use a variety of accessories when practicing. Metronomes and beat counters are used to develop a sense of a steady beat. Drum muffling pads may be used to lessen the volume of drums during practicing. A practice pad, held on the lap, on a leg, or mounted on a stand, is used for near-silent practice with drumsticks. A set of practice pads mounted to simulate an entire drum kit is known as a practice kit. In the 2010s, these have largely been superseded by electronic drums, which can be listened to with headphones for quiet practice and by kits with non-sounding mesh heads.", "title": "Accessories" }, { "paragraph_id": 95, "text": "Drummers use a drum key for tuning their drums and adjusting some drum hardware. Besides the basic type of drum key (a T-handled wrench) there are various tuning wrenches and tools. Basic drum keys are divided into three types which allows for tuning of three types of tuning screws on drums: square (most used), slotted, and hexagonal. Ratchet-type wrenches allow high-tension drums to be tuned easily. Spin keys (utilizing a ball joint) allow for rapid head changing. Torque-wrench keys are available, graphically revealing the torque given to each lug. Also, tension gauges, or meters, which are set on the head, aid drummers to achieve a consistent tuning. Drummers can tune drums \"by ear\" or use a digital drum tuner, which \"measures tympanic pressure\" on the drumhead to provide accurate tuning.", "title": "Accessories" }, { "paragraph_id": 96, "text": "Drum kit music is either written in music notation (called \"drum parts\"), learned and played by ear, improvised, or some combination of any of all three of these methods. Professional session musician drummers and big-band drummers are often required to read drum parts. Drum parts are most commonly written on a standard five-line staff. As of 2016, a special percussion clef is used, while previously the bass clef was used. However, even if the bass, or no, clef is used, each line and space is assigned an instrument in the kit, rather than a pitch. In jazz, traditional music, folk music, rock music, and pop music, drummers are expected to be able to learn songs by ear (from a recording or from another musician who is playing or singing the song) and improvise. The degree of improvisation differs among different styles. Jazz and jazz fusion drummers may have lengthy improvised solos in every song. 
In rock music and blues, there are also drum solos in some songs, although they tend to be shorter than those in jazz. Drummers in all popular music and traditional music styles are expected to be able to improvise accompaniment parts to songs, once they are told the genre or style (e.g., shuffle, ballad, blues).", "title": "Accessories" }, { "paragraph_id": 97, "text": "On early recording media (until 1925), such as wax cylinders and discs carved with an engraving needle, sound balancing meant that musicians had to be moved back in the room. Drums were often put far from the horn (part of the mechanical transducer) to reduce sound distortion.", "title": "Accessories" }, { "paragraph_id": 98, "text": "In the 2020s, drum parts in many popular music styles are often recorded apart from the other instruments and singers, using multitrack recording techniques. Once the drums are recorded, the other instruments (rhythm guitar, piano, etc.), and then vocals, are added. To ensure that the drum tempo is consistent at this type of recording, the drummer usually plays along with a click track (a type of digital metronome) in headphones. The ability to play accurately along with a click track has become an important skill for professional drummers.", "title": "Accessories" }, { "paragraph_id": 99, "text": "Manufacturers using the American traditional format in their catalogs include these:", "title": "Drum manufacturers" }, { "paragraph_id": 100, "text": "Those using the European measures of diameter and depth include these:", "title": "Drum manufacturers" }, { "paragraph_id": 101, "text": "", "title": "See also" } ]
A drum kit is a collection of drums, cymbals, and sometimes other auxiliary percussion instruments set up to be played by one person. The drummer typically holds a pair of matching drumsticks, and uses their feet to operate hi-hat and bass drum pedals. A standard kit usually consists of: A snare drum, mounted on a stand A bass drum, played with a beater moved by a foot-operated pedal One or more tom-toms, including rack toms and/or floor toms One or more cymbals, including a ride cymbal and crash cymbal Hi-hat cymbals, a pair of cymbals that can be played with a foot-operated pedal The drum kit is a part of the standard rhythm section and is used in many types of popular and traditional music styles, ranging from rock and pop to blues and jazz.
Dying Earth
Dying Earth is a fantasy series by the American author Jack Vance, comprising four books originally published from 1950 to 1984. Some have been called picaresque. They vary from short story collections to a fix-up (a novel created from older short stories), perhaps all the way to a novel. The first book in the series, The Dying Earth, was ranked number 16 of 33 "All Time Best Fantasy Novels" by Locus in 1987, based on a poll of subscribers, although it was marketed as a collection and the Internet Speculative Fiction Database (ISFDB) calls it a "loosely connected series of stories". The stories of the Dying Earth series are set in the distant future, at a point when the sun is almost exhausted and magic has asserted itself as a dominant force. The Moon has disappeared, and the Sun is in danger of burning out at any time, often flickering as if about to go out before shining again. The various civilizations of Earth have for the most part collapsed into decadence or religious fanaticism, and their inhabitants are overcome with a fatalistic outlook. The Earth is mostly barren and cold, and has become infested with various predatory monsters (possibly created by a magician in a former age). Magic in the Dying Earth is performed by memorizing syllables, and the human brain can only accommodate a certain number at once. When a spell is used, the syllables vanish from the caster's mind. Creatures called sandestins can be summoned and used to perform more complex actions, but are considered dangerous to rely upon. Magic has loose links to the science of old, and advanced mathematics is treated like arcane lore. The Dying Earth exists alongside several Overworlds and Underworlds, which add a sense of profound longing and entrapment to the series. While humans can, with relative ease, physically travel to the horrific Underworlds (as Cugel does on several occasions, to his dismay), the vast majority of the population are only capable of mentally visiting the wondrous Overworlds through rare artifacts (e.g., through the "Eyes of the Overworld") or dangerous magical phenomena (such as the ship Cugel encounters in the deserts). Though they can look at the wonders and pretend they are really there, humans can never truly inhabit or escape to these utopias, as their physical bodies remain stuck on the Dying Earth and will die with the sun regardless. These siren-like visions of paradise lead to the deaths, insanity, and suffering of many, especially during Cugel's journeys. While most remaining civilizations on the Dying Earth are utterly unique in their customs and cultures, there are some common threads. Because the moon is gone and the wind is often weak (the sun no longer heats the earth as much), the oceans are largely placid bodies of water with no tide and tiny waves. To cross them, people use boats propelled by giant sea-worms, which are cared for and controlled by "Wormingers". In addition, the manses of magicians, protected by walls and spells and monsters, are relatively common sights in inhabited lands. Vance wrote the stories of the first book while he served in the United States Merchant Marine during World War II. In the late 1940s several of his other stories were published in magazines. According to pulp editor Sam Merwin, Vance's earliest magazine submissions in the 1940s were heavily influenced by the style of James Branch Cabell.
Fantasy historian Lin Carter has noted several probable lasting influences of Cabell on Vance's work, and suggests that the early "pseudo-Cabell" experiments bore fruit in The Dying Earth (1950). The series comprises four books by Vance and some sequels by other authors that may or may not be canonical. One 741-page omnibus edition has been issued as The Complete Dying Earth (SF Book Club, 1999) and, in both the US and UK, as Tales of the Dying Earth (2000). All four books were published with tables of contents, the first and fourth as collections. The second and third contained mostly material previously published in short story form but were marketed as novels, the second as a fix-up and the third without acknowledging any previous publication. 1. The Dying Earth (the author's preferred title is Mazirian the Magician) was openly a collection of six stories, all original, although written during Vance's war service. ISFDB calls them "slightly connected" and catalogs the last as a novella (17,500 to 40,000 words). 2. The Eyes of the Overworld (the author's preferred title is Cugel the Clever) was a fix-up of six stories, presented as seven. All were novelettes by word count (7,500 to 17,500 words). Five were previously published as noted here. 3. Cugel's Saga (the author's preferred title is Cugel: The Skybreak Spatterlight) was marketed as a novel. ISFDB calls it "[t]wice as large and less episodic than The Eyes of the Overworld" but qualifies that label: "This is marketed as a novel, but there is a table of contents, and some of the parts were previously published (although none are acknowledged thus)." It catalogs previous publication of three chapters without remark on the degree of revision. 4. Rhialto the Marvellous was marketed as a collection of a Foreword and three stories, one previously published. The Foreword is non-narrative canonical fiction presenting the general state of the world in the 21st Aeon (loosely, a "short story"). Some sequels have been written by other authors, either with Vance's authorization or as tributes to his work. Michael Shea's first publication, the novel A Quest for Simbilis (DAW Books, 1974, OCLC 2128177), was an authorized sequel to Eyes. However, "When Vance returned to the milieu, his Cugel's Saga continued the events of The Eyes of the Overworld in a different direction." The tribute anthology Songs of the Dying Earth (2009) contains short fiction set in the world of the Dying Earth by numerous writers, alongside tributes to Vance's work and influence. In 2010 Shea wrote another authorized story belonging to the Dying Earth series and featuring Cugel as one of the characters: "Hew the Tintmaster", published in the anthology Swords & Dark Magic: The New Sword and Sorcery, ed. Jonathan Strahan and Lou Anders (Eos, 2010, pp. 323–362). WorldCat contributing libraries report holding all four books in French, Spanish, and (in omnibus edition) Hebrew translations, and report holding The Dying Earth in five other languages: Finnish, German, Japanese, Polish, and Russian. The whole first volume (of six stories) has also been translated into Esperanto, together with two Cugel stories, and made available online as e-books by a long-time fan and Vance Integral Edition co-worker. Permission to translate and distribute (only into Esperanto) was obtained informally, directly from the author, and, since his death in 2013, continues with ongoing permission from the author's estate.
To date these are three: Mazirian the Magician, The Sorcerer Pharesm, and The Bagful of Dreams, available for free download in EPUB, Mobi, and PDF formats. The entire series has seen several Italian translations, and in Italy Vance remains one of the US science fiction authors most often translated and published. The Dying Earth subgenre of science fiction is named in recognition of Vance's role in standardizing a setting, the entropically dying earth and sun. Its importance was recognized with the publication of Songs of the Dying Earth, a tribute anthology edited by George R. R. Martin and Gardner Dozois (Subterranean, 2009). Each short story in the anthology is set on the Dying Earth, and concludes with a short acknowledgement by the author of Vance's influence on them. Gene Wolfe's The Book of the New Sun (1980–83) is set in a somewhat similar world, and was written under Vance's influence. Wolfe suggested in The Castle of the Otter, a collection of essays, that he inserted the book The Dying Earth into his fictional world under the title The Book of Gold (specifically, Wolfe wrote that the "Book of Gold" mentioned in The Book of the New Sun is different for each reader, but for him it was "The Dying Earth"). Wolfe has extended the series. Michael Shea's novel Nifft the Lean (1982), his second book eight years after A Quest for Simbilis, also owes much to Vance's creation, since the protagonist of the story is a petty thief (not unlike Cugel the Clever) who travels and struggles in an exotic world. Shea returned to Nifft with 1997 and 2000 sequels. The Archonate stories by Matthew Hughes (the 1994 novel Fools Errant and numerous works in this millennium) take place in "the penultimate age of Old Earth," a period of science and technology that is on the verge of transforming into the magical era of the Dying Earth. Booklist has called him Vance's "heir apparent." (Review by Carl Hays of The Gist Hunter and Other Stories, Booklist, August 2005.) The original creators of the Dungeons & Dragons games were fans of Jack Vance and incorporated many aspects of the Dying Earth series into the game. The magic system, in which a wizard is limited in the number of spells that can be simultaneously remembered and forgets them once they are cast, was based on the magic of the Dying Earth. In role-playing game circles, this sort of magic system is called "Vancian" or "Vancean". Some of the spells from Dungeons & Dragons are based on spells mentioned in the Dying Earth series, such as the prismatic spray. Magic items from the Dying Earth stories, such as ioun stones, also made their way into Dungeons & Dragons. One of the deities of magic in Dungeons & Dragons is named Vecna, an anagram of "Vance". The Talislanta role-playing game, designed by Stephan Michael Sechi and originally published in 1987 by Bard Games, was inspired by the works of Jack Vance, so much so that the first release, The Chronicles of Talislanta, is dedicated to the author. There is an official Dying Earth role-playing game published by Pelgrane Press, with an occasional magazine, The Excellent Prismatic Spray (named after a magic spell). The game situates players in Vance's world, populated by desperately extravagant people. Many other role-playing settings pay homage to the series by including fantasy elements he invented, such as the darkness-dwelling Grues. Goodman Games announced the publication of a Dying Earth setting for their Dungeon Crawl Classics role-playing game system, running a successful Kickstarter campaign for it.
The game was released in 2023.
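The "Vancian" magic mechanic described above (a caster holds a limited number of memorized spells and forgets each one as it is cast) is simple enough to model directly. The short Python sketch below is a hypothetical illustration written for this article; the class, its capacity value, and the example spell usage are invented here and are not taken from any published rules.

class VancianCaster:
    """A caster whose mind holds only a few spells at once; casting a
    spell erases it from memory, as in the Dying Earth stories."""

    def __init__(self, capacity):
        self.capacity = capacity  # maximum spells held in mind at once
        self.memorized = []       # spells currently memorized

    def memorize(self, spell):
        if len(self.memorized) >= self.capacity:
            raise ValueError("mind is full; cast or release a spell first")
        self.memorized.append(spell)

    def cast(self, spell):
        if spell not in self.memorized:
            raise ValueError(spell + " is not memorized")
        self.memorized.remove(spell)  # the syllables vanish once spoken
        return spell + " takes effect"

mazirian = VancianCaster(capacity=4)
mazirian.memorize("The Excellent Prismatic Spray")
print(mazirian.cast("The Excellent Prismatic Spray"))
print(mazirian.memorized)  # [] -- the spell must be re-memorized to cast again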
[ { "paragraph_id": 0, "text": "Dying Earth is a fantasy series by the American author Jack Vance, comprising four books originally published from 1950 to 1984. Some have been called picaresque. They vary from short story collections to a fix-up (novel created from older short stories), perhaps all the way to novel.", "title": "" }, { "paragraph_id": 1, "text": "The first book in the series, The Dying Earth, was ranked number 16 of 33 \"All Time Best Fantasy Novels\" by Locus in 1987, based on a poll of subscribers, although it was marketed as a collection and the Internet Speculative Fiction Database (ISFDB) calls it a \"loosely connected series of stories\".", "title": "" }, { "paragraph_id": 2, "text": "The stories of the Dying Earth series are set in the distant future, at a point when the sun is almost exhausted and magic has asserted itself as a dominant force. The Moon has disappeared and the Sun is in danger of burning out at any time, often flickering as if about to go out, before shining again. The various civilizations of Earth have collapsed for the most part into decadence or religious fanaticism and its inhabitants overcome with a fatalistic outlook. The Earth is mostly barren and cold, and has become infested with various predatory monsters (possibly created by a magician in a former age).", "title": "Setting" }, { "paragraph_id": 3, "text": "Magic in the Dying Earth is performed by memorizing syllables, and the human brain can only accommodate a certain number at once. When a spell is used, the syllables vanish from the caster's mind. Creatures called sandestins can be summoned and used to perform more complex actions, but are considered dangerous to rely upon. Magic has loose links to the science of old, and advanced mathematics is treated like arcane lore.", "title": "Setting" }, { "paragraph_id": 4, "text": "The Dying Earth exists alongside several Overworlds and Underworlds. These help add a sense of profound longing and entrapment to the series. While humans can, with relative ease, physically travel to the horrific Underworlds (as Cugel does on several occasions, to his dismay) the vast majority of the population are only capable of mentally visiting the wondrous Overworlds through rare artifacts (e.g. through the \"Eyes of the Overworld\") or dangerous magic phenomena (such as the ship Cugel encounters in the deserts). Though they can look at the wonders and pretend they are really there, humans can never truly inhabit or escape to these utopias as their physical bodies remain stuck on the Dying Earth and will die with the sun regardless. These siren-like visions of paradise lead to the deaths, insanity, and suffering of many, especially during Cugel's journeys.", "title": "Setting" }, { "paragraph_id": 5, "text": "While most remaining civilizations on the Dying Earth are utterly unique in their customs and cultures, there are some common threads. Because the moon is gone and wind is often weak (the sun no longer heats the earth as much) the oceans are largely placid bodies of water with no tide and tiny waves. To cross them, boats are propelled by giant sea-worms. These worms are cared for and controlled by \"Wormingers\". In addition, the manses of magicians, protected by walls and spells and monsters, are relatively common sights in inhabited lands.", "title": "Setting" }, { "paragraph_id": 6, "text": "Vance wrote the stories of the first book while he served in the United States Merchant Marine during World War II. 
In the late 1940s several of his other stories were published in magazines.", "title": "Origins" }, { "paragraph_id": 7, "text": "According to pulp editor Sam Merwin, Vance's earliest magazine submissions in the 1940s were heavily influenced by the style of James Branch Cabell. Fantasy historian Lin Carter has noted several probable lasting influences of Cabell on Vance's work, and suggests that the early \"pseudo-Cabell\" experiments bore fruit in The Dying Earth (1950).", "title": "Origins" }, { "paragraph_id": 8, "text": "The series comprises four books by Vance and some sequels by other authors that may be or may not have been canonical.", "title": "Series" }, { "paragraph_id": 9, "text": "One 741-page omnibus edition has been issued as The Complete Dying Earth (SF Book Club, 1999) and in both the US and UK as Tales of the Dying Earth (2000).", "title": "Series" }, { "paragraph_id": 10, "text": "All four books were published with Tables of Contents, the first and fourth as collections. The second and third contained mostly material previously published in short story form but were marketed as novels, the second as a fix-up and the third without acknowledging any previous publication.", "title": "Series" }, { "paragraph_id": 11, "text": "1. The Dying Earth (the author's preferred title is Mazirian the Magician) was openly a collection of six stories, all original, although written during Vance's war service. ISFDB calls them \"slightly connected\" and catalogs the last as a novella (17,500 to 40,000 word count).", "title": "Series" }, { "paragraph_id": 12, "text": "2. The Eyes of the Overworld (the author's preferred title is Cugel the Clever) was a fix-up of six stories, presented as seven. All were novelettes by word count (7500 to 17,500). Five were previously published as noted here.", "title": "Series" }, { "paragraph_id": 13, "text": "3. Cugel's Saga (the author's preferred title is Cugel: The Skybreak Spatterlight) was marketed as a novel. ISFDB calls it \"[t]wice as large and less episodic than The Eyes of the Overworld\" but qualifies that label. \"This is marketed as a novel, but there is a table of contents, and some of the parts were previously published (although none are acknowledged thus).\" It catalogs previous publication of three chapters without remark on the degree of revision.", "title": "Series" }, { "paragraph_id": 14, "text": "4. Rhialto the Marvellous was marketed as a collection, a Foreword and three stories, one previously published. The Foreword is non-narrative canonical fiction presenting the general state of the world in the 21st Aeon (a \"short story\" loosely).", "title": "Series" }, { "paragraph_id": 15, "text": "Some sequels have been written by other authors, either with Vance's authorization or as tributes to his work.", "title": "Series" }, { "paragraph_id": 16, "text": "Michael Shea's first publication, the novel A Quest for Simbilis (DAW Books, 1974, OCLC 2128177), was an authorized sequel to Eyes. 
However, \"When Vance returned to the milieu, his Cugel's Saga continued the events of The Eyes of the Overworld in a different direction.\"", "title": "Series" }, { "paragraph_id": 17, "text": "The tribute anthology Songs of the Dying Earth (2009) contains short fiction set in the world of the Dying Earth by numerous writers alongside tributes to Vance's work and influence.", "title": "Series" }, { "paragraph_id": 18, "text": "In 2010 Shea wrote another authorized story belonging to the Dying Earth series and featuring Cugel as one of characters: \"Hew the Tintmaster\", published in the anthology Swords & Dark Magic: The New Sword and Sorcery, ed. Jonathan Strahan and Lou Anders (Eos, 2010, pp. 323–362).", "title": "Series" }, { "paragraph_id": 19, "text": "WorldCat contributing libraries report holding all four books in French, Spanish, and (in omnibus edition) Hebrew translations; and report holding The Dying Earth in five other languages: Finnish, German, Japanese, Polish, and Russian.", "title": "Series" }, { "paragraph_id": 20, "text": "The whole first volume (of six stories) has been translated also into Esperanto together with two Cugel stories and made available on-line as e-books by a long-time fan and Vance Integral Edition co-worker. Permission to translate and distribute (only into Esperanto) was obtained informally direct from the author and, since his death in 2013, continues with ongoing permission from the author's estate. To date these are three: Mazirian the Magician, The Sorcerer Pharesm, and The Bagful of Dreams available for free download as EPub, Mobi and PDF.", "title": "Series" }, { "paragraph_id": 21, "text": "The entire series has seen several Italian translations, and in Italy Vance remains one of the US scifi authors most often translated and published", "title": "Series" }, { "paragraph_id": 22, "text": "The Dying Earth subgenre of science fiction is named in recognition of Vance's role in standardizing a setting, the entropically dying earth and sun. Its importance was recognized with the publication of Songs of the Dying Earth, a tribute anthology edited by George R. R. Martin and Gardner Dozois (Subterranean, 2009). Each short story in the anthology is set on the Dying Earth, and concludes with a short acknowledgement by the author of Vance's influence on them.", "title": "Legacy" }, { "paragraph_id": 23, "text": "Gene Wolfe's The Book of the New Sun (1980–83) is set in a slightly similar world, and was written under Vance's influence. Wolfe suggested in The Castle of the Otter, a collection of essays, that he inserted the book The Dying Earth into his fictional world under the title The Book of Gold (specifically, Wolfe wrote that the \"Book of Gold\" mentioned in The Book of the New Sun is different for each reader, but for him it was \"The Dying Earth\"). Wolfe has extended the series.", "title": "Legacy" }, { "paragraph_id": 24, "text": "Michael Shea's novel Nifft the Lean (1982), his second book eight years after A Quest for Simbilis, also owes much debt to Vance's creation, since the protagonist of the story is a petty thief (not unlike Cugel the Clever), who travels and struggles in an exotic world. 
Shea returned to Nifft with 1997 and 2000 sequels.", "title": "Legacy" }, { "paragraph_id": 25, "text": "The Archonate stories by Matthew Hughes — the 1994 novel Fools Errant and numerous works in this millennium — take place in \"the penultimate age of Old Earth,\" a period of science and technology that is on the verge of transforming into the magical era of the time of the Dying Earth. Booklist has called him Vance's \"heir apparent.\" (Review by Carl Hays of The Gist Hunter and Other Stories, Booklist, August 2005)", "title": "Legacy" }, { "paragraph_id": 26, "text": "The original creators of the Dungeons & Dragons games were fans of Jack Vance and incorporated many aspects of the Dying Earth series into the game. The magic system, in which a wizard is limited in the number of spells that can be simultaneously remembered and forgets them once they are cast, was based on the magic of Dying Earth. In role-playing game circles, this sort of magic system is called \"Vancian\" or \"Vancean\". Some of the spells from Dungeons & Dragons are based on spells mentioned in the Dying Earth series, such as the prismatic spray. Magic items from the Dying Earth stories such as ioun stones also made their way into Dungeons & Dragons. One of the deities of magic in Dungeons & Dragons is named Vecna, an anagram of \"Vance\".", "title": "Legacy" }, { "paragraph_id": 27, "text": "The Talislanta role-playing game designed by Stephan Michael Sechi and originally published in 1987 by Bard Games was inspired by the works of Jack Vance so much so that the first release, The Chronicles of Talislanta, is dedicated to the author.", "title": "Legacy" }, { "paragraph_id": 28, "text": "There is an official Dying Earth role-playing game published by Pelgrane Press with an occasional magazine The Excellent Prismatic Spray (named after a magic spell). The game situates players in Vance's world populated by desperately extravagant people. Many other role-playing settings pay homage to the series by including fantasy elements he invented such as the darkness-dwelling Grues.", "title": "Legacy" }, { "paragraph_id": 29, "text": "Goodman Games have announced the publication of the setting using their Dungeon Crawl Classics roleplaying game system, running a successful Kickstarter campaign for it. The game was released in 2023.", "title": "Legacy" } ]
Dying Earth is a fantasy series by the American author Jack Vance, comprising four books originally published from 1950 to 1984. Some have been called picaresque. They vary from short story collections to a fix-up, perhaps all the way to novel. The first book in the series, The Dying Earth, was ranked number 16 of 33 "All Time Best Fantasy Novels" by Locus in 1987, based on a poll of subscribers, although it was marketed as a collection and the Internet Speculative Fiction Database (ISFDB) calls it a "loosely connected series of stories".
Dispute resolution
Dispute resolution or dispute settlement is the process of resolving disputes between parties. The term dispute resolution is sometimes used interchangeably with conflict resolution. Prominent venues for dispute settlement in international law include the International Court of Justice (formerly the Permanent Court of International Justice); the United Nations Human Rights Committee (which operates under the ICCPR) and the European Court of Human Rights; the Panels and Appellate Body of the World Trade Organization; and the International Tribunal for the Law of the Sea. Half of all international agreements include a dispute settlement mechanism. States are also known to establish their own arbitration tribunals to settle disputes. Prominent private international courts, which adjudicate disputes between commercial private entities, include the International Court of Arbitration (of the International Chamber of Commerce) and the London Court of International Arbitration. Methods of dispute resolution include negotiation, mediation, arbitration, and litigation. One could theoretically include violence or even war as part of this spectrum, but dispute resolution practitioners do not usually do so; violence rarely ends disputes effectively, and indeed often only escalates them. Violence also rarely makes the parties stop disagreeing about the issue that provoked it. For example, a country that wins a war to annex part of another country's territory does not thereby settle the question of to whom the territory rightly belongs, and tensions may remain high between the two nations. Dispute resolution processes fall into two major types: adjudicative processes, such as litigation or arbitration, in which a judge, jury, or arbitrator determines the outcome; and consensual processes, such as collaborative law, mediation, or negotiation, in which the parties attempt to reach agreement themselves. Not all disputes, even those in which skilled intervention occurs, end in resolution. Such intractable disputes form a special area in dispute resolution studies. Dispute resolution is an important requirement in international trade, including negotiation, mediation, arbitration and litigation. The legal system provides resolutions for many different types of disputes. Some disputants will not reach agreement through a collaborative process. Some disputes need the coercive power of the state to enforce a resolution. Perhaps more importantly, many people want a professional advocate when they become involved in a dispute, particularly if the dispute involves perceived legal rights, legal wrongdoing, or threat of legal action against them. The most common form of judicial dispute resolution is litigation. Litigation is initiated when one party files suit against another. In the United States, litigation is facilitated by the government within federal, state, and municipal courts. While litigation is often used to resolve disputes, it is, strictly speaking, a form of conflict adjudication rather than conflict resolution per se: litigation determines the legal rights and obligations of the parties but does not necessarily resolve the disagreement between them. For example, the US Supreme Court can rule on whether states have the constitutional authority to criminalize abortion, but the ruling will not necessarily end the disagreement, as the losing party may reject the Court's reasoning and continue to dispute the question.
Litigation proceedings are very formal and are governed by rules, such as rules of evidence and procedure, which are established by the legislature. Outcomes are decided by an impartial judge and/or jury, based on the factual questions of the case and the applicable law. The verdict of the court is binding, not advisory; however, both parties have the right to appeal the judgment to a higher court. Judicial dispute resolution is typically adversarial in nature: it involves antagonistic parties or opposing interests seeking the outcome most favorable to their own position. Due to the antagonistic nature of litigation, parties frequently opt to resolve disputes privately. Retired judges or private lawyers often become arbitrators or mediators; however, trained and qualified non-legal dispute resolution specialists form a growing body within the field of alternative dispute resolution (ADR). In the United States, many states now have mediation or other ADR programs annexed to the courts, to facilitate settlement of lawsuits. Some use the term dispute resolution to refer only to alternative dispute resolution (ADR), that is, extrajudicial processes such as arbitration, collaborative law, and mediation used to resolve conflict and potential conflict between and among individuals, business entities, governmental agencies, and (in the public international law context) states. ADR generally depends on agreement by the parties to use ADR processes, either before or after a dispute has arisen. ADR has experienced steadily increasing acceptance and utilization because of a perception of greater flexibility, costs below those of traditional litigation, and speedy resolution of disputes, among other perceived advantages. However, some have criticized these methods as taking away the right to seek redress of grievances in the courts, suggesting that extrajudicial dispute resolution may not offer the fairest way for parties not in an equal bargaining relationship, for example in a dispute between a consumer and a large corporation. In addition, in some circumstances, arbitration and other ADR processes may become as expensive as litigation or more so.
[ { "paragraph_id": 0, "text": "Dispute resolution or dispute settlement is the process of resolving disputes between parties. The term dispute resolution is sometimes used interchangeably with conflict resolution.", "title": "" }, { "paragraph_id": 1, "text": "Prominent venues for dispute settlement in international law include the International Court of Justice (formerly the Permanent Court of International Justice); the United Nations Human Rights Committee (which operates under the ICCPR) and European Court of Human Rights; the Panels and Appellate Body of the World Trade Organization; and the International Tribunal for the Law of the Sea. Half of all international agreements include a dispute settlement mechanism.", "title": "" }, { "paragraph_id": 2, "text": "States are also known to establish their own arbitration tribunals to settle disputes. Prominent private international courts, which adjudicate disputes between commercial private entities, include the International Court of Arbitration (of the International Chamber of Commerce) and the London Court of International Arbitration.", "title": "" }, { "paragraph_id": 3, "text": "Methods of dispute resolution include:", "title": "Methods" }, { "paragraph_id": 4, "text": "One could theoretically include violence or even war as part of this spectrum, but dispute resolution practitioners do not usually do so; violence rarely ends disputes effectively, and indeed, often only escalates them. Also, violence rarely causes the parties involved in the dispute to no longer disagree on the issue that caused the violence. For example, a country successfully winning a war to annex part of another country's territory doesn't cause the former waring nations to no longer seriously disagree to whom the territory rightly belongs to and tensions may still remain high between the two nations.", "title": "Methods" }, { "paragraph_id": 5, "text": "Dispute resolution processes fall into two major types:", "title": "Methods" }, { "paragraph_id": 6, "text": "Not all disputes, even those in which skilled intervention occurs, end in resolution. Such intractable disputes form a special area in dispute resolution studies.", "title": "Methods" }, { "paragraph_id": 7, "text": "Dispute resolution is an important requirement in international trade, including negotiation, mediation, arbitration and litigation.", "title": "Methods" }, { "paragraph_id": 8, "text": "The legal system provides resolutions for many different types of disputes. Some disputants will not reach agreement through a collaborative process. Some disputes need the coercive power of the state to enforce a resolution. Perhaps more importantly, many people want a professional advocate when they become involved in a dispute, particularly if the dispute involves perceived legal rights, legal wrongdoing, or threat of legal action against them.", "title": "Legal dispute resolution" }, { "paragraph_id": 9, "text": "The most common form of judicial dispute resolution is litigation. Litigation is initiated when one party files suit against another. In the United States, litigation is facilitated by the government within federal, state, and municipal courts. While litigation is often used to resolve disputes, it's strictly speaking a form of conflict adjudication and not a form of conflict resolution per se. This is because litigation only determines the legal rights and obligations of parties involved in a dispute and doesn't necessarily solve the disagreement between the parties involved in the dispute. 
For example, supreme court cases can rule on whether US states have the constitutional right to criminalize abortion but won't cause the parties involved in the case to no longer disagree on whether states do indeed have the constitutional authority to restrict access to abortion as one of the parties may disagree with the supreme courts reasoning and still disagree with the party that the supreme court sided with. Litigation proceedings are very formal and are governed by rules, such as rules of evidence and procedure, which are established by the legislature. Outcomes are decided by an impartial judge and/or jury, based on the factual questions of the case and the application law. The verdict of the court is binding, not advisory; however, both parties have the right to appeal the judgment to a higher court. Judicial dispute resolution is typically adversarial in nature, for example, involving antagonistic parties or opposing interests seeking an outcome most favorable to their position.", "title": "Legal dispute resolution" }, { "paragraph_id": 10, "text": "Due to the antagonistic nature of litigation, collaborators frequently opt for solving disputes privately.", "title": "Legal dispute resolution" }, { "paragraph_id": 11, "text": "Retired judges or private lawyers often become arbitrators or mediators; however, trained and qualified non-legal dispute resolution specialists form a growing body within the field of alternative dispute resolution (ADR). In the United States, many states now have mediation or other ADR programs annexed to the courts, to facilitate settlement of lawsuits.", "title": "Legal dispute resolution" }, { "paragraph_id": 12, "text": "Some use the term dispute resolution to refer only to alternative dispute resolution (ADR), that is, extrajudicial processes such as arbitration, collaborative law, and mediation used to resolve conflict and potential conflict between and among individuals, business entities, governmental agencies, and (in the public international law context) states. ADR generally depends on agreement by the parties to use ADR processes, either before or after a dispute has arisen. ADR has experienced steadily increasing acceptance and utilization because of a perception of greater flexibility, costs below those of traditional litigation, and speedy resolution of disputes, among other perceived advantages. However, some have criticized these methods as taking away the right to seek redress of grievances in the courts, suggesting that extrajudicial dispute resolution may not offer the fairest way for parties not in an equal bargaining relationship, for example in a dispute between a consumer and a large corporation. In addition, in some circumstances, arbitration and other ADR processes may become as expensive as litigation or more so.", "title": "Extrajudicial dispute resolution" } ]
Dispute resolution or dispute settlement is the process of resolving disputes between parties. The term dispute resolution is sometimes used interchangeably with conflict resolution. Prominent venues for dispute settlement in international law include the International Court of Justice; the United Nations Human Rights Committee and European Court of Human Rights; the Panels and Appellate Body of the World Trade Organization; and the International Tribunal for the Law of the Sea. Half of all international agreements include a dispute settlement mechanism. States are also known to establish their own arbitration tribunals to settle disputes. Prominent private international courts, which adjudicate disputes between commercial private entities, include the International Court of Arbitration and the London Court of International Arbitration.
2002-01-05T21:53:49Z
2023-12-09T22:32:24Z
[ "Template:Div col begin", "Template:Authority control", "Template:Citation needed", "Template:Div col end", "Template:Short description", "Template:Cite web", "Template:More citations needed", "Template:Full citation needed", "Template:Reflist", "Template:Cite book", "Template:Cite journal", "Template:ISBN", "Template:Self reference", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Dispute_resolution
9,085
Catan: Cities & Knights
Catan: Cities & Knights (German: Städte und Ritter), formerly The Cities and Knights of Catan, is an expansion to the board game The Settlers of Catan for three to four players (five to six player play is also possible with the Settlers and Cities & Knights five to six player extensions; two-player play is possible with the Traders & Barbarians expansion). It contains features taken from The Settlers of Catan, with emphasis on city development and the use of knights, which are used as a method of attacking other players as well as helping opponents defend Catan against a common foe. Cities & Knights can also be combined with the Catan: Seafarers expansion or with Catan: Traders & Barbarians scenarios (again, five to six player play is only possible with the applicable five to six player extension(s)). Because of the new rules introduced in Cities & Knights, the game is played to 13 victory points, as opposed to 10 as in the base game The Settlers of Catan. The base game's development cards, along with the Largest Army special card, are not used in Cities & Knights. One of the main additions to the game is commodities, which are a type of secondary resource produced only by cities. Like resources, commodities are associated with a type of terrain, can be stolen by the robber (with Seafarers, also the pirate), count against the resource hand limit, and may not be collected if the robber is on the terrain. Resources may be traded for commodities, and commodities may be traded for resources. Commodities can then be used to build city improvements (provided the player has a city), which provide additional benefits. The commodities are paper (which comes from forest terrain), coin (from mountain terrain), and cloth (from pasture terrain). When combining Cities & Knights with Barbarian Attack, the written rules are ambiguous with regard to whether commodities are collected along with normal resources when collecting from a Gold River tile, as well as whether or not commodities can be collected directly from Gold River tiles. However, online rules state that "Gold can only buy you resources, not commodities." A city on a grain or brick hex produces two of the corresponding resource, as in the original Settlers. A city on a wool, ore, or wood hex produces one corresponding resource as well as one corresponding commodity (cloth, coin, or paper). Grain and brick, however, are used for new purchasing options: grain activates knights, and brick can be used to build city walls. In total there are 36 commodity cards: 12 paper (from forest), 12 cloth (from pasture), and 12 coin (from mountains). A player with a city may use commodities to build city improvements, which confer several advantages. There are five levels of city improvements, in three different categories. Each category of improvements requires a different commodity, and higher levels require more cards of that commodity. At the third level, players earn a special ability, depending on the type of improvement. The first player with an improvement at the fourth level can claim any of their cities as a metropolis, worth four victory points instead of two for that city. Each type of improvement has only one associated metropolis, and no city can be a metropolis of two different types (because of this, a player without a non-metropolis city may not build improvements beyond the third level). If a player is the first to build an improvement to the final level (out-building the current holder of the metropolis), they take the metropolis from its current holder.
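For readers who find the production rules above easier to scan as data, here is a minimal sketch in Python; the names and data layout are illustrative assumptions, not from any official source.

```python
# Illustrative sketch of city production in Cities & Knights, based on the
# yield rules described above; names and data layout are assumptions.

CITY_YIELD = {
    "fields":    {"grain": 2},                 # no commodity from grain hexes
    "hills":     {"brick": 2},                 # no commodity from brick hexes
    "forest":    {"lumber": 1, "paper": 1},
    "pasture":   {"wool": 1, "cloth": 1},
    "mountains": {"ore": 1, "coin": 1},
}

def city_production(terrain):
    """What one city produces when the number on its hex is rolled."""
    return dict(CITY_YIELD[terrain])

# Example: a city on pasture yields one wool plus one cloth commodity.
assert city_production("pasture") == {"wool": 1, "cloth": 1}
```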
The other significant concept in Cities & Knights is knights, which replace soldiers and the largest army. Knights are units that require continuous maintenance through their activation mechanism, but have a wide variety of functions. Knights can be promoted through three ranks, although promotion to the final rank is a special ability granted by the city improvement the Fortress. Knights are placed on the board in a similar manner to settlements, and can be used to block opposing roads, active or not. However, a knight must be activated in order to perform any other function, and performing a function immediately deactivates the knight. Knights cannot perform actions on the same turn they are activated, but can be reactivated on the same turn as performing an action. These actions include moving the knight, chasing away an adjacent robber, and displacing a weaker opposing knight. If a knight is promoted or forced to retreat, its active status does not change. The standard Cities & Knights game comes with 24 knights, 6 of each color. The 5/6 player extension adds a further 12 knights, 6 each of two new colors. Cities & Knights introduces a third die, known as the event die, which serves two functions. The first applies to the concept of barbarians, a periodic foe that all players must work together to defend against. Three of the sides of the event die have a picture of a ship on them. The other three sides have a symbol of a city gate, allowing players who have sufficiently built up a city to obtain progress cards (see below). The barbarians are represented by a ship positioned on a track representing the distance between the ship and Catan (i.e. the board). Each time the event die shows a black ship, the barbarian ship takes one step closer to Catan. When the barbarians arrive at Catan, a special phase is immediately performed before all other actions (including collecting resources). In this special phase, the barbarians' attack strength, corresponding to the combined number of cities and metropolises held by all players, is compared to Catan's defense strength, corresponding to the combined levels (i.e. 1 point for each basic, 2 for each strong, and 3 for each mighty) of all activated knights in play. If the barbarians are successful in their attack (if their strength is greater than Catan's defense), then the players must pay the consequence. The player or players who contributed the least defense are attacked, and each has one city reduced to a settlement; for example, if several players tie for the least defense, each of the tied players loses a city. A player who has only settlements or metropolises is immune to the barbarians and is not counted when determining the least defense. Should Catan prevail, the player who contributes the most to Catan's defense receives a special Defender of Catan card, worth a victory point. Regardless of the outcome, all knights are immediately deactivated, and the barbarian ship returns to its starting point on the track. In the event of a tie among the greatest contributors of knights, none of the tied players earns a Defender of Catan card. Instead, each of the tied players draws a progress card (explained below) of the type of their choosing. There are 6 Defender of Catan cards. As the barbarians are very likely to advance on any given roll, a variant in common usage is that the robber (and with Seafarers, the pirate) does not move until the first barbarian attack, nor can a knight move the robber before that point.
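The attack resolution described above reduces to a small amount of bookkeeping. The following Python sketch is a non-official illustration of that logic under the stated rules; the data layout and function name are assumptions.

```python
# Hedged sketch of barbarian attack resolution in Cities & Knights, following
# the rules described above; not an official implementation.

def resolve_barbarian_attack(players):
    """players: dicts with 'cities', 'metropolises', 'settlements', and
    'active_knights' (a list of knight levels: 1=basic, 2=strong, 3=mighty)."""
    attack = sum(p["cities"] + p["metropolises"] for p in players)
    defense = sum(sum(p["active_knights"]) for p in players)

    if attack > defense:
        # Players with only settlements or metropolises are immune.
        vulnerable = [p for p in players if p["cities"] > 0]
        if vulnerable:
            least = min(sum(p["active_knights"]) for p in vulnerable)
            for p in vulnerable:
                if sum(p["active_knights"]) == least:
                    p["cities"] -= 1          # one city is pillaged...
                    p["settlements"] += 1     # ...and becomes a settlement

    # Regardless of the outcome, all knights deactivate (the ship also resets).
    for p in players:
        p["active_knights"] = []
    return "barbarians" if attack > defense else "catan"
```

On a Catan win, the single largest contributor would receive the Defender of Catan card (with ties drawing progress cards instead); that bookkeeping is omitted here for brevity.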
The other significant outcome of the event die is Progress cards, which replace development cards. Because of the mechanics of progress cards explained below, one of the two white dice used in Settlers is replaced by a red die. Progress cards are organized into three categories, corresponding to the three types of improvements. Yellow progress cards aid in commercial development, green progress cards aid in technological advancements, and blue progress cards allow for political moves. When a city gate appears on the event die, progress cards of the corresponding type may be drawn depending on the value of the red die. Higher levels of city improvements increase the chance that progress cards will be drawn, with the highest level of city improvement allowing progress cards to be drawn regardless of the value on the red die. Progress cards, unlike the development cards they replace, can be played on the turn that they are drawn, and more than one progress card can be played per turn. However, they can generally only be played after the dice are rolled. Progress cards granting victory points are an exception, being played immediately (without regard to whose turn it is), while the Alchemist progress card, which allows a player to select the roll of the white and red dice, necessitates the card being played instead of rolling the numerical dice. (The event die is still rolled as normal.) Players are allowed to keep four progress cards (five in a five to six player game), and any additional ones must be discarded on the spot (unless the fifth card is a victory point, which is played immediately and the original progress cards remain). The only exception to this rule is when the player receives a fifth non-victory-point progress card during their own turn, in which case the player may choose to play any one of the five progress cards in hand, bringing the progress card count back down to four. While this clarification is not overtly stated in the Cities & Knights rule book, it is enforced in the online version of the game. In total, there are 54 progress cards: 18 science, 18 politics, and 18 trade. City walls are a minor addition to Cities & Knights that increase the number of resource and commodity cards a player is allowed in their hand before having to discard on a roll of 7. However, they do not protect the player from the robber or barbarians. Only cities and metropolises may have walls, and each city or metropolis can only have one wall, up to three walls per player. Each wall that the player has deployed permits the player to hold two more cards before being required to discard on a roll of seven. This results in a maximum of 13 cards (the base limit of 7 plus 2 for each of 3 walls). If the barbarians pillage a player's city, the city wall is also destroyed and removed from the board. The game comes with 12 city walls, 3 of each color. The merchant is another addition to Cities & Knights. Like the robber, the merchant is placed on a single land hex. Unlike the robber, the merchant has a beneficial effect. The merchant can only be deployed through the use of a Merchant progress card (of which there are six), on a land hex near a city or a settlement. The player in control of the merchant can trade the resource (not commodity) of that hex's type at a two-to-one rate, as if the player controlled a corresponding two-to-one harbor. The player in control of the merchant also earns a victory point. Both the victory point and the trade privilege are lost if another player takes control of the merchant.
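As a quick illustration of the city-wall hand limit described above, here is a minimal Python sketch; the constant and function names are illustrative, not official.

```python
# Minimal sketch of the city-wall hand limit described above.

BASE_LIMIT = 7       # base-game discard threshold on a roll of 7
CARDS_PER_WALL = 2   # each wall raises the threshold by two cards
MAX_WALLS = 3        # a player may deploy at most three walls

def hand_limit(walls):
    return BASE_LIMIT + CARDS_PER_WALL * min(walls, MAX_WALLS)

def must_discard(hand_size, walls):
    """True if a roll of 7 forces this player to discard half their hand."""
    return hand_size > hand_limit(walls)

assert hand_limit(3) == 13   # the 13-card maximum noted above
```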
In place of The Settlers of Catan standard improvement cost card, Cities & Knights gives a calendar-type flip-chart to each player, matching that player's color. The top of the chart has the standard costs from the Settlers game (for settlements, upgrade to city, and roads). It does not include the Development Card cost, as those cards are not used in a Cities & Knights game. It does include the costs of hiring a knight, upgrading a knight's level or strength, and the cost to activate a knight. It also includes the cost of a ship; ships are not used in a regular game of Cities & Knights, but this presumably caters to players who have combined Cities & Knights and Seafarers. Those are only the rudimentary costs of the game, however. The calendar also shows the costs of the next city improvement in each of the three categories: as a city is improved in a category, that segment has its card flipped down calendar-style to reveal the newly built improvement, any advantages gained by the improvement, and the updated cost of upgrading to the next level in that category. Each segment, as it is flipped down, also shows the updated dice pattern needed to earn the player a progress card in that category. Catan: Legend of the Conquerors is a scenario released in 2017 for the expansion Catan: Cities & Knights. A blog post was made in connection with the release. The scenario adds swamp hexes to the board. It also adds a cannon, which can be combined with a knight to increase the strength of that knight by one; this makes the maximum possible strength 4 when the cannon is applied to a mighty knight of strength 3 (1 + 3 = 4). To build a cannon, you pay 1 lumber and 1 ore for a foundry. When you combine a cannon and a knight, you have a cannoneer. The scenario also adds a horse farm that you can build for one lumber and one grain. The horse farm gives you a horse that you can use to turn one of your knights into cavalry. A cavalry unit can move between road networks, even if there is no connection between them. The cannon and horse cannot be combined. The blog post writes, "Some strategists may like the idea of equipping a knight with a horse and a cannon, thus making it some kind of overpowering “mounted cannoneer.” However, you are not allowed to place both playing pieces adjacent to a knight. This being said, it’s also hard to imagine a knight on a horse holding a cannon in his arms and firing it in all directions... " The official website for the world of Catan. (2016). Retrieved May 17, 2016, from http://www.catan.com/service/game-rules
[ { "paragraph_id": 0, "text": "Catan: Cities & Knights (German: Städte und Ritter), formerly The Cities and Knights of Catan is an expansion to the board game The Settlers of Catan for three to four players (five to six player play is also possible with the Settlers and Cities & Knights five to six player extensions; two-player play is possible with the Traders & Barbarians expansion). It contains features taken from The Settlers of Catan, with emphasis on city development and the use of knights, which are used as a method of attacking other players as well as helping opponents defend Catan against a common foe. Cities & Knights can also be combined with the Catan: Seafarers expansion or with Catan: Traders & Barbarians scenarios (again, five to six player play only possible with the applicable five to six player extension(s)).", "title": "" }, { "paragraph_id": 1, "text": "Because of the new rules introduced in Cities & Knights, the game is played to 13 victory points, as opposed to 10 as in the base game The Settlers of Catan.", "title": "Differences from The Settlers of Catan" }, { "paragraph_id": 2, "text": "The following cards are not used in Cities & Knights:", "title": "Differences from The Settlers of Catan" }, { "paragraph_id": 3, "text": "One of the main additions to the game is commodities, which are a type of secondary resource produced only by cities. Like resources, commodities are associated with a type of terrain, can be stolen by the robber (with Seafarers, also the pirate), count against the resource hand limit, and may not be collected if the robber is on the terrain. Resources may be traded for commodities, and commodities may be traded for resources. Commodities can then be used to build city improvements (provided the player has a city), which provide additional benefits.", "title": "Commodities" }, { "paragraph_id": 4, "text": "The commodities are paper (which comes from forest terrain), coin (from mountain terrain), and cloth (from pasture terrain).", "title": "Commodities" }, { "paragraph_id": 5, "text": "When combining Cities & Knights with Barbarian Attack, the written rules are ambiguous with regards to whether commodities are collected along with normal resources when collecting from a Gold River tile, as well as whether or not commodities can be collected directly from Gold River tiles. However, online rules state that \"Gold can only buy you resources, not commodities.\"", "title": "Commodities" }, { "paragraph_id": 6, "text": "A city on grain or brick gives two of each, as in the original Settlers. A city on wool, ore, or wood, produces one corresponding resource as well as one corresponding commodity (cloth, coin, or paper). Grain and brick, however, are used for new purchasing options: grain activates knights, and brick can be used to build city walls.", "title": "Commodities" }, { "paragraph_id": 7, "text": "In total there are 36 commodity cards: 12 paper (from forest), 12 cloth (from pasture), and 12 coin (from mountains).", "title": "Commodities" }, { "paragraph_id": 8, "text": "A player with a city may use commodities to build city improvements, which allow several advantages. There are city improvements in five levels, and in three different categories. Each category of improvements requires a different commodity and higher levels require more cards of that commodity. 
At the third level, players earn a special ability, depending on the type of improvement.", "title": "City improvements" }, { "paragraph_id": 9, "text": "The first player with an improvement at the fourth level can claim any of their cities as a metropolis, worth four victory points instead of two for that city. Each type of improvement has only one associated metropolis, and no city can be a metropolis of two different types (because of this, a player without a non-metropolis city may not build improvements beyond the third level). If a player is the first to build an improvement to the final level (out-building the current holder of the metropolis), they take the metropolis from its current holder.", "title": "City improvements" }, { "paragraph_id": 10, "text": "The other significant concept in Cities & Knights is the concept of knights, which replace the concept of soldiers and the largest army. Knights are units that require continuous maintenance through their activation mechanism, but have a wide variety of functions. Knights can be promoted through three ranks, although promotion to the final rank is a special ability granted by the city improvement the Fortress.", "title": "Knights" }, { "paragraph_id": 11, "text": "Knights are placed on the board in a similar manner to settlements, and can be used to block opposing roads, active or not. However, knights must be activated in order to perform other functions, which immediately deactivate the knight. Knights cannot perform actions on the same turn they are activated, but can be reactivated on the same turn as performing an action. These actions include:", "title": "Knights" }, { "paragraph_id": 12, "text": "If a knight is promoted or forced to retreat, its active status does not change.", "title": "Knights" }, { "paragraph_id": 13, "text": "The standard Cities & Knights game comes with 24 knights, 6 of each color. The 5/6 player extension adds a further 12 knights, 6 each of two new colors.", "title": "Knights" }, { "paragraph_id": 14, "text": "Cities & Knights introduces a third die, known as the event die, which serves two functions. The first applies to the concept of barbarians, a periodic foe that all players must work together to defend against. Three of the sides of the event die have a picture of a ship on them. The other three sides have a symbol of a city gate, allowing players who have sufficiently built up a city to obtain progress cards (see below).", "title": "Barbarian attacks" }, { "paragraph_id": 15, "text": "The barbarians are represented by a ship positioned on a track representing the distance between the ship and Catan (i.e. the board). Each time the event die shows a black ship, the barbarian ship takes one step closer to Catan. When the barbarians arrive at Catan, a special phase is immediately performed before all other actions (including collecting resources). In this special phase, the barbarians' attack strength, corresponding to the combined number of cities and metropolises held by all players, is compared to Catan's defense strength, corresponding to the combined levels (i.e. 1 point for each basic, 2 for each strong, and 3 for each mighty) of all activated knights in play.", "title": "Barbarian attacks" }, { "paragraph_id": 16, "text": "If the barbarians are successful in their attack (if they have a strength greater than Catan), then the players must pay the consequence. The player(s) who had the least defense will be attacked, and will have one city reduced to a settlement. 
If they only have settlements, or metropolises, then they are immune to barbarians and do not count as the player contributing the least defense.", "title": "Barbarian attacks" }, { "paragraph_id": 17, "text": "Should Catan prevail, the player who contributes the most to Catan's defense receives a special Defender of Catan card, worth a victory point. Regardless of the outcome, all knights are immediately deactivated, and the barbarian ship returns to its starting point on the track. In the event of a tie among the greatest contributors of knights, none of the tied players earn a Defender of Catan card. Instead, each of the tied players draw a progress card (explained below) of the type of their choosing. There are 6 Defender of Catan cards.", "title": "Barbarian attacks" }, { "paragraph_id": 18, "text": "As the likelihood of having the barbarian move closer to Catan is very high, a variant in common usage is that the robber (and with Seafarers, the pirate) does not move until the first barbarian attack, nor can a knight move the robber before that point.", "title": "Barbarian attacks" }, { "paragraph_id": 19, "text": "Examples where cities are lost:", "title": "Barbarian attacks" }, { "paragraph_id": 20, "text": "The other significant outcome of the event die is Progress cards, which replace development cards. Because of the mechanics of progress cards explained below, one of the two white dice used in Settlers is replaced by a red die.", "title": "Progress cards" }, { "paragraph_id": 21, "text": "Progress cards are organized into three categories, corresponding to the three types of improvements. Yellow progress cards aid in commercial development, green progress cards aid in technological advancements, and blue progress cards allow for political moves. When a castle appears on the event die, progress cards of the corresponding type may be drawn depending on the value of the red die. Higher levels of city improvements increase the chance that progress cards will be drawn, with the highest level of city improvement allowing progress cards to be drawn regardless of the value on the red die.", "title": "Progress cards" }, { "paragraph_id": 22, "text": "Progress cards, unlike the development cards they replace, can be played on the turn that they are drawn, and more than one progress card can be played per turn. However, they can generally only be played after the dice are rolled. Progress cards granting victory points are an exception, being played immediately (without regards to whose turn it is), while the Alchemist progress card, which allows a player to select the roll of the white and red dice, necessitates the card being played instead of rolling the numerical dice. (The event die is still rolled as normal.)", "title": "Progress cards" }, { "paragraph_id": 23, "text": "Players are allowed to keep four progress cards (five in a five to six player game), and any additional ones must be discarded on the spot (unless the 5th card is a victory point, which is played immediately and the original progress cards remain). The only exception to this rule is when the player receives a 5th non-victory point progress card during their turn, in which case the player may choose to play any one of the five progress cards in hand, bringing the progress card count back down to four. 
While this clarification is not overtly stated in the Cities & Knights rule book, it is enforced in the online version of the game.", "title": "Progress cards" }, { "paragraph_id": 24, "text": "In total, there are 54 progress cards: 18 science, 18 politics, and 18 trade.", "title": "Progress cards" }, { "paragraph_id": 25, "text": "City walls are a minor addition to Cities & Knights that increase the number of resource and commodity cards a player is allowed in their hand before having to discard on a roll of 7. However, they do not protect the player from the robber or barbarians. Only cities and metropolises may have walls, and each city or metropolis can only have one wall, up to three walls per player. Each wall that the player has deployed permits the player to hold two more cards before being required to discard on a roll of seven. This results in a maximum of 13 cards.", "title": "City walls" }, { "paragraph_id": 26, "text": "If the barbarians pillage your city, then the city wall is also destroyed and the wall is removed from the board.", "title": "City walls" }, { "paragraph_id": 27, "text": "The game comes with 12 city walls, 3 of each color.", "title": "City walls" }, { "paragraph_id": 28, "text": "The merchant is another addition to Cities & Knights. Like the robber, the merchant is placed on a single land hex. Unlike the robber, the merchant has a beneficial effect.", "title": "The Merchant" }, { "paragraph_id": 29, "text": "The merchant can only be deployed through the use of a Merchant progress card (of which there are six), on a land hex near a city or a settlement. The player with the control of the merchant can trade the resource (not commodity) of that type at a two-to-one rate, as if the player had a control of a corresponding two-to-one harbor.", "title": "The Merchant" }, { "paragraph_id": 30, "text": "The player with the control of the merchant also earns a victory point. Both the victory point and the trade privilege are lost if another player takes control of the merchant.", "title": "The Merchant" }, { "paragraph_id": 31, "text": "In place of The Settlers of Catan standard improvement cost card, Cities & Knights gives a calendar type flip-chart to each player, matching that player's color. The top of the chart has the standard costs from the Settlers game (for settlements, upgrade to city, and roads). It does not include the Development Card cost as those cards are not used in a Cities & Knights game. It does include the costs of hiring a knight, upgrading a knight's level or strength, and the cost to activate a knight. It also includes the cost of a ship, which are not used in a regular game of Cities & Knights, but presumably this is to cater for players who have combined Cities & Knights and Seafarers.", "title": "City Upgrade Calendar" }, { "paragraph_id": 32, "text": "Those are only the rudimentary costs of the game however. The calendar also shows the costs of the next city improvement in each of the three categories — as a city is improved in a category, that segment has its card flipped down calendar style to reveal the newly built improvement, any advantages gained by the improvement, and the updated cost of upgrading to the next level in that category. Each segment, as it is flipped down, also shows the updated dice pattern needed to earn the player a progress card in that category.", "title": "City Upgrade Calendar" }, { "paragraph_id": 33, "text": "Catan: Legend of the Conquerors is a scenario released in 2017 for the expansion Catan: Cities & Knights. 
A blog post was made in connection with the release. The game adds swamp hexes to the board. The game also adds a cannon which can be combined with a knight to increase the strength of a knight by one, which makes the maximum possible strength 4, when applied to a mighty knight of strength 3 ( 1 + 3 = 4 {\\displaystyle 1+3=4} ). To build a cannon, you pay 1 lumber and 1 ore for a foundry. When you combine a cannon and a knight you have a cannoneer. The game also adds a horse farm that you can build for one lumber and one grain. The horse farm gives you a horse that you can use to turn one of your knights into cavalry. A cavalry unit can move between road networks, even if there is no connection between them. The cannon and horse cannot be combined. The blog post writes, \"Some strategists may like the idea of equipping a knight with a horse and a cannon, thus making it some kind of overpowering “mounted cannoneer.” However, you are not allowed to place both playing pieces adjacent to a knight. This being said, it’s also hard to imagine a knight on a horse holding a cannon in his arms and firing it in all directions... \"", "title": "Catan Legend of the Conquerors" }, { "paragraph_id": 34, "text": "The official website for the world of Catan. (2016). Retrieved May 17, 2016, from http://www.catan.com/service/game-rules", "title": "References" } ]
Catan: Cities & Knights, formerly The Cities and Knights of Catan, is an expansion to the board game The Settlers of Catan for three to four players. It contains features taken from The Settlers of Catan, with emphasis on city development and the use of knights, which are used as a method of attacking other players as well as helping opponents defend Catan against a common foe. Cities & Knights can also be combined with the Catan: Seafarers expansion or with Catan: Traders & Barbarians scenarios.
2002-01-05T23:59:13Z
2023-10-20T19:01:34Z
[ "Template:Infobox Game", "Template:Lang-de", "Template:Bgg", "Template:Reflist", "Template:Cite web", "Template:Catan navbox", "Template:More citations needed", "Template:Italic title" ]
https://en.wikipedia.org/wiki/Catan:_Cities_%26_Knights
9,086
Catan: Seafarers
Catan: Seafarers, or Seafarers of Catan in older editions (German: Die Seefahrer von Catan), is an expansion of the board game Catan for three to four players (five-to-six-player play is also possible with both of the respective five-to-six-player extensions). The main feature of this expansion is the addition of ships, gold rivers, and the pirate to the game, allowing play between multiple islands. The expansion also provides numerous scenarios, some of which have custom rules. The Seafarers rules and scenarios are also, for the most part, compatible with Catan: Cities & Knights and Catan: Traders & Barbarians. The concepts introduced in Seafarers were part of designer Klaus Teuber's original design for Settlers. Seafarers introduces the concept of ships, which serve as roads over water or along the coast. Each ship costs one lumber and one wool to create (lumber for the hull and wool for the sails). A settlement must first be built before a player can switch from building roads to building ships, or vice versa. Thus, a chain of ships is always anchored at a settlement on the coast. If a shipping line is not anchored at both ends by different settlements, its owner may also move the last ship at the open end, although this can only be done once per turn and may not be done with any ship that was created on the same turn. The "Longest Road" card is now renamed the "Longest Trade Route", since this is now calculated by counting the number of contiguous ships plus roads that a player has. A settlement or city is necessary between a road and a ship for the two to be considered continuous for the purposes of this card. The Road Building card allows a player to build 2 roads, 2 ships, or one of each when used. Along with the concept of ships, Seafarers also introduces the notion of the pirate, which acts as a waterborne robber that steals from nearby ships (similar to how the robber steals from nearby settlements). The pirate can also prevent ships from being built or moved nearby, but it does not interfere with harbors, and it does not prevent settlements from being built. When a seven is rolled or a Knight card is played, the player may move either the robber or the pirate. Seafarers also introduces the "Gold River" or "Gold Field" terrain, which grants nearby players one resource of their choice for every settlement adjacent to a gold tile and two resources for every city. Since being able to choose any resource type allows more building power, gold rivers are often marked with number tokens of only two or three dots, placed far away from starting positions, or both, to offset this. When combined with Cities & Knights, the rules state that you are not allowed to take commodities instead of resources if a city is nearby. Some scenarios have extra rules encompassing the concept of exploration, which is done by having the hex tiles placed face down. Should a player build next to unexplored terrain, the terrain tile is turned face up, and the player is rewarded with a resource should the revealed tile be resource-producing. In other scenarios, the board is divided into islands, and if the player builds a settlement on an island other than the ones they begin on, the settlement is worth extra victory points. The Cities and Knights manual recommends that players not use the Cities & Knights rules in scenarios where exploration is a factor.
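To make the gold-tile arithmetic above concrete, here is a minimal Python sketch; the function name and signature are illustrative, not from any official rulebook.

```python
# Small sketch of Gold River production as described above: one free resource
# choice per adjacent settlement, two per adjacent city. Per the combined
# Cities & Knights rules quoted above, gold never yields commodities.

def gold_picks(settlements, cities):
    """Free resource choices a player receives from one gold hex roll."""
    return settlements + 2 * cities

# Example: one settlement and one city adjacent to the hex grant three picks.
assert gold_picks(settlements=1, cities=1) == 3
```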
Unlike The Settlers of Catan and Catan: Cities & Knights, in which the only random element of setup is the placement of land tiles, number tokens, and harbors in an identically-shaped playing area, Catan: Seafarers has a number of different scenarios or maps from which to choose. Each map uses a different selection of tiles laid out in a specific pattern, which may not use all of the tiles. Other attributes also set each map apart, for example, restrictions on the placement of initial settlements, whether tiles are distributed randomly, the number of victory points needed to win, and special victory point awards, usually for building on islands across the sea. Seafarers provides scenarios for three or four players (the older fourth edition used the same maps for three- and four-player versions of the scenarios), while the extension provides scenarios for six players (the older third edition also included separate maps for five- and six-player scenarios). The scenarios of the older editions of Seafarers and the newest are generally incompatible, owing to the different frames included with the game. (In particular, older editions of Settlers did not come with a frame for their board; a separate add-on was made available for players of the older-edition Settlers games, containing the newer-edition frames, so as to make them compatible with the newer edition of Seafarers. The older edition of Seafarers included a square frame, and while both older and newer editions of the frames have the same width across, the newer editions are not square-shaped, and are longer down the middle of the board compared to the sides.) Heading to New Shores (New Shores in older editions) is the scenario resembling Teuber's original design for the game. The game board consists of the main Settlers island as well as a few smaller islands, which award a special victory point to each player for their first settlements on them. This scenario is meant for players new to Seafarers, with elements of Seafarers incorporated into the more familiar main board. The Four Islands is the first scenario in which the new mechanics introduced in Seafarers are brought to the forefront. In this scenario, the map is split up into four islands of roughly equal size and resource distribution. (The six-player version found in the extension has the map split into six islands; the scenario is titled The Six Islands, but is played identically. Older editions of the extension had a five-player version with five islands, called The Five Islands.) Players may claim up to two of the islands as their home islands, and settling on any of the other islands awards a special victory point. The Fog Island (Oceans in older editions) is the first scenario where exploration is used. The board starts off with a portion of the map left blank: when players expand into the blank region, terrain hexes are drawn at random from a supply and placed in the empty space, and, if a land hex is "discovered", a number token may be assigned. As a reward for discovering land, the player making the discovery receives a bonus resource card corresponding to the type of land hex discovered. Through the Desert (Into the Desert in older editions) is similar to The Four Islands, but consists of a large continent and smaller outlying islands. On the large island, there exists a "wall of deserts" that separates the island into a large main area and separate smaller strips of land.
As the name of the scenario implies, expanding through the desert into these smaller strips of land, or by sea to the outlying islands, awards bonus victory points. The Forgotten Tribe, originally titled Friendly Neighbors, was a downloadable scenario (available only in German) which was incorporated into newer editions of Seafarers. The map consists of a main island and smaller outlying islands, where the namesake forgotten tribe resides. Players may not expand into the outlying islands, but by building ships so that they border the outlying islands, players may be awarded victory points, development cards, or harbors that they may place on the coast of the main island at a later time. Introduced in the newer editions, Cloth for Catan continues the adventures with the Forgotten Tribe. The scenario was previously available for older editions as a downloadable scenario (only in German), titled Coffee for Catan. Players begin with settlements on the outside of the map, but may build ships to reach the Forgotten Tribe's islands, which are in the center. By connecting to the Forgotten Tribe's settlements (represented by number tokens), players may earn cloth tokens when the number tokens for the Forgotten Tribe's villages are rolled. Cloth tokens, in turn, are worth one victory point for each pair obtained. The Pirate Island, introduced in newer editions, is the first scenario which changes the mechanics of the new gameplay elements introduced in Seafarers. The Pirate Island had previously been available as a downloadable scenario (only in German) suitable for the older editions. In this scenario, players begin with a pre-placed settlement on a main island. Ships may only be built in a single line, which must pass through a fixed waypoint (different for each player) en route to a pirate fortress (each player has their own pirate fortress). Once ships connect to the pirate fortress, the player may attempt to attack the fortress once per turn. Ships may be lost if the attack is unsuccessful, but after three successful attacks, the pirate fortress is converted into a settlement. Players must convert their pirate fortresses and have 10 victory points before being able to claim victory. Furthermore, the pirate mechanics have also changed: the pirate moves through the middle of the map in a fixed path every turn, and attacks the owner of any nearby settlements. Players win resources if they are able to fend off the pirate attack (which depends on the number rolled by the dice, as well as the number of warships in the defending player's possession; warships are created by using Knight cards on existing ships), but lose resources if they are unsuccessful. Maritime expansion is only permitted by building a settlement at the waypoint; however, this increases the chances of a pirate attack. The Wonders of Catan was a downloadable scenario for older editions of Seafarers in both German and English, and was incorporated into Seafarers in newer editions. In this scenario, there are a number of "wonders", each with a large building cost as well as a prerequisite. If a player meets the prerequisite for a wonder, they may claim the wonder for themselves. A player may only claim one wonder, and each wonder may only be claimed by one player. Wonders must be built in four parts, and each wonder has a different build cost.
The winner is the first player to complete their wonder, or the first player to have 10 victory points and more parts of their wonder complete than any other player. The Great Crossing was a scenario in the older editions of Seafarers, which has been dropped in newer editions. The map is divided into two islands, Catan and Transcatania. Players begin with both settlements on one of the islands, and must build ships connecting settlements between the two islands. Players earn victory points for connecting their settlements with settlements (not necessarily theirs) from the opposite island using ships, or to another player's shipping lines which connect two settlements together. Greater Catan was a scenario included in the older editions of Seafarers but is not included in newer editions. Due to the sheer amount of equipment needed, two copies of Settlers and Seafarers are required to set up this scenario. The map consists of a standard Settlers island, along with a smaller chain of outlying islands. Only the main island initially has number tokens: number tokens are assigned to the outlying islands as they are expanded. However, the supply of number tokens is smaller than the number of hexes in the scenario: when the number tokens run out and players expand into a new part of the outlying islands, number tokens are moved from the main island to the outlying islands. Hexes on the main island for which there are no number tokens do not produce resources, but number tokens are moved in such a way as to avoid rendering a city unproductive; furthermore, whenever possible, number tokens must be reassigned from hexes bordering a player's own settlements and cities, so that a player cannot harm another player's economy without harming their own at the same time. New World is a scenario that encompasses all other scenarios that may be created from the parts of Settlers and Seafarers. This scenario uses an entirely random map, and players are encouraged to try to create a tile layout that plays well. The only difference between the versions in Seafarers, the extension, and the older editions thereof is the size of the frames. The Seafarers of Catan was reviewed in the online second volume of Pyramid.
[ { "paragraph_id": 0, "text": "Catan: Seafarers, or Seafarers of Catan in older editions, (German: Die Seefahrer von Catan) is an expansion of the board game Catan for three to four players (five-to-six-player play is also possible with both of the respective five-to-six-player extensions). The main feature of this expansion is the addition of ships, gold rivers, and the pirate to the game, allowing play between multiple islands. The expansion also provides numerous scenarios, some of which have custom rules. The Seafarers rules and scenarios are also, for the most part, compatible with Catan: Cities & Knights and Catan: Traders & Barbarians.", "title": "" }, { "paragraph_id": 1, "text": "The concepts introduced in Seafarers were part of designer Klaus Teuber's original design for Settlers.", "title": "" }, { "paragraph_id": 2, "text": "Seafarers introduces the concept of ships, which serve as roads over water or along the coast. Each ship costs one lumber and one wool to create (lumber for the hull and wool for the sails). A settlement must first be built before a player can switch from building roads to building ships, or vice versa. Thus, a chain of ships is always anchored at a settlement on the coast. A shipping line that is not anchored at both ends by different settlements can also move the last ship at the open end, although this can only be done once per turn and may not be done with any ships that were created on the same turn.", "title": "Ships" }, { "paragraph_id": 3, "text": "The \"Longest Road\" card is now renamed the \"Longest Trade Route\" since this is now calculated by counting the number of contiguous ships plus roads that a player has. A settlement or city is necessary between a road and a ship for the two to be considered continuous for the purposes of this card.", "title": "Ships" }, { "paragraph_id": 4, "text": "The Road Building card allows a player to build 2 roads, 2 ships, or one of each when used.", "title": "Ships" }, { "paragraph_id": 5, "text": "Along with the concept of ships, Seafarers also introduces the notion of the pirate, which acts as a waterborne robber which steals from nearby ships (similar to how the robber steals from nearby settlements). The pirate can also prevent ships from being built or moved nearby, but it does not interfere with harbors. The pirate does not prevent settlements from being built", "title": "Ships" }, { "paragraph_id": 6, "text": "When a seven is rolled or a Knight card is played, the player may move either the robber OR the pirate.", "title": "Ships" }, { "paragraph_id": 7, "text": "Seafarers also introduces the \"Gold River\" or \"Gold Field\" terrain, which grants nearby players one resource of their choice for every settlement adjacent to a gold tile and 2 resources for every city. Since being able to choose any resource type allows more building power, gold rivers are often either marked with number token of only 2 or 3 dots and/or are far away from starting positions to offset this.", "title": "Gold Rivers" }, { "paragraph_id": 8, "text": "When combined with Cities & Knights, the rules state that you are not allowed to take commodities instead of resources if a city is nearby.", "title": "Gold Rivers" }, { "paragraph_id": 9, "text": "Some scenarios have extra rules encompassing the concept of exploration, which is done by having the hex tiles placed face down. 
Should a player build next to unexplored terrain, the terrain tile is turned face up, and the player is rewarded with a resource should the tile revealed be resource-producing. In other scenarios, the board is divided into islands, and if the player builds a settlement on an island other than the ones they begin on, the settlement is worth extra victory points.", "title": "Exploration" }, { "paragraph_id": 10, "text": "The Cities and Knights manual recommends that players not use the Cities & Knights rules in scenarios where exploration is a factor.", "title": "Exploration" }, { "paragraph_id": 11, "text": "Unlike The Settlers of Catan and Catan: Cities & Knights, in which the only random element of setup is the placement of land tiles, number tokens, and harbors in an identically-shaped playing area, Catan: Seafarers has a number of different scenarios or maps from which to choose. Each map uses a different selection of tiles laid out in a specific pattern, which may not use all of the tiles. Other attributes also set each map apart, for example, restrictions on the placement of initial settlements, whether tiles are distributed randomly, the number of victory points needed to win, and special victory point awards, usually for building on islands across the sea.", "title": "Scenarios" }, { "paragraph_id": 12, "text": "Seafarers provides scenarios for three or four players (the older fourth edition used the same maps for three- and four-player versions of the scenarios), while the extension provides scenarios for six players (the older third edition also included separate maps for five- and six-player scenarios). The scenarios between the older editions of Seafarers and the newest are generally incompatible, knowing the different frames included with the game. (In particular, older editions of Settlers did not come with a frame for their board; a separate add-on was made available for players of the older-edition Settlers games, containing the newer edition frames, so as to make them compatible with the newer edition of Seafarers; the older edition of Seafarers included a square frame, and while both older and newer editions of the frames have the same width across, the newer editions are not square-shaped, and are longer down the middle of the board compared to the sides.)", "title": "Scenarios" }, { "paragraph_id": 13, "text": "Heading to New Shores (New Shores in older editions) is the scenario resembling Teuber's original design for the game. The game board consists of the main Settlers island as well as a few smaller islands, which award a special victory point to each player for their first settlements on them. This scenario is meant for players new to Seafarers, with elements of Seafarers incorporated into the more familiar main board.", "title": "Scenarios" }, { "paragraph_id": 14, "text": "The Four Islands is the first scenario introduced where new mechanics introduced to Seafarers is brought into the forefront. In this scenario, the map is split up into four islands of roughly equal size and resource distribution. (The six-player version found in the extension has the map split into six islands; the scenario is titled The Six Islands, but is played identically. Older editions of the extension had a five-player version with five islands, called The Five Islands.) 
Players may claim up to two of the islands as their home islands, and settling on any of the other islands awards a special victory point.", "title": "Scenarios" }, { "paragraph_id": 15, "text": "The Fog Island (Oceans in older editions) is the first scenario where exploration is used. The board starts off with a portion of the map left blank: when players expand into the blank region, terrain hexes are drawn at random from a supply and placed in the empty space, and, if a land hex is \"discovered\", a number token may be assigned. As a reward for discovering land, the player making the discovery is rewarded with a bonus resource card corresponding to the type of land hex discovered.", "title": "Scenarios" }, { "paragraph_id": 16, "text": "Through the Desert (Into the Desert in older edition) is similar to The Four Islands, but consists of a large continent and smaller outlying islands. On the large island, there exists a \"wall of deserts\" that separates the island into a large main area and separate smaller strips of land. As the name of the scenario implies, expanding through the desert into these smaller strips of land, or by sea to the outlying islands, award bonus victory points.", "title": "Scenarios" }, { "paragraph_id": 17, "text": "The Forgotten Tribe, originally titled Friendly Neighbors, was a downloadable scenario (but only in the German language) which was incorporated into newer editions of Seafarers.", "title": "Scenarios" }, { "paragraph_id": 18, "text": "The map consists of a main island and smaller outlying islands, where the namesake forgotten tribe resides. Players may not expand into the outlying islands, but by building ships so that they border the outlying islands, players may be awarded with victory points, development cards, or harbors that players may place on the coast of the main island at a later time.", "title": "Scenarios" }, { "paragraph_id": 19, "text": "Introduced in the newer editions, Cloth for Catan continues the adventures with the Forgotten Tribe. The scenario was previously available for older editions as a downloadable scenario (but only in German), titled Coffee for Catan.", "title": "Scenarios" }, { "paragraph_id": 20, "text": "Players begin with settlements on the outside of the map, but may build ships to reach the Forgotten Tribe's islands, which are in the center. By connecting to the Forgotten Tribe's settlements (represented by number tokens), players may earn cloth tokens when the number token for the Forgotten Tribe's villages are rolled. Cloth tokens, in turn, are worth one victory point for each pair obtained.", "title": "Scenarios" }, { "paragraph_id": 21, "text": "The Pirate Island, introduced in newer editions, is the first scenario which changes the mechanics of new gameplay elements introduced in Seafarers. The Pirate Island had previously been available as a downloadable scenario (but only in German) suitable for the older editions.", "title": "Scenarios" }, { "paragraph_id": 22, "text": "In this scenario, players begin with a pre-placed settlement on a main island. Ships may only be built in one single line, which must pass through a fixed waypoint (different for each player) en route to a pirate fortress (each player has their own pirate fortress). Once ships connect to the pirate fortress, they may attempt to attack the pirate fortress once per turn. Ships may be lost if the attack is unsuccessful, but after three successful attacks, the pirate fortress is converted into a settlement. 
Players must convert their pirate fortresses and have 10 victory points before being able to claim victory.", "title": "Scenarios" }, { "paragraph_id": 23, "text": "Furthermore, the pirate mechanics have also changed: the pirate moves through the middle of the map in a fixed path every turn, and attacks the owner of any nearby settlements. Players win resources if they are able to fend off the pirate attack (which depends on the number rolled by the dice, as well as the number of warships in the defending player's possession; warships are created from using Knight cards on existing ships), but lose resources if they are unsuccessful. Maritime expansion is only permitted by building a settlement at the waypoint, however, this increases the chances of a pirate attack.", "title": "Scenarios" }, { "paragraph_id": 24, "text": "The Wonders of Catan was a downloadable scenario for older editions of Seafarers in both German and English, and was incorporated into Seafarers in newer editions.", "title": "Scenarios" }, { "paragraph_id": 25, "text": "In this scenario, there are a number of \"wonders\", each with a large cost of building as well as a prerequisite. If a player meets the prerequisite for a wonder, they may claim the wonder for themselves. A player may only claim one wonder, and each wonder may only be claimed by one player. Wonders must be built in four parts, and each wonder has a different build cost. The winner is the first player to complete their wonder, or the first player to have 10 victory points and have more parts of their wonder complete than any other player.", "title": "Scenarios" }, { "paragraph_id": 26, "text": "The Great Crossing was a scenario in the older editions of Seafarers, which has been dropped in newer editions. The map is divided into two islands, Catan and Transcatania. Players begin with both settlements on one of the islands, and must build ships connecting settlements between the two islands. Players earn victory points for connecting their settlements with settlements (not necessarily theirs) from the opposite island using ships, or to another player's shipping lines which connect two settlements together.", "title": "Scenarios" }, { "paragraph_id": 27, "text": "Greater Catan was a scenario included in the older editions of Seafarers but is not included in newer editions. Due to the sheer amounts of equipment needed, two copies of Settlers and Seafarers are required to set up this scenario. The map consists of a standard Settlers island, along with a smaller chain of outlying islands. Only the main island initially has number tokens: number tokens are assigned to the outlying islands as they are expanded. 
However, the supply of number tokens is smaller than the number of hexes in the scenario: when the number tokens run out and players expand into a new part of the outlying islands, number tokens are moved from the main island to the outlying islands.", "title": "Scenarios" }, { "paragraph_id": 28, "text": "Hexes on the main island for which there are no number tokens do not produce resources, but number tokens are moved in such a way so as to avoid rendering a city unproductive; furthermore, whenever possible number tokens must be reassigned from hexes bordering a player's own settlements and cities, so as to prevent harming another player's economy without harming a player's own economy at the same time.", "title": "Scenarios" }, { "paragraph_id": 29, "text": "New World is a scenario that blankets all other scenarios that may be created from the parts of Settlers and Seafarers. This scenario uses an entirely random map, and players are encouraged to try and create a tile layout that plays well. The only difference between versions in Seafarers, the extension, and the older editions therein is the size of the frames.", "title": "Scenarios" }, { "paragraph_id": 30, "text": "The Seafarers of Catan was reviewed in the online second volume of Pyramid.", "title": "Reception" } ]
Catan: Seafarers, or Seafarers of Catan in older editions, is an expansion of the board game Catan for three to four players. The main feature of this expansion is the addition of ships, gold rivers, and the pirate to the game, allowing play between multiple islands. The expansion also provides numerous scenarios, some of which have custom rules. The Seafarers rules and scenarios are also, for the most part, compatible with Catan: Cities & Knights and Catan: Traders & Barbarians. The concepts introduced in Seafarers were part of designer Klaus Teuber's original design for Settlers.
2002-01-05T23:40:21Z
2023-10-27T20:29:40Z
[ "Template:Cite web", "Template:Catan navbox", "Template:Short description", "Template:Italic title", "Template:Lang-de", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Catan:_Seafarers
9,087
Dynamical system
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it. At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives". To predict the system's future behavior, one either finds an analytical solution of such equations or integrates them over time through computer simulation. The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept. The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit. Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system. For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because: Many people regard French mathematician Henri Poincaré as the founder of dynamical systems.
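The iteration procedure described above can be made concrete with a minimal sketch. The following Python example (an illustration, not part of the article) uses the logistic map as an arbitrary choice of evolution rule and iterates it step by step to trace a finite piece of an orbit:

def logistic(x, r=3.9):
    # One application of the evolution rule; r = 3.9 is an arbitrary parameter.
    return r * x * (1.0 - x)

def orbit(x0, steps, rule=logistic):
    # Iterate the rule from an initial point, collecting the trajectory so far.
    points = [x0]
    for _ in range(steps):
        points.append(rule(points[-1]))
    return points

print(orbit(0.2, 10))  # the first 11 points of the orbit through 0.2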
Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state. Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system. In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics. Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others. Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period. In the late 20th century the dynamical system perspective on partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft. In the most general sense, a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function Φ : U ⊆ T × X → X such that, for any x in X, Φ(0, x) = x and Φ(t2, Φ(t1, x)) = Φ(t2 + t1, x) for t1, t2 + t1 ∈ I(x) and t2 ∈ I(Φ(t1, x)), where we have defined the set I(x) := { t ∈ T : (t, x) ∈ U } for any x in X. In particular, in the case that U = T × X we have for every x in X that I(x) = T and thus that Φ defines a monoid action of T on X. The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system. We often write Φx(t) := Φ(t, x) if we take one of the variables as constant. The function Φx : I(x) → X is called the flow through x and its graph is called the trajectory through x. The set γx := { Φ(t, x) : t ∈ I(x) } is called the orbit through x.
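The two defining identities of the evolution function can be checked numerically on a concrete flow. As a sketch (assuming the simple one-dimensional system dx/dt = x, an illustrative choice not discussed in the article), the evolution function is Φ(t, x) = x·exp(t), and both Φ(0, x) = x and the composition law Φ(t2, Φ(t1, x)) = Φ(t2 + t1, x) hold:

import math

def Phi(t, x):
    # Evolution function of the flow generated by dx/dt = x.
    return x * math.exp(t)

x, t1, t2 = 0.7, 0.3, 1.1
assert Phi(0.0, x) == x                                    # identity at time 0
assert math.isclose(Phi(t2, Phi(t1, x)), Phi(t2 + t1, x))  # monoid action of T on X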
Note that the orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T, Φ(t, x) ∈ S. Thus, in particular, if S is Φ-invariant, I(x) = T for all x in S. That is, the flow through x must be defined for all time for every element of S. More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor. In the geometrical definition, a dynamical system is the tuple ⟨T, M, f⟩. T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T. A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to R^n, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. Note that this does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow. A discrete dynamical system, or discrete-time dynamical system, is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade. A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such, cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice. Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems is defined over multiple independent variables; such systems are called multidimensional systems. Such systems are useful for modeling, for example, image processing. Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).
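As a sketch of the cellular automaton definition above (an illustration with arbitrary choices, not taken from the article), the following snippet evolves a one-dimensional automaton whose local rule replaces each cell with the XOR of its two neighbours (elementary Rule 90) on a small periodic window:

def step(cells):
    # Locally defined evolution function: each cell becomes the XOR of its neighbours.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

state = [0] * 31
state[15] = 1  # a single live cell in the middle of the window
for _ in range(15):
    print("".join(".#"[c] for c in state))
    state = step(state)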
In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected. A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ^−1(σ) ∈ Σ. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ(Φ^−1(σ)) = μ(σ). Combining the above, a map Φ is said to be a measure-preserving transformation of X if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system. The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φ^n = Φ ∘ Φ ∘ ⋯ ∘ Φ for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated. The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance. Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution. For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems. The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system.
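A quick empirical sketch of measure preservation (assuming the doubling map x → 2x mod 1, a standard example not named in the article, which preserves Lebesgue measure on [0, 1)): pushing a large uniform sample through the map should leave the fraction of points falling in any test interval essentially unchanged:

import random

random.seed(0)
sample = [random.random() for _ in range(100_000)]  # approximately Lebesgue-distributed
image = [(2.0 * x) % 1.0 for x in sample]           # one step of the doubling map

a, b = 0.25, 0.40                                   # an arbitrary test interval

def fraction_in_interval(points):
    # Empirical measure of [a, b) under the given sample.
    return sum(a <= x < b for x in points) / len(points)

print(fraction_in_interval(sample), fraction_in_interval(image))  # both near b - a = 0.15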
For example, consider an initial value problem such as the following: ẋ = v(t, x), with an initial condition x(0) = x0, where v(t, x) is a vector field. There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions. Depending on the properties of this vector field, the mechanical system is called autonomous, when v(t, x) = v(x), or homogeneous, when v(t, 0) = 0 for all t. The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above: x(t) = Φ(t, x0). The dynamical system is then (T, M, Φ). Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy, where 𝔊 : (T × M)^M → C is a functional from the set of evolution functions to the field of the complex numbers. This equation is useful when modeling mechanical systems with complicated constraints. Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations. Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t). For a flow, the vector field v(x) is an affine function of the position in the phase space, that is, v(x) = Ax + b, with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b: Φ^t(x1) = x1 + bt. When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0, Φ^t(x0) = exp(At) x0. When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge to or diverge from the equilibrium point at the origin. The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior. A discrete-time, affine dynamical system has the form of a matrix difference equation: x_{n+1} = A x_n + b, with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^−1 b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^n x0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map. As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space.
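The formula Φ^t(x0) = exp(At) x0 for the continuous linear flow described above can be evaluated directly with a matrix-exponential routine. A minimal sketch (the matrix A and the initial point below are arbitrary illustrative choices; A has eigenvalues −1 and −3, so every orbit converges to the origin):

import numpy as np
from scipy.linalg import expm

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])   # eigenvalues -1 and -3: a stable equilibrium at the origin
x0 = np.array([1.0, 0.5])

for t in (0.0, 0.5, 1.0, 2.0):
    # x(t) = exp(A t) x0, the orbit of the initial point under the linear flow
    print(t, expm(A * t) @ x0)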
For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along αu1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point. There are also many other discrete dynamical systems. The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter the fact that it is a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible. A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem. The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the orbit then loops around phase space in a different way the next time, it is impossible to rectify the vector field in the whole series of patches. In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points form a Poincaré section S(γ, x0) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0. The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x^2), so a change of coordinates h can only be expected to simplify F to its linear part: h ∘ F ∘ h^−1(x) = J · x. This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others.
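The role of the fixed point and the eigenvalues of A in the discrete affine system above can be seen numerically. A sketch (with an arbitrary choice of A and b; both eigenvalues of A lie inside the unit circle, so the orbit converges to the fixed point (1 − A)^−1 b):

import numpy as np

A = np.array([[0.5, 0.1],
              [0.0, 0.8]])    # eigenvalues 0.5 and 0.8, both inside the unit circle
b = np.array([1.0, 2.0])

fixed = np.linalg.solve(np.eye(2) - A, b)   # the fixed point (1 - A)^(-1) b

x = np.array([5.0, -3.0])
for _ in range(50):
    x = A @ x + b                           # iterate x_{n+1} = A x_n + b
print(x, fixed)                             # the orbit has settled onto the fixed point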
As terms of the form λi − Σ (integer multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem. The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not on the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic. In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will be reflected in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic. The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point. When the evolution map Φ (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation. Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems. The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory. Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations. In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ^t(A), and invariance of the phase space volume means that vol(A) = vol(Φ^t(A)). In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow.
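Tracking an eigenvalue of DFμ as it crosses the unit circle makes the bifurcation criterion above concrete. A sketch using the logistic family Fμ(x) = μx(1 − x) (an illustrative choice, not from the article): the non-trivial fixed point is x* = 1 − 1/μ, where the scalar derivative is 2 − μ, which exits the unit circle through −1 at μ = 3, the first period-doubling bifurcation:

def derivative_at_fixed_point(mu):
    # For F_mu(x) = mu*x*(1 - x), the fixed point is x* = 1 - 1/mu and
    # the derivative there is F_mu'(x*) = mu*(1 - 2*x*) = 2 - mu.
    x_star = 1.0 - 1.0 / mu
    return mu * (1.0 - 2.0 * x_star)

for mu in (2.5, 2.9, 3.0, 3.1):
    print(mu, derivative_at_fixed_point(mu))  # crosses -1 exactly at mu = 3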
The volume is said to be computed by the Liouville measure. In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution. For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms. One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω). The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that associates a number (say instantaneous pressure, or average height) to each point of the phase space. The value of an observable can be computed at another time by using the evolution function φ. This introduces an operator U, the transfer operator, (U^t a)(x) = a(Φ^t(x)). By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ gets mapped into an infinite-dimensional linear problem involving U. The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems. Simple nonlinear dynamical systems and even piecewise linear systems can exhibit a completely unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This seemingly unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent space perpendicular to a trajectory can be well separated into two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold). This branch of mathematics deals with the long-term qualitative behavior of dynamical systems.
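The ergodic hypothesis can be illustrated numerically. A sketch (assuming the irrational circle rotation x → x + α mod 1, a standard ergodic example not named in the article): the fraction of time a typical orbit spends in a region A approaches vol(A)/vol(Ω):

import math

alpha = (math.sqrt(5.0) - 1.0) / 2.0   # an irrational rotation number (golden ratio)
x, hits, steps = 0.123, 0, 100_000
for _ in range(steps):
    hits += x < 0.3                    # time spent in the region A = [0, 0.3)
    x = (x + alpha) % 1.0              # one step of the rotation
print(hits / steps)                    # close to vol(A) = 0.3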
Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather on answering questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?" Note that the chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The logistic map is only a second-degree polynomial; the horseshoe map is piecewise linear. For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning that, through its own dynamics, the system reaches the value zero at an ending time and stays at zero forever after. These finite-duration solutions cannot be analytical functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not unique solutions of Lipschitz differential equations. As an example, the equation y′ = −sgn(y)·√|y|, with y(0) = 1, admits the finite-duration solution y(t) = (1/4)·(1 − t/2 + |1 − t/2|)^2.
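The finite-duration example can be checked numerically. A sketch (assuming the example equation and initial condition given above): the proposed solution satisfies the equation before the ending time t = 2 and is exactly zero from then on:

import math

def y(t):
    # Proposed finite-duration solution with y(0) = 1.
    return 0.25 * (1.0 - t / 2.0 + abs(1.0 - t / 2.0)) ** 2

h = 1e-6
for t in (0.5, 1.0, 1.5):
    lhs = (y(t + h) - y(t - h)) / (2.0 * h)               # numerical derivative y'(t)
    rhs = -math.copysign(1.0, y(t)) * math.sqrt(abs(y(t)))
    print(t, lhs, rhs)                                    # the two sides agree closely
print(y(2.0), y(3.0))                                     # exactly 0.0 at and after t = 2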
[ { "paragraph_id": 0, "text": "In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.", "title": "" }, { "paragraph_id": 1, "text": "At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables.", "title": "" }, { "paragraph_id": 2, "text": "In physics, a dynamical system is described as a \"particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives\". In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized.", "title": "" }, { "paragraph_id": 3, "text": "The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.", "title": "" }, { "paragraph_id": 4, "text": "The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.", "title": "Overview" }, { "paragraph_id": 5, "text": "Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. 
Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.", "title": "Overview" }, { "paragraph_id": 6, "text": "For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:", "title": "Overview" }, { "paragraph_id": 7, "text": "Many people regard French mathematician Henri Poincaré as the founder of dynamical systems. Poincaré published two now classical monographs, \"New Methods of Celestial Mechanics\" (1892–1899) and \"Lectures on Celestial Mechanics\" (1905–1910). In them, he successfully applied the results of their research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.", "title": "History" }, { "paragraph_id": 8, "text": "Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.", "title": "History" }, { "paragraph_id": 9, "text": "In 1913, George David Birkhoff proved Poincaré's \"Last Geometric Theorem\", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.", "title": "History" }, { "paragraph_id": 10, "text": "Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.", "title": "History" }, { "paragraph_id": 11, "text": "Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.", "title": "History" }, { "paragraph_id": 12, "text": "In the late 20th century the dynamical system perspective to partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems. 
His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.", "title": "History" }, { "paragraph_id": 13, "text": "In the most general sense, a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function", "title": "Formal definition" }, { "paragraph_id": 14, "text": "with", "title": "Formal definition" }, { "paragraph_id": 15, "text": "and for any x in X:", "title": "Formal definition" }, { "paragraph_id": 16, "text": "for t 1 , t 2 + t 1 ∈ I ( x ) {\\displaystyle \\,t_{1},\\,t_{2}+t_{1}\\in I(x)} and t 2 ∈ I ( Φ ( t 1 , x ) ) {\\displaystyle \\ t_{2}\\in I(\\Phi (t_{1},x))} , where we have defined the set I ( x ) := { t ∈ T : ( t , x ) ∈ U } {\\displaystyle I(x):=\\{t\\in T:(t,x)\\in U\\}} for any x in X.", "title": "Formal definition" }, { "paragraph_id": 17, "text": "In particular, in the case that U = T × X {\\displaystyle U=T\\times X} we have for every x in X that I ( x ) = T {\\displaystyle I(x)=T} and thus that Φ defines a monoid action of T on X.", "title": "Formal definition" }, { "paragraph_id": 18, "text": "The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.", "title": "Formal definition" }, { "paragraph_id": 19, "text": "We often write", "title": "Formal definition" }, { "paragraph_id": 20, "text": "if we take one of the variables as constant. The function", "title": "Formal definition" }, { "paragraph_id": 21, "text": "is called the flow through x and its graph is called the trajectory through x. The set", "title": "Formal definition" }, { "paragraph_id": 22, "text": "is called the orbit through x. Note that the orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T", "title": "Formal definition" }, { "paragraph_id": 23, "text": "Thus, in particular, if S is Φ-invariant, I ( x ) = T {\\displaystyle I(x)=T} for all x in S. That is, the flow through x must be defined for all time for every element of S.", "title": "Formal definition" }, { "paragraph_id": 24, "text": "More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.", "title": "Formal definition" }, { "paragraph_id": 25, "text": "In the geometrical definition, a dynamical system is the tuple ⟨ T , M , f ⟩ {\\displaystyle \\langle {\\mathcal {T}},{\\mathcal {M}},f\\rangle } . T {\\displaystyle {\\mathcal {T}}} is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M {\\displaystyle {\\mathcal {M}}} is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f (with t ∈ T {\\displaystyle t\\in {\\mathcal {T}}} ) such that f is a diffeomorphism of the manifold to itself. So, f is a \"smooth\" mapping of the time-domain T {\\displaystyle {\\mathcal {T}}} into the space of diffeomorphisms of the manifold to itself. 
In other terms, f(t) is a diffeomorphism, for every time t in the domain T {\\displaystyle {\\mathcal {T}}} .", "title": "Formal definition" }, { "paragraph_id": 26, "text": "A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to R, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. Note that this does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.", "title": "Formal definition" }, { "paragraph_id": 27, "text": "A discrete dynamical system, discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.", "title": "Formal definition" }, { "paragraph_id": 28, "text": "A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the \"space\" lattice, while the one in T represents the \"time\" lattice.", "title": "Formal definition" }, { "paragraph_id": 29, "text": "Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.", "title": "Formal definition" }, { "paragraph_id": 30, "text": "Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).", "title": "Formal definition" }, { "paragraph_id": 31, "text": "In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected.", "title": "Formal definition" }, { "paragraph_id": 32, "text": "A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has Φ − 1 σ ∈ Σ {\\displaystyle \\Phi ^{-1}\\sigma \\in \\Sigma } . A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has μ ( Φ − 1 σ ) = μ ( σ ) {\\displaystyle \\mu (\\Phi ^{-1}\\sigma )=\\mu (\\sigma )} . Combining the above, a map Φ is said to be a measure-preserving transformation of X , if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. 
The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.", "title": "Formal definition" }, { "paragraph_id": 33, "text": "The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates Φ n = Φ ∘ Φ ∘ ⋯ ∘ Φ {\\displaystyle \\Phi ^{n}=\\Phi \\circ \\Phi \\circ \\dots \\circ \\Phi } for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.", "title": "Formal definition" }, { "paragraph_id": 34, "text": "The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.", "title": "Formal definition" }, { "paragraph_id": 35, "text": "Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.", "title": "Formal definition" }, { "paragraph_id": 36, "text": "For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.", "title": "Formal definition" }, { "paragraph_id": 37, "text": "The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamic system. 
For example consider an initial value problem such as the following:", "title": "Construction of dynamical systems" }, { "paragraph_id": 38, "text": "where", "title": "Construction of dynamical systems" }, { "paragraph_id": 39, "text": "There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions.", "title": "Construction of dynamical systems" }, { "paragraph_id": 40, "text": "Depending on the properties of this vector field, the mechanical system is called", "title": "Construction of dynamical systems" }, { "paragraph_id": 41, "text": "The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above", "title": "Construction of dynamical systems" }, { "paragraph_id": 42, "text": "The dynamical system is then (T, M, Φ).", "title": "Construction of dynamical systems" }, { "paragraph_id": 43, "text": "Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy", "title": "Construction of dynamical systems" }, { "paragraph_id": 44, "text": "where G : ( T × M ) M → C {\\displaystyle {\\mathfrak {G}}:{{(T\\times M)}^{M}}\\to \\mathbf {C} } is a functional from the set of evolution functions to the field of the complex numbers.", "title": "Construction of dynamical systems" }, { "paragraph_id": 45, "text": "This equation is useful when modeling mechanical systems with complicated constraints.", "title": "Construction of dynamical systems" }, { "paragraph_id": 46, "text": "Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.", "title": "Construction of dynamical systems" }, { "paragraph_id": 47, "text": "Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).", "title": "Linear dynamical systems" }, { "paragraph_id": 48, "text": "For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,", "title": "Linear dynamical systems" }, { "paragraph_id": 49, "text": "with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:", "title": "Linear dynamical systems" }, { "paragraph_id": 50, "text": "When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,", "title": "Linear dynamical systems" }, { "paragraph_id": 51, "text": "When b = 0, the eigenvalues of A determine the structure of the phase space. 
From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.", "title": "Linear dynamical systems" }, { "paragraph_id": 52, "text": "The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.", "title": "Linear dynamical systems" }, { "paragraph_id": 53, "text": "A discrete-time, affine dynamical system has the form of a matrix difference equation:", "title": "Linear dynamical systems" }, { "paragraph_id": 54, "text": "with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system Ax0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.", "title": "Linear dynamical systems" }, { "paragraph_id": 55, "text": "As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight lines given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points in this straight line run into the fixed point.", "title": "Linear dynamical systems" }, { "paragraph_id": 56, "text": "There are also many other discrete dynamical systems.", "title": "Linear dynamical systems" }, { "paragraph_id": 57, "text": "The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.", "title": "Local dynamics" }, { "paragraph_id": 58, "text": "A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.", "title": "Local dynamics" }, { "paragraph_id": 59, "text": "The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. 
There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.", "title": "Local dynamics" }, { "paragraph_id": 60, "text": "In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0), of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.", "title": "Local dynamics" }, { "paragraph_id": 61, "text": "The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x), so a change of coordinates h can only be expected to simplify F to its linear part", "title": "Local dynamics" }, { "paragraph_id": 62, "text": "This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi – Σ (multiples of other eigenvalues) occurs in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.", "title": "Local dynamics" }, { "paragraph_id": 63, "text": "The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.", "title": "Local dynamics" }, { "paragraph_id": 64, "text": "In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. 
Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.", "title": "Local dynamics" }, { "paragraph_id": 65, "text": "The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.", "title": "Local dynamics" }, { "paragraph_id": 66, "text": "When the evolution map Φ (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.", "title": "Bifurcation theory" }, { "paragraph_id": 67, "text": "Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.", "title": "Bifurcation theory" }, { "paragraph_id": 68, "text": "The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.", "title": "Bifurcation theory" }, { "paragraph_id": 69, "text": "Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.", "title": "Bifurcation theory" }, { "paragraph_id": 70, "text": "In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ(A) and invariance of the phase space means that", "title": "Ergodic systems" }, { "paragraph_id": 71, "text": "In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.", "title": "Ergodic systems" }, { "paragraph_id": 72, "text": "In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. 
The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.", "title": "Ergodic systems" }, { "paragraph_id": 73, "text": "For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.", "title": "Ergodic systems" }, { "paragraph_id": 74, "text": "One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).", "title": "Ergodic systems" }, { "paragraph_id": 75, "text": "The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that associates a number (say instantaneous pressure, or average height) to each point of the phase space. The value of an observable can be computed at another time by using the evolution function φ. This introduces an operator U, the transfer operator, (Ua)(x) = a(Φ(x)).", "title": "Ergodic systems" }, { "paragraph_id": 76, "text": "By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ is mapped into an infinite-dimensional linear problem involving U.", "title": "Ergodic systems" }, { "paragraph_id": 77, "text": "The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.", "title": "Ergodic systems" }, { "paragraph_id": 78, "text": "Simple nonlinear dynamical systems and even piecewise linear systems can exhibit completely unpredictable behavior, which might seem random, despite the fact that they are fundamentally deterministic. This seemingly unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent space perpendicular to a trajectory can be well separated into two parts: one with the points that converge towards the orbit (the stable manifold) and another with the points that diverge from the orbit (the unstable manifold).", "title": "Ergodic systems" }, { "paragraph_id": 79, "text": "This branch of mathematics deals with the long-term qualitative behavior of dynamical systems.
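A small sketch of the ergodic hypothesis stated above (an illustration, not from the article; the rotation map, the region A, and all names are assumptions for the example). For an irrational rotation of the circle, the fraction of time a typical trajectory spends in a region A approaches vol(A)/vol(Ω), here 0.3.

import math

alpha = math.sqrt(2) - 1.0   # irrational rotation number
x = 0.123                    # arbitrary initial condition
n_steps = 1_000_000
time_in_A = 0

for _ in range(n_steps):
    x = (x + alpha) % 1.0    # the evolution map: rotation of the circle
    if x < 0.3:              # the region A = [0, 0.3)
        time_in_A += 1

print("time average :", time_in_A / n_steps)  # close to 0.3
print("space average:", 0.3)                  # vol(A)/vol(Omega)

Replacing the indicator of A by any observable a(x) gives the Koopman picture in miniature: averaging a along the trajectory, that is, averaging the iterates of U applied to a, agrees with the space average of a.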
Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather on answering questions like \"Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?\" or \"Does the long-term behavior of the system depend on its initial condition?\"", "title": "Ergodic systems" }, { "paragraph_id": 80, "text": "Note that the chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The logistic map is only a second-degree polynomial; the horseshoe map is piecewise linear.", "title": "Ergodic systems" }, { "paragraph_id": 81, "text": "For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration, meaning here that, through its own dynamics, the system will reach the value zero at an ending time and stay at zero forever after. These finite-duration solutions cannot be analytic functions on the whole real line, and because they are non-Lipschitz functions at their ending time, they are not unique solutions of Lipschitz differential equations.", "title": "Ergodic systems" }, { "paragraph_id": 82, "text": "As an example, the equation:", "title": "Ergodic systems" }, { "paragraph_id": 83, "text": "Admits the finite-duration solution:", "title": "Ergodic systems" }, { "paragraph_id": 84, "text": "Works providing a broad coverage:", "title": "Further reading" }, { "paragraph_id": 85, "text": "Introductory texts with a unique perspective:", "title": "Further reading" }, { "paragraph_id": 86, "text": "Textbooks", "title": "Further reading" }, { "paragraph_id": 87, "text": "Popularizations:", "title": "Further reading" } ]
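The displayed equation and its solution are not preserved in this extraction, so the sketch below uses a standard example of a finite-duration solution (an assumption, not necessarily the article's own equation): the non-Lipschitz ODE y′ = −√y with y(0) = 1 has the solution y(t) = (1 − t/2)² for t ≤ 2 and y(t) = 0 afterwards, so the state reaches zero at the ending time t = 2 and stays there.

import math

def y_exact(t):
    """Closed-form finite-duration solution of y' = -sqrt(y), y(0) = 1."""
    return (1.0 - t / 2.0) ** 2 if t < 2.0 else 0.0

# Forward-Euler integration, clipped at zero, reproduces the ending time.
y, t, dt = 1.0, 0.0, 1e-4
while y > 0.0:
    y = max(y - dt * math.sqrt(y), 0.0)
    t += dt

print(f"numerical ending time ~ {t:.3f} (exact value: 2.0)")
print("y_exact(3.0) =", y_exact(3.0), "(the solution stays at zero after t = 2)")

The right-hand side −√y fails to be Lipschitz at y = 0, which is exactly why the solution can reach zero in finite time and why uniqueness fails there, consistent with the paragraph above.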
In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured. Time can be measured by integers, by real or complex numbers, or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need for a smooth space-time structure defined on it. At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state. However, some systems are stochastic, in that random events also affect the evolution of the state variables. In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives". To predict the system's future behavior, such equations are either solved analytically or integrated over time through computer simulation. The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics, biology, chemistry, engineering, economics, history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.
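As a minimal illustration of the two kinds of evolution rule just described (an assumed example, not from the article): a deterministic rule assigns exactly one future state to each current state, while a stochastic rule lets a random event perturb the next state.

import random

def deterministic_step(x):
    """Deterministic evolution rule: exactly one successor per state."""
    return 3.7 * x * (1.0 - x)

def stochastic_step(x):
    """Stochastic variant: a random event also affects the next state."""
    x_next = 3.7 * x * (1.0 - x) + random.gauss(0.0, 1e-3)
    return min(max(x_next, 0.0), 1.0)   # keep the state inside [0, 1]

x_det = x_sto = 0.4
for step in range(5):
    x_det = deterministic_step(x_det)
    x_sto = stochastic_step(x_sto)
    print(f"step {step + 1}: deterministic = {x_det:.6f}, stochastic = {x_sto:.6f}")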
2002-01-06T12:01:23Z
2023-10-01T15:04:10Z
[ "Template:Chaos theory", "Template:About", "Template:Redirect", "Template:Div col end", "Template:Commonscat", "Template:Authority control", "Template:Citation needed", "Template:Div col", "Template:Reflist", "Template:-", "Template:Portal", "Template:Cite book", "Template:Cite web", "Template:Cite journal", "Template:ISBN", "Template:Refbegin", "Template:ISSN", "Template:Short description", "Template:More footnotes needed", "Template:Main", "Template:Refend" ]
https://en.wikipedia.org/wiki/Dynamical_system
9,090
Dhimmi
Dhimmī (Arabic: ذمي ḏimmī, IPA: [ˈðimmiː], collectively أهل الذمة ʾahl aḏ-ḏimmah/dhimmah "the people of the covenant") or muʿāhid (معاهد) is a historical term for non-Muslims living in an Islamic state with legal protection. The word literally means "protected person", referring to the state's obligation under sharia to protect the individual's life, property, and freedom of religion, in exchange for loyalty to the state and payment of the jizya tax, in contrast to the zakat, or obligatory alms, paid by the Muslim subjects. Dhimmi were exempt from certain duties assigned specifically to Muslims if they paid the poll tax (jizya) but were otherwise equal under the laws of property, contract, and obligation. Historically, dhimmi status was originally applied to Jews, Christians, and Sabians, who are considered "People of the Book" in Islamic theology. Later, this status was also applied to Zoroastrians, Sikhs, Hindus, Jains, and Buddhists. Jews and Christians were required to pay the jizya while others, depending on the different rulings of the four Madhhabs, might be required to accept Islam, pay the jizya, be exiled, or be killed. During the rule of al-Mutawakkil, the tenth Abbasid Caliph, numerous restrictions reinforced the second-class citizen status of dhimmīs and forced their communities into ghettos. For instance, they were required to distinguish themselves from their Muslim neighbors by their dress. They were not permitted to build new churches or synagogues or repair old churches according to the Pact of Umar. Under Sharia, the dhimmi communities were usually governed by their own laws in place of some of the laws applicable to the Muslim community. For example, the Jewish community of Medina was allowed to have its own Halakhic courts, and the Ottoman millet system allowed its various dhimmi communities to rule themselves under separate legal courts. These courts did not cover cases that involved religious groups outside of their own communities, or capital offences. Dhimmi communities were also allowed to engage in certain practices that were usually forbidden for the Muslim community, such as the consumption of alcohol and pork. Some Muslims reject the dhimma system by arguing that it is a system which is inappropriate in the age of nation-states and democracies. There is a range of opinions among 20th-century and contemporary Islamic theologians about whether the notion of dhimma is appropriate for modern times, and, if so, what form it should take in an Islamic state. There are differences among the Islamic Madhhabs regarding which non-Muslims can pay jizya and have dhimmi status. The Hanafi and Maliki Madhhabs generally allow non-Muslims to have dhimmi status. In contrast, the Shafi'i and Hanbali Madhhabs only allow Christians, Jews and Zoroastrians to have dhimmi status, and they maintain that all other non-Muslims must either convert to Islam or be fought. Based on Quranic verses and Islamic traditions, sharia law distinguishes between Muslims, followers of other Abrahamic religions, and pagans or people belonging to other polytheistic religions. As monotheists, Jews and Christians have traditionally been considered "People of the Book", and afforded a special legal status known as dhimmi derived from a theoretical contract—"dhimma" or "residence in return for taxes".
Islamic legal systems based on sharia law incorporated the religious laws and courts of Christians, Jews, and Hindus, as seen in the early caliphate, al-Andalus, Indian subcontinent, and the Ottoman Millet system. In Yemenite Jewish sources, a treaty was drafted between Muhammad and his Jewish subjects, known as kitāb ḏimmat al-nabi, written in the 17th year of the Hijra (638 CE), which gave express liberty to the Jews living in Arabia to observe the Sabbath and to grow out their side-locks, but required them to pay the jizya (poll-tax) annually for their protection. Muslim governments in the Indus basin readily extended the dhimmi status to the Hindus and Buddhists of India. Eventually, the largest school of Islamic jurisprudence applied this term to all non-Muslims living in Muslim lands outside the sacred area surrounding Mecca, Arabia. In medieval Islamic societies, the qadi (Islamic judge) usually could not interfere in the matters of non-Muslims unless the parties voluntarily chose to be judged according to Islamic law, thus the dhimmi communities living in Islamic states usually had their own laws independent from the sharia law, as with the Jews who would have their own rabbinical courts. These courts did not cover cases that involved other religious groups, or capital offences or threats to public order. By the 18th century, however, dhimmi frequently attended the Ottoman Muslim courts, where cases were taken against them by Muslims, or they took cases against Muslims or other dhimmi. Oaths sworn by dhimmi in these courts were tailored to their beliefs. Non-Muslims were allowed to engage in certain practices (such as the consumption of alcohol and pork) that were usually forbidden by Islamic law; in point of fact, any Muslim who pours away their wine or forcibly appropriates it is liable to pay compensation. Some Islamic theologians held that Zoroastrian "self-marriages", considered incestuous under sharia, should also be tolerated. Ibn Qayyim Al-Jawziyya (1292–1350) opined that most scholars of the Hanbali school held that non-Muslims were entitled to such practices, as long as they were not presented to sharia courts and the religious minorities in question held them to be permissible. This ruling was based on the precedent that there were no records of the Islamic prophet Muhammad forbidding such self-marriages among Zoroastrians, despite coming into contact with Zoroastrians and knowing about this practice. Religious minorities were also free to do as they wished in their own homes, provided they did not publicly engage in illicit sexual activity in ways that could threaten public morals. There are parallels for this in Roman and Jewish law. According to law professor H. Patrick Glenn of McGill University, "[t]oday it is said that the dhimmi are 'excluded from the specifically Muslim privileges, but on the other hand they are excluded from the specifically Muslim duties' while (and here there are clear parallels with western public and private law treatment of aliens—Fremdenrecht, la condition de estrangers), '[f]or the rest, the Muslim and the dhimmi are equal in practically the whole of the law of property and of contracts and obligations'." Quoting the Qur'anic statement, "Let Christians judge according to what We have revealed in the Gospel", Muhammad Hamidullah writes that Islam decentralized and "communalized" law and justice. However, the classical dhimma contract is no longer enforced.
Western influence over the Muslim world has been instrumental in eliminating the restrictions and protections of the dhimma contract. The dhimma contract is an integral part of traditional Islamic law. From the 9th century AD, the power to interpret and refine law in traditional Islamic societies was in the hands of the scholars (ulama). This separation of powers served to limit the range of actions available to the ruler, who could not easily decree or reinterpret law independently and expect the continued support of the community. Through succeeding centuries and empires, the balance between the ulama and the rulers shifted and reformed, but the balance of power was never decisively changed. At the beginning of the 19th century, the Industrial Revolution and the French Revolution introduced an era of European world hegemony that included the domination of most of the Muslim lands. At the end of the Second World War, the European powers found themselves too weakened to maintain their empires. The wide variety in forms of government, systems of law, attitudes toward modernity and interpretations of sharia are a result of the ensuing drives for independence and modernity in the Muslim world. Muslim states, sects, schools of thought and individuals differ as to exactly what sharia law entails. In addition, Muslim states today utilize a spectrum of legal systems. Most states have a mixed system that implements certain aspects of sharia while acknowledging the supremacy of a constitution. A few, such as Turkey, have declared themselves secular. Local and customary laws may take precedence in certain matters, as well. Islamic law is therefore polynormative, and despite several cases of regression in recent years, the trend is towards liberalization. Questions of human rights and the status of minorities cannot be generalized with regard to the Muslim world. They must instead be examined on a case-by-case basis, within specific political and cultural contexts, using perspectives drawn from the historical framework. The status of the dhimmi "was for long accepted with resignation by the Christians and with gratitude by the Jews" but the rising power of Christendom and the radical ideas of the French Revolution caused a wave of discontent among Christian dhimmis. The continuing and growing pressure from the European powers combined with pressure from Muslim reformers gradually relaxed the inequalities between Muslims and non-Muslims. On 18 February 1856, the Ottoman Reform Edict of 1856 (Hatt-i Humayan) was issued, building upon the 1839 edict. It came about partly as a result of the pressure and efforts of the ambassadors of France, Austria and the United Kingdom, whose respective countries were needed as allies in the Crimean War. It again proclaimed the principle of equality between Muslims and non-Muslims, and produced many specific reforms to this end. For example, the jizya tax was abolished and non-Muslims were allowed to join the army. Jews and Christians living under early Muslim rule were considered dhimmis, a status that was later also extended to other non-Muslims like Hindus and Buddhists. They were allowed to "freely practice their religion, and to enjoy a large measure of communal autonomy" and guaranteed their personal safety and security of property, in return for paying tribute and acknowledging Muslim rule. Islamic law and custom prohibited the enslavement of free dhimmis within lands under Islamic rule.
Taxation, from the perspective of dhimmis who came under Muslim rule, was "a concrete continuation of the taxes paid to earlier regimes" (but much lower under Muslim rule). They were also exempted from the zakat tax paid by Muslims. The dhimmi communities living in Islamic states had their own laws independent from the Sharia law, such as the Jews who had their own Halakhic courts. The dhimmi communities had their own leaders, courts, personal and religious laws, and "generally speaking, Muslim tolerance of unbelievers was far better than anything available in Christendom, until the rise of secularism in the 17th century". "Muslims guaranteed freedom of worship and livelihood, provided that they remained loyal to the Muslim state and paid a poll tax". "Muslim governments appointed Christian and Jewish professionals to their bureaucracies", and thus, Christians and Jews "contributed to the making of the Islamic civilization". However, dhimmis faced social and symbolic restrictions, and a pattern of stricter, then more lax, enforcement developed over time. Marshall Hodgson, a historian of Islam, writes that during the era of the High Caliphate (7th–13th centuries), zealous Shariah-minded Muslims gladly elaborated their code of symbolic restrictions on the dhimmis. From an Islamic legal perspective, the pledge of protection granted dhimmis the freedom to practice their religion and spared them forced conversions. The dhimmis also served a variety of useful purposes, mostly economic, which was another point of concern to jurists. Religious minorities were free to do whatever they wished in their own homes, but could not "publicly engage in illicit sex in ways that threaten public morals". In some cases, religious practices that Muslims found repugnant were allowed. One example was the Zoroastrian practice of incestuous "self-marriage" where a man could marry his mother, sister or daughter. According to the famous Islamic legal scholar Ibn Qayyim Al-Jawziyya (1292–1350), non-Muslims had the right to engage in such religious practices even if it offended Muslims, under the conditions that such cases not be presented to Islamic Sharia courts and that these religious minorities believed that the practice in question is permissible according to their religion. This ruling was based on the precedent that Muhammad did not forbid such self-marriages among Zoroastrians despite coming in contact with them and having knowledge of their practices. The Arabs generally established garrisons outside towns in the conquered territories, and had little interaction with the local dhimmi populations for purposes other than the collection of taxes. The conquered Christian, Jewish, Mazdean and Buddhist communities were otherwise left to lead their lives as before. According to historians Lewis and Stillman, local Christians in Syria, Iraq, and Egypt were non-Chalcedonians and many may have felt better off under early Muslim rule than under that of the Byzantine Orthodox of Constantinople. In 1095, Pope Urban II urged western European Christians to come to the aid of the Christians of Palestine. The subsequent Crusades brought Roman Catholic Christians into contact with Orthodox Christians whose beliefs they discovered to differ from their own perhaps more than they had realized, and whose position under the rule of the Muslim Fatimid Caliphate was less uncomfortable than had been supposed. Consequently, the Eastern Christians provided perhaps less support to the Crusaders than had been expected.
When the Arab East came under Ottoman rule in the 16th century, Christian populations and fortunes rebounded significantly. The Ottomans had long experience dealing with Christian and Jewish minorities, and were more tolerant towards religious minorities than the former Muslim rulers, the Mamluks of Egypt. However, Christians living under Islamic rule have suffered certain legal disadvantages and at times persecution. In the Ottoman Empire, in accordance with the dhimmi system implemented in Muslim countries, they, like all other Christians and also Jews, were accorded certain freedoms. The dhimmi system in the Ottoman Empire was largely based upon the Pact of Umar. The client status established the rights of the non-Muslims to property, livelihood and freedom of worship but they were in essence treated as second-class citizens in the empire and referred to in Turkish as gavours, a pejorative word meaning "infidel" or "unbeliever". The clause of the Pact of Umar which prohibited non-Muslims from building new places of worship was historically imposed on some communities of the Ottoman Empire and ignored in other cases, at the discretion of the local authorities. Although there were no laws mandating religious ghettos, this led to non-Muslim communities being clustered around existing houses of worship. In addition to other legal limitations, dhimmis, including the Christians among them, were not considered equal to Muslims and several prohibitions were placed on them. Their testimony against Muslims was inadmissible in courts of law wherein a Muslim could be punished; this meant that their testimony could only be considered in commercial cases. They were forbidden to carry weapons or ride atop horses and camels. Their houses could not overlook those of Muslims; and their religious practices were severely circumscribed (e.g., the ringing of church bells was strictly forbidden). Because the early Islamic conquests initially preserved much of the existing administrative machinery and culture, in many territories they amounted to little more than a change of rulers for the subject populations, which "brought peace to peoples demoralized and disaffected by the casualties and heavy taxation that resulted from the years of Byzantine-Persian warfare". María Rosa Menocal argues that the Jewish dhimmis living under the caliphate, while allowed fewer rights than Muslims, were still better off than in the Christian parts of Europe. Jews from other parts of Europe made their way to al-Andalus, where in parallel to Christian sects regarded as heretical by Catholic Europe, they were not just tolerated, but where opportunities to practice faith and trade were open without restriction save for the prohibitions on proselytization. Bernard Lewis states: Generally, the Jewish people were allowed to practice their religion and live according to the laws and scriptures of their community. Furthermore, the restrictions to which they were subject were social and symbolic rather than tangible and practical in character. That is to say, these regulations served to define the relationship between the two communities, and not to oppress the Jewish population.
Professor of Jewish medieval history at Hebrew University of Jerusalem, Hayim Hillel Ben-Sasson, notes: The legal and security situation of the Jews in the Muslim world was generally better than in Christendom, because in the former, Jews were not the sole "infidels", because in comparison to the Christians, Jews were less dangerous and more loyal to the Muslim regime, and because the rapidity and the territorial scope of the Muslim conquests imposed upon them a reduction in persecution and a granting of better possibility for the survival of members of other faiths in their lands. According to the French historian Claude Cahen, Islam has "shown more toleration than Europe towards the Jews who remained in Muslim lands." Comparing the treatment of Jews in the medieval Islamic world and medieval Christian Europe, Mark R. Cohen notes that, in contrast to Jews in Christian Europe, the "Jews in Islam were well integrated into the economic life of the larger society", and that they were allowed to practice their religion more freely than they could do in Christian Europe. According to the scholar Mordechai Zaken, tribal chieftains (also known as aghas) in tribal Muslim societies such as the Kurdish society in Kurdistan would tax their Jewish subjects. The Jews were in fact civilians protected by their chieftains in and around their communities; in return they paid part of their harvest as dues, and contributed their skills and services to their patron chieftain. By the 10th century, the Turks of Central Asia had invaded the Indic plains, and spread Islam in the northwestern parts of India. At the end of the 12th century, the Muslims advanced quickly into the Ganges Plain. In one decade, a Muslim army led by Turkic slaves consolidated resistance around Lahore and brought northern India, as far as Bengal, under Muslim rule. From these Turkic slaves would come sultans, including the founder of the sultanate of Delhi. By the 15th century, major parts of northern India were ruled by Muslim rulers, mostly descended from invaders. In the 16th century, India came under the influence of the Mughals. Babur, the first ruler of the Mughal empire, established a foothold in the north which paved the way for further expansion by his successors. Although the Mughal emperor Akbar has been described as a universalist, most Mughal emperors were oppressive of native Hindu, Buddhist and later Sikh populations. Aurangzeb specifically was inclined towards a highly fundamentalist approach. There were a number of restrictions on dhimmis. In a modern sense the dhimmis would be described as second-class citizens. According to historian Marshall Hodgson, from very early times Muslim rulers would very often humiliate and punish dhimmis (usually Christians or Jews who refused to convert to Islam). It was official policy that dhimmis should "feel inferior and to know 'their place'". Although dhimmis were allowed to perform their religious rituals, they were obliged to do so in a manner not conspicuous to Muslims. Loud prayers were forbidden, as were the ringing of church bells and the blowing of the shofar. They were also not allowed to build or repair churches and synagogues without Muslim consent. Moreover, dhimmis were not allowed to seek converts among Muslims. In Mamluk Egypt, where non-Mamluk Muslims were not allowed to ride horses and camels, dhimmis were prohibited even from riding donkeys inside cities. Sometimes, Muslim rulers issued regulations requiring dhimmis to attach distinctive signs to their houses.
Most of the restrictions were social and symbolic in nature, and a pattern of stricter, then more lax, enforcement developed over time. The major financial disabilities of the dhimmi were the jizya poll tax and the fact that dhimmis and Muslims could not inherit from each other. That would create an incentive to convert if someone from the family had already converted. Ira M. Lapidus states that the "payment of the poll tax seems to have been regular, but other obligations were inconsistently enforced and did not prevent many non-Muslims from being important political, business, and scholarly figures. In the late ninth and early tenth centuries, Jewish bankers and financiers were important at the 'Abbasid court." The jurists and scholars of Islamic sharia law called for humane treatment of the dhimmis. A Muslim man may marry a Jewish or Christian dhimmī woman, who may keep her own religion (though her children were automatically considered Muslims and had to be raised as such), but a Muslim woman cannot marry a dhimmī man unless he converts to Islam. Dhimmīs are prohibited from converting Muslims under severe penalties, while Muslims are encouraged to convert dhimmīs. Payment of the jizya obligated Muslim authorities to protect dhimmis in civil and military matters. Sura 9 (At-Tawba), verse 29 stipulates that jizya be exacted from non-Muslims as a condition required for jihad to cease. Islamic jurists required adult, free, healthy males among the dhimma community to pay the jizya, while exempting women, children, the elderly, slaves, those affected by mental or physical handicaps, and travelers who did not settle in Muslim lands. According to Abu Yusuf, dhimmis should be imprisoned until they pay the jizya in full. Other jurists specified that dhimmis who do not pay the jizya should have their heads shaved and be made to wear dress distinguishing them from dhimmis who paid the jizya and from Muslims. Lewis states there are varying opinions among scholars as to how much of a burden jizya was. According to Norman Stillman, jizya and kharaj were a "crushing burden for the non-Muslim peasantry who eked out a bare living in a subsistence economy." Both agree that ultimately, the additional taxation on non-Muslims was a critical factor that drove many dhimmis to leave their religion and accept Islam. However, in some regions the jizya on populations was significantly lower than the zakat, meaning dhimmi populations maintained an economic advantage. According to Cohen, taxation, from the perspective of dhimmis who came under Muslim rule, was "a concrete continuation of the taxes paid to earlier regimes". Lewis observes that the change from Byzantine to Arab rule was welcomed by many among the dhimmis who found the new yoke far lighter than the old, both in taxation and in other matters, and that some, even among the Christians of Syria and Egypt, preferred the rule of Islam to that of the Byzantines. Montgomery Watt states, "the Christians were probably better off as dhimmis under Muslim-Arab rulers than they had been under the Byzantine Greeks." In some places, for example Egypt, the jizya was a tax incentive for Christians to convert to Islam. Some scholars have tried to compute the relative taxation on Muslims vs non-Muslims in the early Abbasid period. According to one estimate, Muslims had an average tax rate of 17–20 dirhams per person, which rose to 30 dirhams per person when in-kind levies are included.
Non-Muslims paid either 12, 24 or 48 dirhams per person, depending on their taxation category, though most probably paid 12. The importance of dhimmis as a source of revenue for the Rashidun Caliphate is illustrated in a letter ascribed to Umar I and cited by Abu Yusuf: "if we take dhimmis and share them out, what will be left for the Muslims who come after us? By God, Muslims would not find a man to talk to and profit from his labors." The early Islamic scholars took a relatively humane and practical attitude towards the collection of jizya, compared to the 11th-century commentators writing when Islam was under threat both at home and abroad. The jurist Abu Yusuf, the chief judge of the caliph Harun al-Rashid, ruled as follows regarding the manner of collecting the jizya: No one of the people of the dhimma should be beaten in order to exact payment of the jizya, nor made to stand in the hot sun, nor should hateful things be inflicted upon their bodies, or anything of that sort. Rather they should be treated with leniency. In the border provinces, dhimmis were sometimes recruited for military operations. In such cases, they were exempted from jizya for the year of service. Religious pluralism existed in medieval Islamic law and ethics. The religious laws and courts of other religions, including Christianity, Judaism and Hinduism, were usually accommodated within the Islamic legal framework, as exemplified in the Caliphate, Al-Andalus, Ottoman Empire and Indian subcontinent. In medieval Islamic societies, the qadi (Islamic judge) usually could not interfere in the matters of non-Muslims unless the parties voluntarily chose to be judged according to Islamic law. The dhimmi communities living in Islamic states usually had their own laws independent from the Sharia law, such as the Jews who had their own Halakha courts. Dhimmis were allowed to operate their own courts following their own legal systems. However, dhimmis frequently attended the Muslim courts in order to record property and business transactions within their own communities. Cases were taken out against Muslims, against other dhimmis and even against members of the dhimmi's own family. Dhimmis often took cases relating to marriage, divorce or inheritance to the Muslim courts so these cases would be decided under sharia law. Oaths sworn by dhimmis in the Muslim courts were sometimes the same as the oaths taken by Muslims, sometimes tailored to the dhimmis' beliefs. Muslim men could generally marry dhimmi women who are considered People of the Book; however, Islamic jurists rejected the possibility that any non-Muslim man might marry a Muslim woman. Bernard Lewis notes that "similar position existed under the laws of Byzantine Empire, according to which a Christian could marry a Jewish woman, but a Jew could not marry a Christian woman under pain of death". A hadith by Muhammad, "Whoever killed a muʿāhid (a person who is granted the pledge of protection by the Muslims) shall not smell the fragrance of Paradise though its fragrance can be smelt at a distance of forty years (of traveling).", is cited as a foundation for the right of non-Muslim citizens to live peacefully and undisturbed in an Islamic state. Anwar Shah Kashmiri writes in Fayd al-Bari, his commentary on Sahih al-Bukhari, on this hadith: "You know the gravity of sin for killing a Muslim, for its odiousness has reached the point of disbelief, and it necessitates that [the killer abides in Hell] forever.
As for killing a non-Muslim citizen [muʿāhid], it is similarly no small matter, for the one who does it will not smell the fragrance of Paradise." A similar hadith in regard to the status of the dhimmis: "Whoever wrongs one with whom a compact (treaty) has been made [i.e., a dhimmi] and lays on him a burden beyond his strength, I will be his accuser." The Constitution of Medina, a formal agreement between Muhammad and all the significant tribes and families of Medina (including Muslims, Jews and pagans), declared that non-Muslims in the Ummah had the following rights: A precedent for the dhimma contract was established with the agreement between Muhammad and the Jews after the Battle of Khaybar, an oasis near Medina. Khaybar was the first territory attacked and conquered by Muslims. When the Jews of Khaybar surrendered to Muhammad after a siege, Muhammad allowed them to remain in Khaybar in return for handing over to the Muslims one half their annual produce. The Pact of Umar, traditionally believed to be between caliph Umar and the conquered Jerusalem Christians in the seventh century, was another source of regulations pertaining to dhimmis. However, Western orientalists doubt the authenticity of the pact, arguing it is usually the victors and not the vanquished who impose, rather than propose, the terms of peace, and that it is highly unlikely that the people who spoke no Arabic and knew nothing of Islam could draft such a document. Academic historians believe the Pact of Umar in the form it is known today was a product of later jurists who attributed it to Umar in order to lend greater authority to their own opinions. The similarities between the Pact of Umar and the Theodosian and Justinian Codes of the Eastern Roman Empire suggest that perhaps much of the Pact of Umar was borrowed from these earlier codes by later Islamic jurists. At least some of the clauses of the pact mirror the measures first introduced by the Umayyad caliph Umar II or by the early Abbasid caliphs. During the Middle Ages, local associations known as futuwwa clubs developed across the Islamic lands. There were usually several futuwwa clubs in each town. These clubs catered to varying interests, primarily sports, and might involve distinctive manners of dress and custom. They were known for their hospitality, idealism and loyalty to the group. They often had a militaristic aspect, purportedly for the mutual protection of the membership. These clubs commonly crossed social strata, including among their membership local notables, dhimmi and slaves – to the exclusion of those associated with the local ruler, or amir. Muslims and Jews were sometimes partners in trade, with the Muslim partner taking Fridays off and the Jewish partner taking Saturdays off. Andrew Wheatcroft describes how some social customs such as different conceptions of dirt and cleanliness made it difficult for the religious communities to live close to each other, either under Muslim or under Christian rule. The dhimma and the jizya poll tax are no longer imposed in Muslim-majority countries. In the 21st century, jizya is widely regarded as being at odds with contemporary secular conceptions of citizens' civil rights and equality before the law, although there have been occasional reports of religious minorities in conflict zones and areas subject to political instability being forced to pay jizya.
In 2009 it was claimed that a group of militants that referred to themselves as the Taliban imposed the jizya on Pakistan's minority Sikh community after occupying some of their homes and kidnapping a Sikh leader. As late as 2013, the jizya was reportedly being imposed by the Muslim Brotherhood on the 15,000 Christian Copts of Dalga village in Egypt. In February 2014, the Islamic State of Iraq and the Levant (ISIL) announced that it intended to extract jizya from Christians in the city of Raqqa, Syria, which it controlled at the time. Christians who refused to accept the dhimma contract and pay the tax would have to either convert to Islam, leave, or be executed. Wealthy Christians would have to pay half an ounce of gold, the equivalent of $664, twice a year; middle-class Christians would pay half that amount and poorer ones would be charged one-fourth that amount. In June 2014, the Institute for the Study of War reported that ISIL claimed to have collected jizya and fay. On 18 July 2014, ISIL ordered the Christians in Mosul to accept the dhimma contract and pay the jizya or convert to Islam. If they refused to accept either option, they would be killed.
[ { "paragraph_id": 0, "text": "Dhimmī (Arabic: ذمي ḏimmī, IPA: [ˈðimmiː], collectively أهل الذمة ʾahl aḏ-ḏimmah/dhimmah \"the people of the covenant\") or muʿāhid (معاهد) is a historical term for non-Muslims living in an Islamic state with legal protection. The word literally means \"protected person\", referring to the state's obligation under sharia to protect the individual's life, property, as well as freedom of religion, in exchange for loyalty to the state and payment of the jizya tax, in contrast to the zakat, or obligatory alms, paid by the Muslim subjects. Dhimmi were exempt from certain duties assigned specifically to Muslims if they paid the poll tax (jizya) but were otherwise equal under the laws of property, contract, and obligation.", "title": "" }, { "paragraph_id": 1, "text": "Historically, dhimmi status was originally applied to Jews, Christians, and Sabians, who are considered \"People of the Book\" in Islamic theology. Later, this status was also applied to Zoroastrians, Sikhs, Hindus, Jains, and Buddhists.", "title": "" }, { "paragraph_id": 2, "text": "Jews and Christians were required to pay the jizyah while others, depending on the different rulings of the four Madhhabs, might be required to accept Islam, pay the jizya, be exiled, or be killed.", "title": "" }, { "paragraph_id": 3, "text": "During the rule of al-Mutawakkil, the tenth Abbasid Caliph, numerous restrictions reinforced the second-class citizen status of dhimmīs and forced their communities into ghettos. For instance, they were required to distinguish themselves from their Muslim neighbors by their dress. They were not permitted to build new churches or synagogues or repair old churches according to the Pact of Umar.", "title": "" }, { "paragraph_id": 4, "text": "Under Sharia, the dhimmi communities were usually governed by their own laws in place of some of the laws applicable to the Muslim community. For example, the Jewish community of Medina was allowed to have its own Halakhic courts, and the Ottoman millet system allowed its various dhimmi communities to rule themselves under separate legal courts. These courts did not cover cases that involved religious groups outside of their own communities, or capital offences. Dhimmi communities were also allowed to engage in certain practices that were usually forbidden for the Muslim community, such as the consumption of alcohol and pork.", "title": "" }, { "paragraph_id": 5, "text": "Some Muslims reject the dhimma system by arguing that it is a system which is inappropriate in the age of nation-states and democracies. There is a range of opinions among 20th-century and contemporary Islamic theologians about whether the notion of dhimma is appropriate for modern times, and, if so, what form it should take in an Islamic state.", "title": "" }, { "paragraph_id": 6, "text": "There are differences among the Islamic Madhhabs regarding which non-Muslims can pay jizya and have dhimmi status. The Hanafi and Maliki Madhabs generally allow non-Muslims to have dhimmi status. In contrast, the Shafi'i and Hanbali Madhabs only allow Christians, Jews and Zoroastrians to have dhimmi status, and they maintain that all other non-Muslims must either convert to Islam or be fought.", "title": "" }, { "paragraph_id": 7, "text": "Based on Quranic verses and Islamic traditions, sharia law distinguishes between Muslims, followers of other Abrahamic religions, and Pagans or people belonging to other polytheistic religions. 
As monotheists, Jews and Christians have traditionally been considered \"People of the Book\", and afforded a special legal status known as dhimmi derived from a theoretical contract—\"dhimma\" or \"residence in return for taxes\". Islamic legal systems based on sharia law incorporated the religious laws and courts of Christians, Jews, and Hindus, as seen in the early caliphate, al-Andalus, Indian subcontinent, and the Ottoman Millet system.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 8, "text": "In Yemenite Jewish sources, a treaty was drafted between Muhammad and his Jewish subjects, known as kitāb ḏimmat al-nabi, written in the 17th year of the Hijra (638 CE), which gave express liberty to the Jews living in Arabia to observe the Sabbath and to grow out their side-locks, but required them to pay the jizya (poll-tax) annually for their protection. Muslim governments in the Indus basin readily extended the dhimmi status to the Hindus and Buddhists of India. Eventually, the largest school of Islamic jurisprudence applied this term to all non-Muslims living in Muslim lands outside the sacred area surrounding Mecca, Arabia.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 9, "text": "In medieval Islamic societies, the qadi (Islamic judge) usually could not interfere in the matters of non-Muslims unless the parties voluntarily chose to be judged according to Islamic law, thus the dhimmi communities living in Islamic states usually had their own laws independent from the sharia law, as with the Jews who would have their own rabbinical courts. These courts did not cover cases that involved other religious groups, or capital offences or threats to public order. By the 18th century, however, dhimmi frequently attended the Ottoman Muslim courts, where cases were taken against them by Muslims, or they took cases against Muslims or other dhimmi. Oaths sworn by dhimmi in these courts were tailored to their beliefs. Non-Muslims were allowed to engage in certain practices (such as the consumption of alcohol and pork) that were usually forbidden by Islamic law; in point of fact, any Muslim who pours away their wine or forcibly appropriates it is liable to pay compensation. Some Islamic theologians held that Zoroastrian \"self-marriages\", considered incestuous under sharia, should also be tolerated. Ibn Qayyim Al-Jawziyya (1292–1350) opined that most scholars of the Hanbali school held that non-Muslims were entitled to such practices, as long as they were not presented to sharia courts and the religious minorities in question held them to be permissible. This ruling was based on the precedent that there were no records of the Islamic prophet Muhammad forbidding such self-marriages among Zoroastrians, despite coming into contact with Zoroastrians and knowing about this practice. Religious minorities were also free to do as they wished in their own homes, provided they did not publicly engage in illicit sexual activity in ways that could threaten public morals.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 10, "text": "There are parallels for this in Roman and Jewish law. According to law professor H.
Patrick Glenn of McGill University, \"[t]oday it is said that the dhimmi are 'excluded from the specifically Muslim privileges, but on the other hand they are excluded from the specifically Muslim duties' while (and here there are clear parallels with western public and private law treatment of aliens—Fremdenrecht, la condition de estrangers), '[f]or the rest, the Muslim and the dhimmi are equal in practically the whole of the law of property and of contracts and obligations'.\" Quoting the Qur'anic statement, \"Let Christians judge according to what We have revealed in the Gospel\", Muhammad Hamidullah writes that Islam decentralized and \"communalized\" law and justice. However, the classical dhimma contract is no longer enforced. Western influence over the Muslim world has been instrumental in eliminating the restrictions and protections of the dhimma contract.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 11, "text": "The dhimma contract is an integral part of traditional Islamic law. From the 9th century AD, the power to interpret and refine law in traditional Islamic societies was in the hands of the scholars (ulama). This separation of powers served to limit the range of actions available to the ruler, who could not easily decree or reinterpret law independently and expect the continued support of the community. Through succeeding centuries and empires, the balance between the ulama and the rulers shifted and reformed, but the balance of power was never decisively changed. At the beginning of the 19th century, the Industrial Revolution and the French Revolution introduced an era of European world hegemony that included the domination of most of the Muslim lands. At the end of the Second World War, the European powers found themselves too weakened to maintain their empires. The wide variety in forms of government, systems of law, attitudes toward modernity and interpretations of sharia are a result of the ensuing drives for independence and modernity in the Muslim world.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 12, "text": "Muslim states, sects, schools of thought and individuals differ as to exactly what sharia law entails. In addition, Muslim states today utilize a spectrum of legal systems. Most states have a mixed system that implements certain aspects of sharia while acknowledging the supremacy of a constitution. A few, such as Turkey, have declared themselves secular. Local and customary laws may take precedence in certain matters, as well. Islamic law is therefore polynormative, and despite several cases of regression in recent years, the trend is towards liberalization. Questions of human rights and the status of minorities cannot be generalized with regard to the Muslim world. They must instead be examined on a case-by-case basis, within specific political and cultural contexts, using perspectives drawn from the historical framework.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 13, "text": "The status of the dhimmi \"was for long accepted with resignation by the Christians and with gratitude by the Jews\" but the rising power of Christendom and the radical ideas of the French Revolution caused a wave of discontent among Christian dhimmis.
The continuing and growing pressure from the European powers combined with pressure from Muslim reformers gradually relaxed the inequalities between Muslims and non-Muslims.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 14, "text": "On 18 February 1856, the Ottoman Reform Edict of 1856 (Hatt-i Humayan) was issued, building upon the 1839 edict. It came about partly as a result of the pressure and efforts of the ambassadors of France, Austria and the United Kingdom, whose respective countries were needed as allies in the Crimean War. It again proclaimed the principle of equality between Muslims and non-Muslims, and produced many specific reforms to this end. For example, the jizya tax was abolished and non-Muslims were allowed to join the army.", "title": "The \"Dhimma contract\"" }, { "paragraph_id": 15, "text": "Jews and Christians living under early Muslim rule were considered dhimmis, a status that was later also extended to other non-Muslims like Hindus and Buddhists. They were allowed to \"freely practice their religion, and to enjoy a large measure of communal autonomy\" and guaranteed their personal safety and security of property, in return for paying tribute and acknowledging Muslim rule. Islamic law and custom prohibited the enslavement of free dhimmis within lands under Islamic rule. Taxation, from the perspective of dhimmis who came under Muslim rule, was \"a concrete continuation of the taxes paid to earlier regimes\" (but much lower under Muslim rule). They were also exempted from the zakat tax paid by Muslims. The dhimmi communities living in Islamic states had their own laws independent from the Sharia law, such as the Jews who had their own Halakhic courts. The dhimmi communities had their own leaders, courts, personal and religious laws, and \"generally speaking, Muslim tolerance of unbelievers was far better than anything available in Christendom, until the rise of secularism in the 17th century\". \"Muslims guaranteed freedom of worship and livelihood, provided that they remained loyal to the Muslim state and paid a poll tax\". \"Muslim governments appointed Christian and Jewish professionals to their bureaucracies\", and thus, Christians and Jews \"contributed to the making of the Islamic civilization\".", "title": "Dhimmi communities" }, { "paragraph_id": 16, "text": "However, dhimmis faced social and symbolic restrictions, and a pattern of stricter, then more lax, enforcement developed over time. Marshall Hodgson, a historian of Islam, writes that during the era of the High Caliphate (7th–13th centuries), zealous Shariah-minded Muslims gladly elaborated their code of symbolic restrictions on the dhimmis.", "title": "Dhimmi communities" }, { "paragraph_id": 17, "text": "From an Islamic legal perspective, the pledge of protection granted dhimmis the freedom to practice their religion and spared them forced conversions. The dhimmis also served a variety of useful purposes, mostly economic, which was another point of concern to jurists. Religious minorities were free to do whatever they wished in their own homes, but could not \"publicly engage in illicit sex in ways that threaten public morals\". In some cases, religious practices that Muslims found repugnant were allowed. One example was the Zoroastrian practice of incestuous \"self-marriage\" where a man could marry his mother, sister or daughter.
According to the famous Islamic legal scholar Ibn Qayyim Al-Jawziyya (1292–1350), non-Muslims had the right to engage in such religious practices even if it offended Muslims, under the conditions that such cases not be presented to Islamic Sharia courts and that these religious minorities believed that the practice in question is permissible according to their religion. This ruling was based on the precedent that Muhammad did not forbid such self-marriages among Zoroastrians despite coming in contact with them and having knowledge of their practices.", "title": "Dhimmi communities" }, { "paragraph_id": 18, "text": "The Arabs generally established garrisons outside towns in the conquered territories, and had little interaction with the local dhimmi populations for purposes other than the collection of taxes. The conquered Christian, Jewish, Mazdean and Buddhist communities were otherwise left to lead their lives as before.", "title": "Dhimmi communities" }, { "paragraph_id": 19, "text": "According to historians Lewis and Stillman, local Christians in Syria, Iraq, and Egypt were non-Chalcedonians and many may have felt better off under early Muslim rule than under that of the Byzantine Orthodox of Constantinople. In 1095, Pope Urban II urged western European Christians to come to the aid of the Christians of Palestine. The subsequent Crusades brought Roman Catholic Christians into contact with Orthodox Christians whose beliefs they discovered to differ from their own perhaps more than they had realized, and whose position under the rule of the Muslim Fatimid Caliphate was less uncomfortable than had been supposed. Consequently, the Eastern Christians provided perhaps less support to the Crusaders than had been expected. When the Arab East came under Ottoman rule in the 16th century, Christian populations and fortunes rebounded significantly. The Ottomans had long experience dealing with Christian and Jewish minorities, and were more tolerant towards religious minorities than the former Muslim rulers, the Mamluks of Egypt.", "title": "Dhimmi communities" }, { "paragraph_id": 20, "text": "However, Christians living under Islamic rule have suffered certain legal disadvantages and at times persecution. In the Ottoman Empire, in accordance with the dhimmi system implemented in Muslim countries, they, like all other Christians and also Jews, were accorded certain freedoms. The dhimmi system in the Ottoman Empire was largely based upon the Pact of Umar. The client status established the rights of the non-Muslims to property, livelihood and freedom of worship but they were in essence treated as second-class citizens in the empire and referred to in Turkish as gavours, a pejorative word meaning \"infidel\" or \"unbeliever\". The clause of the Pact of Umar which prohibited non-Muslims from building new places of worship was historically imposed on some communities of the Ottoman Empire and ignored in other cases, at the discretion of the local authorities. Although there were no laws mandating religious ghettos, this led to non-Muslim communities being clustered around existing houses of worship.", "title": "Dhimmi communities" }, { "paragraph_id": 21, "text": "In addition to other legal limitations, dhimmis, including the Christians among them, were not considered equal to Muslims and several prohibitions were placed on them. Their testimony against Muslims was inadmissible in courts of law wherein a Muslim could be punished; this meant that their testimony could only be considered in commercial cases.
They were forbidden to carry weapons or ride atop horses and camels. Their houses could not overlook those of Muslims, and their religious practices were severely circumscribed (e.g., the ringing of church bells was strictly forbidden).", "title": "Dhimmi communities" }, { "paragraph_id": 22, "text": "Because the early Islamic conquests initially preserved much of the existing administrative machinery and culture, in many territories they amounted to little more than a change of rulers for the subject populations, which \"brought peace to peoples demoralized and disaffected by the casualties and heavy taxation that resulted from the years of Byzantine-Persian warfare\".", "title": "Dhimmi communities" }, { "paragraph_id": 23, "text": "María Rosa Menocal argues that the Jewish dhimmis living under the caliphate, while allowed fewer rights than Muslims, were still better off than in the Christian parts of Europe. Jews from other parts of Europe made their way to al-Andalus, where, in parallel with the Christian sects regarded as heretical by Catholic Europe, they were not just tolerated; opportunities to practice their faith and to trade were open to them without restriction, save for the prohibitions on proselytization.", "title": "Dhimmi communities" }, { "paragraph_id": 24, "text": "Bernard Lewis states:", "title": "Dhimmi communities" }, { "paragraph_id": 25, "text": "Generally, the Jewish people were allowed to practice their religion and live according to the laws and scriptures of their community. Furthermore, the restrictions to which they were subject were social and symbolic rather than tangible and practical in character. That is to say, these regulations served to define the relationship between the two communities, and not to oppress the Jewish population.", "title": "Dhimmi communities" }, { "paragraph_id": 26, "text": "Professor of Jewish medieval history at Hebrew University of Jerusalem, Hayim Hillel Ben-Sasson, notes:", "title": "Dhimmi communities" }, { "paragraph_id": 27, "text": "The legal and security situation of the Jews in the Muslim world was generally better than in Christendom, because in the former, Jews were not the sole \"infidels\", because in comparison to the Christians, Jews were less dangerous and more loyal to the Muslim regime, and because the rapidity and the territorial scope of the Muslim conquests imposed upon them a reduction in persecution and a granting of better possibility for the survival of members of other faiths in their lands.", "title": "Dhimmi communities" }, { "paragraph_id": 28, "text": "According to the French historian Claude Cahen, Islam has \"shown more toleration than Europe towards the Jews who remained in Muslim lands.\"", "title": "Dhimmi communities" }, { "paragraph_id": 29, "text": "Comparing the treatment of Jews in the medieval Islamic world and medieval Christian Europe, Mark R. Cohen notes that, in contrast to Jews in Christian Europe, the \"Jews in Islam were well integrated into the economic life of the larger society\", and that they were allowed to practice their religion more freely than they could in Christian Europe.", "title": "Dhimmi communities" }, { "paragraph_id": 30, "text": "According to the scholar Mordechai Zaken, tribal chieftains (also known as aghas) in tribal Muslim societies such as the Kurdish society in Kurdistan would tax their Jewish subjects.
The Jews were in fact civilians protected by their chieftains in and around their communities; in return they paid part of their harvest as dues, and contributed their skills and services to their patron chieftain.", "title": "Dhimmi communities" }, { "paragraph_id": 31, "text": "By the 10th century, the Turks of Central Asia had invaded the Indic plains, and spread Islam into the northwestern parts of India. At the end of the 12th century, the Muslims advanced quickly into the Ganges Plain. In one decade, a Muslim army led by Turkic slaves overcame resistance around Lahore and brought northern India, as far as Bengal, under Muslim rule. From these Turkic slaves would come sultans, including the founder of the sultanate of Delhi. By the 15th century, major parts of northern India were ruled by Muslim rulers, mostly descended from invaders. In the 16th century, India came under the influence of the Mughals. Babur, the first ruler of the Mughal empire, established a foothold in the north which paved the way for further expansion by his successors. Although the Mughal emperor Akbar has been described as a universalist, most Mughal emperors were oppressive of native Hindu, Buddhist and later Sikh populations. Aurangzeb specifically was inclined towards a highly fundamentalist approach.", "title": "Dhimmi communities" }, { "paragraph_id": 32, "text": "There were a number of restrictions on dhimmis. In a modern sense, the dhimmis would be described as second-class citizens. According to historian Marshall Hodgson, from very early times Muslim rulers would very often humiliate and punish dhimmis (usually Christians or Jews who refused to convert to Islam). It was official policy that dhimmis should \"feel inferior and know 'their place'\".", "title": "Restrictions" }, { "paragraph_id": 33, "text": "Although dhimmis were allowed to perform their religious rituals, they were obliged to do so in a manner not conspicuous to Muslims. Loud prayers were forbidden, as were the ringing of church bells and the blowing of the shofar. They were also not allowed to build or repair churches and synagogues without Muslim consent. Moreover, dhimmis were not allowed to seek converts among Muslims. In Mamluk Egypt, where non-Mamluk Muslims were not allowed to ride horses and camels, dhimmis were prohibited even from riding donkeys inside cities. Sometimes, Muslim rulers issued regulations requiring dhimmis to attach distinctive signs to their houses.", "title": "Restrictions" }, { "paragraph_id": 34, "text": "Most of the restrictions were social and symbolic in nature, and a pattern of stricter, then more lax, enforcement developed over time. The major financial disabilities of the dhimmi were the jizya poll tax and the fact that dhimmis and Muslims could not inherit from each other. That would create an incentive to convert if someone from the family had already converted. Ira M. Lapidus states that the \"payment of the poll tax seems to have been regular, but other obligations were inconsistently enforced and did not prevent many non-Muslims from being important political, business, and scholarly figures.
In the late ninth and early tenth centuries, Jewish bankers and financiers were important at the 'Abbasid court.\" The jurists and scholars of Islamic sharia law called for humane treatment of the dhimmis.", "title": "Restrictions" }, { "paragraph_id": 35, "text": "A Muslim man may marry a Jewish or Christian dhimmī woman, who may keep her own religion (though her children were automatically considered Muslims and had to be raised as such), but a Muslim woman cannot marry a dhimmī man unless he converts to Islam. Dhimmīs are prohibited from converting Muslims under severe penalties, while Muslims are encouraged to convert dhimmīs.", "title": "Restrictions" }, { "paragraph_id": 36, "text": "Payment of the jizya obligated Muslim authorities to protect dhimmis in civil and military matters. Sura 9 (At-Tawba), verse 29 stipulates that jizya be exacted from non-Muslims as a condition required for jihad to cease. Islamic jurists required adult, free, healthy males among the dhimma community to pay the jizya, while exempting women, children, the elderly, slaves, those affected by mental or physical handicaps, and travelers who did not settle in Muslim lands. According to Abu Yusuf, dhimmis should be imprisoned until they paid the jizya in full. Other jurists specified that dhimmis who did not pay the jizya should have their heads shaved and be made to wear dress distinct from that of Muslims and of dhimmis who had paid.", "title": "Restrictions" }, { "paragraph_id": 37, "text": "Lewis states there are varying opinions among scholars as to how much of a burden jizya was. According to Norman Stillman, jizya and kharaj were a \"crushing burden for the non-Muslim peasantry who eked out a bare living in a subsistence economy.\" Both agree that ultimately, the additional taxation on non-Muslims was a critical factor that drove many dhimmis to leave their religion and accept Islam. However, in some regions the jizya on populations was significantly lower than the zakat, meaning dhimmi populations maintained an economic advantage. According to Cohen, taxation, from the perspective of dhimmis who came under Muslim rule, was \"a concrete continuation of the taxes paid to earlier regimes\". Lewis observes that the change from Byzantine to Arab rule was welcomed by many among the dhimmis, who found the new yoke far lighter than the old, both in taxation and in other matters, and that some, even among the Christians of Syria and Egypt, preferred the rule of Islam to that of the Byzantines. Montgomery Watt states, \"the Christians were probably better off as dhimmis under Muslim-Arab rulers than they had been under the Byzantine Greeks.\" In some places, for example Egypt, the jizya was a tax incentive for Christians to convert to Islam.", "title": "Restrictions" }, { "paragraph_id": 38, "text": "Some scholars have tried to compute the relative taxation on Muslims versus non-Muslims in the early Abbasid period. According to one estimate, Muslims had an average tax rate of 17–20 dirhams per person, which rose to 30 dirhams per person when in-kind levies are included. Non-Muslims paid either 12, 24 or 48 dirhams per person, depending on their taxation category, though most probably paid 12.", "title": "Restrictions" }, { "paragraph_id": 39, "text": "The importance of dhimmis as a source of revenue for the Rashidun Caliphate is illustrated in a letter ascribed to Umar I and cited by Abu Yusuf: \"if we take dhimmis and share them out, what will be left for the Muslims who come after us?
By God, Muslims would not find a man to talk to and profit from his labors.\"", "title": "Restrictions" }, { "paragraph_id": 40, "text": "The early Islamic scholars took a relatively humane and practical attitude towards the collection of jizya, compared to the 11th-century commentators writing when Islam was under threat both at home and abroad.", "title": "Restrictions" }, { "paragraph_id": 41, "text": "The jurist Abu Yusuf, the chief judge of the caliph Harun al-Rashid, ruled as follows regarding the manner of collecting the jizya:", "title": "Restrictions" }, { "paragraph_id": 42, "text": "No one of the people of the dhimma should be beaten in order to exact payment of the jizya, nor made to stand in the hot sun, nor should hateful things be inflicted upon their bodies, or anything of that sort. Rather they should be treated with leniency.", "title": "Restrictions" }, { "paragraph_id": 43, "text": "In the border provinces, dhimmis were sometimes recruited for military operations. In such cases, they were exempted from jizya for the year of service.", "title": "Restrictions" }, { "paragraph_id": 44, "text": "Religious pluralism existed in medieval Islamic law and ethics. The religious laws and courts of other religions, including Christianity, Judaism and Hinduism, were usually accommodated within the Islamic legal framework, as exemplified in the Caliphate, Al-Andalus, Ottoman Empire and Indian subcontinent. In medieval Islamic societies, the qadi (Islamic judge) usually could not interfere in the matters of non-Muslims unless the parties voluntarily chose to be judged according to Islamic law. The dhimmi communities living in Islamic states usually had their own laws independent from Sharia law, such as the Jews, who had their own Halakha courts.", "title": "Restrictions" }, { "paragraph_id": 45, "text": "Dhimmis were allowed to operate their own courts following their own legal systems. However, dhimmis frequently attended the Muslim courts in order to record property and business transactions within their own communities. Cases were taken out against Muslims, against other dhimmis and even against members of the dhimmi's own family. Dhimmis often took cases relating to marriage, divorce or inheritance to the Muslim courts so these cases would be decided under sharia law. Oaths sworn by dhimmis in the Muslim courts were sometimes the same as the oaths taken by Muslims, sometimes tailored to the dhimmis' beliefs.", "title": "Restrictions" }, { "paragraph_id": 46, "text": "Muslim men could generally marry dhimmi women who were considered People of the Book; however, Islamic jurists rejected the possibility that any non-Muslim man might marry a Muslim woman. Bernard Lewis notes that a \"similar position existed under the laws of Byzantine Empire, according to which a Christian could marry a Jewish woman, but a Jew could not marry a Christian woman under pain of death\".", "title": "Restrictions" }, { "paragraph_id": 47, "text": "Lewis states", "title": "Relevant texts" }, { "paragraph_id": 48, "text": "A hadith by Muhammad, \"Whoever killed a muʿāhid (a person who is granted the pledge of protection by the Muslims) shall not smell the fragrance of Paradise though its fragrance can be smelt at a distance of forty years (of traveling).\", is cited as a foundation for the right of non-Muslim citizens to live peacefully and undisturbed in an Islamic state.
Anwar Shah Kashmiri writes in Fayd al-Bari, his commentary on Sahih al-Bukhari, on this hadith: \"You know the gravity of sin for killing a Muslim, for its odiousness has reached the point of disbelief, and it necessitates that [the killer abides in Hell] forever. As for killing a non-Muslim citizen [muʿāhid], it is similarly no small matter, for the one who does it will not smell the fragrance of Paradise.\"", "title": "Relevant texts" }, { "paragraph_id": 49, "text": "A similar hadith in regard to the status of the dhimmis: \"Whoever wrongs one with whom a compact (treaty) has been made [i.e., a dhimmi] and lays on him a burden beyond his strength, I will be his accuser.\"", "title": "Relevant texts" }, { "paragraph_id": 50, "text": "The Constitution of Medina, a formal agreement between Muhammad and all the significant tribes and families of Medina (including Muslims, Jews and pagans), declared that non-Muslims in the Ummah had the following rights:", "title": "Relevant texts" }, { "paragraph_id": 51, "text": "A precedent for the dhimma contract was established with the agreement between Muhammad and the Jews after the Battle of Khaybar, an oasis near Medina. Khaybar was the first territory attacked and conquered by Muslims. When the Jews of Khaybar surrendered to Muhammad after a siege, Muhammad allowed them to remain in Khaybar in return for handing over to the Muslims one half of their annual produce.", "title": "Relevant texts" }, { "paragraph_id": 52, "text": "The Pact of Umar, traditionally believed to be between caliph Umar and the conquered Jerusalem Christians in the seventh century, was another source of regulations pertaining to dhimmis. However, Western orientalists doubt the authenticity of the pact, arguing that it is usually the victors, and not the vanquished, who impose, rather than propose, the terms of peace, and that it is highly unlikely that people who spoke no Arabic and knew nothing of Islam could draft such a document. Academic historians believe the Pact of Umar in the form it is known today was a product of later jurists who attributed it to Umar in order to lend greater authority to their own opinions. The similarities between the Pact of Umar and the Theodosian and Justinian Codes of the Eastern Roman Empire suggest that perhaps much of the Pact of Umar was borrowed from these earlier codes by later Islamic jurists. At least some of the clauses of the pact mirror the measures first introduced by the Umayyad caliph Umar II or by the early Abbasid caliphs.", "title": "Relevant texts" }, { "paragraph_id": 53, "text": "During the Middle Ages, local associations known as futuwwa clubs developed across the Islamic lands. There were usually several futuwwa clubs in each town. These clubs catered to varying interests, primarily sports, and might involve distinctive manners of dress and custom. They were known for their hospitality, idealism and loyalty to the group. They often had a militaristic aspect, purportedly for the mutual protection of the membership.
These clubs commonly crossed social strata, including among their membership local notables, dhimmi and slaves – to the exclusion of those associated with the local ruler, or amir.", "title": "Cultural interactions and cultural differences" }, { "paragraph_id": 54, "text": "Muslims and Jews were sometimes partners in trade, with the Muslim partner taking Fridays off and the Jewish partner taking Saturdays off.", "title": "Cultural interactions and cultural differences" }, { "paragraph_id": 55, "text": "Andrew Wheatcroft describes how some social customs, such as different conceptions of dirt and cleanliness, made it difficult for the religious communities to live close to each other, either under Muslim or under Christian rule.", "title": "Cultural interactions and cultural differences" }, { "paragraph_id": 56, "text": "The dhimma and the jizya poll tax are no longer imposed in Muslim-majority countries. In the 21st century, jizya is widely regarded as being at odds with contemporary secular conceptions of citizens' civil rights and equality before the law, although there have been occasional reports of religious minorities in conflict zones and areas subject to political instability being forced to pay jizya.", "title": "In modern times" }, { "paragraph_id": 57, "text": "In 2009, it was claimed that a group of militants that referred to themselves as the Taliban imposed the jizya on Pakistan's minority Sikh community after occupying some of their homes and kidnapping a Sikh leader.", "title": "In modern times" }, { "paragraph_id": 58, "text": "As late as 2013, the jizya was reportedly being imposed by the Muslim Brotherhood on 15,000 Christian Copts of Dalga village in Egypt.", "title": "In modern times" }, { "paragraph_id": 59, "text": "In February 2014, the Islamic State of Iraq and the Levant (ISIL) announced that it intended to extract jizya from Christians in the city of Raqqa, Syria, which it controlled at the time. Christians who refused to accept the dhimma contract and pay the tax would have to either convert to Islam, leave, or be executed. Wealthy Christians would have to pay half an ounce of gold, the equivalent of $664, twice a year; middle-class Christians would have to pay half that amount, and poorer ones would be charged one-fourth of that amount. In June 2014, the Institute for the Study of War reported that ISIL claimed to have collected jizya and fay. On 18 July 2014, ISIL ordered the Christians in Mosul to accept the dhimma contract and pay the jizya or convert to Islam. If they refused to accept either of these options, they would be killed.", "title": "In modern times" } ]
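The tiered amounts reported for Raqqa reduce to simple arithmetic. The following sketch, in Python, is purely illustrative and uses only the figures quoted above (a base payment of half an ounce of gold, equated with $664, collected twice a year, with the middle and poorer tiers paying one-half and one-quarter of it); the tier labels are hypothetical names chosen for this illustration, not terms from the reports.

# Illustrative arithmetic for the tiered jizya reported in Raqqa (2014).
# Figures are taken from the text above; the tier labels are ours.
BASE_PAYMENT_USD = 664      # half an ounce of gold at the quoted equivalence
PAYMENTS_PER_YEAR = 2       # the reports describe twice-yearly collection

tiers = {"wealthy": 1.0, "middle-class": 0.5, "poorer": 0.25}

for name, fraction in tiers.items():
    per_payment = BASE_PAYMENT_USD * fraction
    annual = per_payment * PAYMENTS_PER_YEAR
    print(f"{name:>12}: ${per_payment:.0f} per payment, ${annual:.0f} per year")

# Expected output:
#      wealthy: $664 per payment, $1328 per year
# middle-class: $332 per payment, $664 per year
#       poorer: $166 per payment, $332 per year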
Dhimmī or muʿāhid (معاهد) is a historical term for non-Muslims living in an Islamic state with legal protection. The word literally means "protected person", referring to the state's obligation under sharia to protect the individual's life and property, as well as freedom of religion, in exchange for loyalty to the state and payment of the jizya tax, in contrast to the zakat, or obligatory alms, paid by the Muslim subjects. Dhimmis were exempt from certain duties assigned specifically to Muslims if they paid the poll tax (jizya) but were otherwise equal under the laws of property, contract, and obligation. Historically, dhimmi status was originally applied to Jews, Christians, and Sabians, who are considered "People of the Book" in Islamic theology. Later, this status was also applied to Zoroastrians, Sikhs, Hindus, Jains, and Buddhists. Jews and Christians were required to pay the jizya, while others, depending on the different rulings of the four Madhhabs, might be required to accept Islam, pay the jizya, be exiled, or be killed. During the rule of al-Mutawakkil, the tenth Abbasid Caliph, numerous restrictions reinforced the second-class citizen status of dhimmīs and forced their communities into ghettos. For instance, they were required to distinguish themselves from their Muslim neighbors by their dress. They were not permitted to build new churches or synagogues or repair old churches, according to the Pact of Umar. Under Sharia, the dhimmi communities were usually governed by their own laws in place of some of the laws applicable to the Muslim community. For example, the Jewish community of Medina was allowed to have its own Halakhic courts, and the Ottoman millet system allowed its various dhimmi communities to rule themselves under separate legal courts. These courts did not cover cases that involved religious groups outside of their own communities, or capital offences. Dhimmi communities were also allowed to engage in certain practices that were usually forbidden for the Muslim community, such as the consumption of alcohol and pork. Some Muslims reject the dhimma system, arguing that it is inappropriate in the age of nation-states and democracies. There is a range of opinions among 20th-century and contemporary Islamic theologians about whether the notion of dhimma is appropriate for modern times, and, if so, what form it should take in an Islamic state. There are differences among the Islamic Madhhabs regarding which non-Muslims can pay jizya and have dhimmi status. The Hanafi and Maliki Madhhabs generally allow non-Muslims to have dhimmi status. In contrast, the Shafi'i and Hanbali Madhhabs only allow Christians, Jews and Zoroastrians to have dhimmi status, and they maintain that all other non-Muslims must either convert to Islam or be fought.
2002-01-06T23:02:00Z
2023-12-26T14:17:45Z
[ "Template:Annotated link", "Template:Cite journal", "Template:ISBN?", "Template:Cite SSRN", "Template:Short description", "Template:Use dmy dates", "Template:Italic title", "Template:Failed verification", "Template:Qref", "Template:Harvnb", "Template:OCLC", "Template:Islam topics", "Template:Portal", "Template:Cite encyclopedia", "Template:Cite web", "Template:Refend", "Template:Lang", "Template:Rp", "Template:Sfn", "Template:Page needed", "Template:Webarchive", "Template:Religious slurs", "Template:Reflist", "Template:ISBN", "Template:Bukhari", "Template:Characters and names in the Quran", "Template:Cite book", "Template:Fiqh", "Template:Transl", "Template:Lang-ar", "Template:IPA-ar", "Template:Blockquote", "Template:Unreliable source?", "Template:Refbegin", "Template:Authority control", "Template:Main", "Template:Isbn", "Template:Cite news", "Template:Cite magazine" ]
https://en.wikipedia.org/wiki/Dhimmi
9,091
Doctor V64
The Doctor V64 (also referred to simply as the V64) is a development and backup device made by Bung Enterprises Ltd that is used in conjunction with the Nintendo 64. The Doctor V64 also had the ability to play video CDs and audio CDs. Additionally, it could apply stereo 3D effects to the audio. The V64 was released in 1996 and was priced at around US$450. It was the first commercially available backup device for the Nintendo 64 unit. The Partner N64 development kit, which was manufactured by Silicon Graphics and sold officially by Nintendo, was a comparatively expensive development machine. The V64 served as a lower-cost development machine, though its unofficial status would later lead to conflict with Nintendo. Some third-party developers used a number of V64s in their development process. The CPU of the V64 is a 6502, and the operating system is contained in a BIOS. The V64 unit contains a CD-ROM drive which sits underneath the Nintendo 64 and plugs into the expansion slot on the underside of the Nintendo 64. The expansion slot is essentially a mirror image of the cartridge slot on the top of the unit, with the same electrical connections; thus, the Nintendo 64 reads data from the Doctor V64 in the same manner as it would from a cartridge plugged into the normal slot. In order to get around Nintendo's lockout chip, when using the V64, a game cartridge is plugged into the Nintendo 64 through an adaptor which connects only the lockout chip. The game cart used for the operation had to contain the same lockout chip used by the game backup. A second problem concerned saving progress. Most N64 games are saved to the cart itself instead of external memory cards. If the player wanted to keep their progress, then the cartridge used had to have the same type of non-volatile memory hardware. Alternatively, Bung produced the "DX256" and "DS1" add-ons to allow saves (EEPROM and SRAM, respectively) to be made without using the inserted cartridge. These devices were inserted into the top-slot of the N64, with the game cartridge then inserted into the top of them just to provide the security bypass. Save slots on the DX256 were selected using alpha and numeric encoder knobs on the front of the device. The Doctor V64 could be used to read the data from a game cartridge and transfer the data to a PC via the parallel port. This allowed developers and homebrew programmers to upload their game images to the Doctor V64 without having to create a CD backup each time. It also allowed users to upload game images taken from the Internet. Following the Doctor V64's success, Bung released the Doctor V64 Jr. in December 1998. This was a condensed, cost-efficient version of the original V64. The Doctor V64 Jr. has no CD drive and plugs into the normal cartridge slot on the top of the Nintendo 64. Data is loaded into the Doctor V64 Jr.'s battery-backed RAM from a PC via a parallel port connection. The Doctor V64 Jr. has up to 512 megabits (64 MB) of memory storage. This was done to provide for future Nintendo 64 carts that employed larger memory storage, but the high costs associated with ordering large storage carts kept this occurrence at a minimum. Only a handful of 512-megabit games were released for the Nintendo 64 system. In 1998 and 1999, there was a homebrew competition known as "Presence of Mind" (POM), an N64 demo competition led by dextrose.com. The contest consisted of submitting a user-developed N64 program, game, or utility.
Bung Enterprises promoted the event and supplied prizes (usually Doctor V64-related accessories). Though a contest was planned for 2000, interest in the N64 was already fading, and the event faded with it. POM contest demo entries can still be found on the Internet. The Doctor V64 unit was the first commercially available backup device for the Nintendo 64 unit. Though the unit was sold as a development machine, it could be modified to enable the creation and use of commercial game backups. Unlike official development units, the purchase of V64s was not restricted to software companies only. For this reason, the unit became a popular choice among those looking to proliferate unlicensed copies of games. Original Doctor V64 units sold by Bung did not allow the playing of backups. A person would have to modify the unit by themselves in order to make it backup-friendly. This usually required a user to download and install a modified Doctor V64 BIOS. Additionally, the cartridge adapter had to be opened and soldered in order to complete the modification. Though Bung never sold backup-enabled V64s, many re-sellers would modify the units themselves. During the N64's lifetime, Nintendo revised the N64's model, making the serial port area smaller. This slight change in the N64's plastic casing made the connection to the Doctor V64 difficult to achieve without user modification. This revision may have been a direct reaction from Nintendo to discourage the use of V64 devices, and may also explain why Bung decided to discontinue the use of this port in the later Doctor V64 Jr. models. Nintendo made many legal efforts worldwide in order to stop the sale of Doctor V64 units. They sued Bung directly as well as specific store retailers in Europe and North America for copyright infringement. Eventually, Nintendo managed to have the courts prohibit the sale of Doctor V64 units in the United States. The Doctor V64 implemented text-based menu-driven screens. The menus consisted of white text superimposed over a black background. Utilizing the buttons on the V64 unit, a user would navigate the menus and issue commands. Though the menu was mainly designed for game developers, it is possible to back up cartridges with it (through the use of an unofficial V64 BIOS). Some of the menu items related to game backups were removed from the V64's BIOS near the end of its life due to pressure from Nintendo. These items are only available by obtaining a patched V64 BIOS. Most early V64 models shipped with a standard 8X IDE CD-ROM drive. During the manufacturing lifetime of the device, later V64 models shipped with 16X and eventually 20X drives. V64 units could be purchased without a CD-ROM drive. It is possible to replace the drive with a faster IDE CD-ROM unit (such as a 52X model). Many Doctor V64s shipped internationally were ordered without an installed CD-ROM drive, to save on shipping costs associated with weight, to avoid import duty on the drive, and to allow users to customize the units in response to the ever-increasing speeds of drives available. The variance in the power draw of different manufacturers' drives at different speeds caused issues with disc spin-ups exceeding the wattage rating of the included Bung PSU. This led to users swapping out the Bung PSU for a more powerful model, or selecting low-draw drives (mainly Panasonic drives sometimes badged as Creative). V64s can read CD-Rs and CD-RWs (provided the installed CD-ROM unit supports rewritable media).
Supported media has to be recorded in Mode 1, ISO 9660 format. Doctor V64s only support the 8.3 DOS naming convention. As such, the Joliet file system is not supported. Depending on the model, V64s came with either 128 megabits (16 MB) or 256 megabits (32 MB) of RAM. Original V64 units shipped with 128 megabits of RAM. V64 units started shipping with 256 megabits when developers started using larger memory carts for their games. Users had the option of buying a memory upgrade from Bung and other re-sellers. The Doctor V64 uses a 4-pin mini-DIN jack (as used for S-Video) for connecting the power supply cord. Power supplies included with Doctor V64s were very unreliable. Bung replaced the power supply with a sturdier version in later V64 units. Replacing broken power supplies became one of the most common maintenance problems with the V64. It is possible to modify an AT PC power supply for V64 use. Only four cables have to be connected to the V64 for it to function.
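The 8.3 naming constraint described above is easy to check before burning a disc. The following is a minimal sketch in Python, assuming plain ISO 9660 level 1 rules (a name of up to 8 characters plus an optional extension of up to 3, drawn from uppercase letters, digits, and the underscore); it illustrates the convention itself, not any tool that Bung actually shipped, and the example filenames are invented.

import re

# ISO 9660 level 1 / DOS 8.3 pattern: NAME (1-8 chars) + optional .EXT (1-3 chars).
# The strict character set is A-Z, 0-9 and underscore.
EIGHT_THREE = re.compile(r"^[A-Z0-9_]{1,8}(\.[A-Z0-9_]{1,3})?$")

def is_8_3(filename: str) -> bool:
    """Return True if the name fits the 8.3 convention the V64 expects."""
    return bool(EIGHT_THREE.match(filename.upper()))

assert is_8_3("GAME.V64")              # 4-char name, 3-char extension: fine
assert is_8_3("DEMO1")                 # extension is optional
assert not is_8_3("LONGFILENAME.BIN")  # name part exceeds 8 characters
assert not is_8_3("GAME.IMAGE")        # extension exceeds 3 characters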
[ { "paragraph_id": 0, "text": "The Doctor V64 (also referred to simply as the V64) is a development and backup device made by Bung Enterprises Ltd that is used in conjunction with the Nintendo 64. The Doctor V64 also had the ability to play video CDs and audio CDs. Additionally, it could apply stereo 3D effects to the audio.", "title": "" }, { "paragraph_id": 1, "text": "The V64 was released in 1996 and was priced around $450 USD. It was the first commercially-available backup device for the Nintendo 64 unit. The Partner N64 development kit, which was manufactured by Silicon Graphics and sold officially by Nintendo, was a comparatively expensive development machine. The V64 served as a lower-cost development machine, though its unofficial status would later lead to conflict with Nintendo. Some third-party developers used a number of V64s in their development process.", "title": "History" }, { "paragraph_id": 2, "text": "The CPU of the V64 is a 6502, and the operating system is contained in a BIOS.", "title": "History" }, { "paragraph_id": 3, "text": "The V64 unit contains a CD-ROM drive which sits underneath the Nintendo 64 and plugs into the expansion slot on the underside of the Nintendo 64. The expansion slot is essentially a mirror image of the cartridge slot on the top of the unit, with the same electrical connections; thus, the Nintendo 64 reads data from the Doctor V64 in the same manner as it would from a cartridge plugged into the normal slot.", "title": "History" }, { "paragraph_id": 4, "text": "In order to get around Nintendo's lockout chip, when using the V64, a game cartridge is plugged into the Nintendo 64 through an adaptor which connects only the lockout chip. The game cart used for the operation had to contain the same lockout chip used by the game back up.", "title": "Usage" }, { "paragraph_id": 5, "text": "The second problem concerned saving progress. Most N64 games are saved to the cart itself instead of external memory cards. If the player wanted to keep their progress, then the cartridge used had to have the same type of non-volatile memory hardware. Alternatively, Bung produced the \"DX256\" and \"DS1\" add-ons to allow (EEPROM and SRAM respectively) saves to be made without using the inserted cartridge. These devices were inserted into the top-slot of the N64 with the game cartridge being then inserted into the top of them to just provide the security bypass. Save slots on the DX256 were selected using an alpha and numeric encoder knobs on the front of the device.", "title": "Usage" }, { "paragraph_id": 6, "text": "The Doctor V64 could be used to read the data from a game cartridge and transfer the data to a PC via the parallel port. This allowed developers and homebrew programmers to upload their game images to the Doctor V64 without having to create a CD backup each time. It also allowed users to upload game images taken from the Internet.", "title": "Usage" }, { "paragraph_id": 7, "text": "Following the Doctor V64's success, Bung released the Doctor V64 Jr. in December 1998. This was a condensed, cost-efficient version of the original V64. The Doctor V64 Jr. has no CD drive and plugs into the normal cartridge slot on the top of the Nintendo 64. Data is loaded into the Doctor V64 Jr.'s battery-backed RAM from a PC via a parallel port connection. The Doctor V64 Jr. has up to 512 megabits (64 MB) of memory storage. 
This was done to provide for future Nintendo 64 carts that employed larger memory storage, but the high costs associated with ordering large storage carts kept this occurrence at a minimum. Only a handful of 512-megabit games were released for the Nintendo 64 system.", "title": "Doctor V64 Jr." }, { "paragraph_id": 8, "text": "In 1998 and 1999, there was a homebrew competition known as \"Presence of Mind\" (POM), an N64 demo competition led by dextrose.com. The contest consisted of submitting a user-developed N64 program, game, or utility. Bung Enterprises promoted the event and supplied prizes (usually Doctor V64-related accessories). Though a contest was planned for 2000, interest in the N64 was already fading, and the event faded with it. POM contest demo entries can still be found on the Internet.", "title": "Promotions" }, { "paragraph_id": 9, "text": "The Doctor V64 unit was the first commercially available backup device for the Nintendo 64 unit. Though the unit was sold as a development machine, it could be modified to enable the creation and use of commercial game backups. Unlike official development units, the purchase of V64s was not restricted to software companies only. For this reason, the unit became a popular choice among those looking to proliferate unlicensed copies of games.", "title": "Legal issues" }, { "paragraph_id": 10, "text": "Original Doctor V64 units sold by Bung did not allow the playing of backups. A person would have to modify the unit by themselves in order to make it backup-friendly. This usually required a user to download and install a modified Doctor V64 BIOS. Additionally, the cartridge adapter had to be opened and soldered in order to complete the modification. Though Bung never sold backup-enabled V64s, many re-sellers would modify the units themselves.", "title": "Legal issues" }, { "paragraph_id": 11, "text": "During the N64's lifetime, Nintendo revised the N64's model, making the serial port area smaller. This slight change in the N64's plastic casing made the connection to the Doctor V64 difficult to achieve without user modification. This revision may have been a direct reaction from Nintendo to discourage the use of V64 devices, and may also explain why Bung decided to discontinue the use of this port in the later Doctor V64 Jr. models.", "title": "Legal issues" }, { "paragraph_id": 12, "text": "Nintendo made many legal efforts worldwide in order to stop the sale of Doctor V64 units. They sued Bung directly as well as specific store retailers in Europe and North America for copyright infringement. Eventually, Nintendo managed to have the courts prohibit the sale of Doctor V64 units in the United States.", "title": "Legal issues" }, { "paragraph_id": 13, "text": "The Doctor V64 implemented text-based menu-driven screens. The menus consisted of white text superimposed over a black background. Utilizing the buttons on the V64 unit, a user would navigate the menus and issue commands. Though the menu was mainly designed for game developers, it is possible to back up cartridges with it (through the use of an unofficial V64 BIOS). Some of the menu items related to game backups were removed from the V64's BIOS near the end of its life due to pressure from Nintendo. These items are only available by obtaining a patched V64 BIOS.", "title": "Main menu" }, { "paragraph_id": 14, "text": "Most early V64 models shipped with a standard 8X IDE CD-ROM drive.
During the manufacturing lifetime of the device, later V64 models shipped with 16X and eventually 20X drives. V64 units could be purchased without a CD-ROM drive. It is possible to replace the drive with a faster IDE CD-ROM unit (such as a 52X model).", "title": "Detailed specifications" }, { "paragraph_id": 15, "text": "Many Doctor V64s shipped internationally were ordered without an installed CD-ROM drive, to save on shipping costs associated with weight, to avoid import duty on the drive, and to allow users to customize the units in response to the ever-increasing speeds of drives available. The variance in the power draw of different manufacturers' drives at different speeds caused issues with disc spin-ups exceeding the wattage rating of the included Bung PSU. This led to users swapping out the Bung PSU for a more powerful model, or selecting low-draw drives (mainly Panasonic drives sometimes badged as Creative).", "title": "Detailed specifications" }, { "paragraph_id": 16, "text": "V64s can read CD-Rs and CD-RWs (provided the installed CD-ROM unit supports rewritable media). Supported media has to be recorded in Mode 1, ISO 9660 format. Doctor V64s only support the 8.3 DOS naming convention. As such, the Joliet file system is not supported.", "title": "Detailed specifications" }, { "paragraph_id": 17, "text": "Depending on the model, V64s came with either 128 megabits (16 MB) or 256 megabits (32 MB) of RAM. Original V64 units shipped with 128 megabits of RAM. V64 units started shipping with 256 megabits when developers started using larger memory carts for their games. Users had the option of buying a memory upgrade from Bung and other re-sellers.", "title": "Detailed specifications" }, { "paragraph_id": 18, "text": "The Doctor V64 uses a 4-pin mini-DIN jack (as used for S-Video) for connecting the power supply cord. Power supplies included with Doctor V64s were very unreliable. Bung replaced the power supply with a sturdier version in later V64 units. Replacing broken power supplies became one of the most common maintenance problems with the V64. It is possible to modify an AT PC power supply for V64 use. Only four cables have to be connected to the V64 for it to function.", "title": "Detailed specifications" } ]
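The storage figures quoted throughout this article mix megabits (Mb) and megabytes (MB); the conversion is simply a division by eight, since there are 8 bits in a byte. A trivial Python sketch confirming the pairings used above:

# Cartridge and RAM sizes are quoted in megabits; dividing by 8
# gives the megabyte figures shown in parentheses in the text.
for megabits in (128, 256, 512):
    print(f"{megabits} megabits = {megabits // 8} MB")

# Expected output:
# 128 megabits = 16 MB
# 256 megabits = 32 MB
# 512 megabits = 64 MB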
The Doctor V64 is a development and backup device made by Bung Enterprises Ltd that is used in conjunction with the Nintendo 64. The Doctor V64 also had the ability to play video CDs and audio CDs. Additionally, it could apply stereo 3D effects to the audio.
2002-02-25T15:51:15Z
2023-11-10T06:43:26Z
[ "Template:Cite magazine", "Template:Dead link", "Template:Webarchive", "Template:Nintendo 64", "Template:Short description", "Template:Multiple issues", "Template:Citation needed", "Template:Reflist", "Template:Cite news", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Doctor_V64
9,093
De Havilland Mosquito
The de Havilland DH.98 Mosquito is a British twin-engined, multirole combat aircraft, introduced during the Second World War. Unusual in that its frame was constructed mostly of wood, it was nicknamed the "Wooden Wonder", or "Mossie". Lord Beaverbrook, Minister of Aircraft Production, nicknamed it "Freeman's Folly", alluding to Air Chief Marshal Sir Wilfrid Freeman, who defended Geoffrey de Havilland and his design concept against orders to scrap the project. In 1941, it was one of the fastest operational aircraft in the world. Originally conceived as an unarmed fast bomber, the Mosquito's use evolved during the war into many roles, including low- to medium-altitude daytime tactical bomber, high-altitude night bomber, pathfinder, day or night fighter, fighter-bomber, intruder, maritime strike, and photo-reconnaissance aircraft. It was also used by the British Overseas Airways Corporation as a fast transport to carry small, high-value cargo to and from neutral countries through enemy-controlled airspace. The crew of two, pilot and navigator, sat side by side. A single passenger could ride in the aircraft's bomb bay when necessary. The Mosquito FB Mk. VI was often flown in special raids, such as Operation Jericho (an attack on Amiens Prison in early 1944), and precision attacks against military intelligence, security, and police facilities (such as Gestapo headquarters). On 30 January 1943, the 10th anniversary of Hitler being made chancellor and the Nazis gaining power, a morning Mosquito attack knocked out the main Berlin broadcasting station while Hermann Göring was speaking, taking his speech off the air. The Mosquito flew with the Royal Air Force (RAF) and other air forces in the European, Mediterranean, and Italian theatres. The Mosquito was also operated by the RAF in the Southeast Asian theatre and by the Royal Australian Air Force based in the Halmaheras and Borneo during the Pacific War. During the 1950s, the RAF replaced the Mosquito with the jet-powered English Electric Canberra. By the early to mid-1930s, de Havilland had built a reputation for innovative high-speed aircraft with the DH.88 Comet racer. Later, the DH.91 Albatross airliner pioneered the composite wood construction used for the Mosquito. The 22-passenger Albatross could cruise at 210 mph (340 km/h) at 11,000 ft (3,400 m), faster than the Handley Page H.P.42 and other biplanes it was replacing. The wooden monocoque construction not only saved weight and compensated for the low power of the de Havilland Gipsy Twelve engines used by this aircraft, but also simplified production and reduced construction time. On 8 September 1936, the British Air Ministry issued Specification P.13/36, which called for a twin-engined medium bomber capable of carrying a bomb load of 3,000 lb (1,400 kg) for 3,000 mi (4,800 km) with a maximum speed of 275 mph (445 km/h) at 15,000 ft (4,600 m); a maximum bomb load of 8,000 lb (3,600 kg) that could be carried over shorter ranges was also specified. Aviation firms entered heavy designs with new high-powered engines and multiple defensive turrets, leading to the production of the Avro Manchester and Handley Page Halifax. In May 1937, as a comparison to P.13/36, George Volkert, the chief designer of Handley Page, put forward the concept of a fast, unarmed bomber. In 20 pages, Volkert planned an aerodynamically clean medium bomber to carry 3,000 lb (1,400 kg) of bombs at a cruising speed of 300 mph (485 km/h). Support existed in the RAF and Air Ministry; Captain R. N.
Liptrot, Research Director Aircraft 3, appraised Volkert's design, calculating that its top speed would exceed that of the new Supermarine Spitfire, but counter-arguments held that although such a design had merit, it would not necessarily be faster than enemy fighters for long. The ministry was also considering using non-strategic materials for aircraft production, which, in 1938, had led to specification B.9/38 and the Armstrong Whitworth Albemarle medium bomber, largely constructed from spruce and plywood attached to a steel-tube frame. The idea of a small, fast bomber gained support at a much earlier stage than is sometimes acknowledged, though the Air Ministry likely envisaged it using light alloy components. Based on his experience with the Albatross, Geoffrey de Havilland believed that a bomber with a good aerodynamic design and smooth, minimal skin area would exceed the P.13/36 specification. Furthermore, adapting the Albatross principles could save time. In April 1938, performance estimates were produced for a twin Rolls-Royce Merlin-powered DH.91, with the Bristol Hercules (radial engine) and Napier Sabre (H-engine) as alternatives. On 7 July 1938, de Havilland wrote to Air Marshal Wilfrid Freeman, the Air Council's member for Research and Development, discussing the specification and arguing that in war, shortages of aluminium and steel would occur, but supplies of wood-based products were "adequate." Although wood is inferior in tension, its strength-to-weight ratio is equal to or better than that of light alloys or steel; hence, this approach was feasible. A follow-up letter to Freeman on 27 July said that the P.13/36 specification could not be met by a twin Merlin-powered aircraft and either the top speed or load capacity would be compromised, depending on which was paramount. For example, a larger, slower, turret-armed aircraft would have a range of 1,500 mi (2,400 km) carrying a 4,000 lb bomb load, with a maximum of 260 mph (420 km/h) at 19,000 ft (5,800 m), and a cruising speed of 230 mph (370 km/h) at 18,000 ft (5,500 m). De Havilland believed that a compromise, including eliminating surplus equipment, would improve matters. On 4 October 1938, de Havilland projected the performance of another design based on the Albatross, powered by two Merlin Xs, with a three-man crew and six or eight forward-firing guns, plus one or two manually operated guns and a tail turret. Based on a total loaded weight of 19,000 lb (8,600 kg), it would have a top speed of 300 mph (480 km/h) and cruising speed of 268 mph (431 km/h) at 22,500 ft (6,900 m). Still believing this could be improved, and after examining more concepts based on the Albatross and the new all-metal DH.95 Flamingo, de Havilland settled on designing a new aircraft that would be aerodynamically clean, wooden, and powered by the Merlin, which offered substantial future development. The new design would be faster than foreseeable enemy fighter aircraft, and could dispense with a defensive armament, which would slow it and make interception or losses to antiaircraft guns more likely. Instead, high speed and good manoeuvrability would make evading fighters and ground fire easier. The lack of turrets simplified production, reduced drag, and reduced production time, with a delivery rate far in advance of competing designs. Without armament, the crew could be reduced to a pilot and navigator.
Whereas contemporary RAF design philosophy favoured well-armed heavy bombers, this mode of design was more akin to the German philosophy of the Schnellbomber. At a meeting in early October 1938 with Geoffrey de Havilland and Charles Walker (de Havilland's chief engineer), the Air Ministry showed little interest, and instead asked de Havilland to build wings for other bombers as a subcontractor. By September 1939, de Havilland had produced preliminary estimates for single- and twin-engined variations of light-bomber designs using different engines, speculating on the effects of defensive armament on their designs. One design, completed on 6 September, was for an aircraft powered by a single 2,000 hp (1,500 kW) Napier Sabre, with a wingspan of 47 ft (14 m) and capable of carrying a 1,000 lb (450 kg) bomb load 1,500 mi (2,400 km). On 20 September, in another letter to Wilfrid Freeman, de Havilland wrote "... we believe that we could produce a twin-engine[d] bomber which would have a performance so outstanding that little defensive equipment would be needed." By 4 October, work had progressed to a twin-engined light bomber with a wingspan of 51 ft (16 m) and powered by Merlin or Griffon engines, the Merlin favoured because of availability. On 7 October 1939, a month into the war, the nucleus of a design team under Eric Bishop moved to the security and secrecy of Salisbury Hall to work on what was later known as the DH.98. For more versatility, Bishop made provision for four 20 mm cannon in the forward half of the bomb bay, under the cockpit, firing via blast tubes and troughs under the fuselage. The DH.98 was too radical for the ministry, which wanted a heavily armed, multirole aircraft, combining medium bomber, reconnaissance, and general-purpose roles, that was also capable of carrying torpedoes. With the outbreak of war, the ministry became more receptive, but was still sceptical about an unarmed bomber. They thought the Germans would produce fighters that were faster than had been expected, and suggested the incorporation of two forward- and two rear-firing machine guns for defence. The ministry also opposed a two-man bomber, wanting at least a third crewman to reduce the work of the others on long flights. The Air Council added further requirements such as remotely controlled guns, a top speed of 275 mph (445 km/h) at 15,000 ft on two-thirds engine power, and a range of 3,000 mi (4,800 km) with a 4,000-lb bomb load. To appease the ministry, de Havilland built mock-ups with a gun turret just aft of the cockpit, but apart from this compromise, de Havilland made no changes. On 12 November, at a meeting considering fast-bomber ideas put forward by de Havilland, Blackburn, and Bristol, Air Marshal Freeman directed de Havilland to produce a fast aircraft, powered initially by Merlin engines, with options of using progressively more powerful engines, including the Rolls-Royce Griffon and the Napier Sabre. Although estimates were presented for a slightly larger Griffon-powered aircraft, armed with a four-gun tail turret, Freeman got the requirement for defensive weapons dropped, and a draft requirement was raised calling for a high-speed, light-reconnaissance bomber capable of 400 mph (645 km/h) at 18,000 ft. On 12 December, the Vice-Chief of the Air Staff, Director General of Research and Development, and the Air Officer Commanding-in-Chief (AOC-in-C) of RAF Bomber Command met to finalise the design and decide how to fit it into the RAF's aims.
The AOC-in-C would not accept an unarmed bomber, but insisted on its suitability for reconnaissance missions with F8 or F24 cameras. After company representatives, the ministry, and the RAF's operational commands examined a full-scale mock-up at Hatfield on 29 December 1939, the project received backing. This was confirmed on 1 January 1940, when Freeman chaired a meeting with Geoffrey de Havilland, John Buchanan (Deputy of Aircraft Production), and John Connolly (Buchanan's chief of staff). De Havilland claimed the DH.98 was the "fastest bomber in the world ... it must be useful". Freeman supported it for RAF service, ordering a single prototype for an unarmed bomber to specification B.1/40/dh, which called for a light bomber/reconnaissance aircraft powered by two 1,280 hp (950 kW) Rolls-Royce RM3SM (an early designation for the Merlin 21) with ducted radiators, capable of carrying a 1,000 lb (450 kg) bomb load. The aircraft was to have a speed of 400 mph (640 km/h) at 24,000 ft (7,300 m) and a cruising speed of 325 mph (525 km/h) at 26,500 ft (8,100 m) with a range of 1,500 mi (2,400 km) at 25,000 ft (7,600 m) on full tanks. Maximum service ceiling was to be 32,000 ft (9,800 m). On 1 March 1940, Air Marshal Roderic Hill issued a contract under Specification B.1/40, for 50 bomber-reconnaissance variants of the DH.98; this contract included the prototype, which was given the factory serial E-0234. In May 1940, specification F.21/40 was issued, calling for a long-range fighter armed with four 20 mm cannon and four .303 machine guns in the nose, after which de Havilland was authorised to build a prototype of a fighter version of the DH.98. After debate, it was decided that this prototype, given the military serial number W4052, was to carry airborne interception (AI) Mk IV equipment as a day and night fighter. By June 1940, the DH.98 had been named "Mosquito". Having the fighter variant kept the Mosquito project alive, as doubts remained within the government and Air Ministry regarding the usefulness of an unarmed bomber, even after the prototype had shown its capabilities. With design of the DH.98 started, mock-ups were built, the most detailed at Salisbury Hall, where E-0234 was later constructed. Initially, the concept was for the crew to be enclosed in the fuselage behind a transparent nose (similar to the Bristol Blenheim or Heinkel He 111H), but this was quickly altered to a more solid nose with a conventional canopy. Work was cancelled again after the evacuation of the British Army from France, when Lord Beaverbrook, as Minister of Aircraft Production, concentrating production on aircraft types for the defence of the UK, decided that no production capacity remained for aircraft like the DH.98, which was not expected to be in service until early 1942. Beaverbrook told Air Vice-Marshal Freeman that work on the project should stop, but he did not issue a specific instruction, and Freeman ignored the request. In June 1940, however, Lord Beaverbrook and the Air Staff ordered that production should concentrate on five existing types, namely the Supermarine Spitfire, Hawker Hurricane fighter, Vickers Wellington, Armstrong-Whitworth Whitley, and Bristol Blenheim bombers. Work on the DH.98 prototype stopped. Apparently, the project shut down when the design team were denied materials for the prototype. The Mosquito was only reinstated as a priority in July 1940, after de Havilland's general manager, L.C.L. Murray, promised Lord Beaverbrook 50 Mosquitoes by December 1941.
This was only after Beaverbrook was satisfied that Mosquito production would not hinder de Havilland's primary work of producing Tiger Moth and Airspeed Oxford trainers, repairing Hurricanes, and manufacturing Merlin engines under licence. In promising Beaverbrook such a number by the end of 1941, de Havilland was taking a gamble, because they were unlikely to be built in such a limited time. As it transpired, only 20 aircraft were built in 1941, but the other 30 were delivered by mid-March 1942. During the Battle of Britain, interruptions to production due to air raid warnings caused nearly a third of de Havilland's factory time to be lost. Nevertheless, work on the prototype went ahead quickly at Salisbury Hall, and E-0234 was completed by November 1940. In the aftermath of the Battle of Britain, the original order was changed to 20 bomber variants and 30 fighters. Whether the fighter version should have dual or single controls, or should carry a turret, was still uncertain, so three prototypes were built: W4052, W4053, and W4073. The second and third, both turret-armed, were later disarmed, to become the prototypes for the T.III trainer. This caused some delays, since half-built wing components had to be strengthened for the required higher combat loading. The nose sections also had to be changed from a design with a clear perspex bomb-aimer's position to one with a solid nose housing four .303 machine guns and their ammunition. On 3 November 1940, the prototype aircraft, painted in "prototype yellow" and still coded E-0234, was dismantled, transported by road to Hatfield and placed in a small, blast-proof assembly building. Two Merlin 21 two-speed, single-stage supercharged engines were installed, driving three-bladed de Havilland Hydromatic constant-speed controllable-pitch propellers. Engine runs were made on 19 November. On 24 November, taxiing trials were carried out by Geoffrey de Havilland Jr., the de Havilland test pilot. On 25 November, the aircraft made its first flight, piloted by de Havilland Jr., accompanied by John E. Walker, the chief engine installation designer. For this maiden flight, E-0234, weighing 14,150 lb (6,420 kg), took off from the grass airstrip at the Hatfield site. The takeoff was reported as "straightforward and easy" and the undercarriage was not retracted until a considerable altitude was attained. The aircraft reached 220 mph (355 km/h), with the only problem being the undercarriage doors – which were operated by bungee cords attached to the main undercarriage legs – that remained open by some 12 in (300 mm) at that speed. This problem persisted for some time. The left wing of E-0234 also had a tendency to drag to port slightly, so a rigging adjustment, i.e., a slight change in the angle of the wing, was carried out before further flights.
To smooth the airflow and deflect it from forcefully striking the tailplane, non-retractable slots fitted to the inner engine nacelles and to the leading edge of the tailplane were tested. These slots, and wing-root fairings fitted to the forward fuselage and leading edge of the radiator intakes, stopped some of the vibration experienced, but did not cure the tailplane buffeting. In February 1941, buffeting was eliminated by incorporating triangular fillets on the trailing edge of the wings and lengthening the nacelles, the trailing edge of which curved up to fair into the fillet some 10 in (250 mm) behind the wing's trailing edge; this meant the flaps had to be divided into inboard and outboard sections.

With the buffeting problems largely resolved, John Cunningham flew W4050 on 9 February 1941. He was greatly impressed by the "lightness of the controls and generally pleasant handling characteristics". Cunningham concluded that when the type was fitted with AI equipment, it might replace the Bristol Beaufighter night fighter.

During its trials on 16 January 1941, W4050 outpaced a Spitfire at 6,000 ft (1,800 m). The original estimates had been that, although the Mosquito prototype had twice the surface area and over twice the weight of the Spitfire Mk.II, it also had twice its power, and so would end up being 20 mph (30 km/h) faster. Over the next few months, W4050 surpassed this estimate, easily beating the Spitfire Mk.II in testing at RAF Boscombe Down in February 1941, reaching a top speed of 392 mph (631 km/h) at 22,000 ft (6,700 m) altitude, compared to a top speed of 360 mph (580 km/h) at 19,500 ft (5,900 m) for the Spitfire.

On 19 February, official trials began at the Aeroplane and Armament Experimental Establishment (AAEE) based at Boscombe Down, although the de Havilland representative was surprised by a delay in starting the tests. On 24 February, as W4050 taxied across the rough airfield, the tailwheel jammed, leading to the fuselage fracturing. Repairs were made by early March, using part of the fuselage of the photo-reconnaissance prototype W4051. In spite of this setback, the Initial Handling Report 767 issued by the AAEE stated, "The aeroplane is pleasant to fly ... aileron control light and effective ..." The maximum speed reached was 388 mph (624 km/h) at 22,000 ft (6,700 m), with an estimated maximum ceiling of 34,000 ft (10,000 m) and a maximum rate of climb of 2,880 ft/min (880 m/min) at 11,500 ft (3,500 m).

W4050 continued to be used for various test programmes, as the experimental "workhorse" for the Mosquito family. In late October 1941, it returned to the factory to be fitted with Merlin 61s, the first production Merlins fitted with a two-speed, two-stage supercharger. The first flight with the new engines was on 20 June 1942. W4050 recorded a maximum speed of 428 mph (689 km/h) at 28,500 ft (8,700 m) (fitted with straight-through air intakes with snow guards, engines in full supercharger gear) and 437 mph (703 km/h) at 29,200 ft (8,900 m) without snow guards. In October 1942, in connection with development work on the NF Mk.XV, W4050 was fitted with extended wingtips, increasing the span to 59 ft 2 in (18.03 m), first flying in this configuration on 8 December. Fitted with high-altitude-rated, two-stage, two-speed Merlin 77s, it reached 439 mph (707 km/h) in December 1943. Soon after these flights, W4050 was grounded and scheduled to be scrapped, but instead served as an instructional airframe at Hatfield.
In September 1958, W4050 was returned to the Salisbury Hall hangar where it was built, restored to its original configuration, and became one of the primary exhibits of the de Havilland Aircraft Heritage Centre.

W4051, which was designed from the outset to be the prototype for the photo-reconnaissance versions of the Mosquito, was slated to make its first flight in early 1941. However, the fuselage fracture in W4050 meant that W4051's fuselage was used as a replacement; W4051 was then rebuilt using a production-standard fuselage and first flew on 10 June 1941. This prototype continued to use the short engine nacelles, single-piece trailing-edge flaps, and the 19 ft 5.5 in (5.931 m) "No. 1" tailplane used by W4050, but had production-standard 54 ft 2 in (16.51 m) wings and became the only Mosquito prototype to fly operationally.

Construction of the fighter prototype, W4052, was also carried out at the secret Salisbury Hall facility. It was powered by 1,460 hp (1,090 kW) Merlin 21s, and had an altered canopy structure with a flat, bullet-proof windscreen; the solid nose mounted four .303 British Browning machine guns, with their ammunition boxes accessible via a large, sideways-hinged panel. Four 20-mm Hispano Mk.II cannon were housed in a compartment under the cockpit floor with the breeches projecting into the bomb bay, and the automatic bomb bay doors were replaced by manually operated doors incorporating cartridge ejector chutes. As a day and night fighter, prototype W4052 was equipped with AI Mk IV equipment, complete with an "arrowhead" transmission aerial mounted between the central Brownings and receiving aerials through the outer wing tips, and it was painted in black RDM2a "Special Night" finish. It was also the first prototype constructed with the extended engine nacelles. W4052 was later tested with other modifications, including bomb racks, drop tanks, barrage balloon cable cutters in the leading edge of the wings, Hamilton airscrews and braking propellers, drooping aileron systems that enabled steep approaches, and a larger rudder tab. The prototype continued to serve as a test machine until it was scrapped on 28 January 1946. W4055 flew the first operational Mosquito sortie on 17 September 1941.

During flight testing, the Mosquito prototypes were modified to test a number of configurations. W4050 was fitted with a turret behind the cockpit for drag tests, after which the idea was abandoned in July 1941. W4052 was fitted with the first version of the Youngman Frill airbrake. The frill was mounted around the fuselage behind the wing and was opened by bellows and venturi effect to provide rapid deceleration during interceptions. It was tested between January and August 1942, but was abandoned when lowering the undercarriage was found to have the same effect with less buffeting.

The Air Ministry authorised mass production plans on 21 June 1941, by which time the Mosquito had become one of the world's fastest operational aircraft. It ordered 19 photo-reconnaissance (PR) models and 176 fighters. A further 50 were unspecified; in July 1941, these were confirmed to be unarmed fast bombers. By the end of January 1942, contracts had been awarded for 1,378 Mosquitoes of all variants, including 20 T.III trainers and 334 FB.VI fighter-bombers. Another 400 were to be built by de Havilland Canada. On 20 April 1941, W4050 was demonstrated to Lord Beaverbrook, the Minister of Aircraft Production.
The Mosquito made a series of flights, including one rolling climb on one engine. Also present were US General Henry H. Arnold and his aide Major Elwood Quesada, who wrote "I ... recall the first time I saw the Mosquito as being impressed by its performance, which we were aware of. We were impressed by the appearance of the airplane that looks fast usually is fast, and the Mosquito was, by the standards of the time, an extremely well-streamlined airplane, and it was highly regarded, highly respected."

The trials set up future production plans between Britain, Australia, and Canada. Six days later, Arnold returned to America with a full set of manufacturer's drawings. As a result of his report, five companies (Beech, Curtiss-Wright, Fairchild, Fleetwings, and Hughes) were asked to evaluate the de Havilland data. The report by Beech Aircraft summed up the general view: "It appears as though this airplane has sacrificed serviceability, structural strength, ease of construction and flying characteristics in an attempt to use construction material which is not suitable for the manufacture of efficient airplanes." The Americans did not pursue the proposal for licensed production, the consensus arguing that the Lockheed P-38 Lightning could fulfill the same duties. However, Arnold urged the United States Army Air Forces (USAAF) to evaluate the design even if they would not adopt it. On 12 December 1941, after the attack on Pearl Harbor, the USAAF requested one airframe for this purpose.

While timber construction was considered outmoded by some, de Havilland claimed that their successes with techniques used for the DH.91 Albatross could lead to a fast, light bomber using monocoque-sandwich shell construction. Arguments in favour of this included speed of prototyping, rapid development, minimisation of jig-building time, and employment of a separate category of workforce. The ply-balsa-ply monocoque fuselage and one-piece wings with doped fabric covering would give excellent aerodynamic performance and low weight, combined with strength and stiffness. At the same time, the design team had to fight conservative Air Ministry views on defensive armament. Guns and gun turrets, favoured by the ministry, would impair the aircraft's aerodynamic properties and reduce speed and manoeuvrability, in the opinion of the designers. Whilst submitting these arguments, Geoffrey de Havilland funded his private venture until a very late stage. The project was a success beyond all expectations. The initial bomber and photo-reconnaissance versions were extremely fast, whilst the armament of subsequent variants might be regarded as primarily offensive.

The most-produced variant, designated the FB Mk. VI (Fighter-bomber Mark 6), was powered by two Merlin Mk.23 or Mk.25 engines driving three-bladed de Havilland hydromatic propellers. The typical fixed armament for an FB Mk. VI was four Browning .303 machine guns and four 20-mm Hispano cannons, while the offensive load consisted of up to 2,000 lb (910 kg) of bombs, or eight RP-3 unguided rockets.

The design was noted for light and effective control surfaces that provided good manoeuvrability, but required that the rudder not be used aggressively at high speeds. Poor aileron control at low speeds when landing and taking off was also a problem for inexperienced crews. For flying at low speeds, the flaps had to be set at 15°, speed reduced to 200 mph (320 km/h), and rpm set to 2,650; the speed could then be reduced to an acceptable 150 mph (240 km/h).
For cruising, the optimum speed for obtaining maximum range was 200 mph (320 km/h) at 17,000 lb (7,700 kg) weight. The Mosquito had a high stalling speed of 120 mph (190 km/h) with undercarriage and flaps raised. When both were lowered, the stalling speed decreased from 120 to 100 mph (190 to 160 km/h). Stall speed at normal approach angle and conditions was 100 to 110 mph (160 to 180 km/h). Warning of the stall was given by buffeting and would occur 12 mph (19 km/h) before the stall was reached. The conditions and impact of the stall were not severe. The wing did not drop unless the control column was pulled back. The nose drooped gently and recovery was easy.

Early in the Mosquito's operational life, the intake shrouds that were to cool the exhausts on production aircraft overheated. Flame dampers prevented exhaust glow on night operations, but they had an effect on performance. Multiple ejector and open-ended exhaust stubs helped solve the problem and were used in the PR.VIII, B.IX, and B.XVI variants. This increased speed performance in the B.IX alone by 10 to 13 mph (16 to 21 km/h).

The oval-section fuselage was a frameless monocoque shell built in two vertically separate halves formed over a mahogany or concrete mould. Pressure was applied with band clamps. Some of the 1/2 to 3/4 in (13 to 19 mm) shell sandwich skins comprised 3/32 in (2.4 mm) birch three-ply outers, with 7/16 in (11 mm) cores of Ecuadorean balsa. In many generally smaller but vital areas, such as around apertures and attachment zones, stronger timbers, including aircraft-quality spruce, replaced the balsa core. The main areas of the sandwich skin were only 0.55 in (14 mm) thick. Together with various forms of wood reinforcement, often of laminated construction, the sandwich skin gave great stiffness and torsional resistance. The separate fuselage halves speeded construction, permitting access by personnel working in parallel with others as the work progressed. Work on the separate half-fuselages included installation of control mechanisms and cabling. Screwed inserts into the inner skins that would be under stress in service were reinforced using round shear plates made from a fabric-Bakelite composite.

Transverse bulkheads were also compositely built up with several species of timber, plywood, and balsa. Seven vertically halved bulkheads were installed within each moulded fuselage shell before the main "boxing up" operation. Bulkhead number seven was especially strongly built, since it carried the fitments and transmitted the aerodynamic loadings for the tailplane and rudder. The fuselage had a large ventral section cut-out, strongly reinforced, that allowed the fuselage to be lowered onto the wing centre-section at a later stage of assembly.

For early production aircraft, the structural assembly adhesive was casein-based. At a later stage, this was replaced by "Aerolite", a synthetic urea-formaldehyde type, which was more durable. To provide for the edge joints for the fuselage halves, zones near the outer edges of the shells had their balsa sandwich cores replaced by much stronger inner laminations of birch plywood. For the bonding together of the two halves ("boxing up"), a longitudinal cut was machined into these edges. The profile of this cut was a form of V-groove. Part of the edge bonding process also included adding further longitudinal plywood lap strips on the outside of the shells. The half bulkheads of each shell were bonded to their corresponding pair in a similar way.
Two laminated wooden clamps were used in the aft portion of the fuselage to provide support during this complex gluing work. The resulting large structural components had to be kept completely still and held in the correct environment until the glue cured. For finishing, a covering of doped madapollam (a fine, plain-woven cotton) fabric was stretched tightly over the shell, and several coats of red, followed by silver, dope were added, followed by the final camouflage paint.

The all-wood wing pairs comprised a single structural unit throughout the wingspan, with no central longitudinal joint. Instead, the spars ran from wingtip to wingtip. There was a single continuous main spar and another continuous rear spar. Because of the combination of dihedral with the forward sweep of the trailing edges of the wings, this rear spar was one of the most complex units to laminate and to finish machining after the bonding and curing. It had to produce the correct three-dimensional tilt in each of two planes. It was also designed and made to taper from the wing roots towards the wingtips. Both principal spars were of ply box construction, using in general 0.25 in plywood webs with laminated spruce flanges, plus a number of additional reinforcements and special details.

Spruce and plywood ribs were connected with gusset joints. Some heavy-duty ribs contained pieces of ash and walnut, as well as a special five-ply that included veneers laid up at 45°. The upper skin construction was in two layers of 0.25 in five-ply birch, separated by Douglas fir stringers running in the span-wise direction. The wings were covered with madapollam fabric and doped in a similar manner to the fuselage. The wing was installed into the roots by means of four large attachment points. The engine radiators were fitted in the inner wing, just outboard of the fuselage on either side; this buried installation gave less drag than external radiators would have. The radiators themselves were split into three sections: an oil cooler section outboard, the middle section forming the coolant radiator, and the inboard section serving the cabin heater.

The nacelles were mostly wood, although, for strength, the engine mounts were all metal, as were the undercarriage parts. Engine mounts of welded steel tube were added, along with simple landing gear oleos filled with rubber blocks. Wood was used to carry only in-plane loads, with metal fittings used for all triaxially loaded components such as landing gear, engine mounts, control-surface mounting brackets, and the wing-to-fuselage junction. The outer leading wing edge had to be brought 22 in (56 cm) further forward to accommodate this design. The main tail unit was built entirely of wood. The control surfaces, the rudder, and elevator were aluminium-framed and fabric-covered. The total weight of metal castings and forgings used in the aircraft was only 280 lb (130 kg).

In November 1944, several crashes occurred in the Far East. At first, these were thought to be the result of wing-structure failures. The casein glue, it was said, cracked when exposed to extreme heat and/or monsoon conditions, causing the upper surfaces to "lift" from the main spar. An investigating team led by Major Hereward de Havilland travelled to India and produced a report in early December 1944 stating, "the accidents were not caused by the deterioration of the glue, but by shrinkage of the airframe during the wet monsoon season".
However, a later inquiry by Cabot & Myers firmly attributed the accidents to faulty manufacture, and this was confirmed by a further investigation team from the Ministry of Aircraft Production at Defford, which found faults in six Mosquito marks (all built at de Havilland's Hatfield and Leavesden plants). The defects were similar, and none of the aircraft had been exposed to monsoon conditions or termite attack. The investigators concluded that construction defects occurred at the two plants. They found that the "... standard of glueing ... left much to be desired." Records at the time showed that accidents caused by "loss of control" were three times more frequent on Mosquitoes than on any other type of aircraft. The Air Ministry forestalled any loss of confidence in the Mosquito by holding to Major de Havilland's initial investigation in India, which found that the accidents were caused "largely by climate". To solve the problem of seepage into the interior, a strip of plywood was set along the span of the wing to seal the entire length of the skin joint.

The fuel systems gave the Mosquito good range and endurance, using up to nine fuel tanks. Two outer wing tanks each contained 58 imp gal (70 US gal; 260 L) of fuel. These were complemented by two inner wing fuel tanks, each containing 143 imp gal (172 US gal; 650 L), located between the wing root and engine nacelle. In the central fuselage were twin fuel tanks mounted between bulkheads number two and three, aft of the cockpit. In the FB.VI, these tanks contained 25 imp gal (30 US gal; 110 L) each, while in the B.IV and other unarmed Mosquitoes each of the two centre tanks contained 68 imp gal (82 US gal; 310 L). Both the inner wing and the fuselage tanks were listed as the "main tanks", and the total internal fuel load of 452 imp gal (545 US gal; 2,055 L) was initially deemed appropriate for the type. In addition, the FB Mk. VI could have larger fuselage tanks, increasing the capacity to 63 imp gal (76 US gal; 290 L). Drop tanks of 50 imp gal (60 US gal; 230 L) or 100 imp gal (120 US gal; 450 L) could be mounted under each wing, increasing the total fuel load to 615 or 715 imp gal (739 or 859 US gal; 2,800 or 3,250 L).

The design of the Mk.VI allowed for a provisional long-range fuel tank to increase range for action over enemy territory, for the installation of bomb release equipment specific to depth charges for strikes against enemy shipping, or for the simultaneous use of rocket projectiles along with a 100 imp gal (120 US gal; 450 L) drop tank under each wing supplementing the main fuel cells. The FB.VI had a wingspan of 54 ft 2 in (16.51 m) and a length (over guns) of 41 ft 2 in (12.55 m). It had a maximum speed of 378 mph (608 km/h) at 13,200 ft (4,000 m). Maximum take-off weight was 22,300 lb (10,100 kg) and the range of the aircraft was 1,120 mi (1,800 km), with a service ceiling of 26,000 ft (7,900 m).

To reduce fuel vaporisation at the high altitudes flown by photographic reconnaissance variants, the central and inner wing tanks were pressurised. The pressure venting cock located behind the pilot's seat controlled the pressure valve. As the altitude increased, the valve increased the volume applied by a pump. This system was extended to include field modifications of the fuel tank system.

The engine oil tanks were in the engine nacelles. Each nacelle contained a 15 imp gal (18 US gal; 68 L) oil tank, including a 2.5 imp gal (3.0 US gal; 11 L) air space. The oil tanks themselves had no separate coolant controlling systems.
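The tankage figures quoted above can be cross-checked with a few lines of arithmetic. A minimal sketch in Python (tank counts and capacities are the FB.VI values given in the preceding paragraphs; the imperial-gallon-to-litre factor is exact) reproduces the quoted 452 imp gal internal total:

```python
# Cross-check of the FB.VI internal tankage figures quoted above.
IMP_GAL_TO_L = 4.54609  # exact conversion factor

tanks = {
    "outer wing":      (2, 58),   # two tanks of 58 imp gal each
    "inner wing":      (2, 143),  # two tanks of 143 imp gal each
    "centre fuselage": (2, 25),   # FB.VI figure; B.IV centre tanks held 68 imp gal
}

total_gal = sum(count * capacity for count, capacity in tanks.values())
print(f"total internal fuel: {total_gal} imp gal")                 # 452, as quoted
print(f"                   = {total_gal * IMP_GAL_TO_L:,.0f} L")   # 2,055 L, as quoted
```

The "main tanks" proper (inner wing plus fuselage) account for 336 imp gal of that total, with the outer wing tanks supplying the remainder.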
The coolant header tank was in the forward nacelle, behind the propeller. The remaining coolant systems were controlled by the coolant radiator shutters in the forward inner wing compartment, between the nacelle and the fuselage and behind the main engine cooling radiators, which were fitted in the leading edge. Electro-pneumatically operated radiator shutters directed and controlled airflow through the ducts and into the coolant valves, to predetermined temperatures.

Electrical power came from a 24 volt DC generator on the starboard (No. 2) engine and an alternator on the port engine, which also supplied AC power for radios. The radiator shutters, supercharger gear change, gun camera, bomb bay, bomb/rocket release and all the other crew-controlled instruments were powered by a 24 V battery. The radio communication devices included VHF and HF communications, GEE navigation, and IFF and G.P. devices. The electric generators also powered the fire extinguishers. Located on the starboard side of the cockpit, the switches would operate automatically in the event of a crash. In flight, a warning light would flash to indicate a fire, should the pilot not already be aware of it. In later models, to save liquids and engine clean-up time in the event of a belly landing, the fire extinguishers were changed to semi-automatic triggers.

The main landing gear, housed in the nacelles behind the engines, was raised and lowered hydraulically. The main landing gear shock absorbers were de Havilland-manufactured and used a system of rubber in compression, rather than hydraulic oleos, with twin pneumatic brakes for each wheel. The Dunlop-Marstrand anti-shimmy tailwheel was also retractable.

The de Havilland Mosquito operated in many roles, performing medium bomber, reconnaissance, tactical strike, anti-submarine warfare, shipping attack and night fighter duties until the end of the war. In July 1941, the first production Mosquito, W4051 (a production fuselage combined with some prototype flying surfaces – see Prototypes and test flights), was sent to No. 1 Photographic Reconnaissance Unit (PRU) at RAF Benson. The secret reconnaissance flights of this aircraft were the first operational missions of the Mosquito. In 1944, the journal Flight gave 19 September 1941 as the date of the first PR mission, at an altitude "of some 20,000 ft".

On 15 November 1941, 105 Squadron, RAF, took delivery at RAF Swanton Morley, Norfolk, of the first operational Mosquito Mk. B.IV bomber, serial no. W4064. Throughout 1942, 105 Squadron, based first at RAF Horsham St. Faith and then, from 29 September, at RAF Marham, undertook daylight low-level and shallow dive attacks. Apart from the Oslo and Berlin raids, the strikes were mainly on industrial and infrastructure targets in the occupied Netherlands and Norway, France, and northern and western Germany. The crews faced deadly flak and fighters, particularly Focke-Wulf Fw 190s, which they called "snappers". Germany still controlled continental airspace and the Fw 190s were often already airborne and at an advantageous altitude. Collisions within the formations also caused casualties. It was the Mosquito's excellent handling capabilities, rather than pure speed, that facilitated successful evasions.

The Mosquito was first announced publicly on 26 September 1942, after the Oslo Mosquito raid of 25 September. It was featured in The Times on 28 September, and the next day the newspaper published two captioned photographs illustrating the bomb strikes and damage.
On 6 December 1942, Mosquitoes from Nos. 105 and 139 Squadrons made up part of the bomber force used in Operation Oyster, the large No. 2 Group raid against the Philips works at Eindhoven. From mid-1942 to mid-1943, Mosquito bombers flew high-speed, medium- and low-altitude daylight missions against factories, railways and other pinpoint targets in Germany and German-occupied Europe. From June 1943, Mosquito bombers were formed into the Light Night Striking Force to guide RAF Bomber Command heavy bomber raids and as "nuisance" bombers, dropping Blockbuster bombs – 4,000 lb (1,800 kg) "cookies" – in high-altitude, high-speed raids that German night fighters were almost powerless to intercept.

As a night fighter from mid-1942, the Mosquito intercepted Luftwaffe raids on Britain, notably those of Operation Steinbock in 1944. Starting in July 1942, Mosquito night-fighter units raided Luftwaffe airfields. As part of 100 Group, it was flown as a night fighter and as an intruder supporting Bomber Command's heavy bombers, helping to reduce bomber losses during 1944 and 1945.

The Mosquito fighter-bomber served as a strike aircraft in the Second Tactical Air Force (2TAF) from its inception on 1 June 1943. The main objective was to prepare for the invasion of occupied Europe a year later. In Operation Overlord, three Mosquito FB Mk. VI wings flew close air support for the Allied armies in co-operation with other RAF units equipped with the North American B-25 Mitchell medium bomber. In the months between the foundation of 2TAF and its duties from D-Day onwards, vital training was interspersed with attacks on V-1 flying bomb launch sites.

In another example of the daylight precision raids carried out by the Mosquitoes of Nos. 105 and 139 Squadrons, on 30 January 1943, the 10th anniversary of the Nazis' seizure of power, a morning Mosquito attack knocked out the main Berlin broadcasting station while Luftwaffe chief Reichsmarschall Hermann Göring was speaking, putting his speech off the air. A second sortie in the afternoon inconvenienced another speech, by Propaganda Minister Joseph Goebbels. Lecturing a group of German aircraft manufacturers, Göring said:

In 1940 I could at least fly as far as Glasgow in most of my aircraft, but not now! It makes me furious when I see the Mosquito. I turn green and yellow with envy. The British, who can afford aluminium better than we can, knock together a beautiful wooden aircraft that every piano factory over there is building, and they give it a speed which they have now increased yet again. What do you make of that? There is nothing the British do not have. They have the geniuses and we have the nincompoops. After the war is over I'm going to buy a British radio set – then at least I'll own something that has always worked.

During this daylight-raiding phase, Nos. 105 and 139 Squadrons flew 139 combat operations and aircrew losses were high. Even the losses incurred in the squadrons' dangerous Blenheim era were exceeded in percentage terms. The Roll of Honour shows 51 aircrew deaths from the end of May 1942 to April 1943. In the corresponding period, crews gained three Mentions in Despatches, two DFMs and three DFCs. The low-level daylight attacks finished on 27 May 1943 with strikes on the Schott glass and Zeiss instrument works, both in Jena. Subsequently, when low-level precision attacks required Mosquitoes, they were allotted to squadrons operating the FB.VI version. Examples include the Aarhus air raid and Operation Jericho.
Since the beginning of the year, the German fighter force had become seriously overstretched. In April 1943, in response to the "political humiliation" caused by the Mosquito, Göring ordered the formation of special Luftwaffe units (Jagdgeschwader 25, commanded by Oberstleutnant Herbert Ihlefeld, and Jagdgeschwader 50, under Major Hermann Graf) to combat the Mosquito attacks, though these units, which were "little more than glorified squadrons", were unsuccessful against the elusive RAF aircraft. Post-war German histories also indicate that there was a belief within the Luftwaffe that Mosquito aircraft "gave only a weak radar signal".

The first Mosquito squadron to be equipped with Oboe (navigation) was No. 109, based at RAF Wyton, after working as an experimental unit at RAF Boscombe Down. It used Oboe in anger for the first time on 31 December 1942 and 1 January 1943, target-marking for a force of heavy bombers attacking Düsseldorf. On 1 June, the two pioneering squadrons joined No. 109 Squadron in the re-formed No. 8 Group RAF (Bomber Command). Initially they were engaged in moderately high-altitude (about 10,000 ft (3,000 m)) night bombing, with 67 trips during that summer, mainly to Berlin. Soon after, Nos. 105 and 139 Squadron bombers were widely used by the RAF Pathfinder Force, marking targets for the main night-time strategic bombing force.

In what were, initially, diversionary "nuisance raids", Mosquito bombers dropped 4,000 lb Blockbuster bombs, or "cookies". Particularly after the introduction of H2S (radar) in some Mosquitoes, these raids carrying larger bombs succeeded to the extent that they provided a significant additional form of attack to the large formations of "heavies". Later in the war, there were a significant number of all-Mosquito raids on big German cities, involving up to 100 or more aircraft. On the night of 20/21 February 1945, for example, Mosquitoes of No. 8 Group mounted the first of 36 consecutive night raids on Berlin.

From 1943, Mosquitoes with RAF Coastal Command attacked Kriegsmarine U-boats and intercepted transport ship concentrations. After Operation Overlord, the U-boat threat in the Western Approaches decreased fairly quickly, but correspondingly the Norwegian and Danish waters posed greater dangers; hence the RAF Coastal Command Mosquitoes were moved to Scotland to counter this threat. The Strike Wing at Banff stood up in September 1944 and comprised Mosquito aircraft of Nos. 143, 144, 235 and 248 Squadrons Royal Air Force and No. 333 Squadron Royal Norwegian Air Force.

Despite an initially high loss rate, the Mosquito bomber variants ended the war with the lowest losses of any aircraft in RAF Bomber Command service. The Mosquito also proved a very capable night fighter. Some of the most successful RAF pilots flew these variants: for example, Wing Commander Branse Burbridge claimed 21 kills. Mosquitoes of No. 100 Group RAF acted as night intruders operating at high level in support of the Bomber Command "heavies", to counter the enemy tactic of merging into the bomber stream, which, towards the end of 1943, was causing serious Allied losses. These RCM (radio countermeasures) aircraft were fitted with a device called "Serrate" to allow them to track down German night fighters from their Lichtenstein B/C (low-UHF-band) and Lichtenstein SN-2 (lower end of the VHF FM broadcast band) radar emissions, as well as a device named "Perfectos" that tracked German IFF signals.
These methods were responsible for the destruction of 257 German aircraft from December 1943 to April 1945. Mosquito fighters from all units accounted for 487 German aircraft during the war, the vast majority of which were night fighters.

One Mosquito is listed as belonging to the German secret operations unit Kampfgeschwader 200, which tested, evaluated and sometimes clandestinely operated captured enemy aircraft during the war. The aircraft was listed on the order of battle of Versuchsverband OKL's 2. Staffel, Stab Gruppe, on 10 November and 31 December 1944. However, on both lists, the Mosquito is listed as unserviceable.

The Mosquito flew its last official European war mission on 21 May 1945, when Mosquitoes of 143 Squadron and 248 Squadron RAF were ordered to continue to hunt German submarines that might be tempted to continue the fight; instead of submarines, all the Mosquitoes encountered were passive E-boats. The last operational RAF Mosquitoes were the Mosquito TT.35s, which were finally retired from No. 3 Civilian Anti-Aircraft Co-Operation Unit (CAACU) in May 1963.

In 1947–49, up to 180 Canadian surplus Mosquitoes flew many operations for the Nationalist Chinese under Chiang Kai-shek in the civil war against Communist forces. Pilots from three squadrons of Mosquitoes claimed to have sunk or damaged 500 ships during one invasion attempt. As the Communists assumed control, the remaining aircraft were evacuated to Formosa, where they flew missions against shipping.

Until the end of 1942, the RAF always used Roman numerals (I, II, ...) for mark numbers; 1943–1948 was a transition period during which new aircraft entering service were given Arabic numerals (1, 2, ...) for mark numbers, but older aircraft retained their Roman numerals. From 1948 onwards, Arabic numerals were used exclusively.

Three prototypes were built, each with a different configuration. The first to fly was W4050 on 25 November 1940, followed by the fighter W4052 on 15 May 1941 and the photo-reconnaissance prototype W4051 on 10 June 1941. W4051 later flew operationally with 1 Photographic Reconnaissance Unit (1 PRU).

A total of 10 Mosquito PR Mk.Is were built, four of them "long range" versions equipped with a 151 imp gal (690 L) overload fuel tank in the fuselage. The contract called for 10 of the PR Mk.I airframes to be converted to B Mk.IV Series 1s. All of the PR Mk.Is, and the B Mk.IV Series 1s, had the original short engine nacelles and short-span (19 ft 5.5 in) tailplanes. Their engine cowlings incorporated the original pattern of integrated exhaust manifolds, which, after relatively brief flight time, had a troublesome habit of burning and blistering the cowling panels. The first operational sortie by a Mosquito was made by a PR Mk.I, W4055, on 17 September 1941; during this sortie the unarmed Mosquito PR.I evaded three Messerschmitt Bf 109s at 23,000 ft (7,000 m). Powered by two Merlin 21s, the PR Mk.I had a maximum speed of 382 mph (615 km/h), a cruise speed of 255 mph (410 km/h), a ceiling of 35,000 ft (11,000 m), a range of 2,180 nmi (4,040 km), and a climb rate of 2,850 ft (870 m) per minute.

Over 30 Mosquito B Mk.IV bombers were converted into the PR Mk.IV photo-reconnaissance aircraft. The first operational flight by a PR Mk.IV was made by DK284 in April 1942. The Mosquito PR Mk.VIII, built as a stopgap pending the introduction of the refined PR Mk.IX, was the next photo-reconnaissance version.
The five VIIIs were converted from B Mk.IVs and became the first operational Mosquito version to be powered by two-stage, two-speed supercharged engines, using 1,565 hp (1,167 kW) Rolls-Royce Merlin 61s in place of Merlin 21/22s. The first PR Mk.VIII, DK324, first flew on 20 October 1942. The PR Mk.VIII had a maximum speed of 436 mph (702 km/h), an economical cruise speed of 295 mph (475 km/h) at 20,000 ft and 350 mph (560 km/h) at 30,000 ft, a ceiling of 38,000 ft (12,000 m), a range of 2,550 nmi (4,720 km), and a climb rate of 2,500 ft per minute (760 m).

The Mosquito PR Mk.IX, 90 of which were built, was the first Mosquito variant with two-stage, two-speed engines to be produced in quantity; the first of these, LR405, first flew in April 1943. The PR Mk.IX was based on the Mosquito B Mk.IX bomber and was powered by two 1,680 hp (1,250 kW) Merlin 72/73 or 76/77 engines. It could carry either two 50 imp gal (230 L), two 100 imp gal (450 L) or two 200 imp gal (910 L) droppable fuel tanks.

The Mosquito PR Mk.XVI had a pressurised cockpit and, like the Mk.IX, was powered by two Rolls-Royce Merlin 72/73 or 76/77 piston engines. This version was equipped with three overload fuel tanks, totalling 760 imp gal (3,500 L), in the bomb bay, and could also carry two 50 imp gal (230 L) or 100 imp gal (450 L) drop tanks. A total of 435 of the PR Mk.XVI were built. The PR Mk.XVI had a maximum speed of 415 mph (668 km/h), a cruise speed of 250 mph (400 km/h), a ceiling of 38,500 ft (11,700 m), a range of 2,450 nmi (4,540 km), and a climb rate of 2,900 feet per minute (884 m).

The Mosquito PR Mk.32 was a long-range, high-altitude, pressurised photo-reconnaissance version. It was powered by a pair of two-stage supercharged 1,690 hp (1,260 kW) Rolls-Royce Merlin 113 and Merlin 114 piston engines, the Merlin 113 on the starboard side and the Merlin 114 on the port. First flown in August 1944, only five were built, all of them conversions from PR.XVIs.

The Mosquito PR Mk.34 and PR Mk.34A were very long-range, unarmed, high-altitude photo-reconnaissance versions. The fuel tank and cockpit protection armour were removed. Additional fuel was carried in a bulged bomb bay: 1,192 gallons, the equivalent of 5,419 mi (8,721 km). A further two 200-gallon (910-litre) drop tanks under the outer wings gave a range of 3,600 mi (5,800 km) cruising at 300 mph (480 km/h). The type was powered by two 1,690 hp (1,260 kW) Merlin 114s, first used in the PR.32. The port Merlin 114 drove a Marshal cabin supercharger. A total of 181 were built, including 50 built by Percival Aircraft Company at Luton. The PR.34's maximum speed (TAS) was 335 mph (539 km/h) at sea level, 405 mph (652 km/h) at 17,000 ft (5,200 m) and 425 mph (684 km/h) at 30,000 ft (9,100 m). All PR.34s were fitted with four split F52 vertical cameras, two forward and two aft of the fuselage tank, and one F24 oblique camera. Sometimes a K-17 camera was used for air surveys. The PR.34A, which appeared in August 1945, was the final photo-reconnaissance variant, with a Merlin 113A and a Merlin 114A each delivering 1,710 hp (1,280 kW).

Colonel Roy M. Stanley II, USAF (Ret.) wrote: "I consider the Mosquito the best photo-reconnaissance aircraft of the war". After the end of World War II, Spartan Air Services used ten ex-RAF Mosquitoes, mostly B.35s plus one of only six PR.35s built, for high-altitude photographic survey work in Canada.
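As a rough plausibility check, the PR.34 figures above can be combined in a few lines of arithmetic. A minimal Python sketch (all inputs are the values quoted in the preceding paragraphs; the outputs are only implied averages, not published performance data):

```python
# Rough cross-checks of the PR.34 figures quoted above.
bay_fuel_gal  = 1192   # bulged bomb-bay fuel, imp gal (as quoted)
bay_range_mi  = 5419   # range "equivalent" credited to that fuel, mi (as quoted)
drop_range_mi = 3600   # range with two 200 imp gal drop tanks, mi (as quoted)
cruise_mph    = 300    # quoted cruising speed, mph

# Specific range implied for the bomb-bay fuel alone:
print(f"{bay_range_mi / bay_fuel_gal:.2f} mi per imp gal")          # ~4.55

# Endurance implied by the 3,600 mi range at the quoted cruise:
print(f"{drop_range_mi / cruise_mph:.1f} h at {cruise_mph} mph")    # 12.0 h
```

The implied specific range of roughly 4.5 miles per imperial gallon, and a sortie length of about 12 hours, give a sense of the scale of these very-long-range reconnaissance missions.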
On 21 June 1941, the Air Ministry ordered that the last 10 Mosquitoes, ordered as photo-reconnaissance aircraft, should be converted to bombers. These 10 aircraft were part of the original 1 March 1940 production order and became the B Mk.IV Series 1. W4057 was to be the prototype and flew for the first time on 8 September 1941. The bomber prototype led to the B Mk.IV, of which 273 were built: apart from the 10 Series 1s, all of the rest were built as Series 2s with extended nacelles, revised exhaust manifolds with integrated flame dampers, and larger tailplanes. Series 2 bombers also differed from the Series 1 in having an increased payload of four 500 lb (230 kg) bombs, instead of the four 250 lb (110 kg) bombs of the Series 1. This was made possible by cropping (shortening) the tail of the 500 lb (230 kg) bomb so that four of these heavier weapons (a 2,000 lb (920 kg) total load) could be carried. The B Mk.IV entered service in May 1942 with 105 Squadron.

In April 1943, it was decided to convert a B Mk.IV to carry a 4,000 lb (1,800 kg) Blockbuster bomb (nicknamed a "Cookie"). The conversion, including modified bomb bay suspension arrangements, bulged bomb bay doors and fairings, was relatively straightforward, and 54 B.IVs were modified and distributed to squadrons of the Light Night Striking Force. 27 B Mk.IVs were later converted for special operations with the Highball anti-shipping weapon, and were used by 618 Squadron, formed in April 1943 specifically to use this weapon. A B Mk.IV, DK290, was initially used as a trials aircraft for the bomb, followed by DZ471, DZ530 and DZ533. The B Mk.IV had a maximum speed of 380 mph (610 km/h), a cruising speed of 265 mph (426 km/h), a ceiling of 34,000 ft (10,000 m), a range of 2,040 nmi (3,780 km), and a climb rate of 2,500 ft per minute (12.7 m/s).

Other bomber variants of the Mosquito included the Merlin 21-powered B Mk.V high-altitude version. Trials with this configuration were made with W4057, which had strengthened wings and two additional fuel tanks, or alternatively, two 500 lb (230 kg) bombs. This design was not produced in Britain, but formed the basic design of the Canadian-built B.VII. Only W4057 was built in prototype form. The Merlin 31-powered B Mk.VII was built by de Havilland Canada and first flown on 24 September 1942. It only saw service in Canada; 25 were built. Six were handed over to the United States Army Air Forces.

The B Mk.IX (54 built) was powered by the Merlin 72, 73, 76 or 77. This two-stage Merlin variant was based on the PR.IX. The prototype, DK324, was converted from a PR.VIII and first flew on 24 March 1943. In October 1943, it was decided that all B Mk.IVs and all B Mk.IXs then in service would be converted to carry the 4,000 lb (1,800 kg) "Cookie", and all B Mk.IXs built after that date were designed to allow them to be converted to carry the weapon. The B Mk.IX had a maximum speed of 408 mph (657 km/h), an economical cruise speed of 295 mph (475 km/h) at 20,000 ft and 350 mph (560 km/h) at 30,000 ft, a ceiling of 36,000 ft (11,000 m), a range of 2,450 nmi (4,540 km), and a climb rate of 2,850 feet per minute (14.5 m/s). The IX could carry a maximum load of 2,000–4,000 lb (910–1,810 kg) of bombs.

A Mosquito B Mk.IX holds the record for the most combat operations flown by an Allied bomber in the Second World War. LR503, known as "F for Freddie" (from its squadron code letters, GB*F), first served with No. 109 and subsequently No. 105 Squadron RAF.
It flew 213 sorties during the war, only to crash at Calgary airport during the Eighth Victory Loan Bond Drive on 10 May 1945, two days after Victory in Europe Day, killing both the pilot, Flt. Lt. Maurice Briggs, DSO, DFC, DFM, and the navigator, Fg Off. John Baker, DFC and Bar.

The B Mk.XVI was powered by the same engine variants as the B.IX. All B Mk.XVIs were capable of being converted to carry the 4,000 lb (1,800 kg) "Cookie". The two-stage powerplants were fitted along with a pressurised cabin. The prototype, DZ540, was converted from a B Mk.IV and first flew on 1 January 1944; 402 B Mk.XVIs were built. The next variant, the B Mk.XX, was powered by Packard Merlin 31s and 33s. It was the Canadian version of the B Mk.IV; altogether, 245 were built. The B Mk.XVI had a maximum speed of 408 mph (657 km/h), an economical cruise speed of 295 mph (475 km/h) at 20,000 ft and 350 mph (560 km/h) at 30,000 ft, a ceiling of 37,000 ft (11,000 m), a range of 1,485 nmi (2,750 km), and a climb rate of 2,800 ft per minute (14 m/s). The type could carry 4,000 lb (1,800 kg) of bombs.

The B.35 was powered by Merlin 113s and 114As. Some were converted to TT.35s (target tugs) and others were used as PR.35s (photo-reconnaissance). The B.35 had a maximum speed of 422 mph (679 km/h), a cruising speed of 276 mph (444 km/h), a ceiling of 42,000 ft (13,000 m), a range of 1,750 nmi (3,240 km), and a climb rate of 2,700 ft per minute (13.7 m/s). A total of 174 B.35s were delivered up to the end of 1945. A further 100 were delivered from 1946, for a grand total of 274, 65 of which were built by Airspeed Ltd.

Developed during 1940, the Mosquito F Mk.II fighter prototype was completed on 15 May 1941. These Mosquitoes were fitted with four 20 mm (0.79 in) Hispano cannon in the fuselage belly and four .303 (7.7 mm) Browning machine guns mounted in the nose. On production Mk.IIs, the machine guns and ammunition tanks were accessed via two centrally hinged, sideways-opening doors in the upper nose section. To arm and service the cannon, the bomb bay doors were replaced by manually operated bay doors: the F and NF Mk.IIs could not carry bombs. The type was also fitted with a gun camera in a compartment above the machine guns in the nose, and with exhaust flame dampers to reduce the glare from the Merlin XXs.

In the summer of 1942, Britain experienced daytime incursions by the high-altitude reconnaissance bomber, the Junkers Ju 86P. Although the Ju 86P only carried a light bomb load, it overflew sensitive areas, including Bristol, Bedfordshire and Hertfordshire. Bombs were dropped on Luton and elsewhere, and this particular aircraft was seen from the main de Havilland offices and factory at Hatfield. An attempt to intercept it with a Spitfire from RAF Manston was unsuccessful. As a result of the potential threat, a decision was quickly taken to develop a high-altitude Mosquito interceptor, using the MP469 prototype. MP469 entered the experimental shop on 7 September and made its initial flight on 14 September, piloted by John de Havilland. The bomber nose was altered using a normal fighter nose, armed with four standard .303 (7.7 mm) Browning machine guns. The low-pressure cabin retained a bomber canopy structure and a two-piece windscreen. The control wheel was replaced with a fighter control stick. The wingspan was increased to 59 ft (18 m). The airframe was lightened by removing armour plating, some fuel tanks and other fitments. Smaller-diameter main wheels were fitted after the first few flights.
At a loaded weight of 16,200 lb (7,300 kg), this HA Mk.XV was 2,300 lb (1,000 kg) lighter than a standard Mk.II. For this first conversion, the engines were a pair of Merlin 61s. On 15 September, John de Havilland reached an altitude of 43,000 ft (13,000 m) in this version. The aircraft was delivered to a High Altitude Flight which had been formed at RAF Northolt. However, the high-level German daylight intruders were no longer to be seen. It was subsequently revealed that only five Ju 86P aircraft had been built and they had flown only 12 sorties. Nevertheless, the general need for high-altitude interceptors was recognised – but now the emphasis was to be upon night fighters.

The A&AEE tested the climb and speed of the night fighter conversion of MP469 in January 1943 for the Ministry of Aircraft Production. The wingspan had been increased to 62 ft (19 m), and the Brownings had been moved to a fairing below the fuselage. According to Birtles, an AI radar was mounted in the nose and the Merlins were upgraded to Mk 76 type, although Boscombe Down reported Merlin 61s. In addition to MP469, four more B Mk.IVs were converted into NF Mk.XVs. The Fighter Interception Unit at RAF Ford carried out service trials in March 1943, and then these five aircraft went to 85 Squadron, Hunsdon, where they were flown from April until August of that year. The greatest height reached in service was 44,600 ft (13,600 m).

Apart from the F Mk.XV, all Mosquito fighters and fighter-bombers featured a modified canopy structure incorporating a flat, single-piece armoured windscreen, and the crew entry/exit door was moved from the bottom of the forward fuselage to the right side of the nose, just forward of the wing leading edge.

At the end of 1940, the Air Staff's preferred turret-equipped night fighter design to Operational Requirement O.R. 95 was the Gloster F.18/40 (derived from their F.9/37). However, although in agreement as to the quality of the Gloster company's design, the Ministry of Aircraft Production was concerned that Gloster would not be able to work on both the F.18/40 and its jet fighter design, which was considered the greater priority. Consequently, in mid-1941, the Air Staff and MAP agreed that the Gloster aircraft would be dropped and the Mosquito, when fitted with a turret, would be considered for the night fighter requirement.

The first production night fighter Mosquitoes – minus turrets – were designated NF Mk.II. A total of 466 were built, with the first entering service with No. 157 Squadron in January 1942, replacing the Douglas Havoc. These aircraft were similar to the F Mk.II, but were fitted with the AI Mk.IV metric-wavelength radar. The herring-bone transmitting antenna was mounted on the nose and the dipole receiving antennae were carried under the outer wings.

A number of NF IIs had their radar equipment removed and additional fuel tanks installed in the bay behind the cannon for use as night intruders. These aircraft, designated NF II (Special), were first used by 23 Squadron in operations over Europe in 1942. 23 Squadron was then deployed to Malta on 20 December 1942, and operated against targets in Italy.

Ninety-seven NF Mk.IIs were upgraded with 3.3 GHz frequency, low-SHF-band AI Mk.VIII radar and these were designated NF Mk.XII. The NF Mk.XIII, of which 270 were built, was the production equivalent of the Mk.XII conversions.
These "centimetric" radar sets were mounted in a solid "thimble" (Mk.XII / XIII) or universal "bull nose" (Mk.XVII / XIX) radome, which required the machine guns to be dispensed with. Four F Mk.XVs were converted to the NF Mk.XV. These were fitted with AI Mk.VIII in a "thimble" radome, and the .303 Brownings were moved into a gun pack fitted under the forward fuselage. NF Mk.XVII was the designation for 99 NF Mk.II conversions, with single-stage Merlin 21, 22, or 23 engines, but British AI.X (US SCR-720) radar. The NF Mk.XIX was an improved version of the NF XIII. It could be fitted with American or British AI radars; 220 were built. The NF Mk.30 was the final wartime variant and was a high-altitude version, powered by two 1,710 hp (1,280 kW) Rolls-Royce Merlin 76s. The NF Mk.30 had a maximum speed of 424 mph (682 km/h) at 26,500 ft (8,100 m). It also carried early electronic countermeasures equipment. 526 were built. Other Mosquito night fighter variants planned but never built included the NF Mk.X and NF Mk.XIV (the latter based on the NF Mk.XIII), both of which were to have two-stage Merlins. The NF Mk.31 was a variant of the NF Mk.30, but powered by Packard Merlins. After the war, two more night fighter versions were developed: The NF Mk.36 was similar to the Mosquito NF Mk.30, but fitted with the American-built AI.Mk.X radar. Powered by two 1,690 hp (1,260 kW) Rolls-Royce Merlin 113/114 piston engines; 266 built. Max level speeds (TAS) with flame dampers fitted were 305 mph (491 km/h) at sea level, 380 mph (610 km/h) at 17,000 ft (5,200 m), and 405 mph (652 km/h) at 30,000 ft (9,100 m). The NF Mk.38, 101 of which were built, was also similar to the Mosquito NF Mk.30, but fitted with the British-built AI Mk.IX radar. This variant suffered from stability problems and did not enter RAF service: 60 were eventually sold to Yugoslavia. According to the Pilot's Notes and Air Ministry 'Special Flying Instruction TF/487', which posted limits on the Mosquito's maximum speeds, the NF Mk.38 had a VNE of 370 knots (425 mph), without under-wing stores, and within the altitude range of sea level to 10,000 ft (3,000 m). However, from 10,000 to 15,000 ft (4,600 m) the maximum speed was 348 knots (400 mph). As the height increased other recorded speeds were; 15,000 to 20,000 ft (6,100 m) 320 knots (368 mph); 20,000 to 25,000 ft (7,600 m), 295 knots (339 mph); 25,000 to 30,000 ft (9,100 m), 260 knots (299 mph); 30,000 to 35,000 ft (11,000 m), 235 knots (270 mph). With two added 100-gallon fuel tanks this performance fell; between sea level and 15,000 feet 330 knots (379 mph); between 15,000 and 20,000 ft (6,100 m) 320 knots (368 mph); 20,000 to 25,000 ft (7,600 m), 295 knots (339 mph); 25,000 to 30,000 ft (9,100 m), 260 knots (299 mph); 30,000 to 35,000 ft (11,000 m), 235 knots (270 mph). Little difference was noted above 15,000 ft (4,600 m). Media related to De Havilland Mosquito FB at Wikimedia Commons The FB Mk. VI, which first flew on 1 June 1942, was powered by two, single-stage two-speed, 1,460 hp (1,090 kW) Merlin 21s or 1,635 hp (1,219 kW) Merlin 25s, and introduced a re-stressed and reinforced "basic" wing structure capable of carrying single 250-or-500 lb (110-or-230 kg) bombs on racks housed in streamlined fairings under each wing, or up to eight RP-3 25lb or 60 lb rockets. In addition fuel lines were added to the wings to enable single 50 imp gal (230 L) or 100 imp gal (450 L) drop tanks to be carried under each wing. 
The usual fixed armament was four 20 mm Hispano Mk.II cannon and four .303 (7.7 mm) Browning machine guns, while two 250 or 500 lb (110 or 230 kg) bombs could be carried in the bomb bay. Unlike the F Mk.II, the ventral bay doors were split into two pairs, with the forward pair being used to access the cannon, while the rear pair acted as bomb bay doors. The maximum fuel load was 719.5 imp gal (3,271 L), distributed between 453 imp gal (2,060 L) internal fuel tanks, plus two overload tanks, each of 66.5 imp gal (302 L) capacity, which could be fitted in the bomb bay, and two 100 imp gal (450 L) drop tanks.

All-out level speed is often given as 368 mph (592 km/h), although this speed applies to aircraft fitted with saxophone exhausts. The test aircraft HJ679, fitted with stub exhausts, was found to be performing below expectations. It was returned to de Havilland at Hatfield, where it was serviced. Its top speed was then tested and found to be 384 mph (618 km/h), in line with expectations. 2,298 FB Mk. VIs were built, nearly one-third of Mosquito production. Two were converted to TR.33 carrier-borne maritime strike prototypes.

The FB Mk. VI proved capable of holding its own against fighter aircraft, in addition to strike/bombing roles. For example, on 15 January 1945, Mosquito FB Mk. VIs of 143 Squadron were engaged by 30 Focke-Wulf Fw 190s from Jagdgeschwader 5: the Mosquitoes sank an armed trawler and two merchant ships, but five Mosquitoes were lost (two reportedly to flak), while shooting down five Fw 190s.

Another fighter-bomber variant was the Mosquito FB Mk. XVIII (sometimes known as the Tsetse), of which one was converted from an FB Mk. VI to serve as prototype and 17 were purpose-built. The Mk.XVIII was armed with a Molins "6-pounder Class M" cannon: this was a modified QF 6-pounder (57 mm) anti-tank gun fitted with an auto-loader to allow both semi-automatic and fully automatic fire. Twenty-five rounds were carried, with the entire installation weighing 1,580 lb (720 kg). In addition, 900 lb (410 kg) of armour was added within the engine cowlings, around the nose and under the cockpit floor to protect the engines and crew from the heavily armed U-boats that were the intended primary target of the Mk.XVIII. Two or four .303 (7.7 mm) Browning machine guns were retained in the nose and were used to "sight" the main weapon onto the target.

The Air Ministry initially suspected that this variant would not work, but tests proved otherwise. Although the gun provided the Mosquito with yet more anti-shipping firepower for use against U-boats, it required a steady approach run to aim and fire, making the aircraft's wooden construction an even greater liability in the face of intense anti-aircraft fire. The gun had a muzzle velocity of 2,950 ft/s (900 m/s) and an excellent range of some 1,500–1,800 yd (1,400–1,600 m). It was sensitive to sideways movement; an attack required a dive from 5,000 ft (1,500 m) at a 30° angle with the turn and bank indicator on centre. A move during the dive could jam the gun. The prototype, HJ732, was converted from an FB.VI and was first flown on 8 June 1943.

The effect of the new weapon was demonstrated on 10 March 1944, when Mk.XVIIIs from 248 Squadron (escorted by four Mk.VIs) engaged a German convoy of one U-boat and four destroyers, protected by 10 Ju 88s. Three of the Ju 88s were shot down. Pilot Tony Phillips destroyed one Ju 88 with four shells, one of which tore an engine off the Ju 88. The U-boat was damaged. On 25 March, U-976 was sunk by Molins-equipped Mosquitoes.
On 10 June, U-821 was abandoned in the face of intense air attack from No. 248 Squadron, and was later sunk by a Liberator of No. 206 Squadron. On 5 April 1945, Mosquitoes with Molins guns attacked five German surface ships in the Kattegat and again demonstrated their value by setting them all on fire and sinking them. A German Sperrbrecher ("minefield breaker") was lost with all hands, with some 200 bodies being recovered by Swedish vessels. Some 900 German soldiers died in total. On 9 April, the German U-boats U-804, U-843 and U-1065 were spotted in formation heading for Norway. All were sunk with rockets. U-251 and U-2359 followed on 19 April and 2 May 1945, also sunk by rockets.

Despite the preference for rockets, a further development of the large gun idea was carried out using the even larger, 96 mm calibre QF 32-pounder, a gun based on the QF 3.7-inch AA gun and designed for tank use; the airborne version used a novel form of muzzle brake. Developed to prove the feasibility of using such a large weapon in the Mosquito, this installation was not completed until after the war, when it was flown and fired in a single aircraft without problems, then scrapped.

Designs based on the Mk.VI were the FB Mk. 26, built in Canada, and the FB Mk. 40, built in Australia, both powered by Packard Merlins. The FB.26 improved on the FB.21 by using 1,620 hp (1,210 kW) single-stage Packard Merlin 225s. Some 300 were built and another 37 were converted to T.29 standard. 212 FB.40s were built by de Havilland Australia. Six were converted to PR.40s, 28 to PR.41s, one to an FB.42 and 22 to T.43 trainers. Most were powered by Packard-built Merlin 31s or 33s.

The Mosquito was also built as the Mosquito T Mk.III two-seat trainer. This version, powered by two Rolls-Royce Merlin 21s, was unarmed and had a modified cockpit fitted with dual control arrangements. A total of 348 of the T Mk.III were built for the RAF and Fleet Air Arm. de Havilland Australia built 11 T Mk.43 trainers, similar to the Mk.III.

To meet specification N.15/44 for a navalised Mosquito for Royal Navy use as a torpedo bomber, de Havilland produced a carrier-borne variant. A Mosquito FB.VI was modified as a prototype, designated Sea Mosquito TR Mk.33, with folding wings, arrester hook, thimble-nose radome, Merlin 25 engines with four-bladed propellers and new oleo-pneumatic landing gear rather than the standard rubber-in-compression gear. Initial carrier tests of the Sea Mosquito were carried out by Eric "Winkle" Brown aboard HMS Indefatigable, the first landing-on taking place on 25 March 1944. An order for 100 TR.33s was placed, although only 50 were built, at Leavesden. Armament was four 20 mm cannon, two 500 lb bombs in the bomb bay (another two could be fitted under the wings), eight 60 lb rockets (four under each wing) and a standard torpedo under the fuselage. The first production TR.33 flew on 10 November 1945. This series was followed by six Sea Mosquito TR Mk.37s, which were built at Chester (Broughton) and differed in having ASV Mk.XIII radar instead of the TR.33's AN/APS-6.

The RAF's target tug version was the Mosquito TT Mk.35; these were the last aircraft to remain in operational service with No. 3 CAACU at Exeter, being finally retired in 1963. These aircraft were then featured in the film 633 Squadron. A number of B Mk.XVI bombers were converted into TT Mk.39 target tug aircraft. The Royal Navy also operated the Mosquito TT Mk.39 for target towing.
Two ex-RAF FB.6s were converted to TT.6 standard at Manchester (Ringway) Airport by Fairey Aviation in 1953–1954 and delivered to the Belgian Air Force for use as towing aircraft operating from the Sylt firing ranges.

A total of 1,032 Mosquitoes were built during the war by de Havilland Canada at Downsview Airfield in Downsview, Ontario (now Downsview Park in Toronto, Ontario), with two more completed afterwards.

A number of Mosquito IVs were modified by Vickers-Armstrongs to carry Highball "bouncing bombs" and were allocated Vickers Type numbers.

About 5,000 of the 7,781 Mosquitoes built had major structural components fabricated from wood in High Wycombe, Buckinghamshire, England. Fuselages, wings and tailplanes were made at furniture companies such as Ronson, E. Gomme, Parker Knoll, Austinsuite and Styles & Mealing, while wing spars were made by J. B. Heath and Dancer & Hearne. Many other parts, including flaps, flap shrouds, fins, leading-edge assemblies and bomb doors, were also produced in the Buckinghamshire town. Dancer & Hearne processed much of the wood from start to finish, receiving timber and transforming it into finished wing spars at their factory in Penn Street on the outskirts of High Wycombe.

Initially, much of the specialised yellow birch veneer and finished plywood used for the prototypes and early production aircraft was shipped from firms in Wisconsin, US, prominent among them Roddis Plywood and Veneer Manufacturing in Marshfield. In conjunction with the USDA Forest Products Laboratory, Hamilton Roddis had developed new plywood adhesives and hot-pressing technology. Later, paper birch was logged in large quantities from the interior of British Columbia along the Fraser and Quesnel Rivers and processed in Quesnel and New Westminster by the Pacific Veneer Company. According to the Quesnel archives, BC paper birch supplied half of the wartime British Empire birch used for Mosquitoes and other aircraft.

As the supply of Ecuadorean balsa was threatened by U-boats in the Atlantic Ocean, the Ministry of Aircraft Production approved a research effort to replace the balsa with calcium alginate foam, made from local brown algae. By 1944 the foam was ready, but the U-boat threat had been reduced, the larger B-25 bombers were in sufficient supply to handle most of the bombing raids, and the foam was not used in Mosquito production.

In July 1941, it was decided that DH Canada would build Mosquitoes at Downsview, Ontario; this was to continue even if Germany invaded Great Britain. Packard Merlin engines produced under licence were bench-tested by August and the first two aircraft were built in September, with production planned to rise to fifty per month by early 1942. Initially, Canadian production was for bomber variants; later, fighters, fighter-bombers and training aircraft were also made. DH's chief production engineer, Harry Povey, was sent out first; W. D. Hunter followed on an extended stay to liaise with materials and parts suppliers. As with initial UK production, Tego-bonded plywood and birch veneer was obtained from firms in Wisconsin, principally Roddis Plywood and Veneer Manufacturing of Marshfield. Enemy action delayed the shipping of jigs and moulds, so it was decided to build these locally. During 1942, production improved to over 80 machines per month as sub-contractors and suppliers became established, and a mechanised production line, based in part on car-building methods, started in 1944.
As the war progressed, Canadian Mosquitoes may have utilised paper birch supplied by the Pacific Veneer Company of New Westminster, made from birch logs from the Cariboo, although surviving records say only that this birch was shipped to England for production there.

When flight testing at Downsview could no longer keep up with output, it was moved to the Central Aircraft Company airfield at London, Ontario, from which approved Mosquitoes left for commissioning and subsequent ferry transfer to Europe. Ferrying Mosquitoes and many other types of Second World War aircraft from Canada to Europe was dangerous, resulting in losses of lives and machines, but in the exigencies of war it was regarded as the best option for twin-engined and multi-engined aircraft. In the parlance of the day among RAF personnel, "it was no piece of cake." De Havilland Canada made considerable efforts to resolve problems with the engine and oil systems, and an additional five hours of flight testing were introduced before each ferry flight, but the actual cause of some of the losses was never established. Nevertheless, by the end of the war, nearly 500 Mosquito bombers and fighter-bombers had been ferried successfully by the Canadian operation.

After DH Canada had been established for the Mosquito, further manufacturing was set up at DH Australia in Sydney; one of the DH staff who travelled there was the distinguished test pilot Pat Fillingham. These production lines added totals of 1,133 aircraft of varying types from Canada plus 212 aircraft from Australia.

In total, both during the war and after, de Havilland exported 46 FB.VIs and 29 PR.XVIs to Australia; two FB.VIs and 18 NF.30s to Belgium; and approximately 250 FB.26s, T.29s and T.27s from Canada to Nationalist China. A significant number of the Chinese aircraft never went into service, owing to deterioration on the voyage and crashes during Chinese pilot training; five were captured by the People's Liberation Army during the Chinese Civil War. Nineteen FB.VIs went to Czechoslovakia in 1948, six FB.VIs to the Dominican Republic, and a few B.IVs, 57 FB.VIs, 29 PR.XVIs and 23 NF.30s to France. Some T.IIIs were exported to Israel, along with 60 FB.VIs, at least five PR.XVIs and 14 naval versions. Four T.IIIs, 76 FB.VIs, one FB.40 and four T.43s were exported to New Zealand. Norway received three T.IIIs and 18 FB.VIs, the latter later converted to night-fighter standard. South Africa received two F.IIs and 14 PR.XVI/XIs, and Sweden received 60 NF.XIXs. Turkey received 96 FB.VIs and several T.IIIs, and Yugoslavia took delivery of 60 NF.38s, 80 FB.VIs and three T.IIIs. At least one Mosquito, marked 'DK 296', was delivered to the Soviet Union.

Total Mosquito production was 7,781, of which 6,710 were built during the war.

A number of Mosquitoes were lost in civilian airline service, mostly with the British Overseas Airways Corporation during the Second World War. On 21 July 1996, Mosquito G-ASKH, wearing the markings of RR299, crashed 1 mile (1.6 km) west of Manchester Barton Airport; pilot Kevin Moorhouse and engineer Steve Watson were both killed. At the time, this was the last airworthy Mosquito T.III.

Approximately 30 non-flying Mosquitoes survive around the world, along with four airworthy examples: three in the United States and one in Canada. The largest collection of Mosquitoes is at the de Havilland Aircraft Museum in the United Kingdom, which owns three aircraft, including the first prototype, W4050, the only initial prototype of a Second World War British aircraft design still in existence in the 21st century.
[ { "paragraph_id": 0, "text": "The de Havilland DH.98 Mosquito is a British twin-engined, multirole combat aircraft, introduced during the Second World War. Unusual in that its frame was constructed mostly of wood, it was nicknamed the \"Wooden Wonder\", or \"Mossie\". Lord Beaverbrook, Minister of Aircraft Production, nicknamed it \"Freeman's Folly\", alluding to Air Chief Marshal Sir Wilfrid Freeman, who defended Geoffrey de Havilland and his design concept against orders to scrap the project. In 1941, it was one of the fastest operational aircraft in the world.", "title": "" }, { "paragraph_id": 1, "text": "Originally conceived as an unarmed fast bomber, the Mosquito's use evolved during the war into many roles, including low- to medium-altitude daytime tactical bomber, high-altitude night bomber, pathfinder, day or night fighter, fighter-bomber, intruder, maritime strike, and photo-reconnaissance aircraft. It was also used by the British Overseas Airways Corporation as a fast transport to carry small, high-value cargo to and from neutral countries through enemy-controlled airspace. The crew of two, pilot and navigator, sat side by side. A single passenger could ride in the aircraft's bomb bay when necessary.", "title": "" }, { "paragraph_id": 2, "text": "The Mosquito FB Mk. VI was often flown in special raids, such as Operation Jericho (an attack on Amiens Prison in early 1944), and precision attacks against military intelligence, security, and police facilities (such as Gestapo headquarters). On 30 January 1943, the 10th anniversary of Hitler being made chancellor and the Nazis gaining power, a morning Mosquito attack knocked out the main Berlin broadcasting station while Hermann Göring was speaking, taking his speech off the air.", "title": "" }, { "paragraph_id": 3, "text": "The Mosquito flew with the Royal Air Force (RAF) and other air forces in the European, Mediterranean, and Italian theatres. The Mosquito was also operated by the RAF in the Southeast Asian theatre and by the Royal Australian Air Force based in the Halmaheras and Borneo during the Pacific War. During the 1950s, the RAF replaced the Mosquito with the jet-powered English Electric Canberra.", "title": "" }, { "paragraph_id": 4, "text": "By the early to mid-1930s, de Havilland had built a reputation for innovative high-speed aircraft with the DH.88 Comet racer. Later, the DH.91 Albatross airliner pioneered the composite wood construction used for the Mosquito. The 22-passenger Albatross could cruise at 210 mph (340 km/h) at 11,000 ft (3,400 m), faster than the Handley Page H.P.42 and other biplanes it was replacing. The wooden monocoque construction not only saved weight and compensated for the low power of the de Havilland Gipsy Twelve engines used by this aircraft, but also simplified production and reduced construction time.", "title": "Development" }, { "paragraph_id": 5, "text": "On 8 September 1936, the British Air Ministry issued Specification P.13/36, which called for a twin-engined, medium bomber capable of carrying a bomb load of 3,000 lb (1,400 kg) for 3,000 mi (4,800 km) with a maximum speed of 275 mph (445 km/h) at 15,000 ft (4,600 m); a maximum bomb load of 8,000 lb (3,600 kg) that could be carried over shorter ranges was also specified. 
Aviation firms entered heavy designs with new high-powered engines and multiple defensive turrets, leading to the production of the Avro Manchester and Handley Page Halifax.", "title": "Development" }, { "paragraph_id": 6, "text": "In May 1937, as a comparison to P.13/36, George Volkert, the chief designer of Handley Page, put forward the concept of a fast, unarmed bomber. In 20 pages, Volkert planned an aerodynamically clean, medium bomber to carry 3,000 lb (1,400 kg) of bombs at a cruising speed of 300 mph (485 km/h). Support existed in the RAF and Air Ministry; Captain R. N. Liptrot, Research Director Aircraft 3, appraised Volkert's design, calculating that its top speed would exceed that of the new Supermarine Spitfire, but counter-arguments held that although such a design had merit, it would not necessarily be faster than enemy fighters for long. The ministry was also considering using non-strategic materials for aircraft production, which, in 1938, had led to specification B.9/38 and the Armstrong Whitworth Albemarle medium bomber, largely constructed from spruce and plywood attached to a steel-tube frame. The idea of a small, fast bomber gained support at a much earlier stage than is sometimes acknowledged, though the Air Ministry likely envisaged it using light alloy components.", "title": "Development" }, { "paragraph_id": 7, "text": "Based on his experience with the Albatross, Geoffrey de Havilland believed that a bomber with a good aerodynamic design and smooth, minimal skin area, would exceed the P.13/36 specification. Furthermore, adapting the Albatross principles could save time. In April 1938, performance estimates were produced for a twin Rolls-Royce Merlin-powered DH.91, with the Bristol Hercules (radial engine) and Napier Sabre (H-engine) as alternatives. On 7 July 1938, de Havilland wrote to Air Marshal Wilfrid Freeman, the Air Council's member for Research and Development, discussing the specification and arguing that in war, shortages of aluminium and steel would occur, but supplies of wood-based products were \"adequate.\" Although inferior in tension, the strength-to-weight ratio of wood is equal to or better than light alloys or steel, hence this approach was feasible.", "title": "Development" }, { "paragraph_id": 8, "text": "A follow-up letter to Freeman on 27 July said that the P.13/36 specification could not be met by a twin Merlin-powered aircraft and either the top speed or load capacity would be compromised, depending on which was paramount. For example, a larger, slower, turret-armed aircraft would have a range of 1,500 mi (2,400 km) carrying a 4,000 lb bomb load, with a maximum of 260 mph (420 km/h) at 19,000 ft (5,800 m), and a cruising speed of 230 mph (370 km/h) at 18,000 ft (5,500 m). De Havilland believed that a compromise, including eliminating surplus equipment, would improve matters. On 4 October 1938, de Havilland projected the performance of another design based on the Albatross, powered by two Merlin Xs, with a three-man crew and six or eight forward-firing guns, plus one or two manually operated guns and a tail turret. 
Based on a total loaded weight of 19,000 lb (8,600 kg), it would have a top speed of 300 mph (480 km/h) and cruising speed of 268 mph (431 km/h) at 22,500 ft (6,900 m).", "title": "Development" }, { "paragraph_id": 9, "text": "Still believing this could be improved, and after examining more concepts based on the Albatross and the new all-metal DH.95 Flamingo, de Havilland settled on designing a new aircraft that would be aerodynamically clean, wooden, and powered by the Merlin, which offered substantial future development. The new design would be faster than foreseeable enemy fighter aircraft, and could dispense with a defensive armament, which would slow it and make interception or losses to antiaircraft guns more likely. Instead, high speed and good manoeuvrability would make evading fighters and ground fire easier. The lack of turrets simplified production, reduced drag, and reduced production time, with a delivery rate far in advance of competing designs. Without armament, the crew could be reduced to a pilot and navigator. Whereas contemporary RAF design philosophy favoured well-armed heavy bombers, this mode of design was more akin to the German philosophy of the Schnellbomber. At a meeting in early October 1938 with Geoffrey de Havilland and Charles Walker (de Havilland's chief engineer), the Air Ministry showed little interest, and instead asked de Havilland to build wings for other bombers as a subcontractor.", "title": "Development" }, { "paragraph_id": 10, "text": "By September 1939, de Havilland had produced preliminary estimates for single- and twin-engined variations of light-bomber designs using different engines, speculating on the effects of defensive armament on their designs. One design, completed on 6 September, was for an aircraft powered by a single 2,000 hp (1,500 kW) Napier Sabre, with a wingspan of 47 ft (14 m) and capable of carrying a 1,000 lb (450 kg) bomb load 1,500 mi (2,400 km). On 20 September, in another letter to Wilfrid Freeman, de Havilland wrote \"... we believe that we could produce a twin-engine[d] bomber which would have a performance so outstanding that little defensive equipment would be needed.\" By 4 October, work had progressed to a twin-engined light bomber with a wingspan of 51 ft (16 m) and powered by Merlin or Griffon engines, the Merlin favoured because of availability. On 7 October 1939, a month into the war, the nucleus of a design team under Eric Bishop moved to the security and secrecy of Salisbury Hall to work on what was later known as the DH.98. For more versatility, Bishop made provision for four 20 mm cannon in the forward half of the bomb bay, under the cockpit, firing via blast tubes and troughs under the fuselage.", "title": "Development" }, { "paragraph_id": 11, "text": "The DH.98 was too radical for the ministry, which wanted a heavily armed, multirole aircraft, combining medium bomber, reconnaissance, and general-purpose roles, that was also capable of carrying torpedoes. With the outbreak of war, the ministry became more receptive, but was still sceptical about an unarmed bomber. They thought the Germans would produce fighters that were faster than had been expected. and suggested the incorporation of two forward- and two rear-firing machine guns for defence. The ministry also opposed a two-man bomber, wanting at least a third crewman to reduce the work of the others on long flights. 
The Air Council added further requirements such as remotely controlled guns, a top speed of 275 mph (445 km/h) at 15,000 ft on two-thirds engine power, and a range of 3,000 mi (4,800 km) with a 4,000-lb bomb load. To appease the ministry, de Havilland built mock-ups with a gun turret just aft of the cockpit, but apart from this compromise, de Havilland made no changes.", "title": "Development" }, { "paragraph_id": 12, "text": "On 12 November, at a meeting considering fast-bomber ideas put forward by de Havilland, Blackburn, and Bristol, Air Marshal Freeman directed de Havilland to produce a fast aircraft, powered initially by Merlin engines, with options of using progressively more powerful engines, including the Rolls-Royce Griffon and the Napier Sabre. Although estimates were presented for a slightly larger Griffon-powered aircraft, armed with a four-gun tail turret, Freeman got the requirement for defensive weapons dropped, and a draft requirement was raised calling for a high-speed, light-reconnaissance bomber capable of 400 mph (645 km/h) at 18,000 ft.", "title": "Development" }, { "paragraph_id": 13, "text": "On 12 December, the Vice-Chief of the Air Staff, Director General of Research and Development, and the Air Officer Commanding-in-Chief (AOC-in-C) of RAF Bomber Command met to finalise the design and decide how to fit it into the RAF's aims. The AOC-in-C would not accept an unarmed bomber, but insisted on its suitability for reconnaissance missions with F8 or F24 cameras. After company representatives, the ministry, and the RAF's operational commands examined a full-scale mock-up at Hatfield on 29 December 1939, the project received backing. This was confirmed on 1 January 1940, when Freeman chaired a meeting with Geoffrey de Havilland, John Buchanan (Deputy of Aircraft Production), and John Connolly (Buchanan's chief of staff). De Havilland claimed the DH.98 was the \"fastest bomber in the world ... it must be useful\". Freeman supported it for RAF service, ordering a single prototype for an unarmed bomber to specification B.1/40/dh, which called for a light bomber/reconnaissance aircraft powered by two 1,280 hp (950 kW) Rolls-Royce RM3SM (an early designation for the Merlin 21) with ducted radiators, capable of carrying a 1,000 lb (450 kg) bomb load. The aircraft was to have a speed of 400 mph (640 km/h) at 24,000 ft (7,300 m) and a cruising speed of 325 mph (525 km/h) at 26,500 ft (8,100 m) with a range of 1,500 mi (2,400 km) at 25,000 ft (7,600 m) on full tanks. Maximum service ceiling was to be 32,000 ft (9,800 m).", "title": "Development" }, { "paragraph_id": 14, "text": "On 1 March 1940, Air Marshal Roderic Hill issued a contract under Specification B.1/40, for 50 bomber-reconnaissance variants of the DH.98; this contract included the prototype, which was given the factory serial E-0234. In May 1940, specification F.21/40 was issued, calling for a long-range fighter armed with four 20 mm cannon and four .303 machine guns in the nose, after which de Havilland was authorised to build a prototype of a fighter version of the DH.98. After debate, it was decided that this prototype, given the military serial number W4052, was to carry airborne interception (AI) Mk IV equipment as a day and night fighter. By June 1940, the DH.98 had been named \"Mosquito\". 
Having the fighter variant kept the Mosquito project alive, as doubts remained within the government and Air Ministry regarding the usefulness of an unarmed bomber, even after the prototype had shown its capabilities.", "title": "Development" }, { "paragraph_id": 15, "text": "With design of the DH.98 started, mock-ups were built, the most detailed at Salisbury Hall, where E-0234 was later constructed. Initially, the concept was for the crew to be enclosed in the fuselage behind a transparent nose (similar to the Bristol Blenheim or Heinkel He 111H), but this was quickly altered to a more solid nose with a conventional canopy.", "title": "Development" }, { "paragraph_id": 16, "text": "Work was cancelled again after the evacuation of the British Army from France, when Lord Beaverbrook, as Minister of Aircraft Production, concentrating production on aircraft types for the defence of the UK decided no production capacity remained for aircraft like the DH.98, which was not expected to be in service until early 1942. Beaverbrook told Air Vice-Marshal Freeman that work on the project should stop, but he did not issue a specific instruction, and Freeman ignored the request. In June 1940, however, Lord Beaverbrook and the Air Staff ordered that production should concentrate on five existing types, namely the Supermarine Spitfire, Hawker Hurricane fighter, Vickers Wellington, Armstrong-Whitworth Whitley, and Bristol Blenheim bombers. Work on the DH.98 prototype stopped. Apparently, the project shut down when the design team were denied materials for the prototype.", "title": "Development" }, { "paragraph_id": 17, "text": "The Mosquito was only reinstated as a priority in July 1940, after de Havilland's general manager, L.C.L. Murray, promised Lord Beaverbrook 50 Mosquitoes by December 1941. This was only after Beaverbrook was satisfied that Mosquito production would not hinder de Havilland's primary work of producing Tiger Moth and Airspeed Oxford trainers, repairing Hurricanes, and manufacturing Merlin engines under licence. In promising Beaverbrook such a number by the end of 1941, de Havilland was taking a gamble, because they were unlikely to be built in such a limited time. As it transpired, only 20 aircraft were built in 1941, but the other 30 were delivered by mid-March 1942. During the Battle of Britain, interruptions to production due to air raid warnings caused nearly a third of de Havilland's factory time to be lost. Nevertheless, work on the prototype went ahead quickly at Salisbury Hall since E-0234 was completed by November 1940.", "title": "Development" }, { "paragraph_id": 18, "text": "In the aftermath of the Battle of Britain, the original order was changed to 20 bomber variants and 30 fighters. Whether the fighter version should have dual or single controls, or should carry a turret, was still uncertain, so three prototypes were built: W4052, W4053, and W4073. The second and third, both turret armed, were later disarmed, to become the prototypes for the T.III trainer. This caused some delays, since half-built wing components had to be strengthened for the required higher combat loading. 
The nose sections also had to be changed from a design with a clear perspex bomb-aimer's position, to one with a solid nose housing four .303 machine guns and their ammunition.", "title": "Development" }, { "paragraph_id": 19, "text": "On 3 November 1940, the prototype aircraft, painted in \"prototype yellow\" and still coded E-0234, was dismantled, transported by road to Hatfield and placed in a small, blast-proof assembly building. Two Merlin 21 two-speed, single-stage supercharged engines were installed, driving three-bladed de Havilland Hydromatic constant-speed controllable-pitch propellers. Engine runs were made on 19 November. On 24 November, taxiing trials were carried out by Geoffrey de Havilland Jr., the de Havilland test pilot. On 25 November, the aircraft made its first flight, piloted by de Havilland Jr., accompanied by John E. Walker, the chief engine installation designer.", "title": "Development" }, { "paragraph_id": 20, "text": "For this maiden flight, E-0234, weighing 14,150 lb (6,420 kg), took off from the grass airstrip at the Hatfield site. The takeoff was reported as \"straightforward and easy\" and the undercarriage was not retracted until a considerable altitude was attained. The aircraft reached 220 mph (355 km/h), with the only problem being the undercarriage doors – which were operated by bungee cords attached to the main undercarriage legs – that remained open by some 12 in (300 mm) at that speed. This problem persisted for some time. The left wing of E-0234 also had a tendency to drag to port slightly, so a rigging adjustment, i.e., a slight change in the angle of the wing, was carried out before further flights.", "title": "Development" }, { "paragraph_id": 21, "text": "On 5 December 1940, the prototype, with the military serial number W4050, experienced tail buffeting at speeds between 240 and 255 mph (385 and 410 km/h). The pilot noticed this most in the control column, with handling becoming more difficult. During testing on 10 December, wool tufts were attached to suspect areas to investigate the direction of airflow. The conclusion was that the airflow separating from the rear section of the inner engine nacelles was disturbed, leading to a localised stall and the disturbed airflow was striking the tailplane, causing buffeting. To smooth the air flow and deflect it from forcefully striking the tailplane, nonretractable slots fitted to the inner engine nacelles and to the leading edge of the tailplane were tested. These slots and wing-root fairings fitted to the forward fuselage and leading edge of the radiator intakes stopped some of the vibration experienced, but did not cure the tailplane buffeting.", "title": "Development" }, { "paragraph_id": 22, "text": "In February 1941, buffeting was eliminated by incorporating triangular fillets on the trailing edge of the wings and lengthening the nacelles, the trailing edge of which curved up to fair into the fillet some 10 in (250 mm) behind the wing's trailing edge; this meant the flaps had to be divided into inboard and outboard sections. With the buffeting problems largely resolved, John Cunningham flew W4050 on 9 February 1941. He was greatly impressed by the \"lightness of the controls and generally pleasant handling characteristics\". Cunningham concluded that when the type was fitted with AI equipment, it might replace the Bristol Beaufighter night fighter.", "title": "Development" }, { "paragraph_id": 23, "text": "During its trials on 16 January 1941, W4050 outpaced a Spitfire at 6,000 ft (1,800 m). 
The original estimates were that as the Mosquito prototype had twice the surface area and over twice the weight of the Spitfire Mk.II, but also with twice its power, the Mosquito would end up being 20 mph (30 km/h) faster. Over the next few months, W4050 surpassed this estimate, easily beating the Spitfire Mk.II in testing at RAF Boscombe Down in February 1941, reaching a top speed of 392 mph (631 km/h) at 22,000 ft (6,700 m) altitude, compared to a top speed of 360 mph (580 km/h) at 19,500 ft (5,900 m) for the Spitfire.", "title": "Development" }, { "paragraph_id": 24, "text": "On 19 February, official trials began at the Aeroplane and Armament Experimental Establishment (AAEE) based at Boscombe Down, although the de Havilland representative was surprised by a delay in starting the tests. On 24 February, as W4050 taxied across the rough airfield, the tailwheel jammed leading to the fuselage fracturing. Repairs were made by early March, using part of the fuselage of the photo-reconnaissance prototype W4051. In spite of this setback, the Initial Handling Report 767 issued by the AAEE stated, \"The aeroplane is pleasant to fly ... aileron control light and effective ...\" The maximum speed reached was 388 mph (624 km/h) at 22,000 ft (6,700 m), with an estimated maximum ceiling of 34,000 ft (10,000 m) and a maximum rate of climb of 2,880 ft/min (880 m/min) at 11,500 ft (3,500 m).", "title": "Development" }, { "paragraph_id": 25, "text": "W4050 continued to be used for various test programmes, as the experimental \"workhorse\" for the Mosquito family. In late October 1941, it returned to the factory to be fitted with Merlin 61s, the first production Merlins fitted with a two-speed, two-stage supercharger. The first flight with the new engines was on 20 June 1942. W4050 recorded a maximum speed of 428 mph (689 km/h) at 28,500 ft (8,700 m) (fitted with straight-through air intakes with snow guards, engines in full supercharger gear) and 437 mph (703 km/h) at 29,200 ft (8,900 m) without snow guards. In October 1942, in connection with development work on the NF Mk.XV, W4050 was fitted with extended wingtips, increasing the span to 59 ft 2 in (18.03 m), first flying in this configuration on 8 December. Fitted with high-altitude-rated, two-stage, two-speed Merlin 77s, it reached 439 mph (707 km/h) in December 1943. Soon after these flights, W4050 was grounded and scheduled to be scrapped, but instead served as an instructional airframe at Hatfield. In September 1958, W4050 was returned to the Salisbury Hall hangar where it was built, restored to its original configuration, and became one of the primary exhibits of the de Havilland Aircraft Heritage Centre.", "title": "Development" }, { "paragraph_id": 26, "text": "W4051, which was designed from the outset to be the prototype for the photo-reconnaissance versions of the Mosquito, was slated to make its first flight in early 1941. However, the fuselage fracture in W4050 meant that W4051's fuselage was used as a replacement; W4051 was then rebuilt using a production standard fuselage and first flew on 10 June 1941. This prototype continued to use the short engine nacelles, single-piece trailing-edge flaps, and the 19 ft 5.5 in (5.931 m) \"No. 1\" tailplane used by W4050, but had production-standard 54 ft 2 in (16.51 m) wings and became the only Mosquito prototype to fly operationally.", "title": "Development" }, { "paragraph_id": 27, "text": "Construction of the fighter prototype, W4052, was also carried out at the secret Salisbury Hall facility. 
It was powered by 1,460 hp (1,090 kW) Merlin 21s, and had an altered canopy structure with a flat, bullet-proof windscreen; the solid nose had mounted four .303 British Browning machine guns and their ammunition boxes, accessible by a large, sideways hinged panel. Four 20-mm Hispano Mk.II cannon were housed in a compartment under the cockpit floor with the breeches projecting into the bomb bay and the automatic bomb bay doors were replaced by manually operated bay doors, which incorporated cartridge ejector chutes.", "title": "Development" }, { "paragraph_id": 28, "text": "As a day and night fighter, prototype W4052 was equipped with AI Mk IV equipment, complete with an \"arrowhead\" transmission aerial mounted between the central Brownings and receiving aerials through the outer wing tips, and it was painted in black RDM2a \"Special Night\" finish. It was also the first prototype constructed with the extended engine nacelles. W4052 was later tested with other modifications, including bomb racks, drop tanks, barrage balloon cable cutters in the leading edge of the wings, Hamilton airscrews and braking propellers, and drooping aileron systems that enabled steep approaches and a larger rudder tab. The prototype continued to serve as a test machine until it was scrapped on 28 January 1946. 4055 flew the first operational Mosquito flight on 17 September 1941.", "title": "Development" }, { "paragraph_id": 29, "text": "During flight testing, the Mosquito prototypes were modified to test a number of configurations. W4050 was fitted with a turret behind the cockpit for drag tests, after which the idea was abandoned in July 1941. W4052 had the first version of the Youngman Frill airbrake fitted to the fighter prototype. The frill was mounted around the fuselage behind the wing and was opened by bellows and venturi effect to provide rapid deceleration during interceptions and was tested between January and August 1942, but was also abandoned when lowering the undercarriage was found to have the same effect with less buffeting.", "title": "Development" }, { "paragraph_id": 30, "text": "The Air Ministry authorised mass production plans on 21 June 1941, by which time the Mosquito had become one of the world's fastest operational aircraft. It ordered 19 photo-reconnaissance (PR) models and 176 fighters. A further 50 were unspecified; in July 1941, these were confirmed to be unarmed fast bombers. By the end of January 1942, contracts had been awarded for 1,378 Mosquitoes of all variants, including 20 T.III trainers and 334 FB.VI bombers. Another 400 were to be built by de Havilland Canada.", "title": "Development" }, { "paragraph_id": 31, "text": "On 20 April 1941, W4050 was demonstrated to Lord Beaverbrook, the Minister of Aircraft Production. The Mosquito made a series of flights, including one rolling climb on one engine. Also present were US General Henry H. Arnold and his aide Major Elwood Quesada, who wrote \"I ... recall the first time I saw the Mosquito as being impressed by its performance, which we were aware of. We were impressed by the appearance of the airplane that looks fast usually is fast, and the Mosquito was, by the standards of the time, an extremely well-streamlined airplane, and it was highly regarded, highly respected.\"", "title": "Development" }, { "paragraph_id": 32, "text": "The trials set up future production plans between Britain, Australia, and Canada. Six days later, Arnold returned to America with a full set of manufacturer's drawings. 
As a result of his report, five companies (Beech, Curtiss-Wright, Fairchild, Fleetwings, and Hughes) were asked to evaluate the de Havilland data. The report by Beech Aircraft summed up the general view: \"It appears as though this airplane has sacrificed serviceability, structural strength, ease of construction and flying characteristics in an attempt to use construction material which is not suitable for the manufacture of efficient airplanes.\" The Americans did not pursue the proposal for licensed production, the consensus arguing that the Lockheed P-38 Lightning could fulfill the same duties. However, Arnold urged the United States Army Air Forces (USAAF) to evaluate the design even if they would not adopt it. On 12 December 1941, after the attack on Pearl Harbor, the USAAF requested one airframe for this purpose.", "title": "Development" }, { "paragraph_id": 33, "text": "While timber construction was considered outmoded by some, de Havilland claimed that their successes with techniques used for the DH 91 Albatross could lead to a fast, light bomber using monocoque-sandwich shell construction. Arguments in favour of this included speed of prototyping, rapid development, minimisation of jig-building time, and employment of a separate category of workforce. The ply-balsa-ply monocoque fuselage and one-piece wings with doped fabric covering would give excellent aerodynamic performance and low weight, combined with strength and stiffness. At the same time, the design team had to fight conservative Air Ministry views on defensive armament. Guns and gun turrets, favoured by the ministry, would impair the aircraft's aerodynamic properties and reduce speed and manoeuvrability, in the opinion of the designers. Whilst submitting these arguments, Geoffrey de Havilland funded his private venture until a very late stage. The project was a success beyond all expectations. The initial bomber and photo-reconnaissance versions were extremely fast, whilst the armament of subsequent variants might be regarded as primarily offensive.", "title": "Design and manufacture" }, { "paragraph_id": 34, "text": "The most-produced variant, designated the FB Mk. VI (Fighter-bomber Mark 6), was powered by two Merlin Mk.23 or Mk.25 engines driving three-bladed de Havilland hydromatic propellers. The typical fixed armament for an FB Mk. VI was four Browning .303 machine guns and four 20-mm Hispano cannons, while the offensive load consisted of up to 2,000 lb (910 kg) of bombs, or eight RP-3 unguided rockets.", "title": "Design and manufacture" }, { "paragraph_id": 35, "text": "The design was noted for light and effective control surfaces that provided good manoeuvrability, but required that the rudder not be used aggressively at high speeds. Poor aileron control at low speeds when landing and taking off was also a problem for inexperienced crews. For flying at low speeds, the flaps had to be set at 15°, speed reduced to 200 mph (320 km/h), and rpm set to 2,650. The speed could be reduced to an acceptable 150 mph (240 km/h) for low-speed flying. For cruising, the optimum speed for obtaining maximum range was 200 mph (320 km/h) at 17,000 lb (7,700 kg) weight.", "title": "Design and manufacture" }, { "paragraph_id": 36, "text": "The Mosquito had a high stalling speed of 120 mph (190 km/h) with undercarriage and flaps raised. When both were lowered, the stalling speed decreased from 120 to 100 mph (190 to 160 km/h). Stall speed at normal approach angle and conditions was 100 to 110 mph (160 to 180 km/h). 
Warning of the stall was given by buffeting and would occur 12 mph (19 km/h) before stall was reached. The conditions and impact of the stall were not severe. The wing did not drop unless the control column was pulled back. The nose drooped gently and recovery was easy.", "title": "Design and manufacture" }, { "paragraph_id": 37, "text": "Early on in the Mosquito's operational life, the intake shrouds that were to cool the exhausts on production aircraft overheated. Flame dampers prevented exhaust glow on night operations, but they had an effect on performance. Multiple ejector and open-ended exhaust stubs helped solve the problem and were used in the PR.VIII, B.IX, and B.XVI variants. This increased speed performance in the B.IX alone by 10 to 13 mph (16 to 21 km/h).", "title": "Design and manufacture" }, { "paragraph_id": 38, "text": "The oval-section fuselage was a frameless monocoque shell built in two vertically separate halves formed over a mahogany or concrete mould. Pressure was applied with band clamps. Some of the 1/2—3/4\" shell sandwich skins comprised 3/32\" birch three-ply outers, with 7/16\" cores of Ecuadorean balsa. In many generally smaller but vital areas, such as around apertures and attachment zones, stronger timbers, including aircraft-quality spruce, replaced the balsa core. The main areas of the sandwich skin were only 0.55 in (14 mm) thick. Together with various forms of wood reinforcement, often of laminated construction, the sandwich skin gave great stiffness and torsional resistance. The separate fuselage halves speeded construction, permitting access by personnel working in parallel with others, as the work progressed.", "title": "Design and manufacture" }, { "paragraph_id": 39, "text": "Work on the separate half-fuselages included installation of control mechanisms and cabling. Screwed inserts into the inner skins that would be under stress in service were reinforced using round shear plates made from a fabric-Bakelite composite.", "title": "Design and manufacture" }, { "paragraph_id": 40, "text": "Transverse bulkheads were also compositely built-up with several species of timber, plywood, and balsa. Seven vertically halved bulkheads were installed within each moulded fuselage shell before the main \"boxing up\" operation. Bulkhead number seven was especially strongly built, since it carried the fitments and transmitted the aerodynamic loadings for the tailplane and rudder. The fuselage had a large ventral section cut-out, strongly reinforced, that allowed the fuselage to be lowered onto the wing centre-section at a later stage of assembly.", "title": "Design and manufacture" }, { "paragraph_id": 41, "text": "For early production aircraft, the structural assembly adhesive was casein-based. At a later stage, this was replaced by \"Aerolite\", a synthetic urea-formaldehyde type, which was more durable. To provide for the edge joints for the fuselage halves, zones near the outer edges of the shells had their balsa sandwich cores replaced by much stronger inner laminations of birch plywood. For the bonding together of the two halves (\"boxing up\"), a longitudinal cut was machined into these edges. The profile of this cut was a form of V-groove. Part of the edge bonding process also included adding further longitudinal plywood lap strips on the outside of the shells. The half bulkheads of each shell were bonded to their corresponding pair in a similar way. 
Two laminated wooden clamps were used in the after portion of the fuselage to provide supports during this complex gluing work. The resulting large structural components had to be kept completely still and held in the correct environment until the glue cured.", "title": "Design and manufacture" }, { "paragraph_id": 42, "text": "For finishing, a covering of doped madapollam (a fine, plain-woven cotton) fabric was stretched tightly over the shell and several coats of red, followed by silver dope, were added, followed by the final camouflage paint.", "title": "Design and manufacture" }, { "paragraph_id": 43, "text": "The all-wood wing pairs comprised a single structural unit throughout the wingspan, with no central longitudinal joint. Instead, the spars ran from wingtip to wingtip. There was a single continuous main spar and another continuous rear spar. Because of the combination of dihedral with the forward sweep of the trailing edges of the wings, this rear spar was one of the most complex units to laminate and to finish machining after the bonding and curing. It had to produce the correct 3D tilt in each of two planes. Also, it was designed and made to taper from the wing roots towards the wingtips. Both principal spars were of ply box construction, using in general 0.25-in plywood webs with laminated spruce flanges, plus a number of additional reinforcements and special details.", "title": "Design and manufacture" }, { "paragraph_id": 44, "text": "Spruce and plywood ribs were connected with gusset joints. Some heavy-duty ribs contained pieces of ash and walnut, as well as the special five ply that included veneers laid up at 45°. The upper skin construction was in two layers of 0.25-in five-ply birch, separated by Douglas fir stringers running in the span-wise direction. The wings were covered with madapollam fabric and doped in a similar manner to the fuselage. The wing was installed into the roots by means of four large attachment points. The engine radiators were fitted in the inner wing, just outboard of the fuselage on either side. These gave less drag. The radiators themselves were split into three sections: an oil cooler section outboard, the middle section forming the coolant radiator and the inboard section serving the cabin heater.", "title": "Design and manufacture" }, { "paragraph_id": 45, "text": "The wing contained metal-framed and -skinned ailerons, but the flaps were made of wood and were hydraulically controlled. The nacelles were mostly wood, although for strength, the engine mounts were all metal, as were the undercarriage parts. Engine mounts of welded steel tube were added, along with simple landing gear oleos filled with rubber blocks. Wood was used to carry only in-plane loads, with metal fittings used for all triaxially loaded components such as landing gear, engine mounts, control-surface mounting brackets, and the wing-to-fuselage junction. The outer leading wing edge had to be brought 22 in (56 cm) further forward to accommodate this design. The main tail unit was all wood built. The control surfaces, the rudder, and elevator were aluminium-framed and fabric-covered. The total weight of metal castings and forgings used in the aircraft was only 280 lb (130 kg).", "title": "Design and manufacture" }, { "paragraph_id": 46, "text": "In November 1944, several crashes occurred in the Far East. At first, these were thought to be a result of wing-structure failures. The casein glue, it was said, cracked when exposed to extreme heat and/or monsoon conditions. 
This caused the upper surfaces to \"lift\" from the main spar. An investigating team led by Major Hereward de Havilland travelled to India and produced a report in early December 1944 stating, \"the accidents were not caused by the deterioration of the glue, but by shrinkage of the airframe during the wet monsoon season\". However, a later inquiry by Cabot & Myers firmly attributed the accidents to faulty manufacture and this was confirmed by a further investigation team by the Ministry of Aircraft Production at Defford, which found faults in six Mosquito marks (all built at de Havilland's Hatfield and Leavesden plants). The defects were similar, and none of the aircraft had been exposed to monsoon conditions or termite attack.", "title": "Design and manufacture" }, { "paragraph_id": 47, "text": "The investigators concluded that construction defects occurred at the two plants. They found that the \"... standard of glueing ... left much to be desired.\" Records at the time showed that accidents caused by \"loss of control\" were three times more frequent on Mosquitoes than on any other type of aircraft. The Air Ministry forestalled any loss of confidence in the Mosquito by holding to Major de Havilland's initial investigation in India that the accidents were caused \"largely by climate\" To solve the problem of seepage into the interior, a strip of plywood was set along the span of the wing to seal the entire length of the skin joint.", "title": "Design and manufacture" }, { "paragraph_id": 48, "text": "The fuel systems gave the Mosquito good range and endurance, using up to nine fuel tanks. Two outer wing tanks each contained 58 imp gal (70 US gal; 260 L) of fuel. These were complemented by two inner wing fuel tanks, each containing 143 imp gal (172 US gal; 650 L), located between the wing root and engine nacelle. In the central fuselage were twin fuel tanks mounted between bulkhead number two and three aft of the cockpit. In the FB.VI, these tanks contained 25 imp gal (30 US gal; 110 L) each, while in the B.IV and other unarmed Mosquitoes each of the two centre tanks contained 68 imp gal (82 US gal; 310 L). Both the inner wing, and fuselage tanks are listed as the \"main tanks\" and the total internal fuel load of 452 imp gal (545 US gal; 2,055 L) was initially deemed appropriate for the type. In addition, the FB Mk. VI could have larger fuselage tanks, increasing the capacity to 63 imp gal (76 US gal; 290 L). Drop tanks of 50 imp gal (60 US gal; 230 L) or 100 imp gal (120 US gal; 450 L) could be mounted under each wing, increasing the total fuel load to 615 or 715 imp gal (739 or 859 US gal; 2,800 or 3,250 L).", "title": "Design and manufacture" }, { "paragraph_id": 49, "text": "The design of the Mk.VI allowed for a provisional long-range fuel tank to increase range for action over enemy territory, for the installation of bomb release equipment specific to depth charges for strikes against enemy shipping, or for the simultaneous use of rocket projectiles along with a 100 imp gal (120 US gal; 450 L) drop tank under each wing supplementing the main fuel cells. The FB.VI had a wingspan of 54 ft 2 in (16.51 m), a length (over guns) of 41 ft 2 in (12.55 m). It had a maximum speed of 378 mph (608 km/h) at 13,200 ft (4,000 m). 
Maximum take-off weight was 22,300 lb (10,100 kg) and the range of the aircraft was 1,120 mi (1,800 km) with a service ceiling of 26,000 ft (7,900 m).", "title": "Design and manufacture" }, { "paragraph_id": 50, "text": "To reduce fuel vaporisation at the high altitudes of photographic reconnaissance variants, the central and inner wing tanks were pressurised. The pressure venting cock located behind the pilot's seat controlled the pressure valve. As the altitude increased, the valve increased the volume applied by a pump. This system was extended to include field modifications of the fuel tank system.", "title": "Design and manufacture" }, { "paragraph_id": 51, "text": "The engine oil tanks were in the engine nacelles. Each nacelle contained a 15 imp gal (18 US gal; 68 L) oil tank, including a 2.5 imp gal (3.0 US gal; 11 L) air space. The oil tanks themselves had no separate coolant controlling systems. The coolant header tank was in the forward nacelle, behind the propeller. The remaining coolant systems were controlled by the coolant radiators shutters in the forward inner wing compartment, between the nacelle and the fuselage and behind the main engine cooling radiators, which were fitted in the leading edge. Electric-pneumatic operated radiator shutters directed and controlled airflow through the ducts and into the coolant valves, to predetermined temperatures.", "title": "Design and manufacture" }, { "paragraph_id": 52, "text": "Electrical power came from a 24 volt DC generator on the starboard (No. 2) engine and an alternator on the port engine, which also supplied AC power for radios. The radiator shutters, supercharger gear change, gun camera, bomb bay, bomb/rocket release and all the other crew controlled instruments were powered by a 24 V battery. The radio communication devices included VHF and HF communications, GEE navigation, and IFF and G.P. devices. The electric generators also powered the fire extinguishers. Located on the starboard side of the cockpit, the switches would operate automatically in the event of a crash. In flight, a warning light would flash to indicate a fire, should the pilot not already be aware of it. In later models, to save liquids and engine clean up time in case of belly landing, the fire extinguisher was changed to semi-automatic triggers.", "title": "Design and manufacture" }, { "paragraph_id": 53, "text": "The main landing gear, housed in the nacelles behind the engines, were raised and lowered hydraulically. The main landing gear shock absorbers were de Havilland manufactured and used a system of rubber in compression, rather than hydraulic oleos, with twin pneumatic brakes for each wheel. The Dunlop-Marstrand anti-shimmy tailwheel was also retractable.", "title": "Design and manufacture" }, { "paragraph_id": 54, "text": "The de Havilland Mosquito operated in many roles, performing medium bomber, reconnaissance, tactical strike, anti-submarine warfare, shipping attacks and night fighter duties, until the end of the war. In July 1941, the first production Mosquito W4051 (a production fuselage combined with some prototype flying surfaces – see Prototypes and test flights) was sent to No. 1 Photographic Reconnaissance Unit (PRU), at RAF Benson. The secret reconnaissance flights of this aircraft were the first operational missions of the Mosquito. 
In 1944, the journal Flight gave 19 September 1941 as date of the first PR mission, at an altitude \"of some 20,000 ft\".", "title": "Operational history" }, { "paragraph_id": 55, "text": "On 15 November 1941, 105 Squadron, RAF, took delivery at RAF Swanton Morley, Norfolk, of the first operational Mosquito Mk. B.IV bomber, serial no. W4064. Throughout 1942, 105 Squadron, based next at RAF Horsham St. Faith, then from 29 September, RAF Marham, undertook daylight low-level and shallow dive attacks. Apart from the Oslo and Berlin raids, the strikes were mainly on industrial and infrastructure targets in occupied Netherlands and Norway, France and northern and western Germany. The crews faced deadly flak and fighters, particularly Focke-Wulf Fw 190s, which they called snappers. Germany still controlled continental airspace and the Fw 190s were often already airborne and at an advantageous altitude. Collisions within the formations also caused casualties. It was the Mosquito's excellent handling capabilities, rather than pure speed, that facilitated successful evasions.", "title": "Operational history" }, { "paragraph_id": 56, "text": "The Mosquito was first announced publicly on 26 September 1942 after the Oslo Mosquito raid of 25 September. It was featured in The Times on 28 September and the next day the newspaper published two captioned photographs illustrating the bomb strikes and damage. On 6 December 1942, Mosquitoes from Nos. 105 and 139 Squadrons made up part of the bomber force used in Operation Oyster, the large No. 2 Group raid against the Philips works at Eindhoven.", "title": "Operational history" }, { "paragraph_id": 57, "text": "From mid-1942 to mid-1943, Mosquito bombers flew high-speed, medium and low-altitude daylight missions against factories, railways and other pinpoint targets in Germany and German-occupied Europe. From June 1943, Mosquito bombers were formed into the Light Night Striking Force to guide RAF Bomber Command heavy bomber raids and as \"nuisance\" bombers, dropping Blockbuster bombs – 4,000 lb (1,800 kg) \"cookies\" – in high-altitude, high-speed raids that German night fighters were almost powerless to intercept.", "title": "Operational history" }, { "paragraph_id": 58, "text": "As a night fighter from mid-1942, the Mosquito intercepted Luftwaffe raids on Britain, notably those of Operation Steinbock in 1944. Starting in July 1942, Mosquito night-fighter units raided Luftwaffe airfields. As part of 100 Group, it was flown as a night fighter and as an intruder supporting Bomber Command heavy bombers that reduced losses during 1944 and 1945.", "title": "Operational history" }, { "paragraph_id": 59, "text": "The Mosquito fighter-bomber served as a strike aircraft in the Second Tactical Air Force (2TAF) from its inception on 1 June 1943. The main objective was to prepare for the invasion of occupied Europe a year later. In Operation Overlord three Mosquito FB Mk. VI wings flew close air support for the Allied armies in co-operation with other RAF units equipped with the North American B-25 Mitchell medium bomber. In the months between the foundation of 2TAF and its duties from D day onwards, vital training was interspersed with attacks on V-1 flying bomb launch sites.", "title": "Operational history" }, { "paragraph_id": 60, "text": "In another example of the daylight precision raids carried out by the Mosquitoes of Nos. 
105 and 139 Squadrons, on 30 January 1943, the 10th anniversary of the Nazis' seizure of power, a morning Mosquito attack knocked out the main Berlin broadcasting station while Luftwaffe Chief Reichsmarschall Hermann Göring was speaking, putting his speech off the air. A second sortie in the afternoon inconvenienced another speech, by Propaganda Minister Joseph Goebbels. Lecturing a group of German aircraft manufacturers, Göring said:", "title": "Operational history" }, { "paragraph_id": 61, "text": "In 1940 I could at least fly as far as Glasgow in most of my aircraft, but not now! It makes me furious when I see the Mosquito. I turn green and yellow with envy. The British, who can afford aluminium better than we can, knock together a beautiful wooden aircraft that every piano factory over there is building, and they give it a speed which they have now increased yet again. What do you make of that? There is nothing the British do not have. They have the geniuses and we have the nincompoops. After the war is over I'm going to buy a British radio set – then at least I'll own something that has always worked.", "title": "Operational history" }, { "paragraph_id": 62, "text": "During this daylight-raiding phase, Nos. 105 and 139 Squadrons flew 139 combat operations and aircrew losses were high. Even the losses incurred in the squadrons' dangerous Blenheim era were exceeded in percentage terms. The Roll of Honour shows 51 aircrew deaths from the end of May 1942 to April 1943. In the corresponding period, crews gained three Mentions in Despatches, two DFMs and three DFCs. The low-level daylight attacks finished on 27 May 1943 with strikes on the Schott glass and Zeiss instrument works, both in Jena. Subsequently, when low-level precision attacks required Mosquitoes, they were allotted to squadrons operating the FB.IV version. Examples include the Aarhus air raid and Operation Jericho.", "title": "Operational history" }, { "paragraph_id": 63, "text": "Since the beginning of the year, the German fighter force had become seriously overstretched. In April 1943, in response to \"political humiliation\" caused by the Mosquito, Göring ordered the formation of special Luftwaffe units (Jagdgeschwader 25, commanded by Oberstleutnant Herbert Ihlefeld and Jagdgeschwader 50, under Major Hermann Graf) to combat the Mosquito attacks, though these units, which were \"little more than glorified squadrons\", were unsuccessful against the elusive RAF aircraft. Post-war German histories also indicate that there was a belief within the Luftwaffe that Mosquito aircraft \"gave only a weak radar signal.\".", "title": "Operational history" }, { "paragraph_id": 64, "text": "The first Mosquito Squadron to be equipped with Oboe (navigation) was No. 109, based at RAF Wyton, after working as an experimental unit at RAF Boscombe Down. They used Oboe in anger for the first time on 31 December 1942 and 1 January 1943, target marking for a force of heavy bombers attacking Düsseldorf.. On 1 June, the two pioneering Squadrons joined No. 109 Squadron in the re-formed No. 8 Group RAF (Bomber Command). Initially they were engaged in moderately high altitude (about 10,000 ft (3,000 m)) night bombing, with 67 trips during that summer, mainly to Berlin. Soon after, Nos. 
105 and 139 Squadrons were widely used by the RAF Pathfinder Force, marking targets for the main night-time strategic bombing force.", "title": "Operational history" }, { "paragraph_id": 65, "text": "In what were, initially, diversionary \"nuisance raids,\" Mosquito bombers dropped 4,000 lb Blockbuster bombs or \"Cookies.\" Particularly after the introduction of H2S (radar) in some Mosquitoes, these raids with larger bombs succeeded to the extent that they provided a significant additional form of attack to the large formations of \"heavies.\" Late in the war, there were a significant number of all-Mosquito raids on big German cities involving 100 or more aircraft. On the night of 20/21 February 1945, for example, Mosquitoes of No. 8 Group mounted the first of 36 consecutive night raids on Berlin.", "title": "Operational history" }, { "paragraph_id": 66, "text": "From 1943, Mosquitoes with RAF Coastal Command attacked Kriegsmarine U-boats and intercepted transport ship concentrations. After Operation Overlord, the U-boat threat in the Western Approaches decreased fairly quickly, but Norwegian and Danish waters correspondingly posed greater dangers, so the RAF Coastal Command Mosquitoes were moved to Scotland to counter this threat. The Strike Wing at Banff stood up in September 1944 and comprised Mosquito aircraft of Nos. 143, 144, 235 and 248 Squadrons Royal Air Force and No. 333 Squadron Royal Norwegian Air Force. Despite an initially high loss rate, the Mosquito bomber variants ended the war with the lowest losses of any aircraft in RAF Bomber Command service.", "title": "Operational history" }, { "paragraph_id": 67, "text": "The Mosquito also proved a very capable night fighter. Some of the most successful RAF pilots flew these variants. For example, Wing Commander Branse Burbridge claimed 21 kills.", "title": "Operational history" }, { "paragraph_id": 68, "text": "Mosquitoes of No. 100 Group RAF acted as night intruders operating at high level in support of the Bomber Command \"heavies\", to counter the enemy tactic of merging into the bomber stream, which, towards the end of 1943, was causing serious Allied losses. These RCM (radio countermeasures) aircraft were fitted with a device called \"Serrate\" to allow them to track down German night fighters from their Lichtenstein B/C (low-UHF-band) and Lichtenstein SN-2 (lower end of the VHF FM broadcast band) radar emissions, as well as a device named \"Perfectos\" that tracked German IFF signals. These methods were responsible for the destruction of 257 German aircraft from December 1943 to April 1945. Mosquito fighters from all units accounted for 487 German aircraft during the war, the vast majority of which were night fighters.", "title": "Operational history" }, { "paragraph_id": 69, "text": "One Mosquito is listed as belonging to German secret operations unit Kampfgeschwader 200, which tested, evaluated and sometimes clandestinely operated captured enemy aircraft during the war. The aircraft was listed on the order of battle of Versuchsverband OKL's 2 Staffel, Stab Gruppe on 10 November and 31 December 1944.
However, on both lists, the Mosquito is listed as unserviceable.", "title": "Operational history" }, { "paragraph_id": 70, "text": "The Mosquito flew its last official European war mission on 21 May 1945, when Mosquitoes of 143 Squadron and 248 Squadron RAF were ordered to continue to hunt German submarines that might be tempted to continue the fight; instead of submarines, all the Mosquitoes encountered were passive E-boats.", "title": "Operational history" }, { "paragraph_id": 71, "text": "The last operational RAF Mosquitoes were the Mosquito TT.35s, which were finally retired from No. 3 Civilian Anti-Aircraft Co-Operation Unit (CAACU) in May 1963.", "title": "Operational history" }, { "paragraph_id": 72, "text": "In 1947–49, up to 180 Canadian surplus Mosquitoes flew many operations for the Nationalist Chinese under Chiang Kai-shek in the civil war against Communist forces. Pilots from three squadrons of Mosquitoes claimed to have sunk or damaged 500 ships during one invasion attempt. As the Communists assumed control, the remaining aircraft were evacuated to Formosa, where they flew missions against shipping.", "title": "Operational history" }, { "paragraph_id": 73, "text": "Until the end of 1942 the RAF always used Roman numerals (I, II, ...) for mark numbers; 1943–1948 was a transition period during which new aircraft entering service were given Arabic numerals (1, 2, ...) for mark numbers, but older aircraft retained their Roman numerals. From 1948 onwards, Arabic numerals were used exclusively.", "title": "Variants" }, { "paragraph_id": 74, "text": "Three prototypes were built, each with a different configuration. The first to fly was W4050 on 25 November 1940, followed by the fighter W4052 on 15 May 1941 and the photo-reconnaissance prototype W4051 on 10 June 1941. W4051 later flew operationally with 1 Photographic Reconnaissance Unit (1 PRU).", "title": "Variants" }, { "paragraph_id": 75, "text": "Media related to De Havilland Mosquito PR at Wikimedia Commons", "title": "Variants" }, { "paragraph_id": 76, "text": "A total of 10 Mosquito PR Mk.Is were built, four of them \"long range\" versions equipped with a 151 imp gal (690 L) overload fuel tank in the fuselage. The contract called for 10 of the PR Mk.I airframes to be converted to B Mk.IV Series 1s. All of the PR Mk.Is, and the B Mk.IV Series 1s, had the original short engine nacelles and short-span (19 ft 5.5 in) tailplanes. Their engine cowlings incorporated the original pattern of integrated exhaust manifolds, which, after relatively brief flight time, had a troublesome habit of burning and blistering the cowling panels. The first operational sortie by a Mosquito was made by a PR Mk.I, W4055, on 17 September 1941; during this sortie the unarmed Mosquito PR.I evaded three Messerschmitt Bf 109s at 23,000 ft (7,000 m). Powered by two Merlin 21s, the PR Mk.I had a maximum speed of 382 mph (615 km/h), a cruise speed of 255 mph (410 km/h), a ceiling of 35,000 ft (11,000 m), a range of 2,180 nmi (4,040 km), and a climb rate of 2,850 ft (870 m) per minute.", "title": "Variants" }, { "paragraph_id": 77, "text": "Over 30 Mosquito B Mk.IV bombers were converted into the PR Mk.IV photo-reconnaissance aircraft. The first operational flight by a PR Mk.IV was made by DK284 in April 1942.", "title": "Variants" }, { "paragraph_id": 78, "text": "The Mosquito PR Mk.VIII, built as a stopgap pending the introduction of the refined PR Mk.IX, was the next photo-reconnaissance version.
The five VIIIs were converted from B Mk.IVs and became the first operational Mosquito version to be powered by two-stage, two-speed supercharged engines, using 1,565 hp (1,167 kW) Rolls-Royce Merlin 61 engines in place of Merlin 21/22s. The first PR Mk.VIII, DK324, flew for the first time on 20 October 1942. The PR Mk.VIII had a maximum speed of 436 mph (702 km/h), an economical cruise speed of 295 mph (475 km/h) at 20,000 ft and 350 mph (560 km/h) at 30,000 ft, a ceiling of 38,000 ft (12,000 m), a range of 2,550 nmi (4,720 km), and a climb rate of 2,500 ft (760 m) per minute.", "title": "Variants" }, { "paragraph_id": 79, "text": "The Mosquito PR Mk.IX, 90 of which were built, was the first Mosquito variant with two-stage, two-speed engines to be produced in quantity; the first of these, LR405, first flew in April 1943. The PR Mk.IX was based on the Mosquito B Mk.IX bomber and was powered by two 1,680 hp (1,250 kW) Merlin 72/73 or 76/77 engines. It could carry either two 50 imp gal (230 L), two 100 imp gal (450 L) or two 200 imp gal (910 L) droppable fuel tanks.", "title": "Variants" }, { "paragraph_id": 80, "text": "The Mosquito PR Mk.XVI had a pressurised cockpit and, like the Mk.IX, was powered by two Rolls-Royce Merlin 72/73 or 76/77 piston engines. This version was equipped with three overload fuel tanks, totalling 760 imp gal (3,500 L) in the bomb bay, and could also carry two 50 imp gal (230 L) or 100 imp gal (450 L) drop tanks. A total of 435 of the PR Mk.XVI were built. The PR Mk.XVI had a maximum speed of 415 mph (668 km/h), a cruise speed of 250 mph (400 km/h), a ceiling of 38,500 ft (11,700 m), a range of 2,450 nmi (4,540 km), and a climb rate of 2,900 ft (884 m) per minute.", "title": "Variants" }, { "paragraph_id": 81, "text": "The Mosquito PR Mk.32 was a long-range, high-altitude, pressurised photo-reconnaissance version. It was powered by a pair of two-stage supercharged 1,690 hp (1,260 kW) Rolls-Royce Merlin 113 and Merlin 114 piston engines, the Merlin 113 on the starboard side and the Merlin 114 on the port. First flown in August 1944, only five were built and all were conversions from PR.XVIs.", "title": "Variants" }, { "paragraph_id": 82, "text": "The Mosquito PR Mk.34 and PR Mk.34A were very long-range, unarmed, high-altitude photo-reconnaissance versions. The fuel tank and cockpit protection armour were removed. Additional fuel was carried in a bulged bomb bay: 1,192 gallons, equivalent to a range of 5,419 mi (8,721 km). A further two 200-gallon (910-litre) drop tanks under the outer wings gave a range of 3,600 mi (5,800 km) cruising at 300 mph (480 km/h). The aircraft was powered by the two 1,690 hp (1,260 kW) Merlin 113/114 engines first used in the PR.32, the port Merlin 114 driving a Marshal cabin supercharger. A total of 181 were built, including 50 built by Percival Aircraft Company at Luton. The PR.34's maximum speed (TAS) was 335 mph (539 km/h) at sea level, 405 mph (652 km/h) at 17,000 ft (5,200 m) and 425 mph (684 km/h) at 30,000 ft (9,100 m). All PR.34s were fitted with four split F52 vertical cameras, two forward and two aft of the fuselage tank, and one F24 oblique camera; sometimes a K-17 camera was used for air surveys. The PR.34A, which appeared in August 1945, was the final photo-reconnaissance variant, with a Merlin 113A and a Merlin 114A each delivering 1,710 hp (1,280 kW).", "title": "Variants" }, { "paragraph_id": 83, "text": "Colonel Roy M.
Stanley II, USAF (RET) wrote: \"I consider the Mosquito the best photo-reconnaissance aircraft of the war\".", "title": "Variants" }, { "paragraph_id": 84, "text": "After the end of World War II, Spartan Air Services used ten ex-RAF Mosquitoes, mostly B.35s plus one of only six PR.35s built, for high-altitude photographic survey work in Canada.", "title": "Variants" }, { "paragraph_id": 85, "text": "Media related to De Havilland Mosquito B at Wikimedia Commons", "title": "Variants" }, { "paragraph_id": 86, "text": "On 21 June 1941, the Air Ministry ordered that the last 10 Mosquitoes ordered as photo-reconnaissance aircraft should be converted to bombers. These 10 aircraft were part of the original 1 March 1940 production order and became the B Mk.IV Series 1. W4057 was to be the prototype and flew for the first time on 8 September 1941.", "title": "Variants" }, { "paragraph_id": 87, "text": "The bomber prototype led to the B Mk.IV, of which 273 were built: apart from the 10 Series 1s, all of the rest were built as Series 2s with extended nacelles, revised exhaust manifolds with integrated flame dampers, and larger tailplanes. Series 2 bombers also differed from the Series 1 in having an increased payload of four 500 lb (230 kg) bombs, instead of the four 250 lb (110 kg) bombs of the Series 1. This was made possible by cropping, or shortening, the tail of the 500 lb (230 kg) bomb so that four of these heavier weapons, a 2,000 lb (920 kg) total load, could be carried. The B Mk.IV entered service in May 1942 with 105 Squadron.", "title": "Variants" }, { "paragraph_id": 88, "text": "In April 1943, it was decided to convert a B Mk.IV to carry a 4,000 lb (1,800 kg) Blockbuster bomb (nicknamed a Cookie). The conversion, including modified bomb bay suspension arrangements, bulged bomb bay doors and fairings, was relatively straightforward, and 54 B.IVs were modified and distributed to squadrons of the Light Night Striking Force. 27 B Mk.IVs were later converted for special operations with the Highball anti-shipping weapon, and were used by 618 Squadron, formed in April 1943 specifically to use this weapon. A B Mk.IV, DK290, was initially used as a trials aircraft for the bomb, followed by DZ471, 530 and 533. The B Mk.IV had a maximum speed of 380 mph (610 km/h), a cruising speed of 265 mph (426 km/h), a ceiling of 34,000 ft (10,000 m), a range of 2,040 nmi (3,780 km), and a climb rate of 2,500 ft per minute (12.7 m/s).", "title": "Variants" }, { "paragraph_id": 89, "text": "Other bomber variants of the Mosquito included the Merlin 21-powered B Mk.V high-altitude version. Trials with this configuration were made with W4057, which had strengthened wings and two additional fuel tanks, or alternatively, two 500 lb (230 kg) bombs. This design was not produced in Britain, but formed the basic design of the Canadian-built B.VII; only W4057 was built in prototype form. The Merlin 31-powered B Mk.VII was built by de Havilland Canada and first flown on 24 September 1942. It saw service only in Canada; 25 were built, of which six were handed over to the United States Army Air Forces.", "title": "Variants" }, { "paragraph_id": 90, "text": "The B Mk.IX (54 built) was powered by the Merlin 72, 73, 76 or 77. This two-stage Merlin variant was based on the PR.IX. The prototype, DK324, was converted from a PR.VIII and first flew on 24 March 1943.
In October 1943, it was decided that all B Mk.IVs and all B Mk.IXs then in service would be converted to carry the 4,000 lb (1,800 kg) \"Cookie\", and all B Mk.IXs built after that date were designed to allow them to be converted to carry the weapon. The B Mk.IX had a maximum speed of 408 mph (657 km/h), an economical cruise speed of 295 mph (475 km/h) at 20,000 ft and 350 mph (560 km/h) at 30,000 ft, a ceiling of 36,000 ft (11,000 m), a range of 2,450 nmi (4,540 km), and a climb rate of 2,850 feet per minute (14.5 m/s). The IX could carry a maximum load of 2,000–4,000 lb (910–1,810 kg) of bombs. A Mosquito B Mk.IX holds the record for the most combat operations flown by an Allied bomber in the Second World War. LR503, known as \"F for Freddie\" (from its squadron code letters, GB*F), first served with No. 109 Squadron and subsequently No. 105 Squadron RAF. It flew 213 sorties during the war, only to crash at Calgary airport during the Eighth Victory Loan Bond Drive on 10 May 1945, two days after Victory in Europe Day, killing both the pilot, Flt. Lt. Maurice Briggs, DSO, DFC, DFM, and the navigator, Fl. Off. John Baker, DFC and Bar.", "title": "Variants" }, { "paragraph_id": 91, "text": "The B Mk.XVI was powered by the same engine variants as the B.IX. All B Mk.XVIs were capable of being converted to carry the 4,000 lb (1,800 kg) \"Cookie\". The two-stage powerplants were added along with a pressurised cabin. The prototype, DZ540, was converted from a B Mk.IV and first flew on 1 January 1944; 402 were built. The next variant, the B Mk.XX, was powered by Packard Merlin 31s and 33s. It was the Canadian version of the B Mk.IV; altogether, 245 were built. The B Mk.XVI had a maximum speed of 408 mph (657 km/h), an economical cruise speed of 295 mph (475 km/h) at 20,000 ft and 350 mph (560 km/h) at 30,000 ft, a ceiling of 37,000 ft (11,000 m), a range of 1,485 nmi (2,750 km), and a climb rate of 2,800 ft per minute (14 m/s). The type could carry 4,000 lb (1,800 kg) of bombs.", "title": "Variants" }, { "paragraph_id": 92, "text": "The B.35 was powered by Merlin 113s and 114As. Some were converted to TT.35s (target tugs) and others were used as PR.35s (photo-reconnaissance). The B.35 had a maximum speed of 422 mph (679 km/h), a cruising speed of 276 mph (444 km/h), a ceiling of 42,000 ft (13,000 m), a range of 1,750 nmi (3,240 km), and a climb rate of 2,700 ft per minute (13.7 m/s). A total of 174 B.35s were delivered up to the end of 1945. A further 100 were delivered from 1946, for a grand total of 274, 65 of which were built by Airspeed Ltd.", "title": "Variants" }, { "paragraph_id": 93, "text": "The Mosquito F Mk.II was developed during 1940, and the first prototype was completed on 15 May 1941. These Mosquitoes were fitted with four 20 mm (0.79 in) Hispano cannon in the fuselage belly and four .303 (7.7 mm) Browning machine guns mounted in the nose. On production Mk.IIs, the machine guns and ammunition tanks were accessed via two centrally hinged, sideways-opening doors in the upper nose section. To arm and service the cannon, the bomb bay doors were replaced by manually operated bay doors: the F and NF Mk.IIs could not carry bombs. The type was also fitted with a gun camera in a compartment above the machine guns in the nose, and with exhaust flame dampers to reduce the glare from the Merlin XXs.", "title": "Variants" }, { "paragraph_id": 94, "text": "In the summer of 1942, Britain experienced daytime incursions of the high-altitude reconnaissance bomber, the Junkers Ju 86P.
Although the Ju 86P only carried a light bomb load, it overflew sensitive areas, including Bristol, Bedfordshire and Hertfordshire. Bombs were dropped on Luton and elsewhere, and this particular aircraft was seen from the main de Havilland offices and factory at Hatfield. An attempt to intercept it with a Spitfire from RAF Manston was unsuccessful. As a result of the potential threat, a decision was quickly taken to develop a high-altitude Mosquito interceptor, using the MP469 prototype.", "title": "Variants" }, { "paragraph_id": 95, "text": "MP469 entered the experimental shop on 7 September and made its initial flight on 14 September, piloted by John de Havilland. The bomber nose was replaced with a standard fighter nose, armed with four .303 (7.7 mm) Browning machine guns. The low-pressure cabin retained a bomber canopy structure and a two-piece windscreen. The control wheel was replaced with a fighter control stick. The wingspan was increased to 59 ft (18 m). The airframe was lightened by removing armour plating, some fuel tanks and other fitments. Smaller-diameter main wheels were fitted after the first few flights. At a loaded weight of 16,200 lb (7,300 kg), this HA Mk.XV was 2,300 lb (1,000 kg) lighter than a standard Mk.II. For this first conversion, the engines were a pair of Merlin 61s. On 15 September, John de Havilland reached an altitude of 43,000 ft (13,000 m) in this version. The aircraft was delivered to a High Altitude Flight which had been formed at RAF Northolt. However, the high-level German daylight intruders were no longer to be seen. It was subsequently revealed that only five Ju 86P aircraft had been built and they had flown only 12 sorties. Nevertheless, the general need for high-altitude interceptors was recognised – but now the emphasis was to be upon night fighters.", "title": "Variants" }, { "paragraph_id": 96, "text": "The A&AEE tested the climb and speed of the night fighter conversion of MP469 in January 1943 for the Ministry of Aircraft Production. Wingspan had been increased to 62 ft (19 m) and the Brownings had been moved to a fairing below the fuselage. According to Birtles, an AI radar was mounted in the nose and the Merlins were upgraded to the Merlin 76 type, although Boscombe Down reported Merlin 61s. In addition to MP469, four more B Mk.IVs were converted into NF Mk.XVs. The Fighter Interception Unit at RAF Ford carried out service trials in March 1943, and these five aircraft then went to 85 Squadron at Hunsdon, where they were flown from April until August of that year. The greatest height reached in service was 44,600 ft (13,600 m).", "title": "Variants" }, { "paragraph_id": 97, "text": "Apart from the F Mk.XV, all Mosquito fighters and fighter-bombers featured a modified canopy structure incorporating a flat, single-piece armoured windscreen, and the crew entry/exit door was moved from the bottom of the forward fuselage to the right side of the nose, just forward of the wing leading edge.", "title": "Variants" }, { "paragraph_id": 98, "text": "Media related to De Havilland Mosquito NF at Wikimedia Commons", "title": "Variants" }, { "paragraph_id": 99, "text": "At the end of 1940, the Air Staff's preferred turret-equipped night fighter design to meet Operational Requirement O.R. 95 was the Gloster F.18/40 (derived from their F.9/37).
However, although in agreement as to the quality of the Gloster company's design, the Ministry of Aircraft Production was concerned that Gloster would not be able to work on both the F.18/40 and the jet fighter design, which was considered the greater priority. Consequently, in mid-1941 the Air Staff and MAP agreed that the Gloster aircraft would be dropped and that the Mosquito, when fitted with a turret, would be considered for the night fighter requirement.", "title": "Variants" }, { "paragraph_id": 100, "text": "The first production night fighter Mosquitoes – minus turrets – were designated NF Mk.II. A total of 466 were built, with the first entering service with No. 157 Squadron in January 1942, replacing the Douglas Havoc. These aircraft were similar to the F Mk.II, but were fitted with the AI Mk.IV metric-wavelength radar. The herring-bone transmitting antenna was mounted on the nose and the dipole receiving antennae were carried under the outer wings. A number of NF IIs had their radar equipment removed and additional fuel tanks installed in the bay behind the cannon for use as night intruders. These aircraft, designated NF II (Special), were first used by 23 Squadron in operations over Europe in 1942. 23 Squadron was then deployed to Malta on 20 December 1942, and operated against targets in Italy.", "title": "Variants" }, { "paragraph_id": 101, "text": "Ninety-seven NF Mk.IIs were upgraded with 3.3 GHz frequency, low-SHF-band AI Mk.VIII radar and these were designated NF Mk.XII. The NF Mk.XIII, of which 270 were built, was the production equivalent of the Mk.XII conversions. These \"centimetric\" radar sets were mounted in a solid \"thimble\" (Mk.XII / XIII) or universal \"bull nose\" (Mk.XVII / XIX) radome, which required the machine guns to be dispensed with.", "title": "Variants" }, { "paragraph_id": 102, "text": "Four F Mk.XVs were converted to the NF Mk.XV. These were fitted with AI Mk.VIII in a \"thimble\" radome, and the .303 Brownings were moved into a gun pack fitted under the forward fuselage.", "title": "Variants" }, { "paragraph_id": 103, "text": "NF Mk.XVII was the designation for 99 NF Mk.II conversions, with single-stage Merlin 21, 22, or 23 engines, but with British AI.X (US SCR-720) radar.", "title": "Variants" }, { "paragraph_id": 104, "text": "The NF Mk.XIX was an improved version of the NF XIII. It could be fitted with American or British AI radars; 220 were built.", "title": "Variants" }, { "paragraph_id": 105, "text": "The NF Mk.30 was the final wartime variant and was a high-altitude version, powered by two 1,710 hp (1,280 kW) Rolls-Royce Merlin 76s. The NF Mk.30 had a maximum speed of 424 mph (682 km/h) at 26,500 ft (8,100 m). It also carried early electronic countermeasures equipment; 526 were built.", "title": "Variants" }, { "paragraph_id": 106, "text": "Other Mosquito night fighter variants planned but never built included the NF Mk.X and NF Mk.XIV (the latter based on the NF Mk.XIII), both of which were to have two-stage Merlins. The NF Mk.31 was a variant of the NF Mk.30, but powered by Packard Merlins.", "title": "Variants" }, { "paragraph_id": 107, "text": "After the war, two more night fighter versions were developed. The NF Mk.36 was similar to the Mosquito NF Mk.30, but fitted with the American-built AI.Mk.X radar and powered by two 1,690 hp (1,260 kW) Rolls-Royce Merlin 113/114 piston engines; 266 were built.
Max level speeds (TAS) with flame dampers fitted were 305 mph (491 km/h) at sea level, 380 mph (610 km/h) at 17,000 ft (5,200 m), and 405 mph (652 km/h) at 30,000 ft (9,100 m).", "title": "Variants" }, { "paragraph_id": 108, "text": "The NF Mk.38, 101 of which were built, was also similar to the Mosquito NF Mk.30, but fitted with the British-built AI Mk.IX radar. This variant suffered from stability problems and did not enter RAF service: 60 were eventually sold to Yugoslavia. According to the Pilot's Notes and Air Ministry 'Special Flying Instruction TF/487', which posted limits on the Mosquito's maximum speeds, the NF Mk.38 had a VNE of 370 knots (425 mph), without under-wing stores, within the altitude range of sea level to 10,000 ft (3,000 m). From 10,000 to 15,000 ft (4,600 m), however, the maximum speed was 348 knots (400 mph). As height increased, the other recorded limits were: 15,000 to 20,000 ft (6,100 m), 320 knots (368 mph); 20,000 to 25,000 ft (7,600 m), 295 knots (339 mph); 25,000 to 30,000 ft (9,100 m), 260 knots (299 mph); and 30,000 to 35,000 ft (11,000 m), 235 knots (270 mph). With two added 100-gallon fuel tanks, these limits fell: between sea level and 15,000 ft, 330 knots (379 mph); 15,000 to 20,000 ft (6,100 m), 320 knots (368 mph); 20,000 to 25,000 ft (7,600 m), 295 knots (339 mph); 25,000 to 30,000 ft (9,100 m), 260 knots (299 mph); and 30,000 to 35,000 ft (11,000 m), 235 knots (270 mph). Little difference was noted above 15,000 ft (4,600 m).", "title": "Variants" }, { "paragraph_id": 109, "text": "Media related to De Havilland Mosquito FB at Wikimedia Commons", "title": "Variants" }, { "paragraph_id": 110, "text": "The FB Mk. VI, which first flew on 1 June 1942, was powered by two single-stage, two-speed 1,460 hp (1,090 kW) Merlin 21s or 1,635 hp (1,219 kW) Merlin 25s, and introduced a re-stressed and reinforced \"basic\" wing structure capable of carrying single 250-or-500 lb (110-or-230 kg) bombs on racks housed in streamlined fairings under each wing, or up to eight RP-3 25 lb or 60 lb rockets. In addition, fuel lines were added to the wings to enable single 50 imp gal (230 L) or 100 imp gal (450 L) drop tanks to be carried under each wing. The usual fixed armament was four 20 mm Hispano Mk.II cannon and four .303 (7.7 mm) Browning machine guns, while two 250-or-500 lb (110-or-230 kg) bombs could be carried in the bomb bay.", "title": "Variants" }, { "paragraph_id": 111, "text": "Unlike the F Mk.II, the ventral bay doors were split into two pairs, with the forward pair being used to access the cannon, while the rear pair acted as bomb bay doors. The maximum fuel load was 719.5 imp gal (3,271 L) distributed between 453 imp gal (2,060 L) internal fuel tanks, plus two overload tanks, each of 66.5 imp gal (302 L) capacity, which could be fitted in the bomb bay, and two 100 imp gal (450 L) drop tanks. All-out level speed is often given as 368 mph (592 km/h), although this speed applies to aircraft fitted with saxophone exhausts. The test aircraft (HJ679), fitted with stub exhausts, was found to be performing below expectations and was returned to de Havilland at Hatfield, where it was serviced. Its top speed was then tested and found to be 384 mph (618 km/h), in line with expectations. 2,298 FB Mk. VIs were built, nearly one-third of Mosquito production. Two were converted to TR.33 carrier-borne maritime strike prototypes.", "title": "Variants" }, { "paragraph_id": 112, "text": "The FB Mk.
VI proved capable of holding its own against fighter aircraft, in addition to strike/bombing roles. For example, on 15 January 1945, Mosquito FB Mk. VIs of 143 Squadron were engaged by 30 Focke-Wulf Fw 190s from Jagdgeschwader 5: the Mosquitoes sank an armed trawler and two merchant ships, but five Mosquitoes were lost (two reportedly to flak), while shooting down five Fw 190s.", "title": "Variants" }, { "paragraph_id": 113, "text": "Another fighter-bomber variant was the Mosquito FB Mk. XVIII (sometimes known as the Tsetse), of which one was converted from an FB Mk. VI to serve as the prototype and 17 were purpose-built. The Mk.XVIII was armed with a Molins \"6-pounder Class M\" cannon: this was a modified QF 6-pounder (57 mm) anti-tank gun fitted with an auto-loader to allow both semi-automatic and fully automatic fire. Twenty-five rounds were carried, with the entire installation weighing 1,580 lb (720 kg). In addition, 900 lb (410 kg) of armour was added within the engine cowlings, around the nose and under the cockpit floor to protect the engines and crew from heavily armed U-boats, the intended primary target of the Mk.XVIII. Two or four .303 (7.7 mm) Browning machine guns were retained in the nose and were used to \"sight\" the main weapon onto the target.", "title": "Variants" }, { "paragraph_id": 114, "text": "The Air Ministry initially suspected that this variant would not work, but tests proved otherwise. Although the gun provided the Mosquito with yet more anti-shipping firepower for use against U-boats, it required a steady approach run to aim and fire, making the aircraft's wooden construction an even greater liability in the face of intense anti-aircraft fire. The gun had a muzzle velocity of 2,950 ft/s (900 m/s) and an excellent range of some 1,500–1,800 yd (1,400–1,600 m). It was sensitive to sideways movement; an attack required a dive from 5,000 ft (1,500 m) at a 30° angle with the turn and bank indicator on centre, and any movement during the dive could jam the gun. The prototype, HJ732, was converted from an FB.VI and was first flown on 8 June 1943.", "title": "Variants" }, { "paragraph_id": 115, "text": "The effect of the new weapon was demonstrated on 10 March 1944 when Mk.XVIIIs from 248 Squadron (escorted by four Mk.VIs) engaged a German convoy of one U-boat and four destroyers, protected by 10 Ju 88s. Three of the Ju 88s were shot down; pilot Tony Phillips destroyed one of them with four shells, one of which tore an engine off the aircraft. The U-boat was damaged. On 25 March, U-976 was sunk by Molins-equipped Mosquitoes. On 10 June, U-821 was abandoned in the face of intense air attack from No. 248 Squadron, and was later sunk by a Liberator of No. 206 Squadron. On 5 April 1945, Mosquitoes with Molins guns attacked five German surface ships in the Kattegat and again demonstrated their value by setting them all on fire and sinking them. A German Sperrbrecher (\"minefield breaker\") was lost with all hands, with some 200 bodies being recovered by Swedish vessels; some 900 German soldiers died in total. On 9 April, German U-boats U-804, U-843 and U-1065 were spotted in formation heading for Norway. All were sunk with rockets.
U-251 and U-2359 followed on 19 April and 2 May 1945, also sunk by rockets.", "title": "Variants" }, { "paragraph_id": 116, "text": "Despite the preference for rockets, a further development of the large-gun idea was carried out using the even larger 96 mm calibre QF 32-pounder, a gun based on the QF 3.7-inch AA gun and designed for tank use; the airborne version used a novel form of muzzle brake. Developed to prove the feasibility of using such a large weapon in the Mosquito, this installation was not completed until after the war, when it was flown and fired in a single aircraft without problems, then scrapped.", "title": "Variants" }, { "paragraph_id": 117, "text": "Designs based on the Mk.VI were the FB Mk. 26, built in Canada, and the FB Mk.40, built in Australia, both powered by Packard Merlins. The FB.26 improved on the FB.21 by using 1,620 hp (1,210 kW) single-stage Packard Merlin 225s. Some 300 were built and another 37 were converted to T.29 standard. 212 FB.40s were built by de Havilland Australia; six were converted to PR.40s, 28 to PR.41s, one to an FB.42 and 22 to T.43 trainers. Most were powered by Packard-built Merlin 31s or 33s.", "title": "Variants" }, { "paragraph_id": 118, "text": "The Mosquito was also built as the Mosquito T Mk.III two-seat trainer. This version, powered by two Rolls-Royce Merlin 21s, was unarmed and had a modified cockpit fitted with dual control arrangements. A total of 348 of the T Mk.III were built for the RAF and Fleet Air Arm. de Havilland Australia built 11 T Mk.43 trainers, similar to the Mk.III.", "title": "Variants" }, { "paragraph_id": 119, "text": "To meet specification N.15/44 for a navalised Mosquito for Royal Navy use as a torpedo bomber, de Havilland produced a carrier-borne variant. A Mosquito FB.VI was modified as a prototype, designated Sea Mosquito TR Mk.33, with folding wings, arrester hook, thimble-nose radome, Merlin 25 engines with four-bladed propellers and a new oleo-pneumatic landing gear rather than the standard rubber-in-compression gear. Initial carrier tests of the Sea Mosquito were carried out by Eric \"Winkle\" Brown aboard HMS Indefatigable, the first landing-on taking place on 25 March 1944. An order for 100 TR.33s was placed, although only 50 were built, at Leavesden. Armament was four 20 mm cannon, two 500 lb bombs in the bomb bay (another two could be fitted under the wings), eight 60 lb rockets (four under each wing) and a standard torpedo under the fuselage. The first production TR.33 flew on 10 November 1945. This series was followed by six Sea Mosquito TR Mk.37s, which were built at Chester (Broughton) and differed in having ASV Mk.XIII radar instead of the TR.33's AN/APS-6.", "title": "Variants" }, { "paragraph_id": 120, "text": "The RAF's target tug version was the Mosquito TT Mk.35; these were the last aircraft to remain in operational service with No. 3 CAACU at Exeter, being finally retired in 1963. These aircraft were then featured in the film 633 Squadron. A number of B Mk.XVI bombers were converted into TT Mk.39 target tug aircraft. The Royal Navy also operated the Mosquito TT Mk.39 for target towing.
Two ex-RAF FB.6s were converted to TT.6 standard at Manchester (Ringway) Airport by Fairey Aviation in 1953–1954, and delivered to the Belgian Air Force for use as towing aircraft at the Sylt firing ranges.", "title": "Variants" }, { "paragraph_id": 121, "text": "A total of 1,032 Mosquitoes were built during the war (plus two afterwards) by de Havilland Canada at Downsview Airfield in Downsview, Ontario (now Downsview Park in Toronto, Ontario).", "title": "Variants" }, { "paragraph_id": 122, "text": "A number of Mosquito IVs were modified by Vickers-Armstrongs to carry Highball \"bouncing bombs\" and were allocated Vickers Type numbers:", "title": "Variants" }, { "paragraph_id": 123, "text": "About 5,000 of the total of 7,781 Mosquitoes built had major structural components fabricated from wood in High Wycombe, Buckinghamshire, England. Fuselages, wings and tailplanes were made at furniture companies such as Ronson, E. Gomme, Parker Knoll, Austinsuite and Styles & Mealing. Wing spars were made by J. B. Heath and Dancer & Hearne. Many of the other parts, including flaps, flap shrouds, fins, leading edge assemblies and bomb doors, were also produced in the Buckinghamshire town. Dancer & Hearne processed much of the wood from start to finish, receiving timber and transforming it into finished wing spars at their factory in Penn Street on the outskirts of High Wycombe.", "title": "Production" }, { "paragraph_id": 124, "text": "Initially, much of the specialised yellow birch wood veneer and finished plywood used for the prototypes and early production aircraft was shipped from firms in Wisconsin, US. Prominent in this role was Roddis Plywood and Veneer Manufacturing in Marshfield. In conjunction with the USDA Forest Products Laboratory, Hamilton Roddis had developed new plywood adhesives and hot-pressing technology. Later on, paper birch was logged in large quantities from the interior of British Columbia along the Fraser and Quesnel Rivers and processed in Quesnel and New Westminster by the Pacific Veneer Company. According to the Quesnel archives, BC paper birch supplied half of the wartime British Empire birch used for Mosquitoes and other aircraft.", "title": "Production" }, { "paragraph_id": 125, "text": "As the supply of Ecuadorean balsa was threatened by the U-boats in the Atlantic Ocean, the Ministry of Aircraft Production approved a research effort to supplant the balsa with calcium alginate foam, made from local brown algae. By 1944 the foam was ready, but the U-boat threat had been reduced, the larger B-25 bombers were in sufficient supply to handle most of the bombing raids, and the foam was not used in Mosquito production.", "title": "Production" }, { "paragraph_id": 126, "text": "In July 1941, it was decided that DH Canada would build Mosquitoes at Downsview, Ontario. This was to continue even if Germany invaded Great Britain. Packard Merlin engines produced under licence were bench-tested by August and the first two aircraft were built in September. Production was to increase to fifty per month by early 1942. Initially, the Canadian production was for bomber variants; later, fighters, fighter-bombers and training aircraft were also made. DH Chief Production Engineer Harry Povey was sent first, then W. D. Hunter followed on an extended stay, to liaise with materials and parts suppliers. As was the case with initial UK production, Tego-bonded plywood and birch veneer were obtained from firms in Wisconsin, principally Roddis Plywood and Veneer Manufacturing in Marshfield.
Enemy action delayed the shipping of jigs and moulds, and it was decided to build these locally. During 1942, production improved to over 80 machines per month as sub-contractors and suppliers became established. A mechanised production line, based in part on car-building methods, started in 1944. As the war progressed, Canadian Mosquitoes may have used paper birch supplied by the Pacific Veneer Company of New Westminster from birch logs from the Cariboo, although records say only that this birch was shipped to England for production there. When flight testing could no longer keep up, it was moved to the Central Aircraft Company airfield at London, Ontario, from where the approved Mosquitoes left for commissioning and subsequent ferry transfer to Europe.", "title": "Production" }, { "paragraph_id": 127, "text": "Ferrying Mosquitoes and many other types of WWII aircraft from Canada to Europe was dangerous, resulting in losses of lives and machines, but in the exigencies of war it was regarded as the best option for twin-engine and multi-engine aircraft. In the parlance of the day, among RAF personnel, \"it was no piece of cake.\" Considerable efforts were made by de Havilland Canada to resolve problems with engine and oil systems, and an additional five hours of flight testing were introduced before the ferry flight, but the actual cause of some of the losses was unknown. Nevertheless, by the end of the war, nearly 500 Mosquito bombers and fighter-bombers had been ferried successfully by the Canadian operation.", "title": "Production" }, { "paragraph_id": 128, "text": "After DH Canada had been established for the Mosquito, further manufacturing was set up at DH Australia, in Sydney. One of the DH staff who travelled there was the distinguished test pilot Pat Fillingham. These production lines added totals of 1,133 aircraft of varying types from Canada plus 212 aircraft from Australia.", "title": "Production" }, { "paragraph_id": 129, "text": "In total, both during the war and after, de Havilland exported 46 FB.VIs and 29 PR.XVIs to Australia, and two FB.VIs and 18 NF.30s to Belgium. Approximately 250 FB.26s, T.29s and T.27s were exported from Canada to Nationalist China; a significant number never went into service due to deterioration on the voyage and to crashes during Chinese pilot training; however, five were captured by the People's Liberation Army during the Chinese Civil War. Nineteen FB.VIs went to Czechoslovakia in 1948, six FB.VIs to the Dominican Republic, and a few B.IVs, 57 FB.VIs, 29 PR.XVIs and 23 NF.30s to France. Some T.IIIs were exported to Israel along with 60 FB.VIs, and at least five PR.XVIs and 14 naval versions. Four T.IIIs, 76 FB.VIs, one FB.40 and four T.43s were exported to New Zealand. Three T.IIIs and 18 FB.VIs were exported to Norway, the FB.VIs later being converted to night fighter standard. South Africa received two F.IIs and 14 PR.XVI/XIs, and Sweden received 60 NF.XIXs. Turkey received 96 FB.VIs and several T.IIIs, and Yugoslavia had 60 NF.38s, 80 FB.VIs and three T.IIIs delivered.
At least one de Havilland Mosquito, marked 'DK 296', was delivered to the Soviet Union.", "title": "Production" }, { "paragraph_id": 130, "text": "Total Mosquito production was 7,781, of which 6,710 were built during the war.", "title": "Production" }, { "paragraph_id": 131, "text": "A number of Mosquitoes were lost in civilian airline service, mostly with British Overseas Airways Corporation during World War II.", "title": "Civilian accidents and incidents" }, { "paragraph_id": 132, "text": "On 21 July 1996, Mosquito G-ASKH, wearing the markings of RR299, crashed 1 mile west of Manchester Barton Airport. Pilot Kevin Moorhouse and Engineer Steve Watson were both killed in the crash. At the time, this was the last airworthy Mosquito T.III.", "title": "Civilian accidents and incidents" }, { "paragraph_id": 133, "text": "There are approximately 30 non-flying Mosquitoes around the world, plus four airworthy examples: three in the United States and one in Canada. The largest collection of Mosquitoes is at the de Havilland Aircraft Museum in the United Kingdom, which owns three aircraft, including the first prototype, W4050, the only initial prototype of a Second World War British aircraft design still in existence in the 21st century.", "title": "Surviving aircraft" }, { "paragraph_id": 134, "text": "Data from Jane's Fighting Aircraft of World War II, World War II Warbirds", "title": "Specifications (B Mk.XVI)" }, { "paragraph_id": 135, "text": "General characteristics", "title": "Specifications (B Mk.XVI)" }, { "paragraph_id": 136, "text": "Performance", "title": "Specifications (B Mk.XVI)" }, { "paragraph_id": 137, "text": "Armament", "title": "Specifications (B Mk.XVI)" }, { "paragraph_id": 138, "text": "Avionics", "title": "Specifications (B Mk.XVI)" }, { "paragraph_id": 139, "text": "Notable Mosquito missions", "title": "See also" }, { "paragraph_id": 140, "text": "Related development", "title": "See also" }, { "paragraph_id": 141, "text": "Aircraft of comparable role, configuration, and era", "title": "See also" }, { "paragraph_id": 142, "text": "Related lists", "title": "See also" } ]
The de Havilland DH.98 Mosquito is a British twin-engined, multirole combat aircraft, introduced during the Second World War. Unusual in that its frame was constructed mostly of wood, it was nicknamed the "Wooden Wonder", or "Mossie". Lord Beaverbrook, Minister of Aircraft Production, nicknamed it "Freeman's Folly", alluding to Air Chief Marshal Sir Wilfrid Freeman, who defended Geoffrey de Havilland and his design concept against orders to scrap the project. In 1941, it was one of the fastest operational aircraft in the world. Originally conceived as an unarmed fast bomber, the Mosquito's use evolved during the war into many roles, including low- to medium-altitude daytime tactical bomber, high-altitude night bomber, pathfinder, day or night fighter, fighter-bomber, intruder, maritime strike, and photo-reconnaissance aircraft. It was also used by the British Overseas Airways Corporation as a fast transport to carry small, high-value cargo to and from neutral countries through enemy-controlled airspace. The crew of two, pilot and navigator, sat side by side. A single passenger could ride in the aircraft's bomb bay when necessary. The Mosquito FB Mk. VI was often flown in special raids, such as Operation Jericho, and precision attacks against military intelligence, security, and police facilities. On 30 January 1943, the 10th anniversary of Hitler being made chancellor and the Nazis gaining power, a morning Mosquito attack knocked out the main Berlin broadcasting station while Hermann Göring was speaking, taking his speech off the air. The Mosquito flew with the Royal Air Force (RAF) and other air forces in the European, Mediterranean, and Italian theatres. The Mosquito was also operated by the RAF in the Southeast Asian theatre and by the Royal Australian Air Force based in the Halmaheras and Borneo during the Pacific War. During the 1950s, the RAF replaced the Mosquito with the jet-powered English Electric Canberra.
2002-01-09T03:55:53Z
2023-12-01T16:13:05Z
[ "Template:Cbignore", "Template:ROC", "Template:Aircontent", "Template:ISBN", "Template:NZL", "Template:Div col end", "Template:Cite news", "Template:Short description", "Template:Page needed", "Template:PRC", "Template:TCH", "Template:UK", "Template:Webarchive", "Template:Citation", "Template:Commons category", "Template:Refn", "Template:Main list", "Template:Div col", "Template:GS", "Template:SWE", "Template:TUR", "Template:Portal", "Template:Cite web", "Template:Lowercase title", "Template:Use dmy dates", "Template:Nbsp", "Template:NOR", "Template:Aircraft specs", "Template:Reflist", "Template:Cite book", "Template:Infobox aircraft begin", "Template:Cvt", "Template:Flag", "Template:ISR", "Template:Cite journal", "Template:Navboxes", "Template:Authority control", "Template:Use British English", "Template:AUS", "Template:DOM", "Template:YUG", "Template:Cite magazine", "Template:Infobox aircraft type", "Template:Blockquote", "Template:Multiple image", "Template:Commons category-inline", "Template:BEL", "Template:SUI", "Template:ISSN", "Template:Citation needed", "Template:Main", "Template:'" ]
https://en.wikipedia.org/wiki/De_Havilland_Mosquito
9,099
Dave Thomas (businessman)
Rex David Thomas (July 2, 1932 – January 8, 2002) was an American businessman, philanthropist, and fast-food tycoon. Thomas was the founder and chief executive officer of Wendy's, a fast-food restaurant chain specializing in hamburgers. In this role, Thomas appeared in more than 800 commercial advertisements for the chain from 1989 to 2002, more than any other company founder in television history. Rex David Thomas was born July 2, 1932, in Atlantic City, New Jersey. His biological father's name was Sam and his biological mother's name was Molly. Thomas was adopted between six weeks and six months later by Rex and Auleva Thomas, and as an adult became a well-known advocate for adoption, founding the Dave Thomas Foundation for Adoption. After his adoptive mother's death when he was five, his father moved around the country seeking work. Thomas spent some of his early childhood near Kalamazoo, Michigan, with his grandmother, Minnie Sinclair, whom he credited with teaching him the importance of service and treating others well and with respect, lessons that helped him in his future business life. At age 12, Thomas had his first job at Regas Restaurant, a fine dining restaurant in downtown Knoxville, Tennessee, then lost it in a dispute with his boss; decades later, Regas Restaurant installed a large autographed poster of Thomas just inside their entrance, which remained until the business closed in 2010. He vowed never to lose another job. By 15, he was moving with his father and working at the Hobby House Restaurant in Fort Wayne, Indiana. When his father prepared to move again, Thomas decided to stay in Fort Wayne, dropping out of high school to work full-time at the restaurant. Thomas, who considered ending his schooling the greatest mistake of his life, did not graduate from high school until 1993, when he obtained a GED. He subsequently became an education advocate and founded the Dave Thomas Education Center in Coconut Creek, Florida, which offers GED classes to young adults. At the outbreak of the Korean War in 1950, rather than waiting for the draft, he volunteered for the U.S. Army at age 18 to have some choice in assignments. Having food production and service experience, Thomas requested the Cook's and Baker's School at Fort Benning, Georgia. He was sent to West Germany as a mess sergeant and was responsible for the daily meals of 2,000 soldiers, rising to the rank of staff sergeant. After his discharge in 1953, Thomas returned to Fort Wayne and the Hobby House. In the mid-1950s, Kentucky Fried Chicken founder Col. Harland Sanders came to Fort Wayne, hoping to find restaurateurs with established businesses to whom he could try to sell KFC franchises. At first, Thomas – who was the head cook at a restaurant – and the Clauss family declined Sanders' offer, but Sanders persisted, and the Clauss family franchised their restaurant with KFC; they also later owned many other KFC franchises in the Midwest. During this time, Thomas worked with Sanders on many projects to make KFC more profitable and give it brand recognition. Among other ideas for improvements, Thomas suggested that KFC reduce the number of items on its menu and instead focus on a signature dish; he also proposed that KFC make commercials in which Sanders would personally appear. Thomas was sent by the Clauss family in the mid-1960s to help turn around four of their failing KFC stores in Columbus, Ohio. 
By 1968, Thomas had increased sales in the four fried chicken restaurants so much that he sold his share in them back to Sanders for more than $1.5 million. This experience would prove invaluable to Thomas when he began Wendy's about a year later. After serving as a regional director for Kentucky Fried Chicken, Thomas became part of the investor group which founded Arthur Treacher's. His involvement with the new restaurant lasted less than a year before he went on to found Wendy's. Thomas opened his first Wendy's in Columbus, Ohio, on November 15, 1969. This original restaurant remained operational until March 2, 2007, when it was closed due to lagging sales. Thomas named the restaurant after his eight-year-old daughter Melinda Lou, whose nickname was "Wendy", stemming from the child's inability to say her own name at a young age. According to Bio TV, Thomas claimed that people nicknamed his daughter "Wenda. Not Wendy, but Wenda. 'I'm going to call it Wendy's Old Fashioned Hamburgers'." Before his death in 2002, Thomas admitted regret for naming the franchise after his daughter, saying "I should've just named it after myself, because it put a lot of pressure on [her]." In 1982, Thomas resigned from his day-to-day operations at Wendy's. However, by 1985, several company business decisions, including an awkward new breakfast menu and a loss in brand awareness due to fizzled marketing efforts, led the company's new president to urge Thomas back into a more active role with Wendy's. Thomas began to visit franchises and espouse his hardworking, so-called "mop-bucket attitude". In 1989, he took on a significant role as the TV spokesperson in a series of commercials for the brand. Thomas was not a natural actor, and initially his performances were criticized as stiff and ineffective by advertising critics. By 1990, after efforts by Wendy's advertising agency, Backer Spielvogel Bates, to get humor into the campaign, a decision was made to portray Thomas in a more self-deprecating and folksy manner, which proved much more popular with test audiences. Consumer brand awareness of Wendy's eventually regained levels it had not achieved since octogenarian Clara Peller's highly popular "Where's the beef?" campaign of 1984. With his natural self-effacing style and his relaxed manner, Thomas quickly became a household name. A company survey during the 1990s, a decade during which Thomas starred in every Wendy's commercial that aired, found that 90% of Americans knew who Thomas was. After more than 800 commercials, it was clear that Thomas played a major role in Wendy's' status as the third most popular burger restaurant in the U.S. In 1982, Thomas and a consortium of entrepreneurs created and launched The Wellington School in Upper Arlington, Ohio. The group of entrepreneurs spent three years refining plans, raising money, finding a property, and recruiting teachers and students. The school opened with 137 students and 19 employees as the first co-ed independent school in the greater Columbus metropolitan area. The first graduating class was in 1989 with 32 students. In 2010, a new 76,000 sq ft (7,100 m²) building opened. In 2012, the Little Jags preschool program for 3-year-olds began. Thomas was a Christian. He was married for 47 years to Lorraine Thomas and started his family with her in Upper Arlington, Ohio. In addition to Melinda, they had three more daughters – Pam, Lori, and Molly – and a son, Kenny. After Kenny died in 2013, his sisters continued to own and run multiple Wendy's locations.
Thomas founded the chain Sisters Chicken and Biscuits in 1978, named in reference to his other three daughters. Thomas had been afflicted with a carcinoid neuroendocrine tumor for a decade before it metastasized to his liver. He died on January 8, 2002, in his home in Fort Lauderdale, Florida, at the age of 69. Thomas was buried in Union Cemetery in Columbus, Ohio. At the time of his death, there were more than 6,000 Wendy's restaurants operating in North America. In 1979, Thomas received the Horatio Alger Award for his success with his restaurant chain Wendy's, which by then had reached annual sales of US$1 billion across its franchises. In 1980, Thomas received the Golden Plate Award of the American Academy of Achievement. Thomas, realizing that his success as a high school dropout might convince other teenagers to quit school (something he later claimed was a mistake), became a student at Coconut Creek High School. He earned a GED in 1993. Thomas was inducted into the Junior Achievement U.S. Business Hall of Fame in 1999. Thomas was an honorary Kentucky colonel, as was former boss Harland Sanders. Thomas was posthumously awarded the Presidential Medal of Freedom in 2003. Thomas was raised a Master Mason in Sol. D. Bayless Lodge No. 359 of Fort Wayne, Indiana, and became a 32° Mason, N.M.J., on November 16, 1961, in the Scottish Rite Bodies of Fort Wayne. He was unanimously elected to the Scottish Rite's highest honor, the Grand Cross, by The Supreme Council, 33°, in Executive Session on October 3, 1997, in Washington, D.C. A small triangular block and the surrounding streets and traffic pattern in the Northeast quadrant of Washington, D.C., are unofficially known in the D.C. area as Dave Thomas Circle, due to the longtime presence of a Wendy's franchise and its parking lot on that block.
[ { "paragraph_id": 0, "text": "Rex David Thomas (July 2, 1932 – January 8, 2002) was an American businessman, philanthropist, and fast-food tycoon. Thomas was the founder and chief executive officer of Wendy's, a fast-food restaurant chain specializing in hamburgers. In this role, Thomas appeared in more than 800 commercial advertisements for the chain from 1989 to 2002, more than any other company founder in television history.", "title": "" }, { "paragraph_id": 1, "text": "Rex David Thomas was born July 2, 1932, in Atlantic City, New Jersey. His biological father's name was Sam and his biological mother's name was Molly. Thomas was adopted between six weeks and six months later by Rex and Auleva Thomas, and as an adult became a well-known advocate for adoption, founding the Dave Thomas Foundation for Adoption. After his adoptive mother's death when he was five, his father moved around the country seeking work. Thomas spent some of his early childhood near Kalamazoo, Michigan, with his grandmother, Minnie Sinclair, whom he credited with teaching him the importance of service and treating others well and with respect, lessons that helped him in his future business life.", "title": "Early life" }, { "paragraph_id": 2, "text": "At age 12, Thomas had his first job at Regas Restaurant, a fine dining restaurant in downtown Knoxville, Tennessee, then lost it in a dispute with his boss; decades later, Regas Restaurant installed a large autographed poster of Thomas just inside their entrance, which remained until the business closed in 2010. He vowed never to lose another job. By 15, he was moving with his father and working at the Hobby House Restaurant in Fort Wayne, Indiana. When his father prepared to move again, Thomas decided to stay in Fort Wayne, dropping out of high school to work full-time at the restaurant. Thomas, who considered ending his schooling the greatest mistake of his life, did not graduate from high school until 1993, when he obtained a GED.", "title": "Early life" }, { "paragraph_id": 3, "text": "He subsequently became an education advocate and founded the Dave Thomas Education Center in Coconut Creek, Florida, which offers GED classes to young adults.", "title": "Early life" }, { "paragraph_id": 4, "text": "At the outbreak of the Korean War in 1950, rather than waiting for the draft, he volunteered for the U.S. Army at age 18 to have some choice in assignments. Having food production and service experience, Thomas requested the Cook's and Baker's School at Fort Benning, Georgia. He was sent to West Germany as a mess sergeant and was responsible for the daily meals of 2,000 soldiers, rising to the rank of staff sergeant. After his discharge in 1953, Thomas returned to Fort Wayne and the Hobby House.", "title": "Career" }, { "paragraph_id": 5, "text": "In the mid-1950s, Kentucky Fried Chicken founder Col. Harland Sanders came to Fort Wayne, hoping to find restaurateurs with established businesses to whom he could try to sell KFC franchises. At first, Thomas – who was the head cook at a restaurant – and the Clauss family declined Sanders' offer, but Sanders persisted, and the Clauss family franchised their restaurant with KFC; they also later owned many other KFC franchises in the Midwest. During this time, Thomas worked with Sanders on many projects to make KFC more profitable and give it brand recognition. 
Among other ideas for improvements, Thomas suggested that KFC reduce the number of items on its menu and instead focus on a signature dish; he also proposed that KFC make commercials in which Sanders would personally appear. Thomas was sent by the Clauss family in the mid-1960s to help turn around four of their failing KFC stores in Columbus, Ohio.", "title": "Career" }, { "paragraph_id": 6, "text": "By 1968, Thomas had increased sales in the four fried chicken restaurants so much that he sold his share in them back to Sanders for more than $1.5 million. This experience would prove invaluable to Thomas when he began Wendy's about a year later.", "title": "Career" }, { "paragraph_id": 7, "text": "After serving as a regional director for Kentucky Fried Chicken, Thomas became part of the investor group which founded Arthur Treacher's. His involvement with the new restaurant lasted less than a year before he went on to found Wendy's.", "title": "Career" }, { "paragraph_id": 8, "text": "Thomas opened his first Wendy's in Columbus, Ohio, November 15, 1969. This original restaurant remained operational until March 2, 2007, when it was closed due to lagging sales. Thomas named the restaurant after his eight-year-old daughter Melinda Lou, whose nickname was \"Wendy\", stemming from the child's inability to say her own name at a young age. According to Bio TV, Dave claims that people nicknamed his daughter \"Wenda. Not Wendy, but Wenda. 'I'm going to call it Wendy's Old Fashioned Hamburgers'.\" Before his death in 2002, Thomas admitted regret for naming the franchise after his daughter, saying \"I should've just named it after myself, because it put a lot of pressure on [her].\"", "title": "Career" }, { "paragraph_id": 9, "text": "In 1982, Thomas resigned from his day-to-day operations at Wendy's. However, by 1985, several company business decisions, including an awkward new breakfast menu and loss in brand awareness due to fizzled marketing efforts, led the company's new president to urge Thomas back into a more active role with Wendy's. Thomas began to visit franchises and espouse his hardworking, so-called \"mop-bucket attitude\". In 1989, he took on a significant role as the TV spokesperson in a series of commercials for the brand. Thomas was not a natural actor, and initially, his performances were criticized as stiff and ineffective by advertising critics.", "title": "Career" }, { "paragraph_id": 10, "text": "By 1990, after efforts by Wendy's advertising agency, Backer Spielvolgel Bates, to get humor into the campaign, a decision was made to portray Thomas in a more self-deprecating and folksy manner, which proved much more popular with test audiences. Consumer brand awareness of Wendy's eventually regained levels it had not achieved since octogenarian Clara Peller's highly popular \"Where's the beef?\" campaign of 1984.", "title": "Career" }, { "paragraph_id": 11, "text": "With his natural self-effacing style and his relaxed manner, Thomas quickly became a household name. A company survey during the 1990s, a decade during which Thomas starred in every Wendy's commercial that aired, found that 90% of Americans knew who Thomas was. After more than 800 commercials, it was clear that Thomas played a major role in Wendy's' status as the third most popular burger restaurant in the U.S.", "title": "Career" }, { "paragraph_id": 12, "text": "In 1982, Thomas and a consortium of entrepreneurs created and launched The Wellington School in Upper Arlington, Ohio. 
The group of entrepreneurs spent three years refining plans, raising money, finding a property, and recruiting teachers and students.", "title": "Career" }, { "paragraph_id": 13, "text": "The school opened with 137 students and 19 employees as the first co-ed independent school in the greater Columbus metropolitan area. The first graduating class was in 1989 with 32 students. In 2010, a new 76,000 sq ft (7,100 m) building opened. In 2012, the Little Jags preschool program for 3-year-olds began.", "title": "Career" }, { "paragraph_id": 14, "text": "Thomas was a Christian. He was married for 47 years to Lorraine Thomas and started his family with her in Upper Arlington, Ohio. In addition to Melinda, they had three more daughters – Pam, Lori, and Molly – and a son, Kenny. After Kenny died in 2013, his sisters still continued to own and run multiple Wendy's locations. Thomas founded the chain Sisters Chicken and Biscuits in 1978, named in reference to his other three daughters.", "title": "Personal life" }, { "paragraph_id": 15, "text": "Thomas had been afflicted with a carcinoid neuroendocrine tumor for a decade, before it metastasized to his liver. He died on January 8, 2002, in his home in Fort Lauderdale, Florida, at the age of 69. Thomas was buried in Union Cemetery in Columbus, Ohio. At the time of his death, there were more than 6,000 Wendy's restaurants operating in North America.", "title": "Personal life" }, { "paragraph_id": 16, "text": "In 1979, Thomas received the Horatio Alger Award for his success with his restaurant chain Wendy's, which had reached annual sales of US$1 billion with franchises then.", "title": "Honors and memberships" }, { "paragraph_id": 17, "text": "In 1980, Thomas received the Golden Plate Award of the American Academy of Achievement.", "title": "Honors and memberships" }, { "paragraph_id": 18, "text": "Thomas, realizing that his success as a high school dropout might convince other teenagers to quit school (something he later claimed was a mistake), became a student at Coconut Creek High School. He earned a GED in 1993. Thomas was inducted into the Junior Achievement U.S. Business Hall of Fame in 1999.", "title": "Honors and memberships" }, { "paragraph_id": 19, "text": "Thomas was an honorary Kentucky colonel, as was former boss Harland Sanders.", "title": "Honors and memberships" }, { "paragraph_id": 20, "text": "Thomas was posthumously awarded the Presidential Medal of Freedom in 2003.", "title": "Honors and memberships" }, { "paragraph_id": 21, "text": "Thomas was raised a Master Mason in Sol. D. Bayless Lodge No. 359 of Fort Wayne, Indiana, and became a 32° Mason, N.M.J., on November 16, 1961, in the Scottish Rite Bodies of Fort Wayne. He was unanimously elected to the Scottish Rite's highest honor, the Grand Cross, by The Supreme Council, 33°, in Executive Session on October 3, 1997, in Washington, D.C.", "title": "Honors and memberships" }, { "paragraph_id": 22, "text": "A small triangular block and the surrounding streets and traffic pattern in the Northeast quadrant of Washington, D.C., is unofficially known in the D.C. area as Dave Thomas Circle, due to the longtime presence of a Wendy's franchise and its parking lot on that block.", "title": "Honors and memberships" } ]
Rex David Thomas was an American businessman, philanthropist, and fast-food tycoon. Thomas was the founder and chief executive officer of Wendy's, a fast-food restaurant chain specializing in hamburgers. In this role, Thomas appeared in more than 800 commercial advertisements for the chain from 1989 to 2002, more than any other company founder in television history.
2002-01-09T20:07:17Z
2023-10-10T11:35:10Z
[ "Template:Refimprove", "Template:Short description", "Template:About", "Template:Cite AV media", "Template:Authority control", "Template:Infobox person", "Template:Fact", "Template:Cite news", "Template:Cvt", "Template:ISBN", "Template:Cite episode", "Template:Wendy's", "Template:Reflist", "Template:Cite web", "Template:Find a Grave" ]
https://en.wikipedia.org/wiki/Dave_Thomas_(businessman)
9,101
Device driver
In computing, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used. A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device (drives it). Once the device sends data back to the driver, the driver may invoke routines in the original calling program. Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface. The main purpose of device drivers is to provide abstraction by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write higher-level application code independently of whatever specific hardware the end-user is using. For example, a high-level application for interacting with a serial port may simply have two functions for "send data" and "receive data". At a lower level, a device driver implementing these functions would communicate to the particular serial port controller installed on a user's computer. The commands needed to control a 16550 UART are much different from the commands needed to control an FTDI serial port converter, but each hardware-specific device driver abstracts these details into the same (or similar) software interface, as the sketch below illustrates.
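To make the serial-port example concrete, the following is a minimal userspace sketch of the ops-table pattern that drivers commonly use. It is an illustration only, not any real driver's API: the struct name, the backend names, and the register/transfer comments are assumptions invented for the example. Calling code sees one uniform send/receive interface regardless of which backend is selected.

    #include <stdio.h>
    #include <stddef.h>

    /* The uniform interface the application sees: "send data" / "receive data". */
    struct serial_ops {
        int (*send)(const unsigned char *buf, size_t len);
        int (*recv)(unsigned char *buf, size_t len);
    };

    /* Hypothetical backend for a 16550-style UART: a real one would poke I/O registers. */
    static int uart16550_send(const unsigned char *buf, size_t len)
    {
        (void)buf;
        printf("uart16550: writing %zu bytes via the THR register\n", len);
        return 0;
    }
    static int uart16550_recv(unsigned char *buf, size_t len)
    {
        (void)buf;
        printf("uart16550: reading up to %zu bytes via the RBR register\n", len);
        return 0;
    }

    /* Hypothetical backend for an FTDI USB-serial converter: a real one would
     * issue USB bulk transfers instead of register accesses. */
    static int ftdi_send(const unsigned char *buf, size_t len)
    {
        (void)buf;
        printf("ftdi: sending %zu bytes as a USB bulk-out transfer\n", len);
        return 0;
    }
    static int ftdi_recv(unsigned char *buf, size_t len)
    {
        (void)buf;
        printf("ftdi: requesting %zu bytes via a USB bulk-in transfer\n", len);
        return 0;
    }

    static const struct serial_ops uart16550_driver = { uart16550_send, uart16550_recv };
    static const struct serial_ops ftdi_driver      = { ftdi_send, ftdi_recv };

    int main(int argc, char **argv)
    {
        (void)argv;
        unsigned char msg[] = "hello";
        /* The calling code is identical whichever hardware is present. */
        const struct serial_ops *port = (argc > 1) ? &ftdi_driver : &uart16550_driver;
        port->send(msg, sizeof msg - 1);
        port->recv(msg, sizeof msg - 1);
        return 0;
    }

The same idea underlies real driver models: the operating system defines the table of operations, and each hardware-specific driver supplies its own implementations.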
Writing a device driver requires an in-depth understanding of how the hardware and the software of a given platform function. Because drivers require low-level access to hardware functions in order to operate, drivers typically operate in a highly privileged environment and can cause system operational issues if something goes wrong. In contrast, most user-level software on modern operating systems can be stopped without greatly affecting the rest of the system. Even drivers executing in user mode can crash a system if the device is erroneously programmed. These factors make it more difficult and dangerous to diagnose problems. The task of writing drivers thus usually falls to software engineers or computer engineers who work for hardware-development companies. This is because they have better information than most outsiders about the design of their hardware. Moreover, it was traditionally considered in the hardware manufacturer's interest to guarantee that their clients can use their hardware in an optimum way. Typically, the Logical Device Driver (LDD) is written by the operating system vendor, while the Physical Device Driver (PDD) is implemented by the device vendor. However, in recent years, non-vendors have written numerous device drivers for proprietary devices, mainly for use with free and open source operating systems. In such cases, it is important that the hardware manufacturer provide information on how the device communicates. Although this information can instead be learned by reverse engineering, this is much more difficult with hardware than it is with software. Microsoft has attempted to reduce system instability due to poorly written device drivers by creating a new framework for driver development, called Windows Driver Frameworks (WDF). This includes User-Mode Driver Framework (UMDF), which encourages development of certain types of drivers—primarily those that implement a message-based protocol for communicating with their devices—as user-mode drivers. If such drivers malfunction, they do not cause system instability. The Kernel-Mode Driver Framework (KMDF) model continues to allow development of kernel-mode device drivers, but attempts to provide standard implementations of functions that are known to cause problems, including cancellation of I/O operations, power management, and plug and play device support. Apple has an open-source framework for developing drivers on macOS, called I/O Kit. In Linux environments, programmers can build device drivers as parts of the kernel, separately as loadable modules, or as user-mode drivers (for certain types of devices where kernel interfaces exist, such as for USB devices). Makedev includes a list of the devices in Linux, including ttyS (terminal), lp (parallel port), hd (disk), loop, and sound (these include mixer, sequencer, dsp, and audio). Microsoft Windows .sys files and Linux .ko files can contain loadable device drivers. The advantage of loadable device drivers is that they can be loaded only when necessary and then unloaded, thus saving kernel memory; a minimal module skeleton is sketched below.
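As a concrete illustration of the loadable-module mechanism just described, here is a minimal sketch of a Linux kernel module. It drives no hardware, and the name demo is arbitrary. It uses only the standard module macros; built against the kernel headers with a conventional kbuild Makefile, it can be loaded with insmod and removed with rmmod, logging a message on load and unload.

    #include <linux/init.h>    /* __init / __exit markers */
    #include <linux/module.h>  /* module_init, module_exit, MODULE_* macros */

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Minimal loadable kernel module skeleton");

    /* Called when the module is loaded (e.g. via insmod demo.ko). */
    static int __init demo_init(void)
    {
        pr_info("demo: module loaded\n");
        return 0;  /* returning non-zero would abort loading */
    }

    /* Called when the module is unloaded (e.g. via rmmod demo). */
    static void __exit demo_exit(void)
    {
        pr_info("demo: module unloaded\n");
    }

    module_init(demo_init);
    module_exit(demo_exit);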
Depending on the operating system, device drivers may be permitted to run at various different privilege levels. The choice of which level of privilege the drivers are in is largely decided by the type of kernel an operating system uses. An operating system which uses a monolithic kernel, such as the Linux kernel, will typically run device drivers with the same privilege as all other kernel objects. By contrast, a system designed around a microkernel, such as Minix, will place drivers as processes independent from the kernel but that use it for essential input-output functionalities and to pass messages between user programs and each other. On Windows NT, a system with a hybrid kernel, it is common for device drivers to run in either kernel-mode or user-mode. The most common mechanism for segregating memory into various privilege levels is via protection rings. On many systems, such as those with x86 and ARM processors, switching between rings imposes a performance penalty, a factor that operating system developers and embedded software engineers consider when creating drivers for devices which are preferred to be run with low latency, such as network interface cards. The primary benefit of running a driver in user mode is improved stability, since a poorly written user-mode device driver cannot crash the system by overwriting kernel memory. Because of the diversity of modern hardware and operating systems, drivers operate in many different environments. Drivers may interface with: Common levels of abstraction for device drivers include: So choosing and installing the correct device drivers for given hardware is often a key component of computer system configuration. Virtual device drivers represent a particular variant of device drivers. They are used to emulate a hardware device, particularly in virtualization environments, for example when a DOS program is run on a Microsoft Windows computer or when a guest operating system is run on, for example, a Xen host. Instead of enabling the guest operating system to dialog with hardware, virtual device drivers take the opposite role and emulate a piece of hardware, so that the guest operating system and its drivers running inside a virtual machine can have the illusion of accessing real hardware. Attempts by the guest operating system to access the hardware are routed to the virtual device driver in the host operating system as e.g., function calls. The virtual device driver can also send simulated processor-level events like interrupts into the virtual machine. Virtual devices may also operate in a non-virtualized environment. For example, a virtual network adapter is used with a virtual private network, while a virtual disk device is used with iSCSI. A good example of virtual device drivers is Daemon Tools. There are several variants of virtual device drivers, such as VxDs, VLMs, and VDDs. Solaris descriptions of commonly used device drivers: A device on the PCI bus or USB is identified by two IDs which consist of 4 hexadecimal numbers each. The vendor ID identifies the vendor of the device. The device ID identifies a specific device from that manufacturer/vendor. A PCI device often has an ID pair for the main chip of the device, and also a subsystem ID pair which identifies the vendor, which may be different from the chip manufacturer (a sketch of how a driver declares the IDs it handles appears at the end of this section). Devices often have a large number of diverse and customized device drivers running in their operating system (OS) kernel and often contain various bugs and vulnerabilities, making them a target for exploits. Bring Your Own Vulnerable Driver (BYOVD) uses signed, old drivers that contain flaws that allow hackers to insert malicious code into the kernel. There is a lack of effective kernel vulnerability detection tools, especially for closed-source OSes such as Microsoft Windows, where the source code of the device drivers is mostly not public (open source) and the drivers often also have many privileges. Such vulnerabilities also exist in drivers in laptops, drivers for WiFi and Bluetooth, gaming/graphics drivers, and drivers in printers. A group of security researchers considers the lack of isolation as one of the main factors undermining kernel security, and published an isolation framework to protect operating system kernels, primarily the monolithic Linux kernel which, according to them, gets ~80,000 commits/year to its drivers. An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.
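Returning to the identifier scheme described above, the following sketch shows the conventional way a Linux PCI driver declares which vendor/device ID pairs it handles, so that the kernel can match a detected device to the driver. The skeleton is illustrative: the IDs shown (vendor 0x8086 is Intel) are examples only, and a real driver's probe routine would map registers and register the device with a subsystem rather than merely enabling it.

    #include <linux/module.h>
    #include <linux/pci.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Illustrative PCI ID match-table skeleton");

    /* Vendor/device ID pairs this driver claims; each ID is 4 hex digits. */
    static const struct pci_device_id demo_ids[] = {
        { PCI_DEVICE(0x8086, 0x10d3) },  /* vendor 8086, device 10d3 (example) */
        { }                              /* terminating entry */
    };
    MODULE_DEVICE_TABLE(pci, demo_ids); /* lets udev/modprobe auto-load on match */

    /* Called by the PCI core when a device matching demo_ids is found. */
    static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
    {
        dev_info(&pdev->dev, "matched %04x:%04x\n", id->vendor, id->device);
        return pci_enable_device(pdev);
    }

    static void demo_remove(struct pci_dev *pdev)
    {
        pci_disable_device(pdev);
    }

    static struct pci_driver demo_driver = {
        .name     = "demo_pci",
        .id_table = demo_ids,
        .probe    = demo_probe,
        .remove   = demo_remove,
    };
    module_pci_driver(demo_driver); /* generates the module init/exit boilerplate */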
[ { "paragraph_id": 0, "text": "In computing, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used.", "title": "" }, { "paragraph_id": 1, "text": "A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device (drives it). Once the device sends data back to the driver, the driver may invoke routines in the original calling program.", "title": "" }, { "paragraph_id": 2, "text": "Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.", "title": "" }, { "paragraph_id": 3, "text": "The main purpose of device drivers is to provide abstraction by acting as a translator between a hardware device and the applications or operating systems that use it. Programmers can write higher-level application code independently of whatever specific hardware the end-user is using. For example, a high-level application for interacting with a serial port may simply have two functions for \"send data\" and \"receive data\". At a lower level, a device driver implementing these functions would communicate to the particular serial port controller installed on a user's computer. The commands needed to control a 16550 UART are much different from the commands needed to control an FTDI serial port converter, but each hardware-specific device driver abstracts these details into the same (or similar) software interface.", "title": "Purpose" }, { "paragraph_id": 4, "text": "Writing a device driver requires an in-depth understanding of how the hardware and the software works for a given platform function. Because drivers require low-level access to hardware functions in order to operate, drivers typically operate in a highly privileged environment and can cause system operational issues if something goes wrong. In contrast, most user-level software on modern operating systems can be stopped without greatly affecting the rest of the system. Even drivers executing in user mode can crash a system if the device is erroneously programmed. These factors make it more difficult and dangerous to diagnose problems.", "title": "Development" }, { "paragraph_id": 5, "text": "The task of writing drivers thus usually falls to software engineers or computer engineers who work for hardware-development companies. This is because they have better information than most outsiders about the design of their hardware. Moreover, it was traditionally considered in the hardware manufacturer's interest to guarantee that their clients can use their hardware in an optimum way. Typically, the Logical Device Driver (LDD) is written by the operating system vendor, while the Physical Device Driver (PDD) is implemented by the device vendor. However, in recent years, non-vendors have written numerous device drivers for proprietary devices, mainly for use with free and open source operating systems. In such cases, it is important that the hardware manufacturer provide information on how the device communicates. 
Although this information can instead be learned by reverse engineering, this is much more difficult with hardware than it is with software.", "title": "Development" }, { "paragraph_id": 6, "text": "Microsoft has attempted to reduce system instability due to poorly written device drivers by creating a new framework for driver development, called Windows Driver Frameworks (WDF). This includes User-Mode Driver Framework (UMDF) that encourages development of certain types of drivers—primarily those that implement a message-based protocol for communicating with their devices—as user-mode drivers. If such drivers malfunction, they do not cause system instability. The Kernel-Mode Driver Framework (KMDF) model continues to allow development of kernel-mode device drivers, but attempts to provide standard implementations of functions that are known to cause problems, including cancellation of I/O operations, power management, and plug and play device support.", "title": "Development" }, { "paragraph_id": 7, "text": "Apple has an open-source framework for developing drivers on macOS, called I/O Kit.", "title": "Development" }, { "paragraph_id": 8, "text": "In Linux environments, programmers can build device drivers as parts of the kernel, separately as loadable modules, or as user-mode drivers (for certain types of devices where kernel interfaces exist, such as for USB devices). Makedev includes a list of the devices in Linux, including ttyS (terminal), lp (parallel port), hd (disk), loop, and sound (these include mixer, sequencer, dsp, and audio).", "title": "Development" }, { "paragraph_id": 9, "text": "Microsoft Windows .sys files and Linux .ko files can contain loadable device drivers. The advantage of loadable device drivers is that they can be loaded only when necessary and then unloaded, thus saving kernel memory.", "title": "Development" }, { "paragraph_id": 10, "text": "Depending on the operating system, device drivers may be permitted to run at various different privilege levels. The choice of which level of privilege the drivers are in is largely decided by the type of kernel an operating system uses. An operating system which uses a monolithic kernel, such as the Linux kernel, will typically run device drivers with the same privilege as all other kernel objects. By contrast, a system designed around microkernel , such as Minix, will place drivers as processes independent from the kernel but that use the it for essential input-output functionalities and to pass messages between user programs and each other. On Windows NT, a system with a hybrid kernel, it is common for device drivers to run in either kernel-mode or user-mode.", "title": "Privilege levels" }, { "paragraph_id": 11, "text": "The most common mechanism for segregating memory into various privilege levels is via protection rings. On many systems, such as those with x86 and ARM processors, switching between rings imposes a performance penalty, a factor that operating system developers and embedded software engineers consider when creating drivers for devices which are preferred to be run with low latency, such as network interface cards. The primary benefit of running a driver in user mode is improved stability, since a poorly written user-mode device driver cannot crash the system by overwriting kernel memory.", "title": "Privilege levels" }, { "paragraph_id": 12, "text": "Because of the diversity of modern hardware and operating systems, drivers operate in many different environments. 
Drivers may interface with:", "title": "Applications" }, { "paragraph_id": 13, "text": "Common levels of abstraction for device drivers include:", "title": "Applications" }, { "paragraph_id": 14, "text": "So choosing and installing the correct device drivers for given hardware is often a key component of computer system configuration.", "title": "Applications" }, { "paragraph_id": 15, "text": "Virtual device drivers represent a particular variant of device drivers. They are used to emulate a hardware device, particularly in virtualization environments, for example when a DOS program is run on a Microsoft Windows computer or when a guest operating system is run on, for example, a Xen host. Instead of enabling the guest operating system to dialog with hardware, virtual device drivers take the opposite role and emulates a piece of hardware, so that the guest operating system and its drivers running inside a virtual machine can have the illusion of accessing real hardware. Attempts by the guest operating system to access the hardware are routed to the virtual device driver in the host operating system as e.g., function calls. The virtual device driver can also send simulated processor-level events like interrupts into the virtual machine.", "title": "Virtual device drivers" }, { "paragraph_id": 16, "text": "Virtual devices may also operate in a non-virtualized environment. For example, a virtual network adapter is used with a virtual private network, while a virtual disk device is used with iSCSI. A good example for virtual device drivers can be Daemon Tools.", "title": "Virtual device drivers" }, { "paragraph_id": 17, "text": "There are several variants of virtual device drivers, such as VxDs, VLMs, and VDDs.", "title": "Virtual device drivers" }, { "paragraph_id": 18, "text": "Solaris descriptions of commonly used device drivers:", "title": "Open source drivers" }, { "paragraph_id": 19, "text": "A device on the PCI bus or USB is identified by two IDs which consist of 4 hexadecimal numbers each. The vendor ID identifies the vendor of the device. The device ID identifies a specific device from that manufacturer/vendor.", "title": "Identifiers" }, { "paragraph_id": 20, "text": "A PCI device has often an ID pair for the main chip of the device, and also a subsystem ID pair which identifies the vendor, which may be different from the chip manufacturer.", "title": "Identifiers" }, { "paragraph_id": 21, "text": "Devices often have a large number of diverse and customized device drivers running in their operating system (OS) kernel and often contain various bugs and vulnerabilities, making them a target for exploits. 
Bring Your Own Vulnerable Driver (BYOVD) uses signed, old drivers that contain flaws that allow hackers to insert malicious code into the kernel.", "title": "Security" }, { "paragraph_id": 22, "text": "There is a lack of effective kernel vulnerability detection tools, especially for closed-source OSes such as Microsoft Windows where the source code of the device drivers is mostly not public (open source) and the drivers often also have many privileges.", "title": "Security" }, { "paragraph_id": 23, "text": "Such vulnerabilities also exist in drivers in laptops, drivers for WiFi and bluetooth, gaming/graphics drivers, and drivers in printers.", "title": "Security" }, { "paragraph_id": 24, "text": "A group of security researchers considers the lack of isolation as one of the main factors undermining kernel security, and published a isolation framework to protect operating system kernels, primarily the monolithic Linux kernel which, according to them, gets ~80,000 commits/year to its drivers.", "title": "Security" }, { "paragraph_id": 25, "text": "An important consideration in the design of a kernel is the support it provides for protection from faults (fault tolerance) and from malicious behaviours (security). These two aspects are usually not clearly distinguished, and the adoption of this distinction in the kernel design leads to the rejection of a hierarchical structure for protection.", "title": "Security" } ]
In computing, a device driver is a computer program that operates or controls a particular type of device that is attached to a computer or automaton. A driver provides a software interface to hardware devices, enabling operating systems and other computer programs to access hardware functions without needing to know precise details about the hardware being used. A driver communicates with the device through the computer bus or communications subsystem to which the hardware connects. When a calling program invokes a routine in the driver, the driver issues commands to the device. Once the device sends data back to the driver, the driver may invoke routines in the original calling program. Drivers are hardware dependent and operating-system-specific. They usually provide the interrupt handling required for any necessary asynchronous time-dependent hardware interface.
2002-01-10T21:12:37Z
2023-12-30T16:22:57Z
[ "Template:About", "Template:Cite web", "Template:Cite news", "Template:As of", "Template:Anchor", "Template:Excerpt", "Template:Authority control", "Template:Short description", "Template:Div col", "Template:Cite book", "Template:Dead link", "Template:Operating systems", "Template:OS", "Template:Div col end", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Device_driver
9,103
Dimona
Dimona (Hebrew: דִּימוֹנָה, Arabic: ديمونا) is an Israeli city in the Negev desert, 30 kilometres (19 mi) to the south-east of Beersheba and 35 kilometres (22 mi) west of the Dead Sea above the Arava valley in the Southern District of Israel. In 2021 its population was 35,892. The Shimon Peres Negev Nuclear Research Center, colloquially known as the Dimona Reactor, is located 13 kilometres (8.1 mi) southeast of the city. The Negev Naming Committee chose the name based upon that of a biblical town, mentioned in Joshua 15:21-22, on the basis that "the sound of this name had been preserved in the Arabic name Harabat Umm Dumna." Dimona was one of the development towns created in the 1950s under the leadership of Israel's first Prime Minister, David Ben-Gurion. Dimona itself was conceived in 1953. The location chosen was close to the Dead Sea Works. It was established in 1955. The first residents were Jewish immigrants from North Africa, with an initial 36 families settling there. Its population in 1955 was about 300. The North African immigrants also constructed the city's houses. The population was composed mainly of North African, particularly Moroccan immigrants, though immigrants from Yemen and Eastern Europe also arrived, as did Bene Israel immigrants from India. When the Israeli nuclear program began in 1958, a location not far from the city was chosen for the Negev Nuclear Research Center due to its relative isolation in the desert and availability of housing. In the late 1950s and early 1960s, immigrants from Eastern Europe arrived. A textile factory was opened in 1958. That same year, Dimona became a local council. In 1961, it had a population of 5,000. The emblem of Dimona (as a local council), adopted 2 March 1961, appeared on a stamp issued on 24 March 1965. Dimona was declared a city in 1969. In 1971, it had a population of 23,700. In spite of a gradual decrease during the 1980s, the city's population began to grow once again in the 1990s when it took in immigrants from the former Soviet Union and Ethiopia. Currently, Dimona is the third largest city in the Negev, with a population of almost 34,000. Due to projected rapid population growth in the Negev, the city is expected to triple in size by 2025. Dimona is described as "mini-India" by many for its 7,500-strong Indian Jewish community. It is also home to Israel's Black Hebrew community, formerly governed by its founder and spiritual leader, Ben Ammi Ben-Israel, now deceased. The Black Hebrews number about 3,000 in Dimona, with additional families in Arad, Mitzpe Ramon and the Tiberias area. Their official status in Israel was an ongoing issue for many years, but in May 1990, the issue was resolved with the issuing first of B/1 visas and, a year later, of temporary residency. That status was extended until August 2003, when the Israeli Ministry of Interior granted permanent residency. In the early 1980s, textile plants, such as Dimona Textiles Ltd., dominated the industrial landscape. Many plants have since closed. Dimona Silica Industries Ltd. manufactures precipitated silica and calcium carbonate fillers. About a third of the city's population works in industrial workplaces (chemical plants near the Dead Sea like the Dead Sea Works, high-tech companies and textile shops), and another third in the area of services. Due to the introduction of new technologies, many workers have been made redundant in recent years, creating a total unemployment rate of about 10%.
Dimona has taken part in Israel's solar transformation. The Rotem Industrial Complex outside of the city has dozens of solar mirrors that focus the sun's rays on a tower that in turn heats a water boiler to create steam, turning a turbine to create electricity. Luz II, Ltd. plans to use the solar array to test new technology for the three new solar plants to be built in California for Pacific Gas and Electric Company. Dimona is located in the Negev Desert. The city stands at an elevation of around 550–600 metres (1,800–1,970 ft) above sea level. Dimona has a semi-arid climate (Köppen climate classification: BSh). The average annual temperature is 18.5 °C (65.3 °F), and around 213 mm (8.39 in) of precipitation falls annually. In the early 1950s, an extension of the railway to Beersheba was constructed through Dimona and southward, designed for freight traffic. A passenger service began in 2005, after pressure from Dimona's municipality. Dimona Railway Station is located in the southwestern part of the city. The main bus terminal is the Dimona Central Bus Station, with lines to Beersheba, Tel Aviv, Eilat, and nearby towns. Dimona is twinned with:
[ { "paragraph_id": 0, "text": "Dimona (Hebrew: דִּימוֹנָה, Arabic: ديمونا) is an Israeli city in the Negev desert, 30 kilometres (19 mi) to the south-east of Beersheba and 35 kilometres (22 mi) west of the Dead Sea above the Arava valley in the Southern District of Israel. In 2021 its population was 35,892. The Shimon Peres Negev Nuclear Research Center, colloquially known as the Dimona Reactor, is located 13 kilometres (8.1 mi) southeast of the city.", "title": "" }, { "paragraph_id": 1, "text": "The Negev Naming Committee chose the name based upon that of a biblical town, mentioned in Joshua 15:21-22, on the basis that \"the sound of this name had been preserved in the Arabic name Harabat Umm Dumna.\"", "title": "Etymology" }, { "paragraph_id": 2, "text": "Dimona was one of the development towns created in the 1950s under the leadership of Israel's first Prime Minister, David Ben-Gurion. Dimona itself was conceived in 1953. The location chosen was close to the Dead Sea Works. It was established in 1955. The first residents were Jewish immigrants from North Africa, with an initial 36 families being the first to settle there. Its population in 1955 was about 300. The North African immigrants also constructed the city's houses. The population was composed mainly of North African, particularly Moroccan immigrants, though immigrants from Yemen and Eastern Europe also arrived, as did Bene Israel immigrants from India.", "title": "History" }, { "paragraph_id": 3, "text": "When the Israeli nuclear program began in 1958, a location not far from the city was chosen for the Negev Nuclear Research Center due to its relative isolation in the desert and availability of housing. In the late 1950s and early 1960s, immigrants from Eastern Europe arrived. A textile factory was opened in 1958. That same year, Dimona became a local council. In 1961, it had a population of 5,000. The emblem of Dimona (as a local council), adopted 2 March 1961, appeared on a stamp issued on 24 March 1965. Dimona was declared a city in 1969. In 1971, it had a population of 23,700.", "title": "History" }, { "paragraph_id": 4, "text": "In spite of a gradual decrease during the 1980s, the city's population began to grow once again in the 1990s when it took in immigrants from the former Soviet Union and Ethiopia. Currently, Dimona is the third largest city in the Negev, with the population of almost 34,000. Due to projected rapid population growth in the Negev, the city is expected to triple in size by 2025.", "title": "History" }, { "paragraph_id": 5, "text": "Dimona is described as \"mini-India\" by many for its 7,500-strong Indian Jewish community. It is also home to Israel's Black Hebrew community, formerly governed by its founder and spiritual leader, Ben Ammi Ben-Israel, now deceased. The Black Hebrews number about 3,000 in Dimona, with additional families in Arad, Mitzpe Ramon and the Tiberias area. Their official status in Israel was an ongoing issue for many years, but in May 1990, the issue was resolved with the issuing of first B/1 visas, and a year later, issuing of temporary residency. Status was extended to August 2003, when the Israeli Ministry of Interior granted permanent residency.", "title": "Demography" }, { "paragraph_id": 6, "text": "In the early 1980s, textile plants, such as Dimona Textiles Ltd., dominated the industrial landscape. Many plants have since closed. Dimona Silica Industries Ltd. manufactures precipitated silica and calcium carbonate fillers. 
About a third of the city's population works in industrial workplaces (chemical plants near the Dead Sea like the Dead Sea Works, high-tech companies and textile shops), and another third in the area of services. Due to the introduction of new technologies, many workers have been made redundant in the recent years, creating a total unemployment rate of about 10%. Dimona has taken part of Israel's solar transformation. The Rotem Industrial Complex outside of the city has dozens of solar mirrors that focus the sun's rays on a tower that in turn heats a water boiler to create steam, turning a turbine to create electricity. Luz II, Ltd. plans to use the solar array to test new technology for the three new solar plants to be built in California for Pacific Gas and Electric Company.", "title": "Economy" }, { "paragraph_id": 7, "text": "Dimona is located in the Negev Desert. The city stands at an elevation of around 550–600 metres (1,800–1,970 ft) above sea level.", "title": "Geography and climate" }, { "paragraph_id": 8, "text": "Dimona has a semi-arid climate (Köppen climate classification: BSh). The average annual temperature is 18.5 °C (65.3 °F), and around 213 mm (8.39 in) of precipitation falls annually.", "title": "Geography and climate" }, { "paragraph_id": 9, "text": "In the early 1950s, an extension to Dimona and south was constructed from the Railway to Beersheba, designed for freight traffic. A passenger service began in 2005, after pressure from Dimona's municipality. Dimona Railway Station is located in the southwestern part of the city. The main bus terminal is the Dimona Central Bus Station, with lines to Beersheba, Tel Aviv, Eilat, and nearby towns.", "title": "Transportation" }, { "paragraph_id": 10, "text": "Dimona is twinned with:", "title": "Twin towns" } ]
Dimona is an Israeli city in the Negev desert, 30 kilometres (19 mi) to the south-east of Beersheba and 35 kilometres (22 mi) west of the Dead Sea above the Arava valley in the Southern District of Israel. In 2021 its population was 35,892. The Shimon Peres Negev Nuclear Research Center, colloquially known as the Dimona Reactor, is located 13 kilometres (8.1 mi) southeast of the city.
2002-01-11T12:30:11Z
2023-11-08T04:59:31Z
[ "Template:Lang-ar", "Template:Convert", "Template:Israel populations", "Template:Flagicon", "Template:ISBN", "Template:Cite book", "Template:Commons category", "Template:Weather box", "Template:South District (Israel)", "Template:Authority control", "Template:Reflist", "Template:Infobox settlement", "Template:Lang-he", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Dimona
9,105
DC Comics
DC Comics, Inc. (doing business as DC) is an American comic book publisher and the flagship unit of DC Entertainment, a subsidiary of Warner Bros. Discovery. DC Comics is one of the largest and oldest American comic book companies, with its first comic under the DC banner published in 1937. The majority of its publications take place within the fictional DC Universe and feature numerous culturally iconic heroic characters, such as Superman, Batman, Wonder Woman, Green Lantern, the Flash, and Aquaman; as well as famous fictional teams including the Justice League, the Justice Society of America, the Teen Titans, and the Suicide Squad. The universe also features an assortment of well-known supervillains such as the Joker, Lex Luthor, Deathstroke, the Reverse-Flash, Brainiac, and Darkseid. The company has published non-DC Universe-related material, including Watchmen, V for Vendetta, Fables and many titles under its alternative imprint Vertigo and now DC Black Label. Originally in Manhattan at 432 Fourth Avenue, the DC Comics offices have been located at 480 and later 575 Lexington Avenue; 909 Third Avenue; 75 Rockefeller Plaza; 666 Fifth Avenue; and 1325 Avenue of the Americas. DC had its headquarters at 1700 Broadway, Midtown Manhattan, New York City, but DC Entertainment relocated its headquarters to Burbank, California in April 2015. Penguin Random House Publisher Services distributes DC Comics' books to the bookstore market, while Diamond Comic Distributors supplied the comics shop direct market until June 2020, when Lunar Distribution and UCS Comic Distributors, who already dominated direct market distribution on account of the disruption to Diamond that resulted from the COVID-19 pandemic, replaced Diamond to distribute to that market. DC Comics and its longtime major competitor Marvel Comics (acquired in 2009 by The Walt Disney Company, Warner Bros. Discovery's main competitor) together shared approximately 70% of the American comic book market in 2017, though this number may give a distorted view since graphic novels are excluded. With the sales of all books included, DC is the second biggest publisher, after Viz Media, and Marvel is third. Entrepreneur Major Malcolm Wheeler-Nicholson founded National Allied Publications, an American comic book publishing company, in 1935. The company's first publication was the tabloid-sized New Fun: The Big Comic Magazine No. 1 (the first of a comic series later called More Fun Comics), with a cover date of February 1935. Unlike many comic book series before it, it was an anthology title consisting essentially of original stories rather than reprints of newspaper strips. While DC Comics is known in modern times for its superhero comics, the genres in the first anthology titles consisted of funnies, Western comics and adventure-related stories. The character Doctor Occult, created by Jerry Siegel and Joe Shuster in December 1935 with issue No. 6 of New Fun Comics, is considered the earliest recurring superhero created by DC who is still in use. The company created a second recurring title, New Comics No. 1, released in December 1935, which was the start of the long-running Adventure Comics series, likewise an anthology title. Wheeler-Nicholson's next and final title, Detective Comics, advertised with a cover illustration dated December 1936, eventually premiered three months late with a March 1937 cover date.
The themed anthology, which originally revolved around fictional detective stories, became in modern times the longest-running ongoing comic series. A notable debut in the first issue was Slam Bradley, created in a collaboration between Malcolm Wheeler-Nicholson, Jerry Siegel and Joe Shuster. In 1937, in debt to printing-plant owner and magazine distributor Harry Donenfeld — who also published pulp magazines and operated as a principal in the magazine distributorship Independent News — Wheeler-Nicholson had to take Donenfeld on as a partner to publish Detective Comics No. 1. Detective Comics, Inc. (which would help inspire the abbreviation DC) was formed, with Wheeler-Nicholson and Jack S. Liebowitz, Donenfeld's accountant, listed as owners. Major Wheeler-Nicholson remained for a year, but cash-flow problems continued, and he was forced out. Shortly afterwards, Detective Comics, Inc. purchased the remains of National Allied, also known as Nicholson Publishing, at a bankruptcy auction. Meanwhile, Max Gaines formed the sister company All-American Publications in 1939. Detective Comics, Inc. soon launched a new anthology title, entitled Action Comics. Issue No. 1, cover dated June 1938, first featured characters such as Superman by Siegel and Shuster, Zatara by Fred Guardineer and Tex Thompson by Ken Finch and Bernard Baily. It is considered the first comic book to feature the new character archetype, soon known as the "superhero", and was a sales hit that ushered in a new age of comic books, with credit going to the first appearance of Superman, featured both on the cover and within the issue. It is now one of the most expensive and valuable comic book issues of all time. The issue's lead tale, starring Superman, was also the first superhero origin story, revealing the unnamed planet, later known as Krypton, from which he is said to come. The issue also introduced Lois Lane, the first essential supporting character and one of the earliest essential female characters in comics, as Superman's first depicted romantic interest. The Green Hornet-inspired character known as the Crimson Avenger, by Jim Chambers, was featured in Detective Comics No. 20 (October 1938). The character holds the distinction of being the first masked vigilante published by DC. An unnamed "office boy", later retconned as the first appearance of Jimmy Olsen, appeared in the Superman story of Action Comics No. 6 (November 1938) by Siegel and Shuster. Starting in 1939, Siegel and Shuster's Superman became the first comic-derived character to appear outside of comic magazines, in a newspaper strip of his own, which first introduced Superman's biological parents, Jor-El and Lara. All-American Publications' first comic series, All-American Comics, was published in April 1939. The series Detective Comics made history when issue No. 27 (March 1939) first featured Batman, by Bob Kane and Bill Finger, following requests for more superhero titles. Batman was depicted as a masked vigilante wearing a caped suit known as the Batsuit, and driving a car that would later be referred to as the Batmobile. Also within the Batman story was the supporting character James Gordon, police commissioner of what would later be the Gotham City Police Department.
Despite being a parody, Ma Hunkel, who first appeared in the "Scribbly" stories in All-American Comics No. 3 (June 1939), was the earliest female character introduced by All-American Publications who would later become a superhero, the Red Tornado (though disguised as a male). Another important Batman debut was the introduction of the fictional mansion known as Wayne Manor, first seen in Detective Comics No. 28 (June 1939). The series Adventure Comics would eventually follow in the footsteps of Action Comics and Detective Comics, featuring a new recurring superhero. The superhero called Sandman first appeared in issue No. 40 (cover date: July 1939). Action Comics No. 13 (June 1939) introduced the first recurring Superman enemy, the Ultra-Humanite, created by Siegel and Shuster and commonly cited as one of the earliest supervillains in comic books. The character Superman had another breakthrough when he was given his own comic book, which was unheard of at the time. The first issue, introduced in June 1939, directly introduced Superman's adoptive parents, Jonathan and Martha Kent, by Siegel and Shuster. Detective Comics No. 29 (July 1939) introduced Batman's utility belt, by Gardner Fox. Outside of DC's publishing, Fox Feature Syndicate introduced the Blue Beetle in August 1939, a character later integrated into DC. Fictional cities would be a common theme of DC. The first revealed city was Superman's home city, Metropolis, which was originally named in Action Comics No. 16 in September 1939. Detective Comics No. 31 in September 1939, by Gardner Fox, Bob Kane and Sheldon Moldoff, introduced a romantic interest of Batman named Julie Madison, the weapon known as the Batarang that Batman commonly uses, and the fictional aircraft called the Batplane. Batman's origin was first shown in Detective Comics No. 33 (November 1939), depicting the deaths of Thomas Wayne and Martha Wayne at the hands of a mugger. The origin story has remained crucial to the fictional character ever since its inception. The Daily Planet (a common setting of Superman) was first named in a Superman newspaper strip around November 1939. Doll Man was the first superhero from Quality Comics, whose characters DC now owns. Fawcett Comics was formed around 1939 and would become DC's major competitor in the next decade. National Allied Publications soon merged with Detective Comics, Inc., forming National Comics Publications on September 30, 1946. National Comics Publications absorbed an affiliated concern, Max Gaines' and Liebowitz' All-American Publications. In the same year Gaines let Liebowitz buy him out, and kept only Picture Stories from the Bible as the foundation of his own new company, EC Comics. At that point, "Liebowitz promptly orchestrated the merger of All-American and Detective Comics into National Comics... Next he took charge of organizing National Comics, [the self-distributorship] Independent News, and their affiliated firms into a single corporate entity, National Periodical Publications". National Periodical Publications became publicly traded on the stock market in 1961. Despite the official names "National Comics" and "National Periodical Publications", the company began branding itself as "Superman-DC" as early as 1940, and the company became known colloquially as DC Comics for years before the official adoption of that name in 1977.
The company began to move aggressively against what it saw as copyright-violating imitations from other companies, such as Fox Comics' Wonder Man, which (according to court testimony) Fox started as a copy of Superman. This extended to DC suing Fawcett Comics over Captain Marvel, at the time comics' top-selling character (see National Comics Publications, Inc. v. Fawcett Publications, Inc.). Faced with declining sales and the prospect of bankruptcy if it lost, Fawcett capitulated in 1953 and ceased publishing comics. Years later, Fawcett sold the rights for Captain Marvel to DC—which in 1972 revived Captain Marvel in the new title Shazam! featuring artwork by his creator, C. C. Beck. In the meantime, the abandoned trademark had been seized by Marvel Comics in 1967, with the creation of their Captain Marvel, forbidding the DC comic itself to be called that. While Captain Marvel did not recapture his old popularity, he later appeared in a Saturday morning live action TV adaptation and gained a prominent place in the mainstream continuity DC calls the DC Universe. When the popularity of superheroes faded in the late 1940s, the company focused on such genres as science fiction, Westerns, humor, and romance. DC also published crime and horror titles, but relatively tame ones, and thus avoided the mid-1950s backlash against such comics. A handful of the most popular superhero titles, including Action Comics and Detective Comics, the medium's two longest-running titles, continued publication. In the mid-1950s, editorial director Irwin Donenfeld and publisher Liebowitz directed editor Julius Schwartz (whose roots lay in the science-fiction book market) to produce a one-shot Flash story in the try-out title Showcase. Instead of reviving the old character, Schwartz had writers Robert Kanigher and John Broome, penciler Carmine Infantino, and inker Joe Kubert create an entirely new super-speedster, updating and modernizing the Flash's civilian identity, costume, and origin with a science-fiction bent. The Flash's reimagining in Showcase No. 4 (October 1956) proved sufficiently popular that it soon led to a similar revamping of the Green Lantern character, the introduction of the modern all-star team Justice League of America (JLA), and many more superheroes, heralding what historians and fans call the Silver Age of Comic Books. National did not reimagine its continuing characters (primarily Superman, Batman, and Wonder Woman), but radically overhauled them. The Superman family of titles, under editor Mort Weisinger, introduced such enduring characters as Supergirl, Bizarro, and Brainiac. The Batman titles, under editor Jack Schiff, introduced the successful Batwoman, Bat-Girl, Ace the Bat-Hound, and Bat-Mite in an attempt to modernize the strip with non-science-fiction elements. Schwartz, together with artist Infantino, then revitalized Batman in what the company promoted as the "New Look", with relatively down-to-Earth stories re-emphasizing Batman as a detective. Meanwhile, editor Kanigher successfully introduced a whole family of Wonder Woman characters having fantastic adventures in a mythological context. Since the 1940s, when Superman, Batman, and many of the company's other heroes began appearing in stories together, DC's characters have inhabited a shared continuity that, decades later, was dubbed the "DC Universe" by fans. With the story "Flash of Two Worlds", in Flash No.
123 (September 1961), editor Schwartz (with writer Gardner Fox and artists Infantino and Joe Giella) introduced a concept that allowed slotting the 1930s and 1940s Golden Age heroes into this continuity via the explanation that they lived on an other-dimensional "Earth 2", as opposed to the modern heroes' "Earth 1"—in the process creating the foundation for what was later called the DC Multiverse. DC's introduction of the reimagined superheroes did not go unnoticed by other comics companies. In 1961, with DC's JLA as the specific spur, Marvel Comics writer-editor Stan Lee and artist Jack Kirby ushered in the sub-Silver Age "Marvel Age" of comics with the debut issue of The Fantastic Four. Reportedly, DC ignored Marvel's initial success with this editorial change until Marvel's consistently strengthening sales, which also benefited Independent News' business as Marvel's distributor, made that impossible. That commercial situation especially applied to Marvel's superior sell-through percentages, typically 70% to DC's roughly 50%: once returns from the distributors were calculated, DC's publications were barely making a profit while Marvel was making an excellent one. However, the senior DC staff were reportedly at a loss at this time to understand how this small publishing house was achieving this increasingly threatening commercial strength. For instance, when Marvel's product was examined in a meeting, Marvel's emphasis on more sophisticated character-based narrative and artist-driven visual storytelling was apparently ignored in favor of self-deluding guesses at the brand's popularity, which included superficial explanations such as the presence of the color red or word balloons on the cover, or the notion that the perceived crudeness of the interior art was somehow more appealing to readers. When Lee learned about DC's subsequent experimental attempts to imitate these perceived details, he amused himself by arranging for Marvel's publications to directly defy those assumptions, frustrating the competition as sales strengthened further. However, this ignorance of Marvel's true appeal did not extend to all of the writing talent during this period, some of whom attempted to emulate Marvel's narrative approach. For instance, there was the Doom Patrol series by Arnold Drake, a writer who had previously warned the management of the new rival's strength: a superhero team of outsiders who resented their freakish powers, which Drake later speculated was plagiarized by Stan Lee to create The X-Men. There was also the young Jim Shooter, who purposely emulated Marvel's writing when he wrote for DC after much study of both companies' styles, such as for the Legion of Super-Heroes feature. In 1966, National Periodical Publications set up its own television arm, led by Allen Ducovny, to develop and produce projects for television, with Superman TV Corporation handling television distribution of NPP's TV shows. A 1966 Batman TV show on the ABC network sparked a temporary spike in comic book sales, and a brief fad for superheroes in Saturday morning animation (Filmation created most of DC's initial cartoons) and other media. DC significantly lightened the tone of many DC comics—particularly Batman and Detective Comics—to better complement the "camp" tone of the TV series.
This tone coincided with the famous "Go-Go Checks" cover dress, a black-and-white checkerboard strip that ran across the top of every DC book cover-dated February 1966 through August 1967, a misguided attempt by then-managing editor Irwin Donenfeld to make DC's output "stand out on the newsracks". DC artist Carmine Infantino complained in particular that this visual distinctiveness simply made DC's titles easier for readers to spot and then avoid in favor of Marvel's.

In 1967, Batman artist Infantino (who had designed the popular Silver Age characters Batgirl and the Phantom Stranger) rose from art director to become DC's editorial director. With the growing popularity of upstart rival Marvel Comics threatening to topple DC from its longtime number-one position in the comics industry, he pushed the company to market new and existing titles and characters with more adult sensibilities, aimed at the emerging older audience of superhero fans that had grown out of Marvel's efforts to sell its superhero line to college-aged adults. He also recruited major talents such as ex-Marvel artist and Spider-Man co-creator Steve Ditko and promising newcomers Neal Adams and Denny O'Neil, and replaced some existing DC editors with artist-editors, including Joe Kubert and Dick Giordano, to bring a more artistic critical eye to DC's output.

In 1967, National Periodical Publications was purchased by Kinney National Company, which purchased Warner Bros.-Seven Arts in 1969. Kinney National spun off its non-entertainment assets in 1972 (as National Kinney Corporation) and changed its name to Warner Communications Inc.

In 1970, Jack Kirby moved from Marvel Comics to DC at the end of the Silver Age of Comics, an era in which Kirby's contributions to Marvel had played a large, integral role. As artist Gil Kane described:

Jack was the single most influential figure in the turnaround in Marvel's fortunes from the time he rejoined the company ... It wasn't merely that Jack conceived most of the characters that are being done, but ... Jack's point of view and philosophy of drawing became the governing philosophy of the entire publishing company and, beyond the publishing company, of the entire field ... [Marvel took] Jack and use[d] him as a primer. They would get artists ... and they taught them the ABCs, which amounted to learning Jack Kirby ... Jack was like the Holy Scripture and they simply had to follow him without deviation. That's what was told to me ... It was how they taught everyone to reconcile all those opposing attitudes to one single master point of view.

Given carte blanche to write and illustrate his own stories, Kirby created a handful of thematically linked series he collectively called "The Fourth World". In the existing series Superman's Pal Jimmy Olsen and in his own newly launched series New Gods, Mister Miracle, and The Forever People, Kirby introduced such enduring characters and concepts as arch-villain Darkseid and the other-dimensional realm Apokolips. Furthermore, Kirby intended the stories to be reprinted in collected editions, in a format later known as the trade paperback, which became standard industry practice decades later. While sales were respectable, they did not meet DC management's initially high expectations, and the series also suffered from a lack of comprehension and internal support from Infantino.
By 1973, the "Fourth World" titles had all been cancelled, although Kirby's conceptions soon became integral to the broadening of the DC Universe, especially after the major toy company Kenner Products judged them ideal for its action-figure adaptation of the DC Universe, the Super Powers Collection. Obligated by his contract, Kirby created other, unrelated series for DC, including Kamandi, The Demon, and OMAC, before ultimately returning to Marvel Comics in 1976.

Following the science-fiction innovations of the Silver Age, the comics of the 1970s and 1980s became known as the Bronze Age, as fantasy gave way to more naturalistic and sometimes darker themes. Illegal drug use, banned by the Comics Code Authority, explicitly appeared in comics for the first time in Marvel Comics' story "Green Goblin Reborn!" in The Amazing Spider-Man No. 96 (May 1971). After the Code was updated in response, DC offered a drug-related storyline of its own in writer Dennis O'Neil and artist Neal Adams' Green Lantern, beginning with the story "Snowbirds Don't Fly" in the retitled Green Lantern / Green Arrow No. 85 (September 1971), which depicted Speedy, the teen sidekick of superhero archer Green Arrow, as having become a heroin addict.

Jenette Kahn, a former children's magazine publisher, replaced Infantino as editorial director in January 1976. Her first task, even before being formally hired, was to convince Bill Sarnoff, the head of Warner Publishing, to keep DC as a publishing concern rather than simply managing the licensing of its properties. With that established, DC attempted to compete with the now-surging Marvel by dramatically increasing its output, seeking to win the market by flooding it. This included launching series featuring such new characters as Firestorm and Shade, the Changing Man, as well as an increasing array of non-superhero titles, in an attempt to recapture the pre-Wertham days of post-war comicdom.

In 1977, the company officially changed its name to DC Comics. It had used the brand "Superman-DC" since the 1950s and had been colloquially known as DC Comics for years.

In June 1978, five months before the release of the first Superman movie, Kahn expanded the line further, increasing the number of titles and story pages and raising the price from 35 cents to 50 cents. Most series received eight-page back-up features, while some carried full-length twenty-five-page stories. This was a move the company called the "DC Explosion". The move was not successful, however, and corporate parent Warner dramatically cut back on these largely unsuccessful titles, firing many staffers in what industry watchers dubbed "the DC Implosion". In September 1978, the line was dramatically reduced and standard-size books returned to 17-page stories, at a price of 40 cents that was still higher than before the expansion. By 1980, the books returned to 50 cents with a 25-page story count, the additional story pages replacing house ads.

Seeking new ways to boost market share, the new team of publisher Kahn, vice president Paul Levitz, and managing editor Giordano addressed the issue of talent instability. To that end, following the example of Atlas/Seaboard Comics and such independent companies as Eclipse Comics, DC began to offer royalties in place of the industry-standard work-for-hire agreement, in which creators worked for a flat fee and signed away all rights, giving talent a financial incentive tied to the success of their work.
The implementation of these incentives proved opportune: Marvel Comics' editor-in-chief, Jim Shooter, was alienating much of his company's creative staff with his authoritarian manner, and major talents there, such as Roy Thomas, Gene Colan, Marv Wolfman, and George Pérez, moved to DC.

In addition, emulating the miniseries, television's new format of the era, while addressing the problem of an excessive number of ongoing titles fizzling out within a few issues of their start, DC created the industry concept of the comic book limited series. This format allowed for the deliberate creation of finite storylines within a more flexible package, one that could showcase new creations without forcing talent into unsustainable open-ended commitments. The first such title was World of Krypton in 1979, and its positive results led to subsequent similar titles and, later, more ambitious productions like Camelot 3000 for the direct market in 1982.

These changes in policy shaped the future of the medium as a whole, and in the short term allowed DC to entice creators away from rival Marvel and to encourage stability on individual titles. In November 1980, DC launched the ongoing series The New Teen Titans by writer Marv Wolfman and artist George Pérez, two popular talents with a history of success. Their superhero-team comic, superficially similar to Marvel's ensemble series X-Men but rooted in DC history, earned significant sales, in part thanks to the stability of the creative team: both continued with the title for six full years. In addition, Wolfman and Pérez took advantage of the limited-series option to create a spin-off title, Tales of the New Teen Titans, presenting origin stories of their original characters without breaking the narrative flow of the main series or doubling their workload with another ongoing title.

This successful revitalization of the Silver Age Teen Titans led DC's editors to seek the same for the wider DC Universe. The result, the Wolfman/Pérez 12-issue limited series Crisis on Infinite Earths, gave the company an opportunity to realign its characters and jettison some of their complicated backstories and continuity discrepancies. A companion publication, two volumes entitled The History of the DC Universe, set out the revised history of the major DC characters. Crisis featured many key deaths that shaped the DC Universe for the following decades, and it separated the timeline of DC publications into pre- and post-"Crisis".

Meanwhile, a parallel update had started in the non-superhero and horror titles. Since early 1984, the work of British writer Alan Moore had revitalized the horror series The Saga of the Swamp Thing, and soon numerous British writers, including Neil Gaiman and Grant Morrison, began freelancing for the company. The resulting influx of sophisticated horror-fantasy material led DC to establish the Vertigo mature-readers imprint in 1993, which did not subscribe to the Comics Code Authority.

Two DC limited series, Batman: The Dark Knight Returns by Frank Miller and Watchmen by Moore and artist Dave Gibbons, drew attention in the mainstream press for their dark psychological complexity and promotion of the antihero. These titles helped pave the way for comics to be more widely accepted in literary-criticism circles and to make inroads into the book industry, with collected editions of these series becoming commercially successful trade paperbacks.
The mid-1980s also saw the end of many long-running DC war comics, including series that had been in print since the 1960s. These titles, all with over 100 issues, included Sgt. Rock, G.I. Combat, The Unknown Soldier, and Weird War Tales.

In March 1989, Warner Communications merged with Time Inc., making DC Comics a subsidiary of Time Warner. In June, the first Tim Burton-directed Batman movie was released, and DC began publishing its hardcover series of DC Archive Editions, collections of many of its early, key comics series, featuring rare and expensive stories unseen by many modern fans. Restoration for many of the Archive Editions was handled by Rick Keene, with colour restoration by DC's long-time resident colourist, Bob LeRose. These collections attempted to retroactively credit many of the writers and artists who had worked without much recognition for DC during the early period of comics, when individual credits were few and far between.

The comics industry experienced a brief boom in the early 1990s, thanks to a combination of speculative purchasing (mass purchases of books as collectible items with the intent to resell them at a higher value, since the rising value of older issues was thought to imply that all comics would rise dramatically in price) and several storylines that gained attention from the mainstream media. DC's extended storylines in which Superman was killed, Batman was crippled, and the superhero Green Lantern turned into the supervillain Parallax resulted in dramatically increased sales, but the increases were as temporary as the heroes' replacements. Sales dropped off as the industry went into a major slump, while manufactured "collectables" numbering in the millions replaced quality with quantity, until fans and speculators alike deserted the medium in droves.

DC's Piranha Press and other imprints (including the mature-readers line Vertigo and Helix, a short-lived science fiction imprint) were introduced to facilitate compartmentalized diversification and allow for specialized marketing of individual product lines. They increased the use of non-traditional contractual arrangements, including the dramatic rise of creator-owned projects, leading to a significant increase in critically lauded work (much of it for Vertigo) and the licensing of material from other companies. DC also increased publication of bookstore-friendly formats, including trade paperback collections of individual serial comics as well as original graphic novels.

Another of these imprints was Impact Comics, which licensed and revamped the Archie Comics superheroes from 1991 to 1992. The line's stories took place in their own shared universe.

DC entered into a publishing agreement with Milestone Media that gave DC a line of comics featuring a culturally and racially diverse range of superhero characters. Although the Milestone line ceased publication after a few years, it yielded the popular animated series Static Shock. DC established Paradox Press to publish material such as the large-format Big Book of... series of multi-artist interpretations of individual themes, as well as crime fiction such as the graphic novel Road to Perdition. In 1998, DC purchased WildStorm Comics, Jim Lee's imprint under the Image Comics banner, continuing it for many years as a wholly separate imprint and fictional universe with its own style and audience.
As part of this purchase, DC also began to publish titles under the fledgling WildStorm sub-imprint America's Best Comics (ABC), a line of titles created by Alan Moore that included The League of Extraordinary Gentlemen, Tom Strong, and Promethea. Moore strongly contested this situation, and DC eventually stopped publishing ABC.

In March 2003, DC acquired publishing and merchandising rights to the long-running fantasy series Elfquest, previously self-published by creators Wendy and Richard Pini under their WaRP Graphics publication banner. This series then followed another non-DC title, Tower Comics' T.H.U.N.D.E.R. Agents, into the DC Archive Editions collections. In 2004, DC temporarily acquired the North American publishing rights to graphic novels from the European publishers 2000 AD and Humanoids. It also rebranded its younger-audience titles with the mascot Johnny DC and established the CMX imprint to reprint translated manga. In 2006, CMX took over publication of the webcomic Megatokyo in print form from Dark Horse Comics. DC also took advantage of the demise of Kitchen Sink Press and acquired the rights to much of the work of Will Eisner, such as his The Spirit series and his graphic novels.

In 2004, DC began laying the groundwork for a full continuity-reshuffling sequel to Crisis on Infinite Earths, promising substantial changes to the DC Universe (and side-stepping the 1994 Zero Hour event, which had similarly tried to retcon the history of the DCU). In 2005, the critically lauded Batman Begins film was released; the company also published several limited series establishing increasingly escalating conflicts among DC's heroes, with events climaxing in the Infinite Crisis limited series. Immediately after this event, DC's ongoing series jumped forward a full year in their in-story continuity, as DC launched a weekly series, 52, to gradually fill in the missing time. Concurrently, DC lost the copyright to "Superboy" (while retaining the trademark) when the heirs of Jerry Siegel used a provision of the 1976 revision to the copyright law to regain ownership.

In 2005, DC launched its "All-Star" line (evoking the title of the 1940s publication), designed to feature some of the company's best-known characters in stories that eschewed the long and convoluted continuity of the DC Universe. The line began with All-Star Batman & Robin the Boy Wonder and All-Star Superman; All-Star Wonder Woman and All-Star Batgirl were announced in 2006, but neither had been released or scheduled as of the end of 2009.

By 2007, DC had licensed characters from the Archie Comics imprint Red Circle Comics. They appeared in the Red Circle line, set in the DC Universe, in a series of one-shots followed by a miniseries that led into two ongoing titles, each lasting 10 issues.

In 2011, DC rebooted all of its running titles following the Flashpoint storyline. The reboot, called The New 52, gave new origin stories and costume designs to many of DC's characters.

DC licensed pulp characters including Doc Savage and the Spirit, which it then used, along with some DC heroes, as part of the First Wave comics line, launched in 2010 and lasting through fall 2011.

In May 2011, DC announced it would begin releasing digital versions of its comics on the same day as the paper versions.
On June 1, 2011, DC announced that it would end all ongoing series set in the DC Universe in August and relaunch its comic line with 52 issue #1s, starting with Justice League on August 31 (written by Geoff Johns and drawn by Jim Lee), with the rest to follow over the course of September.

On June 4, 2013, DC unveiled two new digital comic innovations to enhance interactivity: DC² and DC² Multiverse. DC² layers dynamic artwork onto digital comic panels, adding a new level of dimension to digital storytelling, while DC² Multiverse allows readers to determine a specific story outcome by selecting individual characters, storylines, and plot developments while reading the comic, meaning one digital comic can have multiple outcomes. DC² debuted in the digital-first title Batman '66, based on the 1960s television series, and DC² Multiverse in Batman: Arkham Origins, a digital-first title based on the video game of the same name.

In 2014, DC announced an eight-issue miniseries titled Convergence, which began in April 2015.

In 2016, DC announced a line-wide relaunch titled DC Rebirth. The new line launched with an 80-page one-shot titled DC Universe: Rebirth, written by Geoff Johns with art by Gary Frank, Ethan Van Sciver, and others. After that, many new series launched with a twice-monthly release schedule and new creative teams for nearly every title. The relaunch was meant to bring back the legacy and heart many felt had been missing from DC characters since the launch of the New 52. Rebirth brought huge success, both financially and critically.

On February 21, 2020, DC Comics co-publisher Dan DiDio stepped down after ten years in the position. The company gave no reason for the move, nor did it indicate whether it was his decision or the company's, though Bleeding Cool reported that he was fired. The leadership change was the latest event in a company restructuring that had begun the previous month, when several top executives were laid off.

In June 2020, Warner Bros. announced a separate DC-themed online-only convention. Known as DC FanDome, the free "immersive virtual fan experience" was a 24-hour event held on August 22, 2020. The main presentation, entitled "DC FanDome: Hall of Heroes", was held as scheduled on August 22, and the remaining programming was provided through a one-day video-on-demand experience, "DC FanDome: Explore the Multiverse", on September 12.

As Warner Bros. and DC's response to San Diego Comic-Con's cancellation due to the COVID-19 pandemic, the convention featured information about DC-based content including the DC Extended Universe film franchise, the Arrowverse television franchise, comic books, and video games. The convention returned for the virtual premiere of Wonder Woman 1984 and again on October 16, 2021.

In August 2020, roughly one-third of DC's editorial ranks were laid off, including the editor-in-chief, senior story editor, executive editor, and several senior VPs.

In March 2021, DC relaunched its entire line once again under the banner of Infinite Frontier. After the events of the Dark Nights: Death Metal storyline, the DC Multiverse was expanded into a larger "Omniverse" where everything is canon, effectively reversing the changes The New 52 had introduced a decade prior.

Meanwhile, AT&T spun off WarnerMedia, which merged with Discovery, Inc. to form Warner Bros. Discovery; the merger was completed on April 8, 2022.
In January 2023, DC relaunched its line under the banner of Dawn of DC, following the conclusion of Dark Crisis on Infinite Earths and Lazarus Planet. In May of that year, Jim Lee was promoted to President of DC.
[ { "paragraph_id": 0, "text": "DC Comics, Inc. (doing business as DC) is an American comic book publisher and the flagship unit of DC Entertainment, a subsidiary of Warner Bros. Discovery.", "title": "" }, { "paragraph_id": 1, "text": "DC Comics is one of the largest and oldest American comic book companies, with their first comic under the DC banner being published in 1937. The majority of its publications take place within the fictional DC Universe and feature numerous culturally iconic heroic characters, such as Superman, Batman, Wonder Woman, Green Lantern, the Flash, and Aquaman; as well as famous fictional teams including the Justice League, the Justice Society of America, the Teen Titans, and the Suicide Squad. The universe also features an assortment of well-known supervillains such as the Joker, Lex Luthor, Deathstroke, the Reverse-Flash, Brainiac, and Darkseid. The company has published non-DC Universe-related material, including Watchmen, V for Vendetta, Fables and many titles under their alternative imprint Vertigo and now DC Black Label.", "title": "" }, { "paragraph_id": 2, "text": "Originally in Manhattan at 432 Fourth Avenue, the DC Comics offices have been located at 480 and later 575 Lexington Avenue; 909 Third Avenue; 75 Rockefeller Plaza; 666 Fifth Avenue; and 1325 Avenue of the Americas. DC had its headquarters at 1700 Broadway, Midtown Manhattan, New York City, but DC Entertainment relocated its headquarters to Burbank, California in April 2015.", "title": "" }, { "paragraph_id": 3, "text": "Penguin Random House Publisher Services distributes DC Comics' books to the bookstore market, while Diamond Comic Distributors supplied the comics shop direct market until June 2020, when Lunar Distribution and UCS Comic Distributors, who already dominated direct market distribution on account of the disruption to Diamond that resulted from the COVID-19 pandemic, replaced Diamond to distribute to that market.", "title": "" }, { "paragraph_id": 4, "text": "DC Comics and its longtime major competitor Marvel Comics (acquired in 2009 by The Walt Disney Company, Warner Bros. Discovery's main competitor) together shared approximately 70% of the American comic book market in 2017, though this number may give a distorted view since graphic novels are excluded. With the sales of all books included, DC is the second biggest publisher, after Viz Media, and Marvel is third.", "title": "" }, { "paragraph_id": 5, "text": "Entrepreneur Major Malcolm Wheeler-Nicholson founded National Allied Publications in 1935 intended as an American comic book publishing company. The first publishing of the company debuted with the tabloid-sized New Fun: The Big Comic Magazine #1 (the first of a comic series later called More Fun Comics) with a cover date of February 1935. It was an anthology title essentially for original stories not reprinted from newspaper strips, unlike many comic book series before it. While superhero comics are what DC Comics is known for throughout modern times, the genres in the first anthology titles consisted of funnies, Western comics and adventure-related stories. The character Doctor Occult, created by Jerry Siegel and Joe Shuster in December 1935 with issue No. 6 of New Fun Comics, is considered the earliest recurring superhero created by DC who is still used. The company created a second recurring title called New Comics No. 
1, released in December 1935, which was the start of the long-running Adventure Comics series featuring many anthology titles as well.", "title": "History" }, { "paragraph_id": 6, "text": "Wheeler-Nicholson's next and final title, Detective Comics, advertised with a cover illustration dated December 1936, eventually premiered three months late with a March 1937 cover date. The themed anthology that revolved originally around fictional detective stories became in modern times the longest-running ongoing comic series. A notable debut in the first issue was Slam Bradley, created in a collaboration between Malcolm Wheeler-Nicholson, Jerry Siegel and Joe Shuster. In 1937, in debt to printing-plant owner and magazine distributor Harry Donenfeld — who also published pulp magazines and operated as a principal in the magazine distributorship Independent News — Wheeler-Nicholson had to take Donenfeld on as a partner to publish Detective Comics No. 1. Detective Comics, Inc. (which would help inspire the abbreviation DC) was formed, with Wheeler-Nicholson and Jack S. Liebowitz, Donenfeld's accountant, listed as owners. Major Wheeler-Nicholson remained for a year, but cash-flow problems continued, and he was forced out. Shortly afterwards, Detective Comics, Inc. purchased the remains of National Allied, also known as Nicholson Publishing, at a bankruptcy auction.", "title": "History" }, { "paragraph_id": 7, "text": "Meanwhile, Max Gaines formed the sister company All-American Publications in 1939. Detective Comics, Inc. soon launched a new anthology title, entitled Action Comics. Issue#1, cover dated June 1938, first featured characters such as Superman by Siegel and Shuster, Zatara by Fred Guardineer and Tex Thompson by Ken Finch and Bernard Baily. It is considered to be the first comic book to feature the new character archetype, soon known as \"superheroes\", and was a sales hit bringing to life a new age of comic books, with the credit going to the first appearance of Superman both being featured on the cover and within the issue. It is now one of the most expensive and valuable comic book issues of all time. The issue's first featured tale which starred Superman was the first to feature an origin story of superheroes with the reveal of an unnamed planet, later known as Krypton, that he is said to be from. The issue also contained the first essential supporting character and one of the earliest essential female characters in comics with Lois Lane as Superman's first depicted romantic interest. The Green Hornet-inspired character known as the Crimson Avenger by Jim Chamber was featured in Detective Comics No. 20 (October 1938). The character makes a distinction of being the first masked vigilante published by DC. An unnamed \"office boy\" retconned as Jimmy Olsen's first appearance was revealed in Action Comics #6's (November 1938) Superman story by Siegel and Shuster.", "title": "History" }, { "paragraph_id": 8, "text": "Starting in 1939, Siegel and Shuster's Superman would be the first comic-derived character to appear outside of comic magazines and later appear in newspaper strips starring himself, which first introduced Superman's biological parents, Jor-El and Lara. All-American Publications' first comic series called All-American Comics was first published in April 1939. The series Detective Comics would make successful history as first featuring Batman by Bob Kane and Bill Finger in issue No.27 (March 1939) with the request of more superhero titles. 
Batman was depicted as a masked vigilante wearing a caped suit known as the Batsuit, along with riding a car that would later be referred to as the Batmobile. Also within the Batman story was the supporting character, James Gordon, Police commissioner of what later would be Gotham City Police Department. Despite being a parody, All-American Publications introduced the earliest female character who would later be a female superhero called Red Tornado (though disguised as a male) in Ma Hunkel who first appeared in the \"Scribbly\" stories in All-American Comics No. 3 (June 1939). Another important Batman debut was the introduction of the fictional mansion known as Wayne Manor first seen in Detective Comics No. 28 (June 1939). The series Adventure Comics would eventually follow in the footsteps of Action Comics and Detective Comics, featuring a new recurring superhero. The superhero called Sandman was first written in issue No. 40 (cover date: July 1939). Action Comics No. 13 (June 1939) introduced the first recurring Superman enemy referred to as the Ultra-Humanite first introduced by Siegel and Shuster, commonly cited as one of the earliest supervillains in comic books. The character Superman had another breakthrough when he was given his own comic book, which was unheard of at the time. The first issue, introduced in June 1939, helped directly introduce Superman's adoptive parents, Jonathan and Martha Kent, by Siegel and Shuster. Detective Comics #29 (July 1939) introduced the Batman's utility belt by Gardner Fox. Outside of DC's publishing, a character later integrated as DC was introduced by Fox Feature Syndicate named the Blue Beetle released in August 1939. Fictional cities would be a common theme of DC. The first revealed city was Superman's home city, Metropolis, that was originally named in Action Comics No. 16 in September 1939. Detective Comics No. 31 in September 1939 by Gardner Fox, Bob Kane and Sheldon Moldoff introduced a romantic interest of Batman named Julie Madison, the weapon known as the Batarang that Batman commonly uses, and the fictional aircraft called the Batplane. Batman's origin would first be shown in Detective Comics No. 33 (Nov. 1939) first depicting the death of Thomas Wayne and Martha Wayne by a mugger. The origin story would remain crucial for the fictional character since the inception. The Daily Planet (a common setting of Superman) was first named in a Superman newspaper strip around November 1939. The superhero Doll Man was the first superhero by Quality, which DC now owns. Fawcett Comics was formed around 1939 and would become DC's original competitor company in the next decade.", "title": "History" }, { "paragraph_id": 9, "text": "National Allied Publications soon merged with Detective Comics, Inc., forming National Comics Publications on September 30, 1946. National Comics Publications absorbed an affiliated concern, Max Gaines' and Liebowitz' All-American Publications. In the same year Gaines let Liebowitz buy him out, and kept only Picture Stories from the Bible as the foundation of his own new company, EC Comics. At that point, \"Liebowitz promptly orchestrated the merger of All-American and Detective Comics into National Comics... Next he took charge of organizing National Comics, [the self-distributorship] Independent News, and their affiliated firms into a single corporate entity, National Periodical Publications\". 
National Periodical Publications became publicly traded on the stock market in 1961.", "title": "History" }, { "paragraph_id": 10, "text": "Despite the official names \"National Comics\" and \"National Periodical Publications\", the company began branding itself as \"Superman-DC\" as early as 1940, and the company became known colloquially as DC Comics for years before the official adoption of that name in 1977.", "title": "History" }, { "paragraph_id": 11, "text": "The company began to move aggressively against what it saw as copyright-violating imitations from other companies, such as Fox Comics' Wonder Man, which (according to court testimony) Fox started as a copy of Superman. This extended to DC suing Fawcett Comics over Captain Marvel, at the time comics' top-selling character (see National Comics Publications, Inc. v. Fawcett Publications, Inc.). Faced with declining sales and the prospect of bankruptcy if it lost, Fawcett capitulated in 1953 and ceased publishing comics. Years later, Fawcett sold the rights for Captain Marvel to DC—which in 1972 revived Captain Marvel in the new title Shazam! featuring artwork by his creator, C. C. Beck. In the meantime, the abandoned trademark had been seized by Marvel Comics in 1967, with the creation of their Captain Marvel, forbidding the DC comic itself to be called that. While Captain Marvel did not recapture his old popularity, he later appeared in a Saturday morning live action TV adaptation and gained a prominent place in the mainstream continuity DC calls the DC Universe.", "title": "History" }, { "paragraph_id": 12, "text": "When the popularity of superheroes faded in the late 1940s, the company focused on such genres as science fiction, Westerns, humor, and romance. DC also published crime and horror titles, but relatively tame ones, and thus avoided the mid-1950s backlash against such comics. A handful of the most popular superhero-titles, including Action Comics and Detective Comics, the medium's two longest-running titles, continued publication.", "title": "History" }, { "paragraph_id": 13, "text": "In the mid-1950s, editorial director Irwin Donenfeld and publisher Liebowitz directed editor Julius Schwartz (whose roots lay in the science-fiction book market) to produce a one-shot Flash story in the try-out title Showcase. Instead of reviving the old character, Schwartz had writers Robert Kanigher and John Broome, penciler Carmine Infantino, and inker Joe Kubert create an entirely new super-speedster, updating and modernizing the Flash's civilian identity, costume, and origin with a science-fiction bent. The Flash's reimagining in Showcase No. 4 (October 1956) proved sufficiently popular that it soon led to a similar revamping of the Green Lantern character, the introduction of the modern all-star team Justice League of America (JLA), and many more superheroes, heralding what historians and fans call the Silver Age of Comic Books.", "title": "History" }, { "paragraph_id": 14, "text": "National did not reimagine its continuing characters (primarily Superman, Batman, and Wonder Woman), but radically overhauled them. The Superman family of titles, under editor Mort Weisinger, introduced such enduring characters as Supergirl, Bizarro, and Brainiac. The Batman titles, under editor Jack Schiff, introduced the successful Batwoman, Bat-Girl, Ace the Bat-Hound, and Bat-Mite in an attempt to modernize the strip with non-science-fiction elements. 
Schwartz, together with artist Infantino, then revitalized Batman in what the company promoted as the \"New Look\", with relatively down-to-Earth stories re-emphasizing Batman as a detective. Meanwhile, editor Kanigher successfully introduced a whole family of Wonder Woman characters having fantastic adventures in a mythological context.", "title": "History" }, { "paragraph_id": 15, "text": "Since the 1940s, when Superman, Batman, and many of the company's other heroes began appearing in stories together, DC's characters inhabited a shared continuity that, decades later, was dubbed the \"DC Universe\" by fans. With the story \"Flash of Two Worlds\", in Flash No. 123 (September 1961), editor Schwartz (with writer Gardner Fox and artists Infantino and Joe Giella) introduced a concept that allowed slotting the 1930s and 1940s Golden Age heroes into this continuity via the explanation that they lived on an other-dimensional \"Earth 2\", as opposed to the modern heroes' \"Earth 1\"—in the process creating the foundation for what was later called the DC Multiverse.", "title": "History" }, { "paragraph_id": 16, "text": "DC's introduction of the reimagined superheroes did not go unnoticed by other comics companies. In 1961, with DC's JLA as the specific spur, Marvel Comics writer-editor Stan Lee and a robust creator Jack Kirby ushered in the sub-Silver Age \"Marvel Age\" of comics with the debut issue of The Fantastic Four. Reportedly, DC ignored the initial success of Marvel with this editorial change until its consistently strengthening sales, albeit also benefiting Independent News' business as their distributor as well, made that impossible. That commercial situation especially applied with Marvel's superior sell-through percentage numbers which were typically 70% to DC's roughly 50%, which meant DC's publications were barely making a profit in comparison after returns from the distributors were calculated while Marvel was making an excellent profit by comparison.", "title": "History" }, { "paragraph_id": 17, "text": "However, the senior DC staff were reportedly at a loss at this time to understand how this small publishing house was achieving this increasingly threatening commercial strength. For instance, when Marvel's product was examined in a meeting, Marvel's emphasis on more sophisticated character-based narrative and artist-driven visual storytelling was apparently ignored for self-deluding guesses at the brand's popularity which included superficial reasons like the presence of the color red or word balloons on the cover, or that the perceived crudeness of the interior art was somehow more appealing to readers. When Lee learned about DC's subsequent experimental attempts to imitate these perceived details, he amused himself by arranging direct defiance of those assumptions in Marvel's publications as sales strengthened further to frustrate the competition.", "title": "History" }, { "paragraph_id": 18, "text": "However, this ignorance of Marvel's true appeal did not extend to some of the writing talent during this period, from which there were some attempts to emulate Marvel's narrative approach. For instance, there was the Doom Patrol series by Arnold Drake, a writer who previously warned the management of the new rival's strength; a superhero team of outsiders who resented their freakish powers, which Drake later speculated was plagiarized by Stan Lee to create The X-Men. 
There was also the young Jim Shooter who purposely emulated Marvel's writing when he wrote for DC after much study of both companies' styles, such as for the Legion of Super-Heroes feature. In 1966, National Periodical Publications had set up its own television arm, led by Allen Ducovny to develop and produce projects for television, with Superman TV Corporation to handle its television distribution of NPP's TV shows.", "title": "History" }, { "paragraph_id": 19, "text": "A 1966 Batman TV show on the ABC network sparked a temporary spike in comic book sales, and a brief fad for superheroes in Saturday morning animation (Filmation created most of DC's initial cartoons) and other media. DC significantly lightened the tone of many DC comics—particularly Batman and Detective Comics—to better complement the \"camp\" tone of the TV series. This tone coincided with the famous \"Go-Go Checks\" checkerboard cover-dress which featured a black-and-white checkerboard strip (all DC books cover dated February 1966 until August 1967) at the top of each comic, a misguided attempt by then-managing editor Irwin Donenfeld to make DC's output \"stand out on the newsracks\". In particular, DC artist, Carmine Infantino, complained that the visual cover distinctiveness made DC's titles easier for readers to see and then avoid in favor of Marvel's titles.", "title": "History" }, { "paragraph_id": 20, "text": "In 1967, Batman artist Infantino (who had designed popular Silver Age characters Batgirl and the Phantom Stranger) rose from art director to become DC's editorial director. With the growing popularity of upstart rival Marvel Comics threatening to topple DC from its longtime number-one position in the comics industry, he attempted to infuse the company with more focus towards marketing new and existing titles and characters with more adult sensibilities towards an emerging older age group of superhero comic book fans that grew out of Marvel's efforts to market their superhero line to college-aged adults. He also recruited major talents such as ex-Marvel artist and Spider-Man co-creator Steve Ditko and promising newcomers Neal Adams and Denny O'Neil and replaced some existing DC editors with artist-editors, including Joe Kubert and Dick Giordano, to give DC's output a more artistic critical eye.", "title": "History" }, { "paragraph_id": 21, "text": "In 1967, National Periodical Publications was purchased by Kinney National Company, which purchased Warner Bros.-Seven Arts in 1969. Kinney National spun off its non-entertainment assets in 1972 (as National Kinney Corporation) and changed its name to Warner Communications Inc.", "title": "History" }, { "paragraph_id": 22, "text": "In 1970, Jack Kirby moved from Marvel Comics to DC, at the end of the Silver Age of Comics, in which Kirby's contributions to Marvel played a large, integral role.", "title": "History" }, { "paragraph_id": 23, "text": "As artist Gil Kane described:", "title": "History" }, { "paragraph_id": 24, "text": "Jack was the single most influential figure in the turnaround in Marvel's fortunes from the time he rejoined the company ... It wasn't merely that Jack conceived most of the characters that are being done, but ... Jack's point of view and philosophy of drawing became the governing philosophy of the entire publishing company and, beyond the publishing company, of the entire field ... [Marvel took] Jack and use[d] him as a primer. They would get artists ... and they taught them the ABCs, which amounted to learning Jack Kirby ... 
Jack was like the Holy Scripture and they simply had to follow him without deviation. That's what was told to me ... It was how they taught everyone to reconcile all those opposing attitudes to one single master point of view.", "title": "History" }, { "paragraph_id": 25, "text": "Given carte blanche to write and illustrate his own stories, he created a handful of thematically-linked series he called collectively \"The Fourth World\". In the existing series Superman's Pal Jimmy Olsen and in his own, newly-launched series New Gods, Mister Miracle, and The Forever People, Kirby introduced such enduring characters and concepts as arch-villain Darkseid and the other-dimensional realm Apokolips. Furthermore, Kirby intended their stories to be reprinted in collected editions, in a publishing format that was later called the trade paperback, which became a standard industry practice decades later. While sales were respectable, they did not meet DC management's initially high expectations, and also suffered from a lack of comprehension and internal support from Infantino. By 1973 the \"Fourth World\" was all cancelled, although Kirby's conceptions soon became integral to the broadening of the DC Universe, especially after the major toy-company, Kenner Products, judged them ideal for their action-figure adaptation of the DC Universe, the Super Powers Collection. Obligated by his contract, Kirby created other unrelated series for DC, including Kamandi, The Demon, and OMAC, before ultimately returning to Marvel Comics in 1976.", "title": "History" }, { "paragraph_id": 26, "text": "Following the science-fiction innovations of the Silver Age, the comics of the 1970s and 1980s became known as the Bronze Age, as fantasy gave way to more naturalistic and sometimes darker themes. Illegal drug use, banned by the Comics Code Authority, explicitly appeared in comics for the first time in Marvel Comics' story \"Green Goblin Reborn!\" in The Amazing Spider-Man No. 96 (May 1971), and after the Code's updating in response, DC offered a drug-fueled storyline in writer Dennis O'Neil and artist Neal Adams' Green Lantern, beginning with the story \"Snowbirds Don't Fly\" in the retitled Green Lantern / Green Arrow No. 85 (September 1971), which depicted Speedy, the teen sidekick of superhero archer Green Arrow, as having become a heroin addict.", "title": "History" }, { "paragraph_id": 27, "text": "Jenette Kahn, a former children's magazine publisher, replaced Infantino as editorial director in January 1976. As it happened, her first task even before being formally hired, was to convince Bill Sarnoff, the head of Warner Publishing, to keep DC as a publishing concern, as opposed to simply managing their licensing of their properties. With that established, DC had attempted to compete with the now-surging Marvel by dramatically increasing its output and attempting to win the market by flooding it. This included launching series featuring such new characters as Firestorm and Shade, the Changing Man, as well as an increasing array of non-superhero titles, in an attempt to recapture the pre-Wertham days of post-War comicdom.", "title": "History" }, { "paragraph_id": 28, "text": "In 1977, the company officially changed its name to DC Comics. 
It had used the brand \"Superman-DC\" since the 1950s, and was colloquially known as DC Comics for years.", "title": "History" }, { "paragraph_id": 29, "text": "In June 1978, five months before the release of the first Superman movie, Kahn expanded the line further, increasing the number of titles and story pages, and raising the price from 35 cents to 50 cents. Most series received eight-page back-up features while some had full-length twenty-five-page stories. This was a move the company called the \"DC Explosion\". The move was not successful, however, and corporate parent Warner dramatically cut back on these largely unsuccessful titles, firing many staffers in what industry watchers dubbed \"the DC Implosion\". In September 1978, the line was dramatically reduced and standard-size books returned to 17-page stories but for a still increased 40 cents. By 1980, the books returned to 50 cents with a 25-page story count but the story pages replaced house ads in the books.", "title": "History" }, { "paragraph_id": 30, "text": "Seeking new ways to boost market share, the new team of publisher Kahn, vice president Paul Levitz, and managing editor Giordano addressed the issue of talent instability. To that end—and following the example of Atlas/Seaboard Comics and such independent companies as Eclipse Comics—DC began to offer royalties in place of the industry-standard work-for-hire agreement in which creators worked for a flat fee and signed away all rights, giving talent a financial incentive tied to the success of their work. As it happened, the implementation of these incentives proved opportune considering Marvel Comics' Editor-in-Chief, Jim Shooter, was alienating much of his company's creative staff with his authoritarian manner and major talents there went to DC like Roy Thomas, Gene Colan, Marv Wolfman, and George Perez.", "title": "History" }, { "paragraph_id": 31, "text": "In addition, emulating the era's new television form, the miniseries while addressing the matter of an excessive number of ongoing titles fizzling out within a few issues of their start, DC created the industry concept of the comic book limited series. This publishing format allowed for the deliberate creation of finite storylines within a more flexible publishing format that could showcase creations without forcing the talent into unsustainable open-ended commitments. The first such title was World of Krypton in 1979, and its positive results led to subsequent similar titles and later more ambitious productions like Camelot 3000 for the direct market in 1982.", "title": "History" }, { "paragraph_id": 32, "text": "These changes in policy shaped the future of the medium as a whole, and in the short term allowed DC to entice creators away from rival Marvel, and encourage stability on individual titles. In November 1980 DC launched the ongoing series The New Teen Titans, by writer Marv Wolfman and artist George Pérez, two popular talents with a history of success. Their superhero-team comic, superficially similar to Marvel's ensemble series X-Men, but rooted in DC history, earned significant sales in part due to the stability of the creative team, who both continued with the title for six full years. 
In addition, Wolfman and Pérez took advantage of the limited-series option to create a spin-off title, Tales of the New Teen Titans, to present origin stories of their original characters without having to break the narrative flow of the main series or oblige them to double their work load with another ongoing title.", "title": "History" }, { "paragraph_id": 33, "text": "This successful revitalization of the Silver Age Teen Titans led DC's editors to seek the same for the wider DC Universe. The result, the Wolfman/Pérez 12-issue limited series Crisis on Infinite Earths, gave the company an opportunity to realign and jettison some of the characters' complicated backstory and continuity discrepancies. A companion publication, two volumes entitled The History of the DC Universe, set out the revised history of the major DC characters. Crisis featured many key deaths that shaped the DC Universe for the following decades, and it separated the timeline of DC publications into pre- and post-\"Crisis\".", "title": "History" }, { "paragraph_id": 34, "text": "Meanwhile, a parallel update had started in the non-superhero and horror titles. Since early 1984, the work of British writer Alan Moore had revitalized the horror series The Saga of the Swamp Thing, and soon numerous British writers, including Neil Gaiman and Grant Morrison, began freelancing for the company. The resulting influx of sophisticated horror-fantasy material led to DC in 1993 establishing the Vertigo mature-readers imprint, which did not subscribe to the Comics Code Authority.", "title": "History" }, { "paragraph_id": 35, "text": "Two DC limited series, Batman: The Dark Knight Returns by Frank Miller and Watchmen by Moore and artist Dave Gibbons, drew attention in the mainstream press for their dark psychological complexity and promotion of the antihero. These titles helped pave the way for comics to be more widely accepted in literary-criticism circles and to make inroads into the book industry, with collected editions of these series as commercially successful trade paperbacks.", "title": "History" }, { "paragraph_id": 36, "text": "The mid-1980s also saw the end of many long-running DC war comics, including series that had been in print since the 1960s. These titles, all with over 100 issues, included Sgt. Rock, G.I. Combat, The Unknown Soldier, and Weird War Tales.", "title": "History" }, { "paragraph_id": 37, "text": "In March 1989, Warner Communications merged with Time Inc., making DC Comics a subsidiary of Time Warner. In June, the first Tim Burton-directed Batman movie was released, and DC began publishing its hardcover series of DC Archive Editions, collections of many of their early, key comics series, featuring rare and expensive stories unseen by many modern fans. Restoration for many of the Archive Editions was handled by Rick Keene with colour restoration by DC's long-time resident colourist, Bob LeRose. 
These collections attempted to retroactively credit many of the writers and artists who had worked without much recognition for DC during the early period of comics when individual credits were few and far between.", "title": "History" }, { "paragraph_id": 38, "text": "The comics industry experienced a brief boom in the early 1990s, thanks to a combination of speculative purchasing (mass purchase of the books as collectible items, with intent to resell at a higher value as the rising value of older issues, was thought to imply that all comics would rise dramatically in price) and several storylines which gained attention from the mainstream media. DC's extended storylines in which Superman was killed, Batman was crippled and superhero Green Lantern turned into the supervillain Parallax resulted in dramatically increased sales, but the increases were as temporary as the hero's replacements. Sales dropped off as the industry went into a major slump, while manufactured \"collectables\" numbering in the millions replaced quality with quantity until fans and speculators alike deserted the medium in droves.", "title": "History" }, { "paragraph_id": 39, "text": "DC's Piranha Press and other imprints (including the mature readers line Vertigo, and Helix, a short-lived science fiction imprint) were introduced to facilitate compartmentalized diversification and allow for specialized marketing of individual product lines. They increased the use of non-traditional contractual arrangements, including the dramatic rise of creator-owned projects, leading to a significant increase in critically lauded work (much of it for Vertigo) and the licensing of material from other companies. DC also increased publication of book-store friendly formats, including trade paperback collections of individual serial comics, as well as original graphic novels.", "title": "History" }, { "paragraph_id": 40, "text": "One of the other imprints was Impact Comics from 1991 to 1992 in which the Archie Comics superheroes were licensed and revamped. The stories in the line were part of its own shared universe.", "title": "History" }, { "paragraph_id": 41, "text": "DC entered into a publishing agreement with Milestone Media that gave DC a line of comics featuring a culturally and racially diverse range of superhero characters. Although the Milestone line ceased publication after a few years, it yielded the popular animated series Static Shock. DC established Paradox Press to publish material such as the large-format Big Book of... series of multi-artist interpretations on individual themes, and such crime fiction as the graphic novel Road to Perdition. In 1998, DC purchased WildStorm Comics, Jim Lee's imprint under the Image Comics banner, continuing it for many years as a wholly separate imprint – and fictional universe – with its own style and audience. As part of this purchase, DC also began to publish titles under the fledgling WildStorm sub-imprint America's Best Comics (ABC), a series of titles created by Alan Moore, including The League of Extraordinary Gentlemen, Tom Strong, and Promethea. Moore strongly contested this situation, and DC eventually stopped publishing ABC.", "title": "History" }, { "paragraph_id": 42, "text": "In March 2003 DC acquired publishing and merchandising rights to the long-running fantasy series Elfquest, previously self-published by creators Wendy and Richard Pini under their WaRP Graphics publication banner. This series then followed another non-DC title, Tower Comics' series T.H.U.N.D.E.R. 
Agents, in collection into DC Archive Editions. In 2004 DC temporarily acquired the North American publishing rights to graphic novels from European publishers 2000 AD and Humanoids. It also rebranded its younger-audience titles with the mascot Johnny DC and established the CMX imprint to reprint translated manga. In 2006, CMX took over from Dark Horse Comics publication of the webcomic Megatokyo in print form. DC also took advantage of the demise of Kitchen Sink Press and acquired the rights to much of the work of Will Eisner, such as his The Spirit series and his graphic novels.", "title": "History" }, { "paragraph_id": 43, "text": "In 2004, DC began laying the groundwork for a full continuity-reshuffling sequel to Crisis on Infinite Earths, promising substantial changes to the DC Universe (and side-stepping the 1994 Zero Hour event which similarly tried to ret-con the history of the DCU). In 2005, the critically lauded Batman Begins film was released; also, the company published several limited series establishing increasingly escalated conflicts among DC's heroes, with events climaxing in the Infinite Crisis limited series. Immediately after this event, DC's ongoing series jumped forward a full year in their in-story continuity, as DC launched a weekly series, 52, to gradually fill in the missing time. Concurrently, DC lost the copyright to \"Superboy\" (while retaining the trademark) when the heirs of Jerry Siegel used a provision of the 1976 revision to the copyright law to regain ownership.", "title": "History" }, { "paragraph_id": 44, "text": "In 2005, DC launched its \"All-Star\" line (evoking the title of the 1940s publication), designed to feature some of the company's best-known characters in stories that eschewed the long and convoluted continuity of the DC Universe. The line began with All-Star Batman & Robin the Boy Wonder and All-Star Superman, with All-Star Wonder Woman and All-Star Batgirl announced in 2006 but neither being released nor scheduled as of the end of 2009.", "title": "History" }, { "paragraph_id": 45, "text": "DC licensed characters from the Archie Comics imprint Red Circle Comics by 2007. They appeared in the Red Circle line, based in the DC Universe, with a series of one-shots followed by a miniseries that lead into two ongoing titles, each lasting 10 issues.", "title": "History" }, { "paragraph_id": 46, "text": "In 2011, DC rebooted all of its running titles following the Flashpoint storyline. The reboot called The New 52 gave new origin stories and costume designs to many of DC's characters.", "title": "History" }, { "paragraph_id": 47, "text": "DC licensed pulp characters including Doc Savage and the Spirit which it then used, along with some DC heroes, as part of the First Wave comics line launched in 2010 and lasting through fall 2011.", "title": "History" }, { "paragraph_id": 48, "text": "In May 2011, DC announced it would begin releasing digital versions of their comics on the same day as paper versions.", "title": "History" }, { "paragraph_id": 49, "text": "On June 1, 2011, DC announced that it would end all ongoing series set in the DC Universe in August and relaunch its comic line with 52 issue #1s, starting with Justice League on August 31 (written by Geoff Johns and drawn by Jim Lee), with the rest to follow later on in September.", "title": "History" }, { "paragraph_id": 50, "text": "On June 4, 2013, DC unveiled two new digital comic innovations to enhance interactivity: DC and DC Multiverse. 
DC² layers dynamic artwork onto digital comic panels, adding a new level of dimension to digital storytelling, while DC Multiverse allows readers to determine a specific story outcome by selecting individual characters, storylines and plot developments while reading the comic, meaning one digital comic has multiple outcomes. DC² appeared in the digital-first title Batman '66, based on the 1960s television series, and DC Multiverse appeared in Batman: Arkham Origins, a digital-first title based on the video game of the same name.", "title": "History" }, { "paragraph_id": 51, "text": "In 2014, DC announced an eight-issue miniseries titled Convergence, which began in April 2015.", "title": "History" }, { "paragraph_id": 52, "text": "In 2016, DC announced a line-wide relaunch titled DC Rebirth. The new line would launch with an 80-page one-shot titled DC Universe: Rebirth, written by Geoff Johns, with art from Gary Frank, Ethan Van Sciver, and more. After that, many new series would launch with a twice-monthly release schedule and new creative teams for nearly every title. The relaunch was meant to bring back the legacy and heart many felt had been missing from DC characters since the launch of the New 52. Rebirth brought huge success, both financially and critically.", "title": "History" }, { "paragraph_id": 53, "text": "On February 21, 2020, DC Comics Co-Publisher Dan DiDio stepped down after 10 years in that position. The company did not give a reason for the move, nor did it indicate whether it was his decision or the company's. The leadership change was the latest event in a company restructuring that had begun the previous month, when several top executives were laid off. However, Bleeding Cool reported that he was fired.", "title": "History" }, { "paragraph_id": 54, "text": "In June 2020, Warner Bros. announced a separate DC-themed online-only convention. Known as DC FanDome, the free \"immersive virtual fan experience\" was a 24-hour-long event held on August 22, 2020. The main presentation, entitled \"DC FanDome: Hall of Heroes\", was held as scheduled, while the remaining programming was provided through a one-day video-on-demand experience, \"DC FanDome: Explore the Multiverse\", on September 12.", "title": "History" }, { "paragraph_id": 55, "text": "As Warner Bros. and DC's response to San Diego Comic-Con's cancellation due to the COVID-19 pandemic, the convention featured information about DC-based content including the DC Extended Universe film franchise, the Arrowverse television franchise, comic books, and video games. The convention also returned for the virtual premiere of Wonder Woman 1984, and returned once again on October 16, 2021.", "title": "History" }, { "paragraph_id": 56, "text": "In August 2020, roughly one-third of DC's editorial ranks were laid off, including the editor-in-chief, senior story editor, executive editor, and several senior VPs.", "title": "History" }, { "paragraph_id": 57, "text": "In March 2021, DC relaunched its entire line once again under the banner of Infinite Frontier. After the events of the Dark Nights: Death Metal storyline, the DC Multiverse was expanded into a larger \"Omniverse\" where everything is canon, effectively reversing the changes The New 52 had introduced a decade prior.", "title": "History" }, { "paragraph_id": 58, "text": "Meanwhile, AT&T spun off WarnerMedia, which merged with Discovery, Inc. to form Warner Bros. Discovery.
This merger was completed on April 8, 2022.", "title": "History" }, { "paragraph_id": 59, "text": "In January 2023, DC relaunched its line under the banner of Dawn of DC, following the conclusion of Dark Crisis on Infinite Earths and Lazarus Planet. In May of that year, Jim Lee was promoted to President of DC.", "title": "History" } ]
DC Comics, Inc. is an American comic book publisher and the flagship unit of DC Entertainment, a subsidiary of Warner Bros. Discovery. DC Comics is one of the largest and oldest American comic book companies; its first comic under the DC banner was published in 1937. The majority of its publications take place within the fictional DC Universe and feature numerous culturally iconic heroic characters, such as Superman, Batman, Wonder Woman, Green Lantern, the Flash, and Aquaman, as well as famous fictional teams including the Justice League, the Justice Society of America, the Teen Titans, and the Suicide Squad. The universe also features an assortment of well-known supervillains such as the Joker, Lex Luthor, Deathstroke, the Reverse-Flash, Brainiac, and Darkseid. The company has published non-DC Universe-related material, including Watchmen, V for Vendetta, Fables and many titles under their alternative imprint Vertigo and now DC Black Label. Originally in Manhattan at 432 Fourth Avenue, the DC Comics offices have been located at 480 and later 575 Lexington Avenue; 909 Third Avenue; 75 Rockefeller Plaza; 666 Fifth Avenue; and 1325 Avenue of the Americas. DC had its headquarters at 1700 Broadway, Midtown Manhattan, New York City, until DC Entertainment relocated its headquarters to Burbank, California in April 2015. Penguin Random House Publisher Services distributes DC Comics' books to the bookstore market, while Diamond Comic Distributors supplied the comic-shop direct market until June 2020, when it was replaced in that market by Lunar Distribution and UCS Comic Distributors, which had already come to dominate direct-market distribution because of the disruption to Diamond caused by the COVID-19 pandemic. DC Comics and its longtime major competitor Marvel Comics together shared approximately 70% of the American comic book market in 2017, though this figure may give a distorted view, since graphic novels are excluded. With the sales of all books included, DC is the second-biggest publisher, after Viz Media, and Marvel is third.
2002-01-11T21:11:37Z
2023-12-18T08:02:34Z
[ "Template:Use American English", "Template:Sfn", "Template:Citation needed", "Template:Refend", "Template:Div col", "Template:Official website", "Template:DC Comics imprints", "Template:DC events", "Template:GoldenAge", "Template:Portal", "Template:Cite journal", "Template:Comic book publishers in North America", "Template:Main", "Template:Cite web", "Template:Refbegin", "Template:Gcdb publisher", "Template:Authority control", "Template:Infobox publisher", "Template:Multiple image", "Template:Div col end", "Template:Reflist", "Template:Cite press release", "Template:Sic", "Template:DC Comics War Titles", "Template:Pp-pc", "Template:Notelist", "Template:Comicbookdb", "Template:Short description", "Template:Use mdy dates", "Template:Efn", "Template:Cite book", "Template:Citation", "Template:Sister project links", "Template:About", "Template:Cite news", "Template:Cite magazine", "Template:Webarchive" ]
https://en.wikipedia.org/wiki/DC_Comics
9,109
Diophantine equation
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents. Diophantine problems have fewer equations than unknowns and involve finding integers that solve all equations simultaneously. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations (beyond the case of linear and quadratic equations) was an achievement of the twentieth century. In the following Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants: The simplest linear Diophantine equation takes the form ax + by = c, where a, b and c are given integers. The solutions are described by the following theorem: the equation has an integer solution if and only if c is a multiple of the greatest common divisor d of a and b; moreover, if (x, y) is a solution, then every solution has the form (x + kv, y − ku), where k is an arbitrary integer and u = a/d, v = b/d. Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have a(x + kv) + b(y − ku) = ax + by + k(av − bu) = ax + by + k(udv − vdu) = ax + by = c, showing that (x + kv, y − ku) is another solution. Finally, given two solutions (x₁, y₁) and (x₂, y₂) such that ax₁ + by₁ = ax₂ + by₂ = c, one deduces that u(x₂ − x₁) + v(y₂ − y₁) = 0. As u and v are coprime, Euclid's lemma shows that v divides x₂ − x₁, and thus that there exists an integer k such that both x₂ − x₁ = kv and y₂ − y₁ = −ku. Therefore, x₂ = x₁ + kv and y₂ = y₁ − ku, which completes the proof. The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n 1 , … , n k {\displaystyle n_{1},\dots ,n_{k}} be k pairwise coprime integers greater than one, a 1 , … , a k {\displaystyle a_{1},\dots ,a_{k}} be k arbitrary integers, and N be the product n 1 ⋯ n k . {\displaystyle n_{1}\cdots n_{k}.} The Chinese remainder theorem asserts that the linear Diophantine system x = a₁ + n₁x₁, x = a₂ + n₂x₂, …, x = aₖ + nₖxₖ has exactly one solution ( x , x 1 , … , x k ) {\displaystyle (x,x_{1},\dots ,x_{k})} such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N.
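For illustration, the theorem and the Chinese remainder statement above can be turned into a few lines of Python. The sketch below is not part of the article and its function names are ad hoc; it computes one solution of ax + by = c with the extended Euclidean algorithm, returns the data (x₀, y₀, u, v) describing all solutions, and combines congruences one modulus at a time:

def extended_gcd(a, b):
    # Returns (d, e, f) with a*e + b*f = d = gcd(a, b) (Bezout's identity);
    # for simplicity a and b are assumed non-negative and not both zero.
    if b == 0:
        return a, 1, 0
    d, e, f = extended_gcd(b, a % b)
    return d, f, e - (a // b) * f

def solve_linear_diophantine(a, b, c):
    # All integer solutions of a*x + b*y = c are (x0 + k*v, y0 - k*u);
    # returns (x0, y0, u, v), or None when gcd(a, b) does not divide c.
    d, e, f = extended_gcd(a, b)
    if c % d:
        return None
    h = c // d
    return e * h, f * h, a // d, b // d

def crt(residues, moduli):
    # Smallest x with x = r_i (mod n_i) for pairwise coprime moduli, as in
    # the Chinese remainder theorem above; returns (x, N) with 0 <= x < N.
    x, N = 0, 1
    for r, n in zip(residues, moduli):
        t = solve_linear_diophantine(N, n, r - x)[0]   # N*t = r - x (mod n)
        x, N = x + N * t, N * n
    return x % N, N

x0, y0, u, v = solve_linear_diophantine(6, 10, 14)
assert all(6 * (x0 + k * v) + 10 * (y0 - k * u) == 14 for k in range(-3, 4))
assert crt([2, 3, 2], [3, 5, 7]) == (23, 105)

The assertions check the solution family of 6x + 10y = 14 and the classical system x ≡ 2 (mod 3), x ≡ 3 (mod 5), x ≡ 2 (mod 7), whose smallest solution is 23.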
More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation, every system of linear Diophantine equations may be written AX = C, where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers. The computation of the Smith normal form of A provides two unimodular matrices (that is, matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix B = UAV is such that bᵢ,ᵢ is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as B(V⁻¹X) = UC. Calling yᵢ the entries of V⁻¹X and dᵢ those of D = UC, this leads to the system bᵢ,ᵢyᵢ = dᵢ for i ≤ k, and 0 = dᵢ for k < i ≤ m. This system is equivalent to the given one in the following sense: A column matrix of integers x is a solution of the given system if and only if x = Vy for some column matrix of integers y such that By = D. It follows that the system has a solution if and only if bᵢ,ᵢ divides dᵢ for i ≤ k and dᵢ = 0 for i > k. If this condition is fulfilled, the solutions of the given system are x = Vy with yᵢ = dᵢ/bᵢ,ᵢ for i ≤ k and yᵢ = hᵢ for i > k, where hₖ₊₁, …, hₙ are arbitrary integers. Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form "is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form." Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that also include inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.
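As a computational companion to the method just described, the following sketch (illustrative, not a vetted library routine) diagonalizes an integer matrix with unimodular row and column operations while tracking U and V, and then applies the divisibility test bᵢ,ᵢ | dᵢ. It does not enforce the divisibility chain of the true Smith normal form, since a plain diagonal form already suffices for solving:

def diagonalize(A):
    # Bring integer matrix A to diagonal form B = U*A*V, with U and V
    # unimodular, by elementary row and column operations over Z.
    m, n = len(A), len(A[0])
    B = [row[:] for row in A]
    U = [[int(i == j) for j in range(m)] for i in range(m)]
    V = [[int(i == j) for j in range(n)] for i in range(n)]
    for t in range(min(m, n)):
        while True:
            nonzero = [(abs(B[i][j]), i, j) for i in range(t, m)
                       for j in range(t, n) if B[i][j]]
            if not nonzero:
                return U, B, V
            _, p, q = min(nonzero)              # move smallest pivot to (t, t)
            B[t], B[p] = B[p], B[t]; U[t], U[p] = U[p], U[t]
            for row in B: row[t], row[q] = row[q], row[t]
            for row in V: row[t], row[q] = row[q], row[t]
            for i in range(t + 1, m):           # reduce column t modulo pivot
                f = B[i][t] // B[t][t]
                B[i] = [x - f * y for x, y in zip(B[i], B[t])]
                U[i] = [x - f * y for x, y in zip(U[i], U[t])]
            for j in range(t + 1, n):           # reduce row t modulo pivot
                f = B[t][j] // B[t][t]
                for row in B: row[j] -= f * row[t]
                for row in V: row[j] -= f * row[t]
            if all(B[i][t] == 0 for i in range(t + 1, m)) and \
               all(B[t][j] == 0 for j in range(t + 1, n)):
                break                           # pivot divides its row/column
    return U, B, V

def solve_integer_system(A, C):
    # One integer solution of A*X = C, or None; the columns of V matching
    # zero diagonal entries of B span the homogeneous solutions.
    m, n = len(A), len(A[0])
    U, B, V = diagonalize(A)
    D = [sum(U[i][j] * C[j] for j in range(m)) for i in range(m)]
    y = [0] * n
    for i in range(m):
        b = B[i][i] if i < n else 0
        if b == 0:
            if D[i] != 0:
                return None       # inconsistent equation 0 = d_i
        elif D[i] % b:
            return None           # divisibility test b_ii | d_i fails
        else:
            y[i] = D[i] // b
    return [sum(V[i][j] * y[j] for j in range(n)) for i in range(n)]

print(solve_integer_system([[2, 4], [3, 5]], [6, 7]))   # [-1, 2]

On the system 2x + 4y = 6, 3x + 5y = 7, the sketch returns the unique integer solution (−1, 2).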
A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. A typical such equation is the equation of Fermat's Last Theorem, xᵈ + yᵈ = zᵈ. As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface. Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent to testing whether a rational number is the dth power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved. For degrees higher than three, most known results are theorems asserting that there are no solutions (for example, Fermat's Last Theorem) or that the number of solutions is finite (for example, Faltings's theorem). For degree three, there are general solving methods that work on almost all equations encountered in practice, but no algorithm is known that works for every cubic equation. Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced. For proving that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation x² + y² = 3z² does not have any other solution than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius 3 , {\displaystyle {\sqrt {3}},} centered at the origin.
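The modular argument above is easy to check mechanically. This small sketch (illustrative, not from the article) enumerates both sides of x² + y² = 3z² modulo 4 and then confirms with a box search that only the trivial solution appears:

lhs = {(x * x + y * y) % 4 for x in range(4) for y in range(4)}
rhs = {(3 * z * z) % 4 for z in range(4)}
print(sorted(lhs), sorted(rhs))   # [0, 1, 2] [0, 3]: they meet only at 0

# Meeting only at 0 (mod 4) forces x, y, z all even, so a nonzero solution
# could be halved forever: infinite descent. A box search agrees:
solutions = [(x, y, z) for x in range(-20, 21) for y in range(-20, 21)
             for z in range(-20, 21) if x * x + y * y == 3 * z * z]
assert solutions == [(0, 0, 0)]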
More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if one exists. If a non-trivial integer solution is known, one may produce all other solutions in the following way. Let Q(x₁, …, xₙ) = 0 be a homogeneous Diophantine equation, where Q ( x 1 , … , x n ) {\displaystyle Q(x_{1},\ldots ,x_{n})} is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all x i {\displaystyle x_{i}} are zero. If ( a 1 , … , a n ) {\displaystyle (a_{1},\ldots ,a_{n})} is a non-trivial integer solution of this equation, then ( a 1 , … , a n ) {\displaystyle \left(a_{1},\ldots ,a_{n}\right)} are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if ( p 1 q , … , p n q ) {\textstyle \left({\frac {p_{1}}{q}},\ldots ,{\frac {p_{n}}{q}}\right)} are homogeneous coordinates of a rational point of this hypersurface, where q , p 1 , … , p n {\displaystyle q,p_{1},\ldots ,p_{n}} are integers, then ( p 1 , … , p n ) {\displaystyle \left(p_{1},\ldots ,p_{n}\right)} is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form (kp₁/d, …, kpₙ/d), where k is any integer, and d is the greatest common divisor of the p i . {\displaystyle p_{i}.} It follows that solving the Diophantine equation Q ( x 1 , … , x n ) = 0 {\displaystyle Q(x_{1},\ldots ,x_{n})=0} is completely reduced to finding the rational points of the corresponding projective hypersurface. Let now A = ( a 1 , … , a n ) {\displaystyle A=\left(a_{1},\ldots ,a_{n}\right)} be an integer solution of the equation Q ( x 1 , … , x n ) = 0. {\displaystyle Q(x_{1},\ldots ,x_{n})=0.} As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A; the rational points are those obtained from rational lines, that is, those that correspond to rational values of the parameters. More precisely, one may proceed as follows. By permuting the indices, one may suppose, without loss of generality, that a n ≠ 0. {\displaystyle a_{n}\neq 0.} Then one may pass to the affine case by considering the affine hypersurface defined by q(x₁, …, xₙ₋₁) = Q(x₁, …, xₙ₋₁, 1), which has the rational point R = (r₁, …, rₙ₋₁), where rᵢ = aᵢ/aₙ. If this rational point is a singular point, that is, if all partial derivatives are zero at R, then all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables yᵢ = xᵢ − rᵢ does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables. If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. This case is thus a special instance of the preceding case. In the general case, consider the parametric equation of a line passing through R: x₂ = r₂ + t₂(x₁ − r₁), …, xₙ₋₁ = rₙ₋₁ + tₙ₋₁(x₁ − r₁). Substituting this in q, one gets a polynomial of degree two in x₁ that is zero for x₁ = r₁. It is thus divisible by x₁ − r₁. The quotient is linear in x₁, and may be solved for expressing x₁ as a quotient of two polynomials of degree at most two in t 2 , … , t n − 1 , {\displaystyle t_{2},\ldots ,t_{n-1},} with integer coefficients. Substituting this in the expressions for x 2 , … , x n − 1 , {\displaystyle x_{2},\ldots ,x_{n-1},} one gets, for i = 1, …, n − 1, xᵢ = fᵢ(t₂, …, tₙ₋₁)/fₙ(t₂, …, tₙ₋₁), where f 1 , … , f n {\displaystyle f_{1},\ldots ,f_{n}} are polynomials of degree at most two with integer coefficients. Then, one can return to the homogeneous case. Let, for i = 1, …, n, Fᵢ(t₁, …, tₙ₋₁) be the homogenization of f i . {\displaystyle f_{i}.} These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q: A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t 1 , … , t n − 1 . {\displaystyle t_{1},\ldots ,t_{n-1}.} As F 1 , … , F n {\displaystyle F_{1},\ldots ,F_{n}} are homogeneous polynomials, the point is not changed if all tᵢ are multiplied by the same rational number. Thus, one may suppose that t 1 , … , t n − 1 {\displaystyle t_{1},\ldots ,t_{n-1}} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences ( x 1 , … , x n ) {\displaystyle (x_{1},\ldots ,x_{n})} where, for i = 1, ..., n, xᵢ = k Fᵢ(t₁, …, tₙ₋₁)/d, where k is an integer, t 1 , … , t n − 1 {\displaystyle t_{1},\ldots ,t_{n-1}} are coprime integers, and d is the greatest common divisor of the n integers F i ( t 1 , … , t n − 1 ) . {\displaystyle F_{i}(t_{1},\ldots ,t_{n-1}).} One could hope that the coprimality of the tᵢ could imply that d = 1. Unfortunately this is not the case, as shown in the next section. The equation x² + y² = z² is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples. For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. A line passing through this point may be parameterized by its slope: y = t(x + 1). Putting this in the circle equation x² + y² = 1, one gets x² − 1 + t²(x + 1)² = 0. Dividing by x + 1 results in x − 1 + t²(x + 1) = 0, which is easy to solve in x: x = (1 − t²)/(1 + t²). It follows y = t(x + 1) = 2t/(1 + t²). Homogenizing as described above (that is, writing the slope as t/s for coprime integers s and t), one gets all solutions as x = k(s² − t²)/d, y = 2kst/d, z = k(s² + t²)/d, where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even. The primitive triples are the solutions where k = 1 and s > t > 0.
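A short sketch (illustrative, not from the article) of the resulting parameterization: it runs over coprime pairs s > t > 0 with k = 1, divides by d as above, and verifies x² + y² = z². Each triple is produced twice, with x and y exchanged, which is precisely the point made in the next remark:

from math import gcd, isqrt

def pythagorean_triples(bound):
    # Solutions with k = 1 from x = (s*s - t*t)/d, y = 2*s*t/d,
    # z = (s*s + t*t)/d, where d = 2 when s and t are both odd.
    triples = []
    for s in range(2, isqrt(2 * bound) + 1):
        for t in range(1, s):
            if gcd(s, t) != 1:
                continue
            d = 2 if s % 2 == 1 and t % 2 == 1 else 1
            x, y, z = (s * s - t * t) // d, 2 * s * t // d, (s * s + t * t) // d
            if z <= bound:
                assert x * x + y * y == z * z
                triples.append((x, y, z))
    return triples

print(pythagorean_triples(30))
# (3, 4, 5) and (4, 3, 5), (5, 12, 13) and (12, 5, 13), ... both appear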
This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y. The questions asked in Diophantine analysis include: These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles. The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is no other solution with A and B positive integers less than 10.
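Since the puzzle reduces to 19B − 8A = 1 with A and B decimal digits, an exhaustive check over all digit pairs confirms the uniqueness claimed above (illustrative sketch):

# Father's age 10*A + B, son's age 10*B + A, father = 2*son - 1:
pairs = [(10 * a + b, 10 * b + a) for a in range(1, 10) for b in range(1, 10)
         if 19 * b - 8 * a == 1]
print(pairs)   # [(73, 37)]: the unique solution with digits A, B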
Many well-known puzzles in the field of recreational mathematics lead to Diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts. In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: "It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers." Stated in more modern language, "The equation aⁿ + bⁿ = cⁿ has no solutions for any n higher than 2." Following this, he wrote: "I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain." Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles. In 1657, Fermat attempted to solve the Diophantine equation 61x² + 1 = y² (solved by Brahmagupta over 1000 years earlier). The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method). In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist. Diophantine geometry is the application of techniques from algebraic geometry to equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is not algebraically closed. The oldest general method for solving a Diophantine equation—or for proving that there is no solution—is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle, which uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements, these methods cannot solve most Diophantine equations. The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist. During the 20th century, a new approach was deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of a hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates. This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations. An example of an infinite Diophantine equation is n = a₁² + 2a₂² + 3a₃² + ⋯, which can be expressed as "How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to: which does not always have a solution for positive n. If a Diophantine equation has an additional variable or variables occurring as exponents, it is an exponential Diophantine equation. Examples include the Ramanujan–Nagell equation, 2ⁿ − 7 = x², and the equation of the Fermat–Catalan conjecture and Beal's conjecture, aᵐ + bⁿ = cᵏ with inequality restrictions on the exponents. A general theory for such equations is not available; particular cases such as Catalan's conjecture have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error.
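For the Ramanujan–Nagell equation, a brute-force sketch (illustrative; the search bound is arbitrary, and completeness of the list is Nagell's theorem, not something the search proves) recovers the known solutions:

from math import isqrt

hits = [(n, isqrt(2**n - 7)) for n in range(3, 200)
        if isqrt(2**n - 7) ** 2 == 2**n - 7]
print(hits)   # [(3, 1), (4, 3), (5, 5), (7, 11), (15, 181)]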
[ { "paragraph_id": 0, "text": "In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents.", "title": "" }, { "paragraph_id": 1, "text": "Diophantine problems have fewer equations than unknowns and involve finding integers that solve simultaneously all equations. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry.", "title": "" }, { "paragraph_id": 2, "text": "The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.", "title": "" }, { "paragraph_id": 3, "text": "While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations (beyond the case of linear and quadratic equations) was an achievement of the twentieth century.", "title": "" }, { "paragraph_id": 4, "text": "In the following Diophantine equations, w, x, y, and z are the unknowns and the other letters are given constants:", "title": "Examples" }, { "paragraph_id": 5, "text": "The simplest linear Diophantine equation takes the form", "title": "Linear Diophantine equations" }, { "paragraph_id": 6, "text": "where a, b and c are given integers. The solutions are described by the following theorem:", "title": "Linear Diophantine equations" }, { "paragraph_id": 7, "text": "Proof: If d is this greatest common divisor, Bézout's identity asserts the existence of integers e and f such that ae + bf = d. If c is a multiple of d, then c = dh for some integer h, and (eh, fh) is a solution. On the other hand, for every pair of integers x and y, the greatest common divisor d of a and b divides ax + by. Thus, if the equation has a solution, then c must be a multiple of d. If a = ud and b = vd, then for every solution (x, y), we have", "title": "Linear Diophantine equations" }, { "paragraph_id": 8, "text": "showing that (x + kv, y − ku) is another solution. Finally, given two solutions such that", "title": "Linear Diophantine equations" }, { "paragraph_id": 9, "text": "one deduces that", "title": "Linear Diophantine equations" }, { "paragraph_id": 10, "text": "As u and v are coprime, Euclid's lemma shows that v divides x2 − x1, and thus that there exists an integer k such that both", "title": "Linear Diophantine equations" }, { "paragraph_id": 11, "text": "Therefore,", "title": "Linear Diophantine equations" }, { "paragraph_id": 12, "text": "which completes the proof.", "title": "Linear Diophantine equations" }, { "paragraph_id": 13, "text": "The Chinese remainder theorem describes an important class of linear Diophantine systems of equations: let n 1 , … , n k {\\displaystyle n_{1},\\dots ,n_{k}} be k pairwise coprime integers greater than one, a 1 , … , a k {\\displaystyle a_{1},\\dots ,a_{k}} be k arbitrary integers, and N be the product n 1 ⋯ n k . 
{\\displaystyle n_{1}\\cdots n_{k}.} The Chinese remainder theorem asserts that the following linear Diophantine system has exactly one solution ( x , x 1 , … , x k ) {\\displaystyle (x,x_{1},\\dots ,x_{k})} such that 0 ≤ x < N, and that the other solutions are obtained by adding to x a multiple of N:", "title": "Linear Diophantine equations" }, { "paragraph_id": 14, "text": "More generally, every system of linear Diophantine equations may be solved by computing the Smith normal form of its matrix, in a way that is similar to the use of the reduced row echelon form to solve a system of linear equations over a field. Using matrix notation every system of linear Diophantine equations may be written", "title": "Linear Diophantine equations" }, { "paragraph_id": 15, "text": "where A is an m × n matrix of integers, X is an n × 1 column matrix of unknowns and C is an m × 1 column matrix of integers.", "title": "Linear Diophantine equations" }, { "paragraph_id": 16, "text": "The computation of the Smith normal form of A provides two unimodular matrices (that is matrices that are invertible over the integers and have ±1 as determinant) U and V of respective dimensions m × m and n × n, such that the matrix", "title": "Linear Diophantine equations" }, { "paragraph_id": 17, "text": "is such that bi,i is not zero for i not greater than some integer k, and all the other entries are zero. The system to be solved may thus be rewritten as", "title": "Linear Diophantine equations" }, { "paragraph_id": 18, "text": "Calling yi the entries of VX and di those of D = UC, this leads to the system", "title": "Linear Diophantine equations" }, { "paragraph_id": 19, "text": "This system is equivalent to the given one in the following sense: A column matrix of integers x is a solution of the given system if and only if x = Vy for some column matrix of integers y such that By = D.", "title": "Linear Diophantine equations" }, { "paragraph_id": 20, "text": "It follows that the system has a solution if and only if bi,i divides di for i ≤ k and di = 0 for i > k. If this condition is fulfilled, the solutions of the given system are", "title": "Linear Diophantine equations" }, { "paragraph_id": 21, "text": "where hk+1, …, hn are arbitrary integers.", "title": "Linear Diophantine equations" }, { "paragraph_id": 22, "text": "Hermite normal form may also be used for solving systems of linear Diophantine equations. However, Hermite normal form does not directly provide the solutions; to get the solutions from the Hermite normal form, one has to successively solve several linear equations. Nevertheless, Richard Zippel wrote that the Smith normal form \"is somewhat more than is actually needed to solve linear diophantine equations. Instead of reducing the equation to diagonal form, we only need to make it triangular, which is called the Hermite normal form. The Hermite normal form is substantially easier to compute than the Smith normal form.\"", "title": "Linear Diophantine equations" }, { "paragraph_id": 23, "text": "Integer linear programming amounts to finding some integer solutions (optimal in some sense) of linear systems that include also inequations. Thus systems of linear Diophantine equations are basic in this context, and textbooks on integer programming usually have a treatment of systems of linear Diophantine equations.", "title": "Linear Diophantine equations" }, { "paragraph_id": 24, "text": "A homogeneous Diophantine equation is a Diophantine equation that is defined by a homogeneous polynomial. 
A typical such equation is the equation of Fermat's Last Theorem", "title": "Homogeneous equations" }, { "paragraph_id": 25, "text": "As a homogeneous polynomial in n indeterminates defines a hypersurface in the projective space of dimension n − 1, solving a homogeneous Diophantine equation is the same as finding the rational points of a projective hypersurface.", "title": "Homogeneous equations" }, { "paragraph_id": 26, "text": "Solving a homogeneous Diophantine equation is generally a very difficult problem, even in the simplest non-trivial case of three indeterminates (in the case of two indeterminates the problem is equivalent with testing if a rational number is the dth power of another rational number). A witness of the difficulty of the problem is Fermat's Last Theorem (for d > 2, there is no integer solution of the above equation), which needed more than three centuries of mathematicians' efforts before being solved.", "title": "Homogeneous equations" }, { "paragraph_id": 27, "text": "For degrees higher than three, most known results are theorems asserting that there are no solutions (for example Fermat's Last Theorem) or that the number of solutions is finite (for example Falting's theorem).", "title": "Homogeneous equations" }, { "paragraph_id": 28, "text": "For the degree three, there are general solving methods, which work on almost all equations that are encountered in practice, but no algorithm is known that works for every cubic equation.", "title": "Homogeneous equations" }, { "paragraph_id": 29, "text": "Homogeneous Diophantine equations of degree two are easier to solve. The standard solving method proceeds in two steps. One has first to find one solution, or to prove that there is no solution. When a solution has been found, all solutions are then deduced.", "title": "Homogeneous equations" }, { "paragraph_id": 30, "text": "For proving that there is no solution, one may reduce the equation modulo p. For example, the Diophantine equation", "title": "Homogeneous equations" }, { "paragraph_id": 31, "text": "does not have any other solution than the trivial solution (0, 0, 0). In fact, by dividing x, y, and z by their greatest common divisor, one may suppose that they are coprime. The squares modulo 4 are congruent to 0 and 1. Thus the left-hand side of the equation is congruent to 0, 1, or 2, and the right-hand side is congruent to 0 or 3. Thus the equality may be obtained only if x, y, and z are all even, and are thus not coprime. Thus the only solution is the trivial solution (0, 0, 0). This shows that there is no rational point on a circle of radius 3 , {\\displaystyle {\\sqrt {3}},} centered at the origin.", "title": "Homogeneous equations" }, { "paragraph_id": 32, "text": "More generally, the Hasse principle allows deciding whether a homogeneous Diophantine equation of degree two has an integer solution, and computing a solution if there exist.", "title": "Homogeneous equations" }, { "paragraph_id": 33, "text": "If a non-trivial integer solution is known, one may produce all other solutions in the following way.", "title": "Homogeneous equations" }, { "paragraph_id": 34, "text": "Let", "title": "Homogeneous equations" }, { "paragraph_id": 35, "text": "be a homogeneous Diophantine equation, where Q ( x 1 , … , x n ) {\\displaystyle Q(x_{1},\\ldots ,x_{n})} is a quadratic form (that is, a homogeneous polynomial of degree 2), with integer coefficients. The trivial solution is the solution where all x i {\\displaystyle x_{i}} are zero. 
If ( a 1 , … , a n ) {\\displaystyle (a_{1},\\ldots ,a_{n})} is a non-trivial integer solution of this equation, then ( a 1 , … , a n ) {\\displaystyle \\left(a_{1},\\ldots ,a_{n}\\right)} are the homogeneous coordinates of a rational point of the hypersurface defined by Q. Conversely, if ( p 1 q , … , p n q ) {\\textstyle \\left({\\frac {p_{1}}{q}},\\ldots ,{\\frac {p_{n}}{q}}\\right)} are homogeneous coordinates of a rational point of this hypersurface, where q , p 1 , … , p n {\\displaystyle q,p_{1},\\ldots ,p_{n}} are integers, then ( p 1 , … , p n ) {\\displaystyle \\left(p_{1},\\ldots ,p_{n}\\right)} is an integer solution of the Diophantine equation. Moreover, the integer solutions that define a given rational point are all sequences of the form", "title": "Homogeneous equations" }, { "paragraph_id": 36, "text": "where k is any integer, and d is the greatest common divisor of the p i . {\\displaystyle p_{i}.}", "title": "Homogeneous equations" }, { "paragraph_id": 37, "text": "It follows that solving the Diophantine equation Q ( x 1 , … , x n ) = 0 {\\displaystyle Q(x_{1},\\ldots ,x_{n})=0} is completely reduced to finding the rational points of the corresponding projective hypersurface.", "title": "Homogeneous equations" }, { "paragraph_id": 38, "text": "Let now A = ( a 1 , … , a n ) {\\displaystyle A=\\left(a_{1},\\ldots ,a_{n}\\right)} be an integer solution of the equation Q ( x 1 , … , x n ) = 0. {\\displaystyle Q(x_{1},\\ldots ,x_{n})=0.} As Q is a polynomial of degree two, a line passing through A crosses the hypersurface at a single other point, which is rational if and only if the line is rational (that is, if the line is defined by rational parameters). This allows parameterizing the hypersurface by the lines passing through A, and the rational points are the those that are obtained from rational lines, that is, those that correspond to rational values of the parameters.", "title": "Homogeneous equations" }, { "paragraph_id": 39, "text": "More precisely, one may proceed as follows.", "title": "Homogeneous equations" }, { "paragraph_id": 40, "text": "By permuting the indices, one may suppose, without loss of generality that a n ≠ 0. {\\displaystyle a_{n}\\neq 0.} Then one may pass to the affine case by considering the affine hypersurface defined by", "title": "Homogeneous equations" }, { "paragraph_id": 41, "text": "which has the rational point", "title": "Homogeneous equations" }, { "paragraph_id": 42, "text": "If this rational point is a singular point, that is if all partial derivatives are zero at R, all lines passing through R are contained in the hypersurface, and one has a cone. The change of variables", "title": "Homogeneous equations" }, { "paragraph_id": 43, "text": "does not change the rational points, and transforms q into a homogeneous polynomial in n − 1 variables. In this case, the problem may thus be solved by applying the method to an equation with fewer variables.", "title": "Homogeneous equations" }, { "paragraph_id": 44, "text": "If the polynomial q is a product of linear polynomials (possibly with non-rational coefficients), then it defines two hyperplanes. The intersection of these hyperplanes is a rational flat, and contains rational singular points. 
This case is thus a special instance of the preceding case.", "title": "Homogeneous equations" }, { "paragraph_id": 45, "text": "In the general case, consider the parametric equation of a line passing through R:", "title": "Homogeneous equations" }, { "paragraph_id": 46, "text": "Substituting this in q, one gets a polynomial of degree two in x1, that is zero for x1 = r1. It is thus divisible by x1 – r1. The quotient is linear in x1, and may be solved for expressing x1 as a quotient of two polynomials of degree at most two in t 2 , … , t n − 1 , {\\displaystyle t_{2},\\ldots ,t_{n-1},} with integer coefficients:", "title": "Homogeneous equations" }, { "paragraph_id": 47, "text": "Substituting this in the expressions for x 2 , … , x n − 1 , {\\displaystyle x_{2},\\ldots ,x_{n-1},} one gets, for i = 1, …, n − 1,", "title": "Homogeneous equations" }, { "paragraph_id": 48, "text": "where f 1 , … , f n {\\displaystyle f_{1},\\ldots ,f_{n}} are polynomials of degree at most two with integer coefficients.", "title": "Homogeneous equations" }, { "paragraph_id": 49, "text": "Then, one can return to the homogeneous case. Let, for i = 1, …, n,", "title": "Homogeneous equations" }, { "paragraph_id": 50, "text": "be the homogenization of f i . {\\displaystyle f_{i}.} These quadratic polynomials with integer coefficients form a parameterization of the projective hypersurface defined by Q:", "title": "Homogeneous equations" }, { "paragraph_id": 51, "text": "A point of the projective hypersurface defined by Q is rational if and only if it may be obtained from rational values of t 1 , … , t n − 1 . {\\displaystyle t_{1},\\ldots ,t_{n-1}.} As F 1 , … , F n {\\displaystyle F_{1},\\ldots ,F_{n}} are homogeneous polynomials, the point is not changed if all ti are multiplied by the same rational number. Thus, one may suppose that t 1 , … , t n − 1 {\\displaystyle t_{1},\\ldots ,t_{n-1}} are coprime integers. It follows that the integer solutions of the Diophantine equation are exactly the sequences ( x 1 , … , x n ) {\\displaystyle (x_{1},\\ldots ,x_{n})} where, for i = 1, ..., n,", "title": "Homogeneous equations" }, { "paragraph_id": 52, "text": "where k is an integer, t 1 , … , t n − 1 {\\displaystyle t_{1},\\ldots ,t_{n-1}} are coprime integers, and d is the greatest common divisor of the n integers F i ( t 1 , … , t n − 1 ) . {\\displaystyle F_{i}(t_{1},\\ldots ,t_{n-1}).}", "title": "Homogeneous equations" }, { "paragraph_id": 53, "text": "One could hope that the coprimality of the ti, could imply that d = 1. Unfortunately this is not the case, as shown in the next section.", "title": "Homogeneous equations" }, { "paragraph_id": 54, "text": "The equation", "title": "Homogeneous equations" }, { "paragraph_id": 55, "text": "is probably the first homogeneous Diophantine equation of degree two that has been studied. Its solutions are the Pythagorean triples. This is also the homogeneous equation of the unit circle. In this section, we show how the above method allows retrieving Euclid's formula for generating Pythagorean triples.", "title": "Homogeneous equations" }, { "paragraph_id": 56, "text": "For retrieving exactly Euclid's formula, we start from the solution (−1, 0, 1), corresponding to the point (−1, 0) of the unit circle. 
A line passing through this point may be parameterized by its slope:", "title": "Homogeneous equations" }, { "paragraph_id": 57, "text": "Putting this in the circle equation", "title": "Homogeneous equations" }, { "paragraph_id": 58, "text": "one gets", "title": "Homogeneous equations" }, { "paragraph_id": 59, "text": "Dividing by x + 1, results in", "title": "Homogeneous equations" }, { "paragraph_id": 60, "text": "which is easy to solve in x:", "title": "Homogeneous equations" }, { "paragraph_id": 61, "text": "It follows", "title": "Homogeneous equations" }, { "paragraph_id": 62, "text": "Homogenizing as described above one gets all solutions as", "title": "Homogeneous equations" }, { "paragraph_id": 63, "text": "where k is any integer, s and t are coprime integers, and d is the greatest common divisor of the three numerators. In fact, d = 2 if s and t are both odd, and d = 1 if one is odd and the other is even.", "title": "Homogeneous equations" }, { "paragraph_id": 64, "text": "The primitive triples are the solutions where k = 1 and s > t > 0.", "title": "Homogeneous equations" }, { "paragraph_id": 65, "text": "This description of the solutions differs slightly from Euclid's formula because Euclid's formula considers only the solutions such that x, y, and z are all positive, and does not distinguish between two triples that differ by the exchange of x and y,", "title": "Homogeneous equations" }, { "paragraph_id": 66, "text": "The questions asked in Diophantine analysis include:", "title": "Diophantine analysis" }, { "paragraph_id": 67, "text": "These traditional problems often lay unsolved for centuries, and mathematicians gradually came to understand their depth (in some cases), rather than treat them as puzzles.", "title": "Diophantine analysis" }, { "paragraph_id": 68, "text": "The given information is that a father's age is 1 less than twice that of his son, and that the digits AB making up the father's age are reversed in the son's age (i.e. BA). This leads to the equation 10A + B = 2(10B + A) − 1, thus 19B − 8A = 1. Inspection gives the result A = 7, B = 3, and thus AB equals 73 years and BA equals 37 years. One may easily show that there is not any other solution with A and B positive integers less than 10.", "title": "Diophantine analysis" }, { "paragraph_id": 69, "text": "Many well known puzzles in the field of recreational mathematics lead to diophantine equations. Examples include the cannonball problem, Archimedes's cattle problem and the monkey and the coconuts.", "title": "Diophantine analysis" }, { "paragraph_id": 70, "text": "In 1637, Pierre de Fermat scribbled on the margin of his copy of Arithmetica: \"It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general, any power higher than the second into two like powers.\" Stated in more modern language, \"The equation a + b = c has no solutions for any n higher than 2.\" Following this, he wrote: \"I have discovered a truly marvelous proof of this proposition, which this margin is too narrow to contain.\" Such a proof eluded mathematicians for centuries, however, and as such his statement became famous as Fermat's Last Theorem. It was not until 1995 that it was proven by the British mathematician Andrew Wiles.", "title": "Diophantine analysis" }, { "paragraph_id": 71, "text": "In 1657, Fermat attempted to solve the Diophantine equation 61x + 1 = y (solved by Brahmagupta over 1000 years earlier). 
The equation was eventually solved by Euler in the early 18th century, who also solved a number of other Diophantine equations. The smallest solution of this equation in positive integers is x = 226153980, y = 1766319049 (see Chakravala method).", "title": "Diophantine analysis" }, { "paragraph_id": 72, "text": "In 1900, David Hilbert proposed the solvability of all Diophantine equations as the tenth of his fundamental problems. In 1970, Yuri Matiyasevich solved it negatively, building on work of Julia Robinson, Martin Davis, and Hilary Putnam to prove that a general algorithm for solving all Diophantine equations cannot exist.", "title": "Diophantine analysis" }, { "paragraph_id": 73, "text": "Diophantine geometry, is the application of techniques from algebraic geometry which considers equations that also have a geometric meaning. The central idea of Diophantine geometry is that of a rational point, namely a solution to a polynomial equation or a system of polynomial equations, which is a vector in a prescribed field K, when K is not algebraically closed.", "title": "Diophantine analysis" }, { "paragraph_id": 74, "text": "The oldest general method for solving a Diophantine equation—or for proving that there is no solution— is the method of infinite descent, which was introduced by Pierre de Fermat. Another general method is the Hasse principle that uses modular arithmetic modulo all prime numbers for finding the solutions. Despite many improvements these methods cannot solve most Diophantine equations.", "title": "Diophantine analysis" }, { "paragraph_id": 75, "text": "The difficulty of solving Diophantine equations is illustrated by Hilbert's tenth problem, which was set in 1900 by David Hilbert; it was to find an algorithm to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. Matiyasevich's theorem implies that such an algorithm cannot exist.", "title": "Diophantine analysis" }, { "paragraph_id": 76, "text": "During the 20th century, a new approach has been deeply explored, consisting of using algebraic geometry. In fact, a Diophantine equation can be viewed as the equation of an hypersurface, and the solutions of the equation are the points of the hypersurface that have integer coordinates.", "title": "Diophantine analysis" }, { "paragraph_id": 77, "text": "This approach led eventually to the proof by Andrew Wiles in 1994 of Fermat's Last Theorem, stated without proof around 1637. This is another illustration of the difficulty of solving Diophantine equations.", "title": "Diophantine analysis" }, { "paragraph_id": 78, "text": "An example of an infinite diophantine equation is:", "title": "Diophantine analysis" }, { "paragraph_id": 79, "text": "which can be expressed as \"How many ways can a given integer n be written as the sum of a square plus twice a square plus thrice a square and so on?\" The number of ways this can be done for each n forms an integer sequence. Infinite Diophantine equations are related to theta functions and infinite dimensional lattices. This equation always has a solution for any positive n. Compare this to:", "title": "Diophantine analysis" }, { "paragraph_id": 80, "text": "which does not always have a solution for positive n.", "title": "Diophantine analysis" }, { "paragraph_id": 81, "text": "If a Diophantine equation has as an additional variable or variables occurring as exponents, it is an exponential Diophantine equation. 
Examples include the Ramanujan–Nagell equation, 2 − 7 = x, and the equation of the Fermat–Catalan conjecture and Beal's conjecture, a + b = c with inequality restrictions on the exponents. A general theory for such equations is not available; particular cases such as Catalan's conjecture have been tackled. However, the majority are solved via ad hoc methods such as Størmer's theorem or even trial and error.", "title": "Exponential Diophantine equations" } ]
In mathematics, a Diophantine equation is an equation, typically a polynomial equation in two or more unknowns with integer coefficients, for which only integer solutions are of interest. A linear Diophantine equation equates to a constant the sum of two or more monomials, each of degree one. An exponential Diophantine equation is one in which unknowns can appear in exponents. Diophantine problems have fewer equations than unknowns and involve finding integers that solve simultaneously all equations. As such systems of equations define algebraic curves, algebraic surfaces, or, more generally, algebraic sets, their study is a part of algebraic geometry that is called Diophantine geometry. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. While individual equations present a kind of puzzle and have been considered throughout history, the formulation of general theories of Diophantine equations was an achievement of the twentieth century.
2002-01-12T16:03:12Z
2023-12-18T07:25:53Z
[ "Template:Math", "Template:Anchor", "Template:Citation", "Template:Cite journal", "Template:Springer", "Template:Authority control", "Template:Use dmy dates", "Template:Mvar", "Template:Main article", "Template:Mdash", "Template:Short description", "Template:Cite conference", "Template:Cite book", "Template:Cite web", "Template:Ancient Greek mathematics", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Diophantine_equation
9,110
Diophantus
Diophantus of Alexandria (born c. AD 200 – c. 214; died c. AD 284 – c. 298) was a Greek mathematician who was the author of a series of books called Arithmetica, many of which deal with solving algebraic equations. Diophantus is considered "the father of algebra" by many mathematicians because of his contributions to number theory, mathematical equations, and the earliest known use of algebraic notation and symbolism in his works. In modern use, Diophantine equations are algebraic equations with integer coefficients, for which integer solutions are sought. Diophantine equations, Diophantine geometry, and Diophantine approximations are subareas of number theory that are named after him. Diophantus coined the term παρισότης (parisotes) to refer to an approximate equality. This term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima for functions and tangent lines to curves. Diophantus was the first Greek mathematician who recognized positive rational numbers as numbers, by allowing fractions for coefficients and solutions. Diophantus is known to have lived in Alexandria, Egypt, during the Roman era, from between AD 200 and 214 until 284 or 298. Diophantus has variously been described by historians as either Greek, or possibly Hellenized Egyptian, or Hellenized Babylonian. The last two of these identifications may stem from confusion with the 4th-century rhetorician Diophantus the Arab. Much of our knowledge of the life of Diophantus is derived from a 5th-century Greek anthology of number games and puzzles created by Metrodorus. One of the problems (sometimes called his epitaph) states: This puzzle implies that Diophantus' age x can be expressed as x = x/6 + x/12 + x/7 + 5 + x/2 + 4, which gives x a value of 84 years. However, the accuracy of the information cannot be confirmed. In popular culture, this puzzle was Puzzle No. 142 in Professor Layton and Pandora's Box, one of the hardest puzzles in the game, which needed to be unlocked by solving other puzzles first.
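Treating the epitaph as the equation above, exact rational arithmetic recovers the traditional answer (illustrative sketch, not from the article):

from fractions import Fraction as F

# x/6 boyhood, x/12 more until the beard, x/7 more until marriage, a son
# born 5 years later who lived x/2 years, and death 4 years after the son:
lived = F(1, 6) + F(1, 12) + F(1, 7) + F(1, 2)
x = F(5 + 4, 1) / (1 - lived)
print(x)   # 84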
Arithmetica is the major work of Diophantus and the most prominent work on algebra in Greek mathematics. It is a collection of problems giving numerical solutions of both determinate and indeterminate equations. Of the original thirteen books of which Arithmetica consisted, only six have survived, though there are some who believe that four Arabic books discovered in 1968 are also by Diophantus. Some Diophantine problems from Arithmetica have been found in Arabic sources. Notably, Diophantus never used general methods in his solutions. Hermann Hankel, a renowned German mathematician, made the following remark regarding Diophantus: "In our author (Diophantos) not the slightest trace of a general, comprehensive method is discernible; each problem calls for some special method which refuses to work even for the most closely related problems. For this reason it is difficult for the modern scholar to solve the 101st problem even after having studied 100 of Diophantos's solutions". Like many other Greek mathematical treatises, Diophantus was forgotten in Western Europe during the Dark Ages, since the study of ancient Greek, and literacy in general, had greatly declined. The portion of the Greek Arithmetica that survived, however, was, like all ancient Greek texts transmitted to the early modern world, copied by, and thus known to, medieval Byzantine scholars. Scholia on Diophantus by the Byzantine Greek scholar John Chortasmenos (1370–1437) are preserved together with a comprehensive commentary written by the earlier Greek scholar Maximos Planudes (1260–1305), who produced an edition of Diophantus within the library of the Chora Monastery in Byzantine Constantinople. In addition, some portion of the Arithmetica probably survived in the Arab tradition (see above). In 1463 German mathematician Regiomontanus wrote: Arithmetica was first translated from Greek into Latin by Bombelli in 1570, but the translation was never published. However, Bombelli borrowed many of the problems for his own book Algebra. The editio princeps of Arithmetica was published in 1575 by Xylander. The Latin translation of Arithmetica by Bachet in 1621 became the first Latin edition that was widely available. Pierre de Fermat owned a copy, studied it, and made notes in the margins. A later, 1895 Latin translation by Paul Tannery was said by Thomas L. Heath to be an improvement; Heath used it in the 1910 second edition of his English translation. The 1621 edition of Arithmetica by Bachet gained fame after Pierre de Fermat wrote his famous \"Last Theorem\" in the margins of his copy: Fermat's proof was never found, and the problem of finding a proof for the theorem went unsolved for centuries. A proof was finally found in 1994 by Andrew Wiles, after he had worked on it for seven years. It is believed that Fermat did not actually have the proof he claimed to have. Although the original copy in which Fermat wrote this is lost today, Fermat's son edited the next edition of Diophantus, published in 1670. Even though the text is otherwise inferior to the 1621 edition, Fermat's annotations—including the \"Last Theorem\"—were printed in this version. Fermat was not the first mathematician so moved to write in his own marginal notes to Diophantus; the Byzantine scholar John Chortasmenos (1370–1437) had written \"Thy soul, Diophantus, be with Satan because of the difficulty of your other theorems and particularly of the present theorem\" next to the same problem. Diophantus wrote several other books besides Arithmetica, but only a few of them have survived. Diophantus himself refers to a work which consists of a collection of lemmas called The Porisms (or Porismata), but this book is entirely lost. Although The Porisms is lost, we know three lemmas contained there, since Diophantus refers to them in the Arithmetica. One lemma states that the difference of the cubes of two rational numbers is equal to the sum of the cubes of two other rational numbers; i.e., given any a and b, with a > b, there exist c and d, all positive and rational, such that a³ − b³ = c³ + d³.
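Although Diophantus states the lemma for rational numbers, a small integer search (illustrative sketch) already exhibits an instance:

hits = [(a, b, c, d)
        for a in range(2, 20) for b in range(1, a)
        for c in range(1, 20) for d in range(c, 20)
        if a**3 - b**3 == c**3 + d**3]
print(hits[0])   # (6, 3, 4, 5): 216 - 27 = 64 + 125 = 189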
Diophantus' work created a foundation for work on algebra, and in fact much of advanced mathematics is based on algebra. How much he affected India is a matter of debate. Diophantus has been considered "the father of algebra" because of his contributions to number theory, mathematical notations, and the earliest known use of syncopated notation in his book series Arithmetica. However, this title is debated, since Al-Khwarizmi has also been called "the father of algebra"; nevertheless, both mathematicians were responsible for paving the way for algebra today. Today, Diophantine analysis is the area of study where integer (whole-number) solutions are sought for equations, and Diophantine equations are polynomial equations with integer coefficients to which only integer solutions are sought. It is usually rather difficult to tell whether a given Diophantine equation is solvable. Most of the problems in Arithmetica lead to quadratic equations. Diophantus looked at three different types of quadratic equations: ax^2 + bx = c, ax^2 = bx + c, and ax^2 + c = bx. The reason why there were three cases to Diophantus, while today we have only one case, is that he did not have any notion for zero and he avoided negative coefficients by considering the given numbers a, b, c to all be positive in each of the three cases above. Diophantus was always satisfied with a rational solution and did not require a whole number, which means he accepted fractions as solutions to his problems. Diophantus considered negative or irrational square root solutions "useless", "meaningless", and even "absurd". To give one specific example, he calls the equation 4 = 4x + 20 'absurd' because it would lead to a negative value for x. One solution was all he looked for in a quadratic equation. There is no evidence that suggests Diophantus even realized that there could be two solutions to a quadratic equation. He also considered simultaneous quadratic equations. Diophantus made important advances in mathematical notation, becoming the first person known to use algebraic notation and symbolism. Before him everyone wrote out equations completely. Diophantus introduced an algebraic symbolism that used an abridged notation for frequently occurring operations, and an abbreviation for the unknown and for the powers of the unknown. Mathematical historian Kurt Vogel states: "The symbolism that Diophantus introduced for the first time, and undoubtedly devised himself, provided a short and readily comprehensible means of expressing an equation... Since an abbreviation is also employed for the word 'equals', Diophantus took a fundamental step from verbal algebra towards symbolic algebra." Although Diophantus made important advances in symbolism, he still lacked the necessary notation to express more general methods. This caused his work to be more concerned with particular problems rather than general situations. Some of the limitations of Diophantus' notation are that he only had notation for one unknown and, when problems involved more than a single unknown, Diophantus was reduced to expressing "first unknown", "second unknown", etc. in words. He also lacked a symbol for a general number n. Where we would write (12 + 6n)/(n^2 − 3), Diophantus had to resort to constructions like: "... a sixfold number increased by twelve, which is divided by the difference by which the square of the number exceeds three". Algebra still had a long way to go before very general problems could be written down and solved succinctly.
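As a modern illustration of the three quadratic cases above and of Diophantus' rejection of negative or irrational roots, here is a small Python sketch; the function name and the sample coefficients are our own, chosen purely for illustration:

```python
from fractions import Fraction
from math import isqrt

def diophantus_roots(a, b, c):
    """Roots of a*x**2 + b*x + c = 0 (integer a, b, c with a != 0) that
    Diophantus would accept: positive and rational. Negative or irrational
    roots were 'absurd' to him."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    r = isqrt(disc)
    if r * r != disc:   # irrational square root: rejected as 'meaningless'
        return []
    roots = {Fraction(-b + r, 2 * a), Fraction(-b - r, 2 * a)}
    return sorted(x for x in roots if x > 0)

# Diophantus' three cases, each written with positive given numbers a, b, c:
print(diophantus_roots(1, 4, -21))   # ax^2 + bx = c:  x^2 + 4x = 21  ->  [3]
print(diophantus_roots(1, -4, -21))  # ax^2 = bx + c:  x^2 = 4x + 21  ->  [7]
print(diophantus_roots(2, -9, 4))    # ax^2 + c = bx:  2x^2 + 4 = 9x  ->  [1/2, 4]
```

The last call returns two positive roots, which underlines the point above: the algebra allows two solutions even though Diophantus looked for only one.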
[ { "paragraph_id": 0, "text": "Diophantus of Alexandria (born c. AD 200 – c. 214; died c. AD 284 – c. 298) was a Greek mathematician, who was the author of a series of books called Arithmetica, many of which deal with solving algebraic equations.", "title": "" }, { "paragraph_id": 1, "text": "Diophantus is considered \"the father of algebra\" by many mathematicians because of his contributions to number theory, mathematical equations, and the earliest known use of algebraic notation and symbolism in his works. In modern use, Diophantine equations are algebraic equations with integer coefficients, for which integer solutions are sought.", "title": "" }, { "paragraph_id": 2, "text": "Diophantine equations, Diophantine geometry, and Diophantine approximations are subareas of number theory that are named after him. Diophantus coined the term παρισότης (parisotes) to refer to an approximate equality. This term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima for functions and tangent lines to curves. Diophantus was the first Greek mathematician who recognized positive rational numbers as numbers, by allowing fractions for coefficients and solutions.", "title": "" }, { "paragraph_id": 3, "text": "Diophantus is known to have lived in Alexandria, Egypt, during the Roman era, between AD 200 and 214 to 284 or 298. Diophantus has variously been described by historians as either Greek, or possibly Hellenized Egyptian, or Hellenized Babylonian, The last two of these identifications may stem from confusion with the 4th-century rhetorician Diophantus the Arab. Much of our knowledge of the life of Diophantus is derived from a 5th-century Greek anthology of number games and puzzles created by Metrodorus. One of the problems (sometimes called his epitaph) states:", "title": "Biography" }, { "paragraph_id": 4, "text": "This puzzle implies that Diophantus' age x can be expressed as", "title": "Biography" }, { "paragraph_id": 5, "text": "which gives x a value of 84 years. However, the accuracy of the information cannot be confirmed.", "title": "Biography" }, { "paragraph_id": 6, "text": "In popular culture, this puzzle was the Puzzle No.142 in Professor Layton and Pandora's Box as one of the hardest solving puzzles in the game, which needed to be unlocked by solving other puzzles first.", "title": "Biography" }, { "paragraph_id": 7, "text": "Arithmetica is the major work of Diophantus and the most prominent work on algebra in Greek mathematics. It is a collection of problems giving numerical solutions of both determinate and indeterminate equations. Of the original thirteen books of which Arithmetica consisted only six have survived, though there are some who believe that four Arabic books discovered in 1968 are also by Diophantus. Some Diophantine problems from Arithmetica have been found in Arabic sources.", "title": "Arithmetica" }, { "paragraph_id": 8, "text": "It should be mentioned here that Diophantus never used general methods in his solutions. Hermann Hankel, renowned German mathematician made the following remark regarding Diophantus.", "title": "Arithmetica" }, { "paragraph_id": 9, "text": "\"Our author (Diophantos) not the slightest trace of a general, comprehensive method is discernible; each problem calls for some special method which refuses to work even for the most closely related problems. 
For this reason it is difficult for the modern scholar to solve the 101st problem even after having studied 100 of Diophantos's solutions\".", "title": "Arithmetica" }, { "paragraph_id": 10, "text": "Like many other Greek mathematical treatises, Diophantus was forgotten in Western Europe during the Dark Ages, since the study of ancient Greek, and literacy in general, had greatly declined. The portion of the Greek Arithmetica that survived, however, was, like all ancient Greek texts transmitted to the early modern world, copied by, and thus known to, medieval Byzantine scholars. Scholia on Diophantus by the Byzantine Greek scholar John Chortasmenos (1370–1437) are preserved together with a comprehensive commentary written by the earlier Greek scholar Maximos Planudes (1260 – 1305), who produced an edition of Diophantus within the library of the Chora Monastery in Byzantine Constantinople. In addition, some portion of the Arithmetica probably survived in the Arab tradition (see above). In 1463 German mathematician Regiomontanus wrote:", "title": "Arithmetica" }, { "paragraph_id": 11, "text": "Arithmetica was first translated from Greek into Latin by Bombelli in 1570, but the translation was never published. However, Bombelli borrowed many of the problems for his own book Algebra. The editio princeps of Arithmetica was published in 1575 by Xylander. The Latin translation of Arithmetica by Bachet in 1621 became the first Latin edition that was widely available. Pierre de Fermat owned a copy, studied it and made notes in the margins. A later 1895 Latin translation by Paul Tannery was said to be an improvement by Thomas L. Heath, who used it in the 1910 second edition of his English translation.", "title": "Arithmetica" }, { "paragraph_id": 12, "text": "The 1621 edition of Arithmetica by Bachet gained fame after Pierre de Fermat wrote his famous \"Last Theorem\" in the margins of his copy:", "title": "Arithmetica" }, { "paragraph_id": 13, "text": "Fermat's proof was never found, and the problem of finding a proof for the theorem went unsolved for centuries. A proof was finally found in 1994 by Andrew Wiles after working on it for seven years. It is believed that Fermat did not actually have the proof he claimed to have. Although the original copy in which Fermat wrote this is lost today, Fermat's son edited the next edition of Diophantus, published in 1670. Even though the text is otherwise inferior to the 1621 edition, Fermat's annotations—including the \"Last Theorem\"—were printed in this version.", "title": "Arithmetica" }, { "paragraph_id": 14, "text": "Fermat was not the first mathematician so moved to write in his own marginal notes to Diophantus; the Byzantine scholar John Chortasmenos (1370–1437) had written \"Thy soul, Diophantus, be with Satan because of the difficulty of your other theorems and particularly of the present theorem\" next to the same problem.", "title": "Arithmetica" }, { "paragraph_id": 15, "text": "Diophantus wrote several other books besides Arithmetica, but only a few of them have survived.", "title": "Other works" }, { "paragraph_id": 16, "text": "Diophantus himself refers to a work which consists of a collection of lemmas called The Porisms (or Porismata), but this book is entirely lost.", "title": "Other works" }, { "paragraph_id": 17, "text": "Although The Porisms is lost, we know three lemmas contained there, since Diophantus refers to them in the Arithmetica. 
One lemma states that the difference of the cubes of two rational numbers is equal to the sum of the cubes of two other rational numbers, i.e. given any a and b, with a > b, there exist c and d, all positive and rational, such that", "title": "Other works" }, { "paragraph_id": 18, "text": "Diophantus is also known to have written on polygonal numbers, a topic of great interest to Pythagoras and Pythagoreans. Fragments of a book dealing with polygonal numbers are extant.", "title": "Other works" }, { "paragraph_id": 19, "text": "A book called Preliminaries to the Geometric Elements has been traditionally attributed to Hero of Alexandria. It has been studied recently by Wilbur Knorr, who suggested that the attribution to Hero is incorrect, and that the true author is Diophantus.", "title": "Other works" }, { "paragraph_id": 20, "text": "Diophantus' work has had a large influence in history. Editions of Arithmetica exerted a profound influence on the development of algebra in Europe in the late sixteenth and through the 17th and 18th centuries. Diophantus and his works also influenced Arab mathematics and were of great fame among Arab mathematicians. Diophantus' work created a foundation for work on algebra and in fact much of advanced mathematics is based on algebra. How much he affected India is a matter of debate.", "title": "Influence" }, { "paragraph_id": 21, "text": "Diophantus has been considered \"the father of algebra\" because of his contributions to number theory, mathematical notations and the earliest known use of syncopated notation in his book series Arithmetica. However this is usually debated, because Al-Khwarizmi was also given the title as \"the father of algebra\", nevertheless both mathematicians were responsible for paving the way for algebra today.", "title": "Influence" }, { "paragraph_id": 22, "text": "Today, Diophantine analysis is the area of study where integer (whole-number) solutions are sought for equations, and Diophantine equations are polynomial equations with integer coefficients to which only integer solutions are sought. It is usually rather difficult to tell whether a given Diophantine equation is solvable. Most of the problems in Arithmetica lead to quadratic equations. Diophantus looked at 3 different types of quadratic equations: ax + bx = c, ax = bx + c, and ax + c = bx. The reason why there were three cases to Diophantus, while today we have only one case, is that he did not have any notion for zero and he avoided negative coefficients by considering the given numbers a, b, c to all be positive in each of the three cases above. Diophantus was always satisfied with a rational solution and did not require a whole number which means he accepted fractions as solutions to his problems. Diophantus considered negative or irrational square root solutions \"useless\", \"meaningless\", and even \"absurd\". To give one specific example, he calls the equation 4 = 4x + 20 'absurd' because it would lead to a negative value for x. One solution was all he looked for in a quadratic equation. There is no evidence that suggests Diophantus even realized that there could be two solutions to a quadratic equation. He also considered simultaneous quadratic equations.", "title": "Diophantine analysis" }, { "paragraph_id": 23, "text": "Diophantus made important advances in mathematical notation, becoming the first person known to use algebraic notation and symbolism. Before him everyone wrote out equations completely. 
Diophantus introduced an algebraic symbolism that used an abridged notation for frequently occurring operations, and an abbreviation for the unknown and for the powers of the unknown. Mathematical historian Kurt Vogel states:", "title": "Mathematical notation" }, { "paragraph_id": 24, "text": "\"The symbolism that Diophantus introduced for the first time, and undoubtedly devised himself, provided a short and readily comprehensible means of expressing an equation... Since an abbreviation is also employed for the word 'equals', Diophantus took a fundamental step from verbal algebra towards symbolic algebra.\"", "title": "Mathematical notation" }, { "paragraph_id": 25, "text": "Although Diophantus made important advances in symbolism, he still lacked the necessary notation to express more general methods. This caused his work to be more concerned with particular problems rather than general situations. Some of the limitations of Diophantus' notation are that he only had notation for one unknown and, when problems involved more than a single unknown, Diophantus was reduced to expressing \"first unknown\", \"second unknown\", etc. in words. He also lacked a symbol for a general number n. Where we would write 12 + 6n/n − 3, Diophantus has to resort to constructions like: \"... a sixfold number increased by twelve, which is divided by the difference by which the square of the number exceeds three\". Algebra still had a long way to go before very general problems could be written down and solved succinctly.", "title": "Mathematical notation" } ]
Diophantus of Alexandria was a Greek mathematician, who was the author of a series of books called Arithmetica, many of which deal with solving algebraic equations. Diophantus is considered "the father of algebra" by many mathematicians because of his contributions to number theory, mathematical equations, and the earliest known use of algebraic notation and symbolism in his works. In modern use, Diophantine equations are algebraic equations with integer coefficients, for which integer solutions are sought. Diophantine equations, Diophantine geometry, and Diophantine approximations are subareas of number theory that are named after him. Diophantus coined the term παρισότης (parisotes) to refer to an approximate equality. This term was rendered as adaequalitas in Latin, and became the technique of adequality developed by Pierre de Fermat to find maxima for functions and tangent lines to curves. Diophantus was the first Greek mathematician who recognized positive rational numbers as numbers, by allowing fractions for coefficients and solutions.
2002-01-12T16:23:07Z
2023-11-21T23:57:50Z
[ "Template:See also", "Template:Cite web", "Template:Isbn", "Template:Doi", "Template:Circa", "Template:Math", "Template:Cite book", "Template:Commons category inline", "Template:Wikiquote", "Template:Reflist", "Template:ISBN", "Template:Quote", "Template:Cite encyclopedia", "Template:MacTutor Biography", "Template:Authority control", "Template:Short description", "Template:For multi", "Template:Lang-grc", "Template:Citation", "Template:EB1911 poster", "Template:Ancient Greek mathematics" ]
https://en.wikipedia.org/wiki/Diophantus
9,111
Dong
Dong or DONG may refer to:
[ { "paragraph_id": 0, "text": "Dong or DONG may refer to:", "title": "" } ]
Dong or DONG may refer to:
2002-01-12T17:29:56Z
2023-09-09T18:11:51Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Dong
9,118
Duke Kahanamoku
Duke Paoa Kahinu Mokoe Hulikohola Kahanamoku (August 24, 1890 – January 22, 1968) was a Hawaiian competition swimmer who popularized the sport of surfing. A Native Hawaiian, he was born to a minor noble family less than three years before the overthrow of the Hawaiian Kingdom. He lived to see the territory's admission as a state, and became a United States citizen. He was a five-time Olympic medalist in swimming, winning medals in 1912, 1920 and 1924. Kahanamoku joined fraternal organizations: he was a Scottish Rite Freemason in the Honolulu lodge, and a Shriner. He worked as a law enforcement officer, an actor, a beach volleyball player, and a businessman. According to Kahanamoku, he was born in Honolulu at Haleʻākala, the home of Bernice Pauahi Bishop, which was later converted into the Arlington Hotel. He was born into a family of Native Hawaiians headed by Duke Halapu Kahanamoku and Julia Paʻakonia Lonokahikina Paoa. He had five brothers and three sisters. His brothers were Sargent, Samuel, David, William and Louis, all of whom participated in competitive aquatic sports. His sisters were Bernice, Kapiolani and Maria. "Duke" was not a title or a nickname, but a given name. He was named after his father, Duke Halapu Kahanamoku, who was christened by Bernice Pauahi Bishop in honor of Prince Alfred, Duke of Edinburgh, who was visiting Hawaii at the time. His father was a policeman. His mother Julia Paʻakonia Lonokahikina Paoa was a deeply religious woman with a strong sense of family ancestry. His parents were from prominent Hawaiian ohana (families). The Kahanamoku and the Paoa ohana were considered to be lower-ranking nobles, who were in service to the aliʻi nui, or royalty. His paternal grandfather was Kahanamoku, and his paternal grandmother was Kapiolani Kaoeha (sometimes spelled Kahoea), a descendant of Alapainui. They were kahu, retainers and trusted advisors of the Kamehamehas, to whom they were related. His maternal grandparents, Paoa, son of Paoa Hoolae and Hiikaalani, and Mele Uliama, were also of aliʻi descent. In 1893, his family moved to Kālia, Waikiki (near the present site of Hilton Hawaiian Village), to be closer to his mother's parents and family. Kahanamoku grew up with his siblings and 31 Paoa cousins. He attended the Waikiki Grammar School, Kaahumanu School, and the Kamehameha Schools, although he never graduated because he had to quit to help support the family. Growing up on the outskirts of Waikiki, Kahanamoku spent much of his youth at the beach, where he developed his surfing and swimming skills. In his youth, Kahanamoku preferred a traditional surf board, which he called his "papa nui", constructed after the fashion of ancient Hawaiian olo boards. Made from the wood of a koa tree, it was 16 feet (4.9 m) long and weighed 114 pounds (52 kg). The board was without a skeg, which had yet to be invented. In his later surfing career, he would often use smaller boards but always preferred those made of wood. Kahanamoku was also a powerful swimmer. On August 11, 1911, swimming in the salt water of Honolulu Harbor, Kahanamoku was timed at 55.4 seconds in the 100 yards (91 m) freestyle, beating the existing world record by 4.6 seconds. He also broke the record in the 220 yd (200 m) and equaled it in the 50 yd (46 m). But the Amateur Athletic Union (AAU), in disbelief, would not recognize these feats until many years later. The AAU initially claimed that the judges must have been using alarm clocks rather than stopwatches and later claimed that ocean currents had aided Kahanamoku.
Kahanamoku easily qualified for the U.S. Olympic swimming team in 1912. At the 1912 Summer Olympics in Stockholm, he won a gold medal in the 100-meter freestyle, and a silver medal with the second-place U.S. team in the men's 4×200-meter freestyle relay. During the 1920 Olympics in Antwerp, Kahanamoku won gold medals in both the 100 meters (bettering fellow Hawaiian Pua Kealoha) and in the relay. He finished the 100 meters with a silver medal during the 1924 Olympics in Paris, with the gold going to Johnny Weissmuller and the bronze to Kahanamoku's brother, Samuel. By then aged 34, Kahanamoku won no more Olympic medals, but he served as an alternate for the U.S. water polo team at the 1932 Summer Olympics. Between Olympic competitions, and after retiring from the Olympics, Kahanamoku traveled internationally to give swimming exhibitions. It was during this period that he popularized the sport of surfing, previously known only in Hawaii, by incorporating surfing demonstrations into these tours as well. He first attracted people to surfing in mainland America in 1912, while in Southern California. His surfing exhibition at Sydney, Australia's Freshwater Beach on December 24, 1914, is widely regarded as a seminal event in the development of surfing in Australia. The board that Kahanamoku built from a piece of pine from a local hardware store is retained by the Freshwater Surf Life Saving Club. A statue of Kahanamoku was erected in his honor on the northern headland of Freshwater Beach, New South Wales. During his time living in Southern California, Kahanamoku performed in Hollywood as a background actor and a character actor in several films. He made connections in this way with people who could further publicize the sport of surfing. Kahanamoku was involved with the Los Angeles Athletic Club, acting as a lifeguard and competing on both its swimming and water polo teams. While living in Newport Beach, California, on June 14, 1925, Kahanamoku rescued eight men from a fishing vessel that capsized in heavy surf while it was attempting to enter the city's harbor. Using his surfboard, Kahanamoku made repeated trips from shore to the capsized ship and helped rescue several people. Two other surfers saved four more fishermen, while five succumbed to the seas before they could be rescued. At the time, the Newport Beach police chief called Kahanamoku's efforts "the most superhuman surfboard rescue act the world has ever seen". It also led lifeguards across the US to begin using surfboards as standard equipment for water rescues. He was the first person to be inducted into both the Swimming Hall of Fame and the Surfing Hall of Fame. The Duke Kahanamoku Invitational Surfing Championships in Hawaii, the first major professional surfing contest ever held in the huge surf on the North Shore of Oahu, was named in his honor. He is a member of the U.S. Olympic Hall of Fame. Kahanamoku was later elected to serve as the Sheriff of Honolulu, Hawaii, from 1932 to 1961, completing 13 consecutive terms. During World War II, he also served as a military police officer for the United States; Hawai'i was not yet a state and was administered by the United States as a territory under the Hawaiian Organic Act. In the postwar period, he also appeared in a number of television programs and films, such as Mister Roberts (1955). He was well liked throughout the Hollywood community. Kahanamoku became a friend and surfing companion of heiress Doris Duke, who built a home (now a museum) on Oahu named Shangri-la. Kahanamoku gave private surfing lessons to Franklin D. Roosevelt Jr.
and John Aspinwall Roosevelt, the children of Franklin D. Roosevelt. In 1946, Kahanamoku was the pro forma defendant in the landmark Supreme Court case Duncan v. Kahanamoku. While Kahanamoku was a military police officer during World War II, he had arrested Duncan, a civilian shipfitter, for public intoxication. At the time, Hawaii, not yet a state, was being administered by the United States under the Hawaiian Organic Act, which effectively instituted martial law on the islands. After Duncan was tried by a military tribunal, he appealed to the Supreme Court. Ruling after the fact, the court held that the trial of a civilian by military tribunal was, in this case, unconstitutional. On August 2, 1940, Kahanamoku married dance instructor Nadine Alexander, who had relocated to Hawaii from Cleveland, Ohio, after she had been hired to teach at the Royal Hawaiian Hotel. Duke was 50 years old; Nadine was 35. He was initiated, passed, and raised to the degree of Master Mason in Hawaiian Lodge No. 21, and was also a Noble (member) of the Shriners fraternal organization. He was a Republican. Kahanamoku died of a heart attack on January 22, 1968, at age 77. For his burial at sea, a long motorcade of mourners, accompanied by a 30-man police escort, traveled in procession across town to Waikiki Beach. Reverend Abraham Akaka, the pastor of Kawaiahao Church, performed the service. A group of beach boys sang Hawaiian songs, including "Aloha Oe", and Kahanamoku's ashes were scattered into the ocean. In 1994, a statue of Kahanamoku by Barry Donohoo was inaugurated in Freshwater, NSW, Australia. It is the showpiece of the Australian Surfers Walk of Fame. On February 28, 2015, a monument featuring a replica of Kahanamoku's surfboard was unveiled at New Brighton beach, Christchurch, New Zealand, in honor of the 100th anniversary of Kahanamoku's visit to New Brighton. A statue of Kahanamoku was installed in Huntington Beach, California. A nearby restaurant is named for him and is close to the Huntington Beach pier. The City of Huntington Beach identifies with the legacy of surfing, and a museum dedicated to that sport is located there. In April 2022, NSW Heritage announced that Kahanamoku would be included in the first batch of Blue Plaques to be issued, recognizing his contribution to recreation and surfing. A sculpture of Kahanamoku flanked by a male knee paddler and a female prone paddler, commemorating the Catalina Classic Paddleboard Race, was installed on the Manhattan Beach Pier in 2023. Hawaii music promoter Kimo Wilder McVay capitalized on Kahanamoku's popularity by naming his Waikiki showroom "Duke Kahanamoku's" at the International Market Place and giving Kahanamoku a financial interest in the showroom in exchange for the use of his name. It was a major Waikiki showroom in the 1960s and is remembered as the home of Don Ho & The Aliis from 1964 through 1969. The showroom continued to be known as Duke Kahanamoku's until Hawaii showman Jack Cione bought it in the mid-1970s and renamed it Le Boom Boom. The Duke Kahanamoku Aquatic Complex (DKAC) serves as the home of the University of Hawai‘i’s swimming and diving and women’s water polo teams. The facility, located on the University’s lower campus, includes a 50-meter training pool and a separate 25-yard competition and diving pool. The long course pool is four feet deep at both ends and seven feet deep in the middle, with an average depth of six feet.
Kahanamoku's name is also used by Duke's Canoe Club & Barefoot Bar, as of 2016 known as Duke's Waikiki, a beachfront bar and restaurant in the Outrigger Waikiki on the Beach Hotel. A chain of restaurants in California, Florida, and Hawaii called Duke's is also named after him. On August 24, 2002, the 112th anniversary of Kahanamoku's birth, the U.S. Postal Service issued a first-class commemorative stamp with Duke's picture on it. The First Day Ceremony was held at the Hilton Hawaiian Village in Waikiki and was attended by thousands. At this ceremony, attendees could attach the Duke stamp to an envelope and get it canceled with a First Day of Issue postmark. These first day covers are highly collectible. On August 24, 2015, a Google Doodle honored the 125th anniversary of Duke Kahanamoku's birth.
[ { "paragraph_id": 0, "text": "Duke Paoa Kahinu Mokoe Hulikohola Kahanamoku (August 24, 1890 – January 22, 1968) was a Hawaiian competition swimmer who popularized the sport of surfing. A Native Hawaiian, he was born to a minor noble family less than three years before the overthrow of the Hawaiian Kingdom. He lived to see the territory's admission as a state, and became a United States citizen. He was a five-time Olympic medalist in swimming, winning medals in 1912, 1920 and 1924.", "title": "" }, { "paragraph_id": 1, "text": "Kahanamoku joined fraternal organizations: he was a Scottish Rite Freemason in the Honolulu lodge, and a Shriner. He worked as a law enforcement officer, an actor, a beach volleyball player, and a businessman.", "title": "" }, { "paragraph_id": 2, "text": "According to Kahanamoku, he was born in Honolulu at Haleʻākala, the home of Bernice Pauahi Bishop, which was later converted into the Arlington Hotel.", "title": "Family background" }, { "paragraph_id": 3, "text": "He was born into a family of Native Hawaiians headed by Duke Halapu Kahanamoku and Julia Paʻakonia Lonokahikina Paoa. He had five brothers, and three sisters. His brothers were Sargent, Samuel, David, William and Louis, all of whom participated in competitive aquatic sports. His sisters were Bernice, Kapiolani and Maria.", "title": "Family background" }, { "paragraph_id": 4, "text": "\"Duke\" was not a title or a nickname, but a given name. He was named after his father, Duke Halapu Kahanamoku, who was christened by Bernice Pauahi Bishop in honor of Prince Alfred, Duke of Edinburgh, who was visiting Hawaii at the time. His father was a policeman. His mother Julia Paʻakonia Lonokahikina Paoa was a deeply religious woman with a strong sense of family ancestry.", "title": "Family background" }, { "paragraph_id": 5, "text": "His parents were from prominent Hawaiian ohana (families). The Kahanamoku and the Paoa ohana were considered to be lower-ranking nobles, who were in service to the aliʻi nui, or royalty. His paternal grandfather was Kahanamoku and his grandmother, Kapiolani Kaoeha (sometimes spelled Kahoea), a descendant of Alapainui. They were kahu, retainers and trusted advisors of the Kamehamehas, to whom they were related. His maternal grandparents Paoa, son of Paoa Hoolae and Hiikaalani, and Mele Uliama, were also of aliʻi descent.", "title": "Family background" }, { "paragraph_id": 6, "text": "In 1893, his family moved to Kālia, Waikiki (near the present site of Hilton Hawaiian Village), to be closer to his mother's parents and family. Kahanamoku grew up with his siblings and 31 Paoa cousins. He attended the Waikiki Grammar School, Kaahumanu School, and the Kamehameha Schools, although he never graduated because he had to quit to help support the family.", "title": "Family background" }, { "paragraph_id": 7, "text": "Growing up on the outskirts of Waikiki, Kahanamoku spent much of his youth at the beach, where he developed his surfing and swimming skills. In his youth, Kahanamoku preferred a traditional surf board, which he called his \"papa nui\", constructed after the fashion of ancient Hawaiian olo boards. Made from the wood of a koa tree, it was 16 feet (4.9 m) long and weighed 114 pounds (52 kg). The board was without a skeg, which had yet to be invented. In his later surfing career, he would often use smaller boards but always preferred those made of wood.", "title": "Early years" }, { "paragraph_id": 8, "text": "Kahanamoku was also a powerful swimmer. 
On August 11, 1911, Kahanamoku was timed at 55.4 seconds in the 100 yards (91 m) freestyle, beating the existing world record by 4.6 seconds, in the salt water of Honolulu Harbor. He also broke the record in the 220 yd (200 m) and equaled it in the 50 yd (46 m). But the Amateur Athletic Union (AAU), in disbelief, would not recognize these feats until many years later. The AAU initially claimed that the judges must have been using alarm clocks rather than stopwatches and later claimed that ocean currents aided Kahanamoku.", "title": "Early years" }, { "paragraph_id": 9, "text": "Kahanamoku easily qualified for the U.S. Olympic swimming team in 1912. At the 1912 Summer Olympics in Stockholm, he won a gold medal in the 100-meter freestyle, and a silver medal with the second-place U.S. team in the men's 4×200-meter freestyle relay.", "title": "Career" }, { "paragraph_id": 10, "text": "During the 1920 Olympics in Antwerp, Kahanamoku won gold medals in both the 100 meters (bettering fellow Hawaiian Pua Kealoha) and in the relay. He finished the 100 meters with a silver medal during the 1924 Olympics in Paris, with the gold going to Johnny Weissmuller and the bronze to Kahanamoku's brother, Samuel. By then age 34, Kahanamoku won no more Olympic medals. But he served as an alternate for the U.S. water polo team at the 1932 Summer Olympics.", "title": "Career" }, { "paragraph_id": 11, "text": "Between Olympic competitions, and after retiring from the Olympics, Kahanamoku traveled internationally to give swimming exhibitions. It was during this period that he popularized the sport of surfing, previously known only in Hawaii, by incorporating surfing exhibitions into his touring exhibitions as well. He attracted people to surfing in mainland America first in 1912 while in Southern California.", "title": "Career" }, { "paragraph_id": 12, "text": "His surfing exhibition at Sydney, Australia's Freshwater Beach on December 24, 1914, is widely regarded as a seminal event in the development of surfing in Australia. The board that Kahanamoku built from a piece of pine from a local hardware store is retained by the Freshwater Surf Life Saving Club. A statue of Kahanamoku was erected in his honor on the Northern headland of Freshwater Lake, New South Wales.", "title": "Career" }, { "paragraph_id": 13, "text": "During his time living in Southern California, Kahanamoku performed in Hollywood as a background actor and a character actor in several films. He made connections in this way with people who could further publicize the sport of surfing. Kahanamoku was involved with the Los Angeles Athletic Club, acting as a lifeguard and competing in both swimming and water polo teams.", "title": "Career" }, { "paragraph_id": 14, "text": "While living in Newport Beach, California, on June 14, 1925, Kahanamoku rescued eight men from a fishing vessel that capsized in heavy surf while it was attempting to enter the city's harbor. Using his surfboard, Kahanamoku made repeated trips from shore to the capsized ship, and helped rescue several people. Two other surfers saved four more fishermen, while five succumbed to the seas before they could be rescued. 
At the time the Newport Beach police chief called Kahanamoku's efforts \"The most superhuman surfboard rescue act the world has ever seen.\" It also led to lifeguards across the US to begin using surfboards as standard equipment for water rescues.", "title": "Career" }, { "paragraph_id": 15, "text": "He was the first person to be inducted into both the Swimming Hall of Fame and the Surfing Hall of Fame. The Duke Kahanamoku Invitational Surfing Championships in Hawaii, the first major professional surfing contest event ever held in the huge surf on the North Shore of Oahu, was named in his honor. He is a member of the U.S. Olympic Hall of Fame.", "title": "Career" }, { "paragraph_id": 16, "text": "Later Kahanamoku was elected to serve as the Sheriff of Honolulu, Hawaii from 1932 to 1961, completing 13 consecutive terms. During World War II, he also served as a military police officer for the United States; Hawai'i was not yet a state and was administered.", "title": "Career" }, { "paragraph_id": 17, "text": "In the postwar period, he also appeared in a number of television programs and films, such as Mister Roberts (1955). He was well-liked throughout the Hollywood community.", "title": "Career" }, { "paragraph_id": 18, "text": "Kahanamoku became a friend and surfing companion of heiress Doris Duke. She built a home (now a museum) on Oahu named Shangri-la. Kahanamoku gave private surfing lessons to Franklin D. Roosevelt Jr. and John Aspinwall Roosevelt, the children of Franklin D. Roosevelt.", "title": "Career" }, { "paragraph_id": 19, "text": "In 1946, Kahanamoku was the pro forma defendant in the landmark Supreme Court case Duncan v. Kahanamoku. While Kahanamoku was a military police officer during World War II, he arrested Duncan, a civilian shipfitter, for public intoxication.", "title": "Duncan v. Kahanamoku" }, { "paragraph_id": 20, "text": "At the time, Hawaii, not yet a state, was being administered by the United States under the Hawaiian Organic Act. This effectively instituted martial law on the island. After Duncan was tried by a military tribunal, he appealed to the Supreme Court. In a post hoc ruling, the court ruled that trial by military tribunal for the civilian was, in this case, unconstitutional.", "title": "Duncan v. Kahanamoku" }, { "paragraph_id": 21, "text": "On August 2, 1940, Kahanamoku married dance instructor Nadine Alexander, who had relocated to Hawaii from Cleveland, Ohio, after she had been hired to teach at the Royal Hawaiian Hotel. Duke was 50 years old, Nadine was 35.", "title": "Personal life" }, { "paragraph_id": 22, "text": "He was initiated, passed and raised to the degree of Master Mason in Hawaiian Lodge Masonic Lodge No 21 and was also a Noble (member) of the Shriners fraternal organization. He was a Republican.", "title": "Personal life" }, { "paragraph_id": 23, "text": "Kahanamoku died of a heart attack on January 22, 1968, at age 77. For his burial at sea, a long motorcade of mourners, accompanied by a 30-man police escort, traveled in procession across town to Waikiki Beach. Reverend Abraham Akaka, the pastor of Kawaiahao Church, performed the service. A group of beach boys sang Hawaiian songs, including \"Aloha Oe\", and Kahanamoku's ashes were scattered into the ocean.", "title": "Death and legacy" }, { "paragraph_id": 24, "text": "In 1994 a statue of Kahanamoku by Barry Donohoo was inaugurated in Freshwater, NSW, Australia. 
It is the showpiece of the Australian Surfers Walk of Fame.", "title": "Death and legacy" }, { "paragraph_id": 25, "text": "On February 28, 2015, a monument featuring a replica of Kahanamoku's surfboard was unveiled at New Brighton beach, Christchurch, New Zealand in honor of the 100th anniversary of Kahanamoku's visit to New Brighton.", "title": "Death and legacy" }, { "paragraph_id": 26, "text": "A statue of Kahanamoku was installed in Huntington Beach, California. A nearby restaurant is named for him and is close to Huntington Beach pier. The City of Huntington Beach identifies with the legacy of surfing, and a museum dedicated to that sport is located here.", "title": "Death and legacy" }, { "paragraph_id": 27, "text": "In April 2022 NSW Heritage announced that Kahanamoku would be included in the first batch of Blue Plaques to be issued, to recognize his contribution to recreation and surfing.", "title": "Death and legacy" }, { "paragraph_id": 28, "text": "A sculpture of Kahanamoku flanked by a male knee paddler and a female prone paddler commemorating the Catalina Classic Paddleboard Race was installed on the Manhattan Beach Pier in 2023.", "title": "Death and legacy" }, { "paragraph_id": 29, "text": "Hawaii music promoter Kimo Wilder McVay capitalized on Kahanamoku's popularity by naming his Waikiki showroom \"Duke Kahanamoku's\" at the International Market Place and giving Kahanamoku a financial interest in the showroom in exchange for the use of his name. It was a major Waikiki showroom in the 1960s and is remembered as the home of Don Ho & The Aliis from 1964 through 1969. The showroom continued to be known as Duke Kahanamoku's until Hawaii showman Jack Cione bought it in the mid-1970s and renamed it Le Boom Boom.", "title": "Death and legacy" }, { "paragraph_id": 30, "text": "The Duke Kahanamoku Aquatic Complex (DKAC) serves as the home for the University of Hawai‘i’s swimming and diving and women’s water polo teams. The facility, located on the University’s lower campus, includes a 50-meter training pool and a separate 25-yard competition and diving pool. The long course pool is four feet at both ends, seven feet in the middle, and an average depth of six feet.", "title": "Death and legacy" }, { "paragraph_id": 31, "text": "Kahanamoku's name is also used by Duke's Canoe Club & Barefoot Bar, as of 2016 known as Duke's Waikiki, a beachfront bar and restaurant in the Outrigger Waikiki on the Beach Hotel. There is a chain of restaurants named after him in California, Florida and Hawaii called Duke's.", "title": "Death and legacy" }, { "paragraph_id": 32, "text": "On August 24, 2002, the 112th anniversary of Kahanamoku's birth, the U.S. Postal Service issued a first-class commemorative stamp with Duke's picture on it. The First Day Ceremony was held at the Hilton Hawaiian Village in Waikiki and was attended by thousands. At this ceremony, attendees could attach the Duke stamp to an envelope and get it canceled with a First Day of Issue postmark. These first day covers are very collectible.", "title": "Death and legacy" }, { "paragraph_id": 33, "text": "On August 24, 2015, a Google Doodle honored the 125th anniversary of Duke Kahanamoku's birthday.", "title": "Death and legacy" } ]
Duke Paoa Kahinu Mokoe Hulikohola Kahanamoku was a Hawaiian competition swimmer who popularized the sport of surfing. A Native Hawaiian, he was born to a minor noble family less than three years before the overthrow of the Hawaiian Kingdom. He lived to see the territory's admission as a state, and became a United States citizen. He was a five-time Olympic medalist in swimming, winning medals in 1912, 1920 and 1924. Kahanamoku joined fraternal organizations: he was a Scottish Rite Freemason in the Honolulu lodge, and a Shriner. He worked as a law enforcement officer, an actor, a beach volleyball player, and a businessman.
2002-01-13T20:03:28Z
2023-12-18T20:57:06Z
[ "Template:Portal", "Template:Cite journal", "Template:Internet Archive author", "Template:Authority control", "Template:Okina", "Template:As of", "Template:United States men's water polo squad 1920 Summer Olympics", "Template:Olympics.com profile", "Template:Discogs artist", "Template:Short description", "Template:Convert", "Template:Cite news", "Template:Commons category", "Template:Rp", "Template:Cite web", "Template:Cite book", "Template:Footer USA Swimming 1912 Summer Olympics", "Template:Footer Olympic Champions 100 m Freestyle Men", "Template:Olympic champions in men's 4 × 200 m freestyle relay", "Template:Use mdy dates", "Template:Infobox swimmer", "Template:Team USA Hall of Fame", "Template:Reflist", "Template:Webarchive", "Template:Olympedia", "Template:Footer USA Swimming 1924 Summer Olympics", "Template:Footer USA Swimming 1920 Summer Olympics", "Template:Further", "Template:IMDb name" ]
https://en.wikipedia.org/wiki/Duke_Kahanamoku
9,119
Distinguished Service Medal (U.S. Army)
The Distinguished Service Medal (DSM) is a military decoration of the United States Army that is presented to soldiers who have distinguished themselves by exceptionally meritorious service to the government in a duty of great responsibility. The performance must be such as to merit recognition for service that is clearly exceptional. The exceptional performance of normal duty will not alone justify an award of this decoration. The Army's Distinguished Service Medal is equivalent to the Naval Service's Navy Distinguished Service Medal, the Air and Space Forces' Distinguished Service Medal, and the Coast Guard Distinguished Service Medal. Prior to the creation of the Air Force's Distinguished Service Medal in 1960, United States Air Force airmen were awarded the Army's Distinguished Service Medal. The Distinguished Service Medal is awarded to any person who, while serving in any capacity with the United States Army, has distinguished themselves by exceptionally meritorious service to the Government in a duty of great responsibility. The performance must be such as to merit recognition for service which is clearly exceptional. Exceptional performance of normal duty will not alone justify an award of this decoration. For service not related to actual war, the term "duty of a great responsibility" applies to a narrower range of positions than in time of war and requires evidence of a conspicuously significant achievement. However, justification of the award may accrue by virtue of exceptionally meritorious service in a succession of high positions of great importance. Awards may be made to persons other than members of the Armed Forces of the United States for wartime services only, and only then under exceptional circumstances with the express approval of the president in each case. The Distinguished Service Medal was authorized by Presidential Order dated January 2, 1918, and confirmed by Congress on July 9, 1918. It was announced by War Department General Order No. 6 of January 12, 1918, with the following information concerning the medal: "A bronze medal of appropriate design and a ribbon to be worn in lieu thereof, to be awarded by the President to any person who, while serving in any capacity with the Army shall hereafter distinguish himself or herself, or who, since April 6, 1917, has distinguished himself or herself by exceptionally meritorious service to the Government in a duty of great responsibility in time of war or in connection with military operations against an armed enemy of the United States." The Act of Congress on July 9, 1918, recognized the need for different types and degrees of heroism and meritorious service and included such provisions for award criteria. The current statutory authorization for the Distinguished Service Medal is Title 10, United States Code, Section 3743. More than 2,000 awards were made during World War I, and by the time the United States entered World War II, approximately 2,800 awards had been made. From July 1, 1941, to June 6, 1969, when the Department of the Army stopped publishing awards of the DSM in Department of the Army General Orders, over 2,800 further awards were made. Prior to World War II, the DSM was the only decoration for non-combat service in the U.S. Army. As a result, before World War II the DSM was awarded to a wider range of recipients than during and after that war. Awards of the DSM to officers below the rank of brigadier general were fairly common during World War I but became rare once the Legion of Merit was established in 1942.
Until the first award of the Air Force Distinguished Service Medal in 1965, United States Air Force personnel received this award as well, as was the case with several other Department of the Army decorations until the Department of the Air Force fully established its own system of decorations. Because the Army Distinguished Service Medal is principally awarded to general officers, a list of notable recipients would include nearly every general, and some admirals, since 1918, many of whom received multiple awards, as well as a few civilians and sergeants major prominent for their contributions to national defense. General Martin Dempsey, former chairman of the Joint Chiefs of Staff, holds the record for receiving the greatest number of awards of the Army Distinguished Service Medal, at six. He also received three awards of the Defense Distinguished Service Medal as well as one award each of the Navy Distinguished Service Medal, the Air Force Distinguished Service Medal, and the Coast Guard Distinguished Service Medal, for a total of twelve Distinguished Service Medals. Generals of the Army Douglas MacArthur and Dwight Eisenhower are tied with five awards each of the Army Distinguished Service Medal. They also each received one award of the Navy Distinguished Service Medal, for a total of six DSMs each. Four-star General Lucius D. Clay received three Army DSM awards for service that included Commanding General, U.S. Army Forces (European Theater) and Military Governor of Germany. During his tenure, Gen. Clay met his greatest challenge, the Soviet blockade of Berlin, imposed in June 1948, by launching the Berlin Airlift, which supplied the city's residents through the harsh winter of 1948–1949. He is also a recipient of the Legion of Merit. General Norman Schwarzkopf received two awards of the Army DSM and one award each of the Defense DSM, Navy DSM, Air Force DSM, and Coast Guard DSM, for a total of six DSMs. General Lloyd Austin received four awards of the Army DSM and five awards of the Defense DSM, for a total of nine DSMs. Among notable recipients below flag rank are: X-1 test pilot Chuck Yeager and X-15 test pilot Robert M. White, who both received the DSM as U.S. Air Force majors; Air Force Major Rudolf Anderson, the U-2 pilot shot down during the Cuban Missile Crisis; director Frank Capra, decorated in 1945 as an Army colonel; actor James Stewart, decorated in 1945 as an Army Air Forces colonel (later an Air Force brigadier general); Colonel Wendell Fertig, who led Filipino guerrillas behind Japanese lines; Colonel (later Major General) John K. Singlaub, who led partisan forces in the Korean War; Major Maude C. Davison, who led the "Angels of Bataan and Corregidor" during their imprisonment by the Japanese; and Colonel William S. Taylor, Program Manager of the Multiple Launch Rocket System. Among notable civilian recipients are Harry L. Hopkins, Robert S. McNamara, and Henry L. Stimson. Notable American and foreign recipients include: Note – includes Army Air Service, Army Air Corps and Army Air Forces
[ { "paragraph_id": 0, "text": "The Distinguished Service Medal (DSM) is a military decoration of the United States Army that is presented to soldiers who have distinguished themselves by exceptionally meritorious service to the government in a duty of great responsibility. The performance must be such as to merit recognition for service that is clearly exceptional. The exceptional performance of normal duty will not alone justify an award of this decoration.", "title": "" }, { "paragraph_id": 1, "text": "The Army's Distinguished Service Medal is equivalent to the Naval Service's Navy Distinguished Service Medal, Air and Space Forces' Distinguished Service Medal, and the Coast Guard Distinguished Service Medal. Prior to the creation of the Air Force's Distinguished Service Medal in 1960, United States Air Force airmen were awarded the Army's Distinguished Service Medal.", "title": "" }, { "paragraph_id": 2, "text": "The Distinguished Service Medal is awarded to any person who, while serving in any capacity with the United States Army, has distinguished themselves by exceptionally meritorious service to the Government in a duty of great responsibility.", "title": "Criteria" }, { "paragraph_id": 3, "text": "The performance must be such as to merit recognition for service which is clearly exceptional. Exceptional performance of normal duty will not alone justify an award of this decoration. For service not related to actual war, the term \"duty of a great responsibility\" applies to a narrower range of positions than in time of war and requires evidence of a conspicuously significant achievement. However, justification of the award may accrue by virtue of exceptionally meritorious service in a succession of high positions of great importance. Awards may be made to persons other than members of the Armed Forces of the United States for wartime services only, and only then under exceptional circumstances with the express approval of the president in each case.", "title": "Criteria" }, { "paragraph_id": 4, "text": "The Distinguished Service Medal was authorized by Presidential Order dated January 2, 1918, and confirmed by Congress on July 9, 1918. It was announced by War Department General Order No. 6, 1918-01-12, with the following information concerning the medal: \"A bronze medal of appropriate design and a ribbon to be worn in lieu thereof, to be awarded by the President to any person who, while serving in any capacity with the Army shall hereafter distinguish himself or herself, or who, since 04-06-1917, has distinguished himself or herself by exceptionally meritorious service to the Government in a duty of great responsibility in time of war or in connection with military operations against an armed enemy of the United States.\" The Act of Congress on July 9, 1918, recognized the need for different types and degrees of heroism and meritorious service and included such provisions for award criteria. The current statutory authorization for the Distinguished Service Medal is Title 10, United States Code, Section 3743.", "title": "History of the Distinguished Service Medal" }, { "paragraph_id": 5, "text": "More than 2,000 awards were made during World War I, and by the time the United States entered World War II, approximately 2,800 awards had been made. 
From July 1, 1941, to June 6, 1969, when the Department of the Army stopped publishing awards of the DSM in Department of the Army General Orders, over 2,800 further awards were made.", "title": "Recipients" }, { "paragraph_id": 6, "text": "Prior to World War II the DSM was the only decoration for non-combat service in the U.S. Army. As a result, before World War II the DSM was awarded to a wider range of recipients than during and after World War II. During World War I awards of the DSM to officers below the rank of brigadier general were fairly common but became rare once the Legion of Merit was established in 1942.", "title": "Recipients" }, { "paragraph_id": 7, "text": "Until the first award of the Air Force Distinguished Service Medal in 1965, United States Air Force personnel received this award as well, as was the case with several other Department of the Army decorations until the Department of the Air Force fully established its own system of decorations.", "title": "Recipients" }, { "paragraph_id": 8, "text": "Because the Army Distinguished Service Medal is principally awarded to general officers, a list of notable recipients would include nearly every general, and some admirals, since 1918, many of whom received multiple awards, as well as a few civilians and sergeants major prominent for their contributions to national defense.", "title": "Recipients" }, { "paragraph_id": 9, "text": "General Martin Dempsey, former chairman of the Joint Chiefs of Staff, holds the record for receiving the greatest number of awards of the Army Distinguished Service Medal, at six. He also received three awards of the Defense Distinguished Service Medal as well as one award each of the Navy Distinguished Service Medal, the Air Force Distinguished Service Medal, and the Coast Guard Distinguished Service Medal, for a total of twelve Distinguished Service Medals.", "title": "Recipients" }, { "paragraph_id": 10, "text": "Generals of the Army Douglas MacArthur and Dwight Eisenhower are tied with five awards each received of the Army Distinguished Service Medal. They also each received one award of the Navy Distinguished Service Medal, for a total of six DSMs each.", "title": "Recipients" }, { "paragraph_id": 11, "text": "General Lucius D. Clay (Four Star) received three Army DSM awards for his service that included Commanding General, U.S. Army Forces (European Theater) and Military Governor of Germany. During his tenure, Gen. Clay solved his greatest challenge: the Soviet Blockade of Berlin, which was imposed in June 1948. Gen. Clay triggered the Berlin Airlift, which served the city residents during the harsh winter of 1948–1949. He is also a recipient of the Legion of Merit.", "title": "Recipients" }, { "paragraph_id": 12, "text": "General Norman Schwarzkopf received two awards of the Army DSM and one award each of the Defense DSM, Navy DSM, the Air Force DSM and the Coast Guard DSM, for a total of six DSMs.", "title": "Recipients" }, { "paragraph_id": 13, "text": "General Lloyd Austin received four awards of the Army DSM and five awards of the Defense DSM for a total of nine DSMs.", "title": "Recipients" }, { "paragraph_id": 14, "text": "Among notable recipients below flag rank are: X-1 test pilot Chuck Yeager and X-15 test pilot Robert M. White, who both received the DSM as U.S. 
Air Force majors; Air Force Major Rudolf Anderson, the U-2 pilot shot down during the Cuban Missile Crisis; director Frank Capra, decorated in 1945 as an Army colonel; actor James Stewart, decorated in 1945 as an Army Air Forces colonel (later Air Force brigadier general); Colonel Wendell Fertig, who led Filipino guerrillas behind Japanese lines; Colonel (later Major General) John K. Singlaub, who led partisan forces in the Korean War; Major Maude C. Davison, who led the \"Angels of Bataan and Corregidor\" during their imprisonment by the Japanese; and Colonel William S. Taylor, program manager of the Multiple Launch Rocket System. Among notable civilian recipients are Harry L. Hopkins, Robert S. McNamara and Henry L. Stimson.", "title": "Recipients" }, { "paragraph_id": 15, "text": "Notable American and foreign recipients include:", "title": "Recipients" }, { "paragraph_id": 16, "text": "Note – includes Army Air Service, Army Air Corps and Army Air Forces", "title": "Recipients" } ]
The Distinguished Service Medal (DSM) is a military decoration of the United States Army that is presented to soldiers who have distinguished themselves by exceptionally meritorious service to the government in a duty of great responsibility. The performance must be such as to merit recognition for service that is clearly exceptional. The exceptional performance of normal duty will not alone justify an award of this decoration. The Army's Distinguished Service Medal is equivalent to the Naval Service's Navy Distinguished Service Medal, Air and Space Forces' Distinguished Service Medal, and the Coast Guard Distinguished Service Medal. Prior to the creation of the Air Force's Distinguished Service Medal in 1960, United States Air Force airmen were awarded the Army's Distinguished Service Medal.
2002-02-25T15:51:15Z
2023-12-18T19:38:33Z
[ "Template:Commons category", "Template:Short description", "Template:Infobox award", "Template:Incomplete list", "Template:Reflist", "Template:Cite web", "Template:UnitedStatesCode", "Template:Cite news", "Template:Webarchive", "Template:USArmy decorations", "Template:Convert", "Template:See also", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Distinguished_Service_Medal_(U.S._Army)
9,120
Defense Distinguished Service Medal
The Defense Distinguished Service Medal is a military decoration of the United States Department of Defense, which is presented to United States Armed Forces service members for exceptionally distinguished performance of duty contributing to the national security or defense of the United States. The medal was created on July 9, 1970, by President Richard Nixon in Executive Order 11545. President Nixon awarded the first medal, on the day the Executive Order was signed, to General Earle Wheeler, who was retiring from the US Army after serving as Chief of Staff of the United States Army and then Chairman of the Joint Chiefs of Staff. It is equivalent to the United States Department of Homeland Security's Homeland Security Distinguished Service Medal. The Defense Distinguished Service Medal is the United States Department of Defense's highest non-combat-related military award and it is the highest joint service decoration. The Defense Distinguished Service Medal is awarded only for service while assigned to a joint activity. Normally, such responsibilities deserving of the Defense Distinguished Service Medal are held by the most senior officers, such as the Chairman and Vice Chairman of the Joint Chiefs of Staff, the chiefs and vice chiefs of the military services, the commanders and deputy commanders of the Combatant Commands, the Director of the Joint Staff, and others whose duties bring them frequently into direct contact with the Secretary of Defense, the Deputy Secretary of Defense, and other senior government officials. In addition, the medal may also be awarded to other service members whose direct and individual contributions to national security or national defense are recognized as being so exceptional in scope and value as to be equivalent to contributions normally associated with positions encompassing broader responsibilities. This decoration takes precedence over the Distinguished Service Medals of the services and is not to be awarded to any individual for a period of service for which an Army, Navy, Air Force or Coast Guard Distinguished Service Medal is awarded. The medal is gold in color and on the obverse it features a medium blue enameled pentagon (point up). Superimposed on this is an American bald eagle with wings outspread, facing left, grasping three crossed arrows in its talons, with a shield of the United States on its breast. The pentagon and eagle are enclosed within a gold pierced circle consisting, in the upper half, of 13 five-pointed stars and, in the lower half, of a wreath of laurel on the left and olive on the right. At the top is a suspender of five graduated gold rays. The reverse of the medal has the inscription "For Distinguished Service" at the top in raised letters, and within the pentagon the inscription "FROM THE SECRETARY OF DEFENSE TO", all in raised letters. Additional awards of the Defense Distinguished Service Medal are denoted by oak leaf clusters. - John Zirkelbach (two awards)
[ { "paragraph_id": 0, "text": "The Defense Distinguished Service Medal is a military decoration of the United States Department of Defense, which is presented to United States Armed Forces service members for exceptionally distinguished performance of duty contributing to the national security or defense of the United States. The medal was created on July 9, 1970, by President Richard Nixon in Executive Order 11545. President Nixon awarded the first medal, on the day the Executive Order was signed, to General Earle Wheeler, who was retiring from the US Army after serving as Chief of Staff of the United States Army and then Chairman of the Joint Chiefs of Staff.", "title": "" }, { "paragraph_id": 1, "text": "It is equivalent to the United States Department of Homeland Security's Homeland Security Distinguished Service Medal.", "title": "" }, { "paragraph_id": 2, "text": "The Defense Distinguished Service Medal is the United States Department of Defense's highest non-combat related military award and it is the highest joint service decoration. The Defense Distinguished Service Medal is awarded only while assigned to a joint activity. Normally, such responsibilities deserving of the Defense Distinguished Service Medal are held by the most senior officers such as the Chairman and Vice Chairman of the Joint Chiefs of Staff, the chiefs and vice chiefs of the military services, and commanders and deputy commanders of the Combatant Commands, the Director of the Joint Staff, and others whose duties bring them frequently into direct contact with the Secretary of Defense, the Deputy Secretary of Defense, and other senior government officials. In addition, the medal may also be awarded to other service members whose direct and individual contributions to national security or national defense are recognized as being so exceptional in scope and value as to be equivalent to contributions normally associated with positions encompassing broader responsibilities.", "title": "Criteria" }, { "paragraph_id": 3, "text": "This decoration takes precedence over the Distinguished Service Medals of the services and is not to be awarded to any individual for a period of service for which an Army, Navy, Air Force or Coast Guard Distinguished Service Medal is awarded.", "title": "Criteria" }, { "paragraph_id": 4, "text": "The medal is gold in color and on the obverse it features a medium blue enameled pentagon (point up). Superimposed on this is an American bald eagle with wings outspread facing left grasping three crossed arrows in its talons and on its breast is a shield of the United States. The pentagon and eagle are enclosed within a gold pieced circle consisting, in the upper half of 13 five-pointed stars and in the lower half, a wreath of laurel on the left and olive on the right. At the top is a suspender of five graduated gold rays. The reverse of the medal has the inscription \"For Distinguished Service\" at the top in raised letters, and within the pentagon the inscription \"FROM THE SECRETARY OF DEFENSE TO\", all in raised letters.", "title": "Appearance" }, { "paragraph_id": 5, "text": "Additional awards of the Defense Distinguished Service Medal are denoted by oak leaf clusters.", "title": "Appearance" }, { "paragraph_id": 6, "text": "- John Zirkelbach (two awards)", "title": "Notable recipients" } ]
The Defense Distinguished Service Medal is a military decoration of the United States Department of Defense, which is presented to United States Armed Forces service members for exceptionally distinguished performance of duty contributing to the national security or defense of the United States. The medal was created on July 9, 1970, by President Richard Nixon in Executive Order 11545. President Nixon awarded the first medal, on the day the Executive Order was signed, to General Earle Wheeler, who was retiring from the US Army after serving as Chief of Staff of the United States Army and then Chairman of the Joint Chiefs of Staff. It is equivalent to the United States Department of Homeland Security's Homeland Security Distinguished Service Medal.
2002-01-13T21:03:29Z
2023-11-02T09:51:19Z
[ "Template:Commons category inline", "Template:US interservice decorations", "Template:Authority control", "Template:Div col", "Template:Reflist", "Template:Webarchive", "Template:Div col end", "Template:Cite web", "Template:Cite news", "Template:Short description", "Template:Infobox military award", "Template:ExecutiveOrder" ]
https://en.wikipedia.org/wiki/Defense_Distinguished_Service_Medal
9,121
Dacoity
Dacoity is a term used for "banditry" in the Indian subcontinent. The spelling is the anglicised version of the Hindi word डाकू (daaku); "dacoit" /dəˈkɔɪt/ is a colloquial Indian English word with this meaning and it appears in the Glossary of Colloquial Anglo-Indian Words and Phrases (1903). Banditry is criminal activity involving robbery by groups of armed bandits. The East India Company established the Thuggee and Dacoity Department in 1830, and the Thuggee and Dacoity Suppression Acts, 1836–1848 were enacted in British India under East India Company rule. Areas with ravines or forests, such as Chambal and Chilapata Forests, were once known for dacoits. The word "dacoity", the anglicized version of the Hindi word ḍakaitī (historically spelled dakaitee). Hindi डकैती comes from ḍākū (historically spelled dakoo, Hindi: डाकू, meaning "armed robber"). The term dacoit (Hindi: डकैत ḍakait) means "a bandit" according to the OED ("A member of a class of robbers in India and Burma, who plunder in armed bands"). The dacoity have had a large impact in the Bhind and Morena of Chambal regions in Madhya Pradesh, Rajasthan, Haryana and Uttar Pradesh in north-central India. The exact reasons for the emergence of dacoity in the Chambal valley have been disputed. Most explanations have simply suggested feudal exploitation as the cause that provoked many people of this region to take to arms. The area was also underdeveloped and poor, so that banditry posed great economic incentives. However, the fact that many gangs operating in this valley were composed of higher castes and wealthy people appears to suggest that feudalism may only be a partial explanation of dacoity in Chambal valley (Bhaduri, 1972; Khan, 1981; Jatar, 1980; Katare, 1972). Furthermore, traditional honour codes and blood feuds would drive some into criminality. In Chambal, India, organized crime controlled much of the countryside from the time of the British Raj up to the early 2000s, with the police offering high rewards for the most notorious bandit chiefs. The criminals regularly targeted local businesses, though they preferred to kidnap wealthy people, and demand ransom from their relatives - cutting off fingers, noses, and ears to pressure them into paying high sums. Many dacoity also posed as social bandits toward the local poor, paying medical bills and funding weddings. One ex-dacoit described his own criminal past by claiming that "I was a rebel. I fought injustice." Following intense anti-banditry campaigns by the Indian Police, highway robbery was almost completely eradicated in the early 2000s. Nevertheless, Chambal is still popularly believed to be unsafe and bandit-infested by many Indians. One police officer noted that the fading of the dacoity was also due to social changes, as few young people were any longer willing to endure the harsh life as a highway robber in the countryside. Instead, they prefer to join crime groups in the city, where life is easier. While thugs and dacoits operating in northern and central India are more popularly known and referenced in books, films and academic journal, a significant number of accounts also come from Bengal. Writing about the dacoits of Bengal, the colonial official CH Keighly mentions the “great difference between gangs of hereditary dacoits or thugs in other parts of India and the dacoits of Bengal”. It is notable that unlike the rest of India, dacoits in Bengal did not come from a particular social class, caste, or creed. 
Dacoit gangs in Nadia and Hooghly were particularly known for their ritualistic practices before the night of dacoity. Before setting off for their mission, the members would assemble to perform “kalipuja” led by the Sirdar (leader). The dacoits would form a straight line, and a pot of liquor, torches, and the weapons to be used in the dacoity were laid down in a clear space. The Sirdar would then dip his finger in oil and touch the forehead of all the dacoits, making them promise never to confess. Even during the raid, when dacoits opened chests and discovered a good fortune, they would shout “Kali, Jai Kali”. Dacoity was highly prevalent in 19th-century West Bengal. One of the gangs, led by a charismatic leader named Bhabani Pathak, was known for its loyalty to its leader. After the British captured Bhabani, the inner workings and social factors that led to the formation of this gang were revealed. Leaders such as Bhabani were known as Sirdars and had a symbiotic relationship with their followers. Among other benefits, a Sirdar would lend money to members and provide them protection. This allowed for the formation of a special bond between the Sirdar and his followers, which meant that desertion and leaving the gang were virtually unheard of. In Burdwan, dacoities were heavily planned and considerable thought went into their seamless execution. Sirdars in Burdwan operated by employing several informants who kept them updated about prospective targets. When a target was finalized, the Sirdar and relevant gang members were constantly made aware of his whereabouts. The informants were always on the lookout for wealthy businessmen and kept a close watch on those who exchanged bank notes of considerable value or received a shipment of merchandise that they would store in their houses. The term is also applied, according to the OED, to "pirates who formerly infested the Ganges between Calcutta and Burhampore". Dacoits existed in Burma as well – Rudyard Kipling's fictional Private Mulvaney hunted Burmese dacoits in "The Taking of Lungtungpen". Sax Rohmer's criminal mastermind Dr. Fu Manchu also employed Burmese dacoits as his henchmen. Indian police forces use "Known Dacoit" (K.D.) as a label to classify criminals. Introduced in 1836, the suppression acts brought about several legislative measures including the establishment of special courts, authorization for the use of rewards for informants, and the power to arrest suspects. The suppression acts marked the beginning of active British intervention in policing and law enforcement in Indian society. These acts were known to be authoritarian and further deepened the uneven power dynamic between the British and the Indians. The British often saw Indians as primitive, violent, and unruly, and this often acted as a justification for colonization and further consolidated their “civilization mission” pretext. The practice of thuggee and dacoity was seen in a similar Eurocentric light, without understanding the local context. An orientalist view of such activities was portrayed in the rest of the world to account for several repressive legislative measures that the British took. Under this punitive approach, several innocent individuals fell prey to false suspicion and were incriminated. Notable dacoits include: In Madhya Pradesh, women belonging to a village defence group have been issued firearm permits to fend off dacoity.
The Chief Minister of the state, Shivraj Singh Chouhan, recognised the role the women had played in defending their villages without guns. He stated that he wanted to enable these women to better defend both themselves and their villages, and issued the gun permits to advance this goal. As the dacoits flourished through the 1940s–1970s, they were the subject of various Hindi films made during this era, leading to the emergence of the dacoit film genre in the Hindi film industry. The genre began with Mehboob Khan's Aurat (1940), which he remade as Mother India (1957). Mother India received an Academy Award nomination, and defined the dacoit film genre, along with Dilip Kumar's Gunga Jumna (1961). Other popular films in this genre included Raj Kapoor’s Jis Desh Mein Ganga Behti Hai (1961) and Moni Bhattacharjee's Mujhe Jeene Do (1963). Pakistani actor Akmal Khan had two dacoit films, Malangi (1965) and Imam Din Gohavia (1967). Other films in this genre included Khote Sikkay (1973), as well as Mera Gaon Mera Desh (1971) and Kuchhe Dhaage (1973), both by Raj Khosla. The most famous dacoit film is Sholay (1975), written by Salim–Javed, and starring Dharmendra, Amitabh Bachchan, and Amjad Khan as the dacoit character Gabbar Singh. It was a masala film that combined the dacoit film conventions of Mother India and Gunga Jumna with those of Spaghetti Westerns, spawning the "Dacoit Western" genre, also known as the "Curry Western" genre. The film also borrowed elements from Akira Kurosawa's Seven Samurai. Sholay became a classic in the genre, and its success led to a surge of films in this genre, including Ganga Ki Saugandh (1978), once again starring Amitabh Bachchan and Amjad Khan. An internationally acclaimed example of the genre is Bandit Queen (1994). The Tamil movie Theeran Adhigaaram Ondru (2017), starring Karthi, deals elaborately with bandits. The film depicts real dacoity incidents which took place in Tamil Nadu between 1995 and 2005. Director Vinoth conducted two years of research on bandits to develop the script. A related genre of crime films is the Mumbai underworld film. The Bengali novel Devi Chowdhurani was written by Bankim Chandra Chatterjee in 1867. A Hindi novel named Painstth Lakh ki Dacoity (1977) was written by Surender Mohan Pathak; it was translated as The 65 Lakh Heist. Dacoits armed with pistols and swords appear in Age of Empires III: Asian Dynasties. They frequently appeared in the French-language Bob Morane series of novels by Henri Vernes, principally as the main thugs or assassins of the hero's recurring villain, Mr. Ming, and in English as the agents of Sax Rohmer’s Fu Manchu.
[ { "paragraph_id": 0, "text": "Dacoity is a term used for \"banditry\" in the Indian subcontinent. The spelling is the anglicised version of the Hindi word डाकू (daaku); \"dacoit\" /dəˈkɔɪt/ is a colloquial Indian English word with this meaning and it appears in the Glossary of Colloquial Anglo-Indian Words and Phrases (1903). Banditry is criminal activity involving robbery by groups of armed bandits. The East India Company established the Thuggee and Dacoity Department in 1830, and the Thuggee and Dacoity Suppression Acts, 1836–1848 were enacted in British India under East India Company rule. Areas with ravines or forests, such as Chambal and Chilapata Forests, were once known for dacoits.", "title": "" }, { "paragraph_id": 1, "text": "The word \"dacoity\", the anglicized version of the Hindi word ḍakaitī (historically spelled dakaitee). Hindi डकैती comes from ḍākū (historically spelled dakoo, Hindi: डाकू, meaning \"armed robber\").", "title": "Etymology" }, { "paragraph_id": 2, "text": "The term dacoit (Hindi: डकैत ḍakait) means \"a bandit\" according to the OED (\"A member of a class of robbers in India and Burma, who plunder in armed bands\").", "title": "Etymology" }, { "paragraph_id": 3, "text": "The dacoity have had a large impact in the Bhind and Morena of Chambal regions in Madhya Pradesh, Rajasthan, Haryana and Uttar Pradesh in north-central India. The exact reasons for the emergence of dacoity in the Chambal valley have been disputed. Most explanations have simply suggested feudal exploitation as the cause that provoked many people of this region to take to arms. The area was also underdeveloped and poor, so that banditry posed great economic incentives. However, the fact that many gangs operating in this valley were composed of higher castes and wealthy people appears to suggest that feudalism may only be a partial explanation of dacoity in Chambal valley (Bhaduri, 1972; Khan, 1981; Jatar, 1980; Katare, 1972). Furthermore, traditional honour codes and blood feuds would drive some into criminality.", "title": "History" }, { "paragraph_id": 4, "text": "In Chambal, India, organized crime controlled much of the countryside from the time of the British Raj up to the early 2000s, with the police offering high rewards for the most notorious bandit chiefs. The criminals regularly targeted local businesses, though they preferred to kidnap wealthy people, and demand ransom from their relatives - cutting off fingers, noses, and ears to pressure them into paying high sums. Many dacoity also posed as social bandits toward the local poor, paying medical bills and funding weddings. One ex-dacoit described his own criminal past by claiming that \"I was a rebel. I fought injustice.\" Following intense anti-banditry campaigns by the Indian Police, highway robbery was almost completely eradicated in the early 2000s. Nevertheless, Chambal is still popularly believed to be unsafe and bandit-infested by many Indians. One police officer noted that the fading of the dacoity was also due to social changes, as few young people were any longer willing to endure the harsh life as a highway robber in the countryside. Instead, they prefer to join crime groups in the city, where life is easier.", "title": "History" }, { "paragraph_id": 5, "text": "While thugs and dacoits operating in northern and central India are more popularly known and referenced in books, films and academic journal, a significant number of accounts also come from Bengal. 
Writing about the dacoits of Bengal, the colonial official CH Keighly mentions the “great difference between gangs of hereditary dacoits or thugs in other parts of India and the dacoits of Bengal”. It is notable that unlike the rest of India, dacoits in Bengal did not come from a particular social class, caste, or creed.", "title": "History" }, { "paragraph_id": 6, "text": "Dacoit gangs in Nadia and Hooghly were particularly known for their ritualistic practices before the night of dacoity. Before setting off for their mission, the members would assemble to perform “kalipuja” led by the Sirdar (leader). The dacoits would form a straight line, and a pot of liquor, torches, and the weapons to be used in the dacoity were laid down in a clear space. The Sirdar would then dip his finger in oil and touch the forehead of all the dacoits, making them promise never to confess. Even during the raid, when dacoits opened chests and discovered a good fortune, they would shout “Kali, Jai Kali”.", "title": "History" }, { "paragraph_id": 7, "text": "Dacoity was highly prevalent in 19th-century West Bengal. One of the gangs, led by a charismatic leader named Bhabani Pathak, was known for its loyalty to its leader. After the British captured Bhabani, the inner workings and social factors that led to the formation of this gang were revealed. Leaders such as Bhabani were known as Sirdars and had a symbiotic relationship with their followers. Among other benefits, a Sirdar would lend money to members and provide them protection. This allowed for the formation of a special bond between the Sirdar and his followers, which meant that desertion and leaving the gang were virtually unheard of.", "title": "History" }, { "paragraph_id": 8, "text": "In Burdwan, dacoities were heavily planned and considerable thought went into their seamless execution. Sirdars in Burdwan operated by employing several informants who kept them updated about prospective targets. When a target was finalized, the Sirdar and relevant gang members were constantly made aware of his whereabouts. The informants were always on the lookout for wealthy businessmen and kept a close watch on those who exchanged bank notes of considerable value or received a shipment of merchandise that they would store in their houses.", "title": "History" }, { "paragraph_id": 9, "text": "The term is also applied, according to the OED, to \"pirates who formerly infested the Ganges between Calcutta and Burhampore\".", "title": "History" }, { "paragraph_id": 10, "text": "Dacoits existed in Burma as well – Rudyard Kipling's fictional Private Mulvaney hunted Burmese dacoits in \"The Taking of Lungtungpen\". Sax Rohmer's criminal mastermind Dr. Fu Manchu also employed Burmese dacoits as his henchmen.", "title": "History" }, { "paragraph_id": 11, "text": "Indian police forces use \"Known Dacoit\" (K.D.) as a label to classify criminals.", "title": "History" }, { "paragraph_id": 12, "text": "Introduced in 1836, the suppression acts brought about several legislative measures including the establishment of special courts, authorization for the use of rewards for informants, and the power to arrest suspects. The suppression acts marked the beginning of active British intervention in policing and law enforcement in Indian society.
These acts were known to be authoritarian and further deepened the uneven power dynamic between the British and the Indians.", "title": "History" }, { "paragraph_id": 13, "text": "The British often saw Indians as primitive, violent, and unruly, and this often acted as a justification for colonization and further consolidated their “civilization mission” pretext. The practice of thuggee and dacoity was seen in a similar Eurocentric light, without understanding the local context. An orientalist view of such activities was portrayed in the rest of the world to account for several repressive legislative measures that the British took. Under this punitive approach, several innocent individuals fell prey to false suspicion and were incriminated.", "title": "History" }, { "paragraph_id": 14, "text": "Notable dacoits include:", "title": "Notable dacoits" }, { "paragraph_id": 15, "text": "In Madhya Pradesh, women belonging to a village defence group have been issued firearm permits to fend off dacoity. The Chief Minister of the state, Shivraj Singh Chouhan, recognised the role the women had played in defending their villages without guns. He stated that he wanted to enable these women to better defend both themselves and their villages, and issued the gun permits to advance this goal.", "title": "Protection measures" }, { "paragraph_id": 16, "text": "As the dacoits flourished through the 1940s–1970s, they were the subject of various Hindi films made during this era, leading to the emergence of the dacoit film genre in the Hindi film industry. The genre began with Mehboob Khan's Aurat (1940), which he remade as Mother India (1957). Mother India received an Academy Award nomination, and defined the dacoit film genre, along with Dilip Kumar's Gunga Jumna (1961). Other popular films in this genre included Raj Kapoor’s Jis Desh Mein Ganga Behti Hai (1961) and Moni Bhattacharjee's Mujhe Jeene Do (1963).", "title": "In popular culture" }, { "paragraph_id": 17, "text": "Pakistani actor Akmal Khan had two dacoit films, Malangi (1965) and Imam Din Gohavia (1967). Other films in this genre included Khote Sikkay (1973), as well as Mera Gaon Mera Desh (1971) and Kuchhe Dhaage (1973), both by Raj Khosla.", "title": "In popular culture" }, { "paragraph_id": 18, "text": "The most famous dacoit film is Sholay (1975), written by Salim–Javed, and starring Dharmendra, Amitabh Bachchan, and Amjad Khan as the dacoit character Gabbar Singh. It was a masala film that combined the dacoit film conventions of Mother India and Gunga Jumna with those of Spaghetti Westerns, spawning the \"Dacoit Western\" genre, also known as the \"Curry Western\" genre. The film also borrowed elements from Akira Kurosawa's Seven Samurai. Sholay became a classic in the genre, and its success led to a surge of films in this genre, including Ganga Ki Saugandh (1978), once again starring Amitabh Bachchan and Amjad Khan.", "title": "In popular culture" }, { "paragraph_id": 19, "text": "An internationally acclaimed example of the genre is Bandit Queen (1994).", "title": "In popular culture" }, { "paragraph_id": 20, "text": "The Tamil movie Theeran Adhigaaram Ondru (2017), starring Karthi, deals elaborately with bandits. The film depicts real dacoity incidents which took place in Tamil Nadu between 1995 and 2005.
Director Vinoth conducted two years of research on bandits to develop the script.", "title": "In popular culture" }, { "paragraph_id": 21, "text": "A related genre of crime films is the Mumbai underworld film.", "title": "In popular culture" }, { "paragraph_id": 22, "text": "The Bengali novel Devi Chowdhurani was written by Bankim Chandra Chatterjee in 1867.", "title": "In popular culture" }, { "paragraph_id": 23, "text": "A Hindi novel named Painstth Lakh ki Dacoity (1977) was written by Surender Mohan Pathak; it was translated as The 65 Lakh Heist.", "title": "In popular culture" }, { "paragraph_id": 24, "text": "Dacoits armed with pistols and swords appear in Age of Empires III: Asian Dynasties.", "title": "In popular culture" }, { "paragraph_id": 25, "text": "They frequently appeared in the French-language Bob Morane series of novels by Henri Vernes, principally as the main thugs or assassins of the hero's recurring villain, Mr. Ming, and in English as the agents of Sax Rohmer’s Fu Manchu.", "title": "In popular culture" } ]
Dacoity is a term used for "banditry" in the Indian subcontinent. The spelling is the anglicised version of the Hindi word डाकू (daaku); "dacoit" is a colloquial Indian English word with this meaning and it appears in the Glossary of Colloquial Anglo-Indian Words and Phrases (1903). Banditry is criminal activity involving robbery by groups of armed bandits. The East India Company established the Thuggee and Dacoity Department in 1830, and the Thuggee and Dacoity Suppression Acts, 1836–1848 were enacted in British India under East India Company rule. Areas with ravines or forests, such as Chambal and Chilapata Forests, were once known for dacoits.
2002-01-14T00:13:38Z
2023-12-14T18:55:13Z
[ "Template:Short description", "Template:More citations needed", "Template:Webarchive", "Template:Cite news", "Template:Western (genre)", "Template:Organised crime in India", "Template:Redirect", "Template:IPAc-en", "Template:Cite web", "Template:Cite book", "Template:Cite journal", "Template:Cite magazine", "Template:ISBN", "Template:Film genres", "Template:Authority control", "Template:Use dmy dates", "Template:Reflist", "Template:Wiktionary", "Template:Organized crime groups in Asia" ]
https://en.wikipedia.org/wiki/Dacoity
9,123
Davis, California
Davis is the most populous city in Yolo County, California, United States. Located in the Sacramento Valley region of Northern California, the city had a population of 66,850 in 2020, not including the on-campus population of the University of California, Davis, which was over 9,400 (not including students' families) in 2016. As of 2019, there were 38,369 students enrolled at the university. Davis sits on land that originally belonged to the Indigenous Patwin, a southern branch of Wintun people, who were killed or forced from their lands by the 1830s as part of the California Genocide through a combination of mass murders, smallpox and other diseases, and both Mexican and American systems of Indigenous slavery. Patwin burial grounds have been found across Davis, including on the site of the UC Davis Mondavi Center. After the killing and expulsion of the Patwin, territory that eventually became Davis emerged from one of California's most complicated ranchos, Laguna de Santos Callé. The 1852 Land Commission concurred with US Attorneys who argued that the grant was "fraudulent in all its parts," and in his 1860 District Court ruling Justice Ogden Hoffman observed that "It is impossible to contemplate without disgust the series of perjuries which compose the record" of the land grant. Nevertheless, Jerome C. Davis, a prominent farmer and one of the early claimants to land in Laguna de Santos Callé, lobbied all the way to the United States Congress in order to retain the land that eventually became Davis. Davis became a depot on the Southern Pacific Railroad in 1868, when it was named "Davisville" after Jerome C. Davis. However, the post office at Davisville shortened the town name to "Davis" in 1907. The name stuck, and the city of Davis was incorporated on March 28, 1917. From its inception as a farming community, Davis is known primarily for its contributions to agricultural policy along with veterinary care and animal husbandry. Following the passage of the University Farm Bill in 1905 by the California State Legislature, Governor George Pardee selected Davis out of 50 other sites as the future home to the University of California's University Farm, officially opening to students in 1908. The farm, later renamed the Northern Branch of the College of Agriculture in 1922, was upgraded to become the seventh UC general campus, the University of California, Davis, in 1959. Davis is located in Yolo County, California, 11 mi (18 km) west of Sacramento, 70 mi (113 km) northeast of San Francisco, 385 mi (619 km) north of Los Angeles, at the intersection of Interstate 80 and State Route 113. Neighboring towns include Dixon, Winters, Woodland, and West Sacramento. Davis lies in the Sacramento Valley, the northern portion of the Central Valley, in Northern California, at an elevation of about 52 feet (16 m) above sea level. According to the United States Census Bureau, the city has a total area of 10.5 square miles (27 km²). 10.4 square miles (27 km²) of it is land and 0.04 square miles (0.10 km²) of it (0.19%) is water. The topography is flat, which has helped Davis to become known as a haven for bicyclists. The Davis climate resembles that of nearby Sacramento and is typical of California's Central Valley Mediterranean climate region: warm and dry in the spring, summer and autumn, and cool and wet in the winter. It is classified as a Köppen Csa climate. Summer days are hot, ranging from 85 to 105 °F (29 to 41 °C), but the nights turn pleasantly cool, almost always dropping below 70 °F (21 °C).
The Delta Breeze, a flow of cool marine air originating from the Pacific Ocean via San Francisco Bay and the Sacramento–San Joaquin River Delta, frequently provides relief in the evening. Winter temperatures generally reach between 45 and 65 °F (7 and 18 °C) in the afternoon; nights average about 35 to 40 °F (2 to 4 °C), but often fall below freezing. Average temperatures range from 46 °F (8 °C) in December and January to 75 °F (24 °C) in July and August. Thick ground fog called tule fog settles into Davis during late fall and winter. This fog can be dense, with visibility nearly zero. As in other areas of northern California, the tule fog is a leading cause of road accidents in the winter season. Mean rainfall per annum is about 20 inches (510 mm). The bulk of rain occurs between about mid-November and mid-March, with typically no precipitation falling from mid-June to mid-September. Record temperatures range from a high of 116 °F (47 °C) on July 17, 1925, to a low of 12 °F (−11 °C) on December 11, 1932. Davis is internally divided by two freeways (Interstate 80 and State Route 113), a north–south railroad (California Northern), an east–west mainline (Union Pacific) and several major streets. The city is unofficially divided into six main districts made up of smaller neighborhoods (often originally named as housing subdivisions): The University of California, Davis is located south of Russell Boulevard and west of A Street and then south of 1st Street. The land occupied by the university is not incorporated within the boundaries of the city of Davis and lies within both Yolo and Solano Counties. Local energy planning began in Davis after the energy crisis of 1973. A new building code promoted energy efficiency. Energy use in buildings decreased dramatically, and in 1981 Davis citizens won a $100,000 prize from the utility PG&E for cutting electricity use during the summer peak. On November 14, 1984, the Davis City Council declared the city to be a nuclear-free zone. In 1998, the City passed a "Dark Skies" ordinance in an effort to reduce light pollution in the night sky. In 2013, Davis became part of the state Cool Roof Initiative with the "CoolDavis" campaign, requiring all new roofing projects to meet Cool Roof Rating Council (CRRC) requirements, including the installation of light-colored roofs. The aim is to reflect more sunlight back into space via the albedo effect, and reduce the amount of heat absorbed in hopes of limiting climate change. Davis is part of the Sacramento–Arden-Arcade–Roseville Metropolitan Statistical Area. According to the 2020 Census, the population of Davis was 66,850 people. In 2020 the racial demographics were as follows: 53.6% White 2.3% Black 13.8% Hispanic or Latino 23.3% Asian 1.1% Native American 9.6% 2 or more races The 2010 United States Census reported that Davis had a population of 65,622. The population density was 6,615.8 inhabitants per square mile (2,554.4/km²). The racial makeup of Davis was 42,571 (64.9%) White, 1,528 (2.3%) African American, 339 (0.5%) Native American, 14,355 (21.9%) Asian, 136 (0.2%) Pacific Islander, 3,121 (4.8%) from other races, and 3,572 (5.4%) from two or more races. Hispanic or Latino of any race were 8,172 persons (12.5%). In 2006, Davis was ranked as the second most educated city (in terms of the percentage of residents with graduate degrees) in the US by CNN Money Magazine, after Arlington County, Virginia.
Davis' Asian population of 14,355 was apportioned among 1,631 Indian Americans, 6,395 Chinese Americans, 1,560 Korean Americans, 1,185 Vietnamese Americans, 1,033 Filipino Americans, 953 Japanese Americans, and 1,598 other Asian Americans. Davis' Hispanic and Latino population of 8,172 was apportioned among 5,618 Mexican Americans, 221 Puerto Rican Americans, 80 Cuban Americans, and 2,253 other Hispanics and Latinos. The Census reported that 63,522 people (96.8% of the population) lived in households, 1,823 (2.8%) lived in non-institutionalized group quarters, and 277 (0.4%) were institutionalized. There were 24,873 households, of which 6,119 (24.6%) had children under the age of 18 living in them, 9,343 (37.6%) were opposite-sex married couples living together, 1,880 (7.6%) had a female householder with no husband present, and 702 (2.8%) had a male householder with no wife present. There were 1,295 (5.2%) unmarried opposite-sex partnerships, and 210 (0.8%) same-sex married couples or partnerships. 5,952 households (23.9%) were made up of individuals, and 1,665 (6.7%) had someone living alone who was 65 years of age or older. The average household size was 2.55. There were 11,925 families (47.9% of all households); the average family size was 2.97. The population age and sex distribution was 10,760 people (16.4%) under the age of 18, 21,757 people (33.2%) aged 18 to 24, 14,823 people (22.6%) aged 25 to 44, 12,685 people (19.3%) aged 45 to 64, and 5,597 people (8.5%) who were 65 years of age or older. The median age was 25.2 years. For every 100 females, there were 90.5 males. For every 100 females age 18 and over, there were 88.0 males. There were 25,869 housing units, with an average density of 2,608.0 per square mile (1,007.0/km²), of which 10,699 (43.0%) were owner-occupied, and 14,174 (57.0%) were occupied by renters. The homeowner vacancy rate was 0.9%; the rental vacancy rate was 3.5%. 27,594 people (42.0% of the population) lived in owner-occupied housing units and 35,928 people (54.7%) lived in rental housing units. As of the United States 2000 Census, there were 60,308 people, 22,948 households, and 11,290 families residing in the city. The population density was 5,769.2 inhabitants per square mile (2,227.5 inhabitants/km²). There were 23,617 housing units at an average density of 2,259.3 per square mile (872.3/km²). The racial composition of the city was 70.07% White, 2.35% Black or African American, 0.67% Native American, 17.5% Asian, 0.24% Pacific Islander, 4.26% from other races, and 4.87% from two or more races. 9.61% of the population were Hispanic or Latino of any race. There were 22,948 households, of which 26.4% had children under the age of 18 living with them, 38.3% were married couples living together, 8.2% had a female householder with no husband present, and 50.8% were non-families. 25.0% of all households were composed of individuals, and 5.2% had someone living alone who was 65 years of age or older. The average household size was 2.50 and the average family size was 3.00. In the city, the population age distribution was 18.6% under the age of 18, 30.9% from 18 to 24, 27.1% from 25 to 44, 16.7% from 45 to 64, and 6.6% who were 65 years of age or older. The median age was 25 years. For every 100 females, there were 91.2 males. For every 100 females age 18 and over, there were 87.8 males. The median income for a household in the city was $42,454, and the median income for a family was $74,051. Males had a median income of $51,189 versus $36,082 for females.
The per capita income for the city was $22,937. About 5.4% of families and 24.5% of the population were below the poverty line, including 6.8% of those under age 18 and 2.8% of those age 65 or over. This city of approximately 62,000 people abuts a university campus of 32,000 students. Although the university's land is not incorporated within the city, many students live off-campus in the city. The California Northern Railroad is based in Davis. According to the city's 2020 Comprehensive Annual Financial Report, the top employers in the city are: A community currency scheme called Davis Dollars was in use in Davis. Bicycling has been one of the most popular modes of transportation in Davis for decades, particularly among school-age children and UC Davis students. In 2010, Davis became the new home of the United States Bicycling Hall of Fame. Bicycle infrastructure became a political issue in the 1960s, culminating in the election of a pro-bicycle majority to the City Council in 1966. By the early 1970s, Davis became a pioneer in the implementation of cycling facilities. As the city expands, new facilities are usually mandated. As a result, Davis residents today enjoy an extensive network of bike lanes, bike paths, and grade-separated bicycle crossings. The flat terrain and temperate climate are also conducive to bicycling. In 2005 the Bicycle-Friendly Community program of the League of American Bicyclists recognized Davis as the first Platinum Level city in the US. In March 2006, Bicycling Magazine named Davis the best small town for cycling in its compilation of "America's Best Biking Cities." Bicycling appears to be declining among Davis residents: from 1990 to 2000, the US Census Bureau reported a decline in the fraction of commuters traveling by bicycle, from 22 percent to 15 percent. This resulted in the reestablishment of the city's Bicycle Advisory Commission and creation of advocate groups such as "Davis Bicycles!". In 2016, Fifth Street, a main road in Davis, was converted from four lanes to two lanes to allow for bicycle lanes and encourage more bicycling. In 1996, 2001, 2006, and 2009 the UC Davis "Cal Aggie Cycling" Team won the national road cycling competition. The team also competes off-road and on the track, and has competed in the national competitions of these disciplines. In 2007, UC Davis also organized a record-breaking bicycle parade numbering 822 bicycles. A continuous stream of bands, speakers and various workshops occurs throughout Mother's Day weekend on each of Whole Earth Festival's (WEF) three stages and other specialty areas. The WEF is organized entirely by UC Davis students, in association with the Associated Students of UC Davis and the university. Celebrate Davis is the annual free festival held by the Davis Chamber of Commerce. It features booths by Davis businesses, live music, food vendors, live animals, and activities like rock climbing and a zip-line. It concludes with fireworks after dark. Parking is problematic, so most people ride their bikes and use the free valet parking. Picnic Day is an annual event at the University of California, Davis, and is always held on the third Saturday in April. It is the largest student-run event in the US. Picnic Day starts off with a parade, which features the UC Davis California Aggie Marching Band-uh!, and runs through campus and around downtown Davis and ends with the Battle of the Bands, which lasts until the last band stops playing (sometimes until 2 am).
There are over 150 free events and over 50,000 people attend every year. Other highlights include: the Dachshund races, a.k.a. the Doxie Derby, held in the Pavilion; the Davis Rock Challenge, the Chemistry Magic Show, and the sheep dog trials. Many departments have exhibits and demonstrations, such as the Cole Facility, which until recently showed a fistulated cow (a cow that has been fitted with a plastic portal (a "fistula") into its digestive system to observe digestion processes). Its name was "Hole-y Cow". The Davis Transmedia Art Walk is a free, self-guided public art tour that includes 23 public murals, 16 sculptures, and 15 galleries and museums, all in downtown Davis and on the UC Davis campus. A free Davis Art Walk map serves as a detailed guide to the entire collection. The art pieces are all within walking distance of each other. The walk follows a roughly circular path that can be completed within an hour or two. Every piece of art on the Art Walk has been embedded with an RFID chip. Using a cellphone that supports this technology, visitors can access multimedia files that relate to each work, and can even leave a comment or "burn" their own message for other visitors to see. Artist-hosted tours are held on the weekend by appointment only. To pick up a copy of the Davis Art Walk map, visit the Yolo County Visitors Bureau (132 E St., Suite 200; (530) 297–1900) or the John Natsoulas Center for the Arts (521 1st St.; (530) 756–3938). The Manetti Shrem Museum of Art, located on the UC Davis campus, opened on November 13, 2016, and carries on the legacy of the university's world-renowned first-generation art faculty, which contributed to innovations in conceptual, performance and video art in the 1960s and 70s. The museum has generated nationwide attention with exhibits by artists such as Wayne Thiebaud, Bruce Nauman, John Cage, and Robert Arneson as well as its striking architecture, featuring a 50,000-square-foot “Grand Canopy” of perforated aluminum triangular beams, supported by 40 steel columns. Every year the museum exhibits works by graduating art students. The museum is free and hosts lecture series and events throughout the year, as well as weekend art studio activities for all ages. The Mondavi Center, located on the UC Davis campus, is one of the biggest non-seasonal attractions in Davis. The Mondavi Center is a theater which hosts many world-class touring acts, including star performers such as Yo-Yo Ma, Itzhak Perlman and Wynton Marsalis, and draws a large audience from Sacramento. The UC Davis Arboretum is an arboretum and botanical garden. Plants from all over the world grow in different sections of the park. There are notable oak and native plant collections and a small redwood grove. A small waterway spans the arboretum along the bed of the old North Fork of Putah Creek. Occasionally herons, kingfishers, and cormorants can be seen around the waterways, as well as the ever-present ducks. Tours of the arboretum led by volunteer naturalists are often held for grade-school children. The Domes (also known as Baggins End Innovative Housing) is an on-campus cooperative housing community designed by project manager Ron Swenson and future student-residents in 1972. Consisting of 14 polyurethane foam-insulated fiberglass domes and located in the Sustainable Research Area at the western end of Orchard Road, it is governed by its 26 UCD student residents. It is one of the few student co-housing cooperative communities in the US, and is an early example of the growing modern-day tiny house movement.
The community has successfully resisted several threats to its continuation over the years. The Davis Farmers Market is held every Wednesday evening and Saturday morning. Participants sell a range of fruits and vegetables, baked goods, dairy and meat products (often from certified organic farms), crafts, and plants and flowers. From April to October, the market hosts Picnic in the Park, with musical events and food sold from restaurant stands. The Davis Farmers Market won first place in 2009 and second place in 2010 in the America's Favorite Farmers Markets contest held by the American Farmland Trust, in the large farmers market classification. Davis has one newspaper, The Davis Enterprise, a thrice-weekly newspaper founded in 1897. UC Davis also has a weekly newspaper called The California Aggie, which covers campus, local and national news. Davis Media Access, a community media center, is the umbrella organization of television station DCTV. There are also numerous commercial stations broadcasting from nearby Sacramento. Davis has two community radio stations: KDVS 90.3 FM, on the University of California campus, and KDRT 95.7 FM, a subsidiary of Davis Media Access and one of the first low-power FM radio stations in the United States. Davis has the world's largest English-language local wiki, DavisWiki. In 2006, The People's Vanguard of Davis began news reporting about the city of Davis, the Davis Joint Unified School District, the county of Yolo, and the Sacramento area. Davis' Toad Tunnel is a wildlife crossing that was constructed in 1995 and has drawn much attention over the years, including a mention on The Daily Show. Because of the building of an overpass, animal lovers worried about toads being killed by cars commuting from South Davis to North Davis, since the toads traveled from one side of a dirt lot (which the overpass replaced) to the reservoir at the other end. After much controversy, a decision was made to build a toad tunnel, which runs beneath the Pole Line Road overpass, which crosses Interstate 80. The project cost $14,000, equivalent to $27,000 in 2022. The tunnel is 21 inches (53 cm) wide and 18 inches (46 cm) high. The Davis Food Coop is a Davis institution. Founded in 1972, this cooperative is presently owned and operated by over 9,000 members and families of the Davis community. The Coop is a full-service supermarket that has championed organic, healthy eating in the community, sponsoring community events including summer programs for children, cooking classes, and many other activities. The University of California, Davis, or UC Davis, a campus of the University of California, had a fall 2019 enrollment of 38,369 students. UC Davis has a dominant influence on the social and cultural life of the town. Also known as Deganawidah-Quetzalcoatl University and much smaller than UC Davis, D-Q University was a two-year institution located on Road 31 in Yolo County 6.7 miles (10.8 km) west of State Route 113. This is just west of Davis near the Yolo County Airport. About four miles (6.4 km) to the west, the Road 31 exit from Interstate 505 is marked with cryptic signage, "DQU." The site is about 100 feet (30 m) above mean sea level (AMSL). NAD83 coordinates for the campus are 38°34′02″N 121°53′12″W (38.56722, −121.88667). The college closed in 2005. The curriculum was said to include heritage and traditional American Indian ceremonies.
The 643 acres (2.60 km²) and 5 buildings were formerly a military reservation according to a National Park Service publication, Five Views. The full name of the school is included here so that readers can accurately identify the topic. According to some tribal members, use of the spelled-out name of the university can be offensive. People who want to be culturally respectful refer to the institution as D-Q University. Tribal members in appropriate circumstances may use the full name. An off-campus branch of Sacramento City College is located in Davis. The satellite is located in West Village, an area built by UC Davis to house students and others affiliated with the university. Davis' public school system is administered by the Davis Joint Unified School District. The city has nine public elementary schools (North Davis, Birch Lane, Pioneer Elementary, Patwin, Cesar Chavez, Robert E. Willett, Marguerite Montgomery, Fred T. Korematsu at Mace Ranch, and Fairfield Elementary (which is outside the city limits but opened in 1866 and is Davis Joint Unified School District's oldest public school)). Davis has one school for independent study (Davis School for Independent Study), four public junior high schools (Ralph Waldo Emerson, Oliver Wendell Holmes, Frances Harper, and Leonardo da Vinci Junior High), one main high school (Davis Senior High School), one alternative high school (Martin Luther King High School), and a small project-based high school (Leonardo da Vinci High School). Cesar Chavez is a Spanish immersion school, with no English integration until the third grade. The junior high schools contain grades 7 through 9. Due to a decline in the school-age population in Davis, two of the elementary schools in south Davis may have their district boundaries changed, or magnet programs may be moved to equalize enrollment. Valley Oak was closed after the 2007–08 school year, and its campus was granted to Da Vinci High (which had formerly been located in the back of Davis Senior High's campus) and a special-ed preschool. On average, class sizes are about 25 students per teacher. At one time, Chavez and Willett were incorporated together to provide elementary education K–6 to both English-speaking and Spanish immersion students in West Davis. César Chávez served grades K–3 and was called West Davis Elementary, and Robert E. Willett (named for a long-time teacher at the school, now deceased) served grades 4–6 and was known as West Davis Intermediate. Willett now serves K–6 English-speaking students, and Chavez supports the Spanish immersion program for K–6. These are some notable Davis residents, other than UC Davis faculty who were not previously from Davis. Davis' sister cities are:
[ { "paragraph_id": 0, "text": "Davis is the most populous city in Yolo County, California, United States. Located in the Sacramento Valley region of Northern California, the city had a population of 66,850 in 2020, not including the on-campus population of the University of California, Davis, which was over 9,400 (not including students' families) in 2016. As of 2019, there were 38,369 students enrolled at the university.", "title": "" }, { "paragraph_id": 1, "text": "Davis sits on land that originally belonged to the Indigenous Patwin, a southern branch of Wintun people, who were killed or forced from their lands by the 1830s as part of the California Genocide through a combination of mass murders, smallpox and other diseases, and both Mexican and American systems of Indigenous slavery. Patwin burial grounds have been found across Davis, including on the site of the UC Davis Mondavi Center. After the killing and expulsion of the Patwin, territory that eventually became Davis emerged from one of California's most complicated ranchos, Laguna de Santos Callé. The 1852 Land Commission concurred with US Attorneys who argued that the grant was \"fraudulent in all its parts,\" and in his 1860 District Court ruling Justice Ogden Hoffman observed that \"It is impossible to contemplate without disgust the series of perjuries which compose the record\" of the land grant. Nevertheless, Jerome C. Davis, a prominent farmer and one of the early claimants to land in Laguna de Santos Callé, lobbied all the way to the United States Congress in order to retain the land that eventually became Davis. Davis became a depot on the Southern Pacific Railroad in 1868, when it was named \"Davisville\" after Jerome C. Davis. However, the post office at Davisville shortened the town name to \"Davis\" in 1907. The name stuck, and the city of Davis was incorporated on March 28, 1917.", "title": "History" }, { "paragraph_id": 2, "text": "From its inception as a farming community, Davis is known primarily for its contributions to agricultural policy along with veterinary care and animal husbandry. Following the passage of the University Farm Bill in 1905 by the California State Legislature, Governor George Pardee selected Davis out of 50 other sites as the future home to the University of California's University Farm, officially opening to students in 1908. The farm, later renamed the Northern Branch of the College of Agriculture in 1922, was upgraded to become the seventh UC general campus, the University of California, Davis, in 1959.", "title": "History" }, { "paragraph_id": 3, "text": "Davis is located in Yolo County, California, 11 mi (18 km) west of Sacramento, 70 mi (113 km) northeast of San Francisco, 385 mi (619 km) north of Los Angeles, at the intersection of Interstate 80 and State Route 113. Neighboring towns include Dixon, Winters, Woodland, and West Sacramento.", "title": "Geography and environment" }, { "paragraph_id": 4, "text": "Davis lies in the Sacramento Valley, the northern portion of the Central Valley, in Northern California, at an elevation of about 52 feet (16 m) above sea level.", "title": "Geography and environment" }, { "paragraph_id": 5, "text": "According to the United States Census Bureau, the city has a total area of 10.5 square miles (27 km). 
10.4 square miles (27 km²) of it is land and 0.04 square miles (0.10 km²) of it (0.19%) is water.", "title": "Geography and environment" }, { "paragraph_id": 6, "text": "The topography is flat, which has helped Davis to become known as a haven for bicyclists.", "title": "Geography and environment" }, { "paragraph_id": 7, "text": "The Davis climate resembles that of nearby Sacramento and is typical of California's Central Valley Mediterranean climate region: warm and dry in the spring, summer and autumn, and cool and wet in the winter. It is classified as a Köppen Csa climate. Summer days are hot, ranging from 85 to 105 °F (29 to 41 °C), but the nights turn pleasantly cool, almost always dropping below 70 °F (21 °C). The Delta Breeze, a flow of cool marine air originating from the Pacific Ocean via San Francisco Bay and the Sacramento–San Joaquin River Delta, frequently provides relief in the evening. Winter temperatures generally reach between 45 and 65 °F (7 and 18 °C) in the afternoon; nights average about 35 to 40 °F (2 to 4 °C), but often fall below freezing.", "title": "Geography and environment" }, { "paragraph_id": 8, "text": "Average temperatures range from 46 °F (8 °C) in December and January to 75 °F (24 °C) in July and August. Thick ground fog called tule fog settles into Davis during late fall and winter. This fog can be dense, with visibility nearly zero. As in other areas of northern California, the tule fog is a leading cause of road accidents in the winter season.", "title": "Geography and environment" }, { "paragraph_id": 9, "text": "Mean rainfall per annum is about 20 inches (510 mm). The bulk of rain occurs between mid-November and mid-March, with typically no precipitation falling from mid-June to mid-September.", "title": "Geography and environment" }, { "paragraph_id": 10, "text": "Record temperatures range from a high of 116 °F (47 °C) on July 17, 1925, to a low of 12 °F (−11 °C) on December 11, 1932.", "title": "Geography and environment" }, { "paragraph_id": 11, "text": "Davis is internally divided by two freeways (Interstate 80 and State Route 113), a north–south railroad (California Northern), an east–west mainline (Union Pacific) and several major streets. The city is unofficially divided into six main districts made up of smaller neighborhoods (often originally named as housing subdivisions):", "title": "Geography and environment" }, { "paragraph_id": 12, "text": "The University of California, Davis is located south of Russell Boulevard and west of A Street and then south of 1st Street. The land occupied by the university is not incorporated within the boundaries of the city of Davis and lies within both Yolo and Solano Counties.", "title": "Geography and environment" }, { "paragraph_id": 13, "text": "Local energy planning began in Davis after the energy crisis of 1973. A new building code promoted energy efficiency. Energy use in buildings decreased dramatically, and in 1981 Davis citizens won a $100,000 prize from utility PG&E for cutting electricity use during the summer peak.", "title": "Geography and environment" }, { "paragraph_id": 14, "text": "On November 14, 1984, the Davis City Council declared the city to be a nuclear-free zone. 
In 1998, the City passed a \"Dark Skies\" ordinance in an effort to reduce light pollution in the night sky.", "title": "Geography and environment" }, { "paragraph_id": 15, "text": "In 2013, Davis became part of the state Cool Roof Initiative with the \"CoolDavis\" campaign, requiring all new roofing projects to meet Cool Roof Rating Council (CRRC) requirements, including the installation of light-colored roofs. The aim is to reflect more sunlight back into space via the albedo effect, and reduce the amount of heat absorbed in hopes of limiting climate change.", "title": "Geography and environment" }, { "paragraph_id": 16, "text": "Davis is part of the Sacramento–Arden-Arcade–Roseville Metropolitan Statistical Area.", "title": "Demographics" }, { "paragraph_id": 17, "text": "According to the 2020 Census, the population of Davis was 66,850 people.", "title": "Demographics" }, { "paragraph_id": 18, "text": "In 2020, the racial demographics were as follows:", "title": "Demographics" }, { "paragraph_id": 19, "text": "53.6% White", "title": "Demographics" }, { "paragraph_id": 20, "text": "2.3% Black", "title": "Demographics" }, { "paragraph_id": 21, "text": "13.8% Hispanic or Latino", "title": "Demographics" }, { "paragraph_id": 22, "text": "23.3% Asian", "title": "Demographics" }, { "paragraph_id": 23, "text": "1.1% Native American", "title": "Demographics" }, { "paragraph_id": 24, "text": "9.6% 2 or more races", "title": "Demographics" }, { "paragraph_id": 25, "text": "", "title": "Demographics" }, { "paragraph_id": 26, "text": "", "title": "Demographics" }, { "paragraph_id": 27, "text": "The 2010 United States Census reported that Davis had a population of 65,622. The population density was 6,615.8 inhabitants per square mile (2,554.4/km²). The racial makeup of Davis was 42,571 (64.9%) White, 1,528 (2.3%) African American, 339 (0.5%) Native American, 14,355 (21.9%) Asian, 136 (0.2%) Pacific Islander, 3,121 (4.8%) from other races, and 3,572 (5.4%) from two or more races. Hispanic or Latino of any race were 8,172 persons (12.5%).", "title": "Demographics" }, { "paragraph_id": 28, "text": "In 2006, Davis was ranked as the second most educated city (in terms of the percentage of residents with graduate degrees) in the US by CNN Money Magazine, after Arlington County, Virginia.", "title": "Demographics" }, { "paragraph_id": 29, "text": "Davis' Asian population of 14,355 was apportioned among 1,631 Indian Americans, 6,395 Chinese Americans, 1,560 Korean Americans, 1,185 Vietnamese Americans, 1,033 Filipino Americans, 953 Japanese Americans, and 1,598 other Asian Americans.", "title": "Demographics" }, { "paragraph_id": 30, "text": "Davis' Hispanic and Latino population of 8,172 was apportioned among 5,618 Mexican American, 221 Puerto Rican American, 80 Cuban American, and 2,253 other Hispanic and Latino.", "title": "Demographics" }, { "paragraph_id": 31, "text": "The Census reported that 63,522 people (96.8% of the population) lived in households, 1,823 (2.8%) lived in non-institutionalized group quarters, and 277 (0.4%) were institutionalized.", "title": "Demographics" }, { "paragraph_id": 32, "text": "There were 24,873 households, of which 6,119 (24.6%) had children under the age of 18 living in them, 9,343 (37.6%) were opposite-sex married couples living together, 1,880 (7.6%) had a female householder with no husband present, and 702 (2.8%) had a male householder with no wife present. 
There were 1,295 (5.2%) unmarried opposite-sex partnerships, and 210 (0.8%) same-sex married couples or partnerships. 5,952 households (23.9%) were made up of individuals, and 1,665 (6.7%) had someone living alone who was 65 years of age or older. The average household size was 2.55. There were 11,925 families (47.9% of all households); the average family size was 2.97.", "title": "Demographics" }, { "paragraph_id": 33, "text": "The population age and sex distribution was 10,760 people (16.4%) under the age of 18, 21,757 people (33.2%) aged 18 to 24, 14,823 people (22.6%) aged 25 to 44, 12,685 people (19.3%) aged 45 to 64, and 5,597 people (8.5%) who were 65 years of age or older. The median age was 25.2 years. For every 100 females, there were 90.5 males. For every 100 females age 18 and over, there were 88.0 males.", "title": "Demographics" }, { "paragraph_id": 34, "text": "There were 25,869 housing units, with an average density of 2,608.0 per square mile (1,007.0/km²), of which 10,699 (43.0%) were owner-occupied, and 14,174 (57.0%) were occupied by renters. The homeowner vacancy rate was 0.9%; the rental vacancy rate was 3.5%. 27,594 people (42.0% of the population) lived in owner-occupied housing units and 35,928 people (54.7%) lived in rental housing units.", "title": "Demographics" }, { "paragraph_id": 35, "text": "As of the United States 2000 Census, there were 60,308 people, 22,948 households, and 11,290 families residing in the city. The population density was 5,769.2 inhabitants per square mile (2,227.5 inhabitants/km²). There were 23,617 housing units at an average density of 2,259.3 per square mile (872.3/km²). The racial composition of the city was 70.07% White, 2.35% Black or African American, 0.67% Native American, 17.5% Asian, 0.24% Pacific Islander, 4.26% from other races, and 4.87% from two or more races. 9.61% of the population were Hispanic or Latino of any race.", "title": "Demographics" }, { "paragraph_id": 36, "text": "There were 22,948 households, of which 26.4% had children under the age of 18 living with them, 38.3% were married couples living together, 8.2% had a female householder with no husband present, and 50.8% were non-families. 25.0% of all households were composed of individuals, and 5.2% had someone living alone who was 65 years of age or older. The average household size was 2.50 and the average family size was 3.00.", "title": "Demographics" }, { "paragraph_id": 37, "text": "In the city, the population age distribution was 18.6% under the age of 18, 30.9% from 18 to 24, 27.1% from 25 to 44, 16.7% from 45 to 64, and 6.6% who were 65 years of age or older. The median age was 25 years. For every 100 females, there were 91.2 males. For every 100 females age 18 and over, there were 87.8 males.", "title": "Demographics" }, { "paragraph_id": 38, "text": "The median income for a household in the city was $42,454, and the median income for a family was $74,051. Males had a median income of $51,189 versus $36,082 for females. The per capita income for the city was $22,937. About 5.4% of families and 24.5% of the population were below the poverty line, including 6.8% of those under age 18 and 2.8% of those age 65 or over.", "title": "Demographics" }, { "paragraph_id": 39, "text": "This city of approximately 62,000 people abuts a university campus of 32,000 students. 
Although the university's land is not incorporated within the city, many students live off-campus in the city.", "title": "Demographics" }, { "paragraph_id": 40, "text": "The California Northern Railroad is based in Davis.", "title": "Economy" }, { "paragraph_id": 41, "text": "According to the city's 2020 Comprehensive Annual Financial Report, the top employers in the city are:", "title": "Economy" }, { "paragraph_id": 42, "text": "A community currency scheme was in use in Davis, called Davis Dollars.", "title": "Economy" }, { "paragraph_id": 43, "text": "Bicycling has been one of the most popular modes of transportation in Davis for decades, particularly among school-age children and UC Davis students. In 2010, Davis became the new home of the United States Bicycling Hall of Fame.", "title": "Bicycling" }, { "paragraph_id": 44, "text": "Bicycle infrastructure became a political issue in the 1960s, culminating in the election of a pro-bicycle majority to the City Council in 1966. By the early 1970s, Davis became a pioneer in the implementation of cycling facilities. As the city expands, new facilities are usually mandated. As a result, Davis residents today enjoy an extensive network of bike lanes, bike paths, and grade-separated bicycle crossings. The flat terrain and temperate climate are also conducive to bicycling.", "title": "Bicycling" }, { "paragraph_id": 45, "text": "In 2005, the Bicycle-Friendly Community program of the League of American Bicyclists recognized Davis as the first Platinum Level city in the US. In March 2006, Bicycling Magazine named Davis the best small town for cycling in its compilation of \"America's Best Biking Cities.\" Bicycling appears to be declining among Davis residents: from 1990 to 2000, the US Census Bureau reported a decline in the fraction of commuters traveling by bicycle, from 22 percent to 15 percent. This resulted in the reestablishment of the city's Bicycle Advisory Commission and creation of advocate groups such as \"Davis Bicycles!\". In 2016, Fifth Street, a main road in Davis, was converted from four lanes to two lanes to allow for bicycle lanes and encourage more bicycling.", "title": "Bicycling" }, { "paragraph_id": 46, "text": "In 1996, 2001, 2006, and 2009 the UC Davis \"Cal Aggie Cycling\" Team won the national road cycling competition. The team also competes off-road and on the track, and has competed in the national competitions of these disciplines. In 2007, UC Davis also organized a record-breaking bicycle parade numbering 822 bicycles.", "title": "Bicycling" }, { "paragraph_id": 47, "text": "A continuous stream of bands, speakers and various workshops occurs throughout Mother's Day weekend on each of Whole Earth Festival's (WEF) three stages and other specialty areas. The WEF is organized entirely by UC Davis students, in association with the Associated Students of UC Davis and the university.", "title": "Sights and culture" }, { "paragraph_id": 48, "text": "Celebrate Davis is the annual free festival held by the Davis Chamber of Commerce. It features booths by Davis businesses, live music, food vendors, live animals, and activities like rock climbing and a zip-line. It concludes with fireworks after dark. Parking is problematic, so most people ride their bikes and use the free valet parking.", "title": "Sights and culture" }, { "paragraph_id": 49, "text": "Picnic Day is an annual event at the University of California, Davis and is always held on the third Saturday in April. It is the largest student-run event in the US. 
Picnic Day starts off with a parade, which features the UC Davis California Aggie Marching Band-uh!, and runs through campus and around downtown Davis and ends with the Battle of the Bands, which lasts until the last band stops playing (sometimes until 2 am). There are over 150 free events, and over 50,000 people attend every year. Other highlights include the Dachshund races, a.k.a. the Doxie Derby, held in the Pavilion; the Davis Rock Challenge; the Chemistry Magic Show; and the sheep dog trials. Many departments have exhibits and demonstrations, such as the Cole Facility, which until recently showed a fistulated cow (a cow that has been fitted with a plastic portal (a \"fistula\") into its digestive system to observe digestion processes). Its name was \"Hole-y Cow\".", "title": "Sights and culture" }, { "paragraph_id": 50, "text": "The Davis Transmedia Art Walk is a free, self-guided public art tour that includes 23 public murals, 16 sculptures, and 15 galleries and museums, all in downtown Davis and on the UC Davis campus. A free Davis Art Walk map serves as a detailed guide to the entire collection. The art pieces are all within walking distance of each other. The walk follows a roughly circular path that can be completed within an hour or two. Every piece of art on the Art Walk has been embedded with an RFID chip. Using a cellphone that supports this technology, you can access multimedia files that relate to each work. You can even leave a comment or \"burn your own message\" for other visitors to see. Artist-hosted tours are held on the weekend by appointment only. To pick up a copy of the Davis Art Walk map, visit the Yolo County Visitors Bureau (132 E St., Suite 200; (530) 297–1900) or the John Natsoulas Center for the Arts (521 1st St.; (530) 756–3938).", "title": "Sights and culture" }, { "paragraph_id": 51, "text": "The Manetti Shrem Museum of Art, located on the UC Davis campus, opened on November 13, 2016, and carries on the legacy of the university's world-renowned first-generation art faculty, which contributed to innovations in conceptual, performance and video art in the 1960s and 70s. The museum has generated nationwide attention with exhibits by artists such as Wayne Thiebaud, Bruce Nauman, John Cage, and Robert Arneson as well as its striking architecture, featuring a 50,000 square-foot “Grand Canopy” of perforated aluminum triangular beams, supported by 40 steel columns. Every year the museum exhibits works by graduating art students. The museum is free and hosts lecture series and events throughout the year, as well as weekend art studio activities for all ages.", "title": "Sights and culture" }, { "paragraph_id": 52, "text": "The Mondavi Center, located on the UC Davis campus, is one of the biggest non-seasonal attractions in Davis. The Mondavi Center is a theater which hosts many world-class touring acts, including star performers such as Yo-Yo Ma, Itzhak Perlman and Wynton Marsalis, and draws a large audience from Sacramento.", "title": "Sights and culture" }, { "paragraph_id": 53, "text": "The UC Davis Arboretum is an arboretum and botanical garden. Plants from all over the world grow in different sections of the park. There are notable oak and native plant collections and a small redwood grove. A small waterway spans the arboretum along the bed of the old North Fork of Putah Creek. Occasionally herons, kingfishers, and cormorants can be seen around the waterways, as well as the ever-present ducks. 
Tours of the arboretum led by volunteer naturalists are often held for grade-school children.", "title": "Sights and culture" }, { "paragraph_id": 54, "text": "The Domes (also known as Baggins End Innovative Housing) is an on-campus cooperative housing community designed by project manager Ron Swenson and future student-residents in 1972. Consisting of 14 polyurethane foam-insulated fiberglass domes and located in the Sustainable Research Area at the western end of Orchard Road, it is governed by its 26 UCD student residents. It is one of the few student co-housing cooperative communities in the US, and is an early example of the growing modern-day tiny house movement. The community has successfully resisted several threats to its continuation over the years.", "title": "Sights and culture" }, { "paragraph_id": 55, "text": "The Davis Farmers Market is held every Wednesday evening and Saturday morning. Participants sell a range of fruits and vegetables, baked goods, dairy and meat products (often from certified organic farms), crafts, and plants and flowers. From April to October, the market hosts Picnic in the Park, with musical events and food sold from restaurant stands. The Davis Farmers Market won first place in 2009 and second place in 2010 in the America's Favorite Farmers Markets contest held by the American Farmland Trust, in the large farmers market classification.", "title": "Sights and culture" }, { "paragraph_id": 56, "text": "Davis has one newspaper, The Davis Enterprise, a thrice-weekly founded in 1897. UC Davis also has a weekly newspaper called The California Aggie which covers campus, local and national news. Davis Media Access, a community media center, is the umbrella organization of television station DCTV. There are also numerous commercial stations broadcasting from nearby Sacramento. Davis has two community radio stations: KDVS 90.3 FM, on the University of California campus, and KDRT 95.7 FM, a subsidiary of Davis Media Access and one of the first low-power FM radio stations in the United States. Davis has the world's largest English-language local wiki, DavisWiki. In 2006, The People's Vanguard of Davis began news reporting about the city of Davis, the Davis Joint Unified School District, the county of Yolo, and the Sacramento area.", "title": "Sights and culture" }, { "paragraph_id": 57, "text": "Davis' Toad Tunnel is a wildlife crossing that was constructed in 1995 and has drawn much attention over the years, including a mention on The Daily Show. Because of the building of an overpass, animal lovers worried about toads being killed by cars commuting from South Davis to North Davis, since the toads traveled from one side of a dirt lot (which the overpass replaced) to the reservoir at the other end. After much controversy, a decision was made to build a toad tunnel, which runs beneath the Pole Line Road overpass which crosses Interstate 80. The project cost $14,000, equivalent to $27,000 in 2022. The tunnel is 21 inches (53 cm) wide and 18 inches (46 cm) high.", "title": "Sights and culture" }, { "paragraph_id": 58, "text": "The Davis Food Coop is a Davis institution. Founded in 1972, this cooperative is presently owned and operated by over 9,000 members and families of the Davis community. 
The Coop is a full-service supermarket that has championed organic, healthy eating in the community, sponsoring community events including summer programs for children, cooking classes, and many other activities.", "title": "Sights and culture" }, { "paragraph_id": 59, "text": "The University of California, Davis, or UC Davis, a campus of the University of California, had a 2019 Fall enrollment of 38,369 students. UC Davis has a dominant influence on the social and cultural life of the town.", "title": "Education" }, { "paragraph_id": 60, "text": "Also known as Deganawidah-Quetzalcoatl University and much smaller than UC Davis, D-Q University was a two-year institution located on Road 31 in Yolo County 6.7 miles (10.8 km) west of State Route 113. This is just west of Davis near the Yolo County Airport. About four miles (6.4 km) to the west, the Road 31 exit from Interstate 505 is marked with cryptic signage, \"DQU.\" The site is about 100 feet (30 m) above mean sea level (AMSL). NAD83 coordinates for the campus are 38°34′02″N 121°53′12″W.", "title": "Education" }, { "paragraph_id": 61, "text": "The college closed in 2005. The curriculum was said to include heritage and traditional American Indian ceremonies. The 643 acres (2.60 km²) and 5 buildings were formerly a military reservation according to a National Park Service publication, Five Views. The full name of the school is included here so that readers can accurately identify the topic. According to some tribal members, use of the spelled-out name of the university can be offensive. People who want to be culturally respectful refer to the institution as D-Q University. Tribal members in appropriate circumstances may use the full name.", "title": "Education" }, { "paragraph_id": 62, "text": "An off-campus branch of Sacramento City College is located in Davis. The satellite is located in West Village, an area built by UC Davis to house students and others affiliated with the university.", "title": "Education" }, { "paragraph_id": 63, "text": "Davis' public school system is administered by the Davis Joint Unified School District.", "title": "Education" }, { "paragraph_id": 64, "text": "The city has nine public elementary schools (North Davis, Birch Lane, Pioneer Elementary, Patwin, Cesar Chavez, Robert E. Willett, Marguerite Montgomery, Fred T. Korematsu at Mace Ranch, and Fairfield Elementary (which is outside the city limits but opened in 1866 and is Davis Joint Unified School District's oldest public school)). Davis has one school for independent study (Davis School for Independent Study), four public junior high schools (Ralph Waldo Emerson, Oliver Wendell Holmes, Frances Harper, and Leonardo da Vinci Junior High), one main high school (Davis Senior High School), one alternative high school (Martin Luther King High School), and a small project-based high school (Leonardo da Vinci High School). Cesar Chavez is a Spanish immersion school, with no English integration until the third grade. The junior high schools contain grades 7 through 9. Due to a decline in the school-age population in Davis, two of the elementary schools in south Davis may have their district boundaries changed, or magnet programs may be moved to equalize enrollment. Valley Oak was closed after the 2007–08 school year, and its campus was granted to Da Vinci High (which had formerly been located in the back of Davis Senior High's campus) and a special-ed preschool. 
On average, class size is about 25 students per teacher.", "title": "Education" }, { "paragraph_id": 65, "text": "At one time, Chavez and Willett operated together to provide elementary education K–6 to both English-speaking and Spanish immersion students in West Davis. César Chávez served grades K–3 and was called West Davis Elementary, and Robert E. Willett (named for a long-time teacher at the school, now deceased) served grades 4–6 and was known as West Davis Intermediate. Willett now serves K–6 English-speaking students, and Chavez supports the Spanish immersion program for K–6.", "title": "Education" }, { "paragraph_id": 66, "text": "These are some notable Davis residents, other than UC Davis faculty who were not previously from Davis.", "title": "Notable people" }, { "paragraph_id": 67, "text": "Davis' sister cities are:", "title": "Sister cities" } ]
Davis is the most populous city in Yolo County, California, United States. Located in the Sacramento Valley region of Northern California, the city had a population of 66,850 in 2020, not including the on-campus population of the University of California, Davis, which was over 9,400 in 2016. As of 2019, there were 38,369 students enrolled at the university.
2002-01-11T02:58:38Z
2023-12-19T05:07:36Z
[ "Template:Weather box", "Template:Coord", "Template:Authority control", "Template:Infobox settlement", "Template:Flagicon", "Template:Cite news", "Template:Wikivoyage", "Template:Div col", "Template:Curlie", "Template:Div col end", "Template:Cite magazine", "Template:Official website", "Template:Sacramento Valley", "Template:Short description", "Template:Use mdy dates", "Template:As of", "Template:More citations needed section", "Template:Citation needed", "Template:Cite web", "Template:Greater Sacramento", "Template:Convert", "Template:Main", "Template:Commons category", "Template:Cities of Yolo County, California", "Template:Webarchive", "Template:Which", "Template:Inflation", "Template:See also", "Template:Reflist", "Template:Cite journal", "Template:US Census population", "Template:Center", "Template:Inflation/fn", "Template:Portal" ]
https://en.wikipedia.org/wiki/Davis,_California
9,128
Damon Runyon
Alfred Damon Runyon (October 4, 1880 – December 10, 1946) was an American journalist and short-story writer. He was best known for his short stories celebrating the world of Broadway in New York City that grew out of the Prohibition era. To New Yorkers of his generation, a "Damon Runyon character" evoked a distinctive social type from Brooklyn or Midtown Manhattan. The adjective "Runyonesque" refers to this type of character and the type of situations and dialog that Runyon depicts. He spun humorous and sentimental tales of gamblers, hustlers, actors, and gangsters, few of whom go by "square" names, preferring instead colorful monikers such as "Nathan Detroit", "Benny Southstreet", "Big Jule", "Harry the Horse", "Good Time Charley", "Dave the Dude", or "The Seldom Seen Kid". His distinctive vernacular style is known as "Runyonese": a mixture of formal speech and colorful slang, almost always in the present tense, and always devoid of contractions. He is credited with coining the phrase "Hooray Henry", a term now used in British English to describe the upper-class version of a loud-mouthed, arrogant twit. Runyon's fictional world is also known to the general public through the musical Guys and Dolls based on two of his stories, "The Idyll of Miss Sarah Brown" and "Blood Pressure". The musical additionally borrows characters and story elements from a few other Runyon stories, most notably "Pick The Winner". The film Little Miss Marker (and its three remakes, Sorrowful Jones, 40 Pounds of Trouble and the 1980 Little Miss Marker) grew from his short story of the same name. Runyon was also a newspaper reporter, covering sports and general news for decades for various publications and syndicates owned by William Randolph Hearst. Already known for his fiction, he wrote a well-remembered "present tense" article on Franklin Delano Roosevelt's Presidential inauguration in 1933 for the Universal Service, a Hearst syndicate, which was merged with the co-owned International News Service in 1937. Damon Runyon was born Alfred Damon Runyan to Alfred Lee and Elizabeth (Damon) Runyan. His relatives in his birthplace of Manhattan, Kansas, included several newspapermen. His grandfather was a newspaper printer from New Jersey who had relocated to Manhattan, Kansas, in 1855, and his father was the editor of his newspaper in the town. In 1882 Runyon's father was forced to sell his newspaper, and the family moved westward. The family eventually settled in Pueblo, Colorado, in 1887, where Runyon spent the rest of his youth. By most accounts, he attended school only through the fourth grade. He began to work in the newspaper trade under his father in Pueblo. In present-day Pueblo, Runyon Field, the Damon Runyon Repertory Theater Company, and Runyon Lake are named in his honor. In 1898, when still in his teens, Runyon enlisted in the US Army to fight in the Spanish–American War. While in the service, he was assigned to write for the Manila Freedom and Soldier's Letter. After military service, he worked for Colorado newspapers, beginning in Pueblo. His first job as a reporter was in September 1900, when he was hired by the Pueblo Star; he then worked in the Rocky Mountain area during the first decade of the 1900s: at the Denver Daily News, he served as "sporting editor" (today a "sports editor") and then as a staff writer. His expertise was in covering the semi-professional teams in Colorado. He briefly managed a semi-pro team in Trinidad, Colorado. 
At one of the newspapers where he worked, the spelling of his last name was changed from "Runyan" to "Runyon", a change he let stand. After failing in an attempt to organize a Colorado minor baseball league, which lasted less than a week, Runyon moved to New York City in 1910. In his first New York byline, the American editor dropped the "Alfred" and the name "Damon Runyon" appeared for the first time. For the next ten years, he covered the New York Giants and professional boxing for the New York American. He was the Hearst newspapers' baseball columnist for many years, beginning in 1911, and his knack for spotting the eccentric and the unusual, on the field or in the stands, is credited with revolutionizing the way baseball was covered. Perhaps as confirmation, Runyon was voted the 1967 J. G. Taylor Spink Award by the Baseball Writers' Association of America (BBWAA), for which he was honored at ceremonies at the National Baseball Hall of Fame in July 1968. He is also a member of the International Boxing Hall of Fame and is known for dubbing heavyweight champion James J. Braddock the "Cinderella Man". Runyon frequently contributed sports poems to the American on boxing and baseball themes and wrote numerous short stories and essays. If I have all the tears that are shed on Broadway by guys in love, I will have enough salt water to start an opposition ocean to the Atlantic and Pacific, with enough left over to run the Great Salt Lake out of business. But I wish to say I never shed any of these tears personally, because I am never in love, and furthermore, barring a bad break, I never expect to be in love, for the way I look at it love is strictly the old phedinkus, and I tell the little guy as much. from "Tobias the Terrible", collected in More than Somewhat (1937) Gambling, particularly on craps or horse races, was a common theme of Runyon's works, and he was a notorious gambler. One of his paraphrases from a line in Ecclesiastes ran: "The race is not always to the swift, nor the battle to the strong, but that's how the smart money bets." A heavy drinker as a young man, he seems to have quit drinking soon after arriving in New York, after his drinking nearly cost him the courtship of the woman who became his first wife, Ellen Egan. He remained a heavy smoker. His best friend was mobster accountant Otto Berman, and he incorporated Berman into several of his stories under the alias "Regret, the horse player". When Berman was killed in a hit on Berman's boss, Dutch Schultz, Runyon quickly assumed the role of damage control for his deceased friend, mostly by correcting erroneous press releases, including one that stated Berman was one of Schultz's gunmen, to which Runyon replied, "Otto would have been as effective a bodyguard as a two-year-old." While in New York City, Runyon courted and eventually married Ellen Egan. Their marriage produced two children, Mary and Damon Jr. Modern writers remark that "by contemporary standards, Runyon was a marginal husband and father." In 1928, Egan separated from Runyon permanently and moved to Bronxville with their children after hearing persistent rumors about her husband's infidelities. As it became subsequently known, Runyon, in 1916, was covering the border raids of Mexican bandit Pancho Villa as a reporter for the American newspaper owned by William Randolph Hearst. He had first met Villa in Texas while covering spring training of the state's teams. 
One afternoon while in Mexico, Runyon visited the Ciudad Juárez racetrack, where Villa was present, and placed a bet through a young messenger girl in Villa's entourage. The 14-year-old girl, whose name was Patrice Amati del Grande, erroneously placed Runyon's bet on a different horse that nonetheless won the race. She confided to the lucky bettor that she wanted to be a dancer when she grew up, and Runyon told her that if, instead, she attended school, for which he would pay, she could come to see him in New York after her graduation and he would get her a dancing job in the city; Runyon did indeed pay for her enrollment in the local convent school. In 1925, 19-year-old Grande came to New York City looking for Runyon and found him through the American's receptionist. The two became lovers and he found her work at local speakeasies. In 1928, after the separation between Runyon and Ellen Egan turned into a divorce, Runyon and Grande were married by his friend, city mayor Jimmy Walker. His former wife became an alcoholic and died in 1931 from a heart attack. In 1946, some time after Grande began an affair with a younger man, the couple divorced. In late 1946, the same year he and his second wife were divorced, Runyon died, at age 66, in New York City from the throat cancer that had been diagnosed two years earlier, in 1944, when he underwent an unsuccessful operation that left him practically unable to speak. His favorite cigarette brand was Turkish Ovals. His body was cremated, and his ashes were scattered from a DC-3 airplane over Broadway in Manhattan by Eddie Rickenbacker on December 18, 1946. This was an infringement of the law but widely approved. The family plot of Damon Runyon is located at Woodlawn Cemetery in The Bronx, New York. In his will, Runyon left his house in Florida, his racing stables, and the money from his insurance to his former second wife. He split the royalties from his works equally between his children and Grande. His daughter Mary was eventually institutionalized for alcoholism while his son Damon Jr., after working as a journalist in Washington, D.C., committed suicide in 1968. The English comedy writer Frank Muir comments that Runyon's plots were, in the manner of O. Henry, neatly constructed with professionally wrought endings, but their distinction lay in the manner of their telling, as the author invented a peculiar argot for his characters to speak. Runyon almost totally avoids the past tense (English humorist E.C. Bentley thought there was only one instance and was willing to "lay plenty of 6 to 5 that it is nothing but a misprint", but "was" appears in the short stories "The Lily of St Pierre" and "A Piece of Pie"; "had" appears in "The Lily of St Pierre", "Undertaker Song" and "Bloodhounds of Broadway"), and makes little use of the future tense, using the present for both. He also avoided the conditional, using instead the future indicative in situations that would normally require conditional. An example: "Now most any doll on Broadway will be very glad indeed to have Handsome Jack Madigan give her a tumble" (Guys and Dolls, "Social error"). Bentley comments that "there is a sort of ungrammatical purity about it [Runyon's resolute avoidance of the past tense], an almost religious exactitude." There is an homage to Runyon that makes use of this peculiarity ("Chronic Offender" by Spider Robinson), which involves a time machine and a man going by the name "Harry the Horse". 
He uses many slang terms (which go unexplained in his stories), such as: There are many recurring composite phrases such as: Bentley notes that Runyon's "telling use of the recurrent phrase and fixed epithet" demonstrates a debt to Homer. Runyon's stories also employ occasional rhyming slang, similar to the cockney variety but native to New York (e.g.: "Miss Missouri Martin makes the following crack one night to her: 'Well, I do not see any Simple Simon on your lean and linger.' This is Miss Missouri Martin's way of saying she sees no diamond on Miss Billy Perry's finger." (from "Romance in the Roaring Forties")). The comic effect of his style results partly from the juxtaposition of broad slang with mock pomposity. Women, when not "dolls", "Judies", "pancakes", "tomatoes", or "broads", may be "characters of a female nature", for example. He typically avoided contractions such as "don't" in the example above, which also contributes significantly to the humorously pompous effect. In one sequence, a gangster tells another character to do as he is told, or else "find another world in which to live". Runyon's short stories are told in the first person by a protagonist who is never named and whose role is unclear; he knows many gangsters and does not appear to have a job, but he does not admit to any criminal involvement, and seems to be largely a bystander. He describes himself as "being known to one and all as a guy who is just around". The radio program The Damon Runyon Theatre dramatized 52 of Runyon's works in 1949, and for these the protagonist was given the name "Broadway", although it was admitted that this was not his real name, much in the way "Harry the Horse" and "Sorrowful Jones" are aliases. There are many collections of Runyon's stories, in particular Runyon on Broadway and Runyon from First to Last. A publisher's note in the latter claims that the collection contains all of Runyon's short stories not included in Runyon on Broadway, but two Broadway stories originally published in Collier's Weekly are not in either collection: "Maybe a Queen" and "Leopard's Spots", both collected in More Guys And Dolls (1950). In addition, the radio show has a story, "Joe Terrace", that appears in More Guys and Dolls and in the August 29, 1936, issue of Collier's. It is one of his "Our Town" stories that does not appear in the In Our Town book, and the only episode of the show that is not a "Broadway" story; however, the action was changed in the show from Our Town to Broadway. The "Our Town" stories are short vignettes of life in a small town, largely based on Runyon's experiences. They are written in a simple, descriptive style and contain twists and odd endings based on the personalities of the people involved. Each story's title is the name of the principal character. Twenty-seven of them were published in the 1946 book In Our Town. Runyon on Broadway contains the following stories: Runyon from First to Last includes the following stories and sketches: In Our Town contains the following stories: The following "Our Town" stories were not included in In Our Town: Twenty of his stories became motion pictures. In 1938, his unproduced play Saratoga Chips became the basis of The Ritz Brothers film Straight, Place and Show. The Damon Runyon Theater radio series dramatized 52 of Runyon's short stories in weekly broadcasts running from October 1948 to September 1949 (with reruns until 1951). The series was produced by Alan Ladd's Mayfair Transcription Company for syndication to local radio stations. 
John Brown played the character "Broadway", who doubled as host and narrator. The cast also comprised Alan Reed, Luis Van Rooten, Joseph Du Val, Gerald Mohr, Frank Lovejoy, Herb Vigran, Sheldon Leonard, William Conrad, Jeff Chandler, Lionel Stander, Sidney Miller, Olive Deering and Joe De Santis. Pat O'Brien was initially engaged for the role of "Broadway". The original stories were adapted for the radio by Russell Hughes. "Broadway's New York had a crisis each week, though the streets had a rose-tinged aura", wrote radio historian John Dunning. "The sad shows then were all the sadder; plays like For a Pal had a special poignance. The bulk of Runyon's work had been untapped by radio, and the well was deep." Damon Runyon Theatre aired on CBS-TV from 1955 to 1956. Mike McShane told Runyon stories as monologues on British TV in 1994, and an accompanying book was released, both titled Broadway Stories. Three Wise Guys was a 2005 TV movie.
[ { "paragraph_id": 0, "text": "Alfred Damon Runyon (October 4, 1880 – December 10, 1946) was an American journalist and short-story writer.", "title": "" }, { "paragraph_id": 1, "text": "He was best known for his short stories celebrating the world of Broadway in New York City that grew out of the Prohibition era. To New Yorkers of his generation, a \"Damon Runyon character\" evoked a distinctive social type from Brooklyn or Midtown Manhattan. The adjective \"Runyonesque\" refers to this type of character and the type of situations and dialog that Runyon depicts.", "title": "" }, { "paragraph_id": 2, "text": "He spun humorous and sentimental tales of gamblers, hustlers, actors, and gangsters, few of whom go by \"square\" names, preferring instead colorful monikers such as \"Nathan Detroit\", \"Benny Southstreet\", \"Big Jule\", \"Harry the Horse\", \"Good Time Charley\", \"Dave the Dude\", or \"The Seldom Seen Kid\".", "title": "" }, { "paragraph_id": 3, "text": "His distinctive vernacular style is known as \"Runyonese\": a mixture of formal speech and colorful slang, almost always in the present tense, and always devoid of contractions. He is credited with coining the phrase \"Hooray Henry\", a term now used in British English to describe the upper-class version of a loud-mouthed, arrogant twit.", "title": "" }, { "paragraph_id": 4, "text": "Runyon's fictional world is also known to the general public through the musical Guys and Dolls based on two of his stories, \"The Idyll of Miss Sarah Brown\" and \"Blood Pressure\". The musical additionally borrows characters and story elements from a few other Runyon stories, most notably \"Pick The Winner\". The film Little Miss Marker (and its three remakes, Sorrowful Jones, 40 Pounds of Trouble and the 1980 Little Miss Marker) grew from his short story of the same name.", "title": "" }, { "paragraph_id": 5, "text": "Runyon was also a newspaper reporter, covering sports and general news for decades for various publications and syndicates owned by William Randolph Hearst. Already known for his fiction, he wrote a well-remembered \"present tense\" article on Franklin Delano Roosevelt's Presidential inauguration in 1933 for the Universal Service, a Hearst syndicate, which was merged with the co-owned International News Service in 1937.", "title": "" }, { "paragraph_id": 6, "text": "Damon Runyon was born Alfred Damon Runyan to Alfred Lee and Elizabeth (Damon) Runyan. His relatives in his birthplace of Manhattan, Kansas, included several newspapermen. His grandfather was a newspaper printer from New Jersey who had relocated to Manhattan, Kansas, in 1855, and his father was the editor of his newspaper in the town. In 1882 Runyon's father was forced to sell his newspaper, and the family moved westward. The family eventually settled in Pueblo, Colorado, in 1887, where Runyon spent the rest of his youth. By most accounts, he attended school only through the fourth grade. He began to work in the newspaper trade under his father in Pueblo. In present-day Pueblo, Runyon Field, the Damon Runyon Repertory Theater Company, and Runyon Lake are named in his honor.", "title": "Early life" }, { "paragraph_id": 7, "text": "In 1898, when still in his teens, Runyon enlisted in the US Army to fight in the Spanish–American War. While in the service, he was assigned to write for the Manila Freedom and Soldier's Letter.", "title": "Enlistment in the military" }, { "paragraph_id": 8, "text": "After military service, he worked for Colorado newspapers, beginning in Pueblo. 
His first job as a reporter was in September 1900, when he was hired by the Pueblo Star; he then worked in the Rocky Mountain area during the first decade of the 1900s: at the Denver Daily News, he served as \"sporting editor\" (today a \"sports editor\") and then as a staff writer. His expertise was in covering the semi-professional teams in Colorado. He briefly managed a semi-pro team in Trinidad, Colorado. At one of the newspapers where he worked, the spelling of his last name was changed from \"Runyan\" to \"Runyon\", a change he let stand.", "title": "Newspaper reporter" }, { "paragraph_id": 9, "text": "After failing in an attempt to organize a Colorado minor baseball league, which lasted less than a week, Runyon moved to New York City in 1910. In his first New York byline, the American editor dropped the \"Alfred\" and the name \"Damon Runyon\" appeared for the first time. For the next ten years, he covered the New York Giants and professional boxing for the New York American.", "title": "Newspaper reporter" }, { "paragraph_id": 10, "text": "He was the Hearst newspapers' baseball columnist for many years, beginning in 1911, and his knack for spotting the eccentric and the unusual, on the field or in the stands, is credited with revolutionizing the way baseball was covered. Perhaps as confirmation, Runyon was voted the 1967 J. G. Taylor Spink Award by the Baseball Writers' Association of America (BBWAA), for which he was honored at ceremonies at the National Baseball Hall of Fame in July 1968. He is also a member of the International Boxing Hall of Fame and is known for dubbing heavyweight champion James J. Braddock the \"Cinderella Man\". Runyon frequently contributed sports poems to the American on boxing and baseball themes and wrote numerous short stories and essays.", "title": "Newspaper reporter" }, { "paragraph_id": 11, "text": "If I have all the tears that are shed on Broadway by guys in love, I will have enough salt water to start an opposition ocean to the Atlantic and Pacific, with enough left over to run the Great Salt Lake out of business. But I wish to say I never shed any of these tears personally, because I am never in love, and furthermore, barring a bad break, I never expect to be in love, for the way I look at it love is strictly the old phedinkus, and I tell the little guy as much.", "title": "Newspaper reporter" }, { "paragraph_id": 12, "text": "from \"Tobias the Terrible\", collected in More than Somewhat (1937)", "title": "Newspaper reporter" }, { "paragraph_id": 13, "text": "Gambling, particularly on craps or horse races, was a common theme of Runyon's works, and he was a notorious gambler. One of his paraphrases from a line in Ecclesiastes ran: \"The race is not always to the swift, nor the battle to the strong, but that's how the smart money bets.\"", "title": "Gambling" }, { "paragraph_id": 14, "text": "A heavy drinker as a young man, he seems to have quit drinking soon after arriving in New York, after his drinking nearly cost him the courtship of the woman who became his first wife, Ellen Egan. He remained a heavy smoker.", "title": "Gambling" }, { "paragraph_id": 15, "text": "His best friend was mobster accountant Otto Berman, and he incorporated Berman into several of his stories under the alias \"Regret, the horse player\". 
When Berman was killed in a hit on Berman's boss, Dutch Schultz, Runyon quickly assumed the role of damage control for his deceased friend, mostly by correcting erroneous press releases, including one that stated Berman was one of Schultz's gunmen, to which Runyon replied, \"Otto would have been as effective a bodyguard as a two-year-old.\"", "title": "Gambling" }, { "paragraph_id": 16, "text": "While in New York City, Runyon courted and eventually married Ellen Egan. Their marriage produced two children, Mary and Damon Jr. Modern writers remark that \"by contemporary standards, Runyon was a marginal husband and father.\" In 1928, Egan separated from Runyon permanently and moved to Bronxville with their children after hearing persistent rumors about her husband's infidelities. As it became subsequently known, Runyon, in 1916, was covering the border raids of Mexican bandit Pancho Villa as a reporter for the American newspaper owned by William Randolph Hearst. He had first met Villa in Texas while covering spring training of the state's teams. One afternoon while in Mexico, Runyon visited the Ciudad Juárez racetrack, where Villa was present, and placed a bet through a young messenger girl in Villa's entourage. The 14-year-old girl, whose name was Patrice Amati del Grande, erroneously placed Runyon's bet on a different horse that nonetheless won the race. She confided to the lucky bettor that she wanted to be a dancer when she grew up, and Runyon told her that if, instead, she attended school, for which he would pay, she could come to see him in New York after her graduation and he would get her a dancing job in the city; Runyon did indeed pay for her enrollment in the local convent school.", "title": "Personal life" }, { "paragraph_id": 17, "text": "In 1925, 19-year-old Grande came to New York City looking for Runyon and found him through the American's receptionist. The two became lovers and he found her work at local speakeasies. In 1928, after the separation between Runyon and Ellen Egan turned into a divorce, Runyon and Grande were married by his friend, city mayor Jimmy Walker. His former wife became an alcoholic and died in 1931 from a heart attack. In 1946, some time after Grande began an affair with a younger man, the couple divorced.", "title": "Personal life" }, { "paragraph_id": 18, "text": "In late 1946, the same year he and his second wife were divorced, Runyon died, at age 66, in New York City from the throat cancer that had been diagnosed two years earlier, in 1944, when he underwent an unsuccessful operation that left him practically unable to speak. His favorite cigarette brand was Turkish Ovals.", "title": "Death" }, { "paragraph_id": 19, "text": "His body was cremated, and his ashes were scattered from a DC-3 airplane over Broadway in Manhattan by Eddie Rickenbacker on December 18, 1946. This was an infringement of the law but widely approved. The family plot of Damon Runyon is located at Woodlawn Cemetery in The Bronx, New York.", "title": "Death" }, { "paragraph_id": 20, "text": "In his will, Runyon left his house in Florida, his racing stables, and the money from his insurance to his former second wife. He split the royalties from his works equally between his children and Grande. 
His daughter Mary was eventually institutionalized for alcoholism while his son Damon Jr., after working as a journalist in Washington, D.C., committed suicide in 1968.", "title": "Death" }, { "paragraph_id": 21, "text": "The English comedy writer Frank Muir comments that Runyon's plots were, in the manner of O. Henry, neatly constructed with professionally wrought endings, but their distinction lay in the manner of their telling, as the author invented a peculiar argot for his characters to speak. Runyon almost totally avoids the past tense (English humorist E.C. Bentley thought there was only one instance and was willing to \"lay plenty of 6 to 5 that it is nothing but a misprint\", but \"was\" appears in the short stories \"The Lily of St Pierre\" and \"A Piece of Pie\"; \"had\" appears in \"The Lily of St Pierre\", \"Undertaker Song\" and \"Bloodhounds of Broadway\"), and makes little use of the future tense, using the present for both. He also avoided the conditional, using instead the future indicative in situations that would normally require conditional. An example: \"Now most any doll on Broadway will be very glad indeed to have Handsome Jack Madigan give her a tumble\" (Guys and Dolls, \"Social error\"). Bentley comments that \"there is a sort of ungrammatical purity about it [Runyon's resolute avoidance of the past tense], an almost religious exactitude.\" There is an homage to Runyon that makes use of this peculiarity (\"Chronic Offender\" by Spider Robinson), which involves a time machine and a man going by the name \"Harry the Horse\".", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 22, "text": "He uses many slang terms (which go unexplained in his stories), such as:", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 23, "text": "There are many recurring composite phrases such as:", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 24, "text": "Bentley notes that Runyon's \"telling use of the recurrent phrase and fixed epithet\" demonstrates a debt to Homer.", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 25, "text": "Runyon's stories also employ occasional rhyming slang, similar to the cockney variety but native to New York (e.g.: \"Miss Missouri Martin makes the following crack one night to her: 'Well, I do not see any Simple Simon on your lean and linger.' This is Miss Missouri Martin's way of saying she sees no diamond on Miss Billy Perry's finger.\" (from \"Romance in the Roaring Forties\")).", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 26, "text": "The comic effect of his style results partly from the juxtaposition of broad slang with mock pomposity. Women, when not \"dolls\", \"Judies\", \"pancakes\", \"tomatoes\", or \"broads\", may be \"characters of a female nature\", for example. He typically avoided contractions such as \"don't\" in the example above, which also contributes significantly to the humorously pompous effect. In one sequence, a gangster tells another character to do as he is told, or else \"find another world in which to live\".", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 27, "text": "Runyon's short stories are told in the first person by a protagonist who is never named and whose role is unclear; he knows many gangsters and does not appear to have a job, but he does not admit to any criminal involvement, and seems to be largely a bystander. 
He describes himself as \"being known to one and all as a guy who is just around\". The radio program The Damon Runyon Theatre dramatized 52 of Runyon's works in 1949, and for these the protagonist was given the name \"Broadway\", although it was admitted that this was not his real name, much in the way \"Harry the Horse\" and \"Sorrowful Jones\" are aliases.", "title": "Literary style – the \"Broadway\" stories" }, { "paragraph_id": 28, "text": "There are many collections of Runyon's stories, in particular Runyon on Broadway and Runyon from First to Last. A publisher's note in the latter claims that the collection contains all of Runyon's short stories not included in Runyon on Broadway, but two Broadway stories originally published in Collier's Weekly are not in either collection: \"Maybe a Queen\" and \"Leopard's Spots\", both collected in More Guys And Dolls (1950). In addition, the radio show has a story, \"Joe Terrace\", that appears in More Guys and Dolls and in the August 29, 1936, issue of Collier's. It is one of his \"Our Town\" stories that does not appear in the In Our Town book, and the only episode of the show that is not a \"Broadway\" story; however, the action was changed in the show from Our Town to Broadway.", "title": "Literary works" }, { "paragraph_id": 29, "text": "The \"Our Town\" stories are short vignettes of life in a small town, largely based on Runyon's experiences. They are written in a simple, descriptive style and contain twists and odd endings based on the personalities of the people involved. Each story's title is the name of the principal character. Twenty-seven of them were published in the 1946 book In Our Town.", "title": "Literary works" }, { "paragraph_id": 30, "text": "Runyon on Broadway contains the following stories:", "title": "Literary works" }, { "paragraph_id": 31, "text": "Runyon from First to Last includes the following stories and sketches:", "title": "Literary works" }, { "paragraph_id": 32, "text": "In Our Town contains the following stories:", "title": "Literary works" }, { "paragraph_id": 33, "text": "The following \"Our Town\" stories were not included in In Our Town:", "title": "Literary works" }, { "paragraph_id": 34, "text": "Twenty of his stories became motion pictures.", "title": "Literary works" }, { "paragraph_id": 35, "text": "In 1938, his unproduced play Saratoga Chips became the basis of The Ritz Brothers film Straight, Place and Show.", "title": "Literary works" }, { "paragraph_id": 36, "text": "The Damon Runyon Theater radio series dramatized 52 of Runyon's short stories in weekly broadcasts running from October 1948 to September 1949 (with reruns until 1951). The series was produced by Alan Ladd's Mayfair Transcription Company for syndication to local radio stations. John Brown played the character \"Broadway\", who doubled as host and narrator. The cast also comprised Alan Reed, Luis Van Rooten, Joseph Du Val, Gerald Mohr, Frank Lovejoy, Herb Vigran, Sheldon Leonard, William Conrad, Jeff Chandler, Lionel Stander, Sidney Miller, Olive Deering and Joe De Santis. Pat O'Brien was initially engaged for the role of \"Broadway\". The original stories were adapted for the radio by Russell Hughes.", "title": "Literary works" }, { "paragraph_id": 37, "text": "\"Broadway's New York had a crisis each week, though the streets had a rose-tinged aura\", wrote radio historian John Dunning. \"The sad shows then were all the sadder; plays like For a Pal had a special poignance. 
The bulk of Runyon's work had been untapped by radio, and the well was deep.\"", "title": "Literary works" }, { "paragraph_id": 38, "text": "Damon Runyon Theatre aired on CBS-TV from 1955 to 1956.", "title": "Literary works" }, { "paragraph_id": 39, "text": "Mike McShane told Runyon stories as monologues on British TV in 1994, and an accompanying book was released, both titled Broadway Stories.", "title": "Literary works" }, { "paragraph_id": 40, "text": "Three Wise Guys was a 2005 TV movie.", "title": "Literary works" } ]
Alfred Damon Runyon was an American journalist and short-story writer. He was best known for his short stories celebrating the world of Broadway in New York City that grew out of the Prohibition era. To New Yorkers of his generation, a "Damon Runyon character" evoked a distinctive social type from Brooklyn or Midtown Manhattan. The adjective "Runyonesque" refers to this type of character and the type of situations and dialog that Runyon depicts. He spun humorous and sentimental tales of gamblers, hustlers, actors, and gangsters, few of whom go by "square" names, preferring instead colorful monikers such as "Nathan Detroit", "Benny Southstreet", "Big Jule", "Harry the Horse", "Good Time Charley", "Dave the Dude", or "The Seldom Seen Kid". His distinctive vernacular style is known as "Runyonese": a mixture of formal speech and colorful slang, almost always in the present tense, and always devoid of contractions. He is credited with coining the phrase "Hooray Henry", a term now used in British English to describe the upper-class version of a loud-mouthed, arrogant twit. Runyon's fictional world is also known to the general public through the musical Guys and Dolls based on two of his stories, "The Idyll of Miss Sarah Brown" and "Blood Pressure". The musical additionally borrows characters and story elements from a few other Runyon stories, most notably "Pick The Winner". The film Little Miss Marker grew from his short story of the same name. Runyon was also a newspaper reporter, covering sports and general news for decades for various publications and syndicates owned by William Randolph Hearst. Already known for his fiction, he wrote a well-remembered "present tense" article on Franklin Delano Roosevelt's Presidential inauguration in 1933 for the Universal Service, a Hearst syndicate, which was merged with the co-owned International News Service in 1937.
2002-01-14T22:11:17Z
2023-12-29T17:40:13Z
[ "Template:1968 Baseball HOF", "Template:Short description", "Template:Use mdy dates", "Template:Infobox person", "Template:Col-2", "Template:Cite book", "Template:IMDb name", "Template:Rp", "Template:Col-begin", "Template:External media", "Template:Librivox author", "Template:Authority control", "Template:Col-end", "Template:Cite news", "Template:J. G. Taylor Spink Award", "Template:Lead too long", "Template:Col-3", "Template:ASIN", "Template:FadedPage", "Template:Quote box", "Template:Reflist", "Template:Commons category", "Template:Wikiquote", "Template:Cite web", "Template:ISBN", "Template:Webarchive", "Template:Internet Archive author", "Template:IBDB name" ]
https://en.wikipedia.org/wiki/Damon_Runyon
9,129
Don Tennant
Donald G. Tennant (November 23, 1922 – December 8, 2001) was an American advertising agency executive. He worked at the Leo Burnett agency in Chicago, Illinois. The agency placed anthropomorphic faces of 'critters' on packaged goods. Tennant was in charge of the Marlboro account and invented the Marlboro Man.
[ { "paragraph_id": 0, "text": "Donald G. Tennant (November 23, 1922 – December 8, 2001) was an American advertising agency executive.", "title": "" }, { "paragraph_id": 1, "text": "He worked at the Leo Burnett agency in Chicago, Illinois. The agency placed anthropomorphic faces of 'critters' on packaged goods.Tennant was in charge of the Marlboro account and invented the Marlboro Man.", "title": "" }, { "paragraph_id": 2, "text": "", "title": "References" } ]
Donald G. Tennant was an American advertising agency executive. He worked at the Leo Burnett agency in Chicago, Illinois. The agency placed anthropomorphic faces of 'critters' on packaged goods. Tennant was in charge of the Marlboro account and invented the Marlboro Man.
2022-03-26T16:19:00Z
[ "Template:Cite web", "Template:Cite encyclopedia", "Template:US-business-bio-1920s-stub", "Template:Short description", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Don_Tennant
9,130
Devo
Devo (/ˈdiːvoʊ/, originally /diːˈvoʊ/) is an American new wave band from Akron, Ohio, formed in 1973. Their classic line-up consisted of two sets of brothers, the Mothersbaughs (Mark and Bob) and the Casales (Gerald and Bob), along with Alan Myers. The band had a No. 14 Billboard chart hit in 1980 with the single "Whip It", the song that gave the band mainstream popularity. Devo's music and visual presentation (including stage shows and costumes) mingle kitsch science fiction themes, deadpan surrealist humor and mordantly satirical social commentary. The band's namesake, the tongue-in-cheek social theory of "de-evolution", was an integral concept in their early work, which was marked by experimental and dissonant art punk that merged rock music with electronics. Their output in the 1980s embraced synth-pop and a more mainstream, less conceptual style, though the band's satirical and quirky humor remained intact. Their music has proven influential on subsequent movements, particularly on new wave, industrial, and alternative rock artists. Devo (most enthusiastically Gerald Casale) was also a pioneer of the music video format. The name Devo comes from the concept of "de-evolution" and the band's related idea that instead of continuing to evolve, mankind had begun to regress, as evidenced by the dysfunction and herd mentality of American society. In the late 1960s, this idea was developed as a joke by Kent State University art students Gerald Casale and Bob Lewis, who created a number of satirical art pieces in a devolution vein. At this time, Casale had also performed with the local band 15-60-75 (The Numbers Band). Around 1970 they met Mark Mothersbaugh, a talented keyboardist who had been playing with the band Flossy Bobbitt. Mothersbaugh brought a more humorous feel to the band, introducing them to material like the pamphlet "Jocko Homo Heavenbound", which includes an illustration of a winged devil labelled "D-EVOLUTION" and would later inspire the song "Jocko Homo". The "joke" about de-evolution became serious following the Kent State massacre of May 4, 1970. This event would be cited multiple times as the impetus for forming the band Devo. Throughout the band's career, they have often been considered a "joke band" by the music press. The first form of Devo was the "Sextet Devo" which performed at the 1973 Kent State performing arts festival. It included Casale, Lewis and Mothersbaugh, as well as Gerald's brother Bob Casale on guitar, and friends Rod Reisman and Fred Weber on drums and vocals, respectively. This performance was filmed and an excerpt was later included on the home video release The Complete Truth About De-Evolution. This lineup performed only once. Devo returned to perform in the Student Governance Center (featured prominently in the film) at the 1974 Creative Arts Festival with a lineup including the Casale brothers, Bob Lewis, Mark Mothersbaugh, and Jim Mothersbaugh on drums. The band continued to perform, generally as a quartet, but with a fluid lineup including Mark's brothers Bob Mothersbaugh and Jim Mothersbaugh. Bob played electric guitar, and Jim provided percussion using a set of home-made electronic drums. Their first two music videos, "Secret Agent Man" and "Jocko Homo", which both appeared in The Truth About De-Evolution, were filmed in Akron and Cuyahoga Falls, Ohio, the hometown of most members. This lineup of Devo lasted until late 1975 when Jim left the band. Lewis would sometimes play guitar during this period, but mainly stayed in a managerial role.
In concert, Devo would often perform in the guise of theatrical characters, such as Booji Boy and the Chinaman. Live concerts from this period were often confrontational, and would remain so until 1977. A recording of an early Devo performance from 1975 with the quartet lineup appears on Devo Live: The Mongoloid Years (1992), ending with the promoters unplugging Devo's equipment. Following Jim Mothersbaugh's departure, Bob Mothersbaugh found a new drummer, Alan Myers, who played on a conventional, acoustic drum kit. Casale re-recruited his brother Bob Casale, and the lineup of Devo remained the same for nearly ten years. Devo gained some fame in 1976 when their short film The Truth About De-Evolution, directed by Chuck Statler, won a prize at the Ann Arbor Film Festival. This attracted the attention of David Bowie, who began work to get the band a recording contract with Warner Music Group. In 1977, Devo were asked by Neil Young to participate in the making of his film Human Highway. Released in 1982, the film featured the band as "nuclear garbagemen". The band members were asked to write their own parts and Mark Mothersbaugh scored and recorded much of the soundtrack, his first of many. In March 1977, Devo released their first single, "Mongoloid" backed with "Jocko Homo", the B-side of which came from the soundtrack to The Truth About De-Evolution, on their independent label Booji Boy. This was followed by a cover of the Rolling Stones' "(I Can't Get No) Satisfaction". In 1978, the B Stiff EP was released by British independent label Stiff, which included the single "Be Stiff" plus two previous Booji Boy releases. "Mechanical Man", a 4-track 7-inch extended play (EP) of demos that appeared to be a bootleg but was actually put out by the band, was also released that year. Recommendations from David Bowie and Iggy Pop enabled Devo to secure a recording contract with Warner Bros. in 1978. After Bowie backed out of the business deal due to previous commitments, their first album, Q: Are We Not Men? A: We Are Devo! was produced by Brian Eno and featured re-recordings of their previous singles "Mongoloid" and "(I Can't Get No) Satisfaction". On October 14, 1978, Devo gained national exposure with an appearance on the late-night show Saturday Night Live, a week after the Rolling Stones, performing "(I Can't Get No) Satisfaction" and "Jocko Homo". The band followed up with Duty Now for the Future in 1979, which moved the band more towards electronic instrumentation. While not as successful as their first album, it did produce some fan favorites with the songs "Blockhead" and "The Day My Baby Gave Me a Surprize" [sic], as well as a cover of the Johnny Rivers hit "Secret Agent Man". "Secret Agent Man" had been recorded first in 1974 for Devo's first film and performed live as early as 1976. In 1979, Devo traveled to Japan for the first time, and a live show from this tour was partially recorded. Devo appeared on Don Kirshner's Rock Concert in 1979, performing "Blockhead", "Secret Agent Man", "Uncontrollable Urge", and "Mongoloid". Also in 1979, Rhino, in conjunction with the Los Angeles radio station KROQ-FM, released Devotees, a tribute album. It contained a set of covers of Devo songs interspersed with renditions of popular songs in Devo's style. Devo actively embraced the parody religion Church of the SubGenius. In concert, Devo sometimes performed as their own opening act, pretending to be a Christian soft rock band called "Dove (the Band of Love)", which is an anagram of "Devo".
They appeared as Dove in the 1980 televangelism spoof film Pray TV. Devo gained a new level of visibility with 1980's Freedom of Choice. This album included their best-known hit, "Whip It", which quickly became a Top 40 hit. The album moved to an almost completely electronic sound, with the exception of acoustic drums and Bob Mothersbaugh's guitar. The tour for Freedom of Choice was ambitious for the band, including dates in Japan, the United Kingdom, France, Germany, Italy, the Netherlands, and Canada. The band used a minimalist set including large custom light boxes which could be laid on their back to form a second, smaller stage during the second half of the set. Other popular songs from Freedom of Choice were "Girl U Want", the title track, and "Gates of Steel". The band released popular music videos for "Whip It" and "Girl U Want". Devo made two appearances on the TV show Fridays in 1980, as well as on Don Kirshner's Rock Concert, American Bandstand, and other shows. The band members often wore red, terraced energy dome hats as part of their stage outfits. The dome was first worn during the band's Freedom of Choice campaign of 1980. It reappeared in the 1981, 1982, and 1988 tours, as well as in most of their performances since 1997. Devo also recorded two albums of their own songs as elevator music for their fan club, Club Devo, released on cassette in 1981 and 1984. These were later re-released on the album E-Z Listening Disc (1987), with all but two of the original Club Devo songs. These songs were often played as house music before Devo concerts. In August 1981, the band's DEV-O Live EP spent three weeks at the top of the Australian charts. In 1982, they toured Australia and appeared on the TV show Countdown. Devo enjoyed continued popularity in Australia, where the nationally broadcast 1970s–1980s pop TV show Countdown was one of the first programs in the world to broadcast their video clips. They were given consistent radio support by Sydney-based non-commercial rock station Double Jay (2JJ) and Brisbane-based independent community station Triple Zed (4ZZZ), two of the first rock stations outside America to play their recordings. The late-night music program Nightmoves aired The Truth About De-Evolution. In 1981, Devo contributed a cover of "Working in the Coal Mine", recorded during the Freedom of Choice sessions, to the film Heavy Metal. They offered the song to be used in the film when Warner Bros. refused to include it on the album. Warner then included it as an independent bonus single accompanying their 1981 release, New Traditionalists. For this album Devo wore self-described "Utopian Boy Scout uniforms" topped with a "New Traditionalist Pomp"—a plastic half-wig modeled on the hairstyle of John F. Kennedy. Among the singles from the album was "Through Being Cool", written as a reaction to their new-found fame from "Whip It" and seen as a response to new fans who had misinterpreted the message behind the hit song. The album's accompanying tour featured the band performing an intensely physical show with treadmills and a large Greek temple set. That same year they served as Toni Basil's backing band on Word of Mouth, her debut album, which included versions of three Devo songs, recorded with Basil singing lead. Oh, No! It's Devo followed in 1982. Produced by Roy Thomas Baker, the album featured a more synth-pop-oriented sound than its predecessors.
According to Gerald Casale, the album's sound was inspired by reviewers alternately describing them as both "fascists" and "clowns". The album's tour featured the band performing seven songs in front of a 12-foot high rear-projection screen with synchronized video, an image recreated using blue screen effects in the album's accompanying music videos. Devo also contributed two songs, "Theme from Doctor Detroit" and "Luv-Luv", to the 1983 Dan Aykroyd film Doctor Detroit, and produced a music video for "Theme from Doctor Detroit" featuring clips from the film interspersed with live-action segments. The band's sixth studio album, Shout (1984), which featured extensive use of the Fairlight CMI digital sampling synthesizer, was received poorly, and the expensive music video they'd produced for their cover of the Jimi Hendrix Experience's "Are You Experienced?" was criticized by some as being "disrespectful", all of which caused Warner Bros. to buy out the remainder of Devo's contract. Shortly thereafter, Myers left the band, citing creative unfulfillment. In the interim, Mark Mothersbaugh began composing music for the TV show Pee-wee's Playhouse and released an elaborately packaged solo cassette, Musik for Insomniaks, which was later expanded and released as two CDs in 1988. In 1987, Devo re-formed with former Sparks drummer David Kendrick to replace Myers. Their first project was a soundtrack for the horror film Slaughterhouse Rock (1988), starring Toni Basil. The band released the album Total Devo in 1988, on Enigma Records. This album included two songs used in the Slaughterhouse Rock soundtrack. The song "Baby Doll" was used that same year in the comedy film Tapeheads, with newly recorded Swedish lyrics, and was credited to (and shown in a music video by) a fictitious Swedish band called Cube-Squared. Devo followed this up with a world tour, and released the live album Now It Can Be Told: Devo at the Palace in 1989. However, Total Devo was not a commercial success and received poor critical reviews. In 1989, members of Devo were involved in the project Visiting Kids, releasing a self-titled EP on the New Rose label in 1990. The band featured Mark's then-wife Nancye Ferguson, as well as David Kendrick, Bob Mothersbaugh, and Bob's daughter Alex Mothersbaugh. Their record was produced by Bob Casale and Mark Mothersbaugh, and Mark also co-wrote some of the songs. Visiting Kids appeared on the soundtrack to the film Rockula, as well as on Late Night with David Letterman. A promotional video was filmed for the song "Trilobites". In 1990, Smooth Noodle Maps, Devo's last album for twenty years, was released. It too was a critical and commercial failure; the album and its two singles, "Stuck in a Loop" and "Post Post-Modern Man", were Devo's worst-selling efforts, and all failed to appear on the U.S. charts. Devo launched a concert tour in support of the album, but poor ticket sales and the bankruptcy and dissolution of Enigma Records, which was responsible for organizing and financing the tour, caused it to be cancelled part way through. In 1990, the members of Devo, bar Bob Mothersbaugh, appeared in the film The Spirit of '76. Two albums of demo recordings from 1974–1977, namely Hardcore Devo: Volume One (1990) and Hardcore Devo: Volume Two (1991), were released on Rykodisc, as well as an album of early live recordings, Devo Live: The Mongoloid Years (1992). The band played one final show in March 1991 before breaking up.
In an interview concerning their 1996 computer game Devo Presents Adventures of the Smart Patrol, Mark Mothersbaugh explained, "Around '88, '89, '90 maybe, we did our last tour in Europe, and it was kind of at that point. We were watching This Is Spinal Tap on the bus and said, 'Oh my God, that's our life.' And we just said, 'Things have to change.' So we kind of agreed from there that we wouldn't do live shows anymore." Following the split, Mark Mothersbaugh established Mutato Muzika, a commercial music production studio, along with Bob Mothersbaugh and Bob Casale. Mark Mothersbaugh intended to further his career as a composer, while Bob Casale worked as an audio engineer. Mothersbaugh has had considerable success writing and producing music for television programs, including Pee-wee's Playhouse and Rugrats, video games, cartoons, and films, where he worked alongside director Wes Anderson. David Kendrick also worked at Mutato for a period during the early 1990s. Gerald Casale began a career as a director of music videos and commercials, working with bands including Rush, Soundgarden, Silverchair and the Foo Fighters. In the wake of Devo's dissolution, Bob Mothersbaugh attempted to start a solo career with The Bob I Band, recording an album that was never released. The tapes for this are now lost, though a bootleg recording of the band in concert exists and can be obtained through the bootleg aggregator Booji Boy's Basement. While they did not release any studio albums during this period, Devo sporadically reconvened to record a number of songs for various films and compilations, including a new recording of "Girl U Want" on the soundtrack to the 1995 film Tank Girl and a cover of the Nine Inch Nails hit "Head Like a Hole" for the 1996 North American version of the film Supercop. In January 1996, Devo performed a reunion concert at the Sundance Film Festival in Park City, Utah. The band performed on part of the 1996 Lollapalooza tour in the rotating Mystery Spot. On these tours and most subsequent tours, Devo performed a set-list mostly composed of material from between 1978 and 1982, ignoring their Enigma Records-era material. Also in 1996, Devo released a multimedia CD-ROM adventure game, Adventures of the Smart Patrol with Inscape. The game was not a success, but the Lollapalooza tour was received well enough to allow Devo to return in 1997 as a headliner. Devo performed sporadically from 1997 onwards. In 1999, the Oh, No! It's Devo era outtakes "Faster and Faster" and "One Dumb Thing", as well as the Shout era outtake "Modern Life", were restored, completed and used in the video game Interstate '82, developed and released by Activision. Also that year, Mothersbaugh started the Devo side-project The Wipeouters, named after their junior high band, featuring himself (keyboards, organ), Bob Mothersbaugh (guitar), Bob Casale (guitar), and Mutato Muzika composer Josh Mancell (drums). The Wipeouters performed the theme song to the Nickelodeon animated series Rocket Power, and in 2001 they released an album of surf rock material, titled P'Twaaang!!!. Around this same time, Devo's online fandom continued to grow, leading to 'Devotional', a Devo fan convention held annually in Cleveland, Ohio. The festival was most recently held in September 2022. In 2005, Devo recorded a new version of "Whip It" to be used in Swiffer television commercials, a decision they have said they regretted. During an interview with the Dallas Observer, Gerald Casale said, "It's just aesthetically offensive.
It's got everything a commercial that turns people off has." The song "Beautiful World" was also used in a re-recorded form for an advertisement for Target stores. Due to rights issues with their back catalog, Devo has re-recorded songs for films and advertisements. In 2005, Gerald Casale announced his "solo" project, Jihad Jerry & the Evildoers (the Evildoers, including the other members of Devo), and released the first EP, Army Girls Gone Wild in 2006. A full-length album, Mine Is Not a Holy War, was released on September 12, 2006, after a several-month delay. It featured mostly new material, plus re-recordings of four obscure Devo songs: "I Need a Chick" and "I Been Refused" (from Hardcore Devo: Volume Two), "Find Out" (which appeared on the single and EP of "Peek-a-Boo!" in 1982), and "Beehive" (which was recorded by the band in 1974, whereupon it was apparently abandoned, with the exception of one appearance at a special show in 2001). Devo continued to tour actively in 2005 and 2006, unveiling a new stage show at appearances in October 2006, with the Jihad Jerry character performing "Beautiful World" as an encore. Also in 2006, Devo worked on a project with Disney known as Devo 2.0. A band of child performers was assembled and re-recorded Devo songs. The Akron Beacon Journal reported: "Devo recently finished a new project in cahoots with Disney called Devo 2.0, which features the band playing old songs and two new ones with vocals provided by children. Their debut album, a two disc CD/DVD combo entitled DEV2.0, was released on March 14, 2006. The lyrics of some of the songs were changed for family-friendly airplay, which has been claimed by the band to be a play on irony of the messages of their classic hits." In an April 2007 interview, Gerald Casale mentioned a tentative project for a biographical film about Devo's early days. According to Casale, a script was supposedly in development, called The Beginning Was the End. Devo played their first European tour since 1990 in the summer of 2007, including a performance at Festival Internacional de Benicàssim. In December 2007, Devo released their first new single since 1990, "Watch Us Work It", which was featured in a commercial for Dell. The song features a sampled drum track from the New Traditionalists song "The Super Thing". Casale said that the song was chosen from a batch that the band was working on, and that it was the closest the band had been to releasing a new album. Devo performed at the South by Southwest (SXSW) festival in March 2009, unveiling a new stage show with synchronized video backdrops (similar to the 1982 tour), new costumes, and three new songs: "Don't Shoot, I'm a Man!", "What We Do", and "Fresh". On September 16, Warner Bros. and Devo announced re-releases of Q: Are We Not Men? A: We Are Devo! and Freedom of Choice, as well as a subsequent tour, where they would perform both albums in their entirety. A new album, Something for Everybody, was eventually released on June 15, 2010, preceded by a 12-inch single of "Fresh"/"What We Do" on June 10. Devo was awarded the first Moog Innovator Award on October 29, during Moogfest 2010 in Asheville, North Carolina. The Moog Innovator Award has been said to celebrate "pioneering artists whose genre-defying work exemplifies the bold, innovative spirit of Bob Moog". Devo was scheduled to perform at Moogfest, but Bob Mothersbaugh severely injured his hand three days prior, and the band was forced to cancel.
Mark Mothersbaugh and Gerald Casale collaborated with Austin-based band the Octopus Project to perform "Girl U Want" and "Beautiful World" at the event instead. The band split from Warner Bros. in 2012 and launched a new "post-Warner Brothers" website that would offer "new protective gear" and "unreleased material from the archives in vinyl disc format". In August of that year, the band released a single called "Don't Roof Rack Me, Bro (Seamus Unleashed)", dedicated to the Republican Party presidential candidate Mitt Romney's former pet dog Seamus. The title refers to the Mitt Romney dog incident of 1983, when Romney travelled twelve hours with the dog in a crate on his car's roof rack. On June 24, 2013, the group's former drummer Alan Myers died of stomach cancer in Los Angeles, California. He was 58. News reports at the time of his death incorrectly cited brain cancer as the cause. One month later, Devo released their Something Else for Everybody album, which collected "Unreleased Demos and Focus Group Rejects" from 2006–2009. Gerald Casale had earlier teased the album in a 2012 interview with Billboard magazine. On February 17, 2014, founding member Bob Casale died of heart failure at age 61. Shortly afterwards, the group, a quartet for the first time in 38 years, embarked on their Hardcore Devo Tour, a ten-show tour across the US and Canada between June 18 and July 2, 2014. The tour focused on material the group had written before the release of their first album, which was largely written when the group were a quartet. Partial proceeds for the ten shows went to support Bob Casale's family after his sudden death. The show featured the group performing material written during 1974–1977. The June 28 Oakland show was filmed and later released as the concert film Hardcore Devo Live!, released on Blu-ray, DVD, and Video on Demand on February 10, 2015, accompanied by CD and double-vinyl audio releases. Immediately following the Hardcore tour, Devo continued to tour a 'greatest hits' style show. Josh Hager joined the band at this time, playing both keyboards and guitar. On April 29, 2016, Devo performed at Will Ferrell and Chad Smith's Red Hot Benefit. On May 22, Robert Mothersbaugh Sr., father of Mark, Bob, and Jim Mothersbaugh, died. Robert portrayed General Boy in various Devo films. In 2017, the official Twitter account for the Are We Not Men? documentary film, which had been in production since 2009, stated that "the film was finished years ago" and that "mm [Mark Mothersbaugh] is blocking its release". Jeff Winner, who was consulting producer for the Devo documentary, went on to state that he and director Tony Pemberton had "delivered the film that was contracted, and on schedule. It's now in the hands of the band to decide when/how it's released/distributed." Devo headlined the Burger Boogaloo festival in Oakland, California, on June 30, 2018, with comedian and former Trenchmouth drummer Fred Armisen on drums. On October 12, 2020, Devo performed at the Desert Daze festival, with Jeff Friedl on drums. In January 2021, Funko released two Devo Funko Pops inspired by the group's "Whip It" and "Satisfaction" music videos. One month later, the band starred in Devolution: A Devo Theory, a television documentary based entirely on their theory of devolution, which had been completed in 2020. In September, Devo performed a short three-date tour of the USA, including a show at Riot Fest.
These performances marked the return of Josh Freese on drums, who had not played live with Devo in over five years. Shortly afterwards, Gerald Casale announced the release of an official Devo potato-based vodka through the Trust Me Vodka brand. The packaging for the drink was themed around Devo imagery and featured original artwork. It was signed by the group's co-founders Gerald Casale and Mark Mothersbaugh, as well as Bob Mothersbaugh. On October 24, 2021, John Hinckley Jr. posted on Twitter that he had not received any royalties for Devo's song "I Desire" in 35 years. "I Desire" had been written by Mark Mothersbaugh and Gerald Casale for their 1982 album Oh, No! It's Devo, inspired by a poem written by Hinckley that was published in a tabloid newspaper, following his attempt to assassinate then-president Ronald Reagan. Hinckley had been adequately credited for his contributions through a co-writing credit on all releases. Casale claimed that Devo were not at fault, as it was the publishing company's duty to pay him, not the band's. Devotional 2021, an annual convention for Devo fans, was held on November 5–6, with the annual 5KDEVO race taking place on the 7th. On November 15, it was announced that Devo would perform a one-off show at the Rooftop at Pier 17 on May 18, 2022, in order to make up for their cancelled Radio City Music Hall gig in September 2021. Tickets went on sale on the 18th. In December, it was announced that rare images of Devo would feature in a book of rock photography from 1977–1980 titled HARD + FAST, to be released on February 1, 2022. The book will also include a 7-inch single of live recordings from the band, which were also released on SoundCloud prior to the book's release. The recordings were dated 1977, but the performances are identical to those found on an audience bootleg recorded on October 10, 1978. Devo were nominated for induction into the Rock and Roll Hall of Fame in 2018, 2021 and 2022. On May 14 and 15, 2022, Devo performed at the Cruel World Festival at the Rose Bowl's Brookside golf course in Pasadena, California, followed three days later by their performance at The Rooftop at Pier 17. In a February 20, 2023, article by the Akron Beacon Journal promoting the film Cocaine Bear, Mothersbaugh announced that 2023 would be celebrated as Devo's 50th anniversary, and that he had plans for Devo to remain active for 50 more years. He also stated that he, Gerald Casale and Bob Mothersbaugh were all interested in touring and jokingly wished for the remaining members of Devo to be buried in a car park near the Rock and Roll Hall of Fame. Two weeks later, Devo announced that they would perform at London's Eventim Apollo on August 19 as part of their "farewell tour". Other stops on the tour include the Øyafestivalen in Norway, Way Out West festival in Sweden, Flow Festival in Finland and Luna Fest in Portugal, throughout August 2023. On March 1, a show at Green Man Festival in Wales was added to the tour. On March 22, BMG, Fremantle Documentaries, and Warner Music Entertainment announced that they would be producing and financing a Chris Smith-directed documentary titled Devo. According to a statement by the band, the film "explores Devo's evolution from hippie artistes to art-rockers with a message, to their unexpected mainstream success as a hit rock band and the pioneers of the MTV age." The film will follow the group's career arc up to its status as "elder statesmen".
Smith is known for directing American Movie, Fyre, and executive producing Tiger King, the last of which was scored by Mark Mothersbaugh, with Bob Mothersbaugh co-scoring its first season. The film will be produced by Chris Holmes and Anita Greenspan for Mutato Entertainment, and will be executive produced by William Kennedy, Stuart Souter, and Kathy Rivkin Daum for BMG, Mandy Chang for Fremantle, and at Warners, Charlie Cohen for WME and Mark Pinkus for Rhino Entertainment. As of these announcements, the film had entered production. In April, Devo's energy domes were featured in Fat Mike's Punk Rock Museum. On July 6, it was confirmed in a post made to the group's Instagram account that Jeff Friedl would play drums on their 2023 tour. This tour would be the group's last; as they retired from live performance, the compilation Art Devo 1973–1977 and a documentary would be released.
[ { "paragraph_id": 0, "text": "Devo (/ˈdiːvoʊ/, originally /diːˈvoʊ/) is an American new wave band from Akron, Ohio, formed in 1973. Their classic line-up consisted of two sets of brothers, the Mothersbaughs (Mark and Bob) and the Casales (Gerald and Bob), along with Alan Myers. The band had a No. 14 Billboard chart hit in 1980 with the single \"Whip It\", the song that gave the band mainstream popularity.", "title": "" }, { "paragraph_id": 1, "text": "Devo's music and visual presentation (including stage shows and costumes) mingle kitsch science fiction themes, deadpan surrealist humor and mordantly satirical social commentary. The band's namesake, the tongue-in-cheek social theory of \"de-evolution\", was an integral concept in their early work, which was marked by experimental and dissonant art punk that merged rock music with electronics. Their output in the 1980s embraced synth-pop and a more mainstream, less conceptual style, though the band's satirical and quirky humor remained intact. Their music has proven influential on subsequent movements, particularly on new wave, industrial, and alternative rock artists. Devo (most enthusiastically Gerald Casale) was also a pioneer of the music video format.", "title": "" }, { "paragraph_id": 2, "text": "The name Devo comes from the concept of \"de-evolution\" and the band's related idea that instead of continuing to evolve, mankind had begun to regress, as evidenced by the dysfunction and herd mentality of American society. In the late 1960s, this idea was developed as a joke by Kent State University art students Gerald Casale and Bob Lewis, who created a number of satirical art pieces in a devolution vein. At this time, Casale had also performed with the local band 15-60-75 (The Numbers Band). They met Mark Mothersbaugh around 1970, a talented keyboardist who had been playing with the band Flossy Bobbitt. Mothersbaugh brought a more humorous feel to the band, introducing them to material like the pamphlet \"Jocko Homo Heavenbound\", which includes an illustration of a winged devil labelled \"D-EVOLUTION\" and would later inspire the song \"Jocko Homo\". The \"joke\" about de-evolution became serious following the Kent State massacre of May 4, 1970. This event would be cited multiple times as the impetus for forming the band Devo. Throughout the band's career, they have often been considered a \"joke band\" by the music press.", "title": "History" }, { "paragraph_id": 3, "text": "The first form of Devo was the \"Sextet Devo\" which performed at the 1973 Kent State performing arts festival. It included Casale, Lewis and Mothersbaugh, as well as Gerald's brother Bob Casale on guitar, and friends Rod Reisman and Fred Weber on drums and vocals, respectively. This performance was filmed and an excerpt was later included on the home video release The Complete Truth About De-Evolution. This lineup performed only once. Devo returned to perform in the Student Governance Center (featured prominently in the film) at the 1974 Creative Arts Festival with a lineup including the Casale brothers, Bob Lewis, Mark Mothersbaugh, and Jim Mothersbaugh on drums.", "title": "History" }, { "paragraph_id": 4, "text": "The band continued to perform, generally as a quartet, but with a fluid lineup including Mark's brothers Bob Mothersbaugh and Jim Mothersbaugh. Bob played electric guitar, and Jim provided percussion using a set of home-made electronic drums. 
Their first two music videos, \"Secret Agent Man\" and \"Jocko Homo\", which both appeared in The Truth About De-Evolution, were filmed in Akron and Cuyahoga Falls, Ohio, the hometown of most members. This lineup of Devo lasted until late 1975 when Jim left the band. Lewis would sometimes play guitar during this period, but mainly stayed in a managerial role. In concert, Devo would often perform in the guise of theatrical characters, such as Booji Boy and the Chinaman. Live concerts from this period were often confrontational, and would remain so until 1977. A recording of an early Devo performance from 1975 with the quartet lineup appears on Devo Live: The Mongoloid Years (1992), ending with the promoters unplugging Devo's equipment.", "title": "History" }, { "paragraph_id": 5, "text": "Following Jim Mothersbaugh's departure, Bob Mothersbaugh found a new drummer, Alan Myers, who played on a conventional, acoustic drum kit. Casale re-recruited his brother Bob Casale, and the lineup of Devo remained the same for nearly ten years.", "title": "History" }, { "paragraph_id": 6, "text": "Devo gained some fame in 1976 when their short film The Truth About De-Evolution, directed by Chuck Statler, won a prize at the Ann Arbor Film Festival. This attracted the attention of David Bowie, who began work to get the band a recording contract with Warner Music Group. In 1977, Devo were asked by Neil Young to participate in the making of his film Human Highway. Released in 1982, the film featured the band as \"nuclear garbagemen\". The band members were asked to write their own parts and Mark Mothersbaugh scored and recorded much of the soundtrack, his first of many.", "title": "History" }, { "paragraph_id": 7, "text": "In March 1977, Devo released their first single, \"Mongoloid\" backed with \"Jocko Homo\", the B-side of which came from the soundtrack to The Truth About De-Evolution, on their independent label Booji Boy. This was followed by a cover of the Rolling Stones' \"(I Can't Get No) Satisfaction\".", "title": "History" }, { "paragraph_id": 8, "text": "In 1978, the B Stiff EP was released by British independent label Stiff, which included the single \"Be Stiff\" plus two previous Booji Boy releases. \"Mechanical Man\", a 4-track 7-inch extended play (EP) of demos that appeared to be a bootleg but was actually put out by the band, was also released that year.", "title": "History" }, { "paragraph_id": 9, "text": "Recommendations from David Bowie and Iggy Pop enabled Devo to secure a recording contract with Warner Bros. in 1978. After Bowie backed out of the business deal due to previous commitments, their first album, Q: Are We Not Men? A: We Are Devo! was produced by Brian Eno and featured re-recordings of their previous singles \"Mongoloid\" and \"(I Can't Get No) Satisfaction\". On October 14, 1978, Devo gained national exposure with an appearance on the late-night show Saturday Night Live, a week after the Rolling Stones, performing \"(I Can't Get No) Satisfaction\" and \"Jocko Homo\".", "title": "History" }, { "paragraph_id": 10, "text": "The band followed up with Duty Now for the Future in 1979, which moved the band more towards electronic instrumentation. While not as successful as their first album, it did produce some fan favorites with the songs \"Blockhead\" and \"The Day My Baby Gave Me a Surprize\" [sic], as well as a cover of the Johnny Rivers hit \"Secret Agent Man\". \"Secret Agent Man\" had been recorded first in 1974 for Devo's first film and performed live as early as 1976.
In 1979, Devo traveled to Japan for the first time, and a live show from this tour was partially recorded. Devo appeared on Don Kirshner's Rock Concert in 1979, performing \"Blockhead\", \"Secret Agent Man\", \"Uncontrollable Urge\", and \"Mongoloid\". Also in 1979, Rhino, in conjunction with the Los Angeles radio station KROQ-FM, released Devotees, a tribute album. It contained a set of covers of Devo songs interspersed with renditions of popular songs in Devo's style.", "title": "History" }, { "paragraph_id": 11, "text": "Devo actively embraced the parody religion Church of the SubGenius. In concert, Devo sometimes performed as their own opening act, pretending to be a Christian soft rock band called \"Dove (the Band of Love)\", which is an anagram of \"Devo\". They appeared as Dove in the 1980 televangelism spoof film Pray TV.", "title": "History" }, { "paragraph_id": 12, "text": "Devo gained a new level of visibility with 1980's Freedom of Choice. This album included their best-known hit, \"Whip It\", which quickly became a Top 40 hit. The album moved to an almost completely electronic sound, with the exception of acoustic drums and Bob Mothersbaugh's guitar. The tour for Freedom of Choice was ambitious for the band, including dates in Japan, the United Kingdom, France, Germany, Italy, the Netherlands, and Canada. The band used a minimalist set including large custom light boxes which could be laid on their back to form a second, smaller stage during the second half of the set. Other popular songs from Freedom of Choice were \"Girl U Want\", the title track, and \"Gates of Steel\". The band released popular music videos for \"Whip It\" and \"Girl U Want\". Devo made two appearances on the TV show Fridays in 1980, as well as on Don Kirshner's Rock Concert, American Bandstand, and other shows. The band members often wore red, terraced energy dome hats as part of their stage outfits. The dome was first worn during the band's Freedom of Choice campaign of 1980. It reappeared in the 1981, 1982, and 1988 tours, as well as in most of their performances since 1997. Devo also recorded two albums of their own songs as elevator music for their fan club, Club Devo, released on cassette in 1981 and 1984. These were later re-released on the album E-Z Listening Disc (1987), with all but two of the original Club Devo songs. These songs were often played as house music before Devo concerts.", "title": "History" }, { "paragraph_id": 13, "text": "In August 1981, the band's DEV-O Live EP spent three weeks at the top of the Australian charts. In 1982, they toured Australia and appeared on the TV show Countdown. Devo enjoyed continued popularity in Australia, where the nationally broadcast 1970s–1980s pop TV show Countdown was one of the first programs in the world to broadcast their video clips. They were given consistent radio support by Sydney-based non-commercial rock station Double Jay (2JJ) and Brisbane-based independent community station Triple Zed (4ZZZ), two of the first rock stations outside America to play their recordings. The late-night music program Nightmoves aired The Truth About De-Evolution.", "title": "History" }, { "paragraph_id": 14, "text": "In 1981, Devo contributed a cover of \"Working in the Coal Mine\", recorded during the Freedom of Choice sessions, to the film Heavy Metal. They offered the song to be used in the film when Warner Bros. refused to include it on the album. Warner then included it as an independent bonus single accompanying their 1981 release, New Traditionalists.
For this album Devo wore self-described \"Utopian Boy Scout uniforms\" topped with a \"New Traditionalist Pomp\"—a plastic half-wig modeled on the hairstyle of John F. Kennedy. Among the singles from the album was \"Through Being Cool\", written as a reaction to their new-found fame from \"Whip It\" and seen as a response to new fans who had misinterpreted the message behind the hit song. The album's accompanying tour featured the band performing an intensely physical show with treadmills and a large Greek temple set. That same year they served as Toni Basil's backing band on Word of Mouth, her debut album, which included versions of three Devo songs, recorded with Basil singing lead.", "title": "History" }, { "paragraph_id": 15, "text": "Oh, No! It's Devo followed in 1982. Produced by Roy Thomas Baker, the album featured a more synth-pop-oriented sound than its predecessors. According to Gerald Casale, the album's sound was inspired by reviewers alternately describing them as both \"fascists\" and \"clowns\". The album's tour featured the band performing seven songs in front of a 12-foot high rear-projection screen with synchronized video, an image recreated using blue screen effects in the album's accompanying music videos. Devo also contributed two songs, \"Theme from Doctor Detroit\" and \"Luv-Luv\", to the 1983 Dan Aykroyd film Doctor Detroit, and produced a music video for \"Theme from Doctor Detroit\" featuring clips from the film interspersed with live-action segments.", "title": "History" }, { "paragraph_id": 16, "text": "The band's sixth studio album, Shout (1984), which featured extensive use of the Fairlight CMI digital sampling synthesizer, was received poorly, and the expensive music video they'd produced for their cover of the Jimi Hendrix Experience's \"Are You Experienced?\" was criticized by some as being \"disrespectful\", all of which caused Warner Bros. to buy out the remainder of Devo's contract. Shortly thereafter, Myers left the band, citing creative unfulfillment.", "title": "History" }, { "paragraph_id": 17, "text": "In the interim, Mark Mothersbaugh began composing music for the TV show Pee-wee's Playhouse and released an elaborately packaged solo cassette, Musik for Insomniaks, which was later expanded and released as two CDs in 1988.", "title": "History" }, { "paragraph_id": 18, "text": "In 1987, Devo re-formed with former Sparks drummer David Kendrick to replace Myers. Their first project was a soundtrack for the horror film Slaughterhouse Rock (1988), starring Toni Basil. The band released the album Total Devo in 1988, on Enigma Records. This album included two songs used in the Slaughterhouse Rock soundtrack. The song \"Baby Doll\" was used that same year in the comedy film Tapeheads, with newly recorded Swedish lyrics, and was credited to (and shown in a music video by) a fictitious Swedish band called Cube-Squared. Devo followed this up with a world tour, and released the live album Now It Can Be Told: Devo at the Palace in 1989. However, Total Devo was not a commercial success and received poor critical reviews.", "title": "History" }, { "paragraph_id": 19, "text": "In 1989, members of Devo were involved in the project Visiting Kids, releasing a self-titled EP on the New Rose label in 1990. The band featured Mark's then-wife Nancye Ferguson, as well as David Kendrick, Bob Mothersbaugh, and Bob's daughter Alex Mothersbaugh. Their record was produced by Bob Casale and Mark Mothersbaugh, and Mark also co-wrote some of the songs. 
Visiting Kids appeared on the soundtrack to the film Rockula, as well as on Late Night with David Letterman. A promotional video was filmed for the song \"Trilobites\".", "title": "History" }, { "paragraph_id": 20, "text": "In 1990, Smooth Noodle Maps, Devo's last album for twenty years, was released. It too was a critical and commercial failure; the album and its two singles, \"Stuck in a Loop\" and \"Post Post-Modern Man\", were Devo's worst-selling efforts, and all failed to appear on the U.S. charts. Devo launched a concert tour in support of the album, but poor ticket sales and the bankruptcy and dissolution of Enigma Records, which was responsible for organizing and financing the tour, caused it to be cancelled part way through.", "title": "History" }, { "paragraph_id": 21, "text": "In 1990, the members of Devo, bar Bob Mothersbaugh, appeared in the film The Spirit of '76. Two albums of demo recordings from 1974–1977, namely Hardcore Devo: Volume One (1990) and Hardcore Devo: Volume Two (1991), were released on Rykodisc, as well as an album of early live recordings, Devo Live: The Mongoloid Years (1992).", "title": "History" }, { "paragraph_id": 22, "text": "The band played one final show in March 1991 before breaking up. In an interview concerning their 1996 computer game Devo Presents Adventures of the Smart Patrol, Mark Mothersbaugh explained, \"Around '88, '89, '90 maybe, we did our last tour in Europe, and it was kind of at that point. We were watching This Is Spinal Tap on the bus and said, 'Oh my God, that's our life.' And we just said, 'Things have to change.' So we kind of agreed from there that we wouldn't do live shows anymore.\"", "title": "History" }, { "paragraph_id": 23, "text": "Following the split, Mark Mothersbaugh established Mutato Muzika, a commercial music production studio, along with Bob Mothersbaugh and Bob Casale. Mark Mothersbaugh intended to further his career as a composer, while Bob Casale worked as an audio engineer. Mothersbaugh has had considerable success writing and producing music for television programs, including Pee-wee's Playhouse and Rugrats, video games, cartoons, and films, where he worked alongside director Wes Anderson. David Kendrick also worked at Mutato for a period during the early 1990s. Gerald Casale began a career as a director of music videos and commercials, working with bands including Rush, Soundgarden, Silverchair and the Foo Fighters. In the wake of Devo's dissolution, Bob Mothersbaugh attempted to start a solo career with The Bob I Band, recording an album that was never released. The tapes for this are now lost, though a bootleg recording of the band in concert exists and can be obtained through the bootleg aggregator Booji Boy's Basement.", "title": "History" }, { "paragraph_id": 24, "text": "While they did not release any studio albums during this period, Devo sporadically reconvened to record a number of songs for various films and compilations, including a new recording of \"Girl U Want\" on the soundtrack to the 1995 film Tank Girl and a cover of the Nine Inch Nails hit \"Head Like a Hole\" for the 1996 North American version of the film Supercop.", "title": "History" }, { "paragraph_id": 25, "text": "In January 1996, Devo performed a reunion concert at the Sundance Film Festival in Park City, Utah. The band performed on part of the 1996 Lollapalooza tour in the rotating Mystery Spot.
On these tours and most subsequent tours, Devo performed a set-list mostly composed of material from between 1978 and 1982, ignoring their Enigma Records-era material. Also in 1996, Devo released a multimedia CD-ROM adventure game, Adventures of the Smart Patrol with Inscape. The game was not a success, but the Lollapalooza tour was received well enough to allow Devo to return in 1997 as a headliner. Devo performed sporadically from 1997 onwards.", "title": "History" }, { "paragraph_id": 26, "text": "In 1999, the Oh, No! It's Devo era outtakes \"Faster and Faster\" and \"One Dumb Thing\", as well as the Shout era outtake \"Modern Life\", were restored, completed and used in the video game Interstate '82, developed and released by Activision. Also that year, Mothersbaugh started the Devo side-project The Wipeouters, named after their junior high band, featuring himself (keyboards, organ), Bob Mothersbaugh (guitar), Bob Casale (guitar), and Mutato Muzika composer Josh Mancell (drums). The Wipeouters performed the theme song to the Nickelodeon animated series Rocket Power, and in 2001 they released an album of surf rock material, titled P'Twaaang!!!.", "title": "History" }, { "paragraph_id": 27, "text": "Around this same time, Devo's online fandom continued to grow, leading to 'Devotional', a Devo fan convention held annually in Cleveland, Ohio. The festival was most recently held in September 2022.", "title": "History" }, { "paragraph_id": 28, "text": "In 2005, Devo recorded a new version of \"Whip It\" to be used in Swiffer television commercials, a decision they have said they regretted. During an interview with the Dallas Observer, Gerald Casale said, \"It's just aesthetically offensive. It's got everything a commercial that turns people off has.\" The song \"Beautiful World\" was also used in a re-recorded form for an advertisement for Target stores. Due to rights issues with their back catalog, Devo has re-recorded songs for films and advertisements.", "title": "History" }, { "paragraph_id": 29, "text": "In 2005, Gerald Casale announced his \"solo\" project, Jihad Jerry & the Evildoers (the Evildoers, including the other members of Devo), and released the first EP, Army Girls Gone Wild in 2006. A full-length album, Mine Is Not a Holy War, was released on September 12, 2006, after a several-month delay. It featured mostly new material, plus re-recordings of four obscure Devo songs: \"I Need a Chick\" and \"I Been Refused\" (from Hardcore Devo: Volume Two), \"Find Out\" (which appeared on the single and EP of \"Peek-a-Boo!\" in 1982), and \"Beehive\" (which was recorded by the band in 1974, whereupon it was apparently abandoned, with the exception of one appearance at a special show in 2001). Devo continued to tour actively in 2005 and 2006, unveiling a new stage show at appearances in October 2006, with the Jihad Jerry character performing \"Beautiful World\" as an encore.", "title": "History" }, { "paragraph_id": 30, "text": "Also in 2006, Devo worked on a project with Disney known as Devo 2.0. A band of child performers was assembled and re-recorded Devo songs. The Akron Beacon Journal reported: \"Devo recently finished a new project in cahoots with Disney called Devo 2.0, which features the band playing old songs and two new ones with vocals provided by children. Their debut album, a two disc CD/DVD combo entitled DEV2.0, was released on March 14, 2006.
The lyrics of some of the songs were changed for family-friendly airplay, which has been claimed by the band to be a play on irony of the messages of their classic hits.\"", "title": "History" }, { "paragraph_id": 31, "text": "In an April 2007 interview, Gerald Casale mentioned a tentative project for a biographical film about Devo's early days. According to Casale, a script was supposedly in development, called The Beginning Was the End. Devo played their first European tour since 1990 in the summer of 2007, including a performance at Festival Internacional de Benicàssim.", "title": "History" }, { "paragraph_id": 32, "text": "In December 2007, Devo released their first new single since 1990, \"Watch Us Work It\", which was featured in a commercial for Dell. The song features a sampled drum track from the New Traditionalists song \"The Super Thing\". Casale said that the song was chosen from a batch that the band was working on, and that it was the closest the band had been to releasing a new album.", "title": "History" }, { "paragraph_id": 33, "text": "Devo performed at the South by Southwest (SXSW) festival in March 2009, unveiling a new stage show with synchronized video backdrops (similar to the 1982 tour), new costumes, and three new songs: \"Don't Shoot, I'm a Man!\", \"What We Do\", and \"Fresh\". On September 16, Warner Bros. and Devo announced re-releases of Q: Are We Not Men? A: We Are Devo! and Freedom of Choice, as well as a subsequent tour, where they would perform both albums in their entirety.", "title": "History" }, { "paragraph_id": 34, "text": "A new album, Something for Everybody, was eventually released on June 15, 2010, preceded by a 12-inch single of \"Fresh\"/\"What We Do\" on June 10. Devo was awarded the first Moog Innovator Award on October 29, during Moogfest 2010 in Asheville, North Carolina. The Moog Innovator Award has been said to celebrate \"pioneering artists whose genre-defying work exemplifies the bold, innovative spirit of Bob Moog\". Devo was scheduled to perform at Moogfest, but Bob Mothersbaugh severely injured his hand three days prior, and the band was forced to cancel. Mark Mothersbaugh and Gerald Casale collaborated with Austin-based band the Octopus Project to perform \"Girl U Want\" and \"Beautiful World\" at the event instead.", "title": "History" }, { "paragraph_id": 35, "text": "The band split from Warner Bros. in 2012 and launched a new \"post-Warner Brothers\" website that would offer \"new protective gear\" and \"unreleased material from the archives in vinyl disc format\". In August of that year, the band released a single called \"Don't Roof Rack Me, Bro (Seamus Unleashed)\", dedicated to the Republican Party presidential candidate Mitt Romney's former pet dog Seamus. The title refers to the Mitt Romney dog incident of 1983, when Romney travelled twelve hours with the dog in a crate on his car's roof rack.", "title": "History" }, { "paragraph_id": 36, "text": "On June 24, 2013, the group's former drummer Alan Myers died of stomach cancer in Los Angeles, California. He was 58. News reports at the time of his death incorrectly cited brain cancer as the cause. One month later, Devo released their Something Else for Everybody album, which collected \"Unreleased Demos and Focus Group Rejects\" from 2006–2009. Gerald Casale had earlier teased the album in a 2012 interview with Billboard magazine.", "title": "History" }, { "paragraph_id": 37, "text": "On February 17, 2014, founding member Bob Casale died of heart failure at age 61.
Shortly afterwards, the group, a quartet for the first time in 38 years, embarked on their Hardcore Devo Tour, a ten-show tour across the US and Canada between June 18 and July 2, 2014. The tour focused on material the group had written before the release of their first album, which was largely written when the group were a quartet. Partial proceeds for the ten shows went to support Bob Casale's family after his sudden death. The show featured the group performing material written during 1974–1977. The June 28 Oakland show was filmed and later released as the concert film Hardcore Devo Live!, released on Blu-ray, DVD, and Video on Demand on February 10, 2015, accompanied by CD and double-vinyl audio releases.", "title": "History" }, { "paragraph_id": 38, "text": "Immediately following from the Hardcore tour, Devo continued to tour a 'greatest hits' style show. Josh Hager joined the band at this time, playing both keyboards and guitar. On April 29, 2016, Devo performed at Will Ferrell and Chad Smith's Red Hot Benefit.", "title": "History" }, { "paragraph_id": 39, "text": "On May 22, Robert Mothersbaugh Sr., father of Mark, Bob, and Jim Mothersbaugh, died. Robert portrayed General Boy in various Devo films.", "title": "History" }, { "paragraph_id": 40, "text": "In 2017, the official Twitter account for the Are We Not Men? documentary film, which had been in production since 2009, stated that \"the film was finished years ago\" and that \"mm [Mark Mothersbaugh] is blocking its release\". Jeff Winner, who was consulting producer for the Devo documentary, went on to state that he and director Tony Pemberton had \"delivered the film that was contracted, and on schedule. It's now in the hands of the band to decide when/how it's released/distributed.\"", "title": "History" }, { "paragraph_id": 41, "text": "Devo headlined the Burger Boogaloo festival in Oakland, California, on June 30, 2018, with comedian and former Trenchmouth drummer Fred Armisen on drums. On October 12, 2020, Devo performed at the Desert Daze festival, with Jeff Friedl on drums.", "title": "History" }, { "paragraph_id": 42, "text": "In January 2021, Funko released two Devo Funko Pops inspired by the group's \"Whip It\" and \"Satisfaction\" music videos. One month later, the band starred in Devolution: A Devo Theory, a television documentary based entirely on their theory of devolution, which had been completed in 2020. In September, Devo performed a short three-date tour of the USA, including a show at Riot Fest. These performances marked the return of Josh Freese on drums, who had not played live with Devo in over five years.", "title": "History" }, { "paragraph_id": 43, "text": "Shortly afterwards, Gerald Casale announced the release of an official Devo potato-based vodka through the Trust Me Vodka brand. The packaging for the drink was themed around Devo imagery and featured original artwork. It was signed by the group's co-founders Gerald Casale and Mark Mothersbaugh, as well as Bob Mothersbaugh.", "title": "History" }, { "paragraph_id": 44, "text": "On October 24, 2021, John Hinckley Jr posted on Twitter that he had not received any royalties for Devo's song \"I Desire\" in 35 years. \"I Desire\" had been written by Mark Mothersbaugh and Gerald Casale for their 1982 album Oh, No! It's Devo, inspired by a poem written by Hinckley that was published in a tabloid newspaper, following his attempt to assassinate then-current president Ronald Reagan. 
Hinckley had been adequately credited for his contributions through a co-writing credit on all releases. Casale claimed that Devo were not at fault, as it was the publishing company's duty to pay him, not the band's.", "title": "History" }, { "paragraph_id": 45, "text": "Devotional 2021, an annual convention for Devo fans, was held on November 5–6, with the annual 5KDEVO race taking place on the 7th. On November 15, it was announced that Devo would perform a one-off show at the Rooftop at Pier 17 on May 18, 2022, in order to make up for their cancelled Radio City Music Hall gig in September 2021. Tickets went on sale on the 18th.", "title": "History" }, { "paragraph_id": 46, "text": "In December, it was announced that rare images of Devo would feature in a book of rock photography from 1977–1980 titled HARD + FAST, to be released on February 1, 2022. The book will also include a 7-inch single of live recordings from the band, which were also released on SoundCloud prior to the book's release. The recordings were dated 1977, but the performances are identical to those found on an audience bootleg recorded on October 10, 1978.", "title": "History" }, { "paragraph_id": 47, "text": "Devo were nominated for induction into the Rock and Roll Hall of Fame in 2018, 2021 and 2022.", "title": "History" }, { "paragraph_id": 48, "text": "On May 14 and 15, 2022, Devo performed at the Cruel World Festival at the Rose Bowl's Brookside golf course in Pasadena, California, followed three days later by their performance at The Rooftop at Pier 17.", "title": "History" }, { "paragraph_id": 49, "text": "In a February 20, 2023, article by the Akron Beacon Journal promoting the film Cocaine Bear, Mothersbaugh announced that 2023 would be celebrated as Devo's 50th anniversary, and that he had plans for Devo to remain active for 50 more years. He also stated that he, Gerald Casale and Bob Mothersbaugh were all interested in touring and jokingly wished for the remaining members of Devo to be buried in a car park near the Rock and Roll Hall of Fame. Two weeks later, Devo announced that they would perform at London's Eventim Apollo on August 19 as part of their \"farewell tour\". Other stops on the tour include the Øyafestivalen in Norway, Way Out West festival in Sweden, Flow Festival in Finland and Luna Fest in Portugal, throughout August 2023. On March 1, a show at Green Man Festival in Wales was added to the tour.", "title": "History" }, { "paragraph_id": 50, "text": "On March 22, BMG, Fremantle Documentaries, and Warner Music Entertainment announced that they would be producing and financing a Chris Smith directed documentary titled Devo. According to a statement by the band the film \"explores Devo's evolution from hippie artistes to art-rockers with a message, to their unexpected mainstream success as a hit rock band and the pioneers of the MTV age.\" The film will follow the group's career arc up to its status as \"elder statesmen\". Smith is known for directing American Movie, Fyre, and executive producing Tiger King, the latter of which was scored by Mark Mothersbaugh, with Bob Mothersbaugh co-scoring its first season.", "title": "History" }, { "paragraph_id": 51, "text": "The film will be produced by Chris Holmes and Anita Greenspan for Mutato Entertainment, and will be executive produced by William Kennedy, Stuart Souter, and Kathy Rivkin Daum for BMG, Mandy Chang for Fremantle, and at Warners, Charlie Cohen for WME and Mark Pinkus for Rhino Entertainment. 
As of these announcements, the film had entered production.", "title": "History" }, { "paragraph_id": 52, "text": "In April, Devo's energy domes were featured in Fat Mike's Punk Rock Museum.", "title": "History" }, { "paragraph_id": 53, "text": "On July 6, it was confirmed in a post on the group's Instagram account that Jeff Friedl would play drums on their 2023 tour. This tour would be the group's last; as the band retired from live performance, the compilation Art Devo 1973–1977 and a documentary would be released.", "title": "History" }, { "paragraph_id": 54, "text": "Current members", "title": "Band members" }, { "paragraph_id": 55, "text": "Studio albums", "title": "Discography" } ]
Devo is an American new wave band from Akron, Ohio, formed in 1973. Their classic line-up consisted of two sets of brothers, the Mothersbaughs and the Casales, along with Alan Myers. The band had a No. 14 Billboard chart hit in 1980 with the single "Whip It", the song that gave the band mainstream popularity. Devo's music and visual presentation mingle kitsch science fiction themes, deadpan surrealist humor and mordantly satirical social commentary. The band's namesake, the tongue-in-cheek social theory of "de-evolution", was an integral concept in their early work, which was marked by experimental and dissonant art punk that merged rock music with electronics. Their output in the 1980s embraced synth-pop and a more mainstream, less conceptual style, though the band's satirical and quirky humor remained intact. Their music has proven influential on subsequent movements, particularly on new wave, industrial, and alternative rock artists. Devo was also a pioneer of the music video format.
2002-02-25T15:43:11Z
2023-12-28T02:22:59Z
[ "Template:Reflist", "Template:Commons category", "Template:Short description", "Template:AllMusic", "Template:Discogs artist", "Template:Pp-sock", "Template:Additional citation needed", "Template:Devo", "Template:YouTube", "Template:Infobox musical artist", "Template:Failed verification", "Template:\"'", "Template:Cite press release", "Template:Cite magazine", "Template:Citation", "Template:Use mdy dates", "Template:Sic", "Template:Better source needed", "Template:Col-begin", "Template:Col-end", "Template:Cite web", "Template:Cite book", "Template:IMDb name", "Template:About", "Template:IPAc-en", "Template:Col-2", "Template:Efn", "Template:Official website", "Template:Use American English", "Template:Citation needed", "Template:Main", "Template:Cite AV media", "Template:Wikiquote", "Template:Authority control", "Template:Excessive citations inline", "Template:Notelist", "Template:Cite news", "Template:Cite Instagram" ]
https://en.wikipedia.org/wiki/Devo
9,132
Dale Chihuly
Dale Chihuly (/tʃɪˈhuːli/) (born September 20, 1941) is an American glass artist and entrepreneur. He is well known in the field of blown glass, "moving it into the realm of large-scale sculpture". Dale Patrick Chihuly was born on September 20, 1941, in Tacoma, Washington. His parents were George and Viola Chihuly; his paternal grandfather was born in Slovakia. In 1957, his older brother and only sibling George died in a Navy aviation training accident in Pensacola, Florida. In 1958, Chihuly's father died of a heart attack at the age of 51. Chihuly had no interest in continuing his formal education after graduating from Woodrow Wilson High School in 1959. However, at his mother's urging, he enrolled at the College of Puget Sound. A year later, he transferred to the University of Washington in Seattle to study interior design. In 1961, he joined the Delta Kappa Epsilon fraternity (Kappa Epsilon chapter), and the same year he learned how to melt and fuse glass. In 1962, Chihuly dropped out of the university to study art in Florence. He later traveled to the Middle East where he met architect Robert Landsman. Their meeting and his time abroad spurred Chihuly to return to his studies. In 1963, he took a weaving class where he incorporated glass shards into tapestries. He received an award for his work from the Seattle Weavers Guild in 1964. Chihuly graduated from the University of Washington in 1965 with a Bachelor of Arts degree in interior design. Chihuly began experimenting with glassblowing in 1965, and in 1966 he received a full scholarship to attend the University of Wisconsin–Madison. He studied under Harvey Littleton, who had established the first glass program in the United States at the university. In 1967, Chihuly received a Master of Science degree in sculpture. After graduating, he enrolled at the Rhode Island School of Design, where he met and became close friends with Italo Scanga. Chihuly earned a Master of Fine Arts degree in sculpture from the RISD in 1968. That same year, he was awarded a Louis Comfort Tiffany Foundation grant for his work in glass, as well as a Fulbright Fellowship. He traveled to Venice to work at the Venini factory on the island of Murano, where he first saw the team approach to blowing glass. After returning to the United States, Chihuly spent the first of four consecutive summers teaching at the Haystack Mountain School of Crafts in Deer Isle, Maine. In 1969, he traveled to Europe, in part to meet Erwin Eisch in Germany and Stanislav Libenský and Jaroslava Brychtová in Czechoslovakia. Chihuly donated a portion of a large exhibit to his alma mater, the University of Wisconsin, in 1997 and it is on permanent display in the Kohl Center. In 2013 the university awarded him an Honorary Doctorate of Fine Arts. In 1971, with the support of John Hauberg and Anne Gould Hauberg, Chihuly co-founded the Pilchuck Glass School near Stanwood, Washington. Chihuly also founded the HillTop Artists program in Tacoma, Washington at Hilltop Heritage Middle School and Wilson High School. In 1976, while Chihuly was in England, he was involved in a head-on car accident that propelled him through the windshield. His face was severely cut by glass and he was blinded in his left eye. After recovering, he continued to blow glass until he dislocated his right shoulder in 1979 while bodysurfing. In 1983, Chihuly returned to his native Pacific Northwest where he continued to develop his own work at the Pilchuck Glass School, which he had helped to found in 1971. 
No longer able to hold the glassblowing pipe, he hired others to do the work. Chihuly explained the change in a 2006 interview, saying "Once I stepped back, I liked the view", and said that it allowed him to see the work from more perspectives, enabling him to anticipate problems earlier. Chihuly's role has been described as "more choreographer than dancer, more supervisor than participant, more director than actor". San Diego Union-Tribune reporter Erin Glass wrote that she "wonders at the vision of not just the artist Chihuly, but the very successful entrepreneur Chihuly, whose estimated sales by 2004 was reported by The Seattle Times as $29 million." Chihuly and his team of artists were the subjects of the documentary Chihuly Over Venice. They were also featured in the documentary Chihuly in the Hotshop, syndicated to public television stations by American Public Television starting on November 1, 2008. In 2010, the Space Needle Corporation submitted a proposal for an exhibition of Chihuly's work at a site in the Seattle Center, in competition with proposals for other uses from several other groups. The project, which sees the new Chihuly exhibition hall occupy the site of the former Fun Forest amusement park in the Seattle Center park and entertainment complex, received the final approval from the Seattle City Council on April 25, 2011. Called Chihuly Garden and Glass, it opened May 21, 2012. In 2006, Chihuly filed a lawsuit against his former longtime employee, glassblower Bryan Rubino, and businessman Robert Kaindl, claiming copyright and trademark infringement. Kaindl's pieces used titles Chihuly had employed for his own works, such as Seaforms and Ikebana, and resembled the construction of Chihuly's pieces. Legal experts stated that influence on art style did not constitute copyright infringement. Chihuly settled the lawsuit with Rubino initially, and later with Kaindl as well. Regina Hackett, a Seattle Post-Intelligencer art critic, provided a chronology of Chihuly's work during the 1970s, 1980s, and 1990s: For his exhibition in Jerusalem, in 1999–2000, in addition to the glass pieces, he had enormous blocks of transparent ice brought in from an Alaskan artesian well and formed a wall, echoing the stones of the nearby Citadel. Lights with color gels were set up behind them for illumination. Chihuly said the melting wall represented the "dissolution of barriers" between people. This exhibit holds the world record for most visitors to a temporary exhibit with more than 1.3 million visitors. Chihuly's largest permanent exhibit is at the Oklahoma City Museum of Art. Other large collections can be found at the Morean Arts Center in St. Petersburg, Florida, and Chihuly Garden and Glass in Seattle, Washington. Chihuly also maintains two retail stores in partnership with MGM Resorts International, one at the Bellagio on the Las Vegas Strip, and the other at the MGM Grand Casino in Macau. Chihuly's art appears in over 400 permanent collections all over the world, including in the United States, Canada, England, Israel, China, Singapore, the United Arab Emirates, and Australia.
[ { "paragraph_id": 0, "text": "Dale Chihuly (/tʃɪˈhuːli/) (born September 20, 1941) is an American glass artist and entrepreneur. He is well known in the field of blown glass, \"moving it into the realm of large-scale sculpture\".", "title": "" }, { "paragraph_id": 1, "text": "Dale Patrick Chihuly was born on September 20, 1941, in Tacoma, Washington. His parents were George and Viola Chihuly; his paternal grandfather was born in Slovakia. In 1957, his older brother and only sibling George died in a Navy aviation training accident in Pensacola, Florida. In 1958, Chihuly's father died of a heart attack at the age of 51.", "title": "Early life" }, { "paragraph_id": 2, "text": "Chihuly had no interest in continuing his formal education after graduating from Woodrow Wilson High School in 1959. However, at his mother's urging, he enrolled at the College of Puget Sound. A year later, he transferred to the University of Washington in Seattle to study interior design. In 1961, he joined the Delta Kappa Epsilon fraternity (Kappa Epsilon chapter), and the same year he learned how to melt and fuse glass. In 1962, Chihuly dropped out of the university to study art in Florence. He later traveled to the Middle East where he met architect Robert Landsman. Their meeting and his time abroad spurred Chihuly to return to his studies. In 1963, he took a weaving class where he incorporated glass shards into tapestries. He received an award for his work from the Seattle Weavers Guild in 1964. Chihuly graduated from the University of Washington in 1965 with a Bachelor of Arts degree in interior design.", "title": "Early life" }, { "paragraph_id": 3, "text": "Chihuly began experimenting with glassblowing in 1965, and in 1966 he received a full scholarship to attend the University of Wisconsin–Madison. He studied under Harvey Littleton, who had established the first glass program in the United States at the university. In 1967, Chihuly received a Master of Science degree in sculpture. After graduating, he enrolled at the Rhode Island School of Design, where he met and became close friends with Italo Scanga. Chihuly earned a Master of Fine Arts degree in sculpture from the RISD in 1968. That same year, he was awarded a Louis Comfort Tiffany Foundation grant for his work in glass, as well as a Fulbright Fellowship. He traveled to Venice to work at the Venini factory on the island of Murano, where he first saw the team approach to blowing glass. After returning to the United States, Chihuly spent the first of four consecutive summers teaching at the Haystack Mountain School of Crafts in Deer Isle, Maine. In 1969, he traveled to Europe, in part to meet Erwin Eisch in Germany and Stanislav Libenský and Jaroslava Brychtová in Czechoslovakia. Chihuly donated a portion of a large exhibit to his alma mater, the University of Wisconsin, in 1997 and it is on permanent display in the Kohl Center. In 2013 the university awarded him an Honorary Doctorate of Fine Arts.", "title": "Early life" }, { "paragraph_id": 4, "text": "In 1971, with the support of John Hauberg and Anne Gould Hauberg, Chihuly co-founded the Pilchuck Glass School near Stanwood, Washington. Chihuly also founded the HillTop Artists program in Tacoma, Washington at Hilltop Heritage Middle School and Wilson High School.", "title": "Career" }, { "paragraph_id": 5, "text": "In 1976, while Chihuly was in England, he was involved in a head-on car accident that propelled him through the windshield. 
His face was severely cut by glass and he was blinded in his left eye. After recovering, he continued to blow glass until he dislocated his right shoulder in 1979 while bodysurfing.", "title": "Career" }, { "paragraph_id": 6, "text": "In 1983, Chihuly returned to his native Pacific Northwest where he continued to develop his own work at the Pilchuck Glass School, which he had helped to found in 1971. No longer able to hold the glassblowing pipe, he hired others to do the work. Chihuly explained the change in a 2006 interview, saying \"Once I stepped back, I liked the view\", and said that it allowed him to see the work from more perspectives, enabling him to anticipate problems earlier. Chihuly's role has been described as \"more choreographer than dancer, more supervisor than participant, more director than actor\". San Diego Union-Tribune reporter Erin Glass wrote that she \"wonders at the vision of not just the artist Chihuly, but the very successful entrepreneur Chihuly, whose estimated sales by 2004 was reported by The Seattle Times as $29 million.\"", "title": "Career" }, { "paragraph_id": 7, "text": "Chihuly and his team of artists were the subjects of the documentary Chihuly Over Venice. They were also featured in the documentary Chihuly in the Hotshop, syndicated to public television stations by American Public Television starting on November 1, 2008.", "title": "Career" }, { "paragraph_id": 8, "text": "In 2010, the Space Needle Corporation submitted a proposal for an exhibition of Chihuly's work at a site in the Seattle Center, in competition with proposals for other uses from several other groups. The project, which sees the new Chihuly exhibition hall occupy the site of the former Fun Forest amusement park in the Seattle Center park and entertainment complex, received the final approval from the Seattle City Council on April 25, 2011. Called Chihuly Garden and Glass, it opened May 21, 2012.", "title": "Career" }, { "paragraph_id": 9, "text": "In 2006, Chihuly filed a lawsuit against his former longtime employee, glassblower Bryan Rubino, and businessman Robert Kaindl, claiming copyright and trademark infringement. Kaindl's pieces used titles Chihuly had employed for his own works, such as Seaforms and Ikebana, and resembled the construction of Chihuly's pieces. Legal experts stated that influence on art style did not constitute copyright infringement. Chihuly settled the lawsuit with Rubino initially, and later with Kaindl as well.", "title": "Career" }, { "paragraph_id": 10, "text": "Regina Hackett, a Seattle Post-Intelligencer art critic, provided a chronology of Chihuly's work during the 1970s, 1980s, and 1990s:", "title": "Works" }, { "paragraph_id": 11, "text": "For his exhibition in Jerusalem, in 1999–2000, in addition to the glass pieces, he had enormous blocks of transparent ice brought in from an Alaskan artesian well and formed a wall, echoing the stones of the nearby Citadel. Lights with color gels were set up behind them for illumination. Chihuly said the melting wall represented the \"dissolution of barriers\" between people. This exhibit holds the world record for most visitors to a temporary exhibit with more than 1.3 million visitors.", "title": "Works" }, { "paragraph_id": 12, "text": "Chihuly's largest permanent exhibit is at the Oklahoma City Museum of Art. Other large collections can be found at the Morean Arts Center in St. 
Petersburg, Florida, and Chihuly Garden and Glass in Seattle, Washington.", "title": "Works" }, { "paragraph_id": 13, "text": "Chihuly also maintains two retail stores in partnership with MGM Resorts International, one at the Bellagio on the Las Vegas Strip, and the other at the MGM Grand Casino in Macau.", "title": "Works" }, { "paragraph_id": 14, "text": "Chihuly's art appears in over 400 permanent collections all over the world, including in the United States, Canada, England, Israel, China, Singapore, the United Arab Emirates, and Australia.", "title": "Works" } ]
Dale Chihuly is an American glass artist and entrepreneur. He is well known in the field of blown glass, "moving it into the realm of large-scale sculpture".
2002-02-25T15:51:15Z
2023-12-19T14:00:30Z
[ "Template:Reflist", "Template:Official website", "Template:Dale Chihuly", "Template:Infobox artist", "Template:Vanchor", "Template:Further", "Template:Library resources box", "Template:Authority control", "Template:Short description", "Template:IPAc-en", "Template:Citation", "Template:Cite news", "Template:Refbegin", "Template:Refend", "Template:ISBN", "Template:Citation needed", "Template:Columns-list", "Template:Cite web", "Template:Cite book", "Template:Commons category", "Template:American Craft Council" ]
https://en.wikipedia.org/wiki/Dale_Chihuly
9,133
Dean Kamen
Dean Lawrence Kamen (born April 5, 1951) is an American engineer, inventor, and businessman. He is known for his invention of the Segway and iBOT, as well as founding the non-profit organization FIRST with Woodie Flowers. Kamen holds over 1,000 patents. Kamen was born on Long Island, New York, to a Jewish family. His father was Jack Kamen, an illustrator for Mad, Weird Science and other EC Comics publications. During his teenage years, Kamen was already being paid for his ideas; local bands and museums paid him to build light and sound systems. His annual earnings reached $60,000 before his high school graduation. He attended Worcester Polytechnic Institute, but in 1976 quit before graduating, after five years of private advanced research for the insulin pump AutoSyringe. Kamen is known best for inventing the product that eventually became known as the Segway PT, an electric, self-balancing human transporter with a computer-controlled gyroscopic stabilization and control system. The device is balanced on two parallel wheels and is controlled by moving body weight. The machine's development was the object of much speculation and hype after segments of a book quoting Steve Jobs and other notable information technology visionaries espousing its society-revolutionizing potential were leaked in December 2001. Kamen was already a successful inventor: his company Auto Syringe manufactures and markets the first drug infusion pump. His company DEKA also holds patents for the technology used in portable dialysis machines, an insulin pump (based on the drug infusion pump technology), and an all-terrain electric wheelchair known as the iBOT, using many of the same gyroscopic balancing technologies that later made their way into the Segway. Kamen has worked extensively on a project involving Stirling engine designs, attempting to create two machines: one that would generate power, and the Slingshot that would serve as a water purification system. He hopes the project will help improve living standards in developing countries. Kamen has a patent on his water purifier, and other patents pending. In 2014, the film SlingShot was released, detailing Kamen's quest to use his vapor compression distiller to fix the world's water crisis. Kamen is also the co-inventor of a compressed air device that would launch a human into the air in order to quickly launch SWAT teams or other emergency workers to the roofs of tall, inaccessible buildings. In 2009 Kamen stated that his company DEKA was now working on solar powered inventions. Kamen and DEKA also developed the DEKA Arm System or "Luke", a prosthetic arm replacement that offers its user much more fine motor control than traditional prosthetic limbs. It was approved for use by the US Food and Drug Administration (FDA) in May 2014, and DEKA is looking for partners to mass-produce the prosthesis. In 1989, Kamen founded FIRST (For Inspiration and Recognition of Science and Technology), an organization intended to build students' interests in science, technology, engineering, and mathematics (STEM). In 1992, working with MIT Professor Emeritus Woodie Flowers, Kamen created the FIRST Robotics Competition (FRC), which evolved into an international competition that by 2020 had drawn 3,647 teams and more than 91,000 students. 
FIRST organizes robotics competition leagues for students in grades K-12, including FIRST LEGO League Discover for ages 4–6, FIRST LEGO League Explore for younger elementary school students, FIRST LEGO League Challenge for older elementary school and middle school students, FIRST Tech Challenge (FTC) for middle and high school students, and FIRST Robotics Competition (FRC) for high school students. In 2017, FIRST held its first Olympics-style competition – FGC (FIRST Global Challenge) – in Washington, D.C. In 2010, Kamen called FIRST the invention he is most proud of, and said that 1 million students had taken part in the contests. In 2017, Kamen founded the Advanced Regenerative Manufacturing Institute (ARMI) and launched BioFabUSA, a Manufacturing USA Innovation Institute with an $80 million grant from the Department of Defense. BioFabUSA's mission is to "...make practical the large-scale manufacturing of engineered tissues and tissue-related technologies, to benefit existing industries and grow new ones" In addition to DoD funding, Kamen brought together a consortium of private sector entities to form a public-private partnership which pledged $214M additional private dollars. In early 2020, ARMI was awarded a grant from the Department of Health and Human Services to establish the first Foundry for American Biotechnology, known as NextFab "to produce technological solutions that help the United States protect against and respond to health security threats, enhance daily medical care, and add to the U.S. bioeconomy". Kamen has won numerous awards. He was elected to the National Academy of Engineering in 1997 for inventing and commercializing biomedical devices and fluid measurement and control systems, and for popularizing engineering among young people. In 1999 he was awarded the 5th Annual Heinz Award in Technology, the Economy and Employment, and in 2000 received the National Medal of Technology from then President Clinton for inventions that have advanced medical care worldwide. In April 2002, Kamen was awarded the Lemelson-MIT Prize for inventors, for his invention of the Segway and of an infusion pump for diabetics. In 2003 his "Project Slingshot", an inexpensive portable water purification system, was named a runner-up for "coolest invention of 2003" by Time magazine. In 2005 he was inducted into the National Inventors Hall of Fame for his invention of the AutoSyringe. In 2006 Kamen was awarded the "Global Humanitarian Action Award" by the United Nations. In 2007 he received the ASME Medal, the highest award from the American Society of Mechanical Engineers, in 2008 he was the recipient of the IRI Achievement Award from the Industrial Research Institute, and in 2011 Kamen was awarded the Benjamin Franklin Medal in Mechanical Engineering of the Franklin Institute. 
Kamen received an honorary Doctor of Engineering degree from Worcester Polytechnic Institute in 1992, Rensselaer Polytechnic Institute May 17, 1996, a Doctor of Engineering degree from Kettering University in 2001, an honorary Doctor of Science degree from Clarkson University on May 13, 2001, an honorary "Doctor of Science" degree from the University of Arizona on May 16, 2009, and an honorary doctorate from the Wentworth Institute of Technology when he spoke at the college's centennial celebration in 2004, and other honorary doctorates from North Carolina State University in 2005, Bates College in 2007, the Georgia Institute of Technology in 2008, the Illinois Institute of Technology in 2008 the Plymouth State University in May 2008 and Rose-Hulman Institute of Technology in 2012. In 2015, Kamen received an honorary Doctor of Engineering and Technology degree from Yale University. In 2017, Kamen was honored with an institutional honorary degree from Université de Sherbrooke. Kamen received the Stevens Honor Award on November 6, 2009, given by the Stevens Institute of Technology and the Stevens Alumni Association. On November 14, 2013, he received the James C. Morgan Global Humanitarian Award. Kamen received the 2018 Public Service Award from the National Science Board, honoring his exemplary public service and contributions to the public's understanding of science and engineering. In 2007, his residence was a hexagonal, shed style mansion he dubbed Westwind, located in Bedford, New Hampshire, just outside Manchester. The house has at least four levels and is very eclectically conceived, with such things as: hallways resembling mine shafts; 1960s novelty furniture; a collection of vintage wheelchairs; spiral staircases; at least one secret passage; an observation tower; a fully equipped machine shop; and a huge cast iron steam engine which once belonged to Henry Ford (built into the multi-story center atrium of the house) which Kamen is working to convert into a Stirling engine-powered kinetic sculpture. Kamen owns and pilots an Embraer Phenom 300 light jet aircraft and three Enstrom helicopters, including a 280FX, a 480, and a 480B. He regularly commutes to work via his helicopters and had a hangar built into his house. In 2016 he flew as a passenger in a B-2 Spirit bomber at Whiteman AFB, marking the opening of the 2016 FRC World Championship in St. Louis. He is the main subject of Code Name Ginger: the Story Behind Segway and Dean Kamen's Quest to Invent a New World, a nonfiction narrative book by journalist Steve Kemper published by Harvard Business School Press in 2003 (released in paperback as Reinventing the Wheel). His company, DEKA, annually creates intricate mechanical presents for him. The company has created a robotic chess player, which is a mechanical arm attached to a chess board, and a vintage-looking computer with antique wood, and a converted typewriter as a keyboard. In addition, DEKA has received funding from DARPA to work on a brain-controlled prosthetic limb called the Luke Arm. Kamen is a member of the USA Science and Engineering Festival's Advisory Board and is also a member of the Xconomists, an ad hoc team of editorial advisors for the tech news and media company, Xconomy. He is also on the Board of Trustees of the X Prize Foundation. Dean of Invention, a TV show on Planet Green, premiered on October 22, 2010. 
It starred Kamen and correspondent Joanne Colan, who together investigated new technologies. Kamen was a keynote speaker at the 2015 Congress of Future Science and Technology Leaders. In the 2016 New Hampshire Senate election, Kamen endorsed Kelly Ayotte, appearing in an ad supporting her.
[ { "paragraph_id": 0, "text": "Dean Lawrence Kamen (born April 5, 1951) is an American engineer, inventor, and businessman. He is known for his invention of the Segway and iBOT, as well as founding the non-profit organization FIRST with Woodie Flowers. Kamen holds over 1,000 patents.", "title": "" }, { "paragraph_id": 1, "text": "Kamen was born on Long Island, New York, to a Jewish family. His father was Jack Kamen, an illustrator for Mad, Weird Science and other EC Comics publications. During his teenage years, Kamen was already being paid for his ideas; local bands and museums paid him to build light and sound systems. His annual earnings reached $60,000 before his high school graduation.", "title": "Early life and family" }, { "paragraph_id": 2, "text": "He attended Worcester Polytechnic Institute, but in 1976 quit before graduating, after five years of private advanced research for the insulin pump AutoSyringe.", "title": "Early life and family" }, { "paragraph_id": 3, "text": "Kamen is known best for inventing the product that eventually became known as the Segway PT, an electric, self-balancing human transporter with a computer-controlled gyroscopic stabilization and control system. The device is balanced on two parallel wheels and is controlled by moving body weight. The machine's development was the object of much speculation and hype after segments of a book quoting Steve Jobs and other notable information technology visionaries espousing its society-revolutionizing potential were leaked in December 2001.", "title": "Career" }, { "paragraph_id": 4, "text": "Kamen was already a successful inventor: his company Auto Syringe manufactures and markets the first drug infusion pump. His company DEKA also holds patents for the technology used in portable dialysis machines, an insulin pump (based on the drug infusion pump technology), and an all-terrain electric wheelchair known as the iBOT, using many of the same gyroscopic balancing technologies that later made their way into the Segway.", "title": "Career" }, { "paragraph_id": 5, "text": "Kamen has worked extensively on a project involving Stirling engine designs, attempting to create two machines: one that would generate power, and the Slingshot that would serve as a water purification system. He hopes the project will help improve living standards in developing countries. Kamen has a patent on his water purifier, and other patents pending. In 2014, the film SlingShot was released, detailing Kamen's quest to use his vapor compression distiller to fix the world's water crisis.", "title": "Career" }, { "paragraph_id": 6, "text": "Kamen is also the co-inventor of a compressed air device that would launch a human into the air in order to quickly launch SWAT teams or other emergency workers to the roofs of tall, inaccessible buildings.", "title": "Career" }, { "paragraph_id": 7, "text": "In 2009 Kamen stated that his company DEKA was now working on solar powered inventions.", "title": "Career" }, { "paragraph_id": 8, "text": "Kamen and DEKA also developed the DEKA Arm System or \"Luke\", a prosthetic arm replacement that offers its user much more fine motor control than traditional prosthetic limbs. 
It was approved for use by the US Food and Drug Administration (FDA) in May 2014, and DEKA is looking for partners to mass-produce the prosthesis.", "title": "Career" }, { "paragraph_id": 9, "text": "In 1989, Kamen founded FIRST (For Inspiration and Recognition of Science and Technology), an organization intended to build students' interests in science, technology, engineering, and mathematics (STEM). In 1992, working with MIT Professor Emeritus Woodie Flowers, Kamen created the FIRST Robotics Competition (FRC), which evolved into an international competition that by 2020 had drawn 3,647 teams and more than 91,000 students.", "title": "Career" }, { "paragraph_id": 10, "text": "FIRST organizes robotics competition leagues for students in grades K-12, including FIRST LEGO League Discover for ages 4–6, FIRST LEGO League Explore for younger elementary school students, FIRST LEGO League Challenge for older elementary school and middle school students, FIRST Tech Challenge (FTC) for middle and high school students, and FIRST Robotics Competition (FRC) for high school students. In 2017, FIRST held its first Olympics-style competition – FGC (FIRST Global Challenge) – in Washington, D.C.", "title": "Career" }, { "paragraph_id": 11, "text": "In 2010, Kamen called FIRST the invention he is most proud of, and said that 1 million students had taken part in the contests.", "title": "Career" }, { "paragraph_id": 12, "text": "In 2017, Kamen founded the Advanced Regenerative Manufacturing Institute (ARMI) and launched BioFabUSA, a Manufacturing USA Innovation Institute with an $80 million grant from the Department of Defense. BioFabUSA's mission is to \"...make practical the large-scale manufacturing of engineered tissues and tissue-related technologies, to benefit existing industries and grow new ones\" In addition to DoD funding, Kamen brought together a consortium of private sector entities to form a public-private partnership which pledged $214M additional private dollars.", "title": "Career" }, { "paragraph_id": 13, "text": "In early 2020, ARMI was awarded a grant from the Department of Health and Human Services to establish the first Foundry for American Biotechnology, known as NextFab \"to produce technological solutions that help the United States protect against and respond to health security threats, enhance daily medical care, and add to the U.S. bioeconomy\".", "title": "Career" }, { "paragraph_id": 14, "text": "Kamen has won numerous awards. He was elected to the National Academy of Engineering in 1997 for inventing and commercializing biomedical devices and fluid measurement and control systems, and for popularizing engineering among young people. In 1999 he was awarded the 5th Annual Heinz Award in Technology, the Economy and Employment, and in 2000 received the National Medal of Technology from then President Clinton for inventions that have advanced medical care worldwide. In April 2002, Kamen was awarded the Lemelson-MIT Prize for inventors, for his invention of the Segway and of an infusion pump for diabetics. In 2003 his \"Project Slingshot\", an inexpensive portable water purification system, was named a runner-up for \"coolest invention of 2003\" by Time magazine.", "title": "Career" }, { "paragraph_id": 15, "text": "In 2005 he was inducted into the National Inventors Hall of Fame for his invention of the AutoSyringe. In 2006 Kamen was awarded the \"Global Humanitarian Action Award\" by the United Nations. 
In 2007 he received the ASME Medal, the highest award from the American Society of Mechanical Engineers, in 2008 he was the recipient of the IRI Achievement Award from the Industrial Research Institute, and in 2011 Kamen was awarded the Benjamin Franklin Medal in Mechanical Engineering of the Franklin Institute.", "title": "Career" }, { "paragraph_id": 16, "text": "Kamen received an honorary Doctor of Engineering degree from Worcester Polytechnic Institute in 1992, Rensselaer Polytechnic Institute May 17, 1996, a Doctor of Engineering degree from Kettering University in 2001, an honorary Doctor of Science degree from Clarkson University on May 13, 2001, an honorary \"Doctor of Science\" degree from the University of Arizona on May 16, 2009, and an honorary doctorate from the Wentworth Institute of Technology when he spoke at the college's centennial celebration in 2004, and other honorary doctorates from North Carolina State University in 2005, Bates College in 2007, the Georgia Institute of Technology in 2008, the Illinois Institute of Technology in 2008 the Plymouth State University in May 2008 and Rose-Hulman Institute of Technology in 2012. In 2015, Kamen received an honorary Doctor of Engineering and Technology degree from Yale University. In 2017, Kamen was honored with an institutional honorary degree from Université de Sherbrooke.", "title": "Career" }, { "paragraph_id": 17, "text": "Kamen received the Stevens Honor Award on November 6, 2009, given by the Stevens Institute of Technology and the Stevens Alumni Association. On November 14, 2013, he received the James C. Morgan Global Humanitarian Award.", "title": "Career" }, { "paragraph_id": 18, "text": "Kamen received the 2018 Public Service Award from the National Science Board, honoring his exemplary public service and contributions to the public's understanding of science and engineering.", "title": "Career" }, { "paragraph_id": 19, "text": "In 2007, his residence was a hexagonal, shed style mansion he dubbed Westwind, located in Bedford, New Hampshire, just outside Manchester. The house has at least four levels and is very eclectically conceived, with such things as: hallways resembling mine shafts; 1960s novelty furniture; a collection of vintage wheelchairs; spiral staircases; at least one secret passage; an observation tower; a fully equipped machine shop; and a huge cast iron steam engine which once belonged to Henry Ford (built into the multi-story center atrium of the house) which Kamen is working to convert into a Stirling engine-powered kinetic sculpture. Kamen owns and pilots an Embraer Phenom 300 light jet aircraft and three Enstrom helicopters, including a 280FX, a 480, and a 480B. He regularly commutes to work via his helicopters and had a hangar built into his house. In 2016 he flew as a passenger in a B-2 Spirit bomber at Whiteman AFB, marking the opening of the 2016 FRC World Championship in St. Louis.", "title": "Personal life" }, { "paragraph_id": 20, "text": "He is the main subject of Code Name Ginger: the Story Behind Segway and Dean Kamen's Quest to Invent a New World, a nonfiction narrative book by journalist Steve Kemper published by Harvard Business School Press in 2003 (released in paperback as Reinventing the Wheel).", "title": "Personal life" }, { "paragraph_id": 21, "text": "His company, DEKA, annually creates intricate mechanical presents for him. 
The company has created a robotic chess player, which is a mechanical arm attached to a chess board, and a vintage-looking computer with antique wood, and a converted typewriter as a keyboard. In addition, DEKA has received funding from DARPA to work on a brain-controlled prosthetic limb called the Luke Arm.", "title": "Personal life" }, { "paragraph_id": 22, "text": "Kamen is a member of the USA Science and Engineering Festival's Advisory Board and is also a member of the Xconomists, an ad hoc team of editorial advisors for the tech news and media company, Xconomy. He is also on the Board of Trustees of the X Prize Foundation.", "title": "Personal life" }, { "paragraph_id": 23, "text": "Dean of Invention, a TV show on Planet Green, premiered on October 22, 2010. It starred Kamen and correspondent Joanne Colan, in which they investigate new technologies,", "title": "Personal life" }, { "paragraph_id": 24, "text": "Kamen was a keynote speaker at the 2015 Congress of Future Science and Technology Leaders.", "title": "Personal life" }, { "paragraph_id": 25, "text": "In the 2016 New Hampshire Senate election, Kamen endorsed Kelly Ayotte, appearing in an ad supporting her.", "title": "Personal life" } ]
Dean Lawrence Kamen is an American engineer, inventor, and businessman. He is known for his invention of the Segway and iBOT, as well as founding the non-profit organization FIRST with Woodie Flowers. Kamen holds over 1,000 patents.
2002-01-16T14:14:04Z
2023-12-01T03:08:50Z
[ "Template:Infobox person", "Template:US patent", "Template:IMDb name", "Template:Segway", "Template:Short description", "Template:Sfn", "Template:Cite magazine", "Template:Cite news", "Template:TED speaker", "Template:FIRST", "Template:ASME Medal", "Template:Use mdy dates", "Template:Cite web", "Template:Prone to spam", "Template:Henry Laurence Gantt Medal", "Template:Reflist", "Template:Citation needed", "Template:Cite book", "Template:USPTO Application", "Template:Wikiquote", "Template:Authority control", "Template:Rp" ]
https://en.wikipedia.org/wiki/Dean_Kamen
9,135
Derivative (finance)
In finance, a derivative is a contract that derives its value from the performance of an underlying entity. This underlying entity can be an asset, index, or interest rate, and is often simply called the underlying. Derivatives can be used for a number of purposes, including insuring against price movements (hedging), increasing exposure to price movements for speculation, or getting access to otherwise hard-to-trade assets or markets. Some of the more common derivatives include forwards, futures, options, swaps, and variations of these such as synthetic collateralized debt obligations and credit default swaps. Most derivatives are traded over-the-counter (off-exchange) or on an exchange such as the Chicago Mercantile Exchange, while most insurance contracts have developed into a separate industry. In the United States, after the financial crisis of 2007–2009, there has been increased pressure to move derivatives to trade on exchanges. Derivatives are one of the three main categories of financial instruments, the other two being equity (i.e., stocks or shares) and debt (i.e., bonds and mortgages). The oldest example of a derivative in history, attested to by Aristotle, is thought to be a contract transaction of olives, entered into by ancient Greek philosopher Thales, who made a profit in the exchange. However, Aristotle did not define this arrangement as a derivative but as a monopoly (Aristotle's Politics, Book I, Chapter XI). Bucket shops, outlawed in 1936 in the US, are a more recent historical example. Derivatives are contracts between two parties that specify conditions (especially the dates, resulting values and definitions of the underlying variables, the parties' contractual obligations, and the notional amount) under which payments are to be made between the parties. The assets include commodities, stocks, bonds, interest rates and currencies, but they can also be other derivatives, which adds another layer of complexity to proper valuation. The components of a firm's capital structure, e.g., bonds and stock, can also be considered derivatives, more precisely options, with the underlying being the firm's assets, but this is unusual outside of technical contexts. From the economic point of view, financial derivatives are cash flows that are conditioned stochastically and discounted to present value. The market risk inherent in the underlying asset is attached to the financial derivative through contractual agreements and hence can be traded separately. The underlying asset does not have to be acquired. Derivatives therefore allow the breakup of ownership and participation in the market value of an asset. This also provides a considerable amount of freedom regarding the contract design. That contractual freedom allows derivative designers to modify the participation in the performance of the underlying asset almost arbitrarily. Thus, the participation in the market value of the underlying can be effectively weaker, stronger (leverage effect), or implemented as inverse. Hence, specifically the market price risk of the underlying asset can be controlled in almost every situation. There are two groups of derivative contracts: the privately traded over-the-counter (OTC) derivatives such as swaps that do not go through an exchange or other intermediary, and exchange-traded derivatives (ETD) that are traded through specialized derivatives exchanges or other exchanges. Derivatives are more common in the modern era, but their origins trace back several centuries. 
One of the oldest derivatives is rice futures, which have been traded on the Dojima Rice Exchange since the eighteenth century. Derivatives are broadly categorized by the relationship between the underlying asset and the derivative (such as forward, option, swap); the type of underlying asset (such as equity derivatives, foreign exchange derivatives, interest rate derivatives, commodity derivatives, or credit derivatives); the market in which they trade (such as exchange-traded or over-the-counter); and their pay-off profile. Derivatives may broadly be categorized as "lock" or "option" products. Lock products (such as swaps, futures, or forwards) obligate the contractual parties to the terms over the life of the contract. Option products (such as interest rate caps) provide the buyer the right, but not the obligation, to enter the contract under the terms specified. Derivatives can be used either for risk management (i.e. to "hedge" by providing offsetting compensation in case of an undesired event, a kind of "insurance") or for speculation (i.e. making a financial "bet"). This distinction is important because the former is a prudent aspect of operations and financial management for many firms across many industries; the latter offers managers and investors a risky opportunity to increase profit, which may not be properly disclosed to stakeholders. Along with many other financial products and services, derivatives reform is an element of the Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010. The Act delegated many rule-making details of regulatory oversight to the Commodity Futures Trading Commission (CFTC), and those details were neither finalized nor fully implemented as of late 2012. To give an idea of the size of the derivative market, The Economist has reported that as of June 2011, the over-the-counter (OTC) derivatives market amounted to approximately $700 trillion, and the size of the market traded on exchanges totaled an additional $83 trillion. For the fourth quarter of 2017, the European Securities and Markets Authority estimated the size of the European derivatives market at €660 trillion, with 74 million outstanding contracts. However, these are "notional" values, and some economists say that these aggregated values greatly exaggerate the market value and the true credit risk faced by the parties involved. For example, in 2010, while the aggregate of OTC derivatives exceeded $600 trillion, the value of the market was estimated to be much lower, at $21 trillion. The credit-risk equivalent of the derivative contracts was estimated at $3.3 trillion. Still, even these scaled-down figures represent huge amounts of money. For perspective, the budget for total expenditure of the United States government during 2012 was $3.5 trillion, and the total current value of the U.S. stock market is an estimated $23 trillion. Meanwhile, the global annual Gross Domestic Product is about $65 trillion. At least for one type of derivative, credit default swaps (CDS), for which the inherent risk is considered high, the higher nominal value remains relevant. It was this type of derivative that investment magnate Warren Buffett referred to in his famous 2002 letter to Berkshire Hathaway shareholders, in which he warned against "financial weapons of mass destruction". CDS notional value in early 2012 amounted to $25.5 trillion, down from $55 trillion in 2008.
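The distinction drawn above between notional amounts and market value can be made concrete with a toy calculation. The following Python sketch is illustrative only; all figures (the $100M notional, the rate levels, the flat discount curve) are hypothetical assumptions, not data from this article. It values a plain-vanilla fixed-for-floating interest rate swap: because only the net difference between two interest legs is ever exchanged, never the notional itself, the contract's market value comes out at a small fraction of its headline notional.

# Toy illustration: why a swap's market value is far smaller than its notional.
# All inputs are hypothetical; a flat discount curve is assumed for simplicity.
notional = 100_000_000       # $100M headline ("notional") amount
fixed_rate = 0.050           # fixed leg pays 5.0% per year
floating_rate = 0.045        # assume the floating leg resets to 4.5% each year
discount_rate = 0.045        # flat rate used to discount future cash flows
years = 5                    # remaining life of the swap, annual payments

# The fixed-rate payer receives (floating - fixed) * notional each year;
# the market value is the present value of those expected net payments.
market_value = sum(
    (floating_rate - fixed_rate) * notional / (1 + discount_rate) ** t
    for t in range(1, years + 1)
)
print(f"Notional:     ${notional:,.0f}")
print(f"Market value: ${market_value:,.0f}")   # about -$2.2M, roughly 2% of notional

Under these assumptions the swap's market value is on the order of 2% of its notional, which is the sense in which aggregate notional figures such as the $600 trillion cited above overstate both market value and credit exposure.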
Derivatives are used for hedging, speculation, and arbitrage, as discussed below. Lock products are theoretically valued at zero at the time of execution and thus do not typically require an up-front exchange between the parties. Based upon movements in the underlying asset over time, however, the value of the contract will fluctuate, and the derivative may be either an asset (i.e., "in the money") or a liability (i.e., "out of the money") at different points throughout its life. Importantly, either party is therefore exposed to the credit quality of its counterparty and is interested in protecting itself in the event of default. Option products have immediate value at the outset because they provide specified protection (intrinsic value) over a given time period (time value). One common form of option product familiar to many consumers is insurance for homes and automobiles. The insured would pay more for a policy with greater liability protections (intrinsic value) and one that extends for a year rather than six months (time value). Because of the immediate option value, the option purchaser typically pays an up-front premium. As with lock products, movements in the underlying asset will cause the option's intrinsic value to change over time while its time value deteriorates steadily until the contract expires. An important difference from a lock product is that, after the initial exchange, the option purchaser has no further liability to its counterparty; upon maturity, the purchaser will execute the option if it has positive value (i.e., if it is "in the money") or let it expire at no cost (other than the initial premium) (i.e., if the option is "out of the money"). Derivatives allow risk related to the price of the underlying asset to be transferred from one party to another. For example, a wheat farmer and a miller could sign a futures contract to exchange a specified amount of cash for a specified amount of wheat in the future. Both parties have reduced a future risk: for the wheat farmer, the uncertainty of the price, and for the miller, the availability of wheat. However, there is still the risk that no wheat will be available because of events unspecified by the contract, such as the weather, or that one party will renege on the contract. Although a third party, called a clearing house, insures a futures contract, not all derivatives are insured against counter-party risk. From another perspective, the farmer and the miller both reduce a risk and acquire a risk when they sign the futures contract: the farmer reduces the risk that the price of wheat will fall below the price specified in the contract and acquires the risk that the price of wheat will rise above the price specified in the contract (thereby losing additional income that he could have earned). The miller, on the other hand, acquires the risk that the price of wheat will fall below the price specified in the contract (thereby paying more in the future than he otherwise would have) and reduces the risk that the price of wheat will rise above the price specified in the contract. In this sense, one party is the insurer (risk taker) for one type of risk, and the counter-party is the insurer (risk taker) for another type of risk. Hedging also occurs when an individual or institution buys an asset (such as a commodity, a bond that has coupon payments, a stock that pays dividends, and so on) and sells it using a futures contract.
The individual or institution has access to the asset for a specified amount of time, and can then sell it in the future at a specified price according to the futures contract. Of course, this allows the individual or institution the benefit of holding the asset, while reducing the risk that the future selling price will deviate unexpectedly from the market's current assessment of the future value of the asset.

Derivatives trading of this kind may serve the financial interests of certain particular businesses. For example, a corporation borrows a large sum of money at a specific interest rate. The interest rate on the loan reprices every six months. The corporation is concerned that the rate of interest may be much higher in six months. The corporation could buy a forward rate agreement (FRA), which is a contract to pay a fixed rate of interest six months after purchase on a notional amount of money. If the interest rate after six months is above the contract rate, the seller will pay the difference to the corporation, or FRA buyer. If the rate is lower, the corporation will pay the difference to the seller. The purchase of the FRA serves to reduce the uncertainty concerning the rate increase and stabilize earnings; a numeric sketch of the settlement appears at the end of this section.

Derivatives can be used to acquire risk, rather than to hedge against risk. Thus, some individuals and institutions will enter into a derivative contract to speculate on the value of the underlying asset. Speculators look to buy an asset in the future at a low price according to a derivative contract when the future market price is high, or to sell an asset in the future at a high price according to a derivative contract when the future market price is lower.

Speculative trading in derivatives gained a great deal of notoriety in 1995 when Nick Leeson, a trader at Barings Bank, made poor and unauthorized investments in futures contracts. Through a combination of poor judgment, lack of oversight by the bank's management and regulators, and unfortunate events like the Kobe earthquake, Leeson incurred a $1.3 billion loss that bankrupted the centuries-old institution.

Individuals and institutions may also look for arbitrage opportunities, as when the current buying price of an asset falls below the price specified in a futures contract to sell the asset.

The true proportion of derivatives contracts used for hedging purposes is unknown, but it appears to be relatively small. Also, derivatives contracts account for only 3–6% of the median firm's total currency and interest rate exposure. Nonetheless, we know that many firms' derivatives activities have at least some speculative component for a variety of reasons.
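As a concrete illustration of the forward rate agreement described above, here is a minimal sketch in Python. The notional, rates, and period are invented, and the discounting step reflects the common market convention of settling the rate difference at the start of the interest period, which the text does not spell out.

    # Settlement paid by the FRA seller to the buyer (a negative result means
    # the buyer pays). The rate difference accrues over the period and is then
    # discounted back at the reference rate, since settlement occurs at the
    # start of the period rather than its end.

    def fra_settlement(notional, contract_rate, reference_rate, period_years):
        interest_diff = notional * (reference_rate - contract_rate) * period_years
        return interest_diff / (1 + reference_rate * period_years)

    # Hypothetical: $10M notional, 4% contract rate, reference rate fixes at 5%,
    # six-month period.
    print(f"${fra_settlement(10_000_000, 0.04, 0.05, 0.5):,.2f}")  # about $48,780.49

The payment approximately offsets the corporation's extra borrowing cost for the period, which is how the FRA stabilizes earnings.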
In broad terms, there are two groups of derivative contracts, which are distinguished by the way they are traded in the market:

Over-the-counter (OTC) derivatives are contracts that are traded (and privately negotiated) directly between two parties, without going through an exchange or other intermediary. Products such as swaps, forward rate agreements, exotic options – and other exotic derivatives – are almost always traded in this way. The OTC derivative market is the largest market for derivatives, and is largely unregulated with respect to disclosure of information between the parties, since the OTC market is made up of banks and other highly sophisticated parties, such as hedge funds. Reporting of OTC amounts is difficult because trades can occur in private, without activity being visible on any exchange. The Bank for International Settlements, which first surveyed OTC derivatives in 1995, reported that the "gross market value, which represent the cost of replacing all open contracts at the prevailing market prices, ... increased by 74% since 2004, to $11 trillion at the end of June 2007 (BIS 2007:24)." Positions in the OTC derivatives market increased to $516 trillion at the end of June 2007, 135% higher than the level recorded in 2004. The total outstanding notional amount is US$708 trillion (as of June 2011). Of this total notional amount, 67% are interest rate contracts, 8% are credit default swaps (CDS), 9% are foreign exchange contracts, 2% are commodity contracts, 1% are equity contracts, and 12% are other. Because OTC derivatives are not traded on an exchange, there is no central counter-party. Therefore, they are subject to counterparty risk, like an ordinary contract, since each counter-party relies on the other to perform.

Exchange-traded derivatives (ETD) are those derivatives instruments that are traded via specialized derivatives exchanges or other exchanges. A derivatives exchange is a market where individuals trade standardized contracts that have been defined by the exchange. A derivatives exchange acts as an intermediary to all related transactions, and takes initial margin from both sides of the trade to act as a guarantee. The world's largest derivatives exchanges (by number of transactions) are the Korea Exchange (which lists KOSPI Index Futures & Options), Eurex (which lists a wide range of European products such as interest rate & index products), and CME Group (made up of the 2007 merger of the Chicago Mercantile Exchange and the Chicago Board of Trade and the 2008 acquisition of the New York Mercantile Exchange). According to BIS, the combined turnover in the world's derivatives exchanges totaled US$344 trillion during Q4 2005. By December 2007 the Bank for International Settlements reported that "derivatives traded on exchanges surged 27% to a record $681 trillion."

Inverse exchange-traded funds (IETFs) and leveraged exchange-traded funds (LETFs) are two special types of exchange-traded funds (ETFs) that are available to common traders and investors on major exchanges like the NYSE and Nasdaq. To maintain these products' net asset value, these funds' administrators must employ more sophisticated financial engineering methods than is usually required for maintenance of traditional ETFs. These instruments must also be rebalanced and re-indexed each day, as the sketch below illustrates.
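Here is a minimal sketch, with invented daily returns, of why daily-rebalanced leveraged funds need this constant maintenance: a 2x fund doubles each day's return, which over a volatile stretch is not the same as doubling the period return.

    # Compounding drift of a hypothetical 2x daily-rebalanced leveraged ETF.
    daily_returns = [0.05, -0.05, 0.04, -0.04]  # invented index moves

    index_level, letf_level = 1.0, 1.0
    for r in daily_returns:
        index_level *= 1 + r       # the index compounds its own return
        letf_level *= 1 + 2 * r    # the fund compounds twice each DAILY return

    print(f"index: {index_level - 1:+.2%}")    # about -0.41%
    print(f"2x LETF: {letf_level - 1:+.2%}")   # about -1.63%, not 2 x -0.41%

The gap between the fund's result and twice the index's result is the rebalancing drift that the administrators' financial engineering must manage.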
Some of the common variants of derivative contracts, and some common examples of them, are described below.

A collateralized debt obligation (CDO) is a type of structured asset-backed security (ABS). An "asset-backed security" is used as an umbrella term for a type of security backed by a pool of assets—including collateralized debt obligations and mortgage-backed securities (MBS) (example: "The capital market in which asset-backed securities are issued and traded is composed of three main categories: ABS, MBS and CDOs")—and sometimes for a particular type of that security—one backed by consumer loans (example: "As a rule of thumb, securitization issues backed by mortgages are called MBS, and securitization issues backed by debt obligations are called CDO, [and] securitization issues backed by consumer-backed products—car loans, consumer loans and credit cards, among others—are called ABS."). Originally developed for the corporate debt markets, over time CDOs evolved to encompass the mortgage and mortgage-backed security (MBS) markets.

Like other private-label securities backed by assets, a CDO can be thought of as a promise to pay investors in a prescribed sequence, based on the cash flow the CDO collects from the pool of bonds or other assets it owns. The CDO is "sliced" into "tranches", which "catch" the cash flow of interest and principal payments in sequence based on seniority. If some loans default and the cash collected by the CDO is insufficient to pay all of its investors, those in the lowest, most "junior" tranches suffer losses first. The last to lose payment from default are the safest, most senior tranches. Consequently, coupon payments (and interest rates) vary by tranche, with the safest/most senior tranches paying the lowest rates and the lowest tranches paying the highest rates to compensate for higher default risk. As an example, a CDO might issue the following tranches in order of safeness: Senior AAA (sometimes known as "super senior"); Junior AAA; AA; A; BBB; Residual. This sequential "waterfall" of payments is sketched below.

Separate special-purpose entities—rather than the parent investment bank—issue the CDOs and pay interest to investors. As CDOs developed, some sponsors repackaged tranches into yet another iteration called "CDO-Squared" or the "CDOs of CDOs". In the early 2000s, CDOs were generally diversified, but by 2006–2007—when the CDO market grew to hundreds of billions of dollars—this changed. CDO collateral became dominated not by loans, but by lower-level (BBB or A) tranches recycled from other asset-backed securities, whose assets were usually non-prime mortgages. These CDOs have been called "the engine that powered the mortgage supply chain" for nonprime mortgages, and are credited with giving lenders greater incentive to make non-prime loans leading up to the 2007–2009 subprime mortgage crisis.
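A minimal sketch of the payment waterfall described above; the tranche sizes and the amounts collected are invented.

    # Cash collected from the pool pays tranches in order of seniority, so the
    # most junior tranches absorb any shortfall first.

    def waterfall(cash_collected, tranches):
        payments = {}
        for name, owed in tranches:           # tranches listed senior-first
            paid = min(owed, cash_collected)
            payments[name] = paid
            cash_collected -= paid
        return payments

    tranches = [("Senior AAA", 60.0), ("AA", 25.0), ("BBB", 10.0), ("Residual", 5.0)]
    print(waterfall(100.0, tranches))  # enough cash: every tranche paid in full
    print(waterfall(70.0, tranches))   # shortfall: AA gets 10 of 25; BBB and
                                       # Residual get nothing

This ordering is why the senior tranches can carry the lowest coupons: under the stated assumptions they are the last to be touched by defaults.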
A credit default swap (CDS) is a financial swap agreement that the seller of the CDS will compensate the buyer (the creditor of the reference loan) in the event of a loan default (by the debtor) or other credit event. The buyer of the CDS makes a series of payments (the CDS "fee" or "spread") to the seller and, in exchange, receives a payoff if the loan defaults. It was invented by Blythe Masters of JP Morgan in 1994. In the event of default, the buyer of the CDS receives compensation (usually the face value of the loan), and the seller of the CDS takes possession of the defaulted loan. However, anyone with sufficient collateral to trade with a bank or hedge fund can purchase a CDS, even buyers who do not hold the loan instrument and who have no direct insurable interest in the loan (these are called "naked" CDSs). If there are more CDS contracts outstanding than bonds in existence, a protocol exists to hold a credit event auction; the payment received is usually substantially less than the face value of the loan.

Credit default swaps have existed since the early 1990s, and increased in use after 2003. By the end of 2007, the outstanding CDS amount was $62.2 trillion, falling to $26.3 trillion by mid-year 2010 but reportedly $25.5 trillion in early 2012. CDSs are not traded on an exchange and there is no required reporting of transactions to a government agency. During the 2007–2010 financial crisis, the lack of transparency in this large market became a concern to regulators, as it could pose a systemic risk.

In March 2010, the [DTCC] Trade Information Warehouse announced it would give regulators greater access to its credit default swaps database. CDS data can be used by financial professionals, regulators, and the media to monitor how the market views the credit risk of any entity on which a CDS is available, which can be compared to that provided by credit rating agencies. U.S. courts may soon follow suit in drawing on CDS data as well. Most CDSs are documented using standard forms drafted by the International Swaps and Derivatives Association (ISDA), although there are many variants. In addition to the basic, single-name swaps, there are basket default swaps (BDSs), index CDSs, funded CDSs (also called credit-linked notes), as well as loan-only credit default swaps (LCDS). In addition to corporations and governments, the reference entity can include a special-purpose vehicle issuing asset-backed securities. Some claim that derivatives such as CDS are potentially dangerous in that they combine priority in bankruptcy with a lack of transparency. A CDS can be unsecured (without collateral) and be at higher risk for a default.

In finance, a forward contract or simply a forward is a non-standardized contract between two parties to buy or to sell an asset at a specified future time at a price agreed upon today, making it a type of derivative instrument. This is in contrast to a spot contract, which is an agreement to buy or sell an asset on its spot date, which may vary depending on the instrument; for example, most FX contracts have a spot date two business days from today. The party agreeing to buy the underlying asset in the future assumes a long position, and the party agreeing to sell the asset in the future assumes a short position. The price agreed upon is called the delivery price, which is equal to the forward price at the time the contract is entered into. The price of the underlying instrument, in whatever form, is paid before control of the instrument changes. This is one of the many forms of buy/sell orders where the time and date of trade is not the same as the value date where the securities themselves are exchanged.

The forward price of such a contract is commonly contrasted with the spot price, which is the price at which the asset changes hands on the spot date. The difference between the spot and the forward price is the forward premium or forward discount, generally considered in the form of a profit or loss by the purchasing party; a cost-of-carry sketch of this relationship appears below. Forwards, like other derivative securities, can be used to hedge risk (typically currency or exchange rate risk), as a means of speculation, or to allow a party to take advantage of a quality of the underlying instrument which is time-sensitive.
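A minimal sketch of the standard cost-of-carry relation between the spot and forward price, using invented numbers; here r is the financing rate and q the income yield on the asset (coupons or dividends), with continuous compounding assumed.

    import math

    def forward_price(spot, r, q, years):
        # Arbitrage-free forward under cost-of-carry assumptions:
        # F = S * exp((r - q) * T)
        return spot * math.exp((r - q) * years)

    F = forward_price(spot=100.0, r=0.05, q=0.02, years=0.5)
    print(f"{F:.2f}")  # about 101.51: F above spot, i.e. a forward premium

When financing costs exceed the income received for holding the asset, the forward stands at a premium to spot; when income dominates, it stands at a discount.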
A closely related contract is a futures contract; they differ in certain respects. Forward contracts are very similar to futures contracts, except they are not exchange-traded or defined on standardized assets. Forwards also typically have no interim partial settlements or "true-ups" in margin requirements like futures, such that the parties do not exchange additional property securing the party at gain, and the entire unrealized gain or loss builds up while the contract is open. However, being traded over the counter (OTC), the specification of forward contracts can be customized and may include mark-to-market and daily margin calls. Hence, a forward contract arrangement might call for the loss party to pledge collateral or additional collateral to better secure the party at gain. In other words, the terms of the forward contract will determine the collateral calls based upon certain "trigger" events relevant to a particular counterparty, such as, among other things, credit ratings, value of assets under management, or redemptions over a specific time frame (e.g., quarterly, annually).

In finance, a 'futures contract' (more colloquially, futures) is a standardized contract between two parties to buy or sell a specified asset of standardized quantity and quality for a price agreed upon today (the futures price) with delivery and payment occurring at a specified future date, the delivery date, making it a derivative product (i.e. a financial product that is derived from an underlying asset). The contracts are negotiated at a futures exchange, which acts as an intermediary between buyer and seller. The party agreeing to buy the underlying asset in the future, the "buyer" of the contract, is said to be "long", and the party agreeing to sell the asset in the future, the "seller" of the contract, is said to be "short".

While the futures contract specifies a trade taking place in the future, the purpose of the futures exchange is to act as intermediary and mitigate the risk of default by either party in the intervening period. For this reason, the futures exchange requires both parties to put up an initial amount of cash (performance bond), the margin. Margins, sometimes set as a percentage of the value of the futures contract, need to be proportionally maintained at all times during the life of the contract to underpin this mitigation, because the price of the contract will vary in keeping with supply and demand and will change daily, and thus one party or the other will theoretically be making or losing money. To mitigate risk and the possibility of default by either party, the product is marked to market on a daily basis, whereby the difference between the prior agreed-upon price and the actual daily futures price is settled on a daily basis. This is sometimes known as the variation margin: the futures exchange draws money out of the losing party's margin account and puts it into the other party's, thus ensuring that the correct daily loss or profit is reflected in the respective account. If the margin account goes below a certain value set by the exchange, then a margin call is made and the account owner must replenish the margin account. This process is known as "marking to market", and is sketched below. Thus on the delivery date, the amount exchanged is not the specified price on the contract but the spot value, since any gain or loss has already been previously settled by marking to market.
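A minimal sketch of daily marking to market for a long futures position, with invented settlement prices and margin levels.

    # Each day the change from the prior settlement price is credited to, or
    # drawn from, the margin account; if the account falls below the
    # maintenance level, a margin call restores it.

    entry_price = 100.0
    settlements = [101.0, 97.0, 102.0]   # hypothetical daily settlement prices
    margin_account = 10.0                # initial margin posted
    maintenance_margin = 8.0

    prev = entry_price
    for price in settlements:
        margin_account += price - prev   # the day's variation margin
        prev = price
        call = max(0.0, maintenance_margin - margin_account)
        margin_account += call           # owner replenishes if called
        print(f"settle {price:6.2f}  account {margin_account:5.2f}  call {call:4.2f}")

By the last day the cumulative variation equals the total price move (102 - 100), which is why only the spot value changes hands at delivery.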
A closely related contract is a forward contract. A forward is like a futures contract in that it specifies the exchange of goods for a specified price at a specified future date. However, a forward is not traded on an exchange and thus does not have the interim partial payments due to marking to market. Nor is the contract standardized, as it is on the exchange. Unlike an option, both parties of a futures contract must fulfill the contract on the delivery date. The seller delivers the underlying asset to the buyer, or, if it is a cash-settled futures contract, then cash is transferred from the futures trader who sustained a loss to the one who made a profit. To exit the commitment prior to the settlement date, the holder of a futures position can close out its contract obligations by taking the opposite position on another futures contract on the same asset and settlement date. The difference in futures prices is then a profit or loss.

A mortgage-backed security (MBS) is an asset-backed security that is secured by a mortgage, or more commonly a collection ("pool") of sometimes hundreds of mortgages. The mortgages are sold to an entity (a government agency or investment bank) that "securitizes", or packages, the loans together into a security that can be sold to investors. The mortgages of an MBS may be residential or commercial, depending on whether it is an Agency MBS or a Non-Agency MBS; in the United States they may be issued by structures set up by government-sponsored enterprises like Fannie Mae or Freddie Mac, or they can be "private-label", issued by structures set up by investment banks. The structure of the MBS may be known as "pass-through", where the interest and principal payments from the borrower or homebuyer pass through it to the MBS holder, or it may be more complex, made up of a pool of other MBSs. Other types of MBS include collateralized mortgage obligations (CMOs, often structured as real estate mortgage investment conduits) and collateralized debt obligations (CDOs).

The shares of subprime MBSs issued by various structures, such as CMOs, are not identical but rather issued as tranches (French for "slices"), each with a different level of priority in the debt repayment stream, giving them different levels of risk and reward. Tranches of an MBS—especially the lower-priority, higher-interest tranches—were often further repackaged and resold as collateralized debt obligations. These subprime MBSs issued by investment banks were a major issue in the subprime mortgage crisis of 2006–2008.

The total face value of an MBS decreases over time, because, like mortgages and unlike bonds and most other fixed-income securities, the principal in an MBS is not paid back as a single payment to the bond holder at maturity but rather is paid along with the interest in each periodic payment (monthly, quarterly, etc.). This decrease in face value is measured by the MBS's "factor", the percentage of the original "face" that remains to be repaid, as sketched below.
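A minimal sketch of a pool factor for a level-payment mortgage pool, with invented pool terms; only scheduled amortization is modeled (real factors also fall with prepayments and defaults).

    def pool_factor(original_face, annual_rate, term_months, months_elapsed):
        i = annual_rate / 12                                         # monthly rate
        payment = original_face * i / (1 - (1 + i) ** -term_months)  # level payment
        balance = original_face
        for _ in range(months_elapsed):
            balance -= payment - balance * i                         # principal portion
        return balance / original_face

    # Hypothetical 30-year, 6% pool after five years of payments:
    print(f"{pool_factor(100_000_000, 0.06, 360, 60):.4f}")  # about 0.9305

So after five years roughly 93% of the original face remains to be repaid; each periodic payment chips away at the factor rather than leaving the principal to a single payment at maturity.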
In finance, an option is a contract which gives the buyer (the owner) the right, but not the obligation, to buy or sell an underlying asset or instrument at a specified strike price on or before a specified date. The seller has the corresponding obligation to fulfill the transaction—that is, to sell or buy—if the buyer (owner) "exercises" the option. The buyer pays a premium to the seller for this right. An option that conveys to the owner the right to buy something at a certain price is a "call option"; an option that conveys the right of the owner to sell something at a certain price is a "put option". Both are commonly traded, but for clarity, the call option is more frequently discussed.

Options valuation is a topic of ongoing research in academic and practical finance. In basic terms, the value of an option is commonly decomposed into two parts: its intrinsic value and its time value. Although options valuation has been studied since the 19th century, the contemporary approach is based on the Black–Scholes model, which was first published in 1973 and is sketched at the end of this section.

Options contracts have been known for many centuries. However, both trading activity and academic interest increased when, as from 1973, options were issued with standardized terms and traded through a guaranteed clearing house at the Chicago Board Options Exchange. Today, many options are created in a standardized form and traded through clearing houses on regulated options exchanges, while other over-the-counter options are written as bilateral, customized contracts between a single buyer and seller, one or both of which may be a dealer or market-maker. Options are part of a larger class of financial instruments known as derivative products or simply derivatives.

A swap is a derivative in which two counterparties exchange the cash flows of one party's financial instrument for those of the other party's financial instrument. The benefits in question depend on the type of financial instruments involved. For example, in the case of a swap involving two bonds, the benefits in question can be the periodic interest (coupon) payments associated with such bonds. Specifically, two counterparties agree to exchange one stream of cash flows for another. These streams are called the swap's "legs". The swap agreement defines the dates when the cash flows are to be paid and the way they are accrued and calculated. Usually at the time the contract is initiated, at least one of these series of cash flows is determined by an uncertain variable such as a floating interest rate, foreign exchange rate, equity price, or commodity price. The cash flows are calculated over a notional principal amount. In contrast to a future, a forward or an option, the notional amount is usually not exchanged between counterparties. Consequently, swaps can be settled in cash or collateral. Swaps can be used to hedge certain risks such as interest rate risk, or to speculate on changes in the expected direction of underlying prices.

Swaps were first introduced to the public in 1981, when IBM and the World Bank entered into a swap agreement. Today, swaps are among the most heavily traded financial contracts in the world: the total amount of interest rate and currency swaps outstanding was more than $348 trillion in 2010, according to the Bank for International Settlements (BIS). The five generic types of swaps, in order of their quantitative importance, are: interest rate swaps, currency swaps, credit swaps, commodity swaps and equity swaps (there are many other types).

The derivative market serves several salient economic functions; in a nutshell, there is a substantial increase in savings and investment in the long run due to the augmented activities of derivative market participants.
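As noted above, here is a minimal sketch of the Black–Scholes formula for a European call on a non-dividend-paying stock; the spot, strike, rate, volatility, and maturity are invented for illustration.

    from math import exp, log, sqrt
    from statistics import NormalDist

    def bs_call(S, K, r, sigma, T):
        # C = S*N(d1) - K*exp(-r*T)*N(d2)
        N = NormalDist().cdf
        d1 = (log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * N(d1) - K * exp(-r * T) * N(d2)

    # Hypothetical at-the-money call: spot 100, strike 100, 5% rate,
    # 20% volatility, one year to expiry.
    print(f"{bs_call(100, 100, 0.05, 0.2, 1.0):.2f}")  # about 10.45

The model's key assumption, noted in the valuation discussion that follows, is that the option's cash flows can be replicated by continuously buying and selling the stock.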
Two common measures of value are the market price and the arbitrage-free price. For exchange-traded derivatives, market price is usually transparent (often published in real time by the exchange, based on all the current bids and offers placed on that particular contract at any one time). Complications can arise with OTC or floor-traded contracts though, as trading is handled manually, making it difficult to automatically broadcast prices. In particular with OTC contracts, there is no central exchange to collate and disseminate prices.

The arbitrage-free price for a derivatives contract can be complex, and there are many different variables to consider. Arbitrage-free pricing is a central topic of financial mathematics. For futures and forwards the arbitrage-free price is relatively straightforward, involving the price of the underlying together with the cost of carry (income received less interest costs), although there can be complexities. However, for options and more complex derivatives, pricing involves developing a complex pricing model: understanding the stochastic process of the price of the underlying asset is often crucial. A key equation for the theoretical valuation of options is the Black–Scholes formula, which is based on the assumption that the cash flows from a European stock option can be replicated by a continuous buying and selling strategy using only the stock. A simplified version of this valuation technique is the binomial options model.

OTC derivatives represent the biggest challenge in using models to price derivatives. Since these contracts are not publicly traded, no market price is available to validate the theoretical valuation. Most of the model's results are input-dependent (meaning the final price depends heavily on how the pricing inputs are derived). Therefore, it is common that OTC derivatives are priced by independent agents that both counterparties involved in the deal designate upfront (when signing the contract).

Derivatives are often subject to the following criticisms. Particularly since the financial crisis of 2007–2008, the discipline of risk management has developed in an attempt to address these and other risks; see Financial risk management § Banking.

According to Raghuram Rajan, a former chief economist of the International Monetary Fund (IMF), "... it may well be that the managers of these firms [investment funds] have figured out the correlations between the various instruments they hold and believe they are hedged. Yet as Chan and others (2005) point out, the lessons of summer 1998 following the default on Russian government debt is that correlations that are zero or negative in normal times can turn overnight to one – a phenomenon they term "phase lock-in". A hedged position "can become unhedged at the worst times, inflicting substantial losses on those who mistakenly believe they are protected". See the FRTB framework, which seeks to address this to some extent.

The use of derivatives can result in large losses because of the use of leverage, or borrowing. Derivatives allow investors to earn large returns from small movements in the underlying asset's price. However, investors could lose large amounts if the price of the underlying moves against them significantly. There have been several instances of massive losses in derivative markets, such as the Barings Bank collapse described above.

Derivatives typically have a large notional value. As such, there is the danger that their use could result in losses for which the investor would be unable to compensate.
The possibility that this could lead to a chain reaction ensuing in an economic crisis was pointed out by famed investor Warren Buffett in Berkshire Hathaway's 2002 annual report. Buffett called them 'financial weapons of mass destruction.' A potential problem with derivatives is that they comprise an increasingly larger notional amount of assets, which may lead to distortions in the underlying capital and equities markets themselves. Investors begin to look at the derivatives markets to make a decision to buy or sell securities, and so what was originally meant to be a market to transfer risk now becomes a leading indicator. (See Berkshire Hathaway's annual report for 2002.)

Some derivatives (especially swaps) expose investors to counterparty risk, or risk arising from the other party in a financial transaction. Counterparty risk results from the differences in the current price versus the expected future settlement price. Different types of derivatives have different levels of counterparty risk. For example, standardized stock options by law require the party at risk to have a certain amount deposited with the exchange, showing that they can pay for any losses; banks that help businesses swap variable for fixed rates on loans may do credit checks on both parties. However, in private agreements between two companies, for example, there may not be benchmarks for performing due diligence and risk analysis.

Under US law and the laws of most other developed countries, derivatives have special legal exemptions that make them a particularly attractive legal form to extend credit. The strong creditor protections afforded to derivatives counterparties, in combination with their complexity and lack of transparency, can however cause capital markets to underprice credit risk. This can contribute to credit booms, and increase systemic risks. Indeed, the use of derivatives to conceal credit risk from third parties while protecting derivative counterparties contributed to the financial crisis of 2008 in the United States.

In the context of a 2010 examination of the ICE Trust, an industry self-regulatory body, Gary Gensler, the chairman of the Commodity Futures Trading Commission which regulates most derivatives, was quoted as saying that the derivatives marketplace as it functions now "adds up to higher costs to all Americans". More oversight of the banks in this market is needed, he also said. Additionally, the report said, "[t]he Department of Justice is looking into derivatives, too. The department's antitrust unit is actively investigating 'the possibility of anticompetitive practices in the credit derivatives clearing, trading and information services industries', according to a department spokeswoman."

For legislators and committees responsible for financial reform related to derivatives in the United States and elsewhere, distinguishing between hedging and speculative derivatives activities has been a nontrivial challenge. The distinction is critical because regulation should help to isolate and curtail speculation with derivatives, especially for "systemically significant" institutions whose default could be large enough to threaten the entire financial system. At the same time, the legislation should allow for responsible parties to hedge risk without unduly tying up working capital as collateral that firms may better employ elsewhere in their operations and investment. In this regard, it is important to distinguish between financial (e.g. banks) and non-financial end-users of derivatives (e.g.
real estate development companies), because these firms' derivatives usage is inherently different. More importantly, the reasonable collateral that secures these different counterparties can be very different. The distinction between these firms is not always straightforward (e.g. hedge funds or even some private equity firms do not neatly fit either category). Finally, even financial users must be differentiated, as 'large' banks may be classified as "systemically significant", and their derivatives activities must be more tightly monitored and restricted than those of smaller, local and regional banks.

Over-the-counter dealing will be less common as the Dodd–Frank Wall Street Reform and Consumer Protection Act comes into effect. The law mandated the clearing of certain swaps at registered exchanges and imposed various restrictions on derivatives. To implement Dodd–Frank, the CFTC developed new rules in at least 30 areas. The Commission determines which swaps are subject to mandatory clearing and whether a derivatives exchange is eligible to clear a certain type of swap contract.

Nonetheless, the above and other challenges of the rule-making process have delayed full enactment of aspects of the legislation relating to derivatives. The challenges are further complicated by the necessity to orchestrate globalized financial reform among the nations that comprise the world's major financial markets, a primary responsibility of the Financial Stability Board, whose progress is ongoing.

In the U.S., by February 2012 the combined effort of the SEC and CFTC had produced over 70 proposed and final derivatives rules. However, both of them had delayed adoption of a number of derivatives regulations because of the burden of other rulemaking, litigation and opposition to the rules, and many core definitions (such as the terms "swap", "security-based swap", "swap dealer", "security-based swap dealer", "major swap participant" and "major security-based swap participant") had still not been adopted. SEC Chairman Mary Schapiro opined: "At the end of the day, it probably does not make sense to harmonize everything [between the SEC and CFTC rules] because some of these products are quite different and certainly the market structures are quite different." On February 11, 2015, the Securities and Exchange Commission (SEC) released two final rules toward establishing a reporting and public disclosure framework for security-based swap transaction data. The two rules are not completely harmonized with the CFTC's requirements.

In November 2012, the SEC and regulators from Australia, Brazil, the European Union, Hong Kong, Japan, Ontario, Quebec, Singapore, and Switzerland met to discuss reforming the OTC derivatives market, as had been agreed by leaders at the G-20 Pittsburgh summit in September 2009. In December 2012, they released a joint statement to the effect that they recognized that the market is a global one and "firmly support the adoption and enforcement of robust and consistent standards in and across jurisdictions", with the goals of mitigating risk, improving transparency, protecting against market abuse, preventing regulatory gaps, reducing the potential for arbitrage opportunities, and fostering a level playing field for market participants.
They also agreed on the need to reduce regulatory uncertainty and provide market participants with sufficient clarity on laws and regulations by avoiding, to the extent possible, the application of conflicting rules to the same entities and transactions, and minimizing the application of inconsistent and duplicative rules. At the same time, they noted that "complete harmonization – perfect alignment of rules across jurisdictions" would be difficult, because of jurisdictions' differences in law, policy, markets, implementation timing, and legislative and regulatory processes.

On December 20, 2013, the CFTC provided information on its swaps regulation "comparability" determinations. The release addressed the CFTC's cross-border compliance exceptions. Specifically, it addressed which entity-level and in some cases transaction-level requirements in six jurisdictions (Australia, Canada, the European Union, Hong Kong, Japan, and Switzerland) it found comparable to its own rules, thus permitting non-US swap dealers, major swap participants, and the foreign branches of US swap dealers and major swap participants in these jurisdictions to comply with local rules in lieu of Commission rules.

Mandatory reporting regulations are being finalized in a number of jurisdictions, such as the Dodd–Frank Act in the US, the European Market Infrastructure Regulation (EMIR) in Europe, as well as regulations in Hong Kong, Japan, Singapore, Canada, and other countries. The OTC Derivatives Regulators Forum (ODRF), a group of over 40 worldwide regulators, provided trade repositories with a set of guidelines regarding data access to regulators, and the Financial Stability Board and CPSS-IOSCO also made recommendations with regard to reporting. DTCC, through its "Global Trade Repository" (GTR) service, manages global trade repositories for interest rate, commodity, foreign exchange, credit, and equity derivatives. It makes global trade reports to the CFTC in the U.S., and plans to do the same for ESMA in Europe and for regulators in Hong Kong, Japan, and Singapore. It covers cleared and uncleared OTC derivatives products, whether or not a trade is electronically processed or bespoke.
[ { "paragraph_id": 0, "text": "In finance, a derivative is a contract that derives its value from the performance of an underlying entity. This underlying entity can be an asset, index, or interest rate, and is often simply called the underlying. Derivatives can be used for a number of purposes, including insuring against price movements (hedging), increasing exposure to price movements for speculation, or getting access to otherwise hard-to-trade assets or markets.", "title": "" }, { "paragraph_id": 1, "text": "Some of the more common derivatives include forwards, futures, options, swaps, and variations of these such as synthetic collateralized debt obligations and credit default swaps. Most derivatives are traded over-the-counter (off-exchange) or on an exchange such as the Chicago Mercantile Exchange, while most insurance contracts have developed into a separate industry. In the United States, after the financial crisis of 2007–2009, there has been increased pressure to move derivatives to trade on exchanges.", "title": "" }, { "paragraph_id": 2, "text": "Derivatives are one of the three main categories of financial instruments, the other two being equity (i.e., stocks or shares) and debt (i.e., bonds and mortgages). The oldest example of a derivative in history, attested to by Aristotle, is thought to be a contract transaction of olives, entered into by ancient Greek philosopher Thales, who made a profit in the exchange. However, Aristotle did not define this arrangement as a derivative but as a monopoly (Aristotle's Politics, Book I, Chapter XI). Bucket shops, outlawed in 1936 in the US, are a more recent historical example.", "title": "" }, { "paragraph_id": 3, "text": "Derivatives are contracts between two parties that specify conditions (especially the dates, resulting values and definitions of the underlying variables, the parties' contractual obligations, and the notional amount) under which payments are to be made between the parties. The assets include commodities, stocks, bonds, interest rates and currencies, but they can also be other derivatives, which adds another layer of complexity to proper valuation. The components of a firm's capital structure, e.g., bonds and stock, can also be considered derivatives, more precisely options, with the underlying being the firm's assets, but this is unusual outside of technical contexts.", "title": "Basics" }, { "paragraph_id": 4, "text": "From the economic point of view, financial derivatives are cash flows that are conditioned stochastically and discounted to present value. The market risk inherent in the underlying asset is attached to the financial derivative through contractual agreements and hence can be traded separately. The underlying asset does not have to be acquired. Derivatives therefore allow the breakup of ownership and participation in the market value of an asset. This also provides a considerable amount of freedom regarding the contract design. That contractual freedom allows derivative designers to modify the participation in the performance of the underlying asset almost arbitrarily. Thus, the participation in the market value of the underlying can be effectively weaker, stronger (leverage effect), or implemented as inverse. 
Hence, specifically the market price risk of the underlying asset can be controlled in almost every situation.", "title": "Basics" }, { "paragraph_id": 5, "text": "There are two groups of derivative contracts: the privately traded over-the-counter (OTC) derivatives such as swaps that do not go through an exchange or other intermediary, and exchange-traded derivatives (ETD) that are traded through specialized derivatives exchanges or other exchanges.", "title": "Basics" }, { "paragraph_id": 6, "text": "Derivatives are more common in the modern era, but their origins trace back several centuries. One of the oldest derivatives is rice futures, which have been traded on the Dojima Rice Exchange since the eighteenth century. Derivatives are broadly categorized by the relationship between the underlying asset and the derivative (such as forward, option, swap); the type of underlying asset (such as equity derivatives, foreign exchange derivatives, interest rate derivatives, commodity derivatives, or credit derivatives); the market in which they trade (such as exchange-traded or over-the-counter); and their pay-off profile.", "title": "Basics" }, { "paragraph_id": 7, "text": "Derivatives may broadly be categorized as \"lock\" or \"option\" products. Lock products (such as swaps, futures, or forwards) obligate the contractual parties to the terms over the life of the contract. Option products (such as interest rate swaps) provide the buyer the right, but not the obligation to enter the contract under the terms specified.", "title": "Basics" }, { "paragraph_id": 8, "text": "Derivatives can be used either for risk management (i.e. to \"hedge\" by providing offsetting compensation in case of an undesired event, a kind of \"insurance\") or for speculation (i.e. making a financial \"bet\"). This distinction is important because the former is a prudent aspect of operations and financial management for many firms across many industries; the latter offers managers and investors a risky opportunity to increase profit, which may not be properly disclosed to stakeholders.", "title": "Basics" }, { "paragraph_id": 9, "text": "Along with many other financial products and services, derivatives reform is an element of the Dodd–Frank Wall Street Reform and Consumer Protection Act of 2010. The Act delegated many rule-making details of regulatory oversight to the Commodity Futures Trading Commission (CFTC) and those details are not finalized nor fully implemented as of late 2012.", "title": "Basics" }, { "paragraph_id": 10, "text": "To give an idea of the size of the derivative market, The Economist has reported that as of June 2011, the over-the-counter (OTC) derivatives market amounted to approximately $700 trillion, and the size of the market traded on exchanges totaled an additional $83 trillion. For the fourth quarter 2017 the European Securities Market Authority estimated the size of European derivatives market at a size of €660 trillion with 74 million outstanding contracts.", "title": "Size of market" }, { "paragraph_id": 11, "text": "However, these are \"notional\" values, and some economists say that these aggregated values greatly exaggerate the market value and the true credit risk faced by the parties involved. For example, in 2010, while the aggregate of OTC derivatives exceeded $600 trillion, the value of the market was estimated to be much lower, at $21 trillion. 
The credit-risk equivalent of the derivative contracts was estimated at $3.3 trillion.", "title": "Size of market" }, { "paragraph_id": 12, "text": "Still, even these scaled-down figures represent huge amounts of money. For perspective, the budget for total expenditure of the United States government during 2012 was $3.5 trillion, and the total current value of the U.S. stock market is an estimated $23 trillion. Meanwhile, the global annual Gross Domestic Product is about $65 trillion.", "title": "Size of market" }, { "paragraph_id": 13, "text": "At least for one type of derivative, credit default swaps (CDS), for which the inherent risk is considered high , the higher, nominal value remains relevant. It was this type of derivative that investment magnate Warren Buffett referred to in his famous 2002 speech in which he warned against \"financial weapons of mass destruction\". CDS notional value in early 2012 amounted to $25.5 trillion, down from $55 trillion in 2008.", "title": "Size of market" }, { "paragraph_id": 14, "text": "Derivatives are used for the following:", "title": "Usage" }, { "paragraph_id": 15, "text": "Lock products are theoretically valued at zero at the time of execution and thus do not typically require an up-front exchange between the parties. Based upon movements in the underlying asset over time, however, the value of the contract will fluctuate, and the derivative may be either an asset (i.e., \"in the money\") or a liability (i.e., \"out of the money\") at different points throughout its life. Importantly, either party is therefore exposed to the credit quality of its counterparty and is interested in protecting itself in an event of default.", "title": "Usage" }, { "paragraph_id": 16, "text": "Option products have immediate value at the outset because they provide specified protection (intrinsic value) over a given time period (time value). One common form of option product familiar to many consumers is insurance for homes and automobiles. The insured would pay more for a policy with greater liability protections (intrinsic value) and one that extends for a year rather than six months (time value). Because of the immediate option value, the option purchaser typically pays an up front premium. Just like for lock products, movements in the underlying asset will cause the option's intrinsic value to change over time while its time value deteriorates steadily until the contract expires. An important difference between a lock product is that, after the initial exchange, the option purchaser has no further liability to its counterparty; upon maturity, the purchaser will execute the option if it has positive value (i.e., if it is \"in the money\") or expire at no cost (other than to the initial premium) (i.e., if the option is \"out of the money\").", "title": "Usage" }, { "paragraph_id": 17, "text": "Derivatives allow risk related to the price of the underlying asset to be transferred from one party to another. For example, a wheat farmer and a miller could sign a futures contract to exchange a specified amount of cash for a specified amount of wheat in the future. Both parties have reduced a future risk: for the wheat farmer, the uncertainty of the price, and for the miller, the availability of wheat. However, there is still the risk that no wheat will be available because of events unspecified by the contract, such as the weather, or that one party will renege on the contract. 
Although a third party, called a clearing house, insures a futures contract, not all derivatives are insured against counter-party risk.", "title": "Usage" }, { "paragraph_id": 18, "text": "From another perspective, the farmer and the miller both reduce a risk and acquire a risk when they sign the futures contract: the farmer reduces the risk that the price of wheat will fall below the price specified in the contract and acquires the risk that the price of wheat will rise above the price specified in the contract (thereby losing additional income that he could have earned). The miller, on the other hand, acquires the risk that the price of wheat will fall below the price specified in the contract (thereby paying more in the future than he otherwise would have) and reduces the risk that the price of wheat will rise above the price specified in the contract. In this sense, one party is the insurer (risk taker) for one type of risk, and the counter-party is the insurer (risk taker) for another type of risk.", "title": "Usage" }, { "paragraph_id": 19, "text": "Hedging also occurs when an individual or institution buys an asset (such as a commodity, a bond that has coupon payments, a stock that pays dividends, and so on) and sells it using a futures contract. The individual or institution has access to the asset for a specified amount of time, and can then sell it in the future at a specified price according to the futures contract. Of course, this allows the individual or institution the benefit of holding the asset, while reducing the risk that the future selling price will deviate unexpectedly from the market's current assessment of the future value of the asset.", "title": "Usage" }, { "paragraph_id": 20, "text": "Derivatives trading of this kind may serve the financial interests of certain particular businesses. For example, a corporation borrows a large sum of money at a specific interest rate. The interest rate on the loan reprices every six months. The corporation is concerned that the rate of interest may be much higher in six months. The corporation could buy a forward rate agreement (FRA), which is a contract to pay a fixed rate of interest six months after purchases on a notional amount of money. If the interest rate after six months is above the contract rate, the seller will pay the difference to the corporation, or FRA buyer. If the rate is lower, the corporation will pay the difference to the seller. The purchase of the FRA serves to reduce the uncertainty concerning the rate increase and stabilize earnings.", "title": "Usage" }, { "paragraph_id": 21, "text": "Derivatives can be used to acquire risk, rather than to hedge against risk. Thus, some individuals and institutions will enter into a derivative contract to speculate on the value of the underlying asset. Speculators look to buy an asset in the future at a low price according to a derivative contract when the future market price is high, or to sell an asset in the future at a high price according to a derivative contract when the future market price is less.", "title": "Usage" }, { "paragraph_id": 22, "text": "Speculative trading in derivatives gained a great deal of notoriety in 1995 when Nick Leeson, a trader at Barings Bank, made poor and unauthorized investments in futures contracts. 
Through a combination of poor judgment, lack of oversight by the bank's management and regulators, and unfortunate events like the Kobe earthquake, Leeson incurred a $1.3 billion loss that bankrupted the centuries-old institution.", "title": "Usage" }, { "paragraph_id": 23, "text": "Individuals and institutions may also look for arbitrage opportunities, as when the current buying price of an asset falls below the price specified in a futures contract to sell the asset.", "title": "Usage" }, { "paragraph_id": 24, "text": "The true proportion of derivatives contracts used for hedging purposes is unknown, but it appears to be relatively small. Also, derivatives contracts account for only 3–6% of the median firms' total currency and interest rate exposure. Nonetheless, we know that many firms' derivatives activities have at least some speculative component for a variety of reasons.", "title": "Usage" }, { "paragraph_id": 25, "text": "In broad terms, there are two groups of derivative contracts, which are distinguished by the way they are traded in the market:", "title": "Types" }, { "paragraph_id": 26, "text": "Over-the-counter (OTC) derivatives are contracts that are traded (and privately negotiated) directly between two parties, without going through an exchange or other intermediary. Products such as swaps, forward rate agreements, exotic options – and other exotic derivatives – are almost always traded in this way. The OTC derivative market is the largest market for derivatives, and is largely unregulated with respect to disclosure of information between the parties, since the OTC market is made up of banks and other highly sophisticated parties, such as hedge funds. Reporting of OTC amounts is difficult because trades can occur in private, without activity being visible on any exchanges", "title": "Types" }, { "paragraph_id": 27, "text": "According to the Bank for International Settlements, who first surveyed OTC derivatives in 1995, reported that the \"gross market value, which represent the cost of replacing all open contracts at the prevailing market prices, ... increased by 74% since 2004, to $11 trillion at the end of June 2007 (BIS 2007:24).\" Positions in the OTC derivatives market increased to $516 trillion at the end of June 2007, 135% higher than the level recorded in 2004. The total outstanding notional amount is US$708 trillion (as of June 2011). Of this total notional amount, 67% are interest rate contracts, 8% are credit default swaps (CDS), 9% are foreign exchange contracts, 2% are commodity contracts, 1% are equity contracts, and 12% are other. Because OTC derivatives are not traded on an exchange, there is no central counter-party. Therefore, they are subject to counterparty risk, like an ordinary contract, since each counter-party relies on the other to perform.", "title": "Types" }, { "paragraph_id": 28, "text": "Exchange-traded derivatives (ETD) are those derivatives instruments that are traded via specialized derivatives exchanges or other exchanges. A derivatives exchange is a market where individuals trade standardized contracts that have been defined by the exchange. A derivatives exchange acts as an intermediary to all related transactions, and takes initial margin from both sides of the trade to act as a guarantee. 
The world's largest derivatives exchanges (by number of transactions) are the Korea Exchange (which lists KOSPI Index Futures & Options), Eurex (which lists a wide range of European products such as interest rate & index products), and CME Group (made up of the 2007 merger of the Chicago Mercantile Exchange and the Chicago Board of Trade and the 2008 acquisition of the New York Mercantile Exchange). According to BIS, the combined turnover in the world's derivatives exchanges totaled US$344 trillion during Q4 2005. By December 2007 the Bank for International Settlements reported that \"derivatives traded on exchanges surged 27% to a record $681 trillion.\"", "title": "Types" }, { "paragraph_id": 29, "text": "Inverse exchange-traded funds (IETFs) and leveraged exchange-traded funds (LETFs) are two special types of exchange traded funds (ETFs) that are available to common traders and investors on major exchanges like the NYSE and Nasdaq. To maintain these products' net asset value, these funds' administrators must employ more sophisticated financial engineering methods than what's usually required for maintenance of traditional ETFs. These instruments must also be regularly rebalanced and re-indexed each day.", "title": "Types" }, { "paragraph_id": 30, "text": "Some of the common variants of derivative contracts are as follows:", "title": "Types" }, { "paragraph_id": 31, "text": "Some common examples of these derivatives are the following:", "title": "Types" }, { "paragraph_id": 32, "text": "A collateralized debt obligation (CDO) is a type of structured asset-backed security (ABS). An \"asset-backed security\" is used as an umbrella term for a type of security backed by a pool of assets—including collateralized debt obligations and mortgage-backed securities (MBS) (Example: \"The capital market in which asset-backed securities are issued and traded is composed of three main categories: ABS, MBS and CDOs\".)—and sometimes for a particular type of that security—one backed by consumer loans (example: \"As a rule of thumb, securitization issues backed by mortgages are called MBS, and securitization issues backed by debt obligations are called CDO, [and] Securitization issues backed by consumer-backed products—car loans, consumer loans and credit cards, among others—are called ABS.) Originally developed for the corporate debt markets, over time CDOs evolved to encompass the mortgage and mortgage-backed security (MBS) markets.", "title": "Types" }, { "paragraph_id": 33, "text": "Like other private-label securities backed by assets, a CDO can be thought of as a promise to pay investors in a prescribed sequence, based on the cash flow the CDO collects from the pool of bonds or other assets it owns. The CDO is \"sliced\" into \"tranches\", which \"catch\" the cash flow of interest and principal payments in sequence based on seniority. If some loans default and the cash collected by the CDO is insufficient to pay all of its investors, those in the lowest, most \"junior\" tranches suffer losses first. The last to lose payment from default are the safest, most senior tranches. Consequently, coupon payments (and interest rates) vary by tranche with the safest/most senior tranches paying the lowest and the lowest tranches paying the highest rates to compensate for higher default risk. 
As an example, a CDO might issue the following tranches in order of safeness: Senior AAA (sometimes known as \"super senior\"); Junior AAA; AA; A; BBB; Residual.", "title": "Types" }, { "paragraph_id": 34, "text": "Separate special-purpose entities—rather than the parent investment bank—issue the CDOs and pay interest to investors. As CDOs developed, some sponsors repackaged tranches into yet another iteration called \"CDO-Squared\" or the \"CDOs of CDOs\". In the early 2000s, CDOs were generally diversified, but by 2006–2007—when the CDO market grew to hundreds of billions of dollars—this changed. CDO collateral became dominated not by loans, but by lower level (BBB or A) tranches recycled from other asset-backed securities, whose assets were usually non-prime mortgages. These CDOs have been called \"the engine that powered the mortgage supply chain\" for nonprime mortgages, and are credited with giving lenders greater incentive to make non-prime loans leading up to the 2007-9 subprime mortgage crisis.", "title": "Types" }, { "paragraph_id": 35, "text": "A credit default swap (CDS) is a financial swap agreement that the seller of the CDS will compensate the buyer (the creditor of the reference loan) in the event of a loan default (by the debtor) or other credit event. The buyer of the CDS makes a series of payments (the CDS \"fee\" or \"spread\") to the seller and, in exchange, receives a payoff if the loan defaults. It was invented by Blythe Masters from JP Morgan in 1994. In the event of default the buyer of the CDS receives compensation (usually the face value of the loan), and the seller of the CDS takes possession of the defaulted loan. However, anyone with sufficient collateral to trade with a bank or hedge fund can purchase a CDS, even buyers who do not hold the loan instrument and who have no direct insurable interest in the loan (these are called \"naked\" CDSs). If there are more CDS contracts outstanding than bonds in existence, a protocol exists to hold a credit event auction; the payment received is usually substantially less than the face value of the loan. Credit default swaps have existed since the early 1990s, and increased in use after 2003. By the end of 2007, the outstanding CDS amount was $62.2 trillion, falling to $26.3 trillion by mid-year 2010 but reportedly $25.5 trillion in early 2012. CDSs are not traded on an exchange and there is no required reporting of transactions to a government agency. During the 2007–2010 financial crisis the lack of transparency in this large market became a concern to regulators as it could pose a systemic risk.", "title": "Types" }, { "paragraph_id": 36, "text": "In March 2010, the [DTCC] Trade Information Warehouse announced it would give regulators greater access to its credit default swaps database. CDS data can be used by financial professionals, regulators, and the media to monitor how the market views credit risk of any entity on which a CDS is available, which can be compared to that provided by credit rating agencies. U.S. courts may soon be following suit. Most CDSs are documented using standard forms drafted by the International Swaps and Derivatives Association (ISDA), although there are many variants. In addition to the basic, single-name swaps, there are basket default swaps (BDSs), index CDSs, funded CDSs (also called credit-linked notes), as well as loan-only credit default swaps (LCDS). 
In addition to corporations and governments, the reference entity can include a special-purpose vehicle issuing asset-backed securities. Some claim that derivatives such as CDS are potentially dangerous in that they combine priority in bankruptcy with a lack of transparency. A CDS can be unsecured (without collateral) and be at higher risk for a default.", "title": "Types" }, { "paragraph_id": 37, "text": "In finance, a forward contract or simply a forward is a non-standardized contract between two parties to buy or to sell an asset at a specified future time at an amount agreed upon today, making it a type of derivative instrument. This is in contrast to a spot contract, which is an agreement to buy or sell an asset on its spot date, which may vary depending on the instrument; for example, most FX contracts have a spot date two business days from today. The party agreeing to buy the underlying asset in the future assumes a long position, and the party agreeing to sell the asset in the future assumes a short position. The price agreed upon is called the delivery price, which is equal to the forward price at the time the contract is entered into. The price of the underlying instrument, in whatever form, is paid before control of the instrument changes. This is one of the many forms of buy/sell orders where the time and date of trade are not the same as the value date, when the securities themselves are exchanged.", "title": "Types" }, { "paragraph_id": 38, "text": "The forward price of such a contract is commonly contrasted with the spot price, which is the price at which the asset changes hands on the spot date. The difference between the spot and the forward price is the forward premium or forward discount, generally considered in the form of a profit, or loss, by the purchasing party. Forwards, like other derivative securities, can be used to hedge risk (typically currency or exchange rate risk), as a means of speculation, or to allow a party to take advantage of a quality of the underlying instrument which is time-sensitive.", "title": "Types" }, { "paragraph_id": 39, "text": "A closely related contract is a futures contract; they differ in certain respects. Forward contracts are very similar to futures contracts, except they are not exchange-traded, or defined on standardized assets. Forwards also typically have no interim partial settlements or \"true-ups\" in margin requirements like futures, so the parties do not exchange additional property to secure the party at gain, and the entire unrealized gain or loss builds up while the contract is open. However, being traded over the counter (OTC), forward contract specifications can be customized and may include mark-to-market and daily margin calls. Hence, a forward contract arrangement might call for the loss party to pledge collateral or additional collateral to better secure the party at gain. 
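As a minimal sketch of the forward mechanics just described (all prices hypothetical): the long side's payoff at delivery is the spot price then prevailing minus the agreed delivery price, and the forward premium or discount is the gap between the current forward and spot prices.

```python
# Minimal sketch of forward-contract payoffs and the forward premium.
# Prices are hypothetical, for illustration only.

def forward_payoff(spot_at_delivery, delivery_price, position="long"):
    """Payoff at maturity: the long gains when spot ends above the delivery price."""
    payoff = spot_at_delivery - delivery_price
    return payoff if position == "long" else -payoff

spot_today, forward_price = 100.0, 104.0
print("forward premium:", forward_price - spot_today)                       # 4.0
print("long payoff  @ 110:", forward_payoff(110.0, forward_price))          # +6.0
print("short payoff @ 110:", forward_payoff(110.0, forward_price, "short")) # -6.0
```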
In other words, the terms of the forward contract will determine the collateral calls based upon certain \"trigger\" events relevant to a particular counterparty, such as, among other things, credit ratings, value of assets under management, or redemptions over a specific time frame (e.g., quarterly, annually).", "title": "Types" }, { "paragraph_id": 40, "text": "In finance, a 'futures contract' (more colloquially, futures) is a standardized contract between two parties to buy or sell a specified asset of standardized quantity and quality for a price agreed upon today (the futures price) with delivery and payment occurring at a specified future date, the delivery date, making it a derivative product (i.e. a financial product that is derived from an underlying asset). The contracts are negotiated at a futures exchange, which acts as an intermediary between buyer and seller. The party agreeing to buy the underlying asset in the future, the \"buyer\" of the contract, is said to be \"long\", and the party agreeing to sell the asset in the future, the \"seller\" of the contract, is said to be \"short\".", "title": "Types" }, { "paragraph_id": 41, "text": "While the futures contract specifies a trade taking place in the future, the purpose of the futures exchange is to act as intermediary and mitigate the risk of default by either party in the intervening period. For this reason, the futures exchange requires both parties to put up an initial amount of cash (performance bond), the margin. Margins, sometimes set as a percentage of the value of the futures contract, must be maintained in proportion at all times during the life of the contract to underpin this mitigation, because the price of the contract will vary in keeping with supply and demand and will change daily, so that one party or the other will theoretically be making or losing money. To mitigate risk and the possibility of default by either party, the product is marked to market on a daily basis, whereby the difference between the prior agreed-upon price and the actual daily futures price is settled on a daily basis. This is sometimes known as the variation margin, whereby the futures exchange will draw money out of the losing party's margin account and put it into the other party's, thus ensuring that the correct daily loss or profit is reflected in the respective account. If the margin account goes below a certain value set by the exchange, then a margin call is made and the account owner must replenish the margin account. This process is known as \"marking to market\". Thus on the delivery date, the amount exchanged is not the specified price on the contract but the spot value, since any gain or loss has already been previously settled by marking to market.", "title": "Types" }, { "paragraph_id": 42, "text": "A closely related contract is a forward contract. A forward is like a futures in that it specifies the exchange of goods for a specified price at a specified future date. However, a forward is not traded on an exchange and thus does not have the interim partial payments due to marking to market. Nor is the contract standardized, as on the exchange. Unlike an option, both parties of a futures contract must fulfill the contract on the delivery date. 
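A minimal sketch of the daily marking to market described above, for a long position; the contract size, margin levels, settlement prices, and the policy of restoring the account to initial margin are hypothetical assumptions.

```python
# Minimal sketch of daily marking to market on a long futures position.
# Prices, margin levels, and contract size are hypothetical.

initial_margin = 1_000.0
maintenance_margin = 750.0
contract_size = 100                      # units of the underlying per contract
prices = [50.0, 49.0, 47.0, 48.0]        # agreed price, then daily settlements

account = initial_margin
for prev, settle in zip(prices, prices[1:]):
    variation = (settle - prev) * contract_size   # variation margin, daily
    account += variation
    # If the account falls below maintenance, a margin call tops it
    # back up (here, to the initial margin level).
    call = max(0.0, initial_margin - account) if account < maintenance_margin else 0.0
    account += call
    print(f"settle {settle:5.2f}: variation {variation:+7.1f}, "
          f"margin call {call:6.1f}, account {account:7.1f}")
```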
The seller delivers the underlying asset to the buyer, or, if it is a cash-settled futures contract, then cash is transferred from the futures trader who sustained a loss to the one who made a profit. To exit the commitment prior to the settlement date, the holder of a futures position can close out its contract obligations by taking the opposite position on another futures contract on the same asset and settlement date. The difference in futures prices is then a profit or loss.", "title": "Types" }, { "paragraph_id": 43, "text": "A mortgage-backed security (MBS) is an asset-backed security that is secured by a mortgage, or more commonly a collection (\"pool\") of sometimes hundreds of mortgages. The mortgages are sold to an entity (a government agency or investment bank) that \"securitizes\", or packages, the loans together into a security that can be sold to investors. The mortgages of an MBS may be residential or commercial, depending on whether it is an Agency MBS or a Non-Agency MBS; in the United States they may be issued by structures set up by government-sponsored enterprises like Fannie Mae or Freddie Mac, or they can be \"private-label\", issued by structures set up by investment banks. The structure of the MBS may be known as \"pass-through\", where the interest and principal payments from the borrower or homebuyer pass through it to the MBS holder, or it may be more complex, made up of a pool of other MBSs. Other types of MBS include collateralized mortgage obligations (CMOs, often structured as real estate mortgage investment conduits) and collateralized debt obligations (CDOs).", "title": "Types" }, { "paragraph_id": 44, "text": "The shares of subprime MBSs issued by various structures, such as CMOs, are not identical but rather issued as tranches (French for \"slices\"), each with a different level of priority in the debt repayment stream, giving them different levels of risk and reward. Tranches—especially the lower-priority, higher-interest tranches—of an MBS are/were often further repackaged and resold as collateralized debt obligations. These subprime MBSs issued by investment banks were a major issue in the subprime mortgage crisis of 2006–2008. The total face value of an MBS decreases over time, because, like mortgages and unlike bonds and most other fixed-income securities, the principal in an MBS is not paid back as a single payment to the bond holder at maturity but rather is paid along with the interest in each periodic payment (monthly, quarterly, etc.). This decrease in face value is measured by the MBS's \"factor\", the percentage of the original \"face\" that remains to be repaid.", "title": "Types" }, { "paragraph_id": 45, "text": "In finance, an option is a contract which gives the buyer (the owner) the right, but not the obligation, to buy or sell an underlying asset or instrument at a specified strike price on or before a specified date. The seller has the corresponding obligation to fulfill the transaction—that is to sell or buy—if the buyer (owner) \"exercises\" the option. The buyer pays a premium to the seller for this right. An option that conveys to the owner the right to buy something at a certain price is a \"call option\"; an option that conveys the right of the owner to sell something at a certain price is a \"put option\". Both are commonly traded, but for clarity, the call option is more frequently discussed. Options valuation is a topic of ongoing research in academic and practical finance. 
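A minimal sketch of the payoffs of the call and put options just defined, together with a textbook Black–Scholes valuation of a European call (the model discussed in the following paragraphs); all numerical inputs are hypothetical.

```python
# Minimal sketch of option payoffs at expiry and a Black-Scholes price
# for a European call (no dividends). Inputs are hypothetical.
from math import log, sqrt, exp, erf

def call_payoff(spot, strike):           # right to buy at the strike
    return max(spot - strike, 0.0)

def put_payoff(spot, strike):            # right to sell at the strike
    return max(strike - spot, 0.0)

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """European call value under the Black-Scholes model."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print(call_payoff(110, 100), put_payoff(90, 100))              # 10.0 10.0
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 2))  # ~10.45
```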
In basic terms, the value of an option is commonly decomposed into two parts: intrinsic value and time value.", "title": "Types" }, { "paragraph_id": 46, "text": "Although options valuation has been studied since the 19th century, the contemporary approach is based on the Black–Scholes model, which was first published in 1973.", "title": "Types" }, { "paragraph_id": 47, "text": "Options contracts have been known for many centuries. However, both trading activity and academic interest increased when, as from 1973, options were issued with standardized terms and traded through a guaranteed clearing house at the Chicago Board Options Exchange. Today, many options are created in a standardized form and traded through clearing houses on regulated options exchanges, while other over-the-counter options are written as bilateral, customized contracts between a single buyer and seller, one or both of which may be a dealer or market-maker. Options are part of a larger class of financial instruments known as derivative products or simply derivatives.", "title": "Types" }, { "paragraph_id": 48, "text": "A swap is a derivative in which two counterparties exchange cash flows of one party's financial instrument for those of the other party's financial instrument. The benefits in question depend on the type of financial instruments involved. For example, in the case of a swap involving two bonds, the benefits in question can be the periodic interest (coupon) payments associated with such bonds. Specifically, two counterparties agree to exchange one stream of cash flows for another stream. These streams are called the swap's \"legs\". The swap agreement defines the dates when the cash flows are to be paid and the way they are accrued and calculated. Usually at the time when the contract is initiated, at least one of these series of cash flows is determined by an uncertain variable such as a floating interest rate, foreign exchange rate, equity price, or commodity price.", "title": "Types" }, { "paragraph_id": 49, "text": "The cash flows are calculated over a notional principal amount. Contrary to a future, a forward or an option, the notional amount is usually not exchanged between counterparties. Consequently, swaps can be in cash or collateral. Swaps can be used to hedge certain risks such as interest rate risk, or to speculate on changes in the expected direction of underlying prices.", "title": "Types" }, { "paragraph_id": 50, "text": "Swaps were first introduced to the public in 1981 when IBM and the World Bank entered into a swap agreement. Today, swaps are among the most heavily traded financial contracts in the world: the total amount of interest rate and currency swaps outstanding was more than $348 trillion in 2010, according to the Bank for International Settlements (BIS). 
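A minimal sketch of the netted cash flows on the two legs of a plain-vanilla fixed-for-floating interest rate swap, as described above; the notional and all rates are hypothetical assumptions.

```python
# Minimal sketch of a plain-vanilla fixed-for-floating interest rate swap:
# each period the two "legs" are netted over a notional that is never
# itself exchanged. Notional and rates are hypothetical.

notional = 1_000_000.0
fixed_rate = 0.03
floating_rates = [0.025, 0.028, 0.032, 0.035]   # set at each reset date

# Net cash flow to the fixed-rate payer (receives floating, pays fixed).
for period, float_rate in enumerate(floating_rates, start=1):
    net = (float_rate - fixed_rate) * notional
    print(f"period {period}: floating {float_rate:.3%} vs fixed "
          f"{fixed_rate:.3%} -> net to fixed payer {net:+,.0f}")
```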
The five generic types of swaps, in order of their quantitative importance, are: interest rate swaps, currency swaps, credit swaps, commodity swaps and equity swaps (there are many other types).", "title": "Types" }, { "paragraph_id": 51, "text": "Some of the salient economic functions of the derivative market include:", "title": "Economic function of the derivative market" }, { "paragraph_id": 52, "text": "In a nutshell, there is a substantial increase in savings and investment in the long run due to augmented activities by derivative market participants.", "title": "Economic function of the derivative market" }, { "paragraph_id": 53, "text": "Two common measures of value are:", "title": "Valuation" }, { "paragraph_id": 54, "text": "For exchange-traded derivatives, market price is usually transparent (often published in real time by the exchange, based on all the current bids and offers placed on that particular contract at any one time). Complications can arise with OTC or floor-traded contracts though, as trading is handled manually, making it difficult to automatically broadcast prices. In particular with OTC contracts, there is no central exchange to collate and disseminate prices.", "title": "Valuation" }, { "paragraph_id": 55, "text": "The arbitrage-free price for a derivatives contract can be complex, and there are many different variables to consider. Arbitrage-free pricing is a central topic of financial mathematics. For futures/forwards the arbitrage-free price is relatively straightforward, involving the price of the underlying together with the cost of carry (income received less interest costs), although there can be complexities.", "title": "Valuation" }, { "paragraph_id": 56, "text": "However, for options and more complex derivatives, pricing involves developing a complex pricing model: understanding the stochastic process of the price of the underlying asset is often crucial. A key equation for the theoretical valuation of options is the Black–Scholes formula, which is based on the assumption that the cash flows from a European stock option can be replicated by a continuous buying and selling strategy using only the stock. A simplified version of this valuation technique is the binomial options model.", "title": "Valuation" }, { "paragraph_id": 57, "text": "OTC represents the biggest challenge in using models to price derivatives. Since these contracts are not publicly traded, no market price is available to validate the theoretical valuation. Most of the model's results are input-dependent (meaning the final price depends heavily on how the pricing inputs are derived). Therefore, it is common that OTC derivatives are priced by independent agents that both counterparties involved in the deal designate upfront (when signing the contract).", "title": "Valuation" }, { "paragraph_id": 58, "text": "Derivatives are often subject to the following criticisms; particularly since the financial crisis of 2007–2008, the discipline of risk management has developed, attempting to address the below and other risks – see Financial risk management § Banking.", "title": "Risks" }, { "paragraph_id": 59, "text": "According to Raghuram Rajan, a former chief economist of the International Monetary Fund (IMF), \"... it may well be that the managers of these firms [investment funds] have figured out the correlations between the various instruments they hold and believe they are hedged. 
Yet as Chan and others (2005) point out, the lessons of summer 1998 following the default on Russian government debt is that correlations that are zero or negative in normal times can turn overnight to one – a phenomenon they term \"phase lock-in\". A hedged position \"can become unhedged at the worst times, inflicting substantial losses on those who mistakenly believe they are protected\". See the FRTB framework, which seeks to address this to some extent.", "title": "Risks" }, { "paragraph_id": 60, "text": "The use of derivatives can result in large losses because of the use of leverage, or borrowing. Derivatives allow investors to earn large returns from small movements in the underlying asset's price. However, investors could lose large amounts if the price of the underlying moves against them significantly. There have been several instances of massive losses in derivative markets, such as the following:", "title": "Risks" }, { "paragraph_id": 61, "text": "Derivatives typically have a large notional value. As such, there is the danger that their use could result in losses for which the investor would be unable to compensate. The possibility that this could lead to a chain reaction ensuing in an economic crisis was pointed out by famed investor Warren Buffett in Berkshire Hathaway's 2002 annual report. Buffett called them 'financial weapons of mass destruction.' A potential problem with derivatives is that they comprise an increasingly larger notional amount of assets, which may lead to distortions in the underlying capital and equities markets themselves. Investors begin to look at the derivatives markets to make a decision to buy or sell securities, and so what was originally meant to be a market to transfer risk now becomes a leading indicator. (See Berkshire Hathaway Annual Report for 2002.)", "title": "Risks" }, { "paragraph_id": 62, "text": "Some derivatives (especially swaps) expose investors to counterparty risk, or risk arising from the other party in a financial transaction. Counterparty risk results from the differences in the current price versus the expected future settlement price. Different types of derivatives have different levels of counterparty risk. For example, standardized stock options by law require the party at risk to have a certain amount deposited with the exchange, showing that they can pay for any losses; banks that help businesses swap variable for fixed rates on loans may do credit checks on both parties. However, in private agreements between two companies, for example, there may not be benchmarks for performing due diligence and risk analysis.", "title": "Risks" }, { "paragraph_id": 63, "text": "Under US law and the laws of most other developed countries, derivatives have special legal exemptions that make them a particularly attractive legal form to extend credit. The strong creditor protections afforded to derivatives counterparties, in combination with their complexity and lack of transparency, however, can cause capital markets to underprice credit risk. This can contribute to credit booms, and increase systemic risks. 
Indeed, the use of derivatives to conceal credit risk from third parties while protecting derivative counterparties contributed to the financial crisis of 2008 in the United States.", "title": "Financial reform and government regulation" }, { "paragraph_id": 64, "text": "In the context of a 2010 examination of the ICE Trust, an industry self-regulatory body, Gary Gensler, the chairman of the Commodity Futures Trading Commission which regulates most derivatives, was quoted as saying that the derivatives marketplace as it functions now \"adds up to higher costs to all Americans\". More oversight of the banks in this market is needed, he also said. Additionally, the report said, \"[t]he Department of Justice is looking into derivatives, too. The department's antitrust unit is actively investigating 'the possibility of anticompetitive practices in the credit derivatives clearing, trading and information services industries', according to a department spokeswoman.\"", "title": "Financial reform and government regulation" }, { "paragraph_id": 65, "text": "For legislators and committees responsible for financial reform related to derivatives in the United States and elsewhere, distinguishing between hedging and speculative derivatives activities has been a nontrivial challenge. The distinction is critical because regulation should help to isolate and curtail speculation with derivatives, especially for \"systemically significant\" institutions whose default could be large enough to threaten the entire financial system. At the same time, the legislation should allow for responsible parties to hedge risk without unduly tying up working capital as collateral that firms may better employ elsewhere in their operations and investment. In this regard, it is important to distinguish between financial (e.g. banks) and non-financial end-users of derivatives (e.g. real estate development companies) because these firms' derivatives usage is inherently different. More importantly, the reasonable collateral that secures these different counterparties can be very different. The distinction between these firms is not always straightforward (e.g. hedge funds or even some private equity firms do not neatly fit either category). Finally, even financial users must be differentiated, as 'large' banks may be classified as \"systemically significant\", and their derivatives activities must be more tightly monitored and restricted than those of smaller, local and regional banks.", "title": "Financial reform and government regulation" }, { "paragraph_id": 66, "text": "Over-the-counter dealing will be less common as the Dodd–Frank Wall Street Reform and Consumer Protection Act comes into effect. The law mandated the clearing of certain swaps at registered exchanges and imposed various restrictions on derivatives. To implement Dodd-Frank, the CFTC developed new rules in at least 30 areas. The Commission determines which swaps are subject to mandatory clearing and whether a derivatives exchange is eligible to clear a certain type of swap contract.", "title": "Financial reform and government regulation" }, { "paragraph_id": 67, "text": "Nonetheless, the above and other challenges of the rule-making process have delayed full enactment of aspects of the legislation relating to derivatives. 
The challenges are further complicated by the necessity to orchestrate globalized financial reform among the nations that comprise the world's major financial markets, a primary responsibility of the Financial Stability Board whose progress is ongoing.", "title": "Financial reform and government regulation" }, { "paragraph_id": 68, "text": "In the U.S., by February 2012 the combined effort of the SEC and CFTC had produced over 70 proposed and final derivatives rules. However, both of them had delayed adoption of a number of derivatives regulations because of the burden of other rulemaking, litigation and opposition to the rules, and many core definitions (such as the terms \"swap\", \"security-based swap\", \"swap dealer\", \"security-based swap dealer\", \"major swap participant\" and \"major security-based swap participant\") had still not been adopted. SEC Chairman Mary Schapiro opined: \"At the end of the day, it probably does not make sense to harmonize everything [between the SEC and CFTC rules] because some of these products are quite different and certainly the market structures are quite different.\" On February 11, 2015, the Securities and Exchange Commission (SEC) released two final rules toward establishing a reporting and public disclosure framework for security-based swap transaction data. The two rules are not completely harmonized with the CFTC's requirements.", "title": "Financial reform and government regulation" }, { "paragraph_id": 69, "text": "In November 2012, the SEC and regulators from Australia, Brazil, the European Union, Hong Kong, Japan, Ontario, Quebec, Singapore, and Switzerland met to discuss reforming the OTC derivatives market, as had been agreed by leaders at the G-20 Pittsburgh summit in September 2009. In December 2012, they released a joint statement to the effect that they recognized that the market is a global one and \"firmly support the adoption and enforcement of robust and consistent standards in and across jurisdictions\", with the goals of mitigating risk, improving transparency, protecting against market abuse, preventing regulatory gaps, reducing the potential for arbitrage opportunities, and fostering a level playing field for market participants. They also agreed on the need to reduce regulatory uncertainty and provide market participants with sufficient clarity on laws and regulations by avoiding, to the extent possible, the application of conflicting rules to the same entities and transactions, and minimizing the application of inconsistent and duplicative rules. At the same time, they noted that \"complete harmonization – perfect alignment of rules across jurisdictions\" would be difficult, because of jurisdictions' differences in law, policy, markets, implementation timing, and legislative and regulatory processes.", "title": "Financial reform and government regulation" }, { "paragraph_id": 70, "text": "On December 20, 2013 the CFTC provided information on its swaps regulation \"comparability\" determinations. The release addressed the CFTC's cross-border compliance exceptions. 
Specifically it addressed which entity-level and in some cases transaction-level requirements in six jurisdictions (Australia, Canada, the European Union, Hong Kong, Japan, and Switzerland) it found comparable to its own rules, thus permitting non-US swap dealers, major swap participants, and the foreign branches of US swap dealers and major swap participants in these jurisdictions to comply with local rules in lieu of Commission rules.", "title": "Financial reform and government regulation" }, { "paragraph_id": 71, "text": "Mandatory reporting regulations are being finalized in a number of countries, such as the Dodd–Frank Act in the US, the European Market Infrastructure Regulation (EMIR) in Europe, as well as regulations in Hong Kong, Japan, Singapore, Canada, and other countries. The OTC Derivatives Regulators Forum (ODRF), a group of over 40 worldwide regulators, provided trade repositories with a set of guidelines regarding data access to regulators, and the Financial Stability Board and CPSS-IOSCO also made recommendations with regard to reporting.", "title": "Financial reform and government regulation" }, { "paragraph_id": 72, "text": "DTCC, through its \"Global Trade Repository\" (GTR) service, manages global trade repositories for interest rate, commodity, foreign exchange, credit, and equity derivatives. It makes global trade reports to the CFTC in the U.S., and plans to do the same for ESMA in Europe and for regulators in Hong Kong, Japan, and Singapore. It covers cleared and uncleared OTC derivatives products, whether or not a trade is electronically processed or bespoke.", "title": "Financial reform and government regulation" } ]
In finance, a derivative is a contract that derives its value from the performance of an underlying entity. This underlying entity can be an asset, index, or interest rate, and is often simply called the underlying. Derivatives can be used for a number of purposes, including insuring against price movements (hedging), increasing exposure to price movements for speculation, or getting access to otherwise hard-to-trade assets or markets. Some of the more common derivatives include forwards, futures, options, swaps, and variations of these such as synthetic collateralized debt obligations and credit default swaps. Most derivatives are traded over-the-counter (off-exchange) or on an exchange such as the Chicago Mercantile Exchange, while most insurance contracts have developed into a separate industry. In the United States, after the financial crisis of 2007–2009, there has been increased pressure to move derivatives to trade on exchanges. Derivatives are one of the three main categories of financial instruments, the other two being equity and debt. The oldest example of a derivative in history, attested to by Aristotle, is thought to be a contract transaction of olives, entered into by ancient Greek philosopher Thales, who made a profit in the exchange. However, Aristotle did not define this arrangement as a derivative but as a monopoly. Bucket shops, outlawed in 1936 in the US, are a more recent historical example.
2002-01-16T18:44:26Z
2023-12-21T16:16:51Z
[ "Template:About", "Template:Unreferenced section", "Template:Main", "Template:Reflist", "Template:Cite SSRN", "Template:Cite news", "Template:Dead link", "Template:ISBN", "Template:Citation-attribution", "Template:Authority control", "Template:More references", "Template:Clarify", "Template:Unreliable source?", "Template:Cite report", "Template:Cite web", "Template:Cite magazine", "Template:Derivatives market", "Template:Citation", "Template:Use mdy dates", "Template:Finance sidebar", "Template:Cite journal", "Template:Webarchive", "Template:Citation needed", "Template:See also", "Template:More", "Template:Cite book", "Template:Cite arXiv", "Template:Short description", "Template:By whom", "Template:Colbegin", "Template:Slink", "Template:Colend", "Template:Clear" ]
https://en.wikipedia.org/wiki/Derivative_(finance)
9,136
Disney (disambiguation)
Disney most commonly refers to: Disney may also refer to:
[ { "paragraph_id": 0, "text": "Disney most commonly refers to:", "title": "" }, { "paragraph_id": 1, "text": "Disney may also refer to:", "title": "" } ]
Disney most commonly refers to: The Walt Disney Company, an American diversified multinational mass media and entertainment conglomerate Walt Disney (1901–1966), founder of the Walt Disney Company Disney may also refer to:
2002-01-17T05:00:46Z
2023-12-14T20:59:30Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Disney_(disambiguation)
9,137
Divine right of kings
In European Christianity, the divine right of kings, divine right, or God's mandation, is a political and religious doctrine of the political legitimacy of a monarchy. It is also known as the divine-right theory of kingship. The doctrine asserts that a monarch is not accountable to any earthly authority (such as a parliament or pope) because their right to rule is derived from divine authority. Thus, the monarch is not subject to the will of the people, of the aristocracy, or of any other estate of the realm. It follows that only divine authority can judge a monarch, and that any attempt to depose, dethrone, resist or restrict their powers runs contrary to God's will and may constitute a sacrilegious act. It does not imply that their power is absolute. In its full-fledged form, the Divine Right of Kings is associated with Henry VIII of England (and the Acts of Supremacy), James VI and I of Scotland and England, Louis XIV of France, and their successors. In contrast, conceptions of rights developed during the Age of Enlightenment – for example during the American and French Revolutions – often emphasised liberty and equality as being among the most important of rights. Divine right has been a key element of the self-legitimisation of many absolute monarchies, connected with their authority and right to rule. Related but distinct notions include Caesaropapism (the complete subordination of bishops etc. to the secular power), Supremacy (the legal sovereignty of the civil laws over the laws of the Church), Absolutism (a form of monarchical or despotic power that is unrestrained by all other institutions, such as churches, legislatures, or social elites) and Tyranny (absolute rule unrestrained even by moral law). Historically, many notions of rights have been authoritarian and hierarchical, with different people granted different rights and some having more rights than others. For instance, the right of a father to receive respect from his son did not indicate a right for the son to receive a return from that respect. Analogously, the divine right of kings, which permitted absolute power over subjects, provided few rights for the subjects themselves. It is sometimes signified by the phrase by the Grace of God or its Latin equivalent, Dei Gratia, which has historically been attached to the titles of certain reigning monarchs. Note, however, that such accountability only to God does not per se make the monarch a sacred king. Khvarenah (also spelled khwarenah or xwarra(h): Avestan: 𐬓𐬀𐬭𐬆𐬥𐬀𐬵 xᵛarənah; Persian: فرّ, romanized: far) is an Iranian and Zoroastrian concept, literally meaning glory, that refers to the divine right of kings. This may stem from early Mesopotamian culture, where kings were often regarded as deities after their death. Shulgi of Ur was among the first Mesopotamian rulers to declare himself to be divine. In the Iranian view, kings never rule unless Khvarenah is with them, and they never fall unless Khvarenah leaves them. For example, according to the Kar-namag of Ardashir, when Ardashir I of Persia and Artabanus V of Parthia fought for the throne of Iran, on the road Artabanus and his contingent are overtaken by an enormous ram, which is also following Ardashir. Artabanus's religious advisors explain to him that the ram is the manifestation of the khwarrah of the ancient Iranian kings, which is leaving Artabanus to join Ardashir. 
The Imperial cult of ancient Rome identified Roman emperors and some members of their families with the "divinely sanctioned" authority (auctoritas) of the Roman State. The official offer of cultus to a living emperor acknowledged his office and rule as divinely approved and constitutional: his Principate should therefore demonstrate pious respect for traditional Republican deities and mores. Many of the rites, practices and status distinctions that characterized the cult to emperors were perpetuated in the theology and politics of the Christianised Empire. While the earliest references to kingship in Israel proclaim: "When you come to the land that the Lord your God is giving you, and you possess it and dwell in it and then say, 'I will set a king over me, like all the nations that are around me,' you may indeed set a king over you whom the Lord your God will choose. One from among your brothers you shall set as king over you. You may not put a foreigner over you, who is not your brother." (Deut 17:14-15), significant debate on the legitimacy of kingship persisted in Rabbinic Judaism until Maimonides, though many mainstream currents continue to reject the notion. The controversy is highlighted by the instructions to the Israelites in the above-quoted passage, as well as the passages in 1 Samuel 8 and 12, concerning the dispute over kingship; and Perashat Shoftim. It is from 1 Samuel 8 that the Jews receive mishpat ha-melech, the ius regium, or the law of kingship, and from this passage that Maimonides finally concludes that Judaism supports the institution of monarchy, stating that the Israelites had been given three commandments upon entering the land of Israel - to designate a king for themselves, to wipe out the memory of Amalek, and to build the Temple. The debate has primarily centred around the problem of being told to "designate" a king, which some rabbinical sources have argued is an invocation against a divine right of kings, and a call to elect a leader, in opposition to a notion of a divine right. Other rabbinical arguments have put forward an idea that it is through the collective decision of the people that God's will is made manifest, and that the king does therefore have a divine right - once appointed by the nation, he is God's emissary. Jewish law requires one to recite a special blessing upon seeing a monarch: "Blessed are You, L‑rd our G‑d, King of the universe, Who has given from His glory to flesh and blood". With the rise of firearms, nation-states and the Protestant Reformation in the late 16th century, the theory of divine right justified the king's absolute authority in both political and spiritual matters. Henry VIII of England declared himself the Supreme Head of the Church of England and exerted the power of the throne more than any of his predecessors. As a political theory, it was further developed by James VI of Scotland (1567–1625) and came to the fore in England under his reign as James I of England (1603–1625). Louis XIV of France (1643–1715) strongly promoted the theory as well. Historian J.P. Sommerville stresses that the theory was polemical: "Absolutists magnified royal power. They did this to protect the state against anarchy and to refute the ideas of resistance theorists", those being in Britain Catholic and Presbyterian theorists. 
The concept of divine right incorporates, but exaggerates, the ancient Christian concept of "royal God-given rights", which teaches that "the right to rule is anointed by God", although this idea is found in many other cultures, including Aryan and Egyptian traditions. In pagan religions, the king was often seen as a god incarnate and so was an unchallengeable despot. The ancient Roman Catholic tradition overcame this idea with the doctrine of the two swords and so achieved, for the very first time, a balanced constitution for states. The advent of Protestantism saw something of a return to the idea of a mere unchallengeable despot. The Christian notion of a divine right of kings is traced to a story found in 1 Samuel, where the prophet Samuel anoints Saul and then David as Messiah ("anointed one")—king over Israel. In the Jewish traditions, the lack of a divine leadership represented by an anointed king [beginning shortly after the death of Joshua] left the people of Israel vulnerable, and the promise of the "promised land" was not fully fulfilled until a king was anointed by a prophet on behalf of God. The effect of anointing was seen to be that the monarch became inviolable, so that even when Saul sought to kill David, David would not raise his hand against him because "he was the Lord's anointed". Raising a hand to a king was therefore considered to be as sacrilegious as raising a hand against God and stood on equal footing with blasphemy. In essence, the king stood in place of God and was never to be challenged "without the challenger being accused of blasphemy" - except by a prophet, which under Christianity was replaced by the church. Outside of Christianity, kings were often seen as ruling with the backing of heavenly powers. Although the later Roman Empire had developed the European concept of a divine regent in Late Antiquity, Adomnan of Iona provides one of the earliest written examples of a Western medieval concept of kings ruling with divine right. He wrote of the Irish King Diarmait mac Cerbaill's assassination and claimed that divine punishment fell on his assassin for the act of violating the monarch. Adomnan also recorded a story about Saint Columba supposedly being visited by an angel carrying a glass book, who told him to ordain Aedan mac Gabrain as King of Dal Riata. Columba initially refused, and the angel answered by whipping him and demanding that he perform the ordination because God had commanded it. The same angel visited Columba on three successive nights. Columba finally agreed, and Aedan came to receive ordination. At the ordination, Columba told Aedan that so long as he obeyed God's laws, then none of his enemies would prevail against him, but the moment he broke them, this protection would end, and the same whip with which Columba had been struck would be turned against the king. Adomnan's writings most likely influenced other Irish writers, who in turn influenced continental ideas as well. Pepin the Short's coronation may have also come from the same influence. The Byzantine Empire can be seen as the progenitor of this concept (which began with Constantine I). This in turn inspired the Carolingian dynasty and the Holy Roman Emperors, whose lasting impact on Western and Central Europe further inspired all subsequent Western ideas of kingship. 
In the Middle Ages, the idea that God had granted certain earthly powers to the monarch, just as he had given spiritual authority and power to the church, especially to the Pope, was already a well-known concept long before later writers coined the term "divine right of kings" and employed it as a theory in political science. However, the dividing line for the authority and power was a subject of frequent contention: notably in England with the murder of Archbishop Thomas Becket (1170). For example, Richard I of England declared at his trial during the diet at Speyer in 1193: "I am born in a rank which recognizes no superior but God, to whom alone I am responsible for my actions", and it was Richard who first used the motto "Dieu et mon droit" ("God and my right"), which is still the motto of the Monarch of the United Kingdom. Thomas Aquinas condoned extra-legal tyrannicide in the worst of circumstances: When there is no recourse to a superior by whom judgment can be made about an invader, then he who slays a tyrant to liberate his fatherland is [to be] praised and receives a reward. On the other hand, Aquinas forbade the overthrow of any morally, Christianly and spiritually legitimate king by his subjects. The only human power capable of deposing the king was the pope. The reasoning was that if a subject may overthrow his superior for some bad law, who was to be the judge of whether the law was bad? If the subject could so judge his own superior, then all lawful superior authority could lawfully be overthrown by the arbitrary judgement of an inferior, and thus all law was under constant threat. According to John of Paris, kings had their jurisdictions and bishops (and the pope) had theirs, but kings derived their supreme, non-absolute temporal jurisdiction from popular consent. Towards the end of the Middle Ages, many philosophers, such as Nicholas of Cusa and Francisco Suárez, propounded similar theories. The Church was the final guarantor that Christian kings would follow the laws and constitutional traditions of their ancestors and the laws of God and of justice. Radical English theologian John Wycliffe's theory of Dominium meant that injuries inflicted on someone personally by a king should be borne by them submissively, a conventional idea, but that injuries by a king against God should be patiently resisted even to death; gravely sinful kings and popes forfeited their (divine) right to obedience and ownership, though the political order should be maintained. More aggressive versions of this were taken up by Lollards and Hussites. For Erasmus of Rotterdam it was the consent of the people which gave and took away "the purple", not an unchangeable divine mandate. In Roman Catholic jurisprudence, the monarch is always subject to natural and divine law, which are regarded as superior to the monarch. The possibility of monarchy declining morally, overturning natural law, and degenerating into a tyranny oppressive of the general welfare was answered theologically with the Catholic concept of the spiritual superiority of the Pope (there is no "Catholic concept of extra-legal tyrannicide", as some falsely suppose, the same being expressly condemned by St Thomas Aquinas in chapter 7 of his De Regno). Catholic thought justified limited submission to the monarchy by reference to the following: The divine right of kings, or divine-right theory of kingship, is a political and religious doctrine of royal and political legitimacy. 
It asserts that a monarch is subject to no earthly authority, deriving his right to rule directly from the will of God. The king is thus not subject to the will of his people, the aristocracy, or any other estate of the realm, including (in the view of some, especially in Protestant countries) the church. A weaker or more moderate form of this political theory does hold, however, that the king is subject to the church and the pope, although completely irreproachable in other ways; but according to this doctrine in its strong form, only God can judge an unjust king. The doctrine implies that any attempt to depose the king or to restrict his powers runs contrary to the will of God and may constitute a sacrilegious act. The Scots textbooks of the divine right of kings were written in 1597–1598 by James VI of Scotland. His Basilikon Doron, a manual on the powers of a king, was written to edify his four-year-old son Henry Frederick that a king "acknowledgeth himself ordained for his people, having received from God a burden of government, whereof he must be countable". The conception of ordination brought with it largely unspoken parallels with the Anglican and Catholic priesthood, but the overriding metaphor in James VI's Basilikon Doron was that of a father's relation to his children: "Just as no misconduct on the part of a father can free his children from obedience to the fifth commandment". After becoming James I of England, James also had printed his Defense of the Right of Kings in the face of English theories of inalienable popular and clerical rights. He based his theories in part on his understanding of the Bible, as noted by the following quote from a speech to parliament delivered in 1610 as James I of England: The state of monarchy is the supremest thing upon earth, for kings are not only God's lieutenants upon earth and sit upon God's throne, but even by God himself, they are called gods. There be three principal [comparisons] that illustrate the state of monarchy: one taken out of the word of God, and the two other out of the grounds of policy and philosophy. In the Scriptures, kings are called gods, and so their power after a certain relation compared to the Divine power. Kings are also compared to fathers of families; for a king is true parens patriae [parent of the country], the politic father of his people. And lastly, kings are compared to the head of this microcosm of the body of man. James's reference to "God's lieutenants" is apparently a reference to the text in Romans 13 where Paul refers to "God's ministers". (1) Let every soul be subject unto the higher powers. For there is no power but of God: the powers that be are ordained of God. (2) Whosoever, therefore, resisteth the power, resisteth the ordinance of God: and they that resist shall receive to themselves damnation. (3) For rulers are not a terror to good works, but to the evil. Wilt thou then not be afraid of the power? do that which is good, and thou shalt have praise of the same: (4) For he is the minister of God to thee for good. But if thou do that which is evil, be afraid; for he beareth not the sword in vain: for he is the minister of God, a revenger to execute wrath upon him that doeth evil. (5) Wherefore ye must needs be subject, not only for wrath but also for conscience sake. (6) For this cause pay ye tribute also: for they are God's ministers, attending continually upon this very thing. 
(7) Render therefore to all their dues: tribute to whom tribute is due; custom to whom custom; fear to whom fear; honour to whom honour. Some of the symbolism within the coronation ceremony for British monarchs, in which they are anointed with holy oils by the Archbishop of Canterbury, thereby ordaining them to monarchy, perpetuates the ancient Roman Catholic monarchical ideas and ceremonial (although few Protestants realize this, the ceremony is nearly entirely based upon that of the Coronation of the Holy Roman Emperor). However, in the UK, the symbolism ends there since the real governing authority of the monarch was all but extinguished by the Whig revolution of 1688–89 (see Glorious Revolution). The king or queen of the United Kingdom is one of the last monarchs still to be crowned in the traditional Christian ceremonial, which in most other countries has been replaced by an inauguration or other declaration. In England, it is not without significance that the sacerdotal vestments, generally discarded by the clergy – dalmatic, alb and stole – continued to be among the insignia of the sovereign (see Coronation of the British monarch). Moreover, this sacrosanct character he acquired not by virtue of his "sacring", but by hereditary right; the coronation, anointing and vesting were but the outward and visible symbol of a divine grace adherent in the sovereign by virtue of his title. Even Roman Catholic monarchs, like Louis XIV, would never have admitted that their coronation by the archbishop constituted any part of their title to reign; it was no more than the consecration of their title. The French prelate Jacques-Bénigne Bossuet made a classic statement of the doctrine of divine right in a sermon preached before King Louis XIV: Les rois règnent par moi, dit la Sagesse éternelle: 'Per me reges regnant'; et de là nous devons conclure non seulement que les droits de la royauté sont établis par ses lois, mais que le choix des personnes est un effet de sa providence. Kings reign by Me, says Eternal Wisdom: 'Per me reges regnant' [in Latin]; and from that we must conclude not only that the rights of royalty are established by its laws, but also that the choice of persons [to occupy the throne] is an effect of its providence. The French Huguenot nobles and clergy, having rejected the pope and the Catholic Church, were left only with the supreme power of the king who, they taught, could not be gainsaid or judged by anyone. Since there was no longer the countervailing power of the papacy and since the Church of England was a creature of the state and had become subservient to it, this meant that there was nothing to regulate the powers of the king, and he became an absolute power. In theory, divine, natural, customary, and constitutional law still held sway over the king, but, absent a superior spiritual power, it was difficult to see how they could be enforced since the king could not be tried by any of his own courts. One passage in scripture supporting the idea of the divine right of kings was used by Martin Luther, when urging the secular authorities to crush the Peasant Rebellion of 1525 in Germany in his Against the Murderous, Thieving Hordes of Peasants, basing his argument on Paul's Epistle to the Romans. It is related to the ancient Catholic philosophies regarding monarchy, in which the monarch is God's vicegerent upon the earth and therefore subject to no inferior power. 
Before the Reformation the anointed king was, within his realm, the accredited vicar of God for secular purposes (see the Investiture Controversy); after the Reformation he (or she if queen regnant) became this in Protestant states for religious purposes also. In the sixteenth century, Catholic and Protestant political thinkers alike challenged the idea of a monarch's "divine right". The Spanish Catholic historian Juan de Mariana put forward the argument in his book De rege et regis institutione (1598) that since society was formed by a "pact" among all its members, "there can be no doubt that they are able to call a king to account". Mariana thus challenged divine right theories by stating that, in certain circumstances, tyrannicide could be justified. Cardinal Robert Bellarmine also "did not believe that the institute of monarchy had any divine sanction" and shared Mariana's belief that there were times when Catholics could lawfully remove a monarch. Among groups of English Protestant exiles fleeing from Queen Mary I, some of the earliest anti-monarchist publications emerged. "Weaned off uncritical royalism by the actions of Queen Mary ... The political thinking of men like Ponet, Knox, Goodman and Hales." In 1553, Mary I, a Roman Catholic, succeeded her Protestant half-brother, Edward VI, to the English throne. Mary set about trying to restore Roman Catholicism by making sure that: Edward's religious laws were abolished in the Statute of Repeal Act (1553); the Protestant religious laws passed in the time of Henry VIII were repealed; and the Revival of the Heresy Acts were passed in late 1554. When Thomas Wyatt the Younger instigated what became known as Wyatt's rebellion in early 1554, John Ponet, the highest-ranking ecclesiastic among the exiles, allegedly participated in the uprising. He escaped to Strasbourg after the rebellion's defeat and, the following year, he published A Shorte Treatise of Politike Power, in which he put forward a theory of justified opposition to secular rulers. Ponet's treatise comes first in a new wave of anti-monarchical writings ... It has never been assessed at its true importance, for it antedates by several years those more brilliantly expressed but less radical Huguenot writings which have usually been taken to represent the Tyrannicide-theories of the Reformation. Ponet's pamphlet was republished on the eve of King Charles I's execution. According to U.S. President John Adams, Ponet's work contained "all the essential principles of liberty, which were afterward dilated on by Sidney and Locke", including the idea of a three-branched government. Over time, opposition to the divine right of kings came from a number of sources, including poet John Milton in his pamphlet The Tenure of Kings and Magistrates, and Thomas Paine in his pamphlet Common Sense. By 1700 an Anglican archbishop was prepared to assert that kings hold their crowns by law alone, and the law may forfeit them. Probably the two most famous declarations of a right to revolution against tyranny in the English language are John Locke's Essay concerning The True Original, Extent, and End of Civil-Government and Thomas Jefferson's formulation in the United States Declaration of Independence that "all men are created equal". In England the doctrine of the divine right of kings was developed to its most extreme logical conclusions during the political controversies of the 17th century; its most famous exponent was Sir Robert Filmer. 
It was the main issue to be decided by the English Civil War, the Royalists holding that "all Christian kings, princes and governors" derive their authority directly from God, the Parliamentarians that this authority is the outcome of a contract, actual or implied, between sovereign and people. In one case the king's power would be unlimited, according to Louis XIV's famous saying "L'état, c'est moi!" ("I am the state!"), or limited only by his own free act; in the other his actions would be governed by the advice and consent of the people, to whom he would be ultimately responsible. The victory of this latter principle was proclaimed to all the world by the execution of Charles I. The doctrine of divine right, indeed, for a while drew nourishment from the blood of the royal "martyr"; it was the guiding principle of the Anglican Church of the Restoration; but it suffered a rude blow when James II of England made it impossible for the clergy to obey both their conscience and their king. The Glorious Revolution of 1688 made an end of it as a great political force. This has led to the constitutional development of the Crown in Britain, which is held by descent as modified and modifiable by parliamentary action.
[ { "paragraph_id": 0, "text": "In European Christianity, the divine right of kings, divine right, or God's mandation, is a political and religious doctrine of political legitimacy of a monarchy. It is also known as the divine-right theory of kingship.", "title": "" }, { "paragraph_id": 1, "text": "The doctrine asserts that a monarch is not accountable to any earthly authority (such as a parliament or pope) because their right to rule is derived from divine authority. Thus, the monarch is not subject to the will of the people, of the aristocracy, or of any other estate of the realm. It follows that only divine authority can judge a monarch, and that any attempt to depose, dethrone, resist or restrict their powers runs contrary to God's will and may constitute a sacrilegious act. It does not imply that their power is absolute.", "title": "" }, { "paragraph_id": 2, "text": "In its full-fledged form, the Divine Right of Kings is associated with Henry VIII of England (and the Acts of Supremacy,) James VI and I of Scotland and England, Louis XIV of France, and their successors.", "title": "" }, { "paragraph_id": 3, "text": "In contrast, conceptions of rights developed during the Age of Enlightenment – for example during the American and French Revolutions – often emphasised liberty and equality as being among the most important of rights.", "title": "" }, { "paragraph_id": 4, "text": "Divine right has been a key element of the self-legitimisation of many absolute monarchies, connected with their authority and right to rule. Related but distinct notions include Caesaropapism (the complete subordination of bishops etc. to the secular power), Supremacy (the legal sovereignty of the civil laws over the laws of the Church), Absolutism (a form of monarchical or despotic power that is unrestrained by all other institutions, such as churches, legislatures, or social elites) or Tyranny (an absolute ruler who is unrestrained even by moral law).", "title": "Concepts" }, { "paragraph_id": 5, "text": "Historically, many notions of rights have been authoritarian and hierarchical, with different people granted different rights and some having more rights than others. For instance, the right of a father to receive respect from his son did not indicate a right for the son to receive a return from that respect. Analogously, the divine right of kings, which permitted absolute power over subjects, provided few rights for the subjects themselves.", "title": "Concepts" }, { "paragraph_id": 6, "text": "It is sometimes signified by the phrase by the Grace of God or its Latin equivalent, Dei Gratia, which has historically been attached to the titles of certain reigning monarchs. Note, however, that such accountability only to God does not per se make the monarch a sacred king.", "title": "Concepts" }, { "paragraph_id": 7, "text": "Khvarenah (also spelled khwarenah or xwarra(h): Avestan: 𐬓𐬀𐬭𐬆𐬥𐬀𐬵 xᵛarənah; Persian: فرّ, romanized: far) is an Iranian and Zoroastrian concept, which literally means glory, about divine right of the kings. This may stem from early Mesopotamian culture, where kings were often regarded as deities after their death. Shulgi of Ur was among the first Mesopotamian rulers to declare himself to be divine. In the Iranian view, kings would never rule, unless Khvarenah is with them, and they will never fall unless Khvarenah leaves them. 
For example, according to the Kar-namag of Ardashir, when Ardashir I of Persia and Artabanus V of Parthia fought for the throne of Iran, on the road Artabanus and his contingent are overtaken by an enormous ram, which is also following Ardashir. Artabanus's religious advisors explain to him that the ram is the manifestation of the khwarrah of the ancient Iranian kings, which is leaving Artabanus to join Ardashir.", "title": "Pre-Christian conceptions" }, { "paragraph_id": 8, "text": "The Imperial cult of ancient Rome identified Roman emperors and some members of their families with the \"divinely sanctioned\" authority (auctoritas) of the Roman State. The official offer of cultus to a living emperor acknowledged his office and rule as divinely approved and constitutional: his Principate should therefore demonstrate pious respect for traditional Republican deities and mores. Many of the rites, practices and status distinctions that characterized the cult to emperors were perpetuated in the theology and politics of the Christianised Empire.", "title": "Pre-Christian conceptions" }, { "paragraph_id": 9, "text": "While the earliest references to kingship in Israel proclaim that \"14 \"When you come to the land that the Lord your God is giving you, and you possess it and dwell in it and then say, 'I will set a king over me, like all the nations that are around me,' 15 you may indeed set a king over you whom the Lord your God will choose. One from among your brothers you shall set as king over you. You may not put a foreigner over you, who is not your brother.\" (Deut 17:14-15), significant debate on the legitimacy of kingship has persisted in Rabbinical judaism until Maimonides, though many mainstream currents continue to reject the notion.", "title": "Pre-Christian conceptions" }, { "paragraph_id": 10, "text": "The controversy is highlighted by the instructions to the Israelites in the above-quoted passage, as well as the passages in 1 Samuel 8 and 12, concerning the dispute over kingship; and Perashat Shoftim. It is from 1 Samuel 8 that the Jews receive mishpat ha-melech, the ius regium, or the law of kingship, and from this passage that Maimonides finally concludes that Judaism supports the institution of monarchy, stating that the Israelites had been given three commandments upon entering the land of Israel - to designate a king for themselves, to wipe out the memory of Amalek, and to build the Temple.", "title": "Pre-Christian conceptions" }, { "paragraph_id": 11, "text": "The debate has primarily centred around the problem of being told to \"designate\" a king, which some rabbinical sources have argued is an invocation against a divine right of kings, and a call to elect a leader, in opposition to a notion of a divine right. 
Other rabbinical arguments have put forward an idea that it is through the collective decision of the people that God's will is made manifest, and that the king does therefore have a divine right - once appointed by the nation, he is God's emissary.", "title": "Pre-Christian conceptions" }, { "paragraph_id": 12, "text": "Jewish law requires one to recite a special blessing upon seeing a monarch: \"Blessed are You, L‑rd our G‑d, King of the universe, Who has given from His glory to flesh and blood\".", "title": "Pre-Christian conceptions" }, { "paragraph_id": 13, "text": "With the rise of firearms, nation-states and the Protestant Reformation in the late 16th century, the theory of divine right justified the king's absolute authority in both political and spiritual matters. Henry VIII of England declared himself the Supreme Head of the Church of England and exerted the power of the throne more than any of his predecessors.", "title": "European conceptions" }, { "paragraph_id": 14, "text": "As a political theory, it was further developed by James VI of Scotland (1567–1625) and came to the fore in England under his reign as James I of England (1603–1625). Louis XIV of France (1643–1715) strongly promoted the theory as well.", "title": "European conceptions" }, { "paragraph_id": 15, "text": "Historian J.P. Sommerville stresses the theory was polemic: \"Absolutists magnified royal power. They did this to protect the state against anarchy and to refute the ideas of resistance theorists\", those being in Britain Catholic and Presbyterian theorists.", "title": "European conceptions" }, { "paragraph_id": 16, "text": "The concept of divine right incorporates, but exaggerates, the ancient Christian concept of \"royal God-given rights\", which teach that \"the right to rule is anointed by God\", although this idea is found in many other cultures, including Aryan and Egyptian traditions.", "title": "European conceptions" }, { "paragraph_id": 17, "text": "In pagan religions, the king was often seen as a God incarnate and so was an unchallengeable despot. The ancient Roman Catholic tradition overcame this idea with the doctrine of the two swords and so achieved, for the very first time, a balanced constitution for states. The advent of Protestantism saw something of a return to the idea of a mere unchallengeable despot.", "title": "European conceptions" }, { "paragraph_id": 18, "text": "The Christian notion of a divine right of kings is traced to a story found in 1 Samuel, where the prophet Samuel anoints Saul and then David as Messiah (\"anointed one\")—king over Israel. In the Jewish traditions, the lack of a divine leadership represented by an anointed king [beginning shortly after the death of Joshua] left the people of Israel vulnerable, and the promise of the \"promised land\" was not fully fulfilled until a king was anointed by a prophet on behalf of God.", "title": "European conceptions" }, { "paragraph_id": 19, "text": "The effect of anointing was seen to be that the monarch became inviolable, so that even when Saul sought to kill David, David would not raise his hand against him because \"he was the Lord's anointed\". Raising a hand to a king was therefore considered to be as sacrilegious as raising a hand against God and stood on equal footing as blasphemy. 
In essence, the king stood in place of God and was never to be challenged \"without the challenger being accused of blasphemy\" - except by a prophet, which under Christianity was replaced by the church.", "title": "European conceptions" }, { "paragraph_id": 20, "text": "Outside of Christianity, kings were often seen as ruling with the backing of heavenly powers.", "title": "European conceptions" }, { "paragraph_id": 21, "text": "Although the later Roman Empire had developed the European concept of a divine regent in Late Antiquity, Adomnan of Iona provides one of the earliest written examples of a Western medieval concept of kings ruling with divine right. He wrote of the Irish King Diarmait mac Cerbaill's assassination and claimed that divine punishment fell on his assassin for the act of violating the monarch.", "title": "European conceptions" }, { "paragraph_id": 22, "text": "Adomnan also recorded a story about Saint Columba supposedly being visited by an angel carrying a glass book, who told him to ordain Aedan mac Gabrain as King of Dal Riata. Columba initially refused, and the angel answered by whipping him and demanding that he perform the ordination because God had commanded it. The same angel visited Columba on three successive nights. Columba finally agreed, and Aedan came to receive ordination. At the ordination, Columba told Aedan that so long as he obeyed God's laws, then none of his enemies would prevail against him, but the moment he broke them, this protection would end, and the same whip with which Columba had been struck would be turned against the king.", "title": "European conceptions" }, { "paragraph_id": 23, "text": "Adomnan's writings most likely influenced other Irish writers, who in turn influenced continental ideas as well. Pepin the Short's coronation may have also come from the same influence. The Byzantine Empire can be seen as the progenitor of this concept (which began with Constantine I). This in turn inspired the Carolingian dynasty and the Holy Roman Emperors, whose lasting impact on Western and Central Europe further inspired all subsequent Western ideas of kingship.", "title": "European conceptions" }, { "paragraph_id": 24, "text": "In the Middle Ages, the idea that God had granted certain earthly powers to the monarch, just as he had given spiritual authority and power to the church, especially to the Pope, was already a well-known concept long before later writers coined the term \"divine right of kings\" and employed it as a theory in political science.", "title": "European conceptions" }, { "paragraph_id": 25, "text": "However, the dividing line for the authority and power was a subject of frequent contention: notably in England with the murder of Archbishop Thomas Beckett(1170). 
For example, Richard I of England declared at his trial during the diet at Speyer in 1193: \"I am born in a rank which recognizes no superior but God, to whom alone I am responsible for my actions\", and it was Richard who first used the motto \"Dieu et mon droit\" (\"God and my right\") which is still the motto of the Monarch of the United Kingdom.", "title": "European conceptions" }, { "paragraph_id": 26, "text": "Thomas Aquinas condoned extra-legal tyrannicide in the worst of circumstances:", "title": "European conceptions" }, { "paragraph_id": 27, "text": "When there is no recourse to a superior by whom judgment can be made about an invader, then he who slays a tyrant to liberate his fatherland is [to be] praised and receives a reward.", "title": "European conceptions" }, { "paragraph_id": 28, "text": "On the other hand, Aquinas forbade the overthrow of any morally, Christianly and spiritually legitimate king by his subjects. The only human power capable of deposing the king was the pope. The reasoning was that if a subject may overthrow his superior for some bad law, who was to be the judge of whether the law was bad? If the subject could so judge his own superior, then all lawful superior authority could lawfully be overthrown by the arbitrary judgement of an inferior, and thus all law was under constant threat.", "title": "European conceptions" }, { "paragraph_id": 29, "text": "According to John of Paris, kings had their jurisdictions and bishops (and the pope) had theirs, but kings derived their supreme, non-absolute temporal jurisdiction from popular consent.", "title": "European conceptions" }, { "paragraph_id": 30, "text": "Towards the end of the Middle Ages, many philosophers, such as Nicholas of Cusa and Francisco Suárez, propounded similar theories.", "title": "European conceptions" }, { "paragraph_id": 31, "text": "The Church was the final guarantor that Christian kings would follow the laws and constitutional traditions of their ancestors and the laws of God and of justice.", "title": "European conceptions" }, { "paragraph_id": 32, "text": "Radical English theologian John Wycliffe's theory of Dominium meant that injuries inflicted on someone personally by a king should be born by them submissively, a conventional idea, but that injuries by a king against God should be patiently resisted even to death; gravely sinful kings and popes forfeited their (divine) right to obedience and ownership, though the political order should be maintained. 
More aggressive versions of this were taken up by Lollards and Hussites.", "title": "European conceptions" }, { "paragraph_id": 33, "text": "For Erasmus of Rotterdam it was the consent of the people which gives and takes away \"the purple\", not an unchangeable divine mandate.", "title": "European conceptions" }, { "paragraph_id": 34, "text": "Roman Catholic jurisprudence, the monarch is always subject to natural and divine law, which are regarded as superior to the monarch.", "title": "European conceptions" }, { "paragraph_id": 35, "text": "The possibility of monarchy declining morally, overturning natural law, and degenerating into a tyranny oppressive of the general welfare was answered theologically with the Catholic concept of the spiritual superiority of the Pope (there is no \"Catholic concept of extra-legal tyrannicide\", as some falsely suppose, the same being expressly condemned by St Thomas Aquinas in chapter 7 of his De Regno).", "title": "European conceptions" }, { "paragraph_id": 36, "text": "Catholic thought justified limited submission to the monarchy by reference to the following:", "title": "European conceptions" }, { "paragraph_id": 37, "text": "The divine right of kings, or divine-right theory of kingship, is a political and religious doctrine of royal and political legitimacy. It asserts that a monarch is subject to no earthly authority, deriving his right to rule directly from the will of God. The king is thus not subject to the will of his people, the aristocracy, or any other estate of the realm, including (in the view of some, especially in Protestant countries) the church.", "title": "European conceptions" }, { "paragraph_id": 38, "text": "A weaker or more moderate form of this political theory does hold, however, that the king is subject to the church and the pope, although completely irreproachable in other ways; but according to this doctrine in its strong form, only God can judge an unjust king.", "title": "European conceptions" }, { "paragraph_id": 39, "text": "The doctrine implies that any attempt to depose the king or to restrict his powers runs contrary to the will of God and may constitute a sacrilegious act.", "title": "European conceptions" }, { "paragraph_id": 40, "text": "The Scots textbooks of the divine right of kings were written in 1597–1598 by James VI of Scotland. His Basilikon Doron, a manual on the powers of a king, was written to edify his four-year-old son Henry Frederick that a king \"acknowledgeth himself ordained for his people, having received from God a burden of government, whereof he must be countable\".", "title": "European conceptions" }, { "paragraph_id": 41, "text": "The conception of ordination brought with it largely unspoken parallels with the Anglican and Catholic priesthood, but the overriding metaphor in James VI's 'Basilikon Doron' was that of a father's relation to his children. 
\"Just as no misconduct on the part of a father can free his children from obedience to the fifth commandment\",", "title": "European conceptions" }, { "paragraph_id": 42, "text": "James, after becoming James I of England, also had printed his Defense of the Right of Kings in the face of English theories of inalienable popular and clerical rights.", "title": "European conceptions" }, { "paragraph_id": 43, "text": "He based his theories in part on his understanding of the Bible, as noted by the following quote from a speech to parliament delivered in 1610 as James I of England:", "title": "European conceptions" }, { "paragraph_id": 44, "text": "The state of monarchy is the supremest thing upon earth, for kings are not only God's lieutenants upon earth and sit upon God's throne, but even by God himself, they are called gods. There be three principal [comparisons] that illustrate the state of monarchy: one taken out of the word of God, and the two other out of the grounds of policy and philosophy. In the Scriptures, kings are called gods, and so their power after a certain relation compared to the Divine power. Kings are also compared to fathers of families; for a king is true parens patriae [parent of the country], the politic father of his people. And lastly, kings are compared to the head of this microcosm of the body of man.", "title": "European conceptions" }, { "paragraph_id": 45, "text": "James's reference to \"God's lieutenants\" is apparently a reference to the text in Romans 13 where Paul refers to \"God's ministers\".", "title": "European conceptions" }, { "paragraph_id": 46, "text": "(1) Let every soul be subject unto the higher powers. For there is no power but of God: the powers that be are ordained of God. (2) Whosoever, therefore, resisteth the power, resisteth the ordinance of God: and they that resist shall receive to themselves damnation. (3) For rulers are not a terror to good works, but to the evil. Wilt thou then not be afraid of the power? do that which is good, and thou shalt have praise of the same: (4) For he is the minister of God to thee for good. But if thou do that which is evil, be afraid; for he beareth not the sword in vain: for he is the minister of God, a revenger to execute wrath upon him that doeth evil. (5) Wherefore ye must needs be subject, not only for wrath but also for conscience sake. (6) For this cause pay ye tribute also: for they are God's ministers, attending continually upon this very thing. (7) Render therefore to all their dues: tribute to whom tribute is due; custom to whom custom; fear to whom fear; honour to whom honour.", "title": "European conceptions" }, { "paragraph_id": 47, "text": "Some of the symbolism within the coronation ceremony for British monarchs, in which they are anointed with holy oils by the Archbishop of Canterbury, thereby ordaining them to monarchy, perpetuates the ancient Roman Catholic monarchical ideas and ceremonial (although few Protestants realize this, the ceremony is nearly entirely based upon that of the Coronation of the Holy Roman Emperor). However, in the UK, the symbolism ends there since the real governing authority of the monarch was all but extinguished by the Whig revolution of 1688–89 (see Glorious Revolution). 
The king or queen of the United Kingdom is one of the last monarchs still to be crowned in the traditional Christian ceremonial, which in most other countries has been replaced by an inauguration or other declaration.", "title": "European conceptions" }, { "paragraph_id": 48, "text": "In England, it is not without significance that the sacerdotal vestments, generally discarded by the clergy – dalmatic, alb and stole – continued to be among the insignia of the sovereign (see Coronation of the British monarch). Moreover, this sacrosanct character he acquired not by virtue of his \"sacring\", but by hereditary right; the coronation, anointing and vesting were but the outward and visible symbol of a divine grace adherent in the sovereign by virtue of his title. Even Roman Catholic monarchs, like Louis XIV, would never have admitted that their coronation by the archbishop constituted any part of their title to reign; it was no more than the consecration of their title.", "title": "European conceptions" }, { "paragraph_id": 49, "text": "The French prelate Jacques-Bénigne Bossuet made a classic statement of the doctrine of divine right in a sermon preached before King Louis XIV:", "title": "European conceptions" }, { "paragraph_id": 50, "text": "Les rois règnent par moi, dit la Sagesse éternelle: 'Per me reges regnant'; et de là nous devons conclure non seulement que les droits de la royauté sont établis par ses lois, mais que le choix des personnes est un effet de sa providence.", "title": "European conceptions" }, { "paragraph_id": 51, "text": "Kings reign by Me, says Eternal Wisdom: 'Per me reges regnant' [in Latin]; and from that we must conclude not only that the rights of royalty are established by its laws, but also that the choice of persons [to occupy the throne] is an effect of its providence.", "title": "European conceptions" }, { "paragraph_id": 52, "text": "The French Huguenot nobles and clergy, having rejected the pope and the Catholic Church, were left only with the supreme power of the king who, they taught, could not be gainsaid or judged by anyone. Since there was no longer the countervailing power of the papacy and since the Church of England was a creature of the state and had become subservient to it, this meant that there was nothing to regulate the powers of the king, and he became an absolute power. 
In theory, divine, natural, customary, and constitutional law still held sway over the king, but, absent a superior spiritual power, it was difficult to see how they could be enforced since the king could not be tried by any of his own courts.", "title": "European conceptions" }, { "paragraph_id": 53, "text": "One passage in scripture supporting the idea of the divine right of kings was used by Martin Luther, when urging the secular authorities to crush the Peasant Rebellion of 1525 in Germany in his Against the Murderous, Thieving Hordes of Peasants, basing his argument on Paul's Epistle to the Romans.", "title": "European conceptions" }, { "paragraph_id": 54, "text": "It is related to the ancient Catholic philosophies regarding monarchy, in which the monarch is God's vicegerent upon the earth and therefore subject to no inferior power.", "title": "European conceptions" }, { "paragraph_id": 55, "text": "Before the Reformation the anointed king was, within his realm, the accredited vicar of God for secular purposes (see the Investiture Controversy); after the Reformation he (or she if queen regnant) became this in Protestant states for religious purposes also.", "title": "European conceptions" }, { "paragraph_id": 56, "text": "In the sixteenth century, both Catholic and Protestant political thinkers alike challenged the idea of a monarch's \"divine right\".", "title": "European conceptions" }, { "paragraph_id": 57, "text": "The Spanish Catholic historian Juan de Mariana put forward the argument in his book De rege et regis institutione (1598) that since society was formed by a \"pact\" among all its members, \"there can be no doubt that they are able to call a king to account\". Mariana thus challenged divine right theories by stating in certain circumstances, tyrannicide could be justified.", "title": "European conceptions" }, { "paragraph_id": 58, "text": "Cardinal Robert Bellarmine also \"did not believe that the institute of monarchy had any divine sanction\" and shared Mariana's belief that there were times where Catholics could lawfully remove a monarch.", "title": "European conceptions" }, { "paragraph_id": 59, "text": "Among groups of English Protestant exiles fleeing from Queen Mary I, some of the earliest anti-monarchist publications emerged. \"Weaned off uncritical royalism by the actions of Queen Mary ... The political thinking of men like Ponet, Knox, Goodman and Hales.\"", "title": "European conceptions" }, { "paragraph_id": 60, "text": "In 1553, Mary I, a Roman Catholic, succeeded her Protestant half-brother, Edward VI, to the English throne. Mary set about trying to restore Roman Catholicism by making sure that: Edward's religious laws were abolished in the Statute of Repeal Act (1553); the Protestant religious laws passed in the time of Henry VIII were repealed; and the Revival of the Heresy Acts were passed in late 1554.", "title": "European conceptions" }, { "paragraph_id": 61, "text": "When Thomas Wyatt the Younger instigated what became known as Wyatt's rebellion in early 1554, John Ponet, the highest-ranking ecclesiastic among the exiles, allegedly participated in the uprising. He escaped to Strasbourg after the Rebellion's defeat and, the following year, he published A Shorte Treatise of Politike Power, in which he put forward a theory of justified opposition to secular rulers.", "title": "European conceptions" }, { "paragraph_id": 62, "text": "Ponet's treatise comes first in a new wave of anti-monarchical writings ... 
It has never been assessed at its true importance, for it antedates by several years those more brilliantly expressed but less radical Huguenot writings which have usually been taken to represent the Tyrannicide-theories of the Reformation.", "title": "European conceptions" }, { "paragraph_id": 63, "text": "Ponet's pamphlet was republished on the eve of King Charles I's execution.", "title": "European conceptions" }, { "paragraph_id": 64, "text": "According to U.S. President John Adams, Ponet's work contained \"all the essential principles of liberty, which were afterward dilated on by Sidney and Locke\", including the idea of a three-branched government.", "title": "European conceptions" }, { "paragraph_id": 65, "text": "Over time, opposition to the divine right of kings came from a number of sources, including poet John Milton in his pamphlet The Tenure of Kings and Magistrates, and Thomas Paine in his pamphlet Common Sense. By 1700 an Anglican Archbishop was prepared to assert that Kings hold their Crowns by law alone, and the law may forfeit them.", "title": "European conceptions" }, { "paragraph_id": 66, "text": "Probably the two most famous declarations of a right to revolution against tyranny in the English language are John Locke's Essay concerning The True Original, Extent, and End of Civil-Government and Thomas Jefferson's formulation in the United States Declaration of Independence that \"all men are created equal\".", "title": "European conceptions" }, { "paragraph_id": 67, "text": "In England the doctrine of the divine right of kings was developed to its most extreme logical conclusions during the political controversies of the 17th century; its most famous exponent was Sir Robert Filmer. It was the main issue to be decided by the English Civil War, the Royalists holding that \"all Christian kings, princes and governors\" derive their authority direct from God, the Parliamentarians that this authority is the outcome of a contract, actual or implied, between sovereign and people.", "title": "European conceptions" }, { "paragraph_id": 68, "text": "In one case the king's power would be unlimited, according to Louis XIV's famous saying: \"L' état, c'est moi!\", or limited only by his own free act; in the other his actions would be governed by the advice and consent of the people, to whom he would be ultimately responsible. The victory of this latter principle was proclaimed to all the world by the execution of Charles I.", "title": "European conceptions" }, { "paragraph_id": 69, "text": "The doctrine of divine right, indeed, for a while drew nourishment from the blood of the royal \"martyr\"; it was the guiding principle of the Anglican Church of the Restoration; but it suffered a rude blow when James II of England made it impossible for the clergy to obey both their conscience and their king.", "title": "European conceptions" }, { "paragraph_id": 70, "text": "The Glorious Revolution of 1688 made an end of it as a great political force. This has led to the constitutional development of the Crown in Britain, as held by descent modified and modifiable by parliamentary action.", "title": "European conceptions" } ]
Davros
Davros (/ˈdævrɒs/) is a fictional character from the long-running British science fiction television series Doctor Who. He was created by screenwriter Terry Nation, originally for the 1975 serial Genesis of the Daleks. Davros is a major enemy of the series' protagonist, the Doctor, and is the creator of the Doctor's deadliest enemies, the Daleks. Davros is a genius who has mastered many areas of science, but also a megalomaniac who believes that through his creations he can become the supreme being and ruler of the Universe. The character has been compared to the dictator Adolf Hitler several times, including by the actor Terry Molloy, while Julian Bleach described him as a cross between Hitler and the scientist Stephen Hawking. Davros is from the planet Skaro, whose people, the Kaleds, were engaged in a bitter thousand-year war of attrition with their enemies, the Thals. He is horribly scarred and disabled, a condition that various spin-off media attribute to his laboratory being attacked by a Thal shell. He has one functioning hand and one cybernetic eye mounted on his forehead to take the place of his real eyes, which he cannot keep open for long. For much of his existence he depends completely upon a self-designed mobile life-support chair in place of his absent lower body, and he is physically incapable of leaving the chair for more than a few minutes without dying; the chair would become an obvious inspiration for his eventual design of the Daleks. Davros' voice, like those of the Daleks, is electronically distorted. His manner of speech is generally soft and contemplative, but when angered or excited he is prone to ranting outbursts that resemble the hysterical, staccatissimo speech of the Daleks. Davros first appeared in the 1975 serial Genesis of the Daleks, written by Terry Nation. Nation, creator of the Dalek concept, had deliberately modelled elements of the Daleks' character on Nazi ideology, and conceived of their creator as a scientist with strong fascist tendencies. The physical appearance of Davros was developed by visual effects designer Peter Day and sculptor John Friedlander, who based Davros' chair on the lower half of a Dalek. Producer Philip Hinchcliffe told Friedlander to consider a design similar to the Mekon from the Eagle comic Dan Dare, with a large dome-like head and a withered body. Cast in the role of Davros was Michael Wisher, who had previously appeared in several different roles on Doctor Who and had provided Dalek voices in the serials Frontier in Space, Planet of the Daleks and Death to the Daleks. Wisher based his performance as Davros on the philosopher Bertrand Russell. To prepare for filming under the heavy mask, Wisher rehearsed wearing a paper bag over his head. Friedlander's mask was cast in hard latex, with only the mouth revealing Wisher's features; make-up artist Sylvia James shaded the mask's tones and blackened Wisher's lips and teeth to hide the transition. In the serial Destiny of the Daleks, Davros is played by David Gooderson, using the mask Friedlander had made for Wisher after it was split into intersecting sections to get as good a fit as possible. When Terry Molloy took over the role in Resurrection of the Daleks, a new mask was designed by Stan Mitchell. In 2023, Bleach reprised the role of Davros for a minisode aired during Children in Need, informally titled "Destination: Skaro". For the first time on television, Davros is depicted as non-disabled.
In an interview for Doctor Who: Unleashed, executive producer Russell T. Davies said that this is how Davros will be depicted in future appearances, to avoid contributing to harmful tropes of disabled villains in media. "We had long conversations about bringing Davros back, because he's a fantastic character, [but] time and society and culture and taste has moved on. And there's a problem with the Davros of old in that he's a wheelchair user, who is evil. And I had problems with that. And a lot of us on the production team had problems with that, of associating disability with evil. And trust me, there's a very long tradition of this. "I'm not blaming people in the past at all, but the world changes and when the world changes, Doctor Who has to change as well. "So we made the choice to bring back Davros without the facial scarring and without the wheelchair – or his support unit, which functions as a wheelchair. "I say, this is how we see Davros now, this is what he looks like. This is 2023. This is our lens. This is our eye. Things used to be black and white, they're not in black and white anymore, and Davros used to look like that and he looks like this now, and that we are absolutely standing by." The decision to portray Davros without his chair divided fans. The Fourth Doctor (Tom Baker) first encounters Davros (Michael Wisher) in Genesis of the Daleks, when he and his companions are sent to Skaro to avert the creation of the Daleks. As chief scientist of the Kaleds and leader of their elite scientific division, Davros devises new military strategies to win his people's thousand-year war against the Thal race that also occupies Skaro. When Davros learns that his people are mutating from exposure to the nuclear, chemical and biological weapons used in the war, he artificially accelerates the process to his own design and stores the resulting tentacled creatures in tank-like "Mark III travel machines" partly based on the design of his wheelchair. He later names these creatures "Daleks", an anagram of Kaleds. Davros quickly becomes obsessed with his creations, considering them the ultimate form of life. When other Kaleds attempt to thwart his project, Davros arranges the extinction of his own people at the hands of the Thals, most of whom the Daleks later exterminate in turn. Davros then singles out the members of the elite scientific division who are loyal to him and has the Daleks eliminate the rest. However, the Daleks ultimately turn on Davros, killing his supporters before shooting him when he tries to halt the Dalek production line. In Destiny of the Daleks, it is revealed that Davros (now played by David Gooderson) was not killed, but placed in suspended animation and buried underground in the destruction of his bunker. The Daleks unearth their creator to help them break a logical impasse in their war against the android Movellans. However, the Dalek force is destroyed by the Doctor, and Davros is captured and imprisoned in suspended animation by the humans, before being taken to Earth to face trial. In the Fifth Doctor story Resurrection of the Daleks, Davros (Terry Molloy) is released from his space station prison by a small Dalek force aided by human mercenaries and Dalek duplicates. The Daleks require Davros to find an antidote for a Movellan-created virus that has all but wiped them out.
Believing his creations to be treacherous, Davros begins using a syringe-like mind-control device, hidden in a secret compartment of his wheelchair, on Daleks and humans alike; he ultimately releases a sample of the virus to kill off the Daleks before they can exterminate him. Davros expresses a desire to build a new and improved race of Daleks, but he apparently succumbs to the virus himself, his physiology being close enough to that of the Daleks for the virus to affect him. In the Sixth Doctor story Revelation of the Daleks, it is revealed that Davros managed to escape at the end of Resurrection and has gone into hiding as "The Great Healer" of the funeral and cryogenic preservation centre Tranquil Repose on the planet Necros. There, having created a clone of his head to serve as a decoy and modified his body so that it can fire electric bolts and his chair can hover, Davros uses the more intelligent frozen bodies to engineer a new variety of white armoured Daleks loyal to him, while processing the lesser intellects into food for the galaxy and so ending a galaxy-wide famine; but he is captured by the original Daleks and taken to Skaro to face trial. Davros' final classic appearance is as the Emperor Dalek in Remembrance of the Daleks, with his white and gold Daleks now based on Skaro and termed "Imperial Daleks", fighting against the grey "Renegade Dalek" faction, who answer to the Dalek Supreme. By this time, Davros has been physically transplanted into a customised Dalek casing; he is only revealed to be the Emperor in the final episode. Both Skaro and the Imperial Dalek mothership are apparently destroyed (in the future) when the Seventh Doctor tricks Davros into using the Time Lord artefact known as the Hand of Omega, which makes Skaro's sun go supernova before homing in on the mothership. However, a Dalek on the bridge of Davros' ship reports that the Emperor's escape pod is being launched, and a white light is seen speeding away from the ship moments before its destruction, leaving a clear route to bring Davros back in the future. In the revived series, Davros is first mentioned in the episode "Dalek" (2005) by the Ninth Doctor (Christopher Eccleston), who explains to Henry Van Statten that the Daleks were created by "a genius... a man who was king of his own little world", and again by the Tenth Doctor (David Tennant) in the episode "Evolution of the Daleks" (2007), where he refers to the Daleks' creator as believing that "removing emotions makes you stronger". Davros makes his first physical appearance in the episode "The Stolen Earth" (2008), portrayed by Julian Bleach. The episode reveals that Davros was thought to have died during the first year of the Time War, when his command ship "flew into the jaws of the Nightmare Child" at the Gates of Elysium, despite the Doctor's failed efforts to save him. But Davros was pulled out of the time lock of the war by Dalek Caan (voiced by Nicholas Briggs) and used his own flesh to create a "new empire" of Daleks, who place him in the Vault as their prisoner to make use of his knowledge. Under Davros' guidance, the Daleks steal 27 planets, including Earth, and hide them in the Medusa Cascade, one second out of sync with the rest of the universe.
In the follow-up episode "Journey's End" (2008), it is revealed that the stolen planets are required as a power source for Davros' planned final solution: the Reality Bomb, which produces a wavelength that cancels out the electrical field binding atoms, reducing all life outside the Crucible to nothingness in both his universe and countless other realities. But Davros learns too late that Dalek Caan, who came to realise his race's atrocities as a consequence of saving his creator, used his prophecies and influence to ensure the Daleks' destruction while manipulating events to bring the Tenth Doctor and Donna Noble (Catherine Tate) together for the role the latter would play. Though the Doctor attempts to save him, Davros, who had earlier taunted the Doctor for turning his companions into killers and for causing the deaths of countless people during his travels, furiously refuses the Doctor's help and accuses him of being responsible for the destruction, screaming: "Never forget, Doctor, you did this! I name you forever: You are the Destroyer of Worlds!" The Doctor is thus forced to leave Davros to his supposed fate as the Crucible self-destructs. Davros returns in the two-part Series 9 opening "The Magician's Apprentice" and "The Witch's Familiar" (2015), having escaped the Crucible's destruction and ended up on a restored Skaro, his life prolonged by the Daleks. When the aged Davros' health begins to fail, he remembers his childhood self, played by Joey Price, meeting the Twelfth Doctor (Peter Capaldi) during the Kaleds' thousand-year war, prior to Genesis of the Daleks. The young Davros finds himself lost on the battlefield and surrounded by handmines; the Doctor throws his sonic screwdriver to the boy, intending to save him, but abandons the child to his fate upon learning his name. Davros, seeking a final revenge on the Doctor, employs the snake-like Colony Sarff (Jami Reid-Quarrell) to bring him to Skaro. When it appears that the Doctor has lost his companion Clara Oswald (Jenna Coleman) to the Daleks, Davros manages to trick the Doctor into using his regeneration energy to heal him, extending his own life while infusing every Dalek on Skaro with the energy. But the Doctor reveals that Davros' scheme has also revitalised the decomposing yet still living Daleks left to rot in Skaro's sewers, causing them to revolt and destroy the city. The Doctor then discovers that the Daleks have a concept of mercy, and are allowed to have the word in their vocabulary, when he encounters Clara, who has been placed in a Dalek casing by Missy (Michelle Gomez). The Doctor and Clara escape, the former having an epiphany as to how a sliver of compassion came to be in the Daleks. He then returns to the battlefield in Davros' childhood and uses a Dalek gun to destroy the handmines, saving the boy; this one act of compassion in Davros' life is instilled in the Daleks' design and ensures that Clara is saved. In the Children in Need sketch "Destination: Skaro" (2023), set earlier in the Kaled-Thal war, Davros (Julian Bleach), who is not yet disabled and does not yet have his cybernetic eye, is seen presenting a Dalek prototype featuring a robotic claw to his assistant, Castavillian. When Davros briefly departs to attend to an urgent matter, the Fourteenth Doctor lands in the TARDIS, accidentally destroying the robotic claw. He inadvertently suggests the name "Dalek" for the prototype, and gives Castavillian a plunger-tipped arm as a replacement for the broken claw.
Once the Doctor realises that he has accidentally assisted with the creation of his greatest enemy, he quickly departs, saying that he was "never here". Davros returns and approves of the new plunger arm. Doctor Who Magazine printed several comic stories involving Davros. The first, "Nemesis of the Daleks" (#152–155), with the Seventh Doctor, features an appearance of a Dalek Emperor. Speaking with the Emperor, the Doctor addresses him as Davros, but the Emperor responds "Who is Davros?" The Doctor initially assumes Davros' personality has been totally subsumed, but in the later strip "Emperor of the Daleks" (#197–202) this Emperor is shown to be a different entity from Davros. Set before Remembrance of the Daleks in Davros' timeline but after it in the Doctor's, the story has the Seventh Doctor, accompanied by Bernice Summerfield and helped by the Sixth Doctor, ensure that Davros will survive the wrath of the Daleks so that he can assume the title of Emperor, allowing history to take its course. "Up Above the Gods" (#227), a vignette following up on this, features the Sixth Doctor and Davros having a conversation in the TARDIS. Terry Molloy has reprised his role as Davros in the spin-off audio plays produced by Big Finish Productions, most notably Davros (taking place during the Sixth Doctor's era), which, through flashbacks, explores the scientist's life prior to his crippling injury, attributed here to a Thal nuclear attack (an idea that first appeared in Terrance Dicks' novelisation of Genesis of the Daleks). Davros, which does not feature the Daleks, apparently fills in the gap between Resurrection of the Daleks and Revelation of the Daleks, and has the scientist trying to manipulate the galaxy's economy onto a war footing similar to Skaro's. The Sixth Doctor manages to defeat his plans, and Davros is last heard as his ship explodes, an event obliquely mentioned in Revelation; however, the Doctor believes he has survived. Davros also mentions that he will work on a plan to combat famine, tying into Revelation of the Daleks. The Davros Mission is an original audio adventure (without the Doctor) available on The Complete Davros Collection DVD box set. It takes place directly after the television story Revelation, as Davros leaves the planet Necros and his trial begins. At the end of The Davros Mission, he turns the tables on the Daleks, forcing them to do his bidding. The Big Finish miniseries I, Davros also features trial scenes, but mostly explores his early life. In its four stories, his journey is traced from boyhood to just before Genesis of the Daleks. The Curse of Davros begins with Davros and the Daleks working together to try to alter the outcome of the Battle of Waterloo, using technology Davros has created that allows him to swap people's minds; he switches various soldiers in Napoleon's army with his own Daleks, ultimately intending to replace Napoleon with a Dalek after Waterloo is won so that he can change history and lead humanity in a direction where it may ally with the Daleks. The plan is complicated when the Sixth Doctor arrives and uses the device to swap bodies with Davros in an attempt to subvert the Daleks' plans from the inside, but Davros, now in the Doctor's body, eventually convinces the Daleks of his true identity and plans to remain in the Doctor's healthy body, leaving the Doctor trapped in his original form.
At the end, Davros and the Doctor are returned to their original bodies with the aid of the Doctor's new companion Flip Jackson; the Doctor exposes Davros' true agenda to Napoleon, and Davros is left with an army of Daleks whose minds have been wiped. These Daleks presumably become the "Imperial Daleks" first seen in Remembrance of the Daleks. In The Juggernauts, Davros is on the run from the original Daleks. He hatches a plan to add human tissue to robotic Mechanoids, using them, along with his own Daleks, to destroy the originals; but the Doctor learns the truth about this plan, and his companion Mel Bush, who unwittingly assisted in the programming of the new Mechanoids, uses a backdoor she installed in their programming to turn them against Davros. At the end of the story, the self-destruct mechanism of Davros' life-support chair explodes after he is attacked by the Mechanoids, destroying an entire human colony. It is not clear how Davros survives to become the Dalek Emperor seen in Remembrance. However, in the DVD documentary Davros Connections, director Gary Russell points out that the explosion of Davros' life-support chair leads the listener to believe there is little of Davros left, which fits chronologically with the fact that Remembrance depicts Davros as just a head inside the Emperor Dalek. In Daleks Among Us, set after Remembrance, Davros returns to Azimuth, a planet the Daleks invaded long ago, presenting himself as a victim of Dalek enslavement in order to infiltrate an underground movement against the repressive government (a government so desperate to prevent riots over individual actions during the Dalek occupation that official policy now holds the Dalek invasion never happened), seeking the remnants of an old experiment he carried out on the planet. This experiment is revealed to be Falkus, a clone of Davros' original body that was intended to be a new host for his mind, and that has evolved an independent personality since the Daleks left Azimuth. Falkus attempts to acquire the Persuasion Machine, a dangerous device that the Seventh Doctor has been tracking with his companions Elizabeth Klein and Will Arrowsmith, but the Doctor is able to trick Falkus into using the reprogrammed Persuasion Machine to destroy himself and his Daleks, while Davros flees in an escape pod. Davros is last shown trapped on the planet Lamuria, faced with the spectral former residents of the planet, who seek to punish all criminals in the universe. By the time of the Eighth Doctor audio play Terror Firma (set after Remembrance), Davros is commanding a Dalek army which has successfully conquered the Earth. His mental instability has grown to the point where "Davros" and "the Emperor" exist within him as different personalities. His Daleks recognise this instability and rebel against him. By the story's end the Emperor personality is dominant, and the Daleks agree to follow him and leave Earth. In the fourth volume of the Time War series, which looks at the Eighth Doctor's role in the Time War, the Valeyard uses a Dalek weapon to erase the Daleks from history, and the Dalek Time Strategist escapes the erasure by travelling into a parallel universe where the Kaleds and Thals have been at peace for centuries and Davros is still fully human and married to a Thal woman.
The Dalek Time Strategist manipulates this alternate Davros into using his dimensional-portal technology to merge various alternate Skaros together and so recreate the Daleks in the prime universe, convincing Davros that the Doctor is an enemy of the Kaleds rather than of the Thals. Reference is made to the 'prime' Davros having been killed in the first year of the War (as mentioned in "The Stolen Earth"). The process of merging with his alternate selves causes the alternate Davros to gain the injuries and memories of his counterparts, to the extent that he forgets his wife and the peace with the Thals. Eventually his presence restores the Daleks in the prime universe, but the Dalek Emperor has Davros put into stasis to prevent his influence from causing another civil war by dividing the Daleks between loyalty to the Emperor and loyalty to Davros. Terror Firma may contradict the events of the Eighth Doctor Adventures novel War of the Daleks by John Peel, in which an unmerged Davros is placed on trial by the Dalek Prime, a combination of the Dalek Emperor and the Dalek Supreme. In the novel, the Dalek Prime claims that the planet Antalin had been terraformed to resemble Skaro and was destroyed in its place, a subterfuge intended to destroy the Daleks aligned with Davros, both those on the false Skaro (Antalin) and those that remained hidden within the Dalek ranks on the original Skaro. Despite finding evidence on 22nd-century Earth of the threat that Davros' mission to 1960s Earth posed to Skaro, and watching the event via time-tracking equipment, the Dalek Prime allowed the destruction of "Skaro" in order to destroy the Daleks allied with Davros. The Dalek Prime also claims that the Dalek/Movellan war (and indeed most of Dalek history before the destruction of "Skaro") was faked for Davros' benefit, another ruse designed to bait Davros into giving evidence against himself (as he does at his trial). Skaro is later seen to be intact and undamaged, and one character notes that it is quite possible the Dalek Prime is lying in order to weaken Davros' claim to leadership of the Daleks, while using foreknowledge of events to destroy and entrap Davros and his allies. At the conclusion of War, Davros is seemingly disintegrated by a Spider Dalek on the order of the Dalek Prime. However, Davros had previously recruited one of the Spider Daleks as a sleeper agent for just such an eventuality, and even he is not certain at the end whether he is being disintegrated or teleported away to safety, leaving the possibility of his return open. Paul Cornell's dark vignette in the Doctor Who Magazine Brief Encounters series, "An Incident Concerning the Bombardment of the Phobos Colony", is set sometime between Resurrection of the Daleks and Davros' assumption of the role of Emperor. In 1993, Michael Wisher, the original Davros, reprised the role alongside Peter Miles, who had played his confederate Nyder, in an unlicensed one-off amateur stage production, The Trial of Davros. The plot of the play involves the Time Lords putting Davros on trial, with Nyder as a witness. Terry Molloy played Davros in a remounting of the play, again with Miles, for another one-off production in 2005; during this production, specially shot footage portrayed Dalek atrocities. In 2008, Julian Bleach appeared live as Davros at the Doctor Who Prom, announcing that the Royal Albert Hall would become his new palace and the audience his "obedient slaves". BBC staff have traditionally created parodies of the corporation's own programming to be shown to colleagues at Christmas events and parties.
The BBC's 1993 Christmas tape parodied the allegedly robotic, dictatorial and ruthless management style of its then Director-General, John Birt, by portraying him as Davros taking over the BBC, carrying out bizarre mergers of departments, awarding himself a bonus and singing a song, to the tune of "I Wan'na Be Like You (The Monkey Song)", describing his plans. On 26 November 2007, a DVD box set was released featuring all of the Davros stories from the show's original run: Genesis of the Daleks, Destiny of the Daleks, Resurrection of the Daleks, Revelation of the Daleks and Remembrance of the Daleks.
[ { "paragraph_id": 0, "text": "Davros (/ˈdævrɒs/) is a fictional character from the long-running British science fiction television series Doctor Who. He was created by screenwriter Terry Nation, originally for the 1975 serial Genesis of the Daleks. Davros is a major enemy of the series' protagonist, the Doctor, and is the creator of the Doctor's deadliest enemies, the Daleks. Davros is a genius who has mastered many areas of science, but also a megalomaniac who believes that through his creations he can become the supreme being and ruler of the Universe. The character has been compared to the infamous dictator Adolf Hitler several times, including by the actor Terry Molloy, while Julian Bleach defined him as a cross between Hitler and the renowned scientist Stephen Hawking.", "title": "" }, { "paragraph_id": 1, "text": "Davros is from the planet Skaro, whose people, the Kaleds, were engaged in a bitter thousand-year war of attrition with their enemies, the Thals. He is horribly scarred and disabled, a condition that various spin-off media attribute to his laboratory being attacked by a Thal shell. He has one functioning hand and one cybernetic eye mounted on his forehead to take the place of his real eyes, which he is not able to open for long; for much of his existence he depends completely upon a self-designed mobile life-support chair in place of his lower body. It would become an obvious inspiration for his eventual design of the Dalek. The lower half of his body is absent and he is physically incapable of leaving the chair for more than a few minutes without dying. Davros' voice, like those of the Daleks, is electronically distorted. His manner of speech is generally soft and contemplative, but when angered or excited he is prone to ranting outbursts that resemble the hysterical, staccatissimo speech of the Daleks.", "title": "" }, { "paragraph_id": 2, "text": "Davros first appeared in the 1975 serial Genesis of the Daleks, written by Terry Nation. Nation, creator of the Dalek concept, had deliberately modelled elements of the Daleks' character on Nazi ideology, and conceived of their creator as a scientist with strong fascist tendencies. The physical appearance of Davros was developed by visual effects designer Peter Day and sculptor John Friedlander, who based Davros' chair on the lower half of a Dalek. Producer Philip Hinchcliffe told Friedlander to consider a design similar to the Mekon from the Eagle comic Dan Dare, with a large dome-like head and a withered body.", "title": "Concept" }, { "paragraph_id": 3, "text": "Cast in the role of Davros was Michael Wisher, who had previously appeared in several different roles on Doctor Who and had provided Dalek voices in the serials Frontier in Space, Planet of the Daleks and Death to the Daleks. Wisher based his performance as Davros on the philosopher Bertrand Russell. In order to prepare for filming under the heavy mask, Wisher rehearsed wearing a paper bag over his head. Friedlander's mask was cast in hard latex, with only the mouth revealing Wisher's features; make-up artist Sylvia James shaded the mask's tones and blackened Wisher's lips and teeth to hide the transition.", "title": "Concept" }, { "paragraph_id": 4, "text": "In the serial Destiny of the Daleks, Davros is played by David Gooderson using the mask Friedlander made for Wisher after it was split into intersecting sections to get as good a fit as possible. 
When Terry Molloy took over the role in Resurrection of the Daleks, a new mask was designed by Stan Mitchell.", "title": "Concept" }, { "paragraph_id": 5, "text": "In 2023, Bleach reprised the role of Davros for a minisode aired during Children in Need, informally titled \"Destination: Skaro\". For the first time on television, Davros is depicted as non-disabled. In an interview for Doctor Who: Unleashed, executive producer Russell T. Davies said that this is how Davros will be depicted in future appearances, to avoid contributing to harmful tropes of disabled villains in media.", "title": "Concept" }, { "paragraph_id": 6, "text": "\"We had long conversations about bringing Davros back, because he's a fantastic character, [but] time and society and culture and taste has moved on. And there's a problem with the Davros of old in that he's a wheelchair user, who is evil. And I had problems with that. And a lot of us on the production team had problems with that, of associating disability with evil. And trust me, there's a very long tradition of this.", "title": "Concept" }, { "paragraph_id": 7, "text": "\"I'm not blaming people in the past at all, but the world changes and when the world changes, Doctor Who has to change as well.", "title": "Concept" }, { "paragraph_id": 8, "text": "\"So we made the choice to bring back Davros without the facial scarring and without the wheelchair – or his support unit, which functions as a wheelchair.", "title": "Concept" }, { "paragraph_id": 9, "text": "\"I say, this is how we see Davros now, this is what he looks like. This is 2023. This is our lens. This is our eye. Things used to be black and white, they're not in black and white anymore, and Davros used to look like that and he looks like this now, and that we are absolutely standing by.\"", "title": "Concept" }, { "paragraph_id": 10, "text": "The decision to portray Davros without his chair received a divisive reception from fans.", "title": "Concept" }, { "paragraph_id": 11, "text": "The Fourth Doctor (Tom Baker) first encountered Davros (Michael Wisher) in Genesis of the Daleks when he and his companions were sent to Skaro to avert the creation of the Daleks. As chief scientist of the Kaleds and leader of their elite scientific division, Davros devised new military strategies in order to win his people's thousand-year war against the Thal race that also occupies Skaro. When Davros learned his people were evolving from exposure to nuclear weapons, chemical weapons and biological weapons used in the war, he artificially accelerates the process to his design and stores the resulting tentacled creatures in tank-like \"Mark III travel machines\" partly based on the design of his wheelchair. He later names these creatures \"Daleks\", an anagram of Kaleds.", "title": "Character history" }, { "paragraph_id": 12, "text": "Davros quickly becomes obsessed with his creations, considering them to be the ultimate form of life compared to others. When other Kaleds attempted to thwart his project, Davros arranges the extinction of his own people by using the Thals, whom he mostly killed with the Daleks later. Davros then weeds out those in elite scientific division who are loyal to him so he can have the Daleks eliminate the rest. 
However, the Daleks ultimately turn on Davros, killing his supporters before shooting him when he tries to halt the Dalek production line.", "title": "Character history" }, { "paragraph_id": 13, "text": "In Destiny of the Daleks, it is revealed that Davros (now played by David Gooderson) was not killed, but placed in suspended animation and buried underground when his bunker was destroyed. The Daleks unearth their creator to help them break a logical impasse in their war against the android Movellans. However, the Dalek force is destroyed by the Doctor, and Davros is captured and placed in suspended animation by the humans before being taken to Earth to face trial.", "title": "Character history" }, { "paragraph_id": 14, "text": "In the Fifth Doctor story Resurrection of the Daleks, Davros (Terry Molloy) is released from his space station prison by a small Dalek force aided by human mercenaries and Dalek duplicates. The Daleks require Davros to find an antidote for a Movellan-created virus that has all but wiped them out. Believing his creations to be treacherous, Davros begins using a syringe-like mind control device, hidden in a secret compartment of his wheelchair, on Daleks and humans; he ultimately releases a sample of the virus to kill off the Daleks before they can exterminate him. Davros expresses a desire to build a new and improved race of Daleks, but he apparently succumbs to the virus himself, his physiology being close enough to that of the Daleks for the virus to affect him.", "title": "Character history" }, { "paragraph_id": 15, "text": "In the Sixth Doctor story Revelation of the Daleks, it is revealed that Davros escaped at the end of Resurrection and has gone into hiding as \"The Great Healer\" of the funeral and cryogenic preservation centre Tranquil Repose on the planet Necros. There he creates a clone of his head to serve as a decoy and modifies his body so that it can fire electric bolts and his chair can hover. Using the more intelligent frozen bodies, he engineers a new variety of white-armoured Daleks loyal to him, while processing the lesser intellects into food for the galaxy, ending a galaxy-wide famine. He is eventually captured by the original Daleks and taken to Skaro to face trial.", "title": "Character history" }, { "paragraph_id": 16, "text": "Davros' final classic appearance is as the Emperor Dalek in Remembrance of the Daleks, with his white and gold Daleks now based on Skaro and termed \"Imperial Daleks\", fighting against the grey \"Renegade Dalek\" faction, who answer to the Dalek Supreme. By this time, Davros has been physically transplanted into a customised Dalek casing, and he is only revealed to be the Emperor in the final episode. Both Skaro and the Imperial Dalek mothership are apparently destroyed (in the future) when the Seventh Doctor tricks Davros into using the Time Lord artefact known as the Hand of Omega, which makes Skaro's sun go supernova before homing in on the mothership. However, a Dalek on the bridge of Davros' ship reports that the Emperor's escape pod is being launched, and a white light is seen speeding away from the ship moments before its destruction, leaving a clear route to bring Davros back in the future.", "title": "Character history" }, { "paragraph_id": 17, "text": "In the revived series, Davros is first referred to in the episode \"Dalek\" (2005) by the Ninth Doctor (Christopher Eccleston), who explains to Henry Van Statten that the Daleks were created by \"a genius...
a man who was king of his own little world\", and again by the Tenth Doctor (David Tennant) in the episode \"Evolution of the Daleks\" (2007), where he refers to the Daleks' creator as believing that \"removing emotions makes you stronger\". Davros makes his first physical appearance in the episode \"The Stolen Earth\" (2008), portrayed by Julian Bleach. The episode reveals that Davros was thought to have died during the first year of the Time War, when his command ship \"flew into the jaws of the Nightmare Child\" at the Gates of Elysium, despite the Doctor's failed efforts to save him. However, Davros was pulled out of the war's time lock by Dalek Caan (voiced by Nicholas Briggs), and he used his own flesh to create a \"new empire\" of Daleks, who place him in the Vault as their prisoner to make use of his knowledge. Under Davros' guidance, the Daleks steal 27 planets, including Earth, and hide them in the Medusa Cascade, one second out of sync with the rest of the universe.", "title": "Character history" }, { "paragraph_id": 18, "text": "In the follow-up episode \"Journey's End\" (2008), it is revealed that the stolen planets are required as a power source for Davros' intended final solution: the Reality Bomb, which produces a wavelength that would cancel out the electrical field binding atoms together, reducing all life outside the Crucible to nothingness in both his universe and countless other realities. But Davros learns too late that Dalek Caan, who came to realise his race's atrocities as a consequence of saving his creator, used his prophecies and influence to ensure the Daleks' destruction while manipulating events to bring the Tenth Doctor and Donna Noble (Catherine Tate) together for the role the latter would play. Though the Doctor attempts to save him, Davros, who had earlier taunted the Doctor for turning his companions into killers and causing the deaths of countless people during his travels, furiously refuses his help and accuses him of being responsible for the destruction, screaming: \"Never forget, Doctor, you did this! I name you forever: You are the Destroyer of Worlds!\" The Doctor is thus forced to leave Davros to his supposed fate as the Crucible self-destructs.", "title": "Character history" }, { "paragraph_id": 19, "text": "Davros returns in the two-part Series 9 opening \"The Magician's Apprentice\" and \"The Witch's Familiar\" (2015), having escaped the Crucible's destruction and ended up on a restored Skaro, where the Daleks prolong his life. When the aged Davros' health begins to fail, he remembers his childhood self, played by Joey Price, meeting the Twelfth Doctor (Peter Capaldi) during the Kaleds' thousand-year war, prior to Genesis of the Daleks. The young Davros finds himself lost on the battlefield and surrounded by handmines; the Doctor throws his sonic screwdriver to the boy with the intent of saving him, but abandons the child to his fate upon learning his name. Davros, seeking final revenge on the Doctor, employs the snake-like Colony Sarff (Jami Reid-Quarrell) to bring him to Skaro. When it appears that the Doctor has lost his companion Clara Oswald (Jenna Coleman) to the Daleks, Davros tricks the Doctor into using his regeneration energy to heal him, extending his own life while infusing every Dalek on Skaro with the energy. But the Doctor reveals that Davros' scheme has also revitalised the decomposing-yet-still-alive Daleks left to rot in Skaro's sewers, causing them to revolt and destroy the city.
The Doctor then discovers that the Daleks have a concept of mercy, and the word in their vocabulary, when he encounters Clara, who has been placed in a Dalek casing by Missy (Michelle Gomez). The Doctor and Clara escape, the former having an epiphany as to how Davros somehow put a sliver of compassion into the Daleks. He then returns to the battlefield in Davros' childhood and uses a Dalek gun to destroy the handmines, saving the young Davros; this act instils the one bit of compassion in Davros' life into the Daleks' design, ensuring that Clara is saved.", "title": "Character history" }, { "paragraph_id": 20, "text": "In the Children in Need sketch \"Destination: Skaro\" (2023), which takes place at an earlier point in the Kaled-Thal war, Davros (Julian Bleach), who is not yet disabled and lacks the cybernetic eye, is seen presenting a Dalek prototype featuring a robotic claw to his assistant, Castavillian. When Davros briefly departs to attend to an urgent matter, the Fourteenth Doctor lands in the TARDIS, accidentally destroying the robotic claw. The Doctor inadvertently suggests the name \"Dalek\" for the prototype and gives Castavillian a plunger-tipped arm as a replacement for the broken claw. Once he realises that he has accidentally assisted with the creation of his greatest enemy, he quickly departs, saying that he was \"never here\". Davros returns and approves of the new plunger arm.", "title": "Character history" }, { "paragraph_id": 21, "text": "Doctor Who Magazine printed several comic stories involving Davros. The first, \"Nemesis of the Daleks\" (#152–155), with the Seventh Doctor, features an appearance of a Dalek Emperor. Speaking with the Emperor, the Doctor addresses him as Davros, but the Emperor responds \"Who is Davros?\" The Doctor initially assumes Davros' personality has been totally subsumed, but in the later strip \"Emperor of the Daleks\" (#197–202) this Emperor is shown to be a different entity from Davros. Set prior to Remembrance of the Daleks in Davros' timeline, but after it in the Doctor's, the strip sees the Seventh Doctor, accompanied by Bernice Summerfield and helped by the Sixth Doctor, ensure that Davros survives the wrath of the Daleks so that he can assume the title of Emperor, allowing history to take its course. \"Up Above the Gods\" (#227), a vignette following up on this, features the Sixth Doctor and Davros having a conversation in the TARDIS.", "title": "Other appearances" }, { "paragraph_id": 22, "text": "Terry Molloy has reprised his role as Davros in the spin-off audio plays produced by Big Finish Productions, most notably Davros (taking place during the Sixth Doctor's era), which, through flashbacks, explores the scientist's life prior to his crippling injury, attributed to a Thal nuclear attack (an idea that first appeared in Terrance Dicks' novelisation of Genesis of the Daleks).", "title": "Other appearances" }, { "paragraph_id": 23, "text": "Davros, which does not feature the Daleks, apparently fills in the gaps between Resurrection of the Daleks and Revelation of the Daleks, and has the scientist trying to manipulate the galaxy's economy onto a war footing similar to Skaro's. The Sixth Doctor manages to defeat his plans, and Davros is last heard as his ship explodes, an event obliquely mentioned in Revelation. However, the Doctor believes that he has survived.
Davros also mentions that he will work on a plan to combat famine, tying into Revelation of the Daleks.", "title": "Other appearances" }, { "paragraph_id": 24, "text": "The Davros Mission is an original audio adventure (without the Doctor) available on The Complete Davros Collection DVD box set. It takes place directly after the television story Revelation, as Davros leaves the planet Necros for his trial. At the end of The Davros Mission, he turns the tables on the Daleks, forcing them to do his bidding. The Big Finish miniseries I, Davros also features trial scenes, but mostly explores his early life. In those four stories, his journey is traced from his boyhood to just before Genesis of the Daleks.", "title": "Other appearances" }, { "paragraph_id": 25, "text": "The Curse of Davros begins with Davros and the Daleks working together to alter the outcome of the Battle of Waterloo, using technology Davros has created that allows him to swap people's minds. With it he replaces various soldiers in Napoleon's army with his own Daleks, ultimately intending to replace Napoleon himself with a Dalek once Waterloo is won, so that he can change history and steer humanity towards an eventual alliance with the Daleks. The plan is complicated when the Sixth Doctor arrives and uses the device to swap bodies with Davros in an attempt to subvert the Daleks' plans from the inside, but Davros-in-the-Doctor is eventually able to convince the Daleks of his true identity, planning to remain in the Doctor's healthy body while leaving the Doctor trapped in his original form. At the end, Davros and the Doctor are returned to their original bodies with the aid of the Doctor's new companion Flip Jackson; the Doctor exposes Davros's true agenda to Napoleon, and Davros is left with an army of Daleks who have had their minds wiped. These Daleks presumably become the \"Imperial Daleks\" first seen in Remembrance of the Daleks.", "title": "Other appearances" }, { "paragraph_id": 26, "text": "In The Juggernauts, Davros is on the run from the original Daleks. He hatches a plan to add human tissue to robotic Mechanoids, using them, along with his own Daleks, to destroy the originals, but the Doctor learns the truth about this plan, and his companion Mel Bush, who unwittingly assisted in the programming of the new Mechanoids, uses a backdoor she installed in their programming to turn them against Davros. At the end of the story, the self-destruct mechanism of Davros' life-support chair explodes after he is attacked by the Mechanoids, destroying an entire human colony. It is not clear how Davros survives to become the Dalek Emperor as seen in Remembrance. However, in the DVD documentary Davros Connections, director Gary Russell points out that the explosion of Davros' life-support chair leaves the listener to believe there is little of Davros left, which fits chronologically with the fact that Remembrance depicts Davros as just a head inside the Emperor Dalek.", "title": "Other appearances" }, { "paragraph_id": 27, "text": "In Daleks Among Us, set after Remembrance, Davros returns to Azimuth, a planet the Daleks invaded long ago, presenting himself as a victim of Dalek enslavement in order to infiltrate an underground movement against the repressive government (a regime so desperate to prevent riots over individual conduct during the Dalek occupation that its official policy is that the Dalek invasion never happened), seeking the remnants of an old experiment he carried out on the planet.
This experiment is revealed to be Falkus, a clone of Davros's original body that was intended to be a new host for his mind, Falkus having evolved an independent personality since the Daleks left Azimuth. Falkus attempts to acquire the Persuasion Machine, a dangerous device that the Seventh Doctor has been tracking with his companions Elizabeth Klein and Will Arrowsmith, but the Doctor is able to trick Falkus into using the reprogrammed Persuasion Machine to destroy himself and his Daleks, while Davros flees in an escape pod. Davros is last shown trapped on the planet Lamuria, faced with the spectral former residents of the planet, who sought to punish all criminals in the universe.", "title": "Other appearances" }, { "paragraph_id": 28, "text": "By the time of the Eighth Doctor audio play Terror Firma (set after Remembrance), Davros is commanding a Dalek army which has successfully conquered the Earth. His mental instability has grown to the point where \"Davros\" and \"the Emperor\" exist within him as different personalities. His Daleks recognise this instability and rebel against Davros. By the story's end the Emperor personality is dominant, and the Daleks agree to follow him and leave Earth.", "title": "Other appearances" }, { "paragraph_id": 29, "text": "The fourth volume of the Time War series, which looks at the Eighth Doctor's role in the Time War, sees the Valeyard use a Dalek weapon to erase the Daleks from history; the Dalek Time Strategist escapes the erasure by travelling into a parallel universe where the Kaleds and Thals have been at peace for centuries, with Davros still fully human and married to a Thal woman. The Dalek Time Strategist manipulates this alternate Davros into using his dimensional portal technology to merge various alternate Skaros together to recreate the Daleks in the prime universe, convincing Davros that the Doctor is an enemy of the Kaleds rather than the Thals. Reference is made to the 'prime' Davros having been killed in the first year of the War (as mentioned in \"The Stolen Earth\"). The process of merging with his alternate selves causes the alternate Davros to gain the injuries and memories of his counterparts, to the extent that he forgets his wife and the peace with the Thals. Eventually his presence restores the Daleks in the prime universe, but the Dalek Emperor has Davros put into stasis to prevent his influence from dividing the Daleks between loyalty to the Emperor and loyalty to Davros, which would spark another civil war.", "title": "Other appearances" }, { "paragraph_id": 30, "text": "Terror Firma may contradict the events of the Eighth Doctor Adventures novel War of the Daleks by John Peel, in which an unmerged Davros is placed on trial by the Dalek Prime, a combination of the Dalek Emperor and the Dalek Supreme. In the novel, the Dalek Prime claims that the planet Antalin had been terraformed to resemble Skaro and was destroyed in its place, a subterfuge designed to destroy the Daleks aligned with Davros, both those on the false Skaro (Antalin) and those that remained hidden within the Dalek ranks on the original Skaro. Despite finding evidence on 22nd-century Earth of Davros' mission to 1960s Earth, and thus of the threat to Skaro, and despite watching the event itself via time-tracking equipment, the Dalek Prime allowed the destruction of \"Skaro\" to proceed in order to destroy the Daleks allied to Davros.
The Dalek Prime also claims that the Dalek/Movellan war (and indeed most of Dalek history before the destruction of \"Skaro\") was faked for Davros' benefit, another ruse designed to bait Davros into giving evidence against himself (as he does in his trial). Skaro is later seen to be intact and undamaged, and one character notes that it is quite possible the Dalek Prime is lying in order to weaken Davros' claim to leadership of the Daleks, while using foreknowledge of events to destroy and entrap Davros and his allies.", "title": "Other appearances" }, { "paragraph_id": 31, "text": "At the conclusion of War, Davros was seemingly disintegrated by a Spider Dalek on the order of the Dalek Prime. However, Davros had previously recruited one of the Spider Daleks as a sleeper agent for just such an eventuality, and even Davros himself was not certain in the end whether he was being disintegrated or teleported away to safety, leaving the possibility open for his return.", "title": "Other appearances" }, { "paragraph_id": 32, "text": "Paul Cornell's dark vignette in the Doctor Who Magazine Brief Encounters series, \"An Incident Concerning the Bombardment of the Phobos Colony\", takes place sometime between Resurrection of the Daleks and Davros' assumption of the role of Emperor.", "title": "Other appearances" }, { "paragraph_id": 33, "text": "In 1993, Michael Wisher, the original Davros, reprised the role in an unlicensed one-off amateur stage production, The Trial of Davros, alongside Peter Miles, who had played his confederate Nyder. The plot of the play involved the Time Lords putting Davros on trial, with Nyder as a witness.", "title": "Other appearances" }, { "paragraph_id": 34, "text": "Terry Molloy played Davros in the remounting of the play, again with Miles, for another one-off production in 2005. During the production, specially shot footage portrayed Dalek atrocities.", "title": "Other appearances" }, { "paragraph_id": 35, "text": "In 2008, Julian Bleach appeared live as Davros at the Doctor Who Prom, announcing that the Royal Albert Hall would become his new palace, and the audience his \"obedient slaves\".", "title": "Other appearances" }, { "paragraph_id": 36, "text": "BBC staff have traditionally created parodies of the corporation's own programming to be shown to colleagues at Christmas events and parties. The BBC's 1993 Christmas tape parodied the allegedly robotic, dictatorial and ruthless management style of its then Director-General, John Birt, by portraying him as Davros taking over the BBC, carrying out bizarre mergers of departments, awarding himself a bonus and singing a song to the tune of \"I Wan'na Be Like You (The Monkey Song)\" describing his plans.", "title": "Other appearances" }, { "paragraph_id": 37, "text": "Played by Terry Molloy, except when noted.", "title": "List of appearances" }, { "paragraph_id": 38, "text": "On 26 November 2007, a DVD box set was released featuring all of the Davros stories from the show's original run: Genesis of the Daleks, Destiny of the Daleks, Resurrection of the Daleks, Revelation of the Daleks, and Remembrance of the Daleks.", "title": "Other media" } ]
Davros is a fictional character from the long-running British science fiction television series Doctor Who. He was created by screenwriter Terry Nation, originally for the 1975 serial Genesis of the Daleks. Davros is a major enemy of the series' protagonist, the Doctor, and is the creator of the Doctor's deadliest enemies, the Daleks. Davros is a genius who has mastered many areas of science, but also a megalomaniac who believes that through his creations he can become the supreme being and ruler of the Universe. The character has been compared to the infamous dictator Adolf Hitler several times, including by the actor Terry Molloy, while Julian Bleach defined him as a cross between Hitler and the renowned scientist Stephen Hawking. Davros is from the planet Skaro, whose people, the Kaleds, were engaged in a bitter thousand-year war of attrition with their enemies, the Thals. He is horribly scarred and disabled, a condition that various spin-off media attribute to his laboratory being attacked by a Thal shell. He has one functioning hand and one cybernetic eye mounted on his forehead to take the place of his real eyes, which he is not able to open for long; for much of his existence he depends completely upon a self-designed mobile life-support chair in place of his lower body. It would become an obvious inspiration for his eventual design of the Dalek. The lower half of his body is absent and he is physically incapable of leaving the chair for more than a few minutes without dying. Davros' voice, like those of the Daleks, is electronically distorted. His manner of speech is generally soft and contemplative, but when angered or excited he is prone to ranting outbursts that resemble the hysterical, staccatissimo speech of the Daleks.
2002-01-18T14:16:05Z
2023-12-25T23:37:10Z
[ "Template:Infobox character", "Template:Multiple images", "Template:Original research inline", "Template:Webarchive", "Template:TardisIndexFile", "Template:Authority control", "Template:IPAc-en", "Template:Blockquote", "Template:Cite news", "Template:Cite video", "Template:Doctor Who", "Template:Davros stories", "Template:Subject bar", "Template:Short description", "Template:Distinguish", "Template:Use British English", "Template:Reflist", "Template:Cite serial", "Template:Cite episode", "Template:Redirect-distinguish2", "Template:About", "Template:Use dmy dates", "Template:Cite web", "Template:Cite book", "Template:Doctor Who characters" ]
https://en.wikipedia.org/wiki/Davros
9,140
Dalek
The Daleks (/ˈdɑːlɛks/ DAH-leks) are a fictional extraterrestrial race of extremely xenophobic mutants principally portrayed in the British science fiction television programme Doctor Who. They were conceived by writer Terry Nation and first appeared in the 1963 Doctor Who serial The Daleks, in casings designed by Raymond Cusick. Drawing inspiration from the Nazis, Nation portrayed the Daleks as violent, merciless and pitiless cyborg aliens, utterly devoid of any emotion other than hate, who demand total conformity to the will of the Dalek with the highest authority, and are bent on the conquest of the universe and the extermination of any other forms of life, including other 'impure' Daleks which are deemed inferior for being different to them. Collectively, they are the greatest enemies of Doctor Who's protagonist, the Time Lord known as "the Doctor". During the second year of the original Doctor Who programme (1963–1989), the Daleks developed their own form of time travel. At the beginning of the second Doctor Who TV series that debuted in 2005, it was established that the Daleks had engaged in a Time War against the Time Lords that affected much of the universe and altered parts of history. In the programme's narrative, the planet Skaro suffered a thousand-year war between two societies: the Kaleds and the Thals. During this period, many natives of Skaro became badly mutated by fallout from nuclear weapons and chemical warfare. The Kaled government believed in genetic purity and swore to "exterminate the Thals" for being inferior. Believing his own society was becoming weak and that it was his duty to create a new master race from the ashes of his people, the Kaled scientist Davros genetically modified several Kaleds into squid-like life-forms he called Daleks, removing "weaknesses" such as mercy and sympathy while increasing aggression and survival instinct. He then integrated them with tank-like robotic shells equipped with advanced technology based on the same life-support system he himself had used since being burned and blinded by a nuclear attack. His creations became intent on dominating the universe by enslaving or purging all "inferior" non-Dalek life. The Daleks are the show's most popular and famous villains and their returns to the series over the decades have often gained media attention. Their frequent declaration "Exterminate!" has become common usage. The Daleks were created by Terry Nation and designed by the BBC designer Raymond Cusick. They were introduced in December 1963 in the second Doctor Who serial, colloquially known as The Daleks. They became an immediate and huge hit with viewers, featuring in many subsequent serials and, in the 1960s, two films. They have become as synonymous with Doctor Who as the Doctor himself, and their behaviour and catchphrases are now part of British popular culture. "Hiding behind the sofa whenever the Daleks appear" has been cited as an element of British cultural identity, and a 2008 survey indicated that nine out of ten British children were able to identify a Dalek correctly. In 1999 a Dalek photographed by Lord Snowdon appeared on a postage stamp celebrating British popular culture. In 2010, readers of science fiction magazine SFX voted the Dalek as the all-time greatest monster, beating competition including Japanese movie monster Godzilla and J. R. R. Tolkien's Gollum, of The Lord of the Rings.
As early as one year after first appearing on Doctor Who, the Daleks had become popular enough to be recognised even by non-viewers. In December 1964 editorial cartoonist Leslie Gilbert Illingworth published a cartoon in the Daily Mail captioned "THE DEGAULLEK", caricaturing French President Charles de Gaulle arriving at a NATO meeting as a Dalek with de Gaulle's prominent nose. The word "Dalek" has entered major dictionaries, including the Oxford English Dictionary, which defines "Dalek" as "a type of robot appearing in 'Dr. Who' [sic], a B.B.C. Television science-fiction programme; hence used allusively." English-speakers sometimes use the term metaphorically to describe people, usually authority figures, who act like robots unable to break from their programming. For example, John Birt, the Director-General of the BBC from 1992 to 2000, was called a "croak-voiced Dalek" by playwright Dennis Potter in the MacTaggart Lecture at the 1993 Edinburgh Television Festival. Externally, Daleks resemble human-sized pepper pots with a single mechanical eyestalk mounted on a rotating dome, a gun-mount containing an energy-weapon ("gunstick" or "death ray") resembling an egg-whisk, and a telescopic manipulator arm usually tipped by an appendage resembling a sink-plunger. Daleks have been known to use their plungers to interface with technology, crush a man's skull by suction, measure the intelligence of a subject, and extract information from a man's mind. Dalek casings are made of a bonded polycarbide material called "Dalekanium" by a member of the human resistance in The Dalek Invasion of Earth and the Dalek comics, as well as by the Cult of Skaro in "Daleks in Manhattan". The lower half of a Dalek's shell is covered with hemispherical protrusions, or 'Dalek-bumps', which are shown in the episode "Dalek" to be spheres embedded in the casing. Both the BBC-licensed Dalek Book (1964) and The Doctor Who Technical Manual (1983) describe these items as being part of a sensory array, while in the 2005 series episode "Dalek" they are integral to a Dalek's forcefield mechanism, which evaporates most bullets and resists most types of energy weapons. The forcefield seems to be concentrated around the Dalek's midsection (where the mutant is located); normally ineffective firepower can therefore be concentrated on the eyestalk to blind a Dalek. In the 2019 episode "Resolution", the bumps give way to reveal missile launchers capable of wiping out a military tank with ease. Daleks have a very limited visual field, with no peripheral sight at all, and are relatively easy to hide from in fairly exposed places. Their own energy weapons are capable of destroying them. Their weapons fire a beam that has electrical tendencies, is capable of propagating through water, and may be a form of plasma or electrolaser. The eyepiece is a Dalek's most vulnerable spot; impairing its vision often leads to a blind, panicked firing of its weapon while exclaiming "My vision is impaired; I cannot see!" Russell T Davies subverted the catchphrase in his 2008 episode "The Stolen Earth", in which a Dalek vaporises a paintball that has blocked its vision while proclaiming, "My vision is not impaired!" The creature inside the mechanical casing is soft and repulsive in appearance, and vicious in temperament. The first-ever glimpse of a Dalek mutant, in The Daleks, was a claw peeking out from under a Thal cloak after it had been removed from its casing.
The mutants' actual appearance has varied, but often adheres to the Doctor's description of the species in Remembrance of the Daleks as "little green blobs in bonded polycarbide armour". In Resurrection of the Daleks a Dalek creature, separated from its casing, attacks and severely injures a human soldier; in Remembrance of the Daleks there are two Dalek factions (Imperial and Renegade), and the creatures inside have a different appearance in each case, one resembling the amorphous creature from Resurrection, the other the crab-like creature from the original Dalek serial. As the creature inside is rarely seen on screen there is a common misconception that Daleks are wholly mechanical robots. In the new series Daleks are retconned to be squid-like in appearance, with small tentacles, one or two eyes, and an exposed brain. In the new series, a Dalek creature separated from its casing is shown capable of inserting a tentacle into the back of a human's neck and controlling them. Daleks' voices are electronic; when out of its casing the mutant is only able to squeak. Once the mutant is removed the casing itself can be entered and operated by humanoids; for example, in The Daleks, Ian Chesterton (William Russell) enters a Dalek shell to masquerade as a guard as part of an escape plan. For many years it was assumed that, due to their design and gliding motion, Daleks were unable to climb stairs, and that this provided a simple way of escaping them. A cartoon from Punch pictured a group of Daleks at the foot of a flight of stairs with the caption, "Well, this certainly buggers our plan to conquer the Universe". In a scene from the serial Destiny of the Daleks, the Doctor and companions escape from Dalek pursuers by climbing into a ceiling duct. The Fourth Doctor calls down, "If you're supposed to be the superior race of the universe, why don't you try climbing after us?" The Daleks generally make up for their lack of mobility with overwhelming firepower; a joke among Doctor Who fans is that "Real Daleks don't climb stairs; they level the building." Dalek mobility has improved over the history of the series: in their first appearance, in The Daleks, they were capable of movement only on the conductive metal floors of their city; in The Dalek Invasion of Earth a Dalek emerges from the waters of the River Thames, indicating not only that they had become freely mobile, but that they are amphibious; Planet of the Daleks showed that they could ascend a vertical shaft by means of an external anti-gravity mat placed on the floor; Revelation of the Daleks showed Davros in his life-support chair and one of his Daleks hovering and Remembrance of the Daleks depicted them as capable of hovering up a flight of stairs. Despite this, journalists covering the series frequently refer to the Daleks' supposed inability to climb stairs; characters escaping up a flight of stairs in the 2005 episode "Dalek" made the same joke and were shocked when the Dalek began to hover up the stairs after uttering the phrase "ELEVATE", in a similar manner to their normal phrase "EXTERMINATE". The new series depicts the Daleks as fully capable of flight, even space flight. The non-humanoid shape of the Dalek did much to enhance the creatures' sense of menace. A lack of familiar reference points differentiated them from the traditional "bug-eyed monster" of science fiction, which Doctor Who creator Sydney Newman had wanted the show to avoid. 
The unsettling Dalek form, coupled with their alien voices, made many believe that the props were wholly mechanical and operated by remote control. The Daleks were actually controlled from inside by short operators, who had to manipulate their eyestalks, domes and arms, as well as flashing the lights on their heads in sync with the actors supplying their voices. The Dalek cases were built in two pieces; an operator would step into the lower section and then the top would be secured. The operators looked out between the cylindrical louvres just beneath the dome, which were lined with mesh to conceal their faces. In addition to being hot and cramped, the Dalek casings also muffled external sounds, making it difficult for operators to hear the director or dialogue. John Scott Martin, a Dalek operator from the original series, said that Dalek operation was a challenge: "You had to have about six hands: one to do the eyestalk, one to do the lights, one for the gun, another for the smoke canister underneath, yet another for the sink plunger. If you were related to an octopus then it helped." For Doctor Who's 21st-century revival the Dalek casings retain the same overall shape and dimensional proportions of previous Daleks, although many details have been redesigned to give the Dalek a heavier and more solid look. Changes include a larger, more pointed base; a glowing eyepiece; an all-over metallic-brass finish (specified by Davies); thicker, nailed strips on the "neck" section; a housing for the eyestalk pivot; and significantly larger dome lights. The new prop made its on-screen debut in the 2005 episode "Dalek". These Dalek casings use a short operator inside the housing while the 'head' and eyestalk are operated via remote control. A third person, Nicholas Briggs, supplies the voice in their various appearances. In the 2010 season, a new, larger model appeared in several colours representing different parts of the Dalek command hierarchy. Terry Nation's original plan was for the Daleks to glide across the floor. Early versions of the Daleks rolled on nylon castors, propelled by the operator's feet. Although castors were adequate for the Daleks' debut serial, which was shot entirely at the BBC's Lime Grove Studios, for The Dalek Invasion of Earth Terry Nation wanted the Daleks to be filmed on the streets of London. To enable the Daleks to travel smoothly on location, designer Spencer Chapman built the new Dalek shells around miniature tricycles with sturdier wheels, which were hidden by enlarged fenders fitted below the original base. The uneven flagstones of Central London caused the Daleks to rattle as they moved and it was not possible to remove this noise from the final soundtrack. A small parabolic dish was added to the rear of the prop's casing to explain why these Daleks, unlike the ones in their first serial, were not dependent on static electricity drawn up from the floors of the Dalek city for their motive power. Later versions of the prop had more efficient wheels and were once again simply propelled by the seated operators' feet, but they remained so heavy that when going up ramps they often had to be pushed by stagehands out of camera shot. The difficulty of operating all the prop's parts at once contributed to the occasionally jerky Dalek movements. This problem has largely been eradicated with the advent of the "new series" version, as its remotely controlled dome and eyestalk allow the operator to concentrate on the smooth movement of the Dalek and its arms. 
The staccato delivery, harsh tone and rising inflection of the Dalek voice were initially developed by two voice actors, Peter Hawkins and David Graham, who varied the pitch and speed of the lines according to the emotion needed. Their voices were further processed electronically by Brian Hodgson at the BBC Radiophonic Workshop. The sound-processing devices used have varied over the decades. In 1963 Hodgson and his colleagues used equalisation to boost the mid-range of the actor's voice, then subjected it to ring modulation with a 30 Hz sine wave. The distinctive harsh, grating vocal timbre this produced has remained the pattern for all Dalek voices since (with the exception of those in the 1985 serial Revelation of the Daleks, for which the director, Graeme Harper, deliberately used less distortion). Besides Hawkins and Graham, other voice actors for the Daleks have included Roy Skelton, who first voiced the Daleks in the 1967 story The Evil of the Daleks and provided voices for five additional Dalek serials including Planet of the Daleks, and for the one-off anniversary special The Five Doctors. Michael Wisher, the actor who originated the role of Dalek creator Davros in Genesis of the Daleks, provided Dalek voices for that same story, as well as for Frontier in Space, Planet of the Daleks, and Death to the Daleks. Other Dalek voice actors include Royce Mills (three stories), Brian Miller (two stories), and Oliver Gilbert and Peter Messaline (one story). John Leeson, who performed the voice of K9 in several Doctor Who stories, and Davros actors Terry Molloy and David Gooderson also contributed supporting voices for various Dalek serials. Since 2005 the Dalek voice in the television series has been provided by Nicholas Briggs, speaking into a microphone connected to a voice modulator. Briggs had previously provided Dalek and other alien voices for Big Finish Productions audio plays, and continues to do so. In a 2006 BBC Radio interview, Briggs said that when the BBC asked him to do the voice for the new television series, they instructed him to bring his own analogue ring modulator that he had used in the audio plays. The BBC's sound department had changed to a digital platform and could not adequately create the distinctive Dalek sound with their modern equipment. Briggs went as far as to bring the voice modulator to the actors' readings of the scripts. Manufacturing the props was expensive. In scenes where many Daleks had to appear, some of them would be represented by wooden replicas (Destiny of the Daleks) or life-size photographic enlargements in the early black-and-white episodes (The Daleks, The Dalek Invasion of Earth, and The Power of the Daleks). In stories involving armies of Daleks, the BBC effects team even turned to using commercially available toy Daleks, manufactured by Louis Marx & Co and Herts Plastic Moulders Ltd. Examples of this can be observed in the serials The Power of the Daleks, The Evil of the Daleks, and Planet of the Daleks. Judicious editing techniques also gave the impression that there were more Daleks than were actually available, such as using a split screen in "The Parting of the Ways". Four fully functioning props were commissioned for the first serial "The Daleks" in 1963, and were constructed from BBC plans by Shawcraft Engineering. These became known in fan circles as "Mk I Daleks". Shawcraft were also commissioned to construct approximately 20 Daleks for the two Dalek movies in 1965 and 1966 (see below). 
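The voice treatment described above amounts to two simple signal-processing steps: a mid-range boost followed by ring modulation with a 30 Hz sine wave. The following Python sketch is purely illustrative, assuming a mono 16-bit WAV recording; the 300-3000 Hz band-pass standing in for the mid-range equalisation, and the file names, are assumptions, with only the 30 Hz carrier taken from the account above.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, lfilter

def dalek_voice(samples, rate, carrier_hz=30.0):
    # Normalise the 16-bit input to floats in [-1, 1].
    x = samples.astype(np.float64)
    peak = np.max(np.abs(x))
    if peak > 0:
        x /= peak
    # Crude stand-in for the mid-range EQ boost: band-pass the voice
    # (assumed 300-3000 Hz) and mix the band back on top of the dry signal.
    b, a = butter(2, [300 / (rate / 2), 3000 / (rate / 2)], btype="band")
    x = x + lfilter(b, a, x)
    # Ring modulation: multiply the voice by a low-frequency sine carrier.
    t = np.arange(len(x)) / rate
    y = x * np.sin(2 * np.pi * carrier_hz * t)
    # Re-normalise and convert back to 16-bit PCM.
    y /= np.max(np.abs(y))
    return (y * 32767).astype(np.int16)

rate, voice = wavfile.read("voice.wav")  # hypothetical mono input file
wavfile.write("dalek.wav", rate, dalek_voice(voice, rate))

Multiplying the voice by the carrier, rather than merely varying its volume, is what distinguishes ring modulation from a simple tremolo: it generates sum and difference sidebands around every component of the voice, producing the harsh, metallic grating described above.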
Some of these movie props filtered back to the BBC and were seen in the televised serials, notably The Chase, which was aired before the first movie's debut. The remaining props not bought by the BBC were either donated to charity or given away as prizes in competitions. The BBC's own Dalek props were reused many times, with components of the original Shawcraft "Mk I Daleks" surviving right through to their final classic series appearance in 1988. But years of storage and repainting took their toll. By the time of the Sixth Doctor's Revelation of the Daleks new props were being manufactured out of fibreglass. These models were lighter and more affordable to construct than their predecessors. These newer models were slightly bulkier in appearance around the mid-shoulder section, and also had a redesigned skirt section which was more vertical at the back. Other minor changes were made to the design due to these new construction methods, including altering the fender and incorporating the arm boxes, collars, and slats into a single fibreglass moulding. These props were repainted in grey for the Seventh Doctor serial Remembrance of the Daleks and designated as "Renegade Daleks"; another redesign, painted in cream and gold, became the "Imperial Dalek" faction. New Dalek props were built for the 21st-century version of Doctor Who. The first, which appeared alone in the 2005 episode "Dalek", was built by modelmaker Mike Tucker. Additional Dalek props based on Tucker's master were subsequently built out of fibreglass by Cardiff-based Specialist Models. Wishing to create an alien creature that did not look like a "man in a suit", Terry Nation stated in his script for the first Dalek serial that they should have no legs. He was also inspired by a performance by the Georgian National Ballet, in which dancers in long skirts appeared to glide across the stage. For many of the shows the Daleks were operated by retired ballet dancers wearing black socks while sitting inside the Dalek. Raymond Cusick was given the task of designing the Daleks when Ridley Scott, then a designer for the BBC, proved unavailable after having been initially assigned to their debut serial. According to Jeremy Bentham's Doctor Who—The Early Years (1986), after Nation wrote the script, Cusick was given only an hour to come up with the design for the Daleks and was inspired in his initial sketches by a pepper pot on a table. Cusick himself, however, states that he based it on a man seated in a chair, and used the pepper pot only to demonstrate how it might move. In 1964, Nation told a Daily Mirror reporter that the Dalek name came from a dictionary or encyclopaedia volume, the spine of which read "Dal – Lek" (or, according to another version, "Dal – Eks"). He later admitted that this book and the associated origin of the Dalek name were completely fictitious, and that anyone bothering to check out his story would have found him out. The name had simply rolled off his typewriter. Later, Nation was pleasantly surprised to discover that in Serbo-Croatian the word "dalek" means "far" or "distant". Nation grew up during the Second World War and remembered the fear caused by German bombings. He consciously based the Daleks on the Nazis, conceiving the species as faceless, authoritarian figures dedicated to conquest, racial purity and complete conformity. The allusion is most obvious in the Dalek stories written by Nation, in particular The Dalek Invasion of Earth (1964) and Genesis of the Daleks (1975). 
Before he wrote the first Dalek serial, Nation was a scriptwriter for the comedian Tony Hancock. The two men had a falling out, and Nation either resigned or was fired. Hancock worked on several series proposals, one of which was called From Plip to Plop, a comedic history of the world that would have ended with a nuclear apocalypse, the survivors being reduced to living in dustbin-like robot casings and eating radiation to stay alive. According to Hancock's biographer Cliff Goodwin, when Hancock saw the Daleks he allegedly shouted at the screen, "That bloody Nation — he's stolen my robots!" The titling of early Doctor Who stories is complex and sometimes controversial. The first Dalek serial is called, variously, The Survivors (the pre-production title and on-screen title used for the serial's second episode), The Mutants (its official title at the time of production and broadcast, later taken by another unrelated story), Beyond the Sun (used on some production documentation), The Dead Planet (the on-screen title of the serial's first episode), or simply The Daleks. The instant appeal of the Daleks caught the BBC off-guard and transformed Doctor Who into a national phenomenon. Children were both frightened and fascinated by the alien look of the monsters, and the idea of 'hiding behind the sofa' became a popular, if inaccurate or exaggerated, meme. The Doctor Who production office was inundated with letters and calls asking about the creatures. Newspaper articles focused attention on the series and the Daleks, further enhancing their popularity. Nation jointly owned the intellectual property rights to the Daleks with the BBC, and the money-making concept proved nearly impossible to sell to anyone else, so he was dependent on the BBC wanting to produce stories featuring the creatures. Several attempts to market the Daleks outside the series were unsuccessful. Since Nation's death in 1997, his share of the rights has been administered by his former agent, Tim Hancock. Early plans for what eventually became the 1996 Doctor Who television movie included radically redesigned Daleks whose cases unfolded like spiders' legs. The concept for these "Spider Daleks" was abandoned, but it was picked up again in several Doctor Who spin-offs. When the new series was announced, many fans hoped that the Daleks would return once more to the programme. The Nation estate, however, demanded levels of creative control over the Daleks' appearances and scripts that were unacceptable to the BBC. Eventually the Daleks were cleared to appear in the first series. In 2014, Doctor Who showrunner Steven Moffat denied that their numerous appearances since then had been the result of a contractual obligation. Dalek in-universe history has seen many retroactive changes, which have caused continuity problems. When the Daleks first appeared, they were presented as the descendants of the Dals, mutated after a brief nuclear war between the Dal and Thal races 500 years earlier. This race of Daleks is destroyed when their power supply is wrecked. However, when they reappear in The Dalek Invasion of Earth, they have conquered Earth in the 22nd century. Later stories saw them develop time travel and a space empire. In 1975, Terry Nation revised the Daleks' origins in Genesis of the Daleks, where the Dals were now called Kaleds (of which "Daleks" is an anagram), and the Dalek design was attributed to one man: the paralysed Kaled chief scientist and evil genius, Davros.
Later Big Finish Productions audio plays attempted to explain this retcon by saying that the Skaro word "dal" simply means warrior, which is how the Kaleds described themselves, while "dal-ek" means "god". According to Genesis of the Daleks, instead of a short nuclear exchange, the Kaled-Thal war was a thousand-year-long war of attrition, fought with nuclear, biological and chemical weapons which caused widespread mutations among the life forms of Skaro. Davros experimented on living Kaled cells to find the ultimate mutated form of the Kaled species, believing his own people had become weak and needed to be replaced by a greater life form. He placed his new Dalek creations in tank-like "travel machines" of advanced technology whose design was based on his own life-support chair. Genesis of the Daleks marked a new era for the depiction of the species, with most of their previous history either forgotten or barely referred to again. Future stories in the original Doctor Who series, which followed a rough story arc, would also focus more on Davros, much to the dissatisfaction of some fans who felt that the Daleks should take centre stage rather than merely becoming minions of their creator. Davros made his last televised appearance for 20 years in Remembrance of the Daleks, which depicted a civil war between two factions of Daleks. One faction, the "Imperial Daleks", were loyal to Davros, who had become their Emperor, whilst the other, the "Renegade Daleks", followed a black Supreme Dalek. By the end of the story, the armies of both factions have been wiped out and the Doctor has tricked them into destroying Skaro. However, Davros escapes, and because the Daleks possess time travel and were spread throughout the universe, the possibility remained that many had survived these events. The original "classic" Doctor Who series ended in 1989. In the 1996 Doctor Who TV movie (which introduced the Eighth Doctor), Skaro has seemingly been recreated and the Daleks are shown to still rule it. Though the aliens are never seen on-screen, the story shows the Time Lord villain the Master being executed on Skaro as Dalek voices chant "Exterminate." In Eighth Doctor audio plays produced by Big Finish from 2000 to 2005, Paul McGann reprised his role. The audio play The Time of the Daleks featured the Daleks, operating without Davros, nearly removing William Shakespeare from history. In Terror Firma, the Eighth Doctor met a Dalek faction led by Davros, who was himself devolving into a Dalek-like life form while attempting to create new Daleks from the mutated humans of Earth. The audio dramas The Apocalypse Element and Dalek Empire also depicted the alien villains invading Gallifrey and then creating their own version of the Time Lord power source known as the Eye of Harmony, allowing the Daleks to rebuild an empire and become a greater threat to the Time Lords and other races that possess time travel. A new Doctor Who series premiered in 2005, introducing the Ninth Doctor and revealing that the "Last Great Time War" had just ended, resulting in the seeming destruction of Time Lord society. The episode "Dalek", written by Robert Shearman, was broadcast on BBC One on 30 April 2005 and confirmed that the Time War had mainly involved the Daleks fighting the Time Lords, with the Doctor ending the conflict by seemingly destroying both sides, remarking that his own survival was "not by choice." The episode featured a single Dalek who appeared to be the sole survivor of his race from the Time War.
Later audio plays by Big Finish Productions expanded on the Time War in different audio drama series such as Gallifrey: Time War, The Eighth Doctor: Time War, The War Doctor, and The War Master. A Dalek Emperor returned at the end of the 2005 series, having survived the Time War and then rebuilt the Dalek race with genetic material harvested from human subjects. It saw itself as a god, and the new human-based Daleks were shown worshipping it. The Emperor and this Dalek fleet were destroyed in "The Parting of the Ways". The 2006 season finale "Army of Ghosts"/"Doomsday" featured a squad of four Dalek survivors from the old Empire, known as the Cult of Skaro, composed of Daleks who were tasked with developing imagination to better predict and combat enemies. These Daleks took on names: Jast, Thay, Caan, and their black Dalek leader Sec. The Cult had survived the Time War by escaping into the Void between dimensions. They emerged along with the Genesis Ark, a Time Lord prison vessel containing millions of Daleks, at Canary Wharf due to the actions of the Torchwood Institute and Cybermen from a parallel world. This resulted in a Cyberman-Dalek clash in London, which was resolved when the Tenth Doctor caused both groups to be sucked into the Void. The Cult survived by utilising an "emergency temporal shift" to escape. These four Daleks - Sec, Jast, Thay and Caan - returned in the two-part story "Daleks in Manhattan"/"Evolution of the Daleks", in which whilst stranded in 1930s New York, they set up a base in the partially built Empire State Building and attempt to rebuild the Dalek race. To this end, Dalek Sec merges with a human being to become a Human/Dalek hybrid. The Cult then set about creating "Human Daleks" by "formatting" the brains of a few thousand captured humans so they can have Dalek minds. Dalek Sec, however, becomes more human in personality and alters the plan so the hybrids will be more human like him. The rest of the Cult mutinies. Sec is killed, while Thay and Jast are later wiped out with the hybrids. Dalek Caan, believing it may be the last of its kind now, escapes once more via an emergency temporal shift. The Daleks returned in the 2008 season's two-part finale, "The Stolen Earth"/"Journey's End", accompanied once again by their creator Davros. The story reveals that Caan's temporal shift sent him into the Time War, despite the War being "Time-Locked". The experience of piercing the Time-Lock resulted in Caan seeing parts of several futures, destroying his sanity in the process. Caan rescued many Time War era Daleks and Davros, who created new Dalek troops using his own body's cells. A red Supreme Dalek leads the new army while keeping Caan and Davros imprisoned on the Dalek flagship, the Crucible. Davros and the Daleks plan to destroy reality itself with a "reality bomb". The plan fails due to the interference of Donna Noble, a companion of the Doctor, and Caan, who has been manipulating events to destroy the Daleks after realising the severity of the atrocities they have committed. The Daleks returned in the 2010 episode "Victory of the Daleks", wherein it is revealed that some Daleks survived the destruction of their army in "Journey's End" and retrieved the "Progenitor", a tiny apparatus containing 'original' Dalek DNA. The activation of the Progenitor results in the creation of New Paradigm Daleks who deem the Time War era Daleks to be inferior. 
The new Daleks are organised into different roles (drone, scientist, strategist, supreme and eternal), which are identifiable by colour-coded armour instead of the identification plates under the eyestalk used by their predecessors. They escape the Doctor at the end of the episode via time travel with the intent to rebuild their Empire. The Daleks appeared only briefly in the subsequent finales "The Pandorica Opens"/"The Big Bang" (2010) and The Wedding of River Song (2011), as Steven Moffat decided to "give them a rest" and stated, "There's a problem with the Daleks. They are the most famous of the Doctor's adversaries and the most frequent, which means they are the most reliably defeatable enemies in the universe." These episodes also reveal that Skaro has been recreated yet again. They next appear in "Asylum of the Daleks" (2012), where the Daleks are shown to have greatly increased numbers and now have a Parliament; in addition to the traditional "modern" Daleks, several designs from both the original and new series appear, all co-existing rather than judging each other as inferior or outdated (except for those Daleks whose personalities deem them "insane" or who can no longer battle). All record of the Doctor is removed from their collective consciousness at the end of the episode. The Daleks then appear in the 50th Anniversary special "The Day of the Doctor", where they are seen being defeated in the Time War. The same special reveals that many Time Lords survived the war, since the Doctor found a way to transfer their planet Gallifrey out of phase with reality and into a pocket dimension. In "The Time of the Doctor", the Daleks are one of the races that besiege Trenzalore in an attempt to stop the Doctor from releasing the Time Lords from imprisonment. After converting Tasha Lem into a Dalek puppet, they regain knowledge of the Doctor. The Twelfth Doctor first encounters the Daleks in his second full episode, "Into the Dalek" (2014), where he meets a damaged Dalek he names "Rusty". Connecting to the Doctor's love of the universe and his hatred of the Daleks, Rusty assumes a mission to destroy other Daleks. In "The Magician's Apprentice"/"The Witch's Familiar" (2015), the Doctor is summoned to Skaro, where he learns Davros has rebuilt the Dalek Empire. In "The Pilot" (2017), the Doctor briefly visits a battle during the Dalek-Movellan war. The Thirteenth Doctor encountered a Dalek in a New Year's Day episode, "Resolution" (2019). A Dalek mutant, separated from its armoured casing, takes control of a human in order to build a new travel device for itself and summon more Daleks to conquer Earth. This Dalek is cloned by a scientist in "Revolution of the Daleks" (2021) and attempts to take over Earth using further clones, but the clones are killed by other Daleks for perceived genetic impurity. The Doctor later sends the Dalek army into the "void" between worlds to be destroyed, using a spare TARDIS she recently acquired on Gallifrey. After cameo appearances depicting them as one of several villains trying to take advantage of "the Flux" event tearing through space-time in series 13, the Daleks returned in the first 2022 special, "Eve of the Daleks". In the episode, a team of Dalek Executioners is dispatched by High Command to avenge the Dalek War Fleet destroyed by the Doctor in the series 13 finale "The Vanquishers", only for a time loop established by the TARDIS to save the Doctor's life and give her a chance to destroy the executioners instead.
The Daleks later appeared alongside the Cybermen as allies of the Master in "The Power of the Doctor", as part of a plot to finally destroy their nemesis, but the alliance is defeated by the Doctor and companions new and old.

In 2023, the origin of the iconic plunger-like appendages used by Daleks was retroactively established as being from the Fourteenth Doctor's TARDIS.

Daleks have little, if any, individual personality, ostensibly no emotions other than hatred and anger, and a strict command structure in which they are conditioned to obey superiors' orders without question. Dalek speech is characterised by repeated phrases, and by orders given to themselves and to others. Unlike the stereotypical emotionless robots often found in science fiction, Daleks are often angry; author Kim Newman has described the Daleks as behaving "like toddlers in perpetual hissy fits", gloating when in power and flying into a rage when thwarted. They tend to be excitable and will repeat the same word or phrase over and over again in heightened emotional states, most famously "Exterminate! Exterminate!"

Daleks are extremely aggressive, and seem driven by an instinct to attack. This instinct is so strong that Daleks have been depicted fighting the urge to kill, or even attacking when unarmed. The Fifth Doctor characterises this impulse by saying, "However you respond [to Daleks] is seen as an act of provocation." The fundamental feature of Dalek culture and psychology is an unquestioned belief in the superiority of the Dalek race, and their default directive is to destroy all non-Dalek life-forms. Other species are either to be exterminated immediately or enslaved and then exterminated once they are no longer useful.

The Dalek obsession with their own superiority is illustrated by the schism between the Renegade and Imperial Daleks seen in Revelation of the Daleks and Remembrance of the Daleks: the two factions each consider the other to be a perversion, despite the relatively minor differences between them. This intolerance of any "contamination" within themselves is also shown in "Dalek", The Evil of the Daleks and the Big Finish Productions audio play The Mutant Phase. This superiority complex is the basis of the Daleks' ruthlessness and lack of compassion, shown in the extreme in "Victory of the Daleks", where the new, pure Daleks destroy their creators, the impure Daleks, with the latter's consent. It is nearly impossible to negotiate or reason with a Dalek; this single-mindedness makes them dangerous and not to be underestimated. The Eleventh Doctor (Matt Smith) is puzzled in "Asylum of the Daleks" as to why the Daleks don't simply kill the sequestered ones that have "gone wrong". Although the Asylum is subsequently obliterated, the Prime Minister of the Daleks explains that "it is offensive to us to destroy such divine hatred", and the Doctor is sickened at the revelation that hatred is actually considered beautiful by the Daleks.

Dalek society is depicted as one of extreme scientific and technological advancement; the Third Doctor states that "it was their inventive genius that made them one of the greatest powers in the universe." However, their reliance on logic and machinery is also a strategic weakness which they recognise, and they thus use more emotion-driven species as agents to compensate for these shortcomings.

Although the Daleks are not known for their regard for due process, they have taken at least two enemies back to Skaro for a "trial" rather than killing them immediately.
The first was their creator, Davros, in Revelation of the Daleks, and the second was the renegade Time Lord known as the Master in the 1996 television movie. The reasons for the Master's trial, and why the Doctor would be allowed to retrieve the Master's remains, have never been explained on screen. The Doctor Who Annual 2006 implies that the trial may have been due to a treaty signed between the Time Lords and the Daleks. The framing device for the I, Davros audio plays is a Dalek trial to determine whether Davros should be the Daleks' leader once more.

Spin-off novels contain several tongue-in-cheek mentions of Dalek poetry, and an anecdote about an opera based upon it, which was lost to posterity when the entire cast was exterminated on the opening night. Two stanzas are given in the novel The Also People by Ben Aaronovitch. In an alternative timeline portrayed in the Big Finish Productions audio adventure The Time of the Daleks, the Daleks show a fondness for the works of Shakespeare. A similar idea was satirised by comedian Frankie Boyle in the BBC comedy quiz programme Mock the Week; he gave the fictional Dalek poem "Daffodils; EXTERMINATE DAFFODILS!" as an "unlikely line to hear in Doctor Who".

Because the Doctor has defeated the Daleks so often, he has become their collective arch-enemy and they have standing orders to capture or exterminate him on sight. In later fiction, the Daleks know the Doctor as "Ka Faraq Gatri" ("Bringer of Darkness" or "Destroyer of Worlds") and "The Oncoming Storm". Both the Ninth Doctor (Christopher Eccleston) and Rose Tyler (Billie Piper) suggest that the Doctor is one of the few beings the Daleks fear. In "Doomsday", Rose notes that while the Daleks see the extermination of five million Cybermen as "pest control", "one Doctor" visibly unnerves them, to the point that they physically recoil. To his indignant surprise, in "Asylum of the Daleks" the Eleventh Doctor (Matt Smith) learns that the Daleks have designated him "The Predator".

A rel is a Dalek and Kaled unit of measurement. It was usually a measurement of time, with a duration of slightly more than one second, as mentioned in "Doomsday", "Evolution of the Daleks" and "Journey's End", counting down to the ignition of the reality bomb. One Earth minute most likely equals about 50 rels, which would put a single rel at roughly 1.2 seconds. However, in some comic books it was also used as a unit of velocity, and in some cases as a unit of hydroelectric energy (not to be confused with a vep, the unit used to measure artificial sunlight). The rel was first used in the non-canonical feature film Daleks – Invasion Earth: 2150 A.D., soon after it had appeared in early Doctor Who comic books.

Two Doctor Who movies starring Peter Cushing featured the Daleks as the main villains: Dr. Who and the Daleks and Daleks – Invasion Earth: 2150 A.D., based on the television serials The Daleks and The Dalek Invasion of Earth respectively. The movies were not direct remakes; for example, the Doctor in the Cushing films was a human inventor called "Dr. Who" who built a time-travelling device named Tardis, instead of a mysterious alien who stole a device called "the TARDIS".

Four books focusing on the Daleks were published in the 1960s.
The Dalek Book (1964, written by Terry Nation and David Whitaker), The Dalek World (1965, written by Nation and Whitaker) and The Dalek Outer Space Book (1966, by Nation and Brad Ashton) were all hardcover books formatted like annuals, containing text stories and comics about the Daleks, along with fictional information (sometimes based on the television serials, other times made up for the books). Nation also published The Dalek Pocketbook and Space-Travellers Guide, which collected articles and features treating the Daleks as if they were real. Four more annuals were published in the 1970s by World Distributors under the title Terry Nation's Dalek Annual (with cover dates 1976–1979, but published 1975–1978).

Two original novels by John Peel, War of the Daleks (1997) and Legacy of the Daleks (1998), were released as part of the Eighth Doctor Adventures series of Doctor Who novels. A novella, The Dalek Factor by Simon Clark, was published in 2004, and two books featuring the Daleks and the Tenth Doctor (I am a Dalek by Gareth Roberts, 2006, and Prisoner of the Daleks by Trevor Baxendale, 2009) have been released as part of the New Series Adventures.

Nation authorised the publication of the comic strip The Daleks in the comic TV Century 21 in 1965. The weekly one-page strip, written by Whitaker but credited to Nation, featured the Daleks as protagonists and "heroes", and ran for two years, from the creation of the mechanised Daleks by the humanoid Dalek scientist Yarvelling to their eventual discovery, in the ruins of a crashed space-liner, of the co-ordinates for Earth, which they proposed to invade. Although much of the material in these strips was directly contradicted by what was later shown on television, some concepts, such as the Daleks' use of humanoid duplicates and the design of the Dalek Emperor, did show up later in the programme.

At the same time, a Doctor Who strip was also being published in TV Comic. Initially the strip did not have the rights to use the Daleks, so the First Doctor battled the "Trods" instead: cone-shaped robotic creatures that ran on static electricity. By the time the Second Doctor appeared in the strip in 1967, the rights issues had been resolved, and the Daleks began making appearances starting in The Trodos Ambush (TVC #788-#791), in which they massacred the Trods. The Daleks also made appearances in the Third Doctor-era Dr. Who comic strip that featured in the combined Countdown/TV Action comic during the early 1970s.

An animated series called Daleks!, consisting of five ten-minute episodes, was released on the official Doctor Who YouTube channel in 2020.

Other licensed appearances have included a number of stage plays (see Stage plays below) and television adverts for Wall's "Sky Ray" ice lollies (1966), Weetabix breakfast cereal (1977), Kit Kat chocolate bars (2001), and the ANZ Bank (2005). In 2003, Daleks also appeared in UK billboard ads for Energizer batteries, alongside the slogan "Are You Power Mad?" Daleks have made cameo appearances in television programmes and films unrelated to Doctor Who from the 1960s to the present day, and have been referenced in many musical compositions.

Licensed Doctor Who games featuring Daleks include 1984's The Key to Time, a text adventure game for the ZX Spectrum. The first graphical game to feature Daleks was the eponymous, turn-based title released by Johan Strandberg for the Macintosh in the same year.
Daleks also appeared in minor roles, or as thinly disguised versions, in other minor games throughout the 1980s, but did not feature as central adversaries in a licensed game until 1992, when Admiral Software published Dalek Attack. The game allowed the player to play as various Doctors or companions, running them through several environments to defeat the Daleks. In 1997, the BBC released a PC game entitled Destiny of the Doctors, which also featured the Daleks among other adversaries.

One authorised online game is The Last Dalek, a Flash game created by New Media Collective for the BBC. It is based on the 2005 episode "Dalek" and can be played at the official BBC Doctor Who website. The Doctor Who website also features another game, Daleks vs Cybermen (also known as Cyber Troop Control Interface), based on the 2006 episode "Doomsday"; in this game, the player controls troops of Cybermen which must fight Daleks as well as Torchwood Institute members.

On 5 June 2010, the BBC released the first of four official computer games on its website, Doctor Who: The Adventure Games, which are intended as part of the official TV series adventures. In the first of these, "City of the Daleks", the Eleventh Doctor and Amy Pond must stop the Daleks rewriting time and reviving Skaro, their homeworld. The Daleks also appear in the Nintendo DS and Wii games Doctor Who: Evacuation Earth and Doctor Who: Return to Earth, and several Daleks appear in the iOS game The Mazes of Time as rare enemies the player faces, appearing only in the first and final levels.

The Daleks also appear in Lego Dimensions, where they ally themselves with Lord Vortech and possess the size-altering scale keystone. When Batman, Gandalf, and Wyldstyle encounter them, the Daleks assume that the trio are allies of the Doctor and attack them. The main characters continue to fight the Daleks until they call the Doctor to save them. A Dalek saucer also appears in the level based on Metropolis, in which its top serves as the stage for the boss battle against Sauron, with Daleks among the various enemies summoned to attack the player. A Dalek is also among the elements summoned by the player to deal with the obstacles in the Portal 2 story level.

The Daleks also appear in Doctor Who: The Edge of Time, a virtual reality game for the PlayStation VR, Oculus Rift, Oculus Quest, HTC Vive, and Vive Cosmos, released in 2019. The Daleks are also a licensed costume in Fall Guys.

At the 1966 Conservative Party conference in Blackpool, delegate Hugh Dykes publicly compared the Labour government's Defence Secretary Denis Healey to the creatures: "Mr. Healey is the Dalek of defence, pointing a metal finger at the armed forces and saying 'I will eliminate you'." In a House of Commons debate on 12 February 1968, the then Minister of Technology, Tony Benn, mentioned the Daleks in reply to a question from the Labour MP Hugh Jenkins concerning the Concorde aircraft project. In the context of the dangers of solar flares, he said, "Because we are exploring the frontiers of technology, some people think Concorde will be avoiding solar flares like Dr. Who avoiding Daleks. It is not like this at all." Australian Labor Party luminary Robert Ray described his right-wing Labor Unity faction successor, Victorian Senator Stephen Conroy, and his Socialist Left faction counterpart, Kim Carr, as "factional Daleks" during a 2006 Australian Fabian Society lunch in Sydney.
During a 2021 House of Commons debate about the retention of dentists in rural areas of the United Kingdom during the COVID-19 pandemic, the voice of Conservative MP Scott Mann of North Cornwall, speaking over a video link, became distorted due to a malfunction with his audio feed. Deputy Speaker of the House Nigel Evans interrupted the broadcast amidst chuckles from other MPs, saying, "Scott, you sound like a Dalek and I don't mean that unkindly. There's clearly a communications problem." Mann later returned to apologise.

Daleks have been used in political cartoons to caricature Douglas Hurd (as the "Douglek" in Private Eye's Dan Dire – Pilot of the Future), Tony Benn, John Birt, Tony Blair (also portrayed as Davros), Alec Douglas-Home, Charles de Gaulle and Mark Thompson.

Daleks have appeared on magazine covers promoting Doctor Who since the "Dalekmania" fad of the 1960s. Radio Times has featured the Daleks on its cover several times, beginning with the 21–27 November 1964 issue, which promoted The Dalek Invasion of Earth. Other magazines also used Daleks to attract readers' attention, including Girl Illustrated.

In April 2005, Radio Times created a special cover to commemorate both the return of the Daleks to the screen in "Dalek" and the forthcoming general election. This cover recreated a scene from The Dalek Invasion of Earth in which the Daleks were seen crossing Westminster Bridge, with the Houses of Parliament in the background. The cover text read "VOTE DALEK!" In a 2008 contest sponsored by the Periodical Publishers Association, this cover was voted the best British magazine cover of all time, and in 2013 it was voted "Cover of the century" by the Professional Publishers Association. The 2010 United Kingdom general election campaign also prompted a collector's set of three near-identical Radio Times covers on 17 April, each with the same headline but featuring the newly redesigned Daleks in their primary colours representing the three main political parties: red for Labour, blue for the Conservatives and yellow for the Liberal Democrats.

Daleks have been the subject of many parodies, including Spike Milligan's "Pakistani Dalek" sketch in his comedy series Q, and Victor Lewis-Smith's "Gay Daleks". Occasionally the BBC has used the Daleks to parody other subjects: in 2002, BBC Worldwide published the Dalek Survival Guide, a parody of The Worst-Case Scenario Survival Handbooks. Comedian Eddie Izzard has an extended stand-up routine about Daleks, which was included in her 1993 show Live at the Ambassadors. The Daleks made two brief appearances in a pantomime version of Aladdin at the Birmingham Hippodrome, which featured Torchwood star John Barrowman in the lead role. A joke-telling robot loosely modelled on the Dalek, with a similarly booming voice, appeared in the South Park episode "Funnybot", even spouting "exterminate". A Dalek can also be seen in the background at the 1:13 and 1:17 marks of the Sam & Max animated series episode "The Trouble with Gary". In the Community parody of Doctor Who called Inspector Spacetime, they are referred to as Blorgons.

The BBC approached Walter Tuckwell, a New Zealand-born entrepreneur who was handling product merchandising for other BBC shows, and asked him to do the same for the Daleks and Doctor Who. Tuckwell created a glossy sales brochure that sparked off a Dalek craze, dubbed "Dalekmania" by the press, which peaked in 1965. The first Dalek toys were released that year as part of the craze.
These included battery-operated, friction-drive and "Rolykins" Daleks from Louis Marx & Co., as well as models from Cherilea, Herts Plastic Moulders Ltd and Cowan, de Groot Ltd, and "Bendy" Daleks made by Newfeld Ltd. At the height of the Daleks' popularity, in addition to toy replicas, there were Dalek board games and activity sets, slide projectors for children and even Dalek playsuits made from PVC. Collectible cards, stickers, toy guns, music singles, punching bags and many other items were also produced in this period.

Dalek toys released in the 1970s included a new version of Louis Marx's battery-operated Dalek (1974), a "talking Dalek" from Palitoy (1975), and a Dalek board game (1975) and Dalek action figure (1977), both from Denys Fisher. From 1988 to 2002, Dapol released a line of Dalek toys in conjunction with its Doctor Who action figure series. In 1984, Sevans Models released a self-assembly model kit for a one-fifth scale Dalek, which Doctor Who historian David Howe has described as "the most accurate model of a Dalek ever to be released". Comet Miniatures released two Dalek self-assembly model kits in the 1990s.

In 1992, Bally released a Doctor Who pinball machine which prominently featured the Daleks, both as a primary playfield feature and as a motorised toy in the topper. Bluebird Toys produced a Dalek-themed Doctor Who playset in 1998.

Beginning in 2000, Product Enterprise (who later operated under the names "Iconic Replicas" and "Sixteen 12 Collectibles") produced various Dalek toys. These included one-inch (2.5 cm) Dalek "Rolykins" (based on the Louis Marx toy from 1965); push-along "talking" 7-inch (17.8 cm) Daleks; 2½-inch (6.4 cm) Dalek "Rollamatics" with a pull-back-and-release mechanism; and a one-foot (30.5 cm) remote-control Dalek.

In 2005, Character Options was granted the "Master Toy License" for the revived Doctor Who series, including the Daleks. Their product lines have included 5-inch (12.7 cm) static/push-along and radio-controlled Daleks, radio-controlled 12-inch (30.5 cm) versions and radio-controlled 18-inch (45.7 cm) / 1:3 scale variants. The 12-inch remote-control Dalek won the 2005 award for Best Electronic Toy of the Year from the Toy Retailers Association. Some versions of the 18-inch model included semi-autonomous and voice-command features. In 2008, the company acquired a license to produce 5-inch (12.7 cm) Daleks of the various "classic series" variants. For the fifth series of the revival, Ironside (post-Time War Daleks in camouflage khaki) and Drone (new paradigm, red) Daleks, and later Strategist Daleks (new paradigm, blue), were released as both RC Infrared Battle Daleks and action figures. A pair of Lego-based Daleks were included in the Lego Ideas Doctor Who set, and another appeared in the Lego Dimensions Cyberman Fun-Pack.

Dalek fans have been building life-size reproduction Daleks for many years. The BBC and the Terry Nation estate officially disapprove of self-built Daleks, but usually intervene only if attempts are made to trade unlicensed Daleks and Dalek components commercially, or if it is considered that actual or intended use may damage the BBC's reputation or the Doctor Who/Dalek brand. The Crewe, Cheshire-based company "This Planet Earth" is the only business licensed by the BBC and the Terry Nation Estate to produce full-size TV Dalek replicas commercially, and by Canal+ Image UK Ltd. to produce full-size movie Dalek replicas.
[ { "paragraph_id": 0, "text": "The Daleks (/ˈdɑːlɛks/ DAH-leks) are a fictional extraterrestrial race of extremely xenophobic mutants principally portrayed in the British science fiction television programme Doctor Who. They were conceived by writer Terry Nation and first appeared in the 1963 Doctor Who serial The Daleks, in casings designed by Raymond Cusick.", "title": "" }, { "paragraph_id": 1, "text": "Drawing inspiration from the Nazis, Nation portrayed the Daleks as violent, merciless and pitiless cyborg aliens, completely absent of any emotion other than hate, who demand total conformity to the will of the Dalek with the highest authority, and are bent on the conquest of the universe and the extermination of any other forms of life, including other 'impure' Daleks which are deemed inferior for being different to them. Collectively, they are the greatest enemies of Doctor Who's protagonist, the Time Lord known as \"the Doctor\". During the second year of the original Doctor Who programme (1963–1989), the Daleks developed their own form of time travel. At the beginning of the second Doctor Who TV series that debuted in 2005, it was established that the Daleks had engaged in a Time War against the Time Lords that affected much of the universe and altered parts of history.", "title": "" }, { "paragraph_id": 2, "text": "In the programme's narrative, the planet Skaro suffered a thousand-year war between two societies: the Kaleds and the Thals. During this time-period, many natives of Skaro became badly mutated by fallout from nuclear weapons and chemical warfare. The Kaled government believed in genetic purity and swore to \"exterminate the Thals\" for being inferior. Believing his own society was becoming weak and that it was his duty to create a new master race from the ashes of his people, the Kaled scientist Davros genetically modified several Kaleds into squid-like life-forms he called Daleks, removing \"weaknesses\" such as mercy and sympathy while increasing aggression and survival-instinct. He then integrated them with tank-like robotic shells equipped with advanced technology based on the same life-support system he himself used since being burned and blinded by a nuclear attack. His creations became intent on dominating the universe by enslaving or purging all \"inferior\" non-Dalek life.", "title": "" }, { "paragraph_id": 3, "text": "The Daleks are the show's most popular and famous villains and their returns to the series over the decades have often gained media attention. Their frequent declaration \"Exterminate!\" has become common usage.", "title": "" }, { "paragraph_id": 4, "text": "The Daleks were created by Terry Nation and designed by the BBC designer Raymond Cusick. They were introduced in December 1963 in the second Doctor Who serial, colloquially known as The Daleks. They became an immediate and huge hit with viewers, featuring in many subsequent serials and, in the 1960s, two films. They have become as synonymous with Doctor Who as the Doctor himself, and their behaviour and catchphrases are now part of British popular culture. \"Hiding behind the sofa whenever the Daleks appear\" has been cited as an element of British cultural identity, and a 2008 survey indicated that nine out of ten British children were able to identify a Dalek correctly. In 1999 a Dalek photographed by Lord Snowdon appeared on a postage stamp celebrating British popular culture. 
In 2010, readers of science fiction magazine SFX voted the Dalek as the all-time greatest monster, beating competition including Japanese movie monster Godzilla and J. R. R. Tolkien's Gollum, of The Lord of the Rings.", "title": "Creation" }, { "paragraph_id": 5, "text": "As early as one year after first appearing on Doctor Who, the Daleks had become popular enough to be recognized even by non-viewers. In December 1964 editorial cartoonist Leslie Gilbert Illingworth published a cartoon in the Daily Mail captioned \"THE DEGAULLEK\", caricaturing French President Charles de Gaulle arriving at a NATO meeting as a Dalek with de Gaulle's prominent nose.", "title": "Entry into popular culture" }, { "paragraph_id": 6, "text": "The word \"Dalek\" has entered major dictionaries, including the Oxford English Dictionary, which defines \"Dalek\" as \"a type of robot appearing in 'Dr. Who' [sic], a B.B.C. Television science-fiction programme; hence used allusively.\" English-speakers sometimes use the term metaphorically to describe people, usually authority figures, who act like robots unable to break from their programming. For example, John Birt, the Director-General of the BBC from 1992 to 2000, was called a \"croak-voiced Dalek\" by playwright Dennis Potter in the MacTaggart Lecture at the 1993 Edinburgh Television Festival.", "title": "Entry into popular culture" }, { "paragraph_id": 7, "text": "Externally Daleks resemble human-sized pepper pots with a single mechanical eyestalk mounted on a rotating dome, a gun-mount containing an energy-weapon (\"gunstick\" or \"death ray\") resembling an egg-whisk, and a telescopic manipulator arm usually tipped by an appendage resembling a sink-plunger. Daleks have been known to use their plungers to interface with technology, crush a man's skull by suction, measure the intelligence of a subject, and extract information from a man's mind. Dalek casings are made of a bonded polycarbide material called \"Dalekanium\" by a member of the human resistance in The Dalek Invasion of Earth and the Dalek comics, as well as by the Cult of Skaro in \"Daleks in Manhattan\".", "title": "Physical characteristics" }, { "paragraph_id": 8, "text": "The lower half of a Dalek's shell is covered with hemispherical protrusions, or 'Dalek-bumps', which are shown in the episode \"Dalek\" to be spheres embedded in the casing. Both the BBC-licensed Dalek Book (1964) and The Doctor Who Technical Manual (1983) describe these items as being part of a sensory array, while in the 2005 series episode \"Dalek\" they are integral to a Dalek's forcefield mechanism, which evaporates most bullets and resists most types of energy weapons. The forcefield seems to be concentrated around the Dalek's midsection (where the mutant is located), as normally ineffective firepower can be concentrated on the eyestalk to blind a Dalek. In 2019 episode \"Resolution\" the bumps give way to reveal missile launchers capable of wiping out a military tank with ease. Daleks have a very limited visual field, with no peripheral sight at all, and are relatively easy to hide from in fairly exposed places. Their own energy weapons are capable of destroying them. Their weapons fire a beam that has electrical tendencies, is capable of propagating through water, and may be a form of plasma or electrolaser. 
The eyepiece is a Dalek's most vulnerable spot; impairing its vision often leads to a blind, panicked firing of its weapon while exclaiming \"My vision is impaired; I cannot see!\" Russell T Davies subverted the catchphrase in his 2008 episode \"The Stolen Earth\", in which a Dalek vaporises a paintball that has blocked its vision while proclaiming, \"My vision is not impaired!\"", "title": "Physical characteristics" }, { "paragraph_id": 9, "text": "The creature inside the mechanical casing is soft and repulsive in appearance, and vicious in temperament. The first-ever glimpse of a Dalek mutant, in The Daleks, was a claw peeking out from under a Thal cloak after it had been removed from its casing. The mutants' actual appearance has varied, but often adheres to the Doctor's description of the species in Remembrance of the Daleks as \"little green blobs in bonded polycarbide armour\". In Resurrection of the Daleks a Dalek creature, separated from its casing, attacks and severely injures a human soldier; in Remembrance of the Daleks there are two Dalek factions (Imperial and Renegade), and the creatures inside have a different appearance in each case, one resembling the amorphous creature from Resurrection, the other the crab-like creature from the original Dalek serial. As the creature inside is rarely seen on screen there is a common misconception that Daleks are wholly mechanical robots. In the new series Daleks are retconned to be squid-like in appearance, with small tentacles, one or two eyes, and an exposed brain. In the new series, a Dalek creature separated from its casing is shown capable of inserting a tentacle into the back of a human's neck and controlling them.", "title": "Physical characteristics" }, { "paragraph_id": 10, "text": "Daleks' voices are electronic; when out of its casing the mutant is only able to squeak. Once the mutant is removed the casing itself can be entered and operated by humanoids; for example, in The Daleks, Ian Chesterton (William Russell) enters a Dalek shell to masquerade as a guard as part of an escape plan.", "title": "Physical characteristics" }, { "paragraph_id": 11, "text": "For many years it was assumed that, due to their design and gliding motion, Daleks were unable to climb stairs, and that this provided a simple way of escaping them. A cartoon from Punch pictured a group of Daleks at the foot of a flight of stairs with the caption, \"Well, this certainly buggers our plan to conquer the Universe\". In a scene from the serial Destiny of the Daleks, the Doctor and companions escape from Dalek pursuers by climbing into a ceiling duct. 
The Fourth Doctor calls down, \"If you're supposed to be the superior race of the universe, why don't you try climbing after us?\" The Daleks generally make up for their lack of mobility with overwhelming firepower; a joke among Doctor Who fans is that \"Real Daleks don't climb stairs; they level the building.\" Dalek mobility has improved over the history of the series: in their first appearance, in The Daleks, they were capable of movement only on the conductive metal floors of their city; in The Dalek Invasion of Earth a Dalek emerges from the waters of the River Thames, indicating not only that they had become freely mobile, but that they are amphibious; Planet of the Daleks showed that they could ascend a vertical shaft by means of an external anti-gravity mat placed on the floor; Revelation of the Daleks showed Davros in his life-support chair and one of his Daleks hovering and Remembrance of the Daleks depicted them as capable of hovering up a flight of stairs. Despite this, journalists covering the series frequently refer to the Daleks' supposed inability to climb stairs; characters escaping up a flight of stairs in the 2005 episode \"Dalek\" made the same joke and were shocked when the Dalek began to hover up the stairs after uttering the phrase \"ELEVATE\", in a similar manner to their normal phrase \"EXTERMINATE\". The new series depicts the Daleks as fully capable of flight, even space flight.", "title": "Physical characteristics" }, { "paragraph_id": 12, "text": "The non-humanoid shape of the Dalek did much to enhance the creatures' sense of menace. A lack of familiar reference points differentiated them from the traditional \"bug-eyed monster\" of science fiction, which Doctor Who creator Sydney Newman had wanted the show to avoid. The unsettling Dalek form, coupled with their alien voices, made many believe that the props were wholly mechanical and operated by remote control.", "title": "Physical characteristics" }, { "paragraph_id": 13, "text": "The Daleks were actually controlled from inside by short operators, who had to manipulate their eyestalks, domes and arms, as well as flashing the lights on their heads in sync with the actors supplying their voices. The Dalek cases were built in two pieces; an operator would step into the lower section and then the top would be secured. The operators looked out between the cylindrical louvres just beneath the dome, which were lined with mesh to conceal their faces.", "title": "Physical characteristics" }, { "paragraph_id": 14, "text": "In addition to being hot and cramped, the Dalek casings also muffled external sounds, making it difficult for operators to hear the director or dialogue. John Scott Martin, a Dalek operator from the original series, said that Dalek operation was a challenge: \"You had to have about six hands: one to do the eyestalk, one to do the lights, one for the gun, another for the smoke canister underneath, yet another for the sink plunger. If you were related to an octopus then it helped.\"", "title": "Physical characteristics" }, { "paragraph_id": 15, "text": "For Doctor Who's 21st-century revival the Dalek casings retain the same overall shape and dimensional proportions of previous Daleks, although many details have been redesigned to give the Dalek a heavier and more solid look. 
Changes include a larger, more pointed base; a glowing eyepiece; an all-over metallic-brass finish (specified by Davies); thicker, nailed strips on the \"neck\" section; a housing for the eyestalk pivot; and significantly larger dome lights. The new prop made its on-screen debut in the 2005 episode \"Dalek\". These Dalek casings use a short operator inside the housing while the 'head' and eyestalk are operated via remote control. A third person, Nicholas Briggs, supplies the voice in their various appearances. In the 2010 season, a new, larger model appeared in several colours representing different parts of the Dalek command hierarchy.", "title": "Physical characteristics" }, { "paragraph_id": 16, "text": "Terry Nation's original plan was for the Daleks to glide across the floor. Early versions of the Daleks rolled on nylon castors, propelled by the operator's feet. Although castors were adequate for the Daleks' debut serial, which was shot entirely at the BBC's Lime Grove Studios, for The Dalek Invasion of Earth Terry Nation wanted the Daleks to be filmed on the streets of London. To enable the Daleks to travel smoothly on location, designer Spencer Chapman built the new Dalek shells around miniature tricycles with sturdier wheels, which were hidden by enlarged fenders fitted below the original base. The uneven flagstones of Central London caused the Daleks to rattle as they moved and it was not possible to remove this noise from the final soundtrack. A small parabolic dish was added to the rear of the prop's casing to explain why these Daleks, unlike the ones in their first serial, were not dependent on static electricity drawn up from the floors of the Dalek city for their motive power.", "title": "Physical characteristics" }, { "paragraph_id": 17, "text": "Later versions of the prop had more efficient wheels and were once again simply propelled by the seated operators' feet, but they remained so heavy that when going up ramps they often had to be pushed by stagehands out of camera shot. The difficulty of operating all the prop's parts at once contributed to the occasionally jerky Dalek movements. This problem has largely been eradicated with the advent of the \"new series\" version, as its remotely controlled dome and eyestalk allow the operator to concentrate on the smooth movement of the Dalek and its arms.", "title": "Physical characteristics" }, { "paragraph_id": 18, "text": "The staccato delivery, harsh tone and rising inflection of the Dalek voice were initially developed by two voice actors, Peter Hawkins and David Graham, who varied the pitch and speed of the lines according to the emotion needed. Their voices were further processed electronically by Brian Hodgson at the BBC Radiophonic Workshop. The sound-processing devices used have varied over the decades. In 1963 Hodgson and his colleagues used equalisation to boost the mid-range of the actor's voice, then subjected it to ring modulation with a 30 Hz sine wave. 
The distinctive harsh, grating vocal timbre this produced has remained the pattern for all Dalek voices since (with the exception of those in the 1985 serial Revelation of the Daleks, for which the director, Graeme Harper, deliberately used less distortion).", "title": "Physical characteristics" }, { "paragraph_id": 19, "text": "Besides Hawkins and Graham, other voice actors for the Daleks have included Roy Skelton, who first voiced the Daleks in the 1967 story The Evil of the Daleks and provided voices for five additional Dalek serials including Planet of the Daleks, and for the one-off anniversary special The Five Doctors. Michael Wisher, the actor who originated the role of Dalek creator Davros in Genesis of the Daleks, provided Dalek voices for that same story, as well as for Frontier in Space, Planet of the Daleks, and Death to the Daleks. Other Dalek voice actors include Royce Mills (three stories), Brian Miller (two stories), and Oliver Gilbert and Peter Messaline (one story). John Leeson, who performed the voice of K9 in several Doctor Who stories, and Davros actors Terry Molloy and David Gooderson also contributed supporting voices for various Dalek serials.", "title": "Physical characteristics" }, { "paragraph_id": 20, "text": "Since 2005 the Dalek voice in the television series has been provided by Nicholas Briggs, speaking into a microphone connected to a voice modulator. Briggs had previously provided Dalek and other alien voices for Big Finish Productions audio plays, and continues to do so. In a 2006 BBC Radio interview, Briggs said that when the BBC asked him to do the voice for the new television series, they instructed him to bring his own analogue ring modulator that he had used in the audio plays. The BBC's sound department had changed to a digital platform and could not adequately create the distinctive Dalek sound with their modern equipment. Briggs went as far as to bring the voice modulator to the actors' readings of the scripts.", "title": "Physical characteristics" }, { "paragraph_id": 21, "text": "Manufacturing the props was expensive. In scenes where many Daleks had to appear, some of them would be represented by wooden replicas (Destiny of the Daleks) or life-size photographic enlargements in the early black-and-white episodes (The Daleks, The Dalek Invasion of Earth, and The Power of the Daleks). In stories involving armies of Daleks, the BBC effects team even turned to using commercially available toy Daleks, manufactured by Louis Marx & Co and Herts Plastic Moulders Ltd. Examples of this can be observed in the serials The Power of the Daleks, The Evil of the Daleks, and Planet of the Daleks. Judicious editing techniques also gave the impression that there were more Daleks than were actually available, such as using a split screen in \"The Parting of the Ways\".", "title": "Physical characteristics" }, { "paragraph_id": 22, "text": "Four fully functioning props were commissioned for the first serial \"The Daleks\" in 1963, and were constructed from BBC plans by Shawcraft Engineering. These became known in fan circles as \"Mk I Daleks\". Shawcraft were also commissioned to construct approximately 20 Daleks for the two Dalek movies in 1965 and 1966 (see below). Some of these movie props filtered back to the BBC and were seen in the televised serials, notably The Chase, which was aired before the first movie's debut. 
The remaining props not bought by the BBC were either donated to charity or given away as prizes in competitions.", "title": "Physical characteristics" }, { "paragraph_id": 23, "text": "The BBC's own Dalek props were reused many times, with components of the original Shawcraft \"Mk I Daleks\" surviving right through to their final classic series appearance in 1988. But years of storage and repainting took their toll. By the time of the Sixth Doctor's Revelation of the Daleks new props were being manufactured out of fibreglass. These models were lighter and more affordable to construct than their predecessors. These newer models were slightly bulkier in appearance around the mid-shoulder section, and also had a redesigned skirt section which was more vertical at the back. Other minor changes were made to the design due to these new construction methods, including altering the fender and incorporating the arm boxes, collars, and slats into a single fibreglass moulding. These props were repainted in grey for the Seventh Doctor serial Remembrance of the Daleks and designated as \"Renegade Daleks\"; another redesign, painted in cream and gold, became the \"Imperial Dalek\" faction.", "title": "Physical characteristics" }, { "paragraph_id": 24, "text": "New Dalek props were built for the 21st-century version of Doctor Who. The first, which appeared alone in the 2005 episode \"Dalek\", was built by modelmaker Mike Tucker. Additional Dalek props based on Tucker's master were subsequently built out of fibreglass by Cardiff-based Specialist Models.", "title": "Physical characteristics" }, { "paragraph_id": 25, "text": "Wishing to create an alien creature that did not look like a \"man in a suit\", Terry Nation stated in his script for the first Dalek serial that they should have no legs. He was also inspired by a performance by the Georgian National Ballet, in which dancers in long skirts appeared to glide across the stage. For many of the shows the Daleks were operated by retired ballet dancers wearing black socks while sitting inside the Dalek. Raymond Cusick was given the task of designing the Daleks when Ridley Scott, then a designer for the BBC, proved unavailable after having been initially assigned to their debut serial. According to Jeremy Bentham's Doctor Who—The Early Years (1986), after Nation wrote the script, Cusick was given only an hour to come up with the design for the Daleks and was inspired in his initial sketches by a pepper pot on a table. Cusick himself, however, states that he based it on a man seated in a chair, and used the pepper pot only to demonstrate how it might move.", "title": "Development" }, { "paragraph_id": 26, "text": "In 1964, Nation told a Daily Mirror reporter that the Dalek name came from a dictionary or encyclopaedia volume, the spine of which read \"Dal – Lek\" (or, according to another version, \"Dal – Eks\"). He later admitted that this book and the associated origin of the Dalek name were completely fictitious, and that anyone bothering to check out his story would have found him out. The name had simply rolled off his typewriter. Later, Nation was pleasantly surprised to discover that in Serbo-Croatian the word \"dalek\" means \"far\" or \"distant\".", "title": "Development" }, { "paragraph_id": 27, "text": "Nation grew up during the Second World War and remembered the fear caused by German bombings. 
He consciously based the Daleks on the Nazis, conceiving the species as faceless, authoritarian figures dedicated to conquest, racial purity and complete conformity. The allusion is most obvious in the Dalek stories written by Nation, in particular The Dalek Invasion of Earth (1964) and Genesis of the Daleks (1975).", "title": "Development" }, { "paragraph_id": 28, "text": "Before he wrote the first Dalek serial, Nation was a scriptwriter for the comedian Tony Hancock. The two men had a falling out and Nation either resigned or was fired. Hancock worked on several series proposals, one of which was called From Plip to Plop, a comedic history of the world that would have ended with a nuclear apocalypse, the survivors being reduced to living in dustbin-like robot casings and eating radiation to stay alive. According to Hancock's biographer Cliff Goodwin, when Hancock saw the Daleks he allegedly shouted at the screen, \"That bloody Nation — he's stolen my robots!\"", "title": "Development" }, { "paragraph_id": 29, "text": "The titling of early Doctor Who stories is complex and sometimes controversial. The first Dalek serial is called, variously, The Survivors (the pre-production title and on-screen title used for the serial's second episode), The Mutants (its official title at the time of production and broadcast, later taken by another unrelated story), Beyond the Sun (used on some production documentation), The Dead Planet (the on-screen title of the serial's first episode), or simply The Daleks.", "title": "Development" }, { "paragraph_id": 30, "text": "The instant appeal of the Daleks caught the BBC off-guard, and transformed Doctor Who into a national phenomenon. Children were both frightened and fascinated by the alien look of the monsters, and the idea of 'hiding behind the sofa' became a popular, if inaccurate or exaggerated, meme. The Doctor Who production office was inundated with letters and calls asking about the creatures. Newspaper articles focused attention on the series and the Daleks, further enhancing their popularity.", "title": "Development" }, { "paragraph_id": 31, "text": "Nation jointly owned the intellectual property rights to the Daleks with the BBC, and the money-making concept proved nearly impossible to sell to anyone else, so he was dependent on the BBC wanting to produce stories featuring the creatures. Several attempts to market the Daleks outside the series were unsuccessful. Since Nation's death in 1997, his share of the rights is now administered by his former agent, Tim Hancock.", "title": "Development" }, { "paragraph_id": 32, "text": "Early plans for what eventually became the 1996 Doctor Who television movie included radically redesigned Daleks whose cases unfolded like spiders' legs. The concept for these \"Spider Daleks\" was abandoned, but it was picked up again in several Doctor Who spin-offs.", "title": "Development" }, { "paragraph_id": 33, "text": "When the new series was announced, many fans hoped that the Daleks would return once more to the programme. The Nation estate, however, demanded levels of creative control over the Daleks' appearances and scripts that were unacceptable to the BBC. Eventually the Daleks were cleared to appear in the first series. In 2014, Doctor Who showrunner Steven Moffat denied their numerous appearances since was as a result of a contractual obligation.", "title": "Development" }, { "paragraph_id": 34, "text": "Dalek in-universe history has seen many retroactive changes, which have caused continuity problems. 
When the Daleks first appeared, they were presented as the descendants of the Dals, mutated after a brief nuclear war between the Dal and Thal races 500 years ago. This race of Daleks is destroyed when their power supply is wrecked. However, when they reappear in The Dalek Invasion of Earth, they have conquered Earth in the 22nd century. Later stories saw them develop time travel and a space empire. In 1975, Terry Nation revised the Daleks' origins in Genesis of the Daleks, where the Dals were now called Kaleds (of which \"Daleks\" is an anagram), and the Dalek design was attributed to one man, the paralyzed Kaled chief scientist and evil genius, Davros. Later Big Finish Productions audio plays attempted to explain this retcon by saying that the Skaro word \"dal\" simply means warrior, which is how the Kaleds described themselves, while \"dal-ek\" means \"god.\" According to Genesis of the Daleks, instead of a short nuclear exchange, the Kaled-Thal war was a thousand-year-long war of attrition, fought with nuclear, biological and chemical weapons which caused widespread mutations among the life forms of Skaro. Davros experimented on living Kaled cells to find the ultimate mutated form of the Kaled species, believing his own people had become weak and needed to be replaced by a greater life form. He placed his new Dalek creations in tank-like \"travel machines\" of advanced technology whose design was based on his own life-support chair.", "title": "Fictional history" }, { "paragraph_id": 35, "text": "Genesis of the Daleks marked a new era for the depiction of the species, with most of their previous history either forgotten or barely referred to again. Future stories in the original Doctor Who series, which followed a rough story arc, would also focus more on Davros, much to the dissatisfaction of some fans who felt that the Daleks should take centre stage rather than merely becoming minions of their creator. Davros made his last televised appearance for 20 years in Remembrance of the Daleks, which depicted a civil war between two factions of Daleks. One faction, the \"Imperial Daleks\", were loyal to Davros, who had become their Emperor, whilst the other, the \"Renegade Daleks\", followed a black Supreme Dalek. By the end of the story, armies of both factions have been wiped out and the Doctor has tricked them into destroying Skaro. However, Davros escapes and based on the fact that Daleks possess time travel and were spread throughout the universe, there was still a possibility that many had survived these events.", "title": "Fictional history" }, { "paragraph_id": 36, "text": "The original \"classic\" Doctor Who series ended in 1989. In the 1996 Doctor Who TV-movie (which introduced the Eighth Doctor), Skaro has seemingly been recreated and the Daleks are shown to still rule it. Though the aliens are never seen on-screen, the story shows the Time Lord villain the Master being executed on Skaro as Dalek voices chant \"Exterminate.\" In Eighth Doctor audio plays produced by Big Finish from 2000–2005, Paul McGann reprised his role. The audio play The Time of the Daleks featured the Daleks without Davros and nearly removing William Shakespeare from history. In Terror Firma, the Eighth Doctor met a Dalek faction led by Davros who was devolving more into a Dalek-like life form himself while attempting to create new Daleks from mutated humans of Earth. 
The audio dramas The Apocalypse Element and Dalek Empire also depicted the alien villains invading Gallifrey and then creating their own version of the Time Lord power source known as the Eye of Harmony, allowing the Daleks to rebuild an empire and become a greater threat against the Time Lords and other races that possess time travel.", "title": "Fictional history" }, { "paragraph_id": 37, "text": "A new Doctor Who series premiered in 2005, introducing the Ninth Doctor and revealing that the \"Last Great Time War\" had just ended, resulting in the seeming destruction of the Time Lord society. The episode \"Dalek\", written by Robert Shearman, was broadcast on BBC One on 30 April 2005 and confirmed that the Time War had mainly involved the Daleks fighting the Time Lords, with the Doctor ending the conflict by seemingly destroying both sides, remarking that his own survival was \"not by choice.\" The episode featured a single Dalek who appeared to be the sole survivor of his race from the Time War. Later audio plays by Big Finish Productions expanded on the Time War in different audio drama series such as Gallifrey: Time War, The Eighth Doctor: Time War, The War Doctor, and The War Master.", "title": "Fictional history" }, { "paragraph_id": 38, "text": "A Dalek Emperor returned at the end of the 2005 series, having survived the Time War and then rebuilt the Dalek race with genetic material harvested from human subjects. It saw itself as a god, and the new human-based Daleks were shown worshipping it. The Emperor and this Dalek fleet were destroyed in \"The Parting of the Ways\". The 2006 season finale \"Army of Ghosts\"/\"Doomsday\" featured a squad of four Dalek survivors from the old Empire, known as the Cult of Skaro, composed of Daleks who were tasked with developing imagination to better predict and combat enemies. These Daleks took on names: Jast, Thay, Caan, and their black Dalek leader Sec. The Cult had survived the Time War by escaping into the Void between dimensions. They emerged along with the Genesis Ark, a Time Lord prison vessel containing millions of Daleks, at Canary Wharf due to the actions of the Torchwood Institute and Cybermen from a parallel world. This resulted in a Cyberman-Dalek clash in London, which was resolved when the Tenth Doctor caused both groups to be sucked into the Void. The Cult survived by utilising an \"emergency temporal shift\" to escape.", "title": "Fictional history" }, { "paragraph_id": 39, "text": "These four Daleks - Sec, Jast, Thay and Caan - returned in the two-part story \"Daleks in Manhattan\"/\"Evolution of the Daleks\", in which whilst stranded in 1930s New York, they set up a base in the partially built Empire State Building and attempt to rebuild the Dalek race. To this end, Dalek Sec merges with a human being to become a Human/Dalek hybrid. The Cult then set about creating \"Human Daleks\" by \"formatting\" the brains of a few thousand captured humans so they can have Dalek minds. Dalek Sec, however, becomes more human in personality and alters the plan so the hybrids will be more human like him. The rest of the Cult mutinies. Sec is killed, while Thay and Jast are later wiped out with the hybrids. Dalek Caan, believing it may be the last of its kind now, escapes once more via an emergency temporal shift.", "title": "Fictional history" }, { "paragraph_id": 40, "text": "The Daleks returned in the 2008 season's two-part finale, \"The Stolen Earth\"/\"Journey's End\", accompanied once again by their creator Davros. 
The story reveals that Caan's temporal shift sent him into the Time War, despite the War being \"Time-Locked\". The experience of piercing the Time-Lock resulted in Caan seeing parts of several futures, destroying his sanity in the process. Caan rescued many Time War era Daleks and Davros, who created new Dalek troops using his own body's cells. A red Supreme Dalek leads the new army while keeping Caan and Davros imprisoned on the Dalek flagship, the Crucible. Davros and the Daleks plan to destroy reality itself with a \"reality bomb\". The plan fails due to the interference of Donna Noble, a companion of the Doctor, and Caan, who has been manipulating events to destroy the Daleks after realising the severity of the atrocities they have committed.", "title": "Fictional history" }, { "paragraph_id": 41, "text": "The Daleks returned in the 2010 episode \"Victory of the Daleks\", wherein it is revealed that some Daleks survived the destruction of their army in \"Journey's End\" and retrieved the \"Progenitor\", a tiny apparatus containing 'original' Dalek DNA. The activation of the Progenitor results in the creation of New Paradigm Daleks who deem the Time War era Daleks to be inferior. The new Daleks are organised into different roles (drone, scientist, strategists, supreme and eternal), which are identifiable with colour-coded armour instead of the identification plates under the eyestalk used by their predecessors. They escape the Doctor at the end of the episode via time travel with the intent to rebuild their Empire.", "title": "Fictional history" }, { "paragraph_id": 42, "text": "The Daleks appeared only briefly in subsequent finales \"The Pandorica Opens\"/\"The Big Bang\" (2010) and The Wedding of River Song (2011) as Steven Moffat decided to \"give them a rest\" and stated, \"There's a problem with the Daleks. They are the most famous of the Doctor's adversaries and the most frequent, which means they are the most reliably defeatable enemies in the universe.\" These episodes also reveal that Skaro has been recreated yet again. They next appear in \"Asylum of the Daleks\" (2012), where the Daleks are shown to have greatly increased numbers and now have a Parliament; in addition to the traditional \"modern\" Daleks, several designs from both the original and new series appear, all co-existing rather than judging each other as inferior or outdated (except for those Daleks whose personalities deem them \"insane\" or can no longer battle). All record of the Doctor is removed from their collective consciousness at the end of the episode.", "title": "Fictional history" }, { "paragraph_id": 43, "text": "The Daleks then appear in the 50th Anniversary special \"The Day of the Doctor\", where they are seen being defeated in the Time War. The same special reveals that many Time Lords survived the war since the Doctor found a way to transfer their planet Gallifrey out of phase with reality and into a pocket dimension. In \"The Time of the Doctor\", the Daleks are one of the races that besieges Trenzalore in an attempt to stop the Doctor from releasing the Time Lords from imprisonment. After converting Tasha Lem into a Dalek puppet, they regain knowledge of the Doctor.", "title": "Fictional history" }, { "paragraph_id": 44, "text": "The Twelfth Doctor's first encounter with the Daleks is in his second full episode, \"Into the Dalek\" (2014), where he encounters a damaged Dalek he names 'Rusty.' 
Connecting to the Doctor's love of the universe and his hatred of the Daleks, Rusty assumes a mission to destroy other Daleks. In \"The Magician's Apprentice\"/\"The Witch's Familiar\" (2015), the Doctor is summoned to Skaro, where he learns Davros has rebuilt the Dalek Empire. In \"The Pilot\" (2017), the Doctor briefly visits a battle during the Dalek-Movellan war.", "title": "Fictional history" }, { "paragraph_id": 45, "text": "The Thirteenth Doctor encountered a Dalek in a New Year's Day episode, \"Resolution\" (2019). A Dalek mutant, separated from its armoured casing, takes control of a human in order to build a new travel device for itself and summon more Daleks to conquer Earth. This Dalek is cloned by a scientist in \"Revolution of the Daleks\" (2021) and attempts to take over Earth using further clones, but the clones are killed by other Daleks for perceived genetic impurity. The Dalek army is later sent by the Doctor into the \"void\" between worlds to be destroyed, using a spare TARDIS she recently acquired on Gallifrey. After cameo appearances depicting them as one of several villains trying to take advantage of \"the Flux\" event tearing through space-time in series 13, the Daleks returned in the first 2022 special, \"Eve of the Daleks\". In the episode, a team of Dalek Executioners is dispatched by High Command to avenge the Dalek War Fleet destroyed by the Doctor in the series 13 finale \"The Vanquishers\", only for a time loop established by the TARDIS to save the Doctor's life and give her a chance to destroy the executioners instead. The Daleks later appeared alongside the Cybermen as allies of the Master in \"The Power of the Doctor\" as part of a plot to finally destroy their nemesis, but the alliance is defeated by the Doctor and new and old companions.", "title": "Fictional history" }, { "paragraph_id": 46, "text": "In 2023, the origin of the iconic plunger-like appendages used by Daleks was retroactively established as being from the Fourteenth Doctor's TARDIS.", "title": "Fictional history" }, { "paragraph_id": 47, "text": "Daleks have little, if any, individual personality, ostensibly no emotions other than hatred and anger, and a strict command structure in which they are conditioned to obey superiors' orders without question. Dalek speech is characterised by repeated phrases, and by orders given to themselves and to others. Unlike the stereotypical emotionless robots often found in science fiction, Daleks are often angry; author Kim Newman has described the Daleks as behaving \"like toddlers in perpetual hissy fits\", gloating when in power and flying into a rage when thwarted. They tend to be excitable and will repeat the same word or phrase over and over again in heightened emotional states, most famously \"Exterminate! Exterminate!\"", "title": "Dalek culture" }, { "paragraph_id": 48, "text": "Daleks are extremely aggressive, and seem driven by an instinct to attack. This instinct is so strong that Daleks have been depicted fighting the urge to kill or even attacking when unarmed. The Fifth Doctor characterises this impulse by saying, \"However you respond [to Daleks] is seen as an act of provocation.\" The fundamental feature of Dalek culture and psychology is an unquestioned belief in the superiority of the Dalek race, and their default directive is to destroy all non-Dalek life-forms.
Other species are either to be exterminated immediately or enslaved and then exterminated once they are no longer useful.", "title": "Dalek culture" }, { "paragraph_id": 49, "text": "The Dalek obsession with their own superiority is illustrated by the schism between the Renegade and Imperial Daleks seen in Revelation of the Daleks and Remembrance of the Daleks: the two factions each consider the other to be a perversion despite the relatively minor differences between them. This intolerance of any \"contamination\" within themselves is also shown in \"Dalek\", The Evil of the Daleks and the Big Finish Productions audio play The Mutant Phase. This superiority complex is the basis of the Daleks' ruthlessness and lack of compassion. It is shown at its most extreme in \"Victory of the Daleks\", where the new, pure Daleks destroy their creators, the impure Daleks, with the latter's consent. It is nearly impossible to negotiate or reason with a Dalek; this single-mindedness makes them dangerous and not to be underestimated. The Eleventh Doctor (Matt Smith) is later puzzled in \"Asylum of the Daleks\" as to why the Daleks do not simply kill the sequestered ones that have \"gone wrong\". Although the Asylum is subsequently obliterated, the Prime Minister of the Daleks explains that \"it is offensive to us to destroy such divine hatred\", and the Doctor is sickened at the revelation that hatred is actually considered beautiful by the Daleks.", "title": "Dalek culture" }, { "paragraph_id": 50, "text": "Dalek society is depicted as one of extreme scientific and technological advancement; the Third Doctor states that \"it was their inventive genius that made them one of the greatest powers in the universe.\" However, their reliance on logic and machinery is also a strategic weakness which they recognise, and thus they use more emotion-driven species as agents to compensate for these shortcomings.", "title": "Dalek culture" }, { "paragraph_id": 51, "text": "Although the Daleks are not known for their regard for due process, they have taken at least two enemies back to Skaro for a \"trial\", rather than killing them immediately. The first was their creator, Davros, in Revelation of the Daleks, and the second was the renegade Time Lord known as the Master in the 1996 television movie. The reasons for the Master's trial, and why the Doctor would be allowed to retrieve the Master's remains, have never been explained on screen. The Doctor Who Annual 2006 implies that the trial may have been due to a treaty signed between the Time Lords and the Daleks. The framing device for the I, Davros audio plays is a Dalek trial to determine if Davros should be the Daleks' leader once more.", "title": "Dalek culture" }, { "paragraph_id": 52, "text": "Spin-off novels contain several tongue-in-cheek mentions of Dalek poetry, and an anecdote about an opera based upon it, which was lost to posterity when the entire cast was exterminated on the opening night. Two stanzas are given in the novel The Also People by Ben Aaronovitch. In an alternative timeline portrayed in the Big Finish Productions audio adventure The Time of the Daleks, the Daleks show a fondness for the works of Shakespeare.
A similar idea was satirised by comedian Frankie Boyle in the BBC comedy quiz programme Mock the Week; he gave the fictional Dalek poem \"Daffodils; EXTERMINATE DAFFODILS!\" as an \"unlikely line to hear in Doctor Who\".", "title": "Dalek culture" }, { "paragraph_id": 53, "text": "Because the Doctor has defeated the Daleks so often, he has become their collective arch-enemy and they have standing orders to capture or exterminate him on sight. In later fiction, the Daleks know the Doctor as \"Ka Faraq Gatri\" (\"Bringer of Darkness\" or \"Destroyer of Worlds\") and \"The Oncoming Storm\". Both the Ninth Doctor (Christopher Eccleston) and Rose Tyler (Billie Piper) suggest that the Doctor is one of the few beings the Daleks fear. In \"Doomsday\", Rose notes that while the Daleks see the extermination of five million Cybermen as \"pest control\", \"one Doctor\" visibly unnerves them (to the point that they physically recoil). To his indignant surprise, in \"Asylum of the Daleks\", the Eleventh Doctor (Matt Smith) learns that the Daleks have designated him as \"The Predator\".", "title": "Dalek culture" }, { "paragraph_id": 54, "text": "A rel is a Dalek and Kaled unit of measurement. It was usually a measurement of time, with a duration of slightly more than one second, as mentioned in \"Doomsday\", \"Evolution of the Daleks\" and \"Journey's End\", counting down to the ignition of the reality bomb. (One Earth minute most likely equals about 50 rels.) However, in some comic books it was also used as a unit of velocity. Finally, in some cases it was used as a unit of hydroelectric energy (not to be confused with a vep, the unit used to measure artificial sunlight).", "title": "Dalek culture" }, { "paragraph_id": 55, "text": "The rel was first used in the non-canonical feature film Daleks – Invasion Earth: 2150 A.D., and soon afterwards appeared in early Doctor Who comic books.", "title": "Dalek culture" }, { "paragraph_id": 56, "text": "Two Doctor Who movies starring Peter Cushing featured the Daleks as the main villains: Dr. Who and the Daleks, and Daleks – Invasion Earth: 2150 A.D., based on the television serials The Daleks and The Dalek Invasion of Earth, respectively. The movies were not direct remakes; for example, the Doctor in the Cushing films was a human inventor called \"Dr. Who\" who built a time-travelling device named Tardis, instead of a mysterious alien who stole a device called \"the TARDIS\".", "title": "Licensed appearances" }, { "paragraph_id": 57, "text": "Four books focusing on the Daleks were published in the 1960s. The Dalek Book (1964, written by Terry Nation and David Whitaker), The Dalek World (1965, written by Nation and Whitaker) and The Dalek Outer Space Book (1966, by Nation and Brad Ashton) were all hardcover books formatted like annuals, containing text stories and comics about the Daleks, along with fictional information (sometimes based on the television serials, other times made up for the books). Nation also published The Dalek Pocketbook and Space-Travellers Guide, which collected articles and features treating the Daleks as if they were real. Four more annuals were published in the 1970s by World Distributors under the title Terry Nation's Dalek Annual (with cover dates 1976–1979, but published 1975–1978). Two original novels by John Peel, War of the Daleks (1997) and Legacy of the Daleks (1998), were released as part of the Eighth Doctor Adventures series of Doctor Who novels.
A novella, The Dalek Factor by Simon Clark, was published in 2004, and two books featuring the Daleks and the Tenth Doctor (I am a Dalek by Gareth Roberts, 2006, and Prisoner of the Daleks by Trevor Baxendale, 2009) have been released as part of the New Series Adventures.", "title": "Licensed appearances" }, { "paragraph_id": 58, "text": "Nation authorised the publication of the comic strip The Daleks in the comic TV Century 21 in 1965. The weekly one-page strip, written by Whitaker but credited to Nation, featured the Daleks as protagonists and \"heroes\", and continued for two years, from the creation of the mechanised Daleks by the humanoid Dalek scientist Yarvelling to their eventual discovery, in the ruins of a crashed space-liner, of the co-ordinates for Earth, which they proposed to invade. Although much of the material in these strips was directly contradicted by what was later shown on television, some concepts, like the Daleks using humanoid duplicates and the design of the Dalek Emperor, did show up later on in the programme.", "title": "Licensed appearances" }, { "paragraph_id": 59, "text": "At the same time, a Doctor Who strip was also being published in TV Comic. Initially, the strip did not have the rights to use the Daleks, so the First Doctor battled the \"Trods\" instead, cone-shaped robotic creatures that ran on static electricity. By the time the Second Doctor appeared in the strip in 1967, the rights issues had been resolved, and the Daleks began making appearances starting in The Trodos Ambush (TVC #788-#791), where they massacred the Trods. The Daleks also made appearances in the Third Doctor-era Dr. Who comic strip that featured in the combined Countdown/TV Action comic during the early 1970s.", "title": "Licensed appearances" }, { "paragraph_id": 60, "text": "An animated series called Daleks!, which consists of five 10-minute episodes, was released on the official Doctor Who YouTube channel in 2020.", "title": "Licensed appearances" }, { "paragraph_id": 61, "text": "Other licensed appearances have included a number of stage plays (see Stage plays below) and television adverts for Wall's \"Sky Ray\" ice lollies (1966), Weetabix breakfast cereal (1977), Kit Kat chocolate bars (2001), and the ANZ Bank (2005). In 2003, Daleks also appeared in UK billboard ads for Energizer batteries, alongside the slogan \"Are You Power Mad?\"", "title": "Licensed appearances" }, { "paragraph_id": 62, "text": "Daleks have made cameo appearances in television programmes and films unrelated to Doctor Who from the 1960s to the present day.", "title": "Other appearances" }, { "paragraph_id": 63, "text": "Daleks have been referred to or alluded to in many musical compositions.", "title": "Other appearances" }, { "paragraph_id": 64, "text": "Licensed Doctor Who games featuring Daleks include 1984's The Key to Time, a text adventure game for the ZX Spectrum. The first graphical game to feature Daleks was the eponymous turn-based title released by Johan Strandberg for the Macintosh in the same year. Daleks also appeared in minor roles, or as thinly disguised versions, in other minor games throughout the 1980s, but did not feature as central adversaries in a licensed game until 1992, when Admiral Software published Dalek Attack. The game allowed the player to play various Doctors or companions, running them through several environments to defeat the Daleks.
In 1997, the BBC released a PC game entitled Destiny of the Doctors, which also featured the Daleks among other adversaries.", "title": "Other appearances" }, { "paragraph_id": 65, "text": "One authorised online game is The Last Dalek, a Flash game created by New Media Collective for the BBC. It is based on the 2005 episode \"Dalek\" and can be played at the official BBC Doctor Who website. The Doctor Who website also features another game, Daleks vs Cybermen (also known as Cyber Troop Control Interface), based on the 2006 episode \"Doomsday\"; in this game, the player controls troops of Cybermen which must fight Daleks as well as Torchwood Institute members.", "title": "Other appearances" }, { "paragraph_id": 66, "text": "On 5 June 2010, the BBC released the first of four official computer games on its website, Doctor Who: The Adventure Games, which are intended as part of the official TV series adventures. In the first of these, \"The City of the Daleks\", the Eleventh Doctor and Amy Pond must stop the Daleks rewriting time and reviving Skaro, their home planet.", "title": "Other appearances" }, { "paragraph_id": 67, "text": "They also appear in the Nintendo DS and Wii games Doctor Who: Evacuation Earth and Doctor Who: Return to Earth.", "title": "Other appearances" }, { "paragraph_id": 68, "text": "Several Daleks appear in the iOS game The Mazes of Time as rare enemies the player faces, appearing only in the first and final levels.", "title": "Other appearances" }, { "paragraph_id": 69, "text": "The Daleks also appear in Lego Dimensions, where they ally themselves with Lord Vortech and possess the size-altering scale keystone. When Batman, Gandalf, and Wyldstyle encounter them, they assume that they are allies of the Doctor and attack the trio. The main characters continue to fight the Daleks until they call the Doctor to save them. A Dalek saucer also appears in the level based on Metropolis, in which the top of it serves as the stage for the boss battle against Sauron and includes Daleks among the various enemies summoned to attack the player. A Dalek is also among the elements summoned by the player to deal with the obstacles in the Portal 2 story level.", "title": "Other appearances" }, { "paragraph_id": 70, "text": "The Daleks also appear in Doctor Who: The Edge of Time, a virtual reality game for PlayStation VR, Oculus Rift, Oculus Quest, HTC Vive, and Vive Cosmos, which was released in November 2019.", "title": "Other appearances" }, { "paragraph_id": 71, "text": "The Daleks are a licensed costume in Fall Guys.", "title": "Other appearances" }, { "paragraph_id": 72, "text": "At the 1966 Conservative Party conference in Blackpool, delegate Hugh Dykes publicly compared the Labour government's Defence Secretary Denis Healey to the creatures. \"Mr. Healey is the Dalek of defence, pointing a metal finger at the armed forces and saying 'I will eliminate you'.\"", "title": "Other appearances" }, { "paragraph_id": 73, "text": "In a British Government Parliamentary Debate in the House of Commons on 12 February 1968, the then Minister of Technology Tony Benn mentioned the Daleks during a reply to a question from the Labour MP Hugh Jenkins concerning the Concorde aircraft project. In the context of the dangers of solar flares, he said, \"Because we are exploring the frontiers of technology, some people think Concorde will be avoiding solar flares like Dr. Who avoiding Daleks.
It is not like this at all.\"", "title": "Other appearances" }, { "paragraph_id": 74, "text": "Australian Labor Party luminary Robert Ray described his right-wing Labor Unity faction successor, Victorian Senator Stephen Conroy, and his Socialist Left faction counterpart, Kim Carr, as \"factional Daleks\" during a 2006 Australian Fabian Society lunch in Sydney.", "title": "Other appearances" }, { "paragraph_id": 75, "text": "During a 2021 House of Commons debate about the retention of dentists in rural areas of the United Kingdom during the COVID-19 pandemic, the voice of Conservative MP Scott Mann of North Cornwall, while on a video link, became distorted due to a malfunction with his audio feed. Deputy Speaker of the House Nigel Evans interrupted his broadcast amidst chuckles from other MPs, saying, \"Scott, you sound like a Dalek and I don't mean that unkindly. There's clearly a communications problem.\" Mann later returned to apologise.", "title": "Other appearances" }, { "paragraph_id": 76, "text": "Daleks have been used in political cartoons to caricature: Douglas Hurd, as the \"Douglek\", in Private Eye's Dan Dire – Pilot of the Future; Tony Benn; John Birt; Tony Blair (also portrayed as Davros); Alec Douglas-Home; Charles de Gaulle; and Mark Thompson.", "title": "Other appearances" }, { "paragraph_id": 77, "text": "Daleks have appeared on magazine covers promoting Doctor Who since the \"Dalekmania\" fad of the 1960s. Radio Times has featured the Daleks on its cover several times, beginning with the 21–27 November 1964 issue, which promoted The Dalek Invasion of Earth. Other magazines also used Daleks to attract readers' attention, including Girl Illustrated.", "title": "Other appearances" }, { "paragraph_id": 78, "text": "In April 2005, Radio Times created a special cover to commemorate both the return of the Daleks to the screen in \"Dalek\" and the forthcoming general election. This cover recreated a scene from The Dalek Invasion of Earth in which the Daleks were seen crossing Westminster Bridge, with the Houses of Parliament in the background. The cover text read \"VOTE DALEK!\" In a 2008 contest sponsored by the Periodical Publishers Association, this cover was voted the best British magazine cover of all time. In 2013 it was voted \"Cover of the century\" by the Professional Publishers Association. The 2010 United Kingdom general election campaign also prompted a collector's set of three near-identical covers of the Radio Times on 17 April, with exactly the same headline but with the newly redesigned Daleks in their primary colours representing the three main political parties: red for Labour, blue for the Conservatives and yellow for the Liberal Democrats.", "title": "Other appearances" }, { "paragraph_id": 79, "text": "Daleks have been the subject of many parodies, including Spike Milligan's \"Pakistani Dalek\" sketch in his comedy series Q, and Victor Lewis-Smith's \"Gay Daleks\". Occasionally the BBC has used the Daleks to parody other subjects: in 2002, BBC Worldwide published the Dalek Survival Guide, a parody of The Worst-Case Scenario Survival Handbooks. Comedian Eddie Izzard has an extended stand-up routine about Daleks, which was included in her 1993 stand-up show \"Live at the Ambassadors\". The Daleks made two brief appearances in a pantomime version of Aladdin at the Birmingham Hippodrome, which starred Torchwood star John Barrowman in the lead role.
A joke-telling robot with a Dalek-like booming voice, loosely modelled on the Dalek, also appeared in the South Park episode \"Funnybot\", even spouting \"exterminate\". A Dalek can also be seen in the background at the 1:13 and 1:17 marks in the Sam & Max animated series episode \"The Trouble with Gary\". In the Community parody of Doctor Who called Inspector Spacetime, they are referred to as Blorgons.", "title": "Other appearances" }, { "paragraph_id": 80, "text": "The BBC approached Walter Tuckwell, a New Zealand-born entrepreneur who was handling product merchandising for other BBC shows, and asked him to do the same for the Daleks and Doctor Who. Tuckwell created a glossy sales brochure that sparked off a Dalek craze, dubbed \"Dalekmania\" by the press, which peaked in 1965.", "title": "Merchandising" }, { "paragraph_id": 81, "text": "The first Dalek toys were released in 1965 as part of the \"Dalekmania\" craze. These included battery-operated, friction-drive and \"Rolykins\" Daleks from Louis Marx & Co., as well as models from Cherilea, Herts Plastic Moulders Ltd and Cowan, de Groot Ltd, and \"Bendy\" Daleks made by Newfeld Ltd. At the height of the Daleks' popularity, in addition to toy replicas, there were Dalek board games and activity sets, slide projectors for children and even Dalek playsuits made from PVC. Collectible cards, stickers, toy guns, music singles, punching bags and many other items were also produced in this period. Dalek toys released in the 1970s included a new version of Louis Marx's battery-operated Dalek (1974), a \"talking Dalek\" from Palitoy (1975) and a Dalek board game (1975) and Dalek action figure (1977), both from Denys Fisher. From 1988 to 2002, Dapol released a line of Dalek toys in conjunction with its Doctor Who action figure series.", "title": "Merchandising" }, { "paragraph_id": 82, "text": "In 1984, Sevans Models released a self-assembly model kit for a one-fifth scale Dalek, which Doctor Who historian David Howe has described as \"the most accurate model of a Dalek ever to be released\". Comet Miniatures released two Dalek self-assembly model kits in the 1990s.", "title": "Merchandising" }, { "paragraph_id": 83, "text": "In 1992, Bally released a Doctor Who pinball machine which prominently featured the Daleks both as a primary playfield feature and as a motorised toy in the topper.", "title": "Merchandising" }, { "paragraph_id": 84, "text": "Bluebird Toys produced a Dalek-themed Doctor Who playset in 1998.", "title": "Merchandising" }, { "paragraph_id": 85, "text": "Beginning in 2000, Product Enterprise (who later operated under the names \"Iconic Replicas\" and \"Sixteen 12 Collectibles\") produced various Dalek toys. These included one-inch (2.5 cm) Dalek \"Rolykins\" (based on the Louis Marx toy from 1965); push-along \"talking\" 7-inch (17.8 cm) Daleks; 2½-inch (6.4 cm) Dalek \"Rollamatics\" with a pull-back and release mechanism; and a one-foot (30.5 cm) remote control Dalek.", "title": "Merchandising" }, { "paragraph_id": 86, "text": "In 2005 Character Options was granted the \"Master Toy License\" for the revived Doctor Who series, including the Daleks. Their product lines have included 5-inch (12.7 cm) static/push-along and radio controlled Daleks, radio controlled 12-inch (30.5 cm) versions and radio controlled 18-inch (45.7 cm) / 1:3 scale variants. The 12-inch remote control Dalek won the 2005 award for Best Electronic Toy of the Year from the Toy Retailers Association.
Some versions of the 18-inch model included semi-autonomous and voice-command features. In 2008, the company acquired a license to produce 5-inch (12.7 cm) Daleks of the various \"classic series\" variants. For the fifth revived series, Ironside Daleks (post-Time War Daleks in camouflage khaki) and Drone Daleks (new paradigm, red) and, later, Strategist Daleks (new paradigm, blue) were released as both RC Infrared Battle Daleks and action figures.", "title": "Merchandising" }, { "paragraph_id": 87, "text": "A pair of Lego-based Daleks were included in the Lego Ideas Doctor Who set, and another appeared in the Lego Dimensions Cyberman Fun-Pack.", "title": "Merchandising" }, { "paragraph_id": 88, "text": "Dalek fans have been building life-size reproduction Daleks for many years. The BBC and the Terry Nation estate officially disapprove of self-built Daleks, but usually intervene only if attempts are made to trade unlicensed Daleks and Dalek components commercially, or if it is considered that actual or intended use may damage the BBC's reputation or the Doctor Who/Dalek brand. The Crewe, Cheshire-based company \"This Planet Earth\" is the only business which has been licensed by the BBC and the Terry Nation Estate to produce full-size TV Dalek replicas, and by Canal+ Image UK Ltd. to produce full-size Movie Dalek replicas commercially.", "title": "Merchandising" } ]
The Daleks are a fictional extraterrestrial race of extremely xenophobic mutants principally portrayed in the British science fiction television programme Doctor Who. They were conceived by writer Terry Nation and first appeared in the 1963 Doctor Who serial The Daleks, in casings designed by Raymond Cusick. Drawing inspiration from the Nazis, Nation portrayed the Daleks as violent, merciless and pitiless cyborg aliens, completely devoid of any emotion other than hate, who demand total conformity to the will of the Dalek with the highest authority, and are bent on the conquest of the universe and the extermination of any other forms of life, including other 'impure' Daleks which are deemed inferior for being different to them. Collectively, they are the greatest enemies of Doctor Who's protagonist, the Time Lord known as "the Doctor". During the second year of the original Doctor Who programme (1963–1989), the Daleks developed their own form of time travel. At the beginning of the revived Doctor Who series that debuted in 2005, it was established that the Daleks had engaged in a Time War against the Time Lords that affected much of the universe and altered parts of history. In the programme's narrative, the planet Skaro suffered a thousand-year war between two societies: the Kaleds and the Thals. During this period, many natives of Skaro became badly mutated by fallout from nuclear weapons and chemical warfare. The Kaled government believed in genetic purity and swore to "exterminate the Thals" for being inferior. Believing his own society was becoming weak and that it was his duty to create a new master race from the ashes of his people, the Kaled scientist Davros genetically modified several Kaleds into squid-like life-forms he called Daleks, removing "weaknesses" such as mercy and sympathy while increasing aggression and survival instinct. He then integrated them with tank-like robotic shells equipped with advanced technology based on the same life-support system he himself had used since being burned and blinded by a nuclear attack. His creations became intent on dominating the universe by enslaving or purging all "inferior" non-Dalek life. The Daleks are the show's most popular and famous villains, and their returns to the series over the decades have often gained media attention. Their frequent declaration "Exterminate!" has entered common usage.
2001-07-28T16:55:43Z
2023-12-14T16:51:14Z
[ "Template:Use British English", "Template:Cite episode", "Template:Refend", "Template:Wikiquote", "Template:'", "Template:Citation needed", "Template:Cite video", "Template:Doctor Who", "Template:Infobox fictional race", "Template:Respell", "Template:Cite news", "Template:Refbegin", "Template:Wiktionary", "Template:Cite book", "Template:IPAc-en", "Template:Multiple images", "Template:Webarchive", "Template:Nowrap", "Template:Cite web", "Template:Reflist", "Template:Cite magazine", "Template:Cite comic", "Template:Cite journal", "Template:TardisIndexFile", "Template:Short description", "Template:Sic", "Template:Main", "Template:Subject bar", "Template:Multiple image", "Template:See also", "Template:Doctor Who characters", "Template:Dalek stories", "Template:About", "Template:Use dmy dates", "Template:Portal" ]
https://en.wikipedia.org/wiki/Dalek
9,141
Davy Jones (musician)
David Thomas Jones (30 December 1945 – 29 February 2012) was an English actor and singer. Best known as a member of the band The Monkees and a co-star of the TV series The Monkees (1966–1968), Jones was considered a teen idol. Aside from his work on The Monkees TV show, Jones's acting credits include a Tony-nominated performance as the Artful Dodger in the original London and Broadway productions of Oliver! and a guest-starring role in a hallmark episode of The Brady Bunch television show, a role he later reprised in the parody film. David Thomas Jones was born on 30 December 1945 in Openshaw, Manchester, England, to Harry and Doris Jones. He had three sisters: Hazel, Lynda and Beryl. Jones' mother died from emphysema when he was 14 years of age. Jones' television acting debut was in the British television soap opera Coronation Street, in which he appeared as Colin Lomax, grandson of the regular character Ena Sharples, for one episode on 6 March 1961. He also appeared in the BBC police series Z-Cars. Following the death of his mother, Jones rejected acting in favour of becoming a jockey, commencing an apprenticeship with Newmarket trainer Basil Foster. He dropped out of secondary school to begin working in that field, but this career was short-lived. Even though Foster believed Jones would be successful as a jockey, he encouraged his young protégé to take a role as the Artful Dodger in a production of Oliver! in London's West End. When approached by a friend who worked in a West End theatre during the show's casting, Foster replied, "I've got the kid." Jones's portrayal brought him great acclaim. He played the role in London and then on Broadway, and was nominated for a Tony Award. On 9 February 1964, Jones appeared on The Ed Sullivan Show with Georgia Brown, who was playing Nancy in the Broadway production of Oliver!. This was the same episode of the show in which the Beatles made their first appearance on U.S. television. Jones said of that night, "I watched the Beatles from the side of the stage, I saw the girls going crazy, and I said to myself, this is it, I want a piece of that." Following his Ed Sullivan appearance, Jones signed a contract with Ward Sylvester of Screen Gems (at that time the television division of Columbia Pictures). A pair of U.S. television appearances followed, as Jones received screen time in episodes of Ben Casey and The Farmer's Daughter. Jones debuted on the Billboard Hot 100 in the week of 14 August 1965, with the single "What Are We Going To Do?", which peaked at number 93. The 19-year-old singer was signed to Colpix Records, a label owned by Columbia. His debut album, David Jones (Colpix CP493), followed soon afterward on the same label. From 1966 to 1970, Jones was a member of the Monkees, a pop-rock band formed expressly for a television show of the same name. With Screen Gems producing the series, Jones was shortlisted for auditions, as he was the only Monkee already signed to a deal with the studio, but he still had to meet the standards of producers Bob Rafelson and Bert Schneider. Jones sang lead vocals on many of the Monkees' recordings, including "I Wanna Be Free" and "Daydream Believer". The DVD release of the first season of the show contained commentary from the various bandmates.
In Peter Tork's commentary, he stated that Jones was a good drummer and that, had the live performance line-up been based solely on playing ability, it ought to have been Tork on guitar, Mike Nesmith on bass, and Jones on drums, with Micky Dolenz taking the fronting role, rather than as it was done (with Nesmith on guitar, Tork on bass, and Dolenz on drums). Like Tork, Jones was a multi-instrumentalist: despite mostly playing tambourine or maracas, he would fill in on bass when Tork played keyboards (and vice versa), and on drums for Dolenz when the Monkees performed live concerts. The Monkees officially disbanded in 1970. The NBC television series The Monkees was popular and remained in syndication. Bell Records, then having a string of hits with The Partridge Family, signed Jones to a somewhat inflexible solo record contract in 1971. Jones was not allowed to choose his songs or producer, resulting in several lacklustre and aimless records. His second solo album, Davy Jones (1971), was notable for the song "Rainy Jane", which reached No. 52 in the Billboard charts. To promote the album, Jones performed "Girl" on an episode of The Brady Bunch entitled "Getting Davy Jones". Although the single sold poorly, the popularity of Jones' appearance on the show resulted in "Girl" becoming his best-remembered solo hit, even though it was not included in the album. The final single, "I'll Believe In You"/"Road to Love", was poorly received. Thanks in part to reruns of The Monkees on Saturday mornings and in syndication, The Monkees Greatest Hits charted in 1976. The LP, issued by Arista (a subsidiary of Screen Gems), was actually a repackaging of a 1972 compilation LP called Refocus that had been issued by Arista's previous label imprint, Bell Records, also owned by Screen Gems. Dolenz and Jones took advantage of this, joining ex-Monkees songwriters Tommy Boyce and Bobby Hart to tour the United States. From 1975 to 1977, as the "Golden Hits of The Monkees" show ("The Guys who Wrote 'Em and the Guys who Sang 'Em!"), they successfully performed in smaller venues such as state fairs and amusement parks as well as making stops in Japan, Thailand, and Singapore (although they were forbidden from using the "Monkees" name, as it was owned by Screen Gems at the time). They also released an album of new material, appropriately credited to Dolenz, Jones, Boyce & Hart; a live album entitled Concert in Japan was also recorded in 1976, but was not released until 1996. Despite his initially high profile after the Monkees disbanded, Jones struggled to establish himself as a solo music artist. Glenn A. Baker, author of Monkeemania: The True Story of the Monkees, commented in 1986 that "for an artist as versatile and confident as (Davy) Jones, the relative failure of his post-Monkees activities is puzzling. For all his cocky predictions to the press about his future plans, Davy fell into a directionless heap when left to his own devices." Jones returned to theatre several times after the Monkees disbanded. In 1977, he performed with former bandmate Micky Dolenz in a stage production of the Harry Nilsson musical The Point! in London at the Mermaid Theatre, playing and singing the starring role of "Oblio" to Dolenz' roles as the "Count's Kid" and the "Leafman" (according to the CD booklet). An original cast recording was made and released.
The comedic chemistry of Jones and Dolenz proved so strong that the show was revived in 1978, with Nilsson inserting additional comedy for the two, plus two more songs, one of them ("Gotta Get Up") sung by Jones and Dolenz. The show was considered so good that a further revival was planned for 1979, but it proved cost-prohibitive (according to the CD booklet for Harry Nilsson's The Point). Jones also appeared in several productions of Oliver! as the Artful Dodger, and in 1989 toured the US portraying "Fagin". Jones appeared in two episodes each of Love, American Style and My Two Dads. Jones also appeared in animated form as himself in 1972 in an hour-long episode of The New Scooby-Doo Movies. A Monkees television show marathon ("Pleasant Valley Sunday") broadcast on 23 February 1986 by MTV resulted in a wave of Monkeemania not seen since the band's heyday. Jones reunited with Dolenz and Peter Tork from 1986 to 1989 to celebrate the band's renewed success and promote the 20th anniversary of the band. A new top 20 hit, "That Was Then, This Is Now", was released (though Jones did not perform on the song), as well as an album, Pool It! In 1996, Jones reunited with Dolenz, Tork and Michael Nesmith to celebrate the 30th anniversary of the Monkees. The band released a new album entitled Justus, the first album since 1967's Headquarters that featured the band members performing all instrumental duties. It was the last time all four Monkees performed together. Other television appearances include Sledge Hammer!, Boy Meets World, Hey Arnold!, The Single Guy (where he is mistaken for Dudley Moore) and Sabrina, the Teenage Witch, in which he sang "Daydream Believer" and "(I'll) Love You Forever" to Sabrina Spellman (played by Melissa Joan Hart). In 1995, Jones acted in a notable episode of the sitcom Boy Meets World. The continued popularity of Jones' 1971 Brady Bunch appearance led to his being cast as himself in The Brady Bunch Movie (1995). Jones sang his signature solo hit "Girl", with a grunge band providing backing, this time with middle-aged women swooning over him. Micky Dolenz and Peter Tork also appeared alongside Jones as judges. On 2 August 1996, while The Monkees were on their 30th-anniversary tour in New England, Jones was interviewed on the "Sports Break" radio show on WBPS 890-AM in Boston by host Roland Regan about his early days as a jockey and amateur boxer in England and about how he now stayed in shape by jogging and playing in celebrity tennis tournaments. On 21 June 1997, during a concert at the Los Angeles Coliseum, Jones joined U2's The Edge onstage for a karaoke performance of "Daydream Believer", which had become a fixture of the band's set during that year's PopMart Tour. In 2001, Jones released Just Me, an album of his own songs, some written for the album and others originally on Monkees releases. In the early 2000s, he began performing in the Flower Power Concert Series during Epcot's Flower and Garden Festival, a yearly gig he would continue until his death. In April 2006, Jones recorded the single "Your Personal Penguin", written by children's author Sandra Boynton, as a companion piece to her new board book of the same title. In 2007, Jones performed the theme song for the film Sexina: Popstar P.I. On 1 November 2007, the Boynton book and CD set Blue Moo was released; Jones is featured in both the book and the CD, singing "Your Personal Penguin". In 2009, Jones released a collection of classics and standards from the 1940s through the 1970s entitled She.
In December 2008, Yahoo! Music named Jones the "Number 1 teen idol of all time". In 2009, Jones was rated second in a list of 10 best teen idols compiled by Fox News. In 2009, Jones made a cameo appearance as himself in the SpongeBob SquarePants episode "SpongeBob SquarePants vs. The Big One" (his appearance was meant as a pun on the phrase "Davy Jones' Locker"). In February 2011, Jones confirmed rumours of another Monkees reunion. "There's even talk of putting the Monkees back together again in the next year or so for a U.S. and UK tour," he told Disney's Backstage Pass newsletter. "You're always hearing all those great songs on the radio, in commercials, movies, almost everywhere." The tour (Jones' last) came to fruition and was entitled An Evening with The Monkees: The 45th Anniversary Tour. In 1967, Jones opened his first store, called Zilch, at 217 Thompson Street in the Greenwich Village section of New York City. The store sold "hip" clothing and accessories, and also allowed customers to design their own clothes. After the Monkees disbanded in 1970, Jones kept himself busy by establishing a New York City-style street market in Los Angeles, called "The Street", which cost approximately $40,000. He also collaborated with musical director Doug Trevor on a one-hour ABC television special titled Pop Goes Davy Jones, which featured new artists The Jackson 5 and the Osmonds. In addition to his career as an entertainer, Jones' other great love was horses. Having trained as a jockey in his teens in the UK, he had at first intended to pursue a career as a professional race jockey. He held an amateur rider's licence, and rode in his first race at Newbury in Berkshire for renowned trainer Toby Balding. On 1 February 1996, Jones won his first race, on Digpast, in the one-mile Ontario Amateur Riders Handicap at Lingfield in Surrey. Jones also had horse ownership interests in both the US and the UK, and served as a commercial spokesman for Colonial Downs racetrack in Virginia. Following Jones' death, Lingfield announced that the first two races on the racecard for 3 March 2012 would be renamed the "Hey Hey We're The Monkees Handicap" and the "In Memory of Davy Jones Selling Stakes", with successful horses in those races accompanied into the winners' enclosure by some of the Monkees' biggest hits. Plans were also announced to erect a plaque to commemorate Jones next to a Monkey Puzzle tree on the course. Jones was married three times and had four children. In December 1967, he married Dixie Linda Haines, with whom he had been living. Their relationship had been kept out of the public eye until after the birth of their first child in October 1968. It caused a considerable backlash for Jones from his fans when it was finally made public. Jones later stated in Tiger Beat magazine, "I kept my marriage a secret because I believe stars should be allowed a private life." Jones and Haines had two daughters, Talia Elizabeth Jones (2 October 1968) and Sarah Lee Jones (3 July 1971). The marriage ended in 1975. Jones married his second wife, Anita Pollinger, on 24 January 1981, and also had two daughters: Jessica Lillian Jones (4 September 1981) and Annabel Charlotte Jones (26 June 1988). The couple divorced in 1996 during the Monkees' 30th-anniversary reunion tour. Jones married Jessica Pacheco in 2009. Jones and his wife appeared on the Dr. Phil show in April 2011. On 28 July 2011, Pacheco filed to divorce Jones in Miami-Dade County, Florida, but dropped the suit in October.
They were still married when he died in February 2012. Pacheco was omitted from Jones' will, which he had made before their marriage. His oldest daughter, whom he named his executrix, was granted by the court the unusual request that her father's will be sealed, on the basis that "planning documents and financial affairs as public opinion could have a material effect on his copyrights, royalties and ongoing goodwill". On the morning of 29 February 2012, Jones went to tend his 14 horses at a farm in Indiantown, Florida. After riding one of his favourite horses around the track, he complained of chest pains and difficulty breathing and was given antacid pills. He got in his car to go home. Just after 8:00 a.m., a ranch-hand found him unconscious and an ambulance was called, but Jones could not be revived. He was taken to Martin Memorial South Hospital in Stuart, Florida, where he died of a heart attack resulting from arteriosclerosis. He was 66. On 7 March, a private funeral service was held at Holy Cross Catholic parish church in Indiantown. To avoid drawing attention to the grieving family, the three surviving Monkees did not attend. Instead, the bandmates attended memorial services in New York City and organised their own private memorial in Los Angeles along with Jones' family and close friends. A public memorial service was held on 10 March in Beavertown, Pennsylvania, near a church Jones had purchased for future renovation. On 12 March, a private memorial service was held in Jones' hometown of Openshaw, Manchester, at Lees Street Congregational Church, where Jones performed as a child in church plays. Jones' wife and daughters travelled to England to join his relatives based there for the service, and placed his ashes on his parents' graves for a time. The news of Jones' death triggered a surge of Internet traffic, causing sales of the Monkees' music to increase dramatically. Guitarist Michael Nesmith stated that Jones' "spirit and soul live well in my heart, among all the lovely people, who remember with me the good times, and the healing times, that were created for so many, including us. I have fond memories. I wish him safe travels." In an 8 March 2012 interview with Rolling Stone magazine, Nesmith commented, "For me, David was the Monkees. They were his band. We were his side men." Bassist Peter Tork said, "Adios to the Manchester Cowboy", and speaking to CNN, drummer/singer Micky Dolenz said, "He was the brother I never had and this leaves a gigantic hole in my heart." Dolenz claimed that he had sensed something bad was about to happen and said, "Can't believe it.. Still in shock.. had bad dreams all night long." Dolenz was gratified by the public affection expressed for both Jones and the Monkees in the wake of his bandmate's death. "He was a very well-known and well-loved character and person. There are a lot of people who are grieving pretty hard. The Monkees obviously had a following, and so did (Jones) on his own. So I'm not surprised, but I was flattered and honored to be considered one of his friends and a cohort in Monkee business." The Monkees co-creator Bob Rafelson commented that Jones "deserves a lot of credit, let me tell you. He may not have lived as long as we wanted him to, but he survived about seven lifetimes, including being perhaps the biggest rock star of his time."
Brady Bunch co-star Maureen McCormick commented that "Davy was a beautiful soul," and that he "spread love and goodness around the world. He filled our lives with happiness, music, and joy. He will live on in our hearts forever. May he rest in peace." Yahoo! Music commented that Jones' death "hit so many people so hard" because "Monkees nostalgia cuts across generations: from the people who discovered the band during their original 1960s run; to the kids who came of age watching 1970s reruns; to the 20- and 30-somethings who discovered the Monkees when MTV (a network that owes much to the Monkees' influence) began airing old episodes in 1986." Time contributor James Poniewozik praised the Monkees' classic sitcom, and Jones in particular, saying, "even if the show never meant to be more than entertainment and a hit-single generator, we shouldn't sell The Monkees short. It was far better television than it had to be; during an era of formulaic domestic sitcoms and wacky comedies, it was a stylistically ambitious show, with a distinctive visual style, absurdist sense of humor and unusual story structure. Whatever Jones and the Monkees were meant to be, they became creative artists in their own right, and Jones' chipper Brit-pop presence was a big reason they were able to produce work that was commercial, wholesome, and yet impressively weird." Mediaite columnist Paul Levinson noted, "The Monkees were the first example of something created in a medium – in this case, a rock band on television – that jumped off the screen to have big impact in the real world."
[ { "paragraph_id": 0, "text": "David Thomas Jones (30 December 1945 – 29 February 2012) was an English actor and singer. Best known as a member of the band The Monkees and a co-star of the TV series The Monkees (1966–1968), Jones was considered a teen idol.", "title": "" }, { "paragraph_id": 1, "text": "Aside from his work on The Monkees TV show, Jones's acting credits include a Tony-nominated performance as the Artful Dodger in the original London and Broadway productions of Oliver! and a guest-starring role in a hallmark episode of The Brady Bunch television show and a later reprised parody film.", "title": "" }, { "paragraph_id": 2, "text": "David Thomas Jones was born on 30 December 1945 in Openshaw, England, to Harry and Doris Jones. He had three sisters: Hazel, Lynda and Beryl. Jones' mother died from emphysema when he was 14 years of age.", "title": "Early life" }, { "paragraph_id": 3, "text": "Jones' television acting debut was in the British television soap opera Coronation Street, in which he appeared as Colin Lomax, grandson of the regular character Ena Sharples, for one episode on 6 March 1961. He also appeared in the BBC police series Z-Cars. Following the death of his mother, Jones rejected acting in favour of becoming a jockey, commencing an apprenticeship with Newmarket trainer Basil Foster. He dropped out of secondary school to begin working in that field, but this career was short-lived. Even though Foster believed Jones would be successful as a jockey, he encouraged his young protégé to take a role as the Artful Dodger in a production of Oliver! in London's West End. When approached by a friend who worked in a West End theatre during the show's casting, Foster replied, \"I've got the kid.\" Jones's portrayal brought him great acclaim. He played the role in London and then on Broadway, and was nominated for a Tony Award.", "title": "Career as actor and singer" }, { "paragraph_id": 4, "text": "On 9 February 1964, Jones appeared on The Ed Sullivan Show with Georgia Brown, who was playing Nancy in the Broadway production of Oliver!. This was the same episode of the show in which the Beatles made their first appearance on U.S. television. Jones said of that night, \"I watched the Beatles from the side of the stage, I saw the girls going crazy, and I said to myself, this is it, I want a piece of that.\"", "title": "Career as actor and singer" }, { "paragraph_id": 5, "text": "Following his Ed Sullivan appearance, Jones signed a contract with Ward Sylvester of Screen Gems (at that time the television division of Columbia Pictures). A pair of U.S. television appearances followed, as Jones received screen time in episodes of Ben Casey and The Farmer's Daughter.", "title": "Career as actor and singer" }, { "paragraph_id": 6, "text": "Jones debuted on the Billboard Hot 100 in the week of 14 August 1965, with the single \"What Are We Going To Do?\", which peaked at number 93. The 19-year-old singer was signed to Colpix Records, a label owned by Columbia. His debut album, David Jones, on the same label, followed soon afterward (CP493).", "title": "Career as actor and singer" }, { "paragraph_id": 7, "text": "From 1966 to 1970, Jones was a member of the Monkees, a pop-rock band formed expressly for a television show of the same name. With Screen Gems producing the series, Jones was shortlisted for auditions, as he was the only Monkee who was signed to a deal with the studio, but still had to meet the standards of producers Bob Rafelson and Bert Schneider. 
Jones sang lead vocals on many of the Monkees' recordings, including \"I Wanna Be Free\" and \"Daydream Believer\". The DVD release of the first season of the show contained commentary from the various bandmates. In Peter Tork's commentary, he stated that Jones was a good drummer and that, had the live performance line-up been based solely on playing ability, it ought to have been Tork on guitar, Mike Nesmith on bass, and Jones on drums, with Micky Dolenz taking the fronting role, rather than as it was done (with Nesmith on guitar, Tork on bass, and Dolenz on drums). Like Tork, Jones was a multi-instrumentalist: despite mostly playing tambourine or maracas, he would fill in on bass when Tork played keyboards (and vice versa), and on drums for Dolenz when the Monkees performed live concerts.", "title": "Career as actor and singer" }, { "paragraph_id": 8, "text": "The Monkees officially disbanded in 1970. The NBC television series The Monkees was popular and remained in syndication.", "title": "Career as actor and singer" }, { "paragraph_id": 9, "text": "Bell Records, then having a string of hits with The Partridge Family, signed Jones to a somewhat inflexible solo record contract in 1971. Jones was not allowed to choose his songs or producer, resulting in several lacklustre and aimless records. His second solo album, Davy Jones (1971), was notable for the song \"Rainy Jane\", which reached No. 52 in the Billboard charts. To promote the album, Jones performed \"Girl\" on an episode of The Brady Bunch entitled \"Getting Davy Jones\". Although the single sold poorly, the popularity of Jones' appearance on the show resulted in \"Girl\" becoming his best-remembered solo hit, even though it was not included in the album. The final single, \"I'll Believe In You\"/\"Road to Love\", was poorly received.", "title": "Career as actor and singer" }, { "paragraph_id": 10, "text": "Thanks in part to reruns of The Monkees on Saturday mornings and in syndication, The Monkees Greatest Hits charted in 1976. The LP, issued by Arista (a subsidiary of Screen Gems), was actually a repackaging of a 1972 compilation LP called Refocus that had been issued by Arista's previous label imprint, Bell Records, also owned by Screen Gems.", "title": "Career as actor and singer" }, { "paragraph_id": 11, "text": "Dolenz and Jones took advantage of this, joining ex-Monkees songwriters Tommy Boyce and Bobby Hart to tour the United States. From 1975 to 1977, as the \"Golden Hits of The Monkees\" show (\"The Guys who Wrote 'Em and the Guys who Sang 'Em!\"), they successfully performed in smaller venues such as state fairs and amusement parks as well as making stops in Japan, Thailand, and Singapore (although they were forbidden from using the \"Monkees\" name, as it was owned by Screen Gems at the time). They also released an album of new material, appropriately credited to Dolenz, Jones, Boyce & Hart; a live album entitled Concert in Japan was also recorded in 1976, but was not released until 1996.", "title": "Career as actor and singer" }, { "paragraph_id": 12, "text": "Despite his initially high profile after the Monkees disbanded, Jones struggled to establish himself as a solo music artist. Glenn A. Baker, author of Monkeemania: The True Story of the Monkees, commented in 1986 that \"for an artist as versatile and confident as (Davy) Jones, the relative failure of his post-Monkees activities is puzzling.
For all his cocky predictions to the press about his future plans, Davy fell into a directionless heap when left to his own devices.\"", "title": "Career as actor and singer" }, { "paragraph_id": 13, "text": "Jones returned to theatre several times after the Monkees disbanded. In 1977, he performed with former bandmate Micky Dolenz in a stage production of the Harry Nilsson musical The Point! in London at the Mermaid Theatre, playing and singing the starring role of \"Oblio\" to Dolenz' roles as the \"Count's Kid\" and the \"Leafman\" (according to the CD booklet). An original cast recording was made and released. The comedic chemistry of Jones and Dolenz proved so strong that the show was revived in 1978, with Nilsson inserting additional comedy for the two, plus two more songs, one of them (\"Gotta Get Up\") sung by Jones and Dolenz. The show was considered so good that a further revival was planned for 1979, but it proved cost-prohibitive (according to the CD booklet for Harry Nilsson's The Point). Jones also appeared in several productions of Oliver! as the Artful Dodger, and in 1989 toured the US portraying \"Fagin\".", "title": "Career as actor and singer" }, { "paragraph_id": 14, "text": "Jones appeared in two episodes each of Love, American Style and My Two Dads. Jones also appeared in animated form as himself in 1972 in an hour-long episode of The New Scooby-Doo Movies.", "title": "Career as actor and singer" }, { "paragraph_id": 15, "text": "A Monkees television show marathon (\"Pleasant Valley Sunday\") broadcast on 23 February 1986 by MTV resulted in a wave of Monkeemania not seen since the band's heyday. Jones reunited with Dolenz and Peter Tork from 1986 to 1989 to celebrate the band's renewed success and promote the 20th anniversary of the band. A new top 20 hit, \"That Was Then, This Is Now\", was released (though Jones did not perform on the song), as well as an album, Pool It!", "title": "Career as actor and singer" }, { "paragraph_id": 16, "text": "In 1996, Jones reunited with Dolenz, Tork and Michael Nesmith to celebrate the 30th anniversary of the Monkees. The band released a new album entitled Justus, the first album since 1967's Headquarters that featured the band members performing all instrumental duties. It was the last time all four Monkees performed together.", "title": "Career as actor and singer" }, { "paragraph_id": 17, "text": "Other television appearances include Sledge Hammer!, Boy Meets World, Hey Arnold!, The Single Guy (where he is mistaken for Dudley Moore) and Sabrina, the Teenage Witch, in which he sang \"Daydream Believer\" and \"(I'll) Love You Forever\" to Sabrina Spellman (played by Melissa Joan Hart). In 1995, Jones acted in a notable episode of the sitcom Boy Meets World.", "title": "Career as actor and singer" }, { "paragraph_id": 18, "text": "The continued popularity of Jones' 1971 Brady Bunch appearance led to his being cast as himself in The Brady Bunch Movie (1995). Jones sang his signature solo hit \"Girl\", with a grunge band providing backing, this time with middle-aged women swooning over him.
Micky Dolenz and Peter Tork also appeared alongside Jones as judges.", "title": "Career as actor and singer" }, { "paragraph_id": 19, "text": "On 2 August 1996, while The Monkees were on their 30th-anniversary tour in New England, Jones was interviewed on the \"Sports Break\" radio show on WBPS 890-AM in Boston by host Roland Regan about his youth in England as a jockey and amateur boxer, and about how he stayed in shape by jogging and playing in celebrity tennis tournaments.", "title": "Career as actor and singer" }, { "paragraph_id": 20, "text": "On 21 June 1997, during a concert at the Los Angeles Coliseum, Jones joined U2's The Edge onstage for a karaoke performance of \"Daydream Believer\", which had become a fixture of the band's set during that year's PopMart Tour.", "title": "Career as actor and singer" }, { "paragraph_id": 21, "text": "In 2001, Jones released Just Me, an album of his own songs, some written for the album and others originally on Monkees releases. In the early 2000s, he began performing in the Flower Power Concert Series during Epcot's Flower and Garden Festival, a yearly gig he continued until his death.", "title": "Career as actor and singer" }, { "paragraph_id": 22, "text": "In April 2006, Jones recorded the single \"Your Personal Penguin\", written by children's author Sandra Boynton, as a companion piece to her new board book of the same title.", "title": "Career as actor and singer" }, { "paragraph_id": 23, "text": "In 2007, Jones performed the theme song for the film Sexina: Popstar P.I.. On 1 November 2007, the Boynton book and CD titled Blue Moo were released; Jones is featured in both, singing \"Your Personal Penguin\". In 2009, Jones released a collection of classics and standards from the 1940s through the 1970s entitled She.", "title": "Career as actor and singer" }, { "paragraph_id": 24, "text": "In December 2008, Yahoo! Music named Jones the \"Number 1 teen idol of all time\". In 2009, Jones was rated second in a list of the 10 best teen idols compiled by Fox News.", "title": "Career as actor and singer" }, { "paragraph_id": 25, "text": "In 2009, Jones made a cameo appearance as himself in the SpongeBob SquarePants episode \"SpongeBob SquarePants vs. The Big One\" (his appearance was meant as a pun on the phrase \"Davy Jones' Locker\").", "title": "Career as actor and singer" }, { "paragraph_id": 26, "text": "In February 2011, Jones confirmed rumours of another Monkees reunion. \"There's even talk of putting the Monkees back together again in the next year or so for a U.S. and UK tour,\" he told Disney's Backstage Pass newsletter. \"You're always hearing all those great songs on the radio, in commercials, movies, almost everywhere.\" The tour (Jones' last) came to fruition and was entitled An Evening with The Monkees: The 45th Anniversary Tour.", "title": "Career as actor and singer" }, { "paragraph_id": 27, "text": "In 1967, Jones opened his first store, called Zilch, at 217 Thompson Street in the Greenwich Village section of New York City. The store sold \"hip\" clothing and accessories, and also allowed customers to design their own clothes.", "title": "Other ventures" }, { "paragraph_id": 28, "text": "After the Monkees disbanded in 1970, Jones kept himself busy by establishing a New York City-style street market in Los Angeles, called \"The Street\", which cost approximately $40,000. 
He also collaborated with musical director Doug Trevor on a one-hour ABC television special titled Pop Goes Davy Jones, which featured new artists the Jackson 5 and the Osmonds.", "title": "Other ventures" }, { "paragraph_id": 29, "text": "In addition to his career as an entertainer, Jones' other great love was horses. Having trained as a jockey in his teens in the UK, he had at first intended to pursue a career as a professional jockey. He held an amateur rider's licence, and rode in his first race at Newbury in Berkshire for renowned trainer Toby Balding.", "title": "Other ventures" }, { "paragraph_id": 30, "text": "On 1 February 1996, Jones won his first race, on Digpast, in the one-mile Ontario Amateur Riders Handicap at Lingfield in Surrey. Jones also had horse ownership interests in both the US and the UK, and served as a commercial spokesman for Colonial Downs racetrack in Virginia. Following Jones' death, Lingfield announced that the first two races on the racecard for 3 March 2012 would be renamed the \"Hey Hey We're The Monkees Handicap\" and the \"In Memory of Davy Jones Selling Stakes\", with the Monkees' biggest hits played as the winning horses entered the winners' enclosure. Plans were also announced to erect a plaque to commemorate Jones next to a Monkey Puzzle tree on the course.", "title": "Other ventures" }, { "paragraph_id": 31, "text": "Jones was married three times and had four children. In December 1967, he married Dixie Linda Haines, with whom he had been living. Their relationship had been kept out of the public eye until after the birth of their first child in October 1968. It caused a considerable backlash for Jones from his fans when it was finally made public. Jones later stated in Tiger Beat magazine, \"I kept my marriage a secret because I believe stars should be allowed a private life.\" Jones and Haines had two daughters, Talia Elizabeth Jones (born 2 October 1968) and Sarah Lee Jones (born 3 July 1971). The marriage ended in 1975.", "title": "Personal life" }, { "paragraph_id": 32, "text": "Jones married his second wife, Anita Pollinger, on 24 January 1981, with whom he also had two daughters: Jessica Lillian Jones (born 4 September 1981) and Annabel Charlotte Jones (born 26 June 1988). The couple divorced in 1996 during the Monkees' 30th-anniversary reunion tour.", "title": "Personal life" }, { "paragraph_id": 33, "text": "Jones married Jessica Pacheco in 2009. Jones and his wife appeared on the Dr. Phil show in April 2011. On 28 July 2011, Pacheco filed to divorce Jones in Miami-Dade County, Florida, but dropped the suit in October. They were still married when he died in February 2012. Pacheco was omitted from Jones' will, which he had made before their marriage. His oldest daughter, whom he named his executrix, was granted by the court the unusual request that her father's will be sealed, on the basis that \"planning documents and financial affairs as public opinion could have a material effect on his copyrights, royalties and ongoing goodwill\".", "title": "Personal life" }, { "paragraph_id": 34, "text": "On the morning of 29 February 2012, Jones went to tend his 14 horses at a farm in Indiantown, Florida. After riding one of his favourite horses around the track, he complained of chest pains and difficulty breathing and was given antacid pills. He got in his car to go home. Just after 8:00 a.m., a ranch hand found him unconscious; an ambulance was called, but Jones could not be revived. 
He was taken to Martin Memorial South Hospital in Stuart, Florida, where he died of a heart attack resulting from arteriosclerosis. He was 66.", "title": "Death" }, { "paragraph_id": 35, "text": "On 7 March, a private funeral service was held at Holy Cross Catholic parish church in Indiantown. To avoid drawing attention to the grieving family, the three surviving Monkees did not attend. Instead, the bandmates attended memorial services in New York City and organised their own private memorial in Los Angeles along with Jones' family and close friends. A public memorial service was held on 10 March in Beavertown, Pennsylvania, near a church Jones had purchased for future renovation.", "title": "Death" }, { "paragraph_id": 36, "text": "On 12 March, a private memorial service was held in Jones' hometown of Openshaw, Manchester, at Lees Street Congregational Church, where Jones performed as a child in church plays. Jones' wife and daughters travelled to England to join his relatives based there for the service, and placed his ashes on his parents' graves for a time.", "title": "Death" }, { "paragraph_id": 37, "text": "\"For me, David was the Monkees. They were his band. We were just his side men.\"", "title": "Death" }, { "paragraph_id": 38, "text": "– Michael Nesmith", "title": "Death" }, { "paragraph_id": 39, "text": "The news of Jones' death triggered a surge of Internet traffic, causing sales of the Monkees' music to increase dramatically.", "title": "Death" }, { "paragraph_id": 40, "text": "Guitarist Michael Nesmith stated that Jones' \"spirit and soul live well in my heart, among all the lovely people, who remember with me the good times, and the healing times, that were created for so many, including us. I have fond memories. I wish him safe travels.\" In an 8 March 2012 interview with Rolling Stone magazine, Nesmith commented, \"For me, David was the Monkees. They were his band. We were his side men.\" Bassist Peter Tork said, \"Adios to the Manchester Cowboy\", and speaking to CNN, drummer/singer Micky Dolenz said, \"He was the brother I never had and this leaves a gigantic hole in my heart.\" Dolenz claimed that he knew that something bad was about to happen and said \"Can't believe it.. Still in shock.. had bad dreams all night long.\" Dolenz was gratified by the public affection expressed for both Jones and the Monkees in the wake of his bandmate's death. \"He was a very well-known and well-loved character and person. There are a lot of people who are grieving pretty hard. The Monkees obviously had a following, and so did (Jones) on his own. So I'm not surprised, but I was flattered and honored to be considered one of his friends and a cohort in Monkee business.\"", "title": "Death" }, { "paragraph_id": 41, "text": "The Monkees co-creator Bob Rafelson commented that Jones \"deserves a lot of credit, let me tell you. He may not have lived as long as we wanted him to, but he survived about seven lifetimes, including being perhaps the biggest rock star of his time.\"", "title": "Death" }, { "paragraph_id": 42, "text": "Brady Bunch co-star Maureen McCormick commented that \"Davy was a beautiful soul,\" and that he \"spread love and goodness around the world. He filled our lives with happiness, music, and joy. He will live on in our hearts forever. May he rest in peace.\"", "title": "Death" }, { "paragraph_id": 43, "text": "Yahoo! 
Music commented that Jones' death \"hit so many people so hard\" because \"Monkees nostalgia cuts across generations: from the people who discovered the band during their original 1960s run; to the kids who came of age watching 1970s reruns; to the 20- and 30-somethings who discovered the Monkees when MTV (a network that owes much to the Monkees' influence) began airing old episodes in 1986.\"", "title": "Death" }, { "paragraph_id": 44, "text": "Time contributor James Poniewozik praised the Monkees' classic sitcom, and Jones in particular, saying, \"even if the show never meant to be more than entertainment and a hit-single generator, we shouldn't sell The Monkees short. It was far better television than it had to be; during an era of formulaic domestic sitcoms and wacky comedies, it was a stylistically ambitious show, with a distinctive visual style, absurdist sense of humor and unusual story structure. Whatever Jones and the Monkees were meant to be, they became creative artists in their own right, and Jones' chipper Brit-pop presence was a big reason they were able to produce work that was commercial, wholesome, and yet impressively weird.\"", "title": "Death" }, { "paragraph_id": 45, "text": "Mediaite columnist Paul Levinson noted, \"The Monkees were the first example of something created in a medium – in this case, a rock band on television – that jumped off the screen to have big impact in the real world.\"", "title": "Death" } ]
David Thomas Jones was an English actor and singer. Best known as a member of the band The Monkees and a co-star of the TV series The Monkees (1966–1968), Jones was considered a teen idol. Aside from his work on The Monkees TV show, Jones's acting credits include a Tony-nominated performance as the Artful Dodger in the original London and Broadway productions of Oliver!, a guest-starring role in a hallmark episode of The Brady Bunch television show, and a later parody reprise of that role in The Brady Bunch Movie.
2002-02-09T12:03:38Z
2023-12-21T16:17:09Z
[ "Template:Infobox person", "Template:Citation needed", "Template:Cite magazine", "Template:Citation", "Template:Commons category", "Template:Davy Jones", "Template:About", "Template:Use dmy dates", "Template:ISBN", "Template:Dead link", "Template:The Monkees", "Template:Short description", "Template:See also", "Template:Reflist", "Template:Cite web", "Template:Cbignore", "Template:Cite book", "Template:Cite journal", "Template:IMDb name", "Template:Use British English", "Template:Main", "Template:Authority control", "Template:Official website", "Template:IBDB name", "Template:Quote box", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Davy_Jones_(musician)
9,142
Discharge
Discharge may refer to:
[ { "paragraph_id": 0, "text": "Discharge may refer to:", "title": "" }, { "paragraph_id": 1, "text": "", "title": "Music" } ]
Discharge may refer to:
2002-01-20T18:46:54Z
2023-08-11T02:12:33Z
[ "Template:TOC right", "Template:Anchor", "Template:Disambiguation", "Template:Wiktionary", "Template:Commons category" ]
https://en.wikipedia.org/wiki/Discharge
9,146
Dolly (sheep)
Dolly (5 July 1996 – 14 February 2003) was a female Finn-Dorset sheep and the first mammal that was cloned from an adult somatic cell. She was cloned by associates of the Roslin Institute in Scotland, using the process of nuclear transfer from a cell taken from a mammary gland. Her cloning proved that a cloned organism could be produced from a mature cell from a specific body part. Contrary to popular belief, she was not the first animal to be cloned. The use of adult somatic cells, rather than embryonic stem cells, for cloning built on the foundational work of John Gurdon, who cloned African clawed frogs in 1958 using this approach. The successful cloning of Dolly led to widespread advances in stem cell research, including the discovery of induced pluripotent stem cells. Dolly lived at the Roslin Institute throughout her life and produced several lambs. She was euthanised at the age of six years due to a progressive lung disease. No cause linking the disease to her cloning was found. Dolly's body was preserved and donated by the Roslin Institute in Scotland to the National Museum of Scotland, where it has been regularly exhibited since 2003. Dolly was cloned by Keith Campbell, Ian Wilmut and colleagues at the Roslin Institute, part of the University of Edinburgh, Scotland, and the biotechnology company PPL Therapeutics, based near Edinburgh. The funding for Dolly's cloning was provided by PPL Therapeutics and the Ministry of Agriculture. She was born on 5 July 1996 and died on 14 February 2003 from a progressive lung disease that was considered unrelated to her being a clone. She has been called "the world's most famous sheep" by sources including BBC News and Scientific American. The cell used as the donor for the cloning of Dolly was taken from a mammary gland, and the production of a healthy clone, therefore, proved that a cell taken from a specific part of the body could recreate a whole individual. On Dolly's name, Wilmut stated "Dolly is derived from a mammary gland cell and we couldn't think of a more impressive pair of glands than Dolly Parton's." Dolly was born on 5 July 1996 and had three mothers: one provided the egg, another the DNA, and a third carried the cloned embryo to term. She was created using the technique of somatic cell nuclear transfer, where the cell nucleus from an adult cell is transferred into an unfertilized oocyte (developing egg cell) that has had its cell nucleus removed. The hybrid cell is then stimulated to divide by an electric shock, and when it develops into a blastocyst it is implanted in a surrogate mother. Dolly was the first clone produced from a cell taken from an adult mammal. The production of Dolly showed that genes in the nucleus of such a mature differentiated somatic cell are still capable of reverting to an embryonic totipotent state, creating a cell that can then go on to develop into any part of an animal. Dolly's existence was announced to the public on 22 February 1997. It gained much attention in the media. A commercial with Scottish scientists playing with sheep was aired on TV, and a special report in Time magazine featured Dolly. Science featured Dolly as the breakthrough of the year. Even though Dolly was not the first animal cloned, she received media attention because she was the first cloned from an adult cell. Dolly lived her entire life at the Roslin Institute in Midlothian. There she was bred with a Welsh Mountain ram and produced six lambs in total. Her first lamb, named Bonnie, was born in April 1998. 
The next year Dolly produced twin lambs Sally and Rosie, and she gave birth to triplets Lucy, Darcy and Cotton in 2000. In late 2001, at the age of four, Dolly developed arthritis and began to walk stiffly. This was treated with anti-inflammatory drugs. On 14 February 2003, Dolly was euthanised because she had a progressive lung disease and severe arthritis. A Finn-Dorset such as Dolly has a life expectancy of around 11 to 12 years, but Dolly lived only 6.5 years. A post-mortem examination showed she had a form of lung cancer called ovine pulmonary adenocarcinoma, also known as Jaagsiekte, which is a fairly common disease of sheep and is caused by the retrovirus JSRV. Roslin scientists stated that they did not think there was a connection with Dolly being a clone, and that other sheep in the same flock had died of the same disease. Such lung diseases are a particular danger for sheep kept indoors, and Dolly had to sleep inside for security reasons. Some in the press speculated that a contributing factor to Dolly's death was that she could have been born with a genetic age of six years, the same age as the sheep from which she was cloned. One basis for this idea was the finding that Dolly's telomeres were short, which is typically a result of the aging process. The Roslin Institute stated that intensive health screening did not reveal any abnormalities in Dolly that could have come from advanced aging. In 2016, scientists reported no defects in thirteen cloned sheep, including four from the same cell line as Dolly. In the first study to review the long-term health outcomes of cloning, the authors found no evidence of late-onset, non-communicable diseases other than some minor examples of osteoarthritis and concluded "We could find no evidence, therefore, of a detrimental long-term effect of cloning by SCNT on the health of aged offspring among our cohort." After her death, Dolly's body was preserved via taxidermy and is currently on display at the National Museum of Scotland in Edinburgh. After cloning was successfully demonstrated through the production of Dolly, many other large mammals were cloned, including pigs, deer, horses and bulls. The attempt to clone argali (mountain sheep) did not produce viable embryos. The attempt to clone a banteng bull was more successful, as were the attempts to clone mouflon (a form of wild sheep), both resulting in viable offspring. The reprogramming process that cells need to go through during cloning is not perfect and embryos produced by nuclear transfer often show abnormal development. Making cloned mammals was highly inefficient – in 1996 Dolly was the only lamb that survived to adulthood from 277 attempts. By 2014 Chinese scientists were reported to have 70–80% success rates cloning pigs, and in 2016, a Korean company, Sooam Biotech, was producing 500 cloned embryos a day. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans. Cloning may have uses in preserving endangered species, and may become a viable tool for reviving extinct species. In January 2009, scientists from the Centre of Food Technology and Research of Aragon in northern Spain announced the cloning of the Pyrenean ibex, a form of wild mountain goat, which was officially declared extinct in 2000. 
Although the newborn ibex died shortly after birth due to physical defects in its lungs, it was the first time an extinct animal had been cloned, and the technique may open doors for saving endangered and newly extinct species by resurrecting them from frozen tissue. In July 2016, four identical clones of Dolly (Daisy, Debbie, Dianna, and Denise) were alive and healthy at nine years old. Scientific American concluded in 2016 that the main legacy of Dolly has not been the cloning of animals but advances in stem cell research. After Dolly, researchers realised that ordinary cells could be reprogrammed into induced pluripotent stem cells, which can be grown into any tissue. The first successful cloning of a primate species was reported in January 2018, using the same method that produced Dolly. Two identical clones of a macaque monkey, Zhong Zhong and Hua Hua, were created by researchers in China and were born in late 2017. In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, again using this method, and the gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first-ever gene-modified human babies, Lulu and Nana. The monkey clones were made to study several diseases.
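As a rough restatement of the efficiency figures quoted above (no new data, just the arithmetic on the numbers already given in the text):

$$ \frac{1}{277} \approx 0.36\% \ \text{(Dolly, 1996)} \qquad \text{vs.} \qquad 70\text{--}80\% \ \text{(pig cloning, 2014)} $$

That is, reported mammalian cloning success rates improved by roughly two orders of magnitude between Dolly's creation and 2014.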
[ { "paragraph_id": 0, "text": "Dolly (5 July 1996 – 14 February 2003) was a female Finn-Dorset sheep and the first mammal that was cloned from an adult somatic cell. She was cloned by associates of the Roslin Institute in Scotland, using the process of nuclear transfer from a cell taken from a mammary gland. Her cloning proved that a cloned organism could be produced from a mature cell from a specific body part. Contrary to popular belief, she was not the first animal to be cloned.", "title": "" }, { "paragraph_id": 1, "text": "The employment of adult somatic cells in lieu of embryonic stem cells for cloning emerged from the foundational work of John Gurdon, who cloned African clawed frogs in 1958 with this approach. The successful cloning of Dolly led to widespread advancements within stem cell research, including the discovery of induced pluripotent stem cells.", "title": "" }, { "paragraph_id": 2, "text": "Dolly lived at the Roslin Institute throughout her life and produced several lambs. She was euthanized at the age of six years due to a progressive lung disease. No cause which linked the disease to her cloning was found.", "title": "" }, { "paragraph_id": 3, "text": "Dolly's body was preserved and donated by the Roslin Institute in Scotland to the National Museum of Scotland, where it has been regularly exhibited since 2003.", "title": "" }, { "paragraph_id": 4, "text": "Dolly was cloned by Keith Campbell, Ian Wilmut and colleagues at the Roslin Institute, part of the University of Edinburgh, Scotland, and the biotechnology company PPL Therapeutics, based near Edinburgh. The funding for Dolly's cloning was provided by PPL Therapeutics and the Ministry of Agriculture. She was born on 5 July 1996 and died on 14 February 2003 from a progressive lung disease that was considered unrelated to her being a clone. She has been called \"the world's most famous sheep\" by sources including BBC News and Scientific American.", "title": "Genesis" }, { "paragraph_id": 5, "text": "The cell used as the donor for the cloning of Dolly was taken from a mammary gland, and the production of a healthy clone, therefore, proved that a cell taken from a specific part of the body could recreate a whole individual. On Dolly's name, Wilmut stated \"Dolly is derived from a mammary gland cell and we couldn't think of a more impressive pair of glands than Dolly Parton's.\"", "title": "Genesis" }, { "paragraph_id": 6, "text": "Dolly was born on 5 July 1996 and had three mothers: one provided the egg, another the DNA, and a third carried the cloned embryo to term. She was created using the technique of somatic cell nuclear transfer, where the cell nucleus from an adult cell is transferred into an unfertilized oocyte (developing egg cell) that has had its cell nucleus removed. The hybrid cell is then stimulated to divide by an electric shock, and when it develops into a blastocyst it is implanted in a surrogate mother. Dolly was the first clone produced from a cell taken from an adult mammal. The production of Dolly showed that genes in the nucleus of such a mature differentiated somatic cell are still capable of reverting to an embryonic totipotent state, creating a cell that can then go on to develop into any part of an animal.", "title": "Birth" }, { "paragraph_id": 7, "text": "Dolly's existence was announced to the public on 22 February 1997. It gained much attention in the media. A commercial with Scottish scientists playing with sheep was aired on TV, and a special report in Time magazine featured Dolly. 
Science featured Dolly as the breakthrough of the year. Even though Dolly was not the first animal cloned, she received media attention because she was the first cloned from an adult cell.", "title": "Birth" }, { "paragraph_id": 8, "text": "Dolly lived her entire life at the Roslin Institute in Midlothian. There she was bred with a Welsh Mountain ram and produced six lambs in total. Her first lamb, named Bonnie, was born in April 1998. The next year Dolly produced twin lambs Sally and Rosie, and she gave birth to triplets Lucy, Darcy and Cotton in 2000. In late 2001, at the age of four, Dolly developed arthritis and began to walk stiffly. This was treated with anti-inflammatory drugs.", "title": "Life" }, { "paragraph_id": 9, "text": "On 14 February 2003, Dolly was euthanised because she had a progressive lung disease and severe arthritis. A Finn Dorset such as Dolly has a life expectancy of around 11 to 12 years, but Dolly lived 6.5 years. A post-mortem examination showed she had a form of lung cancer called ovine pulmonary adenocarcinoma, also known as Jaagsiekte, which is a fairly common disease of sheep and is caused by the retrovirus JSRV. Roslin scientists stated that they did not think there was a connection with Dolly being a clone, and that other sheep in the same flock had died of the same disease. Such lung diseases are a particular danger for sheep kept indoors, and Dolly had to sleep inside for security reasons.", "title": "Death" }, { "paragraph_id": 10, "text": "Some in the press speculated that a contributing factor to Dolly's death was that she could have been born with a genetic age of six years, the same age as the sheep from which she was cloned. One basis for this idea was the finding that Dolly's telomeres were short, which is typically a result of the aging process. The Roslin Institute stated that intensive health screening did not reveal any abnormalities in Dolly that could have come from advanced aging.", "title": "Death" }, { "paragraph_id": 11, "text": "In 2016, scientists reported no defects in thirteen cloned sheep, including four from the same cell line as Dolly. The first study to review the long-term health outcomes of cloning, the authors found no evidence of late-onset, non-communicable diseases other than some minor examples of osteoarthritis and concluded \"We could find no evidence, therefore, of a detrimental long-term effect of cloning by SCNT on the health of aged offspring among our cohort.\"", "title": "Death" }, { "paragraph_id": 12, "text": "After her death Dolly's body was preserved via taxidermy and is currently on display at the National Museum of Scotland in Edinburgh.", "title": "Death" }, { "paragraph_id": 13, "text": "After cloning was successfully demonstrated through the production of Dolly, many other large mammals were cloned, including pigs, deer, horses and bulls. The attempt to clone argali (mountain sheep) did not produce viable embryos. The attempt to clone a banteng bull was more successful, as were the attempts to clone mouflon (a form of wild sheep), both resulting in viable offspring. The reprogramming process that cells need to go through during cloning is not perfect and embryos produced by nuclear transfer often show abnormal development. Making cloned mammals was highly inefficient – in 1996 Dolly was the only lamb that survived to adulthood from 277 attempts. 
By 2014 Chinese scientists were reported to have 70–80% success rates cloning pigs, and in 2016, a Korean company, Sooam Biotech, was producing 500 cloned embryos a day. Wilmut, who led the team that created Dolly, announced in 2007 that the nuclear transfer technique may never be sufficiently efficient for use in humans.", "title": "Legacy" }, { "paragraph_id": 14, "text": "Cloning may have uses in preserving endangered species, and may become a viable tool for reviving extinct species. In January 2009, scientists from the Centre of Food Technology and Research of Aragon in northern Spain announced the cloning of the Pyrenean ibex, a form of wild mountain goat, which was officially declared extinct in 2000. Although the newborn ibex died shortly after birth due to physical defects in its lungs, it is the first time an extinct animal has been cloned, and may open doors for saving endangered and newly extinct species by resurrecting them from frozen tissue.", "title": "Legacy" }, { "paragraph_id": 15, "text": "In July 2016, four identical clones of Dolly (Daisy, Debbie, Dianna, and Denise) were alive and healthy at nine years old.", "title": "Legacy" }, { "paragraph_id": 16, "text": "Scientific American concluded in 2016 that the main legacy of Dolly has not been cloning of animals but in advances into stem cell research. After Dolly, researchers realised that ordinary cells could be reprogrammed to induced pluripotent stem cells, which can be grown into any tissue.", "title": "Legacy" }, { "paragraph_id": 17, "text": "The first successful cloning of a primate species was reported in January 2018, using the same method which produced Dolly. Two identical clones of a macaque monkey, Zhong Zhong and Hua Hua, were created by researchers in China and were born in late 2017.", "title": "Legacy" }, { "paragraph_id": 18, "text": "In January 2019, scientists in China reported the creation of five identical cloned gene-edited monkeys, again using this method, and the gene-editing CRISPR-Cas9 technique allegedly used by He Jiankui in creating the first ever gene-modified human babies Lulu and Nana. The monkey clones were made in order to study several medical diseases.", "title": "Legacy" } ]
Dolly was a female Finn-Dorset sheep and the first mammal that was cloned from an adult somatic cell. She was cloned by associates of the Roslin Institute in Scotland, using the process of nuclear transfer from a cell taken from a mammary gland. Her cloning proved that a cloned organism could be produced from a mature cell from a specific body part. Contrary to popular belief, she was not the first animal to be cloned. The employment of adult somatic cells in lieu of embryonic stem cells for cloning emerged from the foundational work of John Gurdon, who cloned African clawed frogs in 1958 with this approach. The successful cloning of Dolly led to widespread advancements within stem cell research, including the discovery of induced pluripotent stem cells. Dolly lived at the Roslin Institute throughout her life and produced several lambs. She was euthanized at the age of six years due to a progressive lung disease. No cause which linked the disease to her cloning was found. Dolly's body was preserved and donated by the Roslin Institute in Scotland to the National Museum of Scotland, where it has been regularly exhibited since 2003.
2002-01-21T05:54:04Z
2023-12-24T16:30:26Z
[ "Template:Use dmy dates", "Template:Cite magazine", "Template:Commons category", "Template:Infobox animal", "Template:Cite web", "Template:Webarchive", "Template:Cite AV media", "Template:Breakthrough of the Year", "Template:Short description", "Template:Snd", "Template:Reflist", "Template:Cite news", "Template:Cite journal", "Template:Pp-semi-indef", "Template:Cite book", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Dolly_(sheep)
9,156
Dolores Fuller
Dolores Agnes Fuller (née Eble, later Chamberlin; March 10, 1923 – May 9, 2011) was an American actress and songwriter known as the one-time girlfriend of the low-budget film director Ed Wood. She played the protagonist's girlfriend in Glen or Glenda, co-starred in Wood's Jail Bait, and had a minor role in his Bride of the Monster. After she broke up with Wood in 1955, she relocated to New York and had a very successful career there as a songwriter. Elvis Presley recorded a number of her songs written for his films. Her first screen appearance came at the age of 10, with a brief part in Frank Capra's It Happened One Night. According to Fuller, the female lead in Bride of the Monster was written for her, but Wood gave it to Loretta King instead. In August 1954, Fuller was cast in Wood's The Vampire's Tomb, intended to star Bela Lugosi. Frank Yaconelli was named as her co-star and 'comic killer'. The film was never made. She ended up making an appearance in Bride of the Monster (1956), also with Lugosi. Fuller hosted a benefit for Lugosi which preceded the showing of Bride of the Atom (early working title of Bride of the Monster) on May 11, 1955. A cocktail party was held at the Gardens Restaurant at 4311 Magnolia Avenue in Burbank, California. Vampira attended and was escorted by Paul Marco. A single screening of the film was presented at the Hollywood Paramount. According to Fuller, as quoted in Wood biography Nightmare of Ecstasy (1992), she first met Ed Wood when she attended a casting call with a friend for a movie he was supposed to direct called Behind Locked Doors (which he did not go on to direct); it has also been stated that they met in a restaurant. She became his girlfriend shortly thereafter and began acting in his films. Her movie career included a bit part in It Happened One Night (1934) and roles in Outlaw Women (1952), Glen or Glenda (1953), Body Beautiful (1953), The Blue Gardenia (1953), Count the Hours (1953), Mesa of Lost Women (1953), College Capers (1954), Jail Bait (1954), The Raid (1954), This Is My Love (1954), The Opposite Sex (1956), and many years later appearances in The Ironbound Vampire (1997) and Dimensions in Fear (1998). Fuller had already had earlier experience on television in Queen for a Day and The Dinah Shore Show. She also appeared on an episode of It's a Great Life as "the blonde in the mink coat." Fuller's ability as a songwriter manifested itself through the intervention of her friend, producer Hal Wallis; Fuller had wanted to get an acting role in the Elvis Presley movie Blue Hawaii, which Wallis was producing, but instead he put her in touch with Hill & Range, the publisher that provided Presley with songs. Fuller went into a collaborative partnership with composer Ben Weisman and co-wrote one song, "Rock-A-Hula Baby", for the film. Over time, this led to Presley recording a dozen of her songs, including "I Got Lucky" and "Spinout", primarily for his film soundtracks, though he also recorded "Cindy, Cindy" for his 1971 album Love Letters From Elvis. Fuller's music was also recorded by Nat 'King' Cole, Peggy Lee, and other leading talents of the time. Toward the end of her life, Dolores helped edit and score Crossroads of Laredo, a short western film that Ed Wood had begun in the 1940s but never completed. In 1941, Dolores married Donald Fuller, with whom she had two children. At the time she met Ed Wood, she was in the process of divorcing her husband (they finally divorced in 1955). 
She and Wood shared an apartment for several years. Wood biographer Rudolph Grey quotes Fuller as saying of the period before her success, He [Ed Wood] begged me to marry him. I loved him in a way, but I couldn't handle the transvestism. I'm a very normal person. It's hard for me to deviate! I wanted a man that was all man. After we broke up, he would stand outside my home in Burbank and cry. "Let me in, I love you!" What good would I have done if I had married him? We would have starved together. I bettered myself. I had to uplift myself. She has also been quoted as saying that "His dressing up didn't bother me—we all have our little queer habits" and giving Wood's drinking as the reason for their breakup. Dolores remarried in 1988, at age 65, to Philip Chamberlin, and they remained married until her death in 2011. Fuller's autobiography, A Fuller Life: Hollywood, Ed Wood and Me, co-authored by Winnipeg writer Stone Wallace and her husband Philip Chamberlin, was published in 2008. Fuller was portrayed by Sarah Jessica Parker in Tim Burton's 1994 Wood biographical film Ed Wood, a portrayal she disapproved of because it depicted her smoking; Fuller said she was a lifelong non-smoker. She also complained that she was portrayed only "sort of as an actress" and did not feel she was given credit for her other accomplishments and contributions towards Wood's career. However, she stated that she liked the film overall, praising Johnny Depp's performance in the title role. Songs recorded by Elvis Presley with lyrics by Dolores Fuller: According to AllMusic, other songs co-written by her include "I'll Touch a Star" by Terry Stafford, "Lost Summer Love" by Shelley Fabares and "Someone to Tell It To" by Nat King Cole.
[ { "paragraph_id": 0, "text": "Dolores Agnes Fuller (née Eble, later Chamberlin; March 10, 1923 – May 9, 2011) was an American actress and songwriter known as the one-time girlfriend of the low-budget film director Ed Wood. She played the protagonist's girlfriend in Glen or Glenda, co-starred in Wood's Jail Bait, and had a minor role in his Bride of the Monster. After she broke up with Wood in 1955, she relocated to New York and had a very successful career there as a songwriter. Elvis Presley recorded a number of her songs written for his films.", "title": "" }, { "paragraph_id": 1, "text": "Her first screen appearance was at the age of 10, when she appeared briefly in Frank Capra's It Happened One Night. According to Fuller, the female lead in Bride of the Monster was written for her but Wood gave it to Loretta King instead.", "title": "Film career" }, { "paragraph_id": 2, "text": "In August 1954, Fuller was cast in Wood's The Vampire's Tomb, intended to star Bela Lugosi. Frank Yaconelli was named as her co-star and 'comic killer'. The film was never made. She ended up making an appearance in Bride of the Monster (1956), also with Lugosi. Fuller hosted a benefit for Lugosi which preceded the showing of Bride of the Atom (early working title of Bride of the Monster) on May 11, 1955. A cocktail party was held at the Gardens Restaurant at 4311 Magnolia Avenue in Burbank, California. Vampira attended and was escorted by Paul Marco. A single screening of the film was presented at the Hollywood Paramount.", "title": "Film career" }, { "paragraph_id": 3, "text": "According to Fuller, as quoted in Wood biography Nightmare of Ecstasy (1992), she first met Ed Wood when she attended a casting call with a friend for a movie he was supposed to direct called Behind Locked Doors (which he did not go on to direct); it has also been stated that they met in a restaurant.", "title": "Film career" }, { "paragraph_id": 4, "text": "She became his girlfriend shortly thereafter and began acting in his films. Her movie career included a bit part in It Happened One Night (1934) and roles in Outlaw Women (1952), Glen or Glenda (1953), Body Beautiful (1953), The Blue Gardenia (1953), Count the Hours (1953), Mesa of Lost Women (1953), College Capers (1954), Jail Bait (1954), The Raid (1954), This Is My Love (1954), The Opposite Sex (1956), and many years later appearances in The Ironbound Vampire (1997) and Dimensions in Fear (1998).", "title": "Film career" }, { "paragraph_id": 5, "text": "Fuller had already had earlier experience on television in Queen for a Day and The Dinah Shore Show.", "title": "Television performer and songwriter" }, { "paragraph_id": 6, "text": "She also appeared on an episode of It's a Great Life as \"the blonde in the mink coat.\"", "title": "Television performer and songwriter" }, { "paragraph_id": 7, "text": "Fuller's ability as a songwriter manifested itself through the intervention of her friend, producer Hal Wallis; Fuller had wanted to get an acting role in the Elvis Presley movie Blue Hawaii, which Wallis was producing, but instead he put her in touch with Hill & Range, the publisher that provided Presley with songs. Fuller went into a collaborative partnership with composer Ben Weisman and co-wrote one song, \"Rock-A-Hula Baby\", for the film. Over time, this led to Presley recording a dozen of her songs, including \"I Got Lucky\" and \"Spinout\", primarily for his film soundtracks, though he also recorded \"Cindy, Cindy\" for his 1971 album Love Letters From Elvis. 
Fuller's music was also recorded by Nat 'King' Cole, Peggy Lee, and other leading talents of the time. Toward the end of her life, Dolores helped edit and score a short western film Ed Wood had begun, but never completed, in the 1940s called Crossroads of Laredo", "title": "Television performer and songwriter" }, { "paragraph_id": 8, "text": "Dolores married Donald Fuller in 1941, with whom she had two children. At the time she met Ed Wood, she was in the process of divorcing her husband (they finally divorced in 1955). She and Wood shared an apartment together for several years. Wood biographer Rudolph Grey quotes Fuller as saying of the period before her success,", "title": "Private life" }, { "paragraph_id": 9, "text": "He [Ed Wood] begged me to marry him. I loved him in a way, but I couldn't handle the transvestism. I'm a very normal person. It's hard for me to deviate! I wanted a man that was all man. After we broke up, he would stand outside my home in Burbank and cry. \"Let me in, I love you!\" What good would I have done if I had married him? We would have starved together. I bettered myself. I had to uplift myself.", "title": "Private life" }, { "paragraph_id": 10, "text": "She has also been quoted as saying that \"His dressing up didn't bother me—we all have our little queer habits\" and giving Wood's drinking as the reason for their breakup.", "title": "Private life" }, { "paragraph_id": 11, "text": "Dolores remarried in 1988 at age 65, to Philip Chamberlin, and they remained married until her death in 2011. Fuller's autobiography, A Fuller Life: Hollywood, Ed Wood and Me, co-authored by Winnipeg writer Stone Wallace and her husband Philip Chamberlin, was published in 2008.", "title": "Private life" }, { "paragraph_id": 12, "text": "Fuller was portrayed by Sarah Jessica Parker in Tim Burton's 1994 Wood biographical film Ed Wood, a portrayal of which she disapproved due to the fact that she was depicted smoking in the film, while Fuller said she herself was a lifelong non-smoker. She also complained that she was only portrayed as \"sort of as an actress\" and did not feel she was given credit for her other accomplishments and contributions towards Wood's career. However, she stated that she liked the film overall, praising Johnny Depp's performance in the title role.", "title": "Portrayal in Ed Wood" }, { "paragraph_id": 13, "text": "Songs recorded by Elvis Presley with lyrics by Dolores Fuller:", "title": "Discography" }, { "paragraph_id": 14, "text": "According to AllMusic, other songs co-written by her include \"I'll Touch a Star\" by Terry Stafford, \"Lost Summer Love\" by Shelley Fabares and \"Someone to Tell It To\" by Nat King Cole.", "title": "Discography" } ]
Dolores Agnes Fuller was an American actress and songwriter known as the one-time girlfriend of the low-budget film director Ed Wood. She played the protagonist's girlfriend in Glen or Glenda, co-starred in Wood's Jail Bait, and had a minor role in his Bride of the Monster. After she broke up with Wood in 1955, she relocated to New York and had a very successful career there as a songwriter. Elvis Presley recorded a number of her songs written for his films.
2002-01-22T21:09:57Z
2023-10-26T09:47:59Z
[ "Template:Reflist", "Template:Cite news", "Template:Cite web", "Template:Use American English", "Template:Use mdy dates", "Template:More citations needed", "Template:Infobox person", "Template:Née", "Template:ISBN", "Template:AllMusic", "Template:Authority control", "Template:Short description", "Template:IMDb name" ]
https://en.wikipedia.org/wiki/Dolores_Fuller
9,160
De jure
In law and government, de jure (/deɪ ˈdʒʊəri, di -, - ˈjʊər-/ day JOOR-ee, dee -, - YOOR-ee, Latin: [deː ˈjuːre]; lit. 'by law') describes practices that are legally recognized, regardless of whether the practice exists in reality. In contrast, de facto ('in fact') describes situations that exist in reality, even if not formally recognized. Between 1805 and 1914, the ruling dynasty of Egypt was subject to the rulers of the Ottoman Empire, but acted as de facto independent rulers who maintained a polite fiction of Ottoman suzerainty. However, starting from around 1882, the rulers had only de jure rule over Egypt, as it had by then become a British puppet state. Thus, by Ottoman law, Egypt was de jure a province of the Ottoman Empire, but de facto was part of the British Empire. In U.S. law, particularly after Brown v. Board of Education (1954), the distinction between de facto segregation (segregation that existed because of voluntary associations and neighborhoods) and de jure segregation (segregation that existed because local laws mandated it) became important for court-mandated remedial purposes. In a hypothetical situation, a king or emperor could be the de jure head of state. However, if the sovereign is unfit to rule the country, the prime minister or chancellor would usually become the practical, or de facto, leader, while the monarch remains the de jure leader.
[ { "paragraph_id": 0, "text": "In law and government, de jure (/deɪ ˈdʒʊəri, di -, - ˈjʊər-/ day JOOR-ee, dee -, - YOOR-ee, Latin: [deː ˈjuːre]; lit. 'by law') describes practices that are legally recognized, regardless of whether the practice exists in reality. In contrast, de facto ('in fact') describes situations that exist in reality, even if not formally recognized.", "title": "" }, { "paragraph_id": 1, "text": "Between 1805 and 1914, the ruling dynasty of Egypt were subject to the rulers of the Ottoman Empire, but acted as de facto independent rulers who maintained a polite fiction of Ottoman suzerainty. However, starting from around 1882, the rulers had only de jure rule over Egypt, as it had by then become a British puppet state. Thus, by Ottoman law, Egypt was de jure a province of the Ottoman Empire, but de facto was part of the British Empire.", "title": "Examples" }, { "paragraph_id": 2, "text": "In U.S. law, particularly after Brown v. Board of Education (1954), the difference between de facto segregation (segregation that existed because of the voluntary associations and neighborhoods) and de jure segregation (segregation that existed because of local laws that mandated the segregation) became important distinctions for court-mandated remedial purposes.", "title": "Examples" }, { "paragraph_id": 3, "text": "In a hypothetical situation, a king or emperor could be the de jure head of state. However, if the sovereign is unfit to rule the country, the prime minister or chancellor would usually become the practical, or de facto, leader, while the king remains the de jure leader. For example, Edward V was de jure King of England for a part of 1483, but he was never crowned and his uncle Richard III was the de facto king during this period.", "title": "Examples" } ]
In law and government, de jure describes practices that are legally recognized, regardless of whether the practice exists in reality. In contrast, de facto describes situations that exist in reality, even if not formally recognized.
2002-01-25T00:30:55Z
2023-12-04T21:23:34Z
[ "Template:Literal translation", "Template:Reflist", "Template:Cite web", "Template:Distinguish", "Template:Use dmy dates", "Template:Wiktionary", "Template:IPAc-en", "Template:IPA", "Template:No wrap", "Template:Portal", "Template:Cite book", "Template:Short description", "Template:Italic title", "Template:Respell" ]
https://en.wikipedia.org/wiki/De_jure
9,163
Des Moines, Iowa
Des Moines (/dəˈmɔɪn/) is the capital and the most populous city in Iowa, United States. It is also the county seat of Polk County. A small part of the city extends into Warren County. It was incorporated on September 22, 1851, as Fort Des Moines, which was shortened to "Des Moines" in 1857. It is located on, and named after, the Des Moines River, which was likely adapted from the early French name, Rivière des Moines, meaning "River of the Monks". The city's population was 214,133 as of the 2020 census. The six-county metropolitan area is ranked 81st in terms of population in the United States, with 709,466 residents according to the 2020 census by the United States Census Bureau, and is the largest metropolitan area fully located within the state. Des Moines is a major center of the US insurance industry and has a sizable financial-services and publishing business base. The city was credited as the "number one spot for U.S. insurance companies" in a Business Wire article and named the third-largest "insurance capital" of the world. The city is the headquarters for the Principal Financial Group, Ruan Transportation, TMC Transportation, EMC Insurance Companies, and Wellmark Blue Cross Blue Shield. Other major corporations such as Wells Fargo, Cognizant, Voya Financial, Nationwide Mutual Insurance Company, ACE Limited, Marsh, Monsanto, and Corteva have large operations in or near the metropolitan area. In recent years, Microsoft, Hewlett-Packard, and Facebook have built data-processing and logistical facilities in the Des Moines area. Des Moines is an important city in U.S. presidential politics; as the state's capital, it is the site of the first caucuses of the presidential primary cycle. Many presidential candidates set up campaign headquarters in Des Moines. A 2007 article in The New York Times said, "If you have any desire to witness presidential candidates in the most close-up and intimate of settings, there is arguably no better place to go than Des Moines." Des Moines takes its name from Fort Des Moines (1843–46), which was named for the Des Moines River. This was adopted from the name given by French colonists. Des Moines (pronounced [de mwan]; formerly [de mwɛn]) translates literally to either "from the monks" or "of the monks". One popular interpretation of "Des Moines" concludes that it refers to a group of French Trappist monks, who in the 17th century lived in huts built on top of what is now known as the ancient Monks Mound at Cahokia, the major center of Mississippian culture, which developed in what is present-day Illinois, east of the Mississippi River and the city of St. Louis. This was some 200 miles (320 km) from the Des Moines River. Based on archaeological evidence, the junction of the Des Moines and Raccoon Rivers has attracted humans for at least 7,000 years. Several prehistoric occupation areas have been identified by archaeologists in downtown Des Moines. Discovered in December 2010, the "Palace" is an expansive, 7,000-year-old site found during excavations prior to construction of the new wastewater treatment plant in southeastern Des Moines. It contains well-preserved house deposits and numerous graves. More than 6,000 artifacts were found at this site. State of Iowa archaeologist John Doershuk was assisted by University of Iowa archaeologists at this dig. At least three Late Prehistoric villages, dating from about AD 1300 to 1700, stood in or near what later developed as downtown Des Moines. 
In addition, 15 to 18 prehistoric American Indian mounds were observed in this area by early settlers. All have been destroyed during development of the city. Des Moines traces its origins to May 1843, when Captain James Allen supervised the construction of a fort on the site where the Des Moines and Raccoon Rivers merge. Allen wanted to use the name Fort Raccoon; however, the U.S. War Department preferred Fort Des Moines. The fort was built to control the Sauk and Meskwaki tribes, whom the government had moved to the area from their traditional lands in eastern Iowa. The fort was abandoned in 1846 after the Sauk and Meskwaki were removed from the state and shifted to the Indian Territory. The Sauk and Meskwaki did not fare well in Des Moines. The illegal whiskey trade, combined with the destruction of traditional lifeways, led to severe problems for their society. One newspaper reported: "It is a fact that the location of Fort Des Moines among the Sac and Fox Indians (under its present commander) for the last two years, had corrupted them more and lowered them deeper in the scale of vice and degradation, than all their intercourse with the whites for the ten years previous". After official removal, the Meskwaki continued to return to Des Moines until around 1857. Archaeological excavations have shown that many fort-related features survived under what is now Martin Luther King Jr. Parkway and First Street. Soldiers stationed at Fort Des Moines opened the first coal mines in the area, mining coal from the riverbank for the fort's blacksmith. Settlers occupied the abandoned fort and nearby areas. On May 25, 1846, the state legislature designated Fort Des Moines as the seat of Polk County. Arozina Perkins, a school teacher who spent the winter of 1850–1851 in the town of Fort Des Moines, was not favorably impressed: This is one of the strangest looking "cities" I ever saw... This town is at the juncture of the Des Moines and Raccoon Rivers. It is mostly a level prairie with a few swells or hills around it. We have a court house of "brick" and one church, a plain, framed building belonging to the Methodists. There are two taverns here, one of which has a most important little bell that rings together some fifty boarders. I cannot tell you how many dwellings there are, for I have not counted them; some are of logs, some of brick, some framed, and some are the remains of the old dragoon houses... The people support two papers and there are several dry goods shops. I have been into but four of them... Society is as varied as the buildings are. There are people from nearly every state, and Dutch, Swedes, etc. In May 1851, much of the town was destroyed during the Flood of 1851. "The Des Moines and Raccoon Rivers rose to an unprecedented height, inundating the entire country east of the Des Moines River. Crops were utterly destroyed, houses and fences swept away." The city started to rebuild from scratch. On September 22, 1851, Des Moines was incorporated as a city; the charter was approved by voters on October 18. In 1857, the name "Fort Des Moines" was shortened to "Des Moines", and it was designated as the second state capital, previously at Iowa City. Growth was slow during the Civil War period, but the city exploded in size and importance after a railroad link was completed in 1866. In 1864, the Des Moines Coal Company was organized to begin the first systematic mining in the region. Its first mine, north of town on the river's west side, was exhausted by 1873. 
The Black Diamond mine, near the south end of the West Seventh Street Bridge, sank a 150-foot (46 m) mine shaft to reach a 5-foot-thick (1.5 m) coal bed. By 1876, this mine employed 150 men and shipped 20 carloads of coal per day. By 1885, numerous mine shafts were within the city limits, and mining began to spread into the surrounding countryside. By 1893, 23 mines were in the region. By 1908, Des Moines' coal resources were largely exhausted. In 1912, Des Moines still had eight locals of the United Mine Workers union, representing 1,410 miners. This was about 1.7% of the city's population in 1910. By 1880, Des Moines had a population of 22,408, making it Iowa's largest city. It displaced the three Mississippi River ports of Burlington, Dubuque, and Davenport, which had alternated holding the position since the territorial period. Des Moines has remained Iowa's most populous city. In 1910, the Census Bureau reported Des Moines' population as 97.3% white and 2.7% black, reflecting its early settlement pattern primarily by ethnic Europeans. At the turn of the 20th century, encouraged by the Civic Committee of the Des Moines Women's Club, Des Moines undertook a "City Beautiful" project in which large Beaux Arts public buildings and fountains were constructed along the Des Moines River. The former Des Moines Public Library building (now the home of the World Food Prize); the United States central Post Office, built by the federal government (now the Polk County Administrative Building, with a newer addition); and the City Hall are surviving examples of the 1900–1910 buildings. They form the Civic Center Historic District. The ornate riverfront balustrades that line the Des Moines and Raccoon Rivers were built by the federal Civilian Conservation Corps in the mid-1930s, during the Great Depression under Democratic President Franklin D. Roosevelt, as a project to provide local employment and improve infrastructure. The ornamental fountains that stood along the riverbank were buried in the 1950s when the city began a postindustrial decline that lasted until the late 1980s. The city has since rebounded, transforming from a blue-collar industrial city to a white-collar professional city. In 1907, the city adopted a city commission government known as the Des Moines Plan, comprising an elected mayor and four commissioners, all elected at large, who were responsible for public works, public property, public safety, and finance. Considered progressive at the time, it diluted the votes of ethnic and national minorities, who generally could not command the majority to elect a candidate of their choice. That form of government was scrapped in 1950 in favor of a council-manager government, with the council members elected at large. In 1967, the city changed its government to elect four of the seven city council members from single-member districts or wards, rather than at large. This enabled a broader representation of voters. As with many major urban areas, the city core began losing population to the suburbs in the 1960s (the peak population of 208,982 was recorded in 1960), as highway construction led to new residential construction outside the city. The population was 198,682 in 2000 and grew slightly to 200,538 in 2009. The growth of the outlying suburbs has continued, and the overall metropolitan-area population is over 700,000 today. During the Great Flood of 1993, heavy rains throughout June and early July caused the Des Moines and Raccoon Rivers to rise above flood stage levels. 
The Des Moines Water Works was submerged by floodwaters during the early morning hours of July 11, 1993, leaving an estimated 250,000 people without running water for 12 days and without drinking water for 20 days. Des Moines suffered major flooding again in June 2008 with a major levee breach. The Des Moines River is controlled upstream by Saylorville Reservoir; in both 1993 and 2008, the flooding river overtopped the reservoir spillway.

Today, Des Moines is a member of ICLEI Local Governments for Sustainability USA. Through ICLEI, Des Moines has implemented "The Tomorrow Plan", a regional plan focused on developing central Iowa in a sustainable fashion, with coordinated growth and managed resource consumption.

The skyline of Des Moines changed in the 1970s and the 1980s, when several new skyscrapers were built. Additional skyscrapers were built in the 1990s, including Iowa's tallest. Before then, the 19-story Equitable Building, from 1924, was the tallest building in the city and the tallest building in Iowa. The 25-story Financial Center was completed in 1973, and the 36-story Ruan Center was completed in 1974. They were later joined by the 33-story Des Moines Marriott Hotel (1981) and the 25-story HUB Tower and Plaza Building (both 1985). Iowa's tallest building, Principal Financial Group's 45-story tower at 801 Grand, was built in 1991, and the 19-story EMC Insurance Building was erected in 1997.

During this time period, the Civic Center of Greater Des Moines (1979) was developed; it hosts Broadway shows and special events. Also constructed were the Greater Des Moines Botanical Garden (1979), a large city botanical garden/greenhouse on the east side of the river; the Polk County Convention Complex (1985); and the State of Iowa Historical Museum (1987). The Des Moines skywalk also began to take shape during the 1980s. The skywalk system is 4 miles (6.4 km) long and connects many downtown buildings.

In the early 21st century, the city has seen more major construction in the downtown area. The new Science Center of Iowa and Blank IMAX Dome Theater and the Iowa Events Center opened in 2005. The new central branch of the Des Moines Public Library, designed by renowned architect David Chipperfield of London, opened on April 8, 2006.

The World Food Prize Foundation, which is based in Des Moines, completed adaptation and restoration of the former Des Moines Public Library building in October 2011. The former library now serves as the home and headquarters of the Norman Borlaug/World Food Prize Hall of Laureates.

According to the United States Census Bureau, the city has an area of 90.65 square miles (234.78 km²), of which 88.93 square miles (230.33 km²) is land and 1.73 square miles (4.48 km²) is covered by water. It is 850 feet (260 m) above sea level at the confluence of the Raccoon and Des Moines Rivers.

In November 2005, Des Moines voters approved a measure that allowed the city to annex parcels of land in the northeast, southeast, and southern corners of Des Moines without agreement by local residents, particularly areas bordering the Iowa Highway 5/U.S. 65 bypass. The annexations became official on June 26, 2009, as 5,174 acres (20.94 km²) and around 868 new residents were added to the city of Des Moines. An additional 759 acres (3.07 km²) were voluntarily annexed to the city over that same period.
Des Moines anchors both the Des Moines-West Des Moines Metropolitan Statistical Area and the broader Des Moines-Ames-West Des Moines Combined Statistical Area.

Des Moines' suburban communities include Altoona, Ankeny, Bondurant, Carlisle, Clive, Grimes, Johnston, Norwalk, Pleasant Hill, Urbandale, Waukee, West Des Moines, and Windsor Heights.

At the center of North America and far removed from large bodies of water, the Des Moines area has a hot-summer humid continental climate (Köppen Dfa), with warm to hot, humid summers and cold, dry winters. Summer temperatures can often climb into the 90 °F (32 °C) range, occasionally reaching 100 °F (38 °C). Humidity can be high in spring and summer, with frequent afternoon thunderstorms. Fall brings pleasant temperatures and colorful fall foliage. Winters vary from moderately cold to bitterly cold, with low temperatures dipping below 0 °F (−18 °C) quite often. Snowfall averages 36.5 inches (93 cm) per season, and annual precipitation averages 36.55 inches (928 mm), with a peak in the warmer months. Winters are slightly colder than in Chicago but warmer than in Minneapolis, while summer temperatures are very similar across these Upper Midwest metropolitan areas.

As of the census of 2020, the population was 214,133. The population density was 2,428.4 per square mile (937.6/km²). There were 93,052 housing units at an average density of 1,055.3 per square mile (407.4/km²). The racial makeup was 64.54% (138,200) white, 11.68% (25,011) black or African-American, 0.69% (1,474) Native American, 6.76% (14,474) Asian, 0.06% (135) Pacific Islander, 6.62% (14,178) from other races, and 9.65% (20,661) from two or more races. Hispanic or Latino residents of any race made up 14.0% (30,105) of the population.

The 2020 census population of the city included 252 people incarcerated in adult correctional facilities and 2,378 people in student housing.

According to the American Community Survey estimates for 2016–2020, the median income for a household in the city was $54,843, and the median income for a family was $66,420. Male full-time workers had a median income of $47,048 versus $40,290 for female workers. The per capita income for the city was $29,064. About 12.1% of families and 16.0% of the population were below the poverty line, including 24.3% of those under age 18 and 9.8% of those age 65 or over. Of the population age 25 and over, 86.7% were high school graduates or higher and 27.9% had a bachelor's degree or higher.

As of the census of 2010, there were 203,433 people, 81,369 households, and 47,491 families residing in the city. The population density was 2,515.6 inhabitants per square mile (971.3/km²). There were 88,729 housing units at an average density of 1,097.2 per square mile (423.6/km²). For unincorporated areas not merged with the city proper, the racial makeup was 66.2% White, 15.5% African American, 0.5% Native American, 4.0% Asian, and 2.6% from two or more races, with people of Hispanic or Latino origin, of any race, making up 12.1% of the population. The city's racial makeup during the 2010 census was 76.4% White, 10.2% African American, 0.5% Native American, 4.4% Asian (1.2% Vietnamese, 0.9% Laotian, 0.4% Burmese, 0.3% Asian Indian, 0.3% Thai, 0.2% Chinese, 0.2% Cambodian, 0.2% Filipino, 0.1% Hmong, 0.1% Korean, 0.1% Nepalese), 0.1% Pacific Islander, 5.0% from other races, and 3.4% from two or more races.
People of Hispanic or Latino origin, of any race, formed 12.0% of the population (9.4% Mexican, 0.7% Salvadoran, 0.3% Guatemalan, 0.3% Puerto Rican, 0.1% Honduran, 0.1% Ecuadorian, 0.1% Cuban, 0.1% Spaniard, 0.1% Spanish). Non-Hispanic Whites were 70.5% of the population in 2010. Des Moines also has a sizeable South Sudanese community.

There were 81,369 households, of which 31.6% had children under the age of 18 living with them, 38.9% were married couples living together, 14.2% had a female householder with no husband present, 5.3% had a male householder with no wife present, and 41.6% were non-families. 32.5% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.43, and the average family size was 3.11.

The median age in the city was 33.5 years. 24.8% of residents were under the age of 18; 10.9% were between the ages of 18 and 24; 29.4% were from 25 to 44; 23.9% were from 45 to 64; and 11% were 65 years of age or older. The gender makeup of the city was 48.9% male and 51.1% female.

As of the 2000 census, there were 198,682 people, 80,504 households, and 48,704 families in the city. The population density was 2,621.3 inhabitants per square mile (1,012.1/km²). There were 85,067 housing units at an average density of 1,122.3 per square mile (433.3/km²). The racial makeup of the city was 82.3% white, 8.07% Black, 0.35% American Indian, 3.50% Asian, 0.05% Pacific Islander, 3.52% from other races, and 2.23% from two or more races. 6.61% of the population were Hispanic or Latino of any race. According to Census 2000, 20.9% of the population were of German, 10.3% Irish, 9.1% "American", and 8.0% English ancestry.

There were 80,504 households, out of which 29.5% had children under the age of 18 living with them, 43.7% were married couples living together, 12.6% had a female householder with no husband present, and 39.5% were non-families. 31.9% of all households were made up of individuals, and 10.2% had someone living alone who was 65 years of age or older. The average household size was 2.39, and the average family size was 3.04.

The age distribution was 24.8% under the age of 18, 10.6% from 18 to 24, 31.8% from 25 to 44, 20.4% from 45 to 64, and 12.4% who were 65 years of age or older. The median age was 34 years. For every 100 females, there were 93.8 males. For every 100 females age 18 and over, there were 90.5 males.

The median income for a household in the city was $38,408, and the median income for a family was $46,590. Males had a median income of $31,712 versus $25,832 for females. The per capita income for the city was $19,467. About 7.9% of families and 11.4% of the population were below the poverty line, including 14.9% of those under age 18 and 7.6% of those ages 65 or over.

Many insurance companies are headquartered in Des Moines, including the Principal Financial Group, EMC Insurance Group, Fidelity & Guaranty Life, Allied Insurance, GuideOne Insurance, Wellmark Blue Cross Blue Shield of Iowa, FBL Financial Group, and American Republic Insurance Company. Iowa has one of the lowest insurance premium taxes in the nation at 1%, and does not charge any premium taxes on qualified life insurance plans, making the state attractive to the insurance industry. Because of this, Des Moines has been referred to as the "Hartford of the West" and the "Insurance Capital". Principal is one of two Fortune 500 companies with headquarters in Iowa (the other being Casey's General Stores), ranking 201st on the magazine's list in 2020.
As a center of financial and insurance services, the Des Moines metro area also hosts major corporations headquartered outside of Iowa, including Wells Fargo, Voya Financial, and Electronic Data Systems (EDS). The Meredith Corporation, a leading publishing and marketing company, was also based in Des Moines prior to its acquisition by IAC and merger with Dotdash in 2021. Meredith published Better Homes and Gardens, one of the most widely circulated publications in the United States. Des Moines was also the headquarters of Golf Digest magazine.

Other major employers in Des Moines include UnityPoint Health, Mercy Medical Center, MidAmerican Energy Company, CDS Global, UPS, Firestone Agricultural Tire Company, EDS, Drake University, Titan Tire, The Des Moines Register, Anderson Erickson, Dee Zee, and EMCO. In 2017, Kemin Industries opened a state-of-the-art worldwide headquarters building in Des Moines.

The City of Des Moines is a cultural center for Iowa and home to several art and history museums and performing arts groups. Des Moines Performing Arts routinely hosts touring Broadway shows and other live professional theater. Its president and CEO, Jeff Chelsvig, is a member of the League of American Theatres and Producers, Inc. The Temple for Performing Arts and the Des Moines Playhouse are other venues for live theater, comedy, and performance arts.

The Des Moines Metro Opera has been a cultural resource in Des Moines since 1973. The Opera offers educational and outreach programs and is one of the largest performing arts organizations in the state. Ballet Des Moines was established in 2002. Performing three productions each year, the Ballet also provides opportunities for education and outreach.

The Des Moines Symphony performs frequently at different venues. In addition to performing seven pairs of classical concerts each season, the Symphony also entertains with New Year's Eve Pops and its annual Yankee Doodle Pops concerts. Jazz in July is an annual event, founded in 1969, that presents free jazz shows daily at venues throughout the city during July.

Wells Fargo Arena has been the Des Moines area's primary venue for sporting events and concerts since its opening in 2005. Named for title sponsor Wells Fargo Financial Services, Wells Fargo Arena holds 16,980 people and books large, national touring acts for arena concert performances, while several smaller venues host local, regional, and national bands. It is the home of the Iowa Wolves of the NBA G League, the Iowa Wild of the American Hockey League, and the Iowa Barnstormers of the Indoor Football League.

The Simon Estes Riverfront Amphitheater is an outdoor concert venue on the east bank of the Des Moines River that hosts music events such as the Alive Concert Series.

The Des Moines Art Center, with a wing designed by architect I. M. Pei, presents art exhibitions and educational programs as well as studio art classes. The Center houses a collection of artwork from the 19th century to the present. An extension of the art center is located downtown in an urban museum space, featuring three or four exhibitions each year.

The Pappajohn Sculpture Park was established in 2009. It showcases a collection of 24 sculptures donated by Des Moines philanthropists John and Mary Pappajohn. Nearby is the Temple for Performing Arts, a cultural center for the city. Next to the Temple is the 117,000-square-foot (10,900 m²) Central Library, designed by renowned English architect David Chipperfield.
Salisbury House and Gardens is a 42-room historic house museum on 10 acres (4 ha) of woodlands in the South of Grand neighborhood of Des Moines. It is named after, and loosely inspired by, King's House in Salisbury, England. Built in the 1920s by cosmetics magnate Carl Weeks and his wife, Edith, the Salisbury House contains authentic 16th-century English oak and rafters dating to Shakespeare's day, numerous other architectural features re-purposed from other historic English homes, and an internationally significant collection of original fine art, tapestries, decorative art, furniture, musical instruments, and rare books and documents. The Salisbury House is listed on the National Register of Historic Places and has been featured on A&E's America's Castles and PBS's Antiques Roadshow. Prominent artists in the Salisbury House collection include Joseph Stella, Lillian Genth, Anthony van Dyck, and Lawrence Alma-Tadema.

Built in 1877 by prominent pioneer businessman Hoyt Sherman, the Hoyt Sherman Place mansion was Des Moines' first public art gallery and houses a distinctive collection of 19th- and 20th-century artwork. Its restored 1,250-seat theater features an intricate rococo plaster ceiling and excellent acoustics and is used for a variety of cultural performances and entertainment.

Rising on the east side of the river and facing westward toward downtown, the Iowa State Capitol building, with its 275-foot (84 m), 23-karat gold-leafed dome towering above the city, is a favorite of sightseers. Four smaller domes flank the main dome. The Capitol houses the governor's offices, the legislature, and the old Supreme Court chambers. The ornate interior also features a grand staircase, the mural "Westward", a five-story law library, a scale model of the USS Iowa, and a collection of first lady dolls. Guided tours are available. The Capitol grounds include a World War II memorial with a sculpture and Wall of Memories, the 1894 Soldiers and Sailors Monument of the Civil War, and memorials honoring those who served in the Spanish–American, Korean, and Vietnam Wars.

The West Capitol Terrace provides the entrance from the west to the state's grandest building, the State Capitol. The 10-acre (4 ha) "people's park" at the foot of the Capitol complex includes a promenade and landscaped gardens, in addition to providing public space for rallies and special events. A granite map of Iowa depicting all 99 counties rests at the base of the terrace and has become an attraction for in-state visitors, many of whom walk over the map to find their home county.

Iowa's history lives on in the State of Iowa Historical Museum. This modern granite and glass structure at the foot of the State Capitol houses permanent and temporary exhibits exploring the people, places, events, and issues of Iowa's past. The showcase includes native wildlife, American Indian and pioneer artifacts, and political and military items. The museum also features a genealogy and Iowa history library, a gift shop, and a cafe.

Terrace Hill, a National Historic Landmark and the Iowa Governor's Residence, is among the best examples of American Victorian Second Empire architecture. This opulent 1869 home was built by Iowa's first millionaire, Benjamin F. Allen, and has been restored to its late-19th-century appearance. It overlooks downtown Des Moines and is situated on 8 acres (3.2 ha) with a re-created Victorian formal garden. Tours are conducted Tuesdays through Saturdays from March through December.
The 110,000-square-foot (10,000 m²) Science Center of Iowa and Blank IMAX Dome Theater offers seven interactive learning areas, live programs, and hands-on activities encouraging learning and fun for all ages. Its three theaters are the 216-seat Blank IMAX Dome Theater, the 175-seat John Deere Adventure Theater, which features live performances, and a 50-foot (15 m) domed Star Theater.

The Greater Des Moines Botanical Garden, an indoor conservatory of over 15,000 exotic plants, houses one of the largest collections of tropical, subtropical, and desert-growing plants in the Midwest. The Center blooms with thousands of flowers year-round. Nearby are the Robert D. Ray Asian Gardens and Pavilion, named in honor of the former governor whose influence helped relocate thousands of Vietnamese refugees to Iowa homes in the 1970s and 1980s. Developed by the city's Asian community, the Gardens include a three-story Chinese pavilion, bonsai landscaping, and granite sculptures to highlight the importance of diversity and recognize Asian American contributions in Iowa.

Blank Park Zoo is a landscaped 22-acre (8.9 ha) zoological park on the south side. Its exhibits include a tropical rain forest, the Australian Outback, and Africa. The Zoo offers education classes, tours, and rental facilities. The Iowa Primate Learning Sanctuary was established as a scientific research facility with a 230-acre (93 ha) campus housing bonobos and orangutans for the noninvasive interdisciplinary study of their cognitive and communicative capabilities.

The East Village, on the east side of the Des Moines River, begins at the river and extends about five blocks east to the State Capitol, offering an eclectic blend of historic buildings, hip eateries, boutiques, art galleries, and a wide variety of other retail establishments mixed with residences.

Adventureland Park is an amusement park in neighboring Altoona, just northeast of Des Moines. The park boasts more than 100 rides, shows, and attractions, including six rollercoasters. A hotel and a campground are just outside the park. Also in Altoona is Prairie Meadows Racetrack and Casino, an entertainment venue for gambling and horse racing. Open 24 hours a day, year-round, the racetrack and casino features live racing, plus over 1,750 slot machines, table games, and concert and show entertainment. The racetrack hosts two Grade III races annually, the Iowa Oaks and the Cornhusker Handicap.

Living History Farms in suburban Urbandale tells the story of Midwestern agriculture and rural life in a 500-acre (2.0 km²) open-air museum with interpreters dressed in period costume who recreate the daily routines of early Iowans. Open daily from May through October, the Living History Farms include a 1700 Ioway Indian village, an 1850 pioneer farm, an 1875 frontier town, a 1900 horse-powered farm, and a modern crop center.

Wallace House was the home of the first Henry Wallace, a national leader in agriculture and conservation and the first editor of the farm journal Wallaces' Farmer. This restored 1883 Italianate Victorian home houses exhibits, artifacts, and information covering four generations of Henry Wallaces and other family members.

Historic Jordan House in West Des Moines is a stately Victorian home built in 1850, and expanded in 1870, by James C. Jordan, the first white settler in West Des Moines.
Completely refurbished, this mansion was part of the Underground Railroad and today houses 16 period rooms, a railroad museum, exhibits on West Des Moines community history, and a museum dedicated to the Underground Railroad in Iowa. In 1893, Jordan's daughter Eda was sliding down the banister when she fell off and broke her neck. She died two days later, and her ghost is reputed to haunt the house.

The Chicago Tribune wrote that Iowa's capital city has "walker-friendly downtown streets and enough outdoor sculpture, sleek buildings, storefronts and cafes to delight the most jaded stroller".

Des Moines plays host to a growing number of nationally acclaimed cultural events, including the annual Des Moines Arts Festival in June, Metro Arts Jazz in July, the Iowa State Fair in August, and the World Food & Music Festival in September. On Saturdays from May through October, the Downtown Farmers' Market draws visitors from across the state. Local parades include the Saint Patrick's Day Parade, Drake Relays Parade, Capitol City Pride Parade, Iowa State Fair Parade, Labor Day Parade, and Beaverdale Fall Festival Parade. Other annual festivals and events include: Des Moines Beer Week, 80/35 Music Festival, 515 Alive Music Festival, ArtFest Midwest, Blue Ribbon Bacon Fest, CelebrAsian Heritage Festival, Des Moines Pride Festival, Des Moines Renaissance Faire, Festa Italiana, Festival of Trees and Lights, World Food & Music Festival, I'll Make Me a World Iowa, Latino Heritage Festival, Oktoberfest, Winefest, ImaginEve!, Iowa's Premier Beer, Wine & Food Show, and the Wild Rose Film Festival.

Des Moines operates under a council–manager form of government. The council consists of a mayor elected in a citywide vote, two at-large members, and four members, one representing each of the city's four wards. In 2014, Jonathan Gano was appointed Public Works Director; in 2015, Dana Wingert was appointed Police Chief; and in 2018, Steven L. Naber was appointed City Engineer.

A plan to merge the governments of Des Moines and Polk County was rejected by voters during the November 2, 2004, election. The consolidated city-county government would have had a full-time mayor and a 15-member council divided among the city and its suburbs. Each suburb would still have retained its individual government, with the option to join the consolidated government at any time. Although a full merger was soundly rejected, many city and county departments and programs have since been consolidated.

Des Moines has an extensive skywalk system within its downtown core. With over four miles of enclosed walkway, it is one of the largest such systems in the United States. The Des Moines Skywalk System has been criticized for hurting street-level business, though a recent initiative aims to make street-level skywalk entrances more visible.

Interstate 235 (I-235) cuts through the city, and I-35 and I-80 both pass through the Des Moines metropolitan area, as well as the city of Des Moines. On the northern side of the city and passing through the cities of Altoona, Clive, Johnston, Urbandale, and West Des Moines, I-35 and I-80 converge into a long concurrency, while I-235 takes a direct route through Des Moines, Windsor Heights, and West Des Moines before meeting up with I-35 and I-80 on the western edge of the metro. The Des Moines Bypass passes south and east of the city.
Other routes in and around the city include US 6, US 69, Iowa 28, Iowa 141, Iowa 163, Iowa 330, Iowa 415, and Iowa 160.

Des Moines's public transit system, operated by DART (Des Moines Area Regional Transit), which was the Des Moines Metropolitan Transit Authority until October 2006, consists entirely of buses, including regular in-city routes and express and commuter buses to outlying suburban areas.

Household car ownership in Des Moines is similar to national averages. In 2015, 8.5 percent of Des Moines households lacked a car; the figure increased to 9.6 percent in 2016. The national average was 8.7 percent in 2016. Des Moines averaged 1.71 cars per household in 2016, compared to a national average of 1.8.

Burlington Trailways and Jefferson Lines run long-distance, intercity bus routes through Des Moines. The bus station is located north of downtown.

Although Des Moines was historically a train hub, it does not have direct passenger train service. For east–west traffic, it was served at the Rock Island Depot by the Corn Belt Rocket, an express running from Omaha in the west to Chicago in the east. The Rock Island also offered the Rocky Mountain Rocket, running from Colorado Springs in the west to Chicago, and the Twin Star Rocket, running north to Minneapolis and south to Dallas and Houston. The last train was an unnamed service ending at Council Bluffs, discontinued on May 31, 1970. Today, this line constitutes the mainline of the Iowa Interstate Railroad.

Other railroads used the East Des Moines Union Station. Northward and northwest-bound Chicago and North Western trains served destinations including Minneapolis. The Wabash Railroad ran service to the southeast to St. Louis. These lines remain in use but are now operated by Union Pacific and BNSF.

The nearest Amtrak station is in Osceola, about 40 miles (64 km) south of Des Moines. The Osceola station is served by the Chicago–San Francisco California Zephyr; there is no Osceola–Des Moines Amtrak Thruway connecting service. There have been proposals to extend Amtrak's planned Chicago–Moline Quad City Rocket to Des Moines via the Iowa Interstate Railroad.

The Des Moines International Airport (DSM), on Fleur Drive in the southern part of Des Moines, offers nonstop service to destinations within the United States. The only international service is cargo service, but there have been discussions about adding an international terminal.

The Des Moines Public Schools district is the largest community school district in Iowa, with 32,062 enrolled students as of the 2012–2013 school year. The district consists of 63 schools: 38 elementary schools, 11 middle schools, five high schools (East, Hoover, Lincoln, North, and Roosevelt), and ten special schools and programs. Small parts of the city are instead served by Carlisle Community Schools, the Johnston Community School District, the Southeast Polk Community School District, and the Saydel School District. Grand View Christian School is the only private school in the city, although Des Moines Christian School (in Des Moines from 1947 to 2006) in Urbandale, Dowling Catholic High School in West Des Moines, and Ankeny Christian Academy on the north side of the metro area serve some city residents.

Des Moines is also home to the main campuses of three four-year private colleges: Drake University, Grand View University, and Mercy College of Health Sciences.
The University of Iowa has a satellite facility in the city's Western Gateway Park, while Iowa State University hosts Master of Business Administration classes downtown. Simpson College, Upper Iowa University, William Penn University, and Purdue University Global also offer classes in the Des Moines area. Des Moines Area Community College is the area's community college, with campuses in Ankeny, Des Moines, and West Des Moines. The city is also home to Des Moines University, an osteopathic medical school.

The Des Moines radio market, which originally consisted of Polk, Dallas, Story, and Warren counties, was ranked 91st by Arbitron as of the fall of 2007, with a population of 512,000 aged 12 and older. In June 2011, it moved up to 72nd with the addition of Boone, Clarke, Greene, Guthrie, Jasper, Lucas, Madison, and Marion counties.

iHeartMedia owns five radio stations in the area, including WHO 1040 AM, a 50,000-watt AM news/talk station that has the highest ratings in the area and once employed future President Ronald Reagan as a sportscaster. In addition to WHO, iHeartMedia owns KDRB 100.3 FM (adult hits), KKDM 107.5 FM (contemporary hits), KXNO-FM 106.3, and KXNO 1460 AM (sports radio). iHeartMedia also owns news/talk station KASI 1430 AM and hot adult contemporary station KCYZ 105.1 FM, both of which broadcast from Ames.

Cumulus Media owns five stations that broadcast from facilities in Urbandale: KBGG 1700 AM (sports), KGGO 94.9 FM (classic rock), KHKI 97.3 FM (country music), KJJY 92.5 FM (country music), and KWQW 98.3 FM (classic hip hop).

Saga Communications owns nine stations in the area: KAZR 103.3 FM (rock), KAZR-HD2 (oldies), KIOA 93.3 FM (oldies), KIOA-HD2 99.9 FM & 93.3 HD2 (rhythmic Top 40), KOEZ 104.1 FM (soft adult contemporary), KPSZ 940 AM (contemporary Christian music, religious teaching, and conservative talk), KRNT 1350 AM (ESPN Radio), KSTZ 102.5 FM (adult contemporary hits), and KSTZ-HD2 (classic country).

Other stations in the Des Moines area include religious stations KWKY 1150 AM and KPUL 101.7 FM. Non-commercial radio stations in the Des Moines area include KDPS 88.1 FM, a station operated by the Des Moines Public Schools; KWDM 88.7 FM, a station operated by Valley High School; KJMC 89.3 FM, an urban contemporary station; K213DV 90.5 FM, the contemporary Christian K-Love affiliate for the area; and KDFR 91.3 FM, operated by Family Radio.

Iowa Public Radio broadcasts several stations in the Des Moines area, all of which are owned by Iowa State University and operated on campus. WOI 640 AM, the network's flagship station, and WOI-FM 90.1, the network's flagship "Studio One" station, are both based in Ames and serve as the area's National Public Radio outlets. The network also operates the classical stations KICG, KICJ, KICL, and KICP.

The University of Northwestern – St. Paul operates contemporary Christian simulcasts of KNWI-FM at 107.1 Osceola/Des Moines, KNWM-FM at 96.1 Madrid/Ames/Des Moines, and K264CD at 100.7 in downtown Des Moines. Low-power FM stations include KFMG-LP 99.1, a community radio station that broadcasts from the Hotel Fort Des Moines and also streams online.

The Des Moines-Ames television market consists of 35 central Iowa counties: Adair, Adams, Appanoose, Audubon, Boone, Calhoun, Carroll, Clarke, Dallas, Decatur, Franklin, Greene, Guthrie, Hamilton, Hardin, Humboldt, Jasper, Kossuth, Lucas, Madison, Mahaska, Marion, Marshall, Monroe, Pocahontas, Polk, Poweshiek, Ringgold, Story, Taylor, Union, Warren, Wayne, Webster, and Wright.
The market was ranked 71st by Nielsen Media Research for the 2008–2009 television season, with 432,410 television households. Commercial television stations serving Des Moines include CBS affiliate KCCI channel 8, NBC affiliate WHO-DT channel 13, and Fox affiliate KDSM-TV channel 17. ABC affiliate WOI-TV channel 5 and CW affiliate KCWI-TV channel 23 are both licensed to Ames and broadcast from studios in West Des Moines. KFPX-TV channel 39, the local ION affiliate, is licensed to Newton. Two non-commercial stations are also licensed to Des Moines: KDIN channel 11, the local PBS member station and flagship of the Iowa Public Television network, and KDMI channel 19, a TCT affiliate. Mediacom is the Des Moines area's cable television provider. Television sports listings for Des Moines and Iowa can be found on the Des Moines Register website.

The Des Moines Register is the city's primary daily newspaper. As of March 31, 2007, the Register ranked 71st in circulation among daily newspapers in the United States according to the Audit Bureau of Circulations, with 146,050 daily and 233,229 Sunday subscribers. Weekly newspapers include Juice, a publication aimed at the 25–34 demographic, published by the Register on Wednesdays; Cityview, an alternative weekly published on Thursdays; and the Des Moines Business Record, a business journal published on Sundays. Weekly newspapers also include the West Des Moines Register, the Johnston Register, and the Waukee Register, published on Tuesdays, Wednesdays, or Thursdays depending on the address of the subscriber. Additionally, magazine publisher Meredith Corporation was based in Des Moines prior to its acquisition by IAC and merger with Dotdash in 2021.

Des Moines hosts professional minor league teams in several sports (baseball, basketball, hockey, indoor football, and soccer) and is home to the sports teams of Drake University, which play in NCAA Division I.

The Des Moines Menace soccer club, a member of USL League Two, plays its home games at Valley Stadium in West Des Moines. Des Moines United FC of the National Premier Soccer League also uses Valley Stadium.

Des Moines is home to the Iowa Cubs baseball team of the Triple-A East. The I-Cubs, the Triple-A affiliate of the major league Chicago Cubs, play their home games at Principal Park near the confluence of the Des Moines and Raccoon Rivers.

Wells Fargo Arena of the Iowa Events Center is home to the Iowa Barnstormers of the Indoor Football League, the Iowa Wild of the American Hockey League, and the Iowa Wolves of the NBA G League. The Barnstormers relaunched as an af2 club in 2008 before joining a relaunched Arena Football League in 2010 and the Indoor Football League in 2015; the Barnstormers had previously played in the Arena Football League from 1994 to 2000 (featuring future NFL Hall of Famer and Super Bowl MVP quarterback Kurt Warner) before relocating to New York. The Iowa Energy, a D-League team, began play in 2007; the team was bought by the Minnesota Timberwolves in 2017 and renamed the Iowa Wolves to reflect the new ownership. The Wild, the AHL affiliate of the National Hockey League's Minnesota Wild, have played at Wells Fargo Arena since 2013; previously, the Iowa Chops played four seasons in Des Moines (known as the Iowa Stars for three of those seasons). Additionally, the Des Moines Buccaneers of the United States Hockey League play at Buccaneer Arena in suburban Urbandale.
Des Moines is also home to the Drake University Bulldogs, an NCAA Division I member of the Missouri Valley Conference, who play primarily northwest of downtown at the on-campus Drake Stadium and Knapp Center. Drake Stadium hosts the famed Drake Relays each April and has also hosted multiple NCAA Outdoor Track and Field Championships and USA Outdoor Track and Field Championships.

The Vikings of Grand View University also compete in intercollegiate athletics in Des Moines. A member of the Heart of America Athletic Conference within the NAIA, Grand View fields 21 varsity athletic teams and was NAIA National Champion in football in 2013.

The Principal Charity Classic, a Champions Tour golf event, is held at Wakonda Club in late May or early June. The IMT Des Moines Marathon is held throughout the city each October.

Des Moines has 76 city parks and three golf courses, as well as three family aquatic centers, five community centers, and three swimming pools. The city has 45 miles (72 km) of trails. The first major park was Greenwood Park; the park commissioners purchased the land on April 21, 1894.

The Principal Riverwalk is a riverwalk park district being constructed along the banks of the Des Moines River in downtown. Primarily funded by the Principal Financial Group, the Riverwalk is a multi-year project jointly funded by the company, the city, and the state. Upon completion, it will feature a 1.2-mile (1.9 km) recreational trail connecting the east and west sides of downtown via two pedestrian bridges. A landscaped promenade along the street level is planned. The Riverwalk includes the downtown Brenton Skating Plaza, open from November through March.

Gray's Lake, part of the 167 acres (68 ha) of Gray's Lake Park, features a boat rental facility, a fishing pier, floating boardwalks, and a park resource center. Located just south of downtown, the park's centerpiece is the lighted 1.9-mile (3.1 km) Kruidenier Trail, which encircles the lake entirely.

Running from downtown Des Moines primarily along the east bank of the Des Moines River, the Neil Smith and John Pat Dorrian Trails are 28.2 miles (45.4 km) of paved recreational trails that connect Gray's Lake northward to the east shore of Saylorville Lake, Big Creek State Park, and the recreational trails of Ankeny, including the High Trestle Trail. These trails pass near several recreational facilities, including Pete Crivaro Park, Principal Park, the Principal Riverwalk, the Greater Des Moines Botanical Garden, Union Park and its Heritage Carousel of Des Moines, Birdland Park and the Birdland Marina/Boatramp on the Des Moines River, Riverview Park, McHenry Park, and River Drive Park. Although outside of Des Moines, Jester Park has 1,834 acres (742 ha) of land along the western shore of Saylorville Lake and can be reached from the Neil Smith Trail over the Saylorville Dam.

Just west of Gray's Lake are the 1,500 acres (607 ha) of Des Moines Water Works Park. The Water Works Park lies along the banks of the Raccoon River immediately upstream from where the Raccoon River empties into the Des Moines River. The Des Moines Water Works Facility, which obtains the city's drinking water from the Raccoon River, is entirely within the park, and a bridge in the park crosses the Raccoon River. The Water Works Park recreational trails link to downtown Des Moines, passing Gray's Lake and crossing back over the Raccoon River via either the Meredith Trail near Principal Park or the Martin Luther King Jr. Parkway.
The Water Works Park trails connect westward to Valley Junction and the recreational trails of the western suburbs: Windsor Heights, Urbandale, Clive, and Waukee. Also originating from Water Works Park, the Great Western Trail runs 18 miles (29 km) southward from Des Moines to Martensdale through the Willow Creek Golf Course, Orilla, and Cumming. Often the location for summer music festivals and concerts, Water Works Park served as the overnight campground for thousands of bicyclists on Tuesday, July 23, 2013, during RAGBRAI XLI.

The Greater Des Moines Sister City Commission, with members from the City of Des Moines and the suburbs of Cumming, Norwalk, Windsor Heights, Johnston, Urbandale, and Ankeny, maintains sister city relationships with a number of cities abroad.
[ { "paragraph_id": 0, "text": "Des Moines (/dəˈmɔɪn/ ) is the capital and the most populous city in Iowa, United States. It is also the county seat of Polk County. A small part of the city extends into Warren County. It was incorporated on September 22, 1851, as Fort Des Moines, which was shortened to \"Des Moines\" in 1857. It is located on, and named after, the Des Moines River, which likely was adapted from the early French name, Rivière des Moines, meaning \"River of the Monks\". The city's population was 214,133 as of the 2020 census. The six-county metropolitan area is ranked 81st in terms of population in the United States, with 709,466 residents according to the 2020 census by the United States Census Bureau, and is the largest metropolitan area fully located within the state.", "title": "" }, { "paragraph_id": 1, "text": "Des Moines is a major center of the US insurance industry and has a sizable financial-services and publishing business base. The city was credited as the \"number one spot for U.S. insurance companies\" in a Business Wire article and named the third-largest \"insurance capital\" of the world. The city is the headquarters for the Principal Financial Group, Ruan Transportation, TMC Transportation, EMC Insurance Companies, and Wellmark Blue Cross Blue Shield. Other major corporations such as Wells Fargo, Cognizant, Voya Financial, Nationwide Mutual Insurance Company, ACE Limited, Marsh, Monsanto, and Corteva have large operations in or near the metropolitan area. In recent years, Microsoft, Hewlett-Packard, and Facebook have built data-processing and logistical facilities in the Des Moines area.", "title": "" }, { "paragraph_id": 2, "text": "Des Moines is an important city in U.S. presidential politics; as the state's capital, it is the site of the first caucuses of the presidential primary cycle. Many presidential candidates set up campaign headquarters in Des Moines. A 2007 article in The New York Times said, \"If you have any desire to witness presidential candidates in the most close-up and intimate of settings, there is arguably no better place to go than Des Moines.\"", "title": "" }, { "paragraph_id": 3, "text": "Des Moines takes its name from Fort Des Moines (1843–46), which was named for the Des Moines River. This was adopted from the name given by French colonists. Des Moines (pronounced [de mwan] ; formerly [de mwɛn]) translates literally to either \"from the monks\" or \"of the monks\".", "title": "Etymology" }, { "paragraph_id": 4, "text": "One popular interpretation of \"Des Moines\" concludes that it refers to a group of French Trappist monks, who in the 17th century lived in huts built on top of what is now known as the ancient Monks Mound at Cahokia, the major center of Mississippian culture, which developed in what is present-day Illinois, east of the Mississippi River and the city of St. Louis. This was some 200 miles (320 km) from the Des Moines River.", "title": "Etymology" }, { "paragraph_id": 5, "text": "Based on archaeological evidence, the junction of the Des Moines and Raccoon Rivers has attracted humans for at least 7,000 years. Several prehistoric occupation areas have been identified by archaeologists in downtown Des Moines. Discovered in December 2010, the \"Palace\" is an expansive, 7,000-year-old site found during excavations prior to construction of the new wastewater treatment plant in southeastern Des Moines. It contains well-preserved house deposits and numerous graves. More than 6,000 artifacts were found at this site. 
State of Iowa archaeologist John Doershuk was assisted by University of Iowa archaeologists at this dig.", "title": "Prehistory" }, { "paragraph_id": 6, "text": "At least three Late Prehistoric villages, dating from about AD 1300 to 1700, stood in or near what developed later as downtown Des Moines. In addition, 15 to 18 prehistoric American Indian mounds were observed in this area by early settlers. All have been destroyed during development of the city.", "title": "Prehistory" }, { "paragraph_id": 7, "text": "Des Moines traces its origins to May 1843, when Captain James Allen supervised the construction of a fort on the site where the Des Moines and Raccoon Rivers merge. Allen wanted to use the name Fort Raccoon; however, the U.S. War Department preferred Fort Des Moines. The fort was built to control the Sauk and Meskwaki tribes, whom the government had moved to the area from their traditional lands in eastern Iowa. The fort was abandoned in 1846 after the Sauk and Meskwaki were removed from the state and shifted to the Indian Territory.", "title": "History" }, { "paragraph_id": 8, "text": "The Sauk and Meskwaki did not fare well in Des Moines. The illegal whiskey trade, combined with the destruction of traditional lifeways, led to severe problems for their society. One newspaper reported:", "title": "History" }, { "paragraph_id": 9, "text": "\"It is a fact that the location of Fort Des Moines among the Sac and Fox Indians (under its present commander) for the last two years, had corrupted them more and lowered them deeper in the scale of vice and degradation, than all their intercourse with the whites for the ten years previous\".", "title": "History" }, { "paragraph_id": 10, "text": "After official removal, the Meskwaki continued to return to Des Moines until around 1857.", "title": "History" }, { "paragraph_id": 11, "text": "Archaeological excavations have shown that many fort-related features survived under what is now Martin Luther King Jr. Parkway and First Street. Soldiers stationed at Fort Des Moines opened the first coal mines in the area, mining coal from the riverbank for the fort's blacksmith.", "title": "History" }, { "paragraph_id": 12, "text": "Settlers occupied the abandoned fort and nearby areas. On May 25, 1846, the state legislature designated Fort Des Moines as the seat of Polk County. Arozina Perkins, a school teacher who spent the winter of 1850–1851 in the town of Fort Des Moines, was not favorably impressed:", "title": "History" }, { "paragraph_id": 13, "text": "This is one of the strangest looking \"cities\" I ever saw... This town is at the juncture of the Des Moines and Raccoon Rivers. It is mostly a level prairie with a few swells or hills around it. We have a court house of \"brick\" and one church, a plain, framed building belonging to the Methodists. There are two taverns here, one of which has a most important little bell that rings together some fifty boarders. I cannot tell you how many dwellings there are, for I have not counted them; some are of logs, some of brick, some framed, and some are the remains of the old dragoon houses... The people support two papers and there are several dry goods shops. I have been into but four of them... Society is as varied as the buildings are. There are people from nearly every state, and Dutch, Swedes, etc.", "title": "History" }, { "paragraph_id": 14, "text": "In May 1851, much of the town was destroyed during the Flood of 1851. 
\"The Des Moines and Raccoon Rivers rose to an unprecedented height, inundating the entire country east of the Des Moines River. Crops were utterly destroyed, houses and fences swept away.\" The city started to rebuild from scratch.", "title": "History" }, { "paragraph_id": 15, "text": "On September 22, 1851, Des Moines was incorporated as a city; the charter was approved by voters on October 18. In 1857, the name \"Fort Des Moines\" was shortened to \"Des Moines\", and it was designated as the second state capital, previously at Iowa City. Growth was slow during the Civil War period, but the city exploded in size and importance after a railroad link was completed in 1866.", "title": "History" }, { "paragraph_id": 16, "text": "In 1864, the Des Moines Coal Company was organized to begin the first systematic mining in the region. Its first mine, north of town on the river's west side, was exhausted by 1873. The Black Diamond mine, near the south end of the West Seventh Street Bridge, sank a 150-foot (46 m) mine shaft to reach a 5-foot-thick (1.5 m) coal bed. By 1876, this mine employed 150 men and shipped 20 carloads of coal per day. By 1885, numerous mine shafts were within the city limits, and mining began to spread into the surrounding countryside. By 1893, 23 mines were in the region. By 1908, Des Moines' coal resources were largely exhausted. In 1912, Des Moines still had eight locals of the United Mine Workers union, representing 1,410 miners. This was about 1.7% of the city's population in 1910.", "title": "History" }, { "paragraph_id": 17, "text": "By 1880, Des Moines had a population of 22,408, making it Iowa's largest city. It displaced the three Mississippi River ports: Burlington, Dubuque, and Davenport, that had alternated holding the position since the territorial period. Des Moines has remained Iowa's most populous city. In 1910, the Census Bureau reported Des Moines' population as 97.3% white and 2.7% black, reflecting its early settlement pattern primarily by ethnic Europeans.", "title": "History" }, { "paragraph_id": 18, "text": "At the turn of the 20th century, encouraged by the Civic Committee of the Des Moines Women's Club, Des Moines undertook a \"City Beautiful\" project in which large Beaux Arts public buildings and fountains were constructed along the Des Moines River. The former Des Moines Public Library building (now the home of the World Food Prize); the United States central Post Office, built by the federal government (now the Polk County Administrative Building, with a newer addition); and the City Hall are surviving examples of the 1900–1910 buildings. They form the Civic Center Historic District.", "title": "History" }, { "paragraph_id": 19, "text": "The ornate riverfront balustrades that line the Des Moines and Raccoon Rivers were built by the federal Civilian Conservation Corps in the mid-1930s, during the Great Depression under Democratic President Franklin D. Roosevelt, as a project to provide local employment and improve infrastructure. The ornamental fountains that stood along the riverbank were buried in the 1950s when the city began a postindustrial decline that lasted until the late 1980s. 
The city has since rebounded, transforming from a blue-collar industrial city to a white-collar professional city.", "title": "History" }, { "paragraph_id": 20, "text": "In 1907, the city adopted a city commission government known as the Des Moines Plan, comprising an elected mayor and four commissioners, all elected at-large, who were responsible for public works, public property, public safety, and finance. Considered progressive at the time, it diluted the votes of ethnic and national minorities, who generally could not command the majority to elect a candidate of their choice.", "title": "History" }, { "paragraph_id": 21, "text": "That form of government was scrapped in 1950 in favor of a council-manager government, with the council members elected at-large. In 1967, the city changed its government to elect four of the seven city council members from single-member districts or wards, rather than at-large. This enabled a broader representation of voters. As with many major urban areas, the city core began losing population to the suburbs in the 1960s (the peak population of 208,982 was recorded in 1960), as highway construction led to new residential construction outside the city. The population was 198,682 in 2000 and grew slightly to 200,538 in 2009. The growth of the outlying suburbs has continued, and the overall metropolitan-area population is over 700,000 today.", "title": "History" }, { "paragraph_id": 22, "text": "During the Great Flood of 1993, heavy rains throughout June and early July caused the Des Moines and Raccoon Rivers to rise above flood stage levels. The Des Moines Water Works was submerged by floodwaters during the early morning hours of July 11, 1993, leaving an estimated 250,000 people without running water for 12 days and without drinking water for 20 days. Des Moines suffered major flooding again in June 2008 with a major levee breach. The Des Moines river is controlled upstream by Saylorville Reservoir. In both 1993 and 2008, the flooding river overtopped the reservoir spillway.", "title": "History" }, { "paragraph_id": 23, "text": "Today, Des Moines is a member of ICLEI Local Governments for Sustainability USA. Through ICLEI, Des Moines has implemented \"The Tomorrow Plan\", a regional plan focused on developing central Iowa in a sustainable fashion, centrally-planned growth, and resource consumption to manage the local population.", "title": "History" }, { "paragraph_id": 24, "text": "The skyline of Des Moines changed in the 1970s and the 1980s, when several new skyscrapers were built. Additional skyscrapers were built in the 1990s, including Iowa's tallest. Before then, the 19-story Equitable Building, from 1924, was the tallest building in the city and the tallest building in Iowa. The 25-story Financial Center was completed in 1973 and the 36-story Ruan Center was completed in 1974. They were later joined by the 33-story Des Moines Marriott Hotel (1981), the 25-story HUB Tower and 25-story Plaza Building (1985). Iowa's tallest building, Principal Financial Group's 45-story tower at 801 Grand was built in 1991, and the 19-story EMC Insurance Building was erected in 1997.", "title": "Cityscape" }, { "paragraph_id": 25, "text": "During this time period, the Civic Center of Greater Des Moines (1979) was developed; it hosts Broadway shows and special events. 
Also constructed were the Greater Des Moines Botanical Garden (1979), a large city botanical garden/greenhouse on the east side of the river; the Polk County Convention Complex (1985), and the State of Iowa Historical Museum (1987). The Des Moines skywalk also began to take shape during the 1980s. The skywalk system is 4 miles (6.4 km) long and connects many downtown buildings.", "title": "Cityscape" }, { "paragraph_id": 26, "text": "In the early 21st century, the city has had more major construction in the downtown area. The new Science Center of Iowa and Blank IMAX Dome Theater and the Iowa Events Center opened in 2005. The new central branch of the Des Moines Public Library, designed by renowned architect David Chipperfield of London, opened on April 8, 2006.", "title": "Cityscape" }, { "paragraph_id": 27, "text": "The World Food Prize Foundation, which is based in Des Moines, completed adaptation and restoration of the former Des Moines Public Library building in October 2011. The former library now serves as the home and headquarters of the Norman Borlaug/World Food Prize Hall of Laureates.", "title": "Cityscape" }, { "paragraph_id": 28, "text": "According to the United States Census Bureau, the city has an area of 90.65 square miles (234.78 km), of which 88.93 square miles (230.33 km) is land and 1.73 square miles (4.48 km) is covered by water. It is 850 feet (260 m) above sea level at the confluence of the Raccoon and Des Moines Rivers.", "title": "Geography" }, { "paragraph_id": 29, "text": "In November 2005, Des Moines voters approved a measure that allowed the city to annex parcels of land in the northeast, southeast, and southern corners of Des Moines without agreement by local residents, particularly areas bordering the Iowa Highway 5/U.S. 65 bypass. The annexations became official on June 26, 2009, as 5,174 acres (20.94 km) and around 868 new residents were added to the city of Des Moines. An additional 759 acres (3.07 km) were voluntarily annexed to the city over that same period.", "title": "Geography" }, { "paragraph_id": 30, "text": "Des Moines-West Des Moines Metropolitan Statistical Area", "title": "Geography" }, { "paragraph_id": 31, "text": "Des Moines-Ames-West Des Moines Combined Statistical Area", "title": "Geography" }, { "paragraph_id": 32, "text": "Des Moines' suburban communities include Altoona, Ankeny, Bondurant, Carlisle, Clive, Grimes, Johnston, Norwalk, Pleasant Hill, Urbandale, Waukee, West Des Moines, and Windsor Heights.", "title": "Geography" }, { "paragraph_id": 33, "text": "At the center of North America and far removed from large bodies of water, the Des Moines area has a hot summer type humid continental climate (Köppen Dfa), with warm to hot, humid summers and cold, dry winters. Summer temperatures can often climb into the 90 °F (32 °C) range, occasionally reaching 100 °F (38 °C). Humidity can be high in spring and summer, with frequent afternoon thunderstorms. Fall brings pleasant temperatures and colorful fall foliage. Winters vary from moderately cold to bitterly cold, with low temperatures venturing below 0 °F (−18 °C) quite often. Snowfall averages 36.5 inches (93 cm) per season, and annual precipitation averages 36.55 inches (928 mm), with a peak in the warmer months. Winters are slightly colder than Chicago, but still warmer than Minneapolis, with summer temperatures being very similar between the Upper Midwest metropolitan areas.", "title": "Geography" }, { "paragraph_id": 34, "text": "As of the census of 2020, the population was 214,133. 
The population density was 2,428.4 per square mile (937.6/km). There were 93,052 housing units at an average density of 1,055.3 per square mile (407.4/km). The racial makeup was 64.54% (138,200) white, 11.68% (25,011) black or African-American, 0.69% (1,474) Native American, 6.76% (14,474) Asian, 0.06% (135) Pacific Islander, 6.62% (14,178) from other races, and 9.65% (20,661) from two or more races. Hispanic or Latino of any race was 14.0% (30,105) of the population.", "title": "Demographics" }, { "paragraph_id": 35, "text": "The 2020 census population of the city included 252 people incarcerated in adult correctional facilities and 2,378 people in student housing.", "title": "Demographics" }, { "paragraph_id": 36, "text": "According to the American Community Survey estimates for 2016–2020, the median income for a household in the city was $54,843, and the median income for a family was $66,420. Male full-time workers had a median income of $47,048 versus $40,290 for female workers. The per capita income for the city was $29,064. About 12.1% of families and 16.0% of the population were below the poverty line, including 24.3% of those under age 18 and 9.8% of those age 65 or over. Of the population age 25 and over, 86.7% were high school graduates or higher and 27.9% had a bachelor's degree or higher.", "title": "Demographics" }, { "paragraph_id": 37, "text": "As of the census of 2010, there were 203,433 people, 81,369 households, and 47,491 families residing in the city. Population density was 2,515.6 inhabitants per square mile (971.3/km). There were 88,729 housing units at an average density of 1,097.2 per square mile (423.6/km). The racial makeup of the city for Unincorporated areas not merged with the city proper was 66.2% White, 15.5% African Americans, 0.5% Native American, 4.0% Asian, and 2.6% from Two or more races. People of Hispanic or Latino origin, of any race, made up 12.1% of the population. The city's racial make up during the 2010 census was 76.4% White, 10.2% African American, 0.5% Native American, 4.4% Asian (1.2% Vietnamese, 0.9% Laotian, 0.4% Burmese, 0.3% Asian Indian, 0.3% Thai, 0.2% Chinese, 0.2% Cambodian, 0.2% Filipino, 0.1% Hmong, 0.1% Korean, 0.1% Nepalese), 0.1% Pacific Islander, 5.0% from other races, and 3.4% from two or more races. People of Hispanic or Latino origin, of any race, formed 12.0% of the population (9.4% Mexican, 0.7% Salvadoran, 0.3% Guatemalan, 0.3% Puerto Rican, 0.1% Honduran, 0.1% Ecuadorian, 0.1% Cuban, 0.1% Spaniard, 0.1% Spanish). Non-Hispanic Whites were 70.5% of the population in 2010. Des Moines also has a sizeable South Sudanese community.", "title": "Demographics" }, { "paragraph_id": 38, "text": "There were 81,369 households, of which 31.6% had children under the age of 18 living with them, 38.9% were married couples living together, 14.2% had a female householder with no husband present, 5.3% had a male householder with no wife present, and 41.6% were non-families. 32.5% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.43 and the average family size was 3.11.", "title": "Demographics" }, { "paragraph_id": 39, "text": "The median age in the city was 33.5 years. 24.8% of residents were under the age of 18; 10.9% were between the ages of 18 and 24; 29.4% were from 25 to 44; 23.9% were from 45 to 64; and 11% were 65 years of age or older. 
As of the 2000 census, there were 198,682 people, 80,504 households, and 48,704 families in the city. The population density was 2,621.3 inhabitants per square mile (1,012.1/km²). There were 85,067 housing units at an average density of 1,122.3 per square mile (433.3/km²). The racial makeup of the city was 82.3% white, 8.07% Black, 0.35% American Indian, 3.50% Asian, 0.05% Pacific Islander, 3.52% from other races, and 2.23% from two or more races; 6.61% of the population were Hispanic or Latino of any race. According to the 2000 census, 20.9% were of German, 10.3% Irish, 9.1% "American", and 8.0% English ancestry.

There were 80,504 households, of which 29.5% had children under the age of 18 living with them, 43.7% were married couples living together, 12.6% had a female householder with no husband present, and 39.5% were non-families. Individuals made up 31.9% of all households, and 10.2% of households had someone living alone who was 65 years of age or older. The average household size was 2.39 and the average family size was 3.04.

The age distribution was 24.8% under the age of 18, 10.6% from 18 to 24, 31.8% from 25 to 44, 20.4% from 45 to 64, and 12.4% who were 65 years of age or older. The median age was 34 years. For every 100 females, there were 93.8 males; for every 100 females age 18 and over, there were 90.5 males.

The median income for a household in the city was $38,408, and the median income for a family was $46,590. Males had a median income of $31,712 versus $25,832 for females. The per capita income for the city was $19,467. About 7.9% of families and 11.4% of the population were below the poverty line, including 14.9% of those under age 18 and 7.6% of those ages 65 or over.

Many insurance companies are headquartered in Des Moines, including the Principal Financial Group, EMC Insurance Group, Fidelity & Guaranty Life, Allied Insurance, GuideOne Insurance, Wellmark Blue Cross Blue Shield of Iowa, FBL Financial Group, and American Republic Insurance Company. Iowa has one of the lowest insurance premium taxes in the nation, at 1%, and does not charge any premium taxes on qualified life insurance plans, making the state attractive to the insurance business. Because of this, Des Moines has been referred to as the "Hartford of the West" and the "Insurance Capital". Principal is one of two Fortune 500 companies with headquarters in Iowa (the other being Casey's General Stores), ranking 201st on the magazine's list in 2020.

As a center of financial and insurance services, the Des Moines metro area also hosts major corporations headquartered outside Iowa, including Wells Fargo, Voya Financial, and Electronic Data Systems (EDS). The Meredith Corporation, a leading publishing and marketing company, was also based in Des Moines prior to its acquisition by IAC and merger with Dotdash in 2021. Meredith published Better Homes and Gardens, one of the most widely circulated publications in the United States. Des Moines was also the headquarters of Golf Digest magazine.
Other major employers in Des Moines include UnityPoint Health, Mercy Medical Center, MidAmerican Energy Company, CDS Global, UPS, Firestone Agricultural Tire Company, EDS, Drake University, Titan Tire, The Des Moines Register, Anderson Erickson, Dee Zee, and EMCO.

In 2017, Kemin Industries opened a state-of-the-art worldwide headquarters building in Des Moines.

The City of Des Moines is a cultural center for Iowa and home to several art and history museums and performing arts groups. Des Moines Performing Arts routinely hosts touring Broadway shows and other live professional theater. Its president and CEO, Jeff Chelsvig, is a member of the League of American Theatres and Producers, Inc. The Temple for Performing Arts and the Des Moines Playhouse are other venues for live theater, comedy, and performance arts.

The Des Moines Metro Opera has been a cultural resource in Des Moines since 1973. The Opera offers educational and outreach programs and is one of the largest performing arts organizations in the state. Ballet Des Moines was established in 2002. Performing three productions each year, the Ballet also provides opportunities for education and outreach.

The Des Moines Symphony performs frequently at different venues. In addition to seven pairs of classical concerts each season, the Symphony entertains with its New Year's Eve Pops and annual Yankee Doodle Pops concerts.

Jazz in July is an annual event, founded in 1969, that stages free jazz shows daily at venues throughout the city during July.

Wells Fargo Arena has been the Des Moines area's primary venue for sporting events and concerts since its opening in 2005. Named for title sponsor Wells Fargo Financial Services, the arena holds 16,980 and books large national touring acts for arena concert performances, while several smaller venues host local, regional, and national bands. It is the home of the Iowa Wolves of the NBA G League, the Iowa Wild of the American Hockey League, and the Iowa Barnstormers of the Indoor Football League.

The Simon Estes Riverfront Amphitheater is an outdoor concert venue on the east bank of the Des Moines River that hosts music events such as the Alive Concert Series.

The Des Moines Art Center, with a wing designed by architect I. M. Pei, presents art exhibitions and educational programs as well as studio art classes. The Center houses a collection of artwork from the 19th century to the present. An extension of the art center occupies an urban museum space downtown, featuring three or four exhibitions each year.

The Pappajohn Sculpture Park was established in 2009. It showcases a collection of 24 sculptures donated by Des Moines philanthropists John and Mary Pappajohn. Nearby is the Temple for Performing Arts, a cultural center for the city. Next to the Temple is the 117,000-square-foot (10,900 m²) Central Library, designed by the renowned English architect David Chipperfield.
Salisbury House and Gardens is a 42-room historic house museum on 10 acres (4 ha) of woodlands in the South of Grand neighborhood of Des Moines. It is named after, and loosely inspired by, King's House in Salisbury, England. Built in the 1920s by cosmetics magnate Carl Weeks and his wife, Edith, the Salisbury House contains authentic 16th-century English oak and rafters dating to Shakespeare's day, numerous other architectural features re-purposed from other historic English homes, and an internationally significant collection of original fine art, tapestries, decorative art, furniture, musical instruments, and rare books and documents. The Salisbury House is listed on the National Register of Historic Places and has been featured on A&E's America's Castles and PBS's Antiques Roadshow. Prominent artists in the Salisbury House collection include Joseph Stella, Lillian Genth, Anthony van Dyck, and Lawrence Alma-Tadema.

Built in 1877 by prominent pioneer businessman Hoyt Sherman, the Hoyt Sherman Place mansion was Des Moines' first public art gallery and houses a distinctive collection of 19th- and 20th-century artwork. Its restored 1,250-seat theater features an intricate rococo plaster ceiling and excellent acoustics and is used for a variety of cultural performances and entertainment.

Rising in the east and facing westward toward downtown, the Iowa State Capitol building, with its 275-foot (84 m), 23-karat gold-leafed dome towering above the city, is a favorite of sightseers. Four smaller domes flank the main dome. The Capitol houses the governor's offices, the legislature, and the old Supreme Court chambers. The ornate interior also features a grand staircase, the mural "Westward", a five-story law library, a scale model of the USS Iowa, and a collection of first lady dolls. Guided tours are available.

The Capitol grounds include a World War II memorial with sculpture and Wall of Memories, the 1894 Soldiers and Sailors Monument of the Civil War, and memorials honoring those who served in the Spanish–American, Korean, and Vietnam Wars. The West Capitol Terrace provides the western entrance to the state's grandest building, the State Capitol. The 10-acre (4 ha) "people's park" at the foot of the Capitol complex includes a promenade and landscaped gardens, in addition to providing public space for rallies and special events. A granite map of Iowa depicting all 99 counties rests at the base of the terrace and has become an attraction for in-state visitors, many of whom walk over the map to find their home county.

Iowa's history lives on in the State of Iowa Historical Museum. This modern granite and glass structure at the foot of the State Capitol houses permanent and temporary exhibits exploring the people, places, events, and issues of Iowa's past. The showcase includes native wildlife, American Indian and pioneer artifacts, and political and military items. The museum also features a genealogy and Iowa history library, a gift shop, and a cafe.
Terrace Hill, a National Historic Landmark and the Iowa Governor's Residence, is among the best examples of American Victorian Second Empire architecture. This opulent 1869 home was built by Iowa's first millionaire, Benjamin F. Allen, and restored to the late 19th-century period. It overlooks downtown Des Moines and is situated on 8 acres (3.2 ha) with a re-created Victorian formal garden. Tours are conducted Tuesdays through Saturdays from March through December.

The 110,000-square-foot (10,000 m²) Science Center of Iowa and Blank IMAX Dome Theater offers seven interactive learning areas, live programs, and hands-on activities encouraging learning and fun for all ages. Its three theaters are the 216-seat Blank IMAX Dome Theater, the 175-seat John Deere Adventure Theater featuring live performances, and a 50-foot (15 m) domed Star Theater.

The Greater Des Moines Botanical Garden, an indoor conservatory of over 15,000 exotic plants, holds one of the largest collections of tropical, subtropical, and desert-growing plants in the Midwest. The Center blooms with thousands of flowers year-round. Nearby are the Robert D. Ray Asian Gardens and Pavilion, named in honor of the former governor whose influence helped relocate thousands of Vietnamese refugees to Iowa homes in the 1970s and 1980s. Developed by the city's Asian community, the Gardens include a three-story Chinese pavilion, bonsai landscaping, and granite sculptures that highlight the importance of diversity and recognize Asian American contributions in Iowa.

Blank Park Zoo is a landscaped 22-acre (8.9 ha) zoological park on the south side. Its exhibits include a tropical rain forest, the Australian Outback, and Africa. The Zoo offers education classes, tours, and rental facilities.

The Iowa Primate Learning Sanctuary was established as a scientific research facility with a 230-acre (93 ha) campus housing bonobos and orangutans for the noninvasive interdisciplinary study of their cognitive and communicative capabilities.

The East Village, on the east side of the Des Moines River, begins at the river and extends about five blocks east to the State Capitol, offering an eclectic blend of historic buildings, hip eateries, boutiques, art galleries, and a wide variety of other retail establishments mixed with residences.

Adventureland Park is an amusement park in neighboring Altoona, just northeast of Des Moines. The park boasts more than 100 rides, shows, and attractions, including six rollercoasters. A hotel and campground sit just outside the park. Also in Altoona is Prairie Meadows Racetrack and Casino, an entertainment venue for gambling and horse racing. Open 24 hours a day, year-round, the racetrack and casino features live racing, plus over 1,750 slot machines, table games, and concert and show entertainment. The racetrack hosts two Grade III races annually, the Iowa Oaks and the Cornhusker Handicap.
Living History Farms in suburban Urbandale tells the story of Midwestern agriculture and rural life in a 500-acre (2.0 km²) open-air museum with interpreters dressed in period costume who recreate the daily routines of early Iowans. Open daily from May through October, the Living History Farms include a 1700 Ioway Indian village, an 1850 pioneer farm, an 1875 frontier town, a 1900 horse-powered farm, and a modern crop center.

Wallace House was the home of the first Henry Wallace, a national leader in agriculture and conservation and the first editor of the farm journal Wallaces' Farmer. This restored 1883 Italianate Victorian house holds exhibits, artifacts, and information covering four generations of Henry Wallaces and other family members.

The historic Jordan House in West Des Moines is a stately Victorian home built in 1850 and enlarged in 1870 by James C. Jordan, the first white settler in West Des Moines. Completely refurbished, this mansion was a stop on the Underground Railroad and today houses 16 period rooms, a railroad museum, West Des Moines community history, and a museum dedicated to the Underground Railroad in Iowa. In 1893 Jordan's daughter Eda fell while sliding down the banister and broke her neck; she died two days later, and her ghost is reputed to haunt the house.

The Chicago Tribune wrote that Iowa's capital city has "walker-friendly downtown streets and enough outdoor sculpture, sleek buildings, storefronts and cafes to delight the most jaded stroller".

Des Moines plays host to a growing number of nationally acclaimed cultural events, including the annual Des Moines Arts Festival in June, Metro Arts Jazz in July, the Iowa State Fair in August, and the World Food & Music Festival in September. On Saturdays from May through October, the Downtown Farmers' Market draws visitors from across the state. Local parades include the Saint Patrick's Day Parade, Drake Relays Parade, Capitol City Pride Parade, Iowa State Fair Parade, Labor Day Parade, and Beaverdale Fall Festival Parade.

Other annual festivals and events include Des Moines Beer Week, 80/35 Music Festival, 515 Alive Music Festival, ArtFest Midwest, Blue Ribbon Bacon Fest, CelebrAsian Heritage Festival, Des Moines Pride Festival, Des Moines Renaissance Faire, Festa Italiana, Festival of Trees and Lights, I'll Make Me a World Iowa, Latino Heritage Festival, Oktoberfest, Winefest, ImaginEve!, Iowa's Premier Beer, Wine & Food Show, and the Wild Rose Film Festival.

Des Moines operates under a council–manager form of government. The council consists of a mayor elected in a citywide vote, two at-large members, and four members representing each of the city's four wards. In 2014, Jonathan Gano was appointed as the new Public Works Director; in 2015, Dana Wingert was appointed Police Chief; and in 2018, Steven L. Naber was appointed as the new City Engineer.
A plan to merge the governments of Des Moines and Polk County was rejected by voters during the November 2, 2004, election. The consolidated city–county government would have had a full-time mayor and a 15-member council divided among the city and its suburbs. Each suburb would have retained its individual government, with the option of joining the consolidated government at any time. Although a full merger was soundly rejected, many city and county departments and programs have since been consolidated.

Des Moines has an extensive skywalk system within its downtown core. With over four miles (6.4 km) of enclosed walkway, it is one of the largest such systems in the United States. The Des Moines Skywalk System has been criticized for hurting street-level business, though an initiative has been made to make street-level Skywalk entrances more visible.

Interstate 235 (I-235) cuts through the city, and I-35 and I-80 both pass through the Des Moines metropolitan area as well as the city itself. On the northern side of the city, I-35 and I-80 converge into a long concurrency that passes through Altoona, Clive, Johnston, Urbandale, and West Des Moines, while I-235 takes a direct route through Des Moines, Windsor Heights, and West Des Moines before rejoining I-35 and I-80 on the western edge of the metro. The Des Moines Bypass passes south and east of the city. Other routes in and around the city include US 6, US 69, Iowa 28, Iowa 141, Iowa 163, Iowa 330, Iowa 415, and Iowa 160.

Des Moines's public transit system is operated by DART (Des Moines Area Regional Transit), which was the Des Moines Metropolitan Transit Authority until October 2006. It consists entirely of buses, including regular in-city routes and express and commuter buses to outlying suburban areas.

Characteristics of household car ownership in Des Moines are similar to national averages. In 2015, 8.5 percent of Des Moines households lacked a car; the share increased to 9.6 percent in 2016, against a national average of 8.7 percent that year. Des Moines averaged 1.71 cars per household in 2016, compared to a national average of 1.8.

Burlington Trailways and Jefferson Lines run long-distance intercity bus routes through Des Moines. The bus station is located north of downtown.

Although Des Moines was historically a train hub, it no longer has direct passenger train service. For east–west traffic, it was served at the Rock Island Depot by the Corn Belt Rocket express, running from Omaha in the west to Chicago in the east. The Rock Island also offered the Rocky Mountain Rocket, from Colorado Springs in the west to Chicago, and the Twin Star Rocket, to Minneapolis in the north and Dallas and Houston in the south. The last train was an unnamed service terminating at Council Bluffs, discontinued on May 31, 1970. Today, this line constitutes the mainline of the Iowa Interstate Railroad.
Other railroads used the East Des Moines Union Station. Northward and northwest bound, Chicago and North Western trains ran to destinations including Minneapolis. The Wabash Railroad ran service to the southeast, to St. Louis. These lines remain in use but are now operated by Union Pacific and BNSF.

The nearest Amtrak station is in Osceola, about 40 miles (64 km) south of Des Moines. The Osceola station is served by the Chicago–San Francisco California Zephyr; there is no Osceola–Des Moines Amtrak Thruway connecting service. There have been proposals to extend Amtrak's planned Chicago–Moline Quad City Rocket to Des Moines via the Iowa Interstate Railroad.

The Des Moines International Airport (DSM), on Fleur Drive in the southern part of Des Moines, offers nonstop service to destinations within the United States. The only international service is cargo service, but there have been discussions about adding an international terminal.

The Des Moines Public Schools district is the largest community school district in Iowa, with 32,062 enrolled students as of the 2012–2013 school year. The district consists of 63 schools: 38 elementary schools, eleven middle schools, five high schools (East, Hoover, Lincoln, North, and Roosevelt), and ten special schools and programs. Small parts of the city are instead served by the Carlisle Community Schools, the Johnston Community School District, the Southeast Polk Community School District, and the Saydel School District. Grand View Christian School is the only private school in the city, although Des Moines Christian School (in Des Moines from 1947 to 2006) in Urbandale, Dowling Catholic High School in West Des Moines, and Ankeny Christian Academy on the north side of the metro area serve some city residents.

Des Moines is also home to the main campuses of three four-year private colleges: Drake University, Grand View University, and Mercy College of Health Sciences. The University of Iowa has a satellite facility in the city's Western Gateway Park, while Iowa State University hosts Master of Business Administration classes downtown. Simpson College, Upper Iowa University, William Penn University, and Purdue University Global also have a presence in the city. Des Moines Area Community College is the area's community college, with campuses in Ankeny, Des Moines, and West Des Moines. The city is also home to Des Moines University, an osteopathic medical school.

The Des Moines radio market, which originally consisted of Polk, Dallas, Story, and Warren counties, was ranked 91st by Arbitron as of fall 2007, with a population of 512,000 aged 12 and older. In June 2011 it moved up to 72nd with the addition of Boone, Clarke, Greene, Guthrie, Jasper, Lucas, Madison, and Marion counties.

iHeartMedia owns five radio stations in the area, including WHO 1040 AM, a 50,000-watt news/talk station that has the highest ratings in the area and once employed future President Ronald Reagan as a sportscaster. In addition to WHO, iHeartMedia owns KDRB 100.3 FM (adult hits), KKDM 107.5 FM (contemporary hits), KXNO-FM 106.3, and KXNO 1460 AM (sports radio). It also owns news/talk station KASI 1430 AM and hot adult contemporary station KCYZ 105.1 FM, both of which broadcast from Ames.
Cumulus Media owns five stations that broadcast from facilities in Urbandale: KBGG 1700 AM (sports), KGGO 94.9 FM (classic rock), KHKI 97.3 FM (country music), KJJY 92.5 FM (country music), and KWQW 98.3 FM (classic hip hop).

Saga Communications owns nine stations in the area: KAZR 103.3 FM (rock), KAZR-HD2 (oldies), KIOA 93.3 FM (oldies), KIOA-HD2 (rhythmic Top 40, on 99.9 FM and 93.3 HD2), KOEZ 104.1 FM (soft adult contemporary), KPSZ 940 AM (contemporary Christian music, religious teaching, and conservative talk), KRNT 1350 AM (ESPN Radio), KSTZ 102.5 FM (adult contemporary hits), and KSTZ-HD2 (classic country).

Other stations in the Des Moines area include the religious stations KWKY 1150 AM and KPUL 101.7 FM.

Non-commercial radio stations in the Des Moines area include KDPS 88.1 FM, operated by the Des Moines Public Schools; KWDM 88.7 FM, operated by Valley High School; KJMC 89.3 FM, an urban contemporary station; K213DV 90.5 FM, the contemporary Christian K-Love affiliate for the area; and KDFR 91.3 FM, operated by Family Radio. Iowa Public Radio broadcasts several stations in the Des Moines area, all of which are owned by Iowa State University and operated on campus. WOI 640 AM, the network's flagship station, and WOI-FM 90.1, the network's flagship "Studio One" station, are both based in Ames and serve as the area's National Public Radio outlets. The network also operates the classical stations KICG, KICJ, KICL, and KICP. The University of Northwestern – St. Paul operates contemporary Christian simulcasts of KNWI-FM at 107.1 Osceola/Des Moines, KNWM-FM at 96.1 Madrid/Ames/Des Moines, and K264CD at 100.7 in downtown Des Moines. Low-power FM stations include KFMG-LP 99.1, a community radio station broadcasting from the Hotel Fort Des Moines and also webstreamed.

The Des Moines–Ames media market consists of 35 central Iowa counties: Adair, Adams, Appanoose, Audubon, Boone, Calhoun, Carroll, Clarke, Dallas, Decatur, Franklin, Greene, Guthrie, Hamilton, Hardin, Humboldt, Jasper, Kossuth, Lucas, Madison, Mahaska, Marion, Marshall, Monroe, Pocahontas, Polk, Poweshiek, Ringgold, Story, Taylor, Union, Warren, Wayne, Webster, and Wright. It was ranked 71st by Nielsen Media Research for the 2008–2009 television season, with 432,410 television households.

Commercial television stations serving Des Moines include CBS affiliate KCCI channel 8, NBC affiliate WHO-DT channel 13, and Fox affiliate KDSM-TV channel 17. ABC affiliate WOI-TV channel 5 and CW affiliate KCWI-TV channel 23 are both licensed to Ames and broadcast from studios in West Des Moines. KFPX-TV channel 39, the local ION affiliate, is licensed to Newton. Two non-commercial stations are also licensed to Des Moines: KDIN channel 11, the local PBS member station and flagship of the Iowa Public Television network, and KDMI channel 19, a TCT affiliate. Mediacom is the Des Moines area's cable television provider. Television sports listings for Des Moines and Iowa can be found on the Des Moines Register website.
The Des Moines Register is the city's primary daily newspaper. As of March 31, 2007, the Register ranked 71st in circulation among daily newspapers in the United States, according to the Audit Bureau of Circulations, with 146,050 daily and 233,229 Sunday subscribers. Weekly newspapers include Juice, a publication aimed at the 25–34 demographic, published by the Register on Wednesdays; Cityview, an alternative weekly published on Thursdays; and the Des Moines Business Record, a business journal published on Sundays, along with the West Des Moines Register, the Johnston Register, and the Waukee Register, published on Tuesdays, Wednesdays, or Thursdays depending on the address of the subscriber. Additionally, magazine publisher Meredith Corporation was based in Des Moines prior to its acquisition by IAC and merger with Dotdash in 2021.

Des Moines hosts professional minor league teams in several sports (baseball, basketball, hockey, indoor football, and soccer) and is home to the sports teams of Drake University, which play in NCAA Division I.

The Des Moines Menace soccer club, a member of USL League Two, plays its home games at Valley Stadium in West Des Moines. Des Moines United FC of the National Premier Soccer League also uses Valley Stadium.

Des Moines is home to the Iowa Cubs baseball team of the Triple-A East. The I-Cubs, the Triple-A affiliate of the major league Chicago Cubs, play their home games at Principal Park, near the confluence of the Des Moines and Raccoon Rivers.

Wells Fargo Arena of the Iowa Events Center is home to the Iowa Barnstormers of the Indoor Football League, the Iowa Wild of the American Hockey League, and the Iowa Wolves of the NBA G League. The Barnstormers relaunched as an af2 club in 2008 before joining a relaunched Arena Football League in 2010 and the Indoor Football League in 2015; the Barnstormers had previously played in the Arena Football League from 1994 to 2000 (featuring future NFL Hall of Famer and Super Bowl MVP quarterback Kurt Warner) before relocating to New York. The Iowa Energy, a D-League team, began play in 2007; they were bought by the Minnesota Timberwolves in 2017 and renamed the Iowa Wolves to reflect the new ownership. The Wild, the AHL affiliate of the National Hockey League's Minnesota Wild, have played at Wells Fargo Arena since 2013; previously, the Iowa Chops played four seasons in Des Moines (known as the Iowa Stars for three of those seasons).

Additionally, the Des Moines Buccaneers of the United States Hockey League play at Buccaneer Arena in suburban Urbandale.

Des Moines is also home to the Drake University Bulldogs, an NCAA Division I member of the Missouri Valley Conference, who play primarily northwest of downtown at the on-campus Drake Stadium and Knapp Center. Drake Stadium hosts the famed Drake Relays each April. In addition to the Drake Relays, Drake Stadium has hosted multiple NCAA Outdoor Track and Field Championships and USA Outdoor Track and Field Championships.
The Vikings of Grand View University also compete in intercollegiate athletics in Des Moines. A member of the Heart of America Athletic Conference within the NAIA, Grand View fields 21 varsity athletic teams. The Vikings were NAIA national champions in football in 2013.

The Principal Charity Classic, a Champions Tour golf event, is held at Wakonda Club in late May or early June. The IMT Des Moines Marathon is held throughout the city each October.

Des Moines has 76 city parks and three golf courses, as well as three family aquatic centers, five community centers, and three swimming pools. The city has 45 miles (72 km) of trails. The first major park was Greenwood Park; the park commissioners purchased the land on April 21, 1894.

The Principal Riverwalk is a riverwalk park district under construction along the banks of the Des Moines River downtown. Primarily funded by the Principal Financial Group, the Riverwalk is a multi-year project also funded by the city and the state. Upon completion, it will feature a 1.2-mile (1.9 km) recreational trail connecting the east and west sides of downtown via two pedestrian bridges. A landscaped promenade along the street level is planned. The Riverwalk includes the downtown Brenton Skating Plaza, open from November through March.

Gray's Lake, part of the 167-acre (68 ha) Gray's Lake Park, features a boat rental facility, a fishing pier, floating boardwalks, and a park resource center. Located just south of downtown, the park's centerpiece is the lighted 1.9-mile (3.1 km) Kruidenier Trail, which encircles the lake entirely.

From downtown Des Moines, running primarily along the east bank of the Des Moines River, the Neil Smith and John Pat Dorrian Trails are 28.2 miles (45.4 km) of paved recreational trails that connect Gray's Lake northward to the east shore of Saylorville Lake, Big Creek State Park, and the recreational trails of Ankeny, including the High Trestle Trail. These trails pass several recreational facilities, including Pete Crivaro Park, Principal Park, the Principal Riverwalk, the Greater Des Moines Botanical Garden, Union Park and its Heritage Carousel of Des Moines, Birdland Park and the Birdland Marina and boat ramp on the Des Moines River, Riverview Park, McHenry Park, and River Drive Park. Although outside Des Moines, Jester Park has 1,834 acres (742 ha) of land along the western shore of Saylorville Lake and can be reached from the Neil Smith Trail over the Saylorville Dam.

Just west of Gray's Lake are the 1,500 acres (607 ha) of Des Moines Water Works Park. The park lies along the banks of the Raccoon River, immediately upstream from where the Raccoon empties into the Des Moines River. The Des Moines Water Works facility, which draws the city's drinking water from the Raccoon River, sits entirely within the park, and a bridge in the park crosses the river. The park's recreational trails link to downtown Des Moines past Gray's Lake and back across the Raccoon River, either along the Meredith Trail near Principal Park or along the Martin Luther King Jr. Parkway. The trails also connect westward to Valley Junction and the recreational trails of the western suburbs: Windsor Heights, Urbandale, Clive, and Waukee. Also originating from Water Works Park, the Great Western Trail runs 18 miles (29 km) southward from Des Moines to Martensdale, through the Willow Creek Golf Course, Orilla, and Cumming. Often a location for summer music festivals and concerts, Water Works Park served as the overnight campground for thousands of bicyclists on Tuesday, July 23, 2013, during RAGBRAI XLI.
The Greater Des Moines Sister City Commission, with members from the City of Des Moines and the suburbs of Cumming, Norwalk, Windsor Heights, Johnston, Urbandale, and Ankeny, maintains the area's sister city relationships with partner cities abroad.
Des Moines is the capital and the most populous city in Iowa, United States. It is also the county seat of Polk County. A small part of the city extends into Warren County. It was incorporated on September 22, 1851, as Fort Des Moines, which was shortened to "Des Moines" in 1857. It is located on, and named after, the Des Moines River, which likely was adapted from the early French name, Rivière des Moines, meaning "River of the Monks". The city's population was 214,133 as of the 2020 census. The six-county metropolitan area is ranked 81st in terms of population in the United States, with 709,466 residents according to the 2020 census by the United States Census Bureau, and is the largest metropolitan area fully located within the state. Des Moines is a major center of the US insurance industry and has a sizable financial-services and publishing business base. The city was credited as the "number one spot for U.S. insurance companies" in a Business Wire article and named the third-largest "insurance capital" of the world. The city is the headquarters for the Principal Financial Group, Ruan Transportation, TMC Transportation, EMC Insurance Companies, and Wellmark Blue Cross Blue Shield. Other major corporations such as Wells Fargo, Cognizant, Voya Financial, Nationwide Mutual Insurance Company, ACE Limited, Marsh, Monsanto, and Corteva have large operations in or near the metropolitan area. In recent years, Microsoft, Hewlett-Packard, and Facebook have built data-processing and logistical facilities in the Des Moines area. Des Moines is an important city in U.S. presidential politics; as the state's capital, it is the site of the first caucuses of the presidential primary cycle. Many presidential candidates set up campaign headquarters in Des Moines. A 2007 article in The New York Times said, "If you have any desire to witness presidential candidates in the most close-up and intimate of settings, there is arguably no better place to go than Des Moines."
Donald Campbell
Donald Malcolm Campbell, CBE (23 March 1921 – 4 January 1967) was a British speed record breaker who broke eight absolute world speed records on water and on land in the 1950s and 1960s. He remains the only person to set both world land and water speed records in the same year (1964). He died during a water speed record attempt at Coniston Water in the Lake District, England.

Donald Campbell was born at Canbury House, Kingston upon Thames, Surrey, the son of Malcolm, later Sir Malcolm Campbell, holder of 13 world speed records in the 1920s and 1930s in the Bluebird cars and boats, and his second wife, Dorothy Evelyn née Whittall. Campbell attended St Peter's School, Seaford, and Uppingham School. At the outbreak of the Second World War he volunteered for the Royal Air Force, but was unable to serve because of a case of childhood rheumatic fever. He joined Briggs Motor Bodies Ltd in West Thurrock, where he became a maintenance engineer. Subsequently, he was a shareholder in a small engineering company called Kine Engineering, producing machine tools.

Following his father's death on New Year's Eve, 31 December 1948, and aided by Malcolm's chief engineer, Leo Villa, the younger Campbell strove to set speed records first on water and then on land. He married three times: to Daphne Harvey in 1945 (their daughter, Georgina "Gina" Campbell, was born on 19 September 1946), to Dorothy McKegg (1928–2008) in 1952, and to Tonia Bern (1928–2021) in December 1958, the last marriage lasting until his death in 1967.

Campbell was intensely superstitious, hating the colour green and the number thirteen and believing nothing good ever happened on a Friday. He also had some interest in the paranormal, which he nurtured as a member of the Ghost Club. Campbell was a restless man and seemed driven to equal, if not surpass, his father's achievements. He was generally light-hearted and, at least until his 1960 crash at the Bonneville Salt Flats, usually optimistic in his outlook.

Campbell began his speed record attempts in the summer of 1949, using his father's old boat, Blue Bird K4, which he renamed Bluebird K4. His initial attempts that summer were unsuccessful, although he did come close to his father's existing record. The team returned to Coniston Water, Lancashire, in 1950 for further trials. While there, they heard that an American, Stanley Sayres, had raised the record from 141 to 160 mph (227 to 257 km/h), beyond K4's capabilities without substantial modification.

In late 1950 and 1951, Bluebird K4 was converted from her original immersed-propeller configuration into a "prop-rider". This greatly reduced hydrodynamic drag, as the third planing point now became the propeller hub, meaning one of the two propeller blades was always out of the water at high speed. She now sported two cockpits, the second one being for Leo Villa. Bluebird K4 thus had a chance of exceeding Sayres' record, and she also enjoyed success as a circuit racer, winning the Oltranza Cup in Italy in the spring of that year. Returning to Coniston in September, the team finally got Bluebird up to 170 mph (270 km/h) after further trials, only for a structural failure at that speed to wreck the boat. Sayres raised the record the following year to 178 mph (286 km/h) in Slo-Mo-Shun IV.

Along with Campbell, Britain had another potential contender for water speed record honours: John Cobb.
He had commissioned the world's first purpose-built turbojet hydroplane, Crusader, with a target speed of over 200 mph (320 km/h), and began trials on Loch Ness in autumn 1952. Cobb was killed later that year when Crusader broke up during an attempt on the record. Campbell was devastated by Cobb's loss, but he resolved to build a new Bluebird boat to bring the water speed record back to Britain.

In early 1953, Campbell began development of his own advanced all-metal jet-powered Bluebird K7 hydroplane to challenge the record, by now held by the American prop-rider hydroplane Slo-Mo-Shun IV.[1] Designed by Ken and Lew Norris, the K7 was a steel-framed, aluminium-bodied, three-point hydroplane with a Metropolitan-Vickers Beryl axial-flow turbojet engine producing 3,500 pounds-force (16 kN) of thrust. Like Slo-Mo-Shun, but unlike Cobb's tricycle Crusader, the three planing points were arranged with two forward, on outrigged sponsons, and one aft, in a "pickle-fork" layout, prompting Bluebird's early comparison to a blue lobster. K7 was of very advanced design and construction, its load-bearing steel space frame ultra-rigid and stressed to 25 g (exceeding contemporary military jet aircraft). It had a design speed of 250 miles per hour (400 kilometres per hour) and remained the only successful jet boat in the world until the late 1960s. The designation "K7" was derived from its Lloyd's unlimited rating registration, Bluebird being the seventh boat registered at Lloyd's in the "Unlimited" series. It was carried on a prominent white roundel on each sponson, underneath an infinity symbol.

Campbell set seven world water speed records in K7 between July 1955 and December 1964. The first of these marks was set at Ullswater on 23 July 1955, where he achieved a speed of 202.32 mph (325.60 km/h), but only after many months of trials and a major redesign of Bluebird's forward sponson attachment points.

Campbell achieved a steady series of subsequent speed-record increases with the boat during the rest of the decade, beginning with a mark of 216 mph (348 km/h) in 1955 on Lake Mead in Nevada. Subsequently, four new marks were registered on Coniston Water, where Campbell and Bluebird became an annual fixture in the latter half of the 1950s, enjoying significant sponsorship first from the Mobil oil company and subsequently from BP. Campbell also made an attempt in the summer of 1957 at Canandaigua, New York, which failed for lack of suitably calm water. Bluebird K7 became a well-known and popular attraction; as well as her annual Coniston appearances, she was displayed extensively in the UK, United States, Canada, and Europe, and subsequently in Australia during Campbell's prolonged attempt on the land speed record in 1963–1964.

To extract more speed and give the boat greater high-speed stability in both pitch and yaw, K7 was subtly modified in the second half of the 1950s, gaining more effective streamlining with a blown Perspex cockpit canopy and fluting to the lower part of the main hull. In 1958, the design acquired a small wedge-shaped tail fin housing an arrester parachute; modified sponson fairings that significantly reduced forward aerodynamic lift; and a fixed hydrodynamic stabilising fin attached to the transom to aid directional stability and exert a marginal down-force on the nose, all intended to increase the hydroplane's safe operating envelope.
Thus she reached 225 mph (362 km/h) in 1956, when an unprecedented peak speed of 286.78 mph (461.53 km/h) was achieved on one run; 239 mph (385 km/h) in 1957; 248 mph (399 km/h) in 1958; and 260 mph (420 km/h) in 1959. Campbell was appointed a Commander of the Order of the British Empire (CBE) in January 1957 for his water speed record breaking, and in particular for his record at Lake Mead in the United States, which earned him and Britain very positive acclaim. On 23 November 1964, Campbell achieved the Australian water speed record of 216 miles per hour (348 km/h) on Lake Bonney in the Riverland of South Australia, although he was unable to break the world record on that attempt.

It was after the Lake Mead water speed record success in 1955 that the seeds of Campbell's ambition to hold the land speed record as well were planted. The following year, serious planning was under way to build a car to break the land speed record, which then stood at 394 mph (634 km/h), set by John Cobb in 1947. The Norris brothers designed Bluebird-Proteus CN7 with 500 mph (800 km/h) in mind. The brothers were even more enthusiastic about the car than the boat, and, as with all of his projects, Campbell wanted Bluebird CN7 to be the best of its type, a showcase of British engineering skills. The British motor industry, in the guise of Dunlop, BP, Smiths Industries, Lucas Automotive, Rubery Owen, and many others, became heavily involved in the project to build the most advanced car the world had yet seen. CN7 was powered by a specially modified Bristol-Siddeley Proteus free-turbine engine of 4,450 shp (3,320 kW) driving all four wheels.

Bluebird CN7 was designed to achieve 475–500 mph (764–805 km/h) and was completed by the spring of 1960. Following low-speed tests conducted at the Goodwood motor racing circuit in Sussex in July, CN7 was taken to the Bonneville Salt Flats in Utah, United States, the scene of his father's last land speed record triumph some 25 years earlier, in September 1935. The trials initially went well, and various adjustments were made to the car. On the sixth run in CN7, however, Campbell lost control at over 360 mph (580 km/h) and crashed. It was the car's tremendous structural integrity that saved his life. He was hospitalised with a fractured skull and a burst eardrum, as well as minor cuts and bruises, but CN7 was a write-off.

Almost immediately, Campbell announced he was determined to have another go. Sir Alfred Owen, whose Rubery Owen industrial group had built CN7, offered to rebuild it for him. That single decision was to have a profound influence on the rest of Campbell's life. His original plan had been to break the land speed record at over 400 mph in 1960, return to Bonneville the following year to push the speed to something near 500 mph, take his seventh water speed record with K7, and then retire as the undisputed champion of speed, secure, perhaps just as importantly, in the knowledge that he was worthy of his father's legacy.

Campbell decided not to go back to Utah for the new trials. He felt the Bonneville course, at 11 miles (18 km), was too short, and the salt surface was in poor condition. BP offered to find another venue, and eventually, after a long search, Lake Eyre in South Australia was chosen. It had not rained there for nine years, and the vast dry bed of the salt lake offered a course of up to 20 miles (32 km). By the summer of 1962, Bluebird CN7 was rebuilt, some nine months later than Campbell had hoped.
It was essentially the same car, but with the addition of a large stabilising tail fin and a reinforced fibreglass cockpit cover. At the end of 1962, CN7 was shipped out to Australia ready for the new attempt. Low-speed runs had just started when the rains came. The course was compromised, and further rain meant that by May 1963 Lake Eyre was flooded to a depth of 3 inches (7.6 cm), causing the attempt to be abandoned. Campbell was heavily criticised in the press for alleged time-wasting and mismanagement of the project, despite the fact that he could hardly be held responsible for the unprecedented weather. To make matters worse for Campbell, the American Craig Breedlove drove his pure-thrust jet car "Spirit of America" to a speed of 407.45 miles per hour (655.73 km/h) at Bonneville in July 1963. Although the "car" did not conform to FIA (Fédération Internationale de l'Automobile) regulations, which stipulated that it had to be wheel-driven and have a minimum of four wheels, in the eyes of the world Breedlove was now the fastest man on Earth.

Campbell returned to Australia in March 1964, but the Lake Eyre course failed to fulfil the early promise it had shown in 1962, and there were further spells of rain. BP pulled out as his main sponsor after a dispute, but he was able to secure backing from the Australian oil company Ampol. The track never properly dried out, and Campbell was forced to make the best of the conditions. Finally, in July 1964, he was able to post some speeds that approached the record. On the 17th of that month, he took advantage of a break in the weather and made two courageous runs along the shortened and still damp track, posting a new land speed record of 403.10 mph (648.73 km/h). The moment was captured in a number of well-known images by photographers, including Australia's Jeff Carter.

Campbell was bitterly disappointed with the record, as the vehicle had been designed for much higher speeds: CN7 covered the final third of the measured mile at an average of 429 mph (690 km/h), peaking at over 440 mph (710 km/h) as it left the measured distance. He resented the fact that it had all been so difficult. "We've made it — we got the bastard at last," was his reaction to the success. Campbell's 403.1 mph nonetheless represented the official land speed record. In 1969, after Campbell's fatal accident, his widow, Tonia Bern-Campbell, negotiated a deal with Lynn Garrison, president of Craig Breedlove and Associates, that would have seen Craig Breedlove run Bluebird on Bonneville's salt flats. The concept was cancelled when the parallel Spirit of America supersonic car project failed to find support.

Campbell now planned to go after the water speed record one more time with Bluebird K7, to do what he had aimed for so many years earlier during the initial planning stages of CN7: break both records in the same year. After more delays, he finally achieved his seventh water speed record at Lake Dumbleyung near Perth, Western Australia, on the last day of 1964, at a speed of 276.33 mph (444.71 km/h). He had become the first, and so far only, person to set both land and water speed records in the same year.

Campbell's land speed record was short-lived, because FIA rule changes meant that pure jet cars would be eligible to set records from October 1964. His 429 mph (690 km/h) speed on his final Lake Eyre run remained the highest speed achieved by a wheel-driven car until 2001; Bluebird CN7 is now on display at the National Motor Museum at Beaulieu in Hampshire, England, its potential only partly realised.

Campbell decided a massive jump in speed was called for following his successful 1964 land speed record in Bluebird CN7. His vision was of a supersonic rocket car with a potential maximum speed of 840 mph (1,350 km/h), and the Norris brothers were asked to undertake a design study. The result, Bluebird Mach 1.1, was a design for a rocket-powered supersonic land speed record car. Campbell chose a lucky date to hold a press conference at the Charing Cross Hotel, on 7 July 1965, to announce his future record-breaking plans: "... In terms of speed on the Earth's surface, my next logical step must be to construct a Bluebird car that can reach Mach 1.1. The Americans are already making plans for such a vehicle and it would be tragic for the world image of British technology if we did not compete in this great contest and win. The nation whose technologies are first to seize the 'faster than sound' record on land will be the nation whose industry will be seen to leapfrog into the '70s or '80s. We can have the car on the track within three years."

Bluebird Mach 1.1 was to be rocket-powered. Ken Norris had calculated that rocket motors would yield a vehicle with a very low frontal area, greater density, and lighter weight than a jet engine would allow; Bluebird Mach 1.1 would also be a relatively compact and simple design. Norris specified two off-the-shelf Bristol Siddeley BS.605 rocket engines. The 605 had been developed as a rocket-assisted take-off engine for military aircraft and was fuelled with kerosene, using hydrogen peroxide as the oxidiser. Each engine was rated at 8,000 lbf (36 kN) of thrust. In the Bluebird Mach 1.1 application, the combined 16,000 lbf (71 kN) of thrust would be equivalent to 36,000 bhp (27,000 kW; 36,000 PS) at 840 mph (1,350 km/h).
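The horsepower equivalence quoted here follows from the physics of a pure-thrust vehicle, for which propulsive power is simply thrust multiplied by speed. A minimal sketch, using only the thrust and design-speed figures quoted above and standard conversion constants:

```python
# For a pure-thrust vehicle, propulsive power = thrust x speed.
# Thrust and speed are the figures quoted above; constants are standard.
LBF_TO_N = 4.448222    # pounds-force -> newtons
MPH_TO_MS = 0.44704    # miles per hour -> metres per second
W_PER_HP = 745.7       # watts per mechanical horsepower

thrust_n = 16_000 * LBF_TO_N   # combined thrust of the two BS.605 rockets
speed_ms = 840 * MPH_TO_MS     # design speed

power_w = thrust_n * speed_ms
print(f"{power_w / 1e3:,.0f} kW")       # ~26,700 (quoted as 27,000 kW)
print(f"{power_w / W_PER_HP:,.0f} hp")  # ~35,800 (quoted as 36,000 bhp)
```

The same thrust delivers far more power at 840 mph than at low speed, which is why thrust, rather than horsepower, is the natural rating for a rocket vehicle.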
To increase publicity for his rocket car venture, in the spring of 1966 Campbell decided to try once more for a water speed record. This time the target was 300 mph (480 km/h). Bluebird K7 was fitted with a lighter and more powerful Bristol Orpheus engine, taken from a Folland Gnat jet aircraft, which developed 4,500 pounds-force (20,000 N) of thrust. The modified boat was taken back to Coniston in the first week of November 1966.

The trials did not go well. The weather was very poor, and K7 suffered an engine failure when her air intakes collapsed and debris was drawn into the engine. By the middle of December, some high-speed runs were made, in excess of 250 mph (400 km/h), but still well below Campbell's existing record. Problems with Bluebird's fuel system meant that the engine could not reach full speed and so would not develop maximum power. Eventually, by the end of December, after further modifications to her fuel system and the replacement of a fuel pump, the fuel starvation problem was fixed, and Campbell awaited better weather to mount an attempt.

On 4 January 1967, weather conditions were finally suitable, and Campbell commenced the first run of his last record attempt at just after 8:45 am. Bluebird moved slowly out towards the middle of the lake, where she paused briefly as Campbell lined her up. With a deafening blast of power, Campbell applied full throttle and Bluebird began to surge forward. Clouds of spray issued from the jet-pipe, water poured over the rear spar, and after a few hundred yards, at 70 miles per hour (113 km/h), Bluebird unstuck from the surface and rocketed off towards the southern end of the lake, producing her characteristic comet's tail of spray. She entered the measured kilometre at 8:46 am. Leo Villa witnessed her passing the first marker buoy at about 285 mph (459 km/h) in perfect steady planing trim, her nose slightly down, still accelerating, and 7.525 seconds later Keith Harrison saw her leave the measured kilometre at a speed of over 310 mph (500 km/h). The average speed for the first run was 297.6 mph (478.9 km/h). Campbell lifted his foot from the throttle about three-tenths of a second before passing the southern kilometre marker.
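The first-run average follows directly from the timed transit of the measured kilometre: average speed is simply distance divided by time. A short sketch, assuming the quoted 7.525-second transit:

```python
# Average speed over the measured kilometre = distance / time,
# using the 7.525 s transit time quoted above.
t_s = 7.525                  # quoted transit time, seconds
v_ms = 1000.0 / t_s          # metres per second
v_kmh = v_ms * 3.6
v_mph = v_kmh / 1.609344

print(f"{v_kmh:.1f} km/h = {v_mph:.1f} mph")  # ~478.4 km/h = ~297.3 mph
# The official 297.6 mph (478.9 km/h) corresponds to ~7.517 s; the small
# difference sits within the rounding of the quoted time.
```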
As Bluebird left the measured kilometre, Keith Harrison and Eric Shaw, in a course boat at the southern end, both noticed that she was very light around the bows, riding on her front stabilising fins. Her planing trim was no worse than she had exhibited when equipped with the Beryl engine, but it was markedly different from that observed by Leo Villa at the northern end of the kilometre, when she was under full acceleration. Campbell had made his usual commentary throughout the run, via radio intercom: "... I'm under way, all systems normal; brake swept up, er ... air pressure warning light on ... I'm coming onto track now and er ... I'll open up just as soon as I am heading down the lake, er doesn't look too smooth from here, doesn't matter, here we go ... Here we go ... [pause 3 seconds] ... Passing through four ... five coming up ... a lot of water, nose beginning to lift, water all over the front of the engine again ... and the nose is up ... low pressure fuel warning light ... going left ... OK we're up and away ... and passing through er ... tramping very hard at 150 ... very hard indeed ... FULL POWER ... Passing through 2 ... 25 out of the way ... tramping like hell Leo, I don't think I can get over the top, but I'll try, FULL HOUSE ... and I can't see where I am ... FULL HOUSE – FULL HOUSE – FULL HOUSE ... POWER OFF NOW! ... I'M THROUGH! ... power ... (garbled) er passing through 25 vector off Peel Island ... passing through 2 ... I'm lighting like mad ... brake gone down ... er ... engine lighting up now ... relighting ... passing Peel Island ... relight made normal ... and now ... down at Brown Howe ... passing through 100 ... er ... nose hasn't dropped yet ... nose down."

Instead of refuelling and waiting for the wash of this run to subside, Campbell decided to make the return run immediately. This was not an unprecedented departure from normal practice: in many previous runs Campbell had exploited a quick turnaround to beat the encroachment of water disturbances onto the measured kilometre.

The second run was even faster once severe tramping, caused by the disturbance left by the water brake, subsided on the run-up from Peel Island. Once smooth water was reached, some 700 metres (766 yd) or so from the start of the kilometre, K7 exhibited cycles of ground-effect hovering before accelerating hard at 0.63 g to a peak speed of 328 mph (528 km/h) some 200 metres or so from the southern marker buoy. Bluebird was now experiencing bouncing episodes of the starboard sponson with increasing ferocity. At peak speed, the most intense and longest-lasting bounce precipitated a severe decelerating episode, from 328 mph (528 km/h) to 296 mph (476 km/h) at −1.86 g, as K7 dropped back onto the water.
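As a hedged check on the quoted figures, the implied duration of that decelerating episode can be estimated from the speed loss and the −1.86 g value alone, assuming the deceleration were sustained:

```python
# If K7 slowed from 328 mph to 296 mph at a sustained 1.86 g,
# the episode lasted well under a second. Figures are those quoted
# above; g is standard gravity.
MPH_TO_MS = 0.44704
G = 9.80665                     # m/s^2

dv = (328 - 296) * MPH_TO_MS    # speed lost, ~14.3 m/s
a = 1.86 * G                    # deceleration magnitude, ~18.2 m/s^2
print(f"~{dv / a:.2f} s")       # ~0.78 s
```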
Bluebird then executed an almost complete backflip (roughly 320°, and slightly off-axis) before plunging into the water, the port sponson marginally ahead of the starboard, approximately 230 metres from the end of the measured kilometre. The boat then cartwheeled across the water before coming to rest. The impact broke K7 forward of the air intakes, where Campbell was sitting, and the main hull sank shortly afterwards.

Mr Whoppit, Campbell's teddy bear mascot, was found among the floating debris, and the pilot's helmet was recovered. Royal Navy divers searched for Campbell's body and, although the wreck of K7 was found, they called off the search after two weeks without locating it. Campbell's body was finally located in 2001.

Campbell's last words, during a 31-second transmission on his final run, via radio intercom, were:

"... Full nose up ... Pitching a bit down here ... coming through our own wash ... er getting straightened up now on track ... rather closer to Peel Island ... and we're tramping like mad ... and er ... FULL POWER ... er tramping like hell OVER. I can't see much and the water's very bad indeed ... I'm galloping over the top ... and she's giving a hell of a bloody row in here ... I can't see anything ... I've got the bows out ... I'm going ... U-hh ..."

The crash has been variously attributed to several possible causes, or to a combination of them.

On 28 January 1967, Campbell was posthumously awarded the Queen's Commendation for Brave Conduct "for courage and determination in attacking the world water speed record."

The wreckage of Campbell's craft was recovered by the Bluebird Project between October 2000, when the first sections were raised, and May 2001, when Campbell's body was recovered. The largest section, comprising approximately two-thirds of the centre hull, was raised on 8 March 2001. The project began when diver Bill Smith was inspired to look for the wreck after hearing the Marillion song "Out of This World" (from the album Afraid of Sunlight), which was written about Campbell and Bluebird.

The recovered wreck revealed that the water brake had deployed after the accident as a result of stored accumulator pressure; Campbell would not have had time to deploy the relatively slow-moving brake as the boat flipped out of control. The boat still contained fuel in the engine fuel lines, discounting the fuel-starvation theory. The wreckage all evidenced an impact from left to right, wiping the whole front of the boat off in that direction. Campbell's lower harness mounts had failed and were found to be effectively useless. Further dives recovered various parts of K7, which had separated from the main hull when it broke up on impact.

Part of Campbell's body was finally located just over two months later and recovered from the lake on 28 May 2001, still wearing his blue nylon overalls. On the night before his death, while playing cards, he had drawn the queen and the ace of spades. Reflecting upon the fact that Mary, Queen of Scots had drawn the same two cards the night before she was beheaded, he told his mechanics, who were playing cards with him, that he had a fearful premonition that he was going to "get the chop". It was not possible to determine the cause of Campbell's death, though a consultant engineer giving evidence to the inquest said that the force of the impact could have caused him to be decapitated. When his remains were found, his skull was not present; it is still missing.
Campbell was buried in Coniston Cemetery on 12 September 2001, after his coffin had been carried on a launch down the lake, and through the measured kilometre, one last time. A funeral service was then held at St Andrew's Church in Coniston, after an earlier DNA examination had confirmed the remains as his. The funeral was attended by his widow, Tonia, daughter Gina, other members of his family, members of his former team and admirers. It was overshadowed in the media by coverage of the 9/11 attacks in the United States.

Campbell's sister, Jean Wales, had been against the recovery of her brother's body out of respect for his stated wish that, in the event of something going wrong, "Skipper and boat stay together". She did, however, remain in daily telephone contact with project leader Bill Smith during the recovery operation in anticipation of any news of her brother's remains. She did not attend the funeral service. Steve Hogarth, lead singer of Marillion, was present at the funeral and performed the song "Out of This World" solo.

Between them, Campbell and his father had set 11 speed records on water and 10 on land.

The story of Campbell's last attempt at the water speed record on Coniston Water was told in the BBC television film Across the Lake in 1988, with Anthony Hopkins as Campbell. Nine years earlier, Robert Hardy had played Campbell's father, Sir Malcolm Campbell, in the BBC2 Playhouse television drama "Speed King"; both were written by Roger Milner and produced by Innes Lloyd. In 2003, the BBC showed a documentary reconstruction of Campbell's fateful water speed record attempt in an episode of Days That Shook the World. It featured a mixture of modern reconstruction and original film footage. All of the original colour clips were taken from a film capturing the event, Campbell at Coniston by John Lomax, a local amateur filmmaker from Wallasey, England. Lomax's film won awards worldwide in the late 1960s for recording the final weeks of Campbell's life.

In 1956, Campbell was surprised by Eamonn Andrews for the seventh episode of the new television show This Is Your Life.

An English Heritage blue plaque commemorates Campbell and his father at Canbury School, Kingston Hill, Kingston upon Thames, where they lived.

In the village of Coniston, the Ruskin Museum has a display of Campbell memorabilia, and the Bristol Orpheus engine recovered in 2001 is also displayed. The engine's casing is mostly missing, having acted as a sacrificial anode during its time underwater, but the internals are preserved. Campbell's helmet from the ill-fated run is also on display.

On 23 March 2021, organised by the Ruskin Museum, two Hawk jets of the Royal Air Force staged a flypast over the Lake District to mark the 100th anniversary of Campbell's birth. As they flew over Coniston Water, the jets dipped their wings in salute, repeating a gesture carried out by an Avro Vulcan on the day after his death. Campbell's daughter, Gina, laid flowers on the surface of the lake as the jets flew overhead.

On 7 December 2006, Campbell's daughter, Gina Campbell, formally gifted Bluebird K7 to the Ruskin Museum in Coniston on behalf of the Campbell Family Heritage Trust. In agreement with the trust and the museum, Bill Smith was to organise the restoration of the boat to the running order it was in on 4 January 1967. Smith said that this would take an undisclosed number of years to accomplish.
Gina Campbell commented: "I've decided to secure the future of Bluebird for the people of Coniston, the Ruskin Museum and the people of the world". Museum Director Vicky Slowe spoke of Gina Campbell's generosity and said that Bill Smith had assured the museum he could get Bluebird fully conserved and reconfigured at no cost to it. As of 2008, K7 was being fully restored by The Bluebird Project, to a very high standard of working condition, in North Shields, Tyne and Wear, using a significant proportion of her original fabric, but with a replacement BS Orpheus engine of the same type, albeit one incorporating many original components.

As of May 2009, permission had been given for a one-off set of proving trials of Bluebird on Coniston Water, where she would be tested to a safe speed for demonstration purposes only. No fixed date was given for the completion of Bluebird K7 or for the trials. Once restored, K7 was to be housed in her own purpose-built wing at the Ruskin Museum in Coniston.

On 20 March 2018, the restoration was featured on the BBC's The One Show, when it was announced that Bluebird K7 would return to the water on Loch Fad, on the Isle of Bute in Scotland, in August 2018 for handling trials.

In August 2018, initial restoration work on Bluebird was completed. She was transported to Loch Fad, where she was refloated on 4 August 2018. Following initial engine trials on 5 August, Bluebird completed a series of test runs on the loch, reaching speeds of about 150 mph (240 km/h). For safety reasons, there are no plans to attempt any higher speeds.
[ { "paragraph_id": 0, "text": "Donald Malcolm Campbell, CBE (23 March 1921 – 4 January 1967) was a British speed record breaker who broke eight absolute world speed records on water and on land in the 1950s and 1960s. He remains the only person to set both world land and water speed records in the same year (1964). He died during a water speed record attempt at Coniston Water in the Lake District, England.", "title": "" }, { "paragraph_id": 1, "text": "Donald Campbell was born at Canbury House, Kingston upon Thames, Surrey, the son of Malcolm, later Sir Malcolm Campbell, holder of 13 world speed records in the 1920s and 1930s in the Bluebird cars and boats, and his second wife, Dorothy Evelyn née Whittall.", "title": "Family and personal life" }, { "paragraph_id": 2, "text": "Campbell attended St Peter's School, Seaford and Uppingham School. At the outbreak of the Second World War he volunteered for the Royal Air Force, but was unable to serve because of a case of childhood rheumatic fever. He joined Briggs Motor Bodies Ltd in West Thurrock, where he became a maintenance engineer. Subsequently, he was a shareholder in a small engineering company called Kine Engineering, producing machine tools. Following his father's death on New Year's Eve, 31 December 1948 and aided by Malcolm's chief engineer, Leo Villa, the younger Campbell strove to set speed records first on water and then land.", "title": "Family and personal life" }, { "paragraph_id": 3, "text": "He married three times — to Daphne Harvey in 1945, producing daughter Georgina (Gina) Campbell, born on 19 September 1946; to Dorothy McKegg (1928–2008) in 1952; and to Tonia Bern (1928–2021) in December 1958, which lasted until his death in 1967. Campbell was intensely superstitious, hating the colour green, the number thirteen and believing nothing good ever happened on a Friday. He also had some interest in the paranormal, which he nurtured as a member of the Ghost Club.", "title": "Family and personal life" }, { "paragraph_id": 4, "text": "Campbell was a restless man and seemed driven to equal, if not surpass, his father's achievements. He was generally light-hearted and was usually, at least until his 1960 crash at the Bonneville Salt Flats, optimistic in his outlook.", "title": "Family and personal life" }, { "paragraph_id": 5, "text": "Campbell began his speed record attempts in the summer of 1949, using his father's old boat, Blue Bird K4, which he renamed Bluebird K4. His initial attempts that summer were unsuccessful, although he did come close to raising his father's existing record. The team returned to Coniston Water, Lancashire in 1950 for further trials. While there, they heard that an American, Stanley Sayres, had raised the record from 141 to 160 mph (227 to 257 km/h), beyond K4's capabilities without substantial modification.", "title": "Water speed records" }, { "paragraph_id": 6, "text": "In late 1950 and 1951, Bluebird K4 was modified to make it a \"prop-rider\" as opposed to her original immersed propeller configuration. This greatly reduced hydrodynamic drag, as the third planing point would now be the propeller hub, meaning one of the two propeller blades was always out of the water at high speed. She now sported two cockpits, the second one being for Leo Villa.", "title": "Water speed records" }, { "paragraph_id": 7, "text": "Bluebird K4 now had a chance of exceeding Sayres' record and also enjoyed success as a circuit racer, winning the Oltranza Cup in Italy in the spring of that year. 
Returning to Coniston in September, they finally got Bluebird up to 170 mph after further trials, only to suffer a structural failure at 170 mph (270 km/h) which wrecked the boat. Sayres raised the record the following year to 178 mph (286 km/h) in Slo-Mo-Shun IV.", "title": "Water speed records" }, { "paragraph_id": 8, "text": "Along with Campbell, Britain had another potential contender for water speed record honours — John Cobb. He had commissioned the world's first purpose-built turbojet Hydroplane, Crusader, with a target speed of over 200 mph (320 km/h), and began trials on Loch Ness in autumn 1952. Cobb was killed later that year, when Crusader broke up, during an attempt on the record. Campbell was devastated at Cobb's loss, but he resolved to build a new Bluebird boat to bring the water speed record back to Britain.", "title": "Water speed records" }, { "paragraph_id": 9, "text": "In early 1953, Campbell began development of his own advanced all-metal jet-powered Bluebird K7 hydroplane to challenge the record, by now held by the American prop rider hydroplane Slo-Mo-Shun IV.[1] Designed by Ken and Lew Norris, the K7 was a steel-framed, aluminium-bodied, three-point hydroplane with a Metropolitan-Vickers Beryl axial-flow turbojet engine, producing 3,500-pound-force (16 kN) of thrust.", "title": "Water speed records" }, { "paragraph_id": 10, "text": "Like Slo-Mo-Shun, but unlike Cobb's tricycle Crusader, the three planing points were arranged with two forward, on outrigged sponsons and one aft, in a \"pickle-fork\" layout, prompting Bluebird's early comparison to a blue lobster. K7 was of very advanced design and construction, and its load bearing steel space frame ultra rigid and stressed to 25 g (exceeding contemporary military jet aircraft). It had a design speed of 250 miles per hour (400 kilometres per hour) and remained the only successful jet-boat in the world until the late 1960s.", "title": "Water speed records" }, { "paragraph_id": 11, "text": "The designation \"K7\" was derived from its Lloyd's unlimited rating registration. It was carried on a prominent white roundel on each sponson, underneath an infinity symbol. Bluebird K7 was the seventh boat registered at Lloyds in the \"Unlimited\" series.", "title": "Water speed records" }, { "paragraph_id": 12, "text": "Campbell set seven world water speed records in K7 between July 1955 and December 1964. The first of these marks was set at Ullswater on 23 July 1955, where he achieved a speed of 202.32 mph (325.60 km/h) but only after many months of trials and a major redesign of Bluebird's forward sponson attachments points. Campbell achieved a steady series of subsequent speed-record increases with the boat during the rest of the decade, beginning with a mark of 216 mph (348 km/h) in 1955 on Lake Mead in Nevada. Subsequently, four new marks were registered on Coniston Water, where Campbell and Bluebird became an annual fixture in the latter half of the 1950s, enjoying significant sponsorship from the Mobil oil company and then subsequently BP.", "title": "Water speed records" }, { "paragraph_id": 13, "text": "Campbell also made an attempt in the summer of 1957 at Canandaigua, New York, which failed due to lack of suitable calm water conditions. 
Bluebird K7 became a well known and popular attraction, and as well as her annual Coniston appearances, K7 was displayed extensively in the UK, United States, Canada and Europe, and then subsequently in Australia during Campbell's prolonged attempt on the land speed record in 1963–1964.", "title": "Water speed records" }, { "paragraph_id": 14, "text": "To extract more speed, and endow the boat with greater high-speed stability, in both pitch and yaw, K7 was subtly modified in the second half of the 1950s to incorporate more effective streamlining with a blown Perspex cockpit canopy and fluting to the lower part of the main hull. In 1958, a small wedge shaped tail fin, housing an arrester parachute, modified sponson fairings, that gave a significant reduction in forward aerodynamic lift, and a fixed hydrodynamic stabilising fin, attached to the transom to aid directional stability, and exert a marginal down-force on the nose were incorporated into the design to increase the safe operating envelope of the hydroplane. Thus she reached 225 mph (362 km/h) in 1956, where an unprecedented peak speed of 286.78 mph (461.53 km/h) was achieved on one run, 239 mph (385 km/h) in 1957, 248 mph (399 km/h) in 1958 and 260 mph (420 km/h) in 1959.", "title": "Water speed records" }, { "paragraph_id": 15, "text": "Campbell was awarded the Order of the British Empire (CBE) in January 1957 for his water speed record breaking, and in particular his record at Lake Mead in the United States which earned him and Britain very positive acclaim.", "title": "Water speed records" }, { "paragraph_id": 16, "text": "On 23 November 1964, Campbell achieved the Australian water speed record of 216 miles per hour (348 km/h) on Lake Bonney Riverland in South Australia, although he was unable to break the world record on that attempt.", "title": "Water speed records" }, { "paragraph_id": 17, "text": "It was after the Lake Mead water speed record success in 1955 that the seeds of Campbell's ambition to hold the land speed record as well were planted. The following year, the serious planning was under way — to build a car to break the land speed record, which then stood at 394 mph (634 km/h) set by John Cobb in 1947. The Norris brothers designed Bluebird-Proteus CN7 with 500 mph (800 km/h) in mind.", "title": "Land speed record attempt" }, { "paragraph_id": 18, "text": "The brothers were even more enthusiastic about the car than the boat and like all of his projects, Campbell wanted Bluebird CN7, to be the best of its type, a showcase of British engineering skills. The British motor industry, in the guise of Dunlop, BP, Smiths Industries, Lucas Automotive, Rubery Owen as well as many others, became heavily involved in the project to build the most advanced car the world had yet seen. CN7 was powered by a specially modified Bristol-Siddeley Proteus free-turbine engine of 4,450 shp (3,320 kW) driving all four wheels. Bluebird CN7 was designed to achieve 475–500 mph and was completed by the spring of 1960.", "title": "Land speed record attempt" }, { "paragraph_id": 19, "text": "Following low-speed tests conducted at the Goodwood motor racing circuit in Sussex, in July, the CN7 was taken to the Bonneville Salt Flats in Utah, United States, scene of his father's last land speed record triumph, some 25 years earlier in September 1935. The trials initially went well, and various adjustments were made to the car. On the sixth run in CN7, Campbell lost control at over 360 mph and crashed. 
It was the car's tremendous structural integrity that saved his life. He was hospitalised with a fractured skull and a burst eardrum, as well as minor cuts and bruises, but CN7 was a write-off. Almost immediately, Campbell announced he was determined to have another go. Sir Alfred Owen, whose Rubery Owen industrial group had built CN7, offered to rebuild it for him. That single decision was to have a profound influence on the rest of Campbell's life. His original plan had been to break the land speed record at over 400 mph in 1960, return to Bonneville the following year to really bump up the speed to something near to 500 mph, get his seventh water speed record with K7 and then retire, as undisputed champion of speed and perhaps just as important, secure in the knowledge that he was worthy of his father's legacy.", "title": "Land speed record attempt" }, { "paragraph_id": 20, "text": "Campbell decided not to go back to Utah for the new trials. He felt the Bonneville course was too short at 11-mile (18 km) and the salt surface was in poor condition. BP offered to find another venue and eventually after a long search, Lake Eyre, in South Australia, was chosen. It hadn't rained there for nine years and the vast dry bed of the salt lake offered a course of up to 20-mile (32 km). By the summer of 1962, Bluebird CN7 was rebuilt, some nine months later than Campbell had hoped. It was essentially the same car, but with the addition of a large stabilising tail fin and a reinforced fibreglass cockpit cover. At the end of 1962, CN7 was shipped out to Australia ready for the new attempt. Low-speed runs had just started when the rains came. The course was compromised and further rain meant, that by May 1963, Lake Eyre was flooded to a depth of 3 inches, causing the attempt to be abandoned. Campbell was heavily criticised in the press for alleged time wasting and mismanagement of the project, despite the fact that he could hardly be held responsible for the unprecedented weather.", "title": "Land speed record attempt" }, { "paragraph_id": 21, "text": "To make matters worse for Campbell, American Craig Breedlove drove his pure thrust jet car \"Spirit of America\" to a speed of 407.45 miles per hour (655.73 km/h) at Bonneville in July 1963. Although the \"car\" did not conform to FIA (Federation Internationale de L'Automobile) regulations, that stipulated it had to be wheel-driven and have a minimum of four wheels, in the eyes of the world, Breedlove was now the fastest man on Earth.", "title": "Land speed record attempt" }, { "paragraph_id": 22, "text": "Campbell returned to Australia in March 1964, but the Lake Eyre course failed to fulfil the early promise it had shown in 1962 and there were further spells of rain. BP pulled out as his main sponsor after a dispute, but he was able to secure backing from Australian oil company Ampol.", "title": "Land speed record attempt" }, { "paragraph_id": 23, "text": "The track never properly dried out and Campbell was forced to make the best of the conditions. Finally, in July 1964, he was able to post some speeds that approached the record. On the 17th of that month, he took advantage of a break in the weather and made two courageous runs along the shortened and still damp track, posting a new land speed record of 403.10 mph (648.73 km/h). The surreal moment was captured in a number of well-known images by photographers, including Australia's Jeff Carter. Campbell was bitterly disappointed with the record as the vehicle had been designed for much higher speeds. 
CN7 covered the final third of the measured mile at an average of 429 mph (690 km/h), peaking as it left the measured distance at over 440 mph (710 km/h). He resented the fact that it had all been so difficult. \"We've made it — we got the bastard at last,\" was his reaction to the success. Campbell's 403.1 mph represented the official land speed record.", "title": "Land speed record attempt" }, { "paragraph_id": 24, "text": "In 1969, after Campbell's fatal accident, his widow, Tonia Bern-Campbell negotiated a deal with Lynn Garrison, president of Craig Breedlove and Associates, that would see Craig Breedlove run Bluebird on Bonneville's Salt Flats. This concept was cancelled when the parallel Spirit of America supersonic car project failed to find support.", "title": "Land speed record attempt" }, { "paragraph_id": 25, "text": "Campbell now planned to go after the water speed record one more time with Bluebird K7 — to do what he had aimed for so many years earlier, during the initial planning stages of CN7 — break both records in the same year. After more delays, he finally achieved his seventh water speed record at Lake Dumbleyung near Perth, Western Australia, on the last day of 1964, at a speed of 276.33 mph (444.71 km/h). He had become the first, and so far only, person to set both land and water speed records in the same year.", "title": "Double records" }, { "paragraph_id": 26, "text": "Campbell's land speed record was short-lived, because FIA rule changes meant that pure jet cars would be eligible to set records from October 1964. Campbell's 429 mph (690 km/h) speed on his final Lake Eyre run remained the highest speed achieved by a wheel-driven car until 2001; Bluebird CN7 is now on display at the National Motor Museum at Beaulieu in Hampshire, England, its potential only partly realised.", "title": "Double records" }, { "paragraph_id": 27, "text": "Campbell decided a massive jump in speed was called for following his successful 1964 land speed record attempt in Bluebird CN7. His vision was of a supersonic rocket car with a potential maximum speed of 840 mph (1,350 km/h). Norris Brothers were requested to undertake a design study. Bluebird Mach 1.1 was a design for a rocket-powered supersonic land speed record car. Campbell chose a lucky date to hold a press conference at the Charing Cross Hotel on 7 July 1965 to announce his future record breaking plans:", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 28, "text": "\"... In terms of speed on the Earth's surface, my next logical step must be to construct a Bluebird car that can reach Mach 1.1. The Americans are already making plans for such a vehicle and it would be tragic for the world image of British technology if we did not compete in this great contest and win. The nation whose technologies are first to seize the 'faster than sound' record on land will be the nation whose industry will be seen to leapfrog into the '70s or '80s. We can have the car on the track within three years.\"", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 29, "text": "Bluebird Mach 1.1 was to be rocket-powered. Ken Norris had calculated using rocket motors would result in a vehicle with very low frontal area, greater density, and lighter weight than if he were to employ a jet engine. Bluebird Mach 1.1 would also be a relatively compact and simple design. Norris specified two off-the-shelf Bristol Siddeley BS.605 rocket engines. 
The 605 had been developed as a rocket-assisted take-off engine for military aircraft and was fuelled with kerosene, using hydrogen peroxide as the oxidiser. Each engine was rated at 8,000 lbf (36 kN) thrust. In Bluebird Mach 1.1 application, the combined 16,000 lbf (71 kN) thrust would be equivalent of 36,000 bhp (27,000 kW; 36,000 PS) at 840 mph (1,350 km/h).", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 30, "text": "To increase publicity for his rocket car venture, in the spring of 1966, Campbell decided to try once more for a water speed record. This time the target was 300 mph (480 km/h). Bluebird K7 was fitted with a lighter and more powerful Bristol Orpheus engine, taken from a Folland Gnat jet aircraft, which developed 4,500 pounds-force (20,000 N) of thrust. The modified boat was taken back to Coniston in the first week of November 1966. The trials did not go well. The weather was very poor, and K7 suffered an engine failure when her air intakes collapsed and debris was drawn into the engine. By the middle of December, some high-speed runs were made, in excess of 250 mph (400 km/h) but still well below Campbell's existing record. Problems with Bluebird's fuel system meant that the engine could not reach full speed, and so would not develop maximum power. Eventually, by the end of December, after further modifications to her fuel system, and the replacement of a fuel pump, the fuel starvation problem was fixed, and Campbell awaited better weather to mount an attempt.", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 31, "text": "On 4 January 1967, weather conditions were finally suitable for an attempt. Campbell commenced the first run of his last record attempt at just after 8:45 am. Bluebird moved slowly out towards the middle of the lake, where she paused briefly as Campbell lined her up. With a deafening blast of power, Campbell now applied full throttle and Bluebird began to surge forward. Clouds of spray issued from the jet-pipe, water poured over the rear spar and after a few hundred yards, at 70 miles per hour (113 km/h), Bluebird unstuck from the surface and rocketed off towards the southern end of the lake, producing her characteristic comet's tail of spray. She entered the measured kilometre at 8:46 am. Leo Villa witnessed her passing the first marker buoy at about 285 mph (459 km/h) in perfect steady planing trim, her nose slightly down, still accelerating. 7.525 seconds later, Keith Harrison saw her leave the measured kilometre at a speed of over 310 mph (500 km/h). The average speed for the first run was 297.6 mph (478.9 km/h). Campbell lifted his foot from the throttle about 3/10 of a second before passing the southern kilometre marker. As Bluebird left the measured kilometre, Keith Harrison and Eric Shaw in a course boat at the southern end of the measured kilometre both noticed that she was very light around the bows, riding on her front stabilising fins. Her planing trim was no worse than she had exhibited when equipped with the Beryl engine, but it was markedly different from that observed by Leo Villa at the northern end of the kilometre, when she was under full acceleration. 
Campbell had made his usual commentary throughout the run.", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 32, "text": "Campbell's words on his first run were, via radio intercom:", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 33, "text": "\"... I'm under way, all systems normal; brake swept up, er ... air pressure warning light on ... I'm coming onto track now and er ... I'll open up just as soon as I am heading down the lake, er doesn't look too smooth from here, doesn't matter, here we go ... Here we go ... [pause 3 seconds] ... Passing through four ... five coming up ... a lot of water, nose beginning to lift, water all over the front of the engine again ... and the nose is up ... low pressure fuel warning light ... going left ... OK we're up and away ... and passing through er ... tramping very hard at 150 ... very hard indeed ... FULL POWER ... Passing through 2 ... 25 out of the way ... tramping like hell Leo, I don't think I can get over the top, but I'll try, FULL HOUSE ... and I can't see where I am ... FULL HOUSE – FULL HOUSE – FULL HOUSE ... POWER OFF NOW! ... I'M THROUGH! ... power ... (garbled) er passing through 25 vector off Peel Island ... passing through 2 ... I'm lighting like mad ... brake gone down ... er ... engine lighting up now ... relighting ... passing Peel Island ... relight made normal ... and now ... down at Brown Howe ... passing through 100 ... er ... nose hasn't dropped yet ... nose down.\"", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 34, "text": "Instead of refuelling and waiting for the wash of this run to subside, Campbell decided to make the return run immediately. This was not an unprecedented diversion from normal practice, as Campbell had used the advantage presented; i.e., no encroachment of water disturbances on the measured kilometre by the quick turnaround in many previous runs. The second run was even faster once severe tramping subsided on the run-up from Peel Island (caused by the water-brake disturbance). Once smooth water was reached some 700 metres (766 yd) or so from the start of the kilometre, K7 demonstrated cycles of ground effect hovering before accelerating hard at 0.63 g to a peak speed of 328 mph (528 km/h) some 200 metres or so from the southern marker buoy. Bluebird was now experiencing bouncing episodes of the starboard sponson with increasing ferocity. At the peak speed, the most intense and long-lasting bounce precipitated a severe decelerating episode — 328 miles per hour (528 km/h) to 296 miles per hour (476 km/h), -1.86g — as K7 dropped back onto the water. Engine flame-out then occurred and, shorn of thrust nose-down momentum, K7 experienced a gliding episode in strong ground effect with increasing angle-of-attack, before completely leaving the water at her static stability pitch-up limit of 5.2°. Bluebird then executed an almost complete backflip (~ 320° and slightly off-axis) before plunging into the water (port sponson marginally in advance of the starboard), approximately 230 metres from the end of the measured kilometre. The boat then cartwheeled across the water before coming to rest. 
The impact broke K7 forward of the air intakes (where Campbell was sitting) and the main hull sank shortly afterwards.", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 35, "text": "Mr Whoppit, Campbell's teddy bear mascot, was found among the floating debris and the pilot's helmet was recovered. Royal Navy divers made efforts to find and recover the body but, although the wreck of K7 was found, they called off the search, after two weeks, without locating his body. Campbell's body was finally located in 2001.", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 36, "text": "Campbell's last words, during a 31-second transmission, on his final run were, via radio intercom:", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 37, "text": "\"... Full nose up ... Pitching a bit down here ... coming through our own wash ... er getting straightened up now on track ... rather closer to Peel Island ... and we're tramping like mad ... and er ... FULL POWER ... er tramping like hell OVER. I can't see much and the water's very bad indeed ... I'm galloping over the top ... and she's giving a hell of a bloody row in here ... I can't see anything ... I've got the bows out ... I'm going ... U-hh ...\"", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 38, "text": "The cause of the crash has been variously attributed to several possible causes (or a combination of these causes):", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 39, "text": "On 28 January 1967, Campbell was posthumously awarded the Queen's Commendation for Brave Conduct \"for courage and determination in attacking the world water speed record.\"", "title": "Rocket car plans and final water speed record attempt" }, { "paragraph_id": 40, "text": "The wreckage of Campbell's craft was recovered by the Bluebird Project between October 2000, when the first sections were raised, and May 2001, when Campbell's body was recovered. The largest section, comprising approximately two-thirds of the centre hull, was raised on 8 March 2001. The project began when diver Bill Smith was inspired to look for the wreck after hearing the Marillion song \"Out of This World\" (from the album Afraid of Sunlight), which was written about Campbell and Bluebird.", "title": "Recovery of Bluebird K7 and Campbell's body" }, { "paragraph_id": 41, "text": "The recovered wreck revealed that the water brake had deployed after the accident as a result of stored accumulator pressure; Campbell would not have had time to deploy the relatively slow-moving brake as the boat flipped out of control. The boat still contained fuel in the engine fuel lines, discounting the fuel-starvation theory. The wreckage all evidenced an impact from left to right, wiping the whole front of the boat off in that direction. Campbell's lower harness mounts had failed and were found to be effectively useless. Further dives recovered various parts of K7, which had separated from the main hull when it broke up on impact.", "title": "Recovery of Bluebird K7 and Campbell's body" }, { "paragraph_id": 42, "text": "Part of Campbell's body was finally located just over two months later and recovered from the lake on 28 May 2001, still wearing his blue nylon overalls. On the night before his death, while playing cards he had drawn the queen and the ace of spades. 
Reflecting upon the fact that Mary, Queen of Scots had drawn the same two cards the night before she was beheaded, he told his mechanics, who were playing cards with him, that he had a fearful premonition that he was going to \"get the chop\". It was not possible to determine the cause of Campbell's death, though a consultant engineer giving evidence to the inquest said that the force of the impact could have caused him to be decapitated. When his remains were found, his skull was not present and is still missing.", "title": "Recovery of Bluebird K7 and Campbell's body" }, { "paragraph_id": 43, "text": "Campbell was buried in Coniston Cemetery on 12 September 2001 after his coffin was carried down the lake, and through the measured kilometre, on a launch, one last time. A funeral service was then held at St Andrew's Church in Coniston, after an earlier, and positive DNA examination had been carried out. The funeral was attended by his widow, Tonia, daughter Gina, other members of his family, members of his former team and admirers. The funeral was overshadowed in the media by coverage of the 9/11 attacks in the United States.", "title": "Recovery of Bluebird K7 and Campbell's body" }, { "paragraph_id": 44, "text": "Campbell's sister, Jean Wales, had been against the recovery of her brother's body out of respect for his stated wish that, in the event of something going wrong, \"Skipper and boat stay together\". Jean Wales did, however, remain in daily telephone contact with project leader Bill Smith during the recovery operation in anticipation of any news of her brother's remains. When Campbell was buried in Coniston Cemetery on 12 September 2001 she did not attend the service. Steve Hogarth, lead singer for Marillion, was present at the funeral and performed the song \"Out of This World\" solo.", "title": "Recovery of Bluebird K7 and Campbell's body" }, { "paragraph_id": 45, "text": "Between them, Campbell and his father had set 11 speed records on water and 10 on land.", "title": "Legacy" }, { "paragraph_id": 46, "text": "The story of Campbell's last attempt at the water speed record on Coniston Water was told in the BBC television film Across the Lake in 1988, with Anthony Hopkins as Campbell. Nine years earlier, Robert Hardy had played Campbell's father, Sir Malcolm Campbell, in the BBC2 Playhouse television drama \"Speed King\"; both were written by Roger Milner and produced by Innes Lloyd. In 2003, the BBC showed a documentary reconstruction of Campbell's fateful water-speed record attempt in an episode of Days That Shook the World. It featured a mixture of modern reconstruction and original film footage. All of the original colour clips were taken from a film capturing the event, Campbell at Coniston by John Lomax, a local amateur filmmaker from Wallasey, England. Lomax's film won awards worldwide in the late 1960s for recording the final weeks of Campbell's life.", "title": "Legacy" }, { "paragraph_id": 47, "text": "In 1956, Campbell was surprised by Eamonn Andrews for the seventh episode of the new television show This Is Your Life.", "title": "Legacy" }, { "paragraph_id": 48, "text": "An English Heritage blue plaque commemorates Campbell and his father at Canbury School, Kingston Hill, Kingston upon Thames, where they lived.", "title": "Legacy" }, { "paragraph_id": 49, "text": "In the village of Coniston, the Ruskin Museum has a display of Campbell memorabilia, and the Bristol Orpheus engine recovered in 2001 is also displayed. 
The engine's casing is mostly missing, having acted as a sacrificial anode in its time underwater, but the internals are preserved. Campbell's helmet from the ill-fated run is also on display.", "title": "Legacy" }, { "paragraph_id": 50, "text": "On 23 March 2021, organised by the Ruskin Museum, two Hawk jets of the Royal Air Force staged a fly past over the Lake District to mark the 100th anniversary of Campbell's birth. As they flew over Coniston Water, the jets dipped their wings in salute, in a repeat of a gesture carried out by an Avro Vulcan on the day after his death. Campbell's daughter, Gina, laid flowers on the surface of the lake as the jets flew overhead.", "title": "Legacy" }, { "paragraph_id": 51, "text": "On 7 December 2006, Campbell's daughter, Gina Campbell, formally gifted Bluebird K7 to the Ruskin Museum in Coniston on behalf of the Campbell Family Heritage Trust. In agreement with the trust and the museum, Bill Smith was to organise the restoration of the boat back to running order circa 4 January 1967. Smith said that this would take an undisclosed number of years to accomplish. Gina Campbell commented: \"I've decided to secure the future of Bluebird for the people of Coniston, the Ruskin Museum and the people of the world\". Museum Director Vicky Slowe spoke of Gina Campbell's generosity and said that: \"Bill Smith has assured us he can get Bluebird fully conserved and reconfigured at no cost to the museum. As of 2008, K7 is being fully restored by The Bluebird Project, to a very high standard of working condition in North Shields, Tyne and Wear, using a significant proportion of her original fabric, but with a replacement BS Orpheus engine of the same type albeit incorporating many original components.\"", "title": "Restoration" }, { "paragraph_id": 52, "text": "As of May 2009, permission had been given for a one-off set of proving trials of Bluebird on Coniston Water, where she would be tested to a safe speed for demonstration purposes only. There was no fixed date given for completion of Bluebird K7 or the trials. Upon restoration, it was planned that K7 would be housed in her own purpose-built wing at the Ruskin Museum in Coniston.", "title": "Restoration" }, { "paragraph_id": 53, "text": "On 20 March 2018 the restoration was featured on the BBC's The One Show, when it was announced that Bluebird K7 would return to the water on Loch Fad, on the Isle of Bute in Scotland, in August 2018 for handling trials.", "title": "Restoration" }, { "paragraph_id": 54, "text": "In August 2018, initial restoration work on Bluebird was completed. She was transported to Loch Fad where she was refloated on 4 August 2018. Following initial engine trials on 5 August, Bluebird completed a series of test runs on the loch, reaching speeds of about 150 mph (240 km/h). For safety reasons, there are no plans to attempt to reach any higher speeds.", "title": "Restoration" }, { "paragraph_id": 55, "text": "", "title": "External links" } ]
Donald Malcolm Campbell, was a British speed record breaker who broke eight absolute world speed records on water and on land in the 1950s and 1960s. He remains the only person to set both world land and water speed records in the same year (1964). He died during a water speed record attempt at Coniston Water in the Lake District, England.
2002-02-22T00:50:07Z
2023-11-11T20:16:37Z
[ "Template:Reflist", "Template:Citation needed", "Template:Convert", "Template:More citations needed section", "Template:Blockquote", "Template:ISBN", "Template:Webarchive", "Template:Refbegin", "Template:Cite book", "Template:Post-nominals", "Template:Use dmy dates", "Template:Commons", "Template:Use British English", "Template:Cvt", "Template:Cite news", "Template:Refend", "Template:YouTube", "Template:For", "Template:Infobox person", "Template:'", "Template:Unreferenced section", "Template:Cite web", "Template:Portal", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/Donald_Campbell
9,165
Directed set
In mathematics, a directed set (or a directed preorder or a filtered set) is a nonempty set A together with a reflexive and transitive binary relation ≤ (that is, a preorder), with the additional property that every pair of elements has an upper bound. In other words, for any a and b in A there must exist c in A with a ≤ c and b ≤ c. A directed set's preorder is called a direction.

The notion defined above is sometimes called an upward directed set. A downward directed set is defined analogously, meaning that every pair of elements is bounded below. Some authors (and this article) assume that a directed set is directed upward, unless otherwise stated. Other authors call a set directed if and only if it is directed both upward and downward.

Directed sets are a generalization of nonempty totally ordered sets: every totally ordered set is a directed set (in contrast, partially ordered sets need not be directed). Join-semilattices (which are partially ordered sets) are directed sets as well, but not conversely. Likewise, lattices are directed sets both upward and downward.

In topology, directed sets are used to define nets, which generalize sequences and unite the various notions of limit used in analysis. Directed sets also give rise to direct limits in abstract algebra and (more generally) category theory.

In addition to the definition above, there is an equivalent definition: a directed set is a set A with a preorder such that every finite subset of A has an upper bound. In this definition, the existence of an upper bound of the empty subset implies that A is nonempty.

The set of natural numbers ℕ with the ordinary order ≤ is one of the most important examples of a directed set (and so is every totally ordered set). By definition, a net is a function from a directed set and a sequence is a function from the natural numbers ℕ. Every sequence canonically becomes a net by endowing ℕ with ≤.

If x₀ is a real number then the set I := ℝ ∖ {x₀} can be turned into a directed set by defining a ≤_I b if |a − x₀| ≥ |b − x₀| (so "greater" elements are closer to x₀). We then say that the reals have been directed towards x₀. This is an example of a directed set that is neither partially ordered nor totally ordered, because antisymmetry breaks down for every pair a and b equidistant from x₀ on opposite sides: explicitly, this happens when {a, b} = {x₀ − r, x₀ + r} for some real r ≠ 0, in which case a ≤_I b and b ≤_I a even though a ≠ b. Had this preorder been defined on ℝ instead of ℝ ∖ {x₀}, it would still form a directed set, and it would now have a (unique) greatest element, specifically x₀; however, it still would not be partially ordered. This example can be generalized to a metric space (X, d) by defining on X or X ∖ {x₀} the preorder a ≤ b if and only if d(a, x₀) ≥ d(b, x₀).
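To see directedness of this preorder concretely, here is a short verification (added for illustration; it is not part of the original text): given any a, b ∈ I, whichever of the two lies at least as close to x₀ serves as the required upper bound,

$$c := \begin{cases} a, & \text{if } |a - x_0| \le |b - x_0|, \\ b, & \text{otherwise,} \end{cases} \qquad \text{so that } a \le_I c \text{ and } b \le_I c.$$

The same argument works for the metric-space version, with d(·, x₀) in place of |· − x₀|.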
A (trivial) example of a partially ordered set that is not directed is the set {a, b}, in which the only order relations are a ≤ a and b ≤ b. A less trivial example is like the previous example of the "reals directed towards x₀", but in which the ordering rule only applies to pairs of elements on the same side of x₀ (that is, if one takes an element a to the left of x₀ and b to its right, then a and b are not comparable, and the subset {a, b} has no upper bound).

An element m of a preordered set (I, ≤) is a maximal element if, for every j ∈ I, m ≤ j implies j ≤ m. It is a greatest element if j ≤ m for every j ∈ I.

Any preordered set with a greatest element is a directed set with the same preorder. For instance, in a poset P, every lower closure of an element (that is, every subset of the form {a ∈ P : a ≤ x} where x is a fixed element of P) is directed.

Every maximal element of a directed preordered set is a greatest element. Indeed, a directed preordered set is characterized by equality of the (possibly empty) sets of maximal and of greatest elements.

Let D₁ and D₂ be directed sets. Then the Cartesian product set D₁ × D₂ can be made into a directed set by defining (n₁, n₂) ≤ (m₁, m₂) if and only if n₁ ≤ m₁ and n₂ ≤ m₂. In analogy to the product order, this is the product direction on the Cartesian product. For example, the set ℕ × ℕ of pairs of natural numbers can be made into a directed set by defining (n₀, n₁) ≤ (m₀, m₁) if and only if n₀ ≤ m₀ and n₁ ≤ m₁.
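Under the product direction, the componentwise maximum of two pairs is always an upper bound. A minimal runnable sketch of this (added for illustration; the function names are ours, not the article's):

```python
# Product direction on N x N: (n0, n1) <= (m0, m1) iff n0 <= m0 and n1 <= m1.
def product_leq(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

# The componentwise maximum is an upper bound of p and q, witnessing directedness.
def upper_bound(p, q):
    return (max(p[0], q[0]), max(p[1], q[1]))

p, q = (2, 7), (5, 3)
c = upper_bound(p, q)                      # (5, 7)
assert product_leq(p, c) and product_leq(q, c)
```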
The subset inclusion relation ⊆, along with its dual ⊇, defines a partial order on any given family of sets. A non-empty family of sets is a directed set with respect to the partial order ⊇ (respectively, ⊆) if and only if the intersection (respectively, union) of any two of its members contains as a subset (respectively, is contained as a subset of) some third member. In symbols, a family I of sets is directed with respect to ⊇ (respectively, ⊆) if and only if

for all A, B ∈ I there exists some C ∈ I such that A ⊇ C and B ⊇ C (respectively, A ⊆ C and B ⊆ C),

or equivalently,

for all A, B ∈ I there exists some C ∈ I such that A ∩ B ⊇ C (respectively, A ∪ B ⊆ C).

Many important examples of directed sets can be defined using these partial orders. For example, by definition, a prefilter or filter base is a non-empty family of sets that is a directed set with respect to the partial order ⊇ and that also does not contain the empty set (this condition prevents triviality because otherwise the empty set would be a greatest element with respect to ⊇). Every π-system, which is a non-empty family of sets that is closed under the intersection of any two of its members, is a directed set with respect to ⊇. Every λ-system is a directed set with respect to ⊆. Every filter, topology, and σ-algebra is a directed set with respect to both ⊇ and ⊆.
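For finite families, the "directed with respect to ⊇" condition is easy to check mechanically. A small sketch (an illustration added here, with hypothetical helper names), representing sets as frozensets:

```python
# Directed with respect to ⊇: every two members must contain some third member
# as a common subset, i.e. there is some C in the family with C ⊆ A ∩ B.
def directed_by_superset(family):
    return all(
        any(c <= (a & b) for c in family)
        for a in family for b in family
    )

# A pi-system (closed under pairwise intersection) is directed with respect to ⊇:
pi_system = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({2, 3})]
assert directed_by_superset(pi_system)

# A family that is not: no member is a subset of {1, 2} ∩ {2, 3} = {2}.
assert not directed_by_superset([frozenset({1, 2}), frozenset({2, 3})])
```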
If x• = (x_i)_{i∈I} is any net from a directed set (I, ≤), then for any index i ∈ I, the set x_{≥i} := {x_j : j ≥ i with j ∈ I} is called the tail of (I, ≤) starting at i. The family Tails(x•) := {x_{≥i} : i ∈ I} of all tails is a directed set with respect to ⊇; in fact, it is even a prefilter.

If T is a topological space and x₀ is a point in T, the set of all neighbourhoods of x₀ can be turned into a directed set by writing U ≤ V if and only if U contains V. For every U, V, and W: U ≤ U, since U contains itself; if U ≤ V and V ≤ W, then U ⊇ V ⊇ W, and so U ≤ W; and since x₀ ∈ U ∩ V, with U ⊇ U ∩ V and V ⊇ U ∩ V, the neighbourhood U ∩ V is an upper bound of U and V.

The set Finite(I) of all finite subsets of a set I is directed with respect to ⊆, since given any two A, B ∈ Finite(I), their union A ∪ B ∈ Finite(I) is an upper bound of A and B in Finite(I). This particular directed set is used to define the sum ∑_{i∈I} r_i of a generalized series of an I-indexed collection of numbers (r_i)_{i∈I} (or more generally, the sum of elements in an abelian topological group, such as vectors in a topological vector space) as the limit of the net of partial sums F ∈ Finite(I) ↦ ∑_{i∈F} r_i; that is:

$$\sum_{i\in I} r_i := \lim_{F\in \operatorname{Finite}(I)} \sum_{i\in F} r_i.$$

A directed set is a more general concept than a (join) semilattice: every join-semilattice is a directed set, as the join or least upper bound of two elements is the desired c. The converse does not hold, however; witness the directed set {1000, 0001, 1101, 1011, 1111} ordered bitwise (e.g. 1000 ≤ 1011 holds, but 0001 ≤ 1000 does not, since in the last bit 1 > 0), where {1000, 0001} has three upper bounds but no least upper bound. (Also note that without 1111, the set is not directed.)

The order relation in a directed set is not required to be antisymmetric, and therefore directed sets are not always partial orders. However, the term directed set is also used frequently in the context of posets. In this setting, a subset A of a partially ordered set (P, ≤) is called a directed subset if it is a directed set according to the same partial order: in other words, it is not the empty set, and every pair of elements has an upper bound. Here the order relation on the elements of A is inherited from P; for this reason, reflexivity and transitivity need not be required explicitly.

A directed subset of a poset is not required to be downward closed; a subset of a poset is directed if and only if its downward closure is an ideal. While the definition of a directed set is for an "upward-directed" set (every pair of elements has an upper bound), it is also possible to define a downward-directed set in which every pair of elements has a common lower bound. A subset of a poset is downward-directed if and only if its upper closure is a filter.

Directed subsets are used in domain theory, which studies directed-complete partial orders. These are posets in which every upward-directed set is required to have a least upper bound. In this context, directed subsets again provide a generalization of convergent sequences.
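The five-element bitwise example above can likewise be verified mechanically. A short sketch (added for illustration; the encoding and names are ours), representing each element as a tuple of bits:

```python
# Bitwise order: a <= b iff every bit of a is <= the corresponding bit of b.
def leq(a, b):
    return all(x <= y for x, y in zip(a, b))

S = [(1,0,0,0), (0,0,0,1), (1,1,0,1), (1,0,1,1), (1,1,1,1)]

# S is directed: every pair has an upper bound within S.
assert all(any(leq(a, c) and leq(b, c) for c in S) for a in S for b in S)

# {1000, 0001} has three upper bounds in S but no least one,
# so S is not a join-semilattice.
ubs = [c for c in S if leq((1,0,0,0), c) and leq((0,0,0,1), c)]
assert len(ubs) == 3
assert not any(all(leq(u, v) for v in ubs) for u in ubs)

# Dropping 1111 destroys directedness: 1101 and 1011 then have no upper bound.
T = S[:-1]
assert not all(any(leq(a, c) and leq(b, c) for c in T) for a in T for b in T)
```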
[ { "paragraph_id": 0, "text": "In mathematics, a directed set (or a directed preorder or a filtered set) is a nonempty set A {\\displaystyle A} together with a reflexive and transitive binary relation ≤ {\\displaystyle \\,\\leq \\,} (that is, a preorder), with the additional property that every pair of elements has an upper bound. In other words, for any a {\\displaystyle a} and b {\\displaystyle b} in A {\\displaystyle A} there must exist c {\\displaystyle c} in A {\\displaystyle A} with a ≤ c {\\displaystyle a\\leq c} and b ≤ c . {\\displaystyle b\\leq c.} A directed set's preorder is called a direction.", "title": "" }, { "paragraph_id": 1, "text": "The notion defined above is sometimes called an upward directed set. A downward directed set is defined analogously, meaning that every pair of elements is bounded below. Some authors (and this article) assume that a directed set is directed upward, unless otherwise stated. Other authors call a set directed if and only if it is directed both upward and downward.", "title": "" }, { "paragraph_id": 2, "text": "Directed sets are a generalization of nonempty totally ordered sets. That is, all totally ordered sets are directed sets (contrast partially ordered sets, which need not be directed). Join-semilattices (which are partially ordered sets) are directed sets as well, but not conversely. Likewise, lattices are directed sets both upward and downward.", "title": "" }, { "paragraph_id": 3, "text": "In topology, directed sets are used to define nets, which generalize sequences and unite the various notions of limit used in analysis. Directed sets also give rise to direct limits in abstract algebra and (more generally) category theory.", "title": "" }, { "paragraph_id": 4, "text": "In addition to the definition above, there is an equivalent definition. A directed set is a set A {\\displaystyle A} with a preorder such that every finite subset of A {\\displaystyle A} has an upper bound. In this definition, the existence of an upper bound of the empty subset implies that A {\\displaystyle A} is nonempty.", "title": "Equivalent definition" }, { "paragraph_id": 5, "text": "The set of natural numbers N {\\displaystyle \\mathbb {N} } with the ordinary order ≤ {\\displaystyle \\,\\leq \\,} is one of the most important examples of a directed set (and so is every totally ordered set). By definition, a net is a function from a directed set and a sequence is a function from the natural numbers N . {\\displaystyle \\mathbb {N} .} Every sequence canonically becomes a net by endowing N {\\displaystyle \\mathbb {N} } with ≤ . {\\displaystyle \\,\\leq .\\,}", "title": "Examples" }, { "paragraph_id": 6, "text": "If x 0 {\\displaystyle x_{0}} is a real number then the set I := R ∖ { x 0 } {\\displaystyle I:=\\mathbb {R} \\backslash \\lbrace x_{0}\\rbrace } can be turned into a directed set by defining a ≤ I b {\\displaystyle a\\leq _{I}b} if | a − x 0 | ≥ | b − x 0 | {\\displaystyle \\left|a-x_{0}\\right|\\geq \\left|b-x_{0}\\right|} (so \"greater\" elements are closer to x 0 {\\displaystyle x_{0}} ). We then say that the reals have been directed towards x 0 . {\\displaystyle x_{0}.} This is an example of a directed set that is neither partially ordered nor totally ordered. This is because antisymmetry breaks down for every pair a {\\displaystyle a} and b {\\displaystyle b} equidistant from x 0 , {\\displaystyle x_{0},} where a {\\displaystyle a} and b {\\displaystyle b} are on opposite sides of x 0 . 
{\\displaystyle x_{0}.} Explicitly, this happens when { a , b } = { x 0 − r , x 0 + r } {\\displaystyle \\{a,b\\}=\\left\\{x_{0}-r,x_{0}+r\\right\\}} for some real r ≠ 0 , {\\displaystyle r\\neq 0,} in which case a ≤ I b {\\displaystyle a\\leq _{I}b} and b ≤ I a {\\displaystyle b\\leq _{I}a} even though a ≠ b . {\\displaystyle a\\neq b.} Had this preorder been defined on R {\\displaystyle \\mathbb {R} } instead of R ∖ { x 0 } {\\displaystyle \\mathbb {R} \\backslash \\lbrace x_{0}\\rbrace } then it would still form a directed set but it would now have a (unique) greatest element, specifically x 0 {\\displaystyle x_{0}} ; however, it still wouldn't be partially ordered. This example can be generalized to a metric space ( X , d ) {\\displaystyle (X,d)} by defining on X {\\displaystyle X} or X ∖ { x 0 } {\\displaystyle X\\setminus \\left\\{x_{0}\\right\\}} the preorder a ≤ b {\\displaystyle a\\leq b} if and only if d ( a , x 0 ) ≥ d ( b , x 0 ) . {\\displaystyle d\\left(a,x_{0}\\right)\\geq d\\left(b,x_{0}\\right).}", "title": "Examples" }, { "paragraph_id": 7, "text": "A (trivial) example of a partially ordered set that is not directed is the set { a , b } , {\\displaystyle \\{a,b\\},} in which the only order relations are a ≤ a {\\displaystyle a\\leq a} and b ≤ b . {\\displaystyle b\\leq b.} A less trivial example is like the previous example of the \"reals directed towards x 0 {\\displaystyle x_{0}} \" but in which the ordering rule only applies to pairs of elements on the same side of x 0 {\\displaystyle x_{0}} (that is, if one takes an element a {\\displaystyle a} to the left of x 0 , {\\displaystyle x_{0},} and b {\\displaystyle b} to its right, then a {\\displaystyle a} and b {\\displaystyle b} are not comparable, and the subset { a , b } {\\displaystyle \\{a,b\\}} has no upper bound).", "title": "Examples" }, { "paragraph_id": 8, "text": "An element m {\\displaystyle m} of a preordered set ( I , ≤ ) {\\displaystyle (I,\\leq )} is a maximal element if for every j ∈ I , {\\displaystyle j\\in I,} m ≤ j {\\displaystyle m\\leq j} implies j ≤ m . {\\displaystyle j\\leq m.} It is a greatest element if for every j ∈ I , {\\displaystyle j\\in I,} j ≤ m . {\\displaystyle j\\leq m.}", "title": "Examples" }, { "paragraph_id": 9, "text": "Any preordered set with a greatest element is a directed set with the same preorder. For instance, in a poset P , {\\displaystyle P,} every lower closure of an element; that is, every subset of the form { a ∈ P : a ≤ x } {\\displaystyle \\{a\\in P:a\\leq x\\}} where x {\\displaystyle x} is a fixed element from P , {\\displaystyle P,} is directed.", "title": "Examples" }, { "paragraph_id": 10, "text": "Every maximal element of a directed preordered set is a greatest element. Indeed, a directed preordered set is characterized by equality of the (possibly empty) sets of maximal and of greatest elements.", "title": "Examples" }, { "paragraph_id": 11, "text": "Let D 1 {\\displaystyle \\mathbb {D} _{1}} and D 2 {\\displaystyle \\mathbb {D} _{2}} be directed sets. Then the Cartesian product set D 1 × D 2 {\\displaystyle \\mathbb {D} _{1}\\times \\mathbb {D} _{2}} can be made into a directed set by defining ( n 1 , n 2 ) ≤ ( m 1 , m 2 ) {\\displaystyle \\left(n_{1},n_{2}\\right)\\leq \\left(m_{1},m_{2}\\right)} if and only if n 1 ≤ m 1 {\\displaystyle n_{1}\\leq m_{1}} and n 2 ≤ m 2 . {\\displaystyle n_{2}\\leq m_{2}.} In analogy to the product order this is the product direction on the Cartesian product. 
For example, the set N × N {\\displaystyle \\mathbb {N} \\times \\mathbb {N} } of pairs of natural numbers can be made into a directed set by defining ( n 0 , n 1 ) ≤ ( m 0 , m 1 ) {\\displaystyle \\left(n_{0},n_{1}\\right)\\leq \\left(m_{0},m_{1}\\right)} if and only if n 0 ≤ m 0 {\\displaystyle n_{0}\\leq m_{0}} and n 1 ≤ m 1 . {\\displaystyle n_{1}\\leq m_{1}.}", "title": "Examples" }, { "paragraph_id": 12, "text": "The subset inclusion relation ⊆ , {\\displaystyle \\,\\subseteq ,\\,} along with its dual ⊇ , {\\displaystyle \\,\\supseteq ,\\,} define partial orders on any given family of sets. A non-empty family of sets is a directed set with respect to the partial order ⊇ {\\displaystyle \\,\\supseteq \\,} (respectively, ⊆ {\\displaystyle \\,\\subseteq \\,} ) if and only if the intersection (respectively, union) of any two of its members contains as a subset (respectively, is contained as a subset of) some third member. In symbols, a family I {\\displaystyle I} of sets is directed with respect to ⊇ {\\displaystyle \\,\\supseteq \\,} (respectively, ⊆ {\\displaystyle \\,\\subseteq \\,} ) if and only if", "title": "Examples" }, { "paragraph_id": 13, "text": "or equivalently,", "title": "Examples" }, { "paragraph_id": 14, "text": "Many important examples of directed sets can be defined using these partial orders. For example, by definition, a prefilter or filter base is a non-empty family of sets that is a directed set with respect to the partial order ⊇ {\\displaystyle \\,\\supseteq \\,} and that also does not contain the empty set (this condition prevents triviality because otherwise, the empty set would then be a greatest element with respect to ⊇ {\\displaystyle \\,\\supseteq \\,} ). Every π-system, which is a non-empty family of sets that is closed under the intersection of any two of its members, is a directed set with respect to ⊇ . {\\displaystyle \\,\\supseteq \\,.} Every λ-system is a directed set with respect to ⊆ . {\\displaystyle \\,\\subseteq \\,.} Every filter, topology, and σ-algebra is a directed set with respect to both ⊇ {\\displaystyle \\,\\supseteq \\,} and ⊆ . {\\displaystyle \\,\\subseteq \\,.} If x ∙ = ( x i ) i ∈ I {\\displaystyle x_{\\bullet }=\\left(x_{i}\\right)_{i\\in I}} is any net from a directed set ( I , ≤ ) {\\displaystyle (I,\\leq )} then for any index i ∈ I , {\\displaystyle i\\in I,} the set x ≥ i := { x j : j ≥ i with j ∈ I } {\\displaystyle x_{\\geq i}:=\\left\\{x_{j}:j\\geq i{\\text{ with }}j\\in I\\right\\}} is called the tail of ( I , ≤ ) {\\displaystyle (I,\\leq )} starting at i . {\\displaystyle i.} The family Tails ( x ∙ ) := { x ≥ i : i ∈ I } {\\displaystyle \\operatorname {Tails} \\left(x_{\\bullet }\\right):=\\left\\{x_{\\geq i}:i\\in I\\right\\}} of all tails is a directed set with respect to ⊇ ; {\\displaystyle \\,\\supseteq ;\\,} in fact, it is even a prefilter.", "title": "Examples" }, { "paragraph_id": 15, "text": "If T {\\displaystyle T} is a topological space and x 0 {\\displaystyle x_{0}} is a point in T , {\\displaystyle T,} set of all neighbourhoods of x 0 {\\displaystyle x_{0}} can be turned into a directed set by writing U ≤ V {\\displaystyle U\\leq V} if and only if U {\\displaystyle U} contains V . 
{\\displaystyle V.} For every U , {\\displaystyle U,} V , {\\displaystyle V,} and W {\\displaystyle W} :", "title": "Examples" }, { "paragraph_id": 16, "text": "The set Finite ( I ) {\\displaystyle \\operatorname {Finite} (I)} of all finite subsets of a set I {\\displaystyle I} is directed with respect to ⊆ {\\displaystyle \\,\\subseteq \\,} since given any two A , B ∈ Finite ( I ) , {\\displaystyle A,B\\in \\operatorname {Finite} (I),} their union A ∪ B ∈ Finite ( I ) {\\displaystyle A\\cup B\\in \\operatorname {Finite} (I)} is an upper bound of A {\\displaystyle A} and B {\\displaystyle B} in Finite ( I ) . {\\displaystyle \\operatorname {Finite} (I).} This particular directed set is used to define the sum ∑ i ∈ I r i {\\displaystyle {\\textstyle \\sum \\limits _{i\\in I}}r_{i}} of a generalized series of an I {\\displaystyle I} -indexed collection of numbers ( r i ) i ∈ I {\\displaystyle \\left(r_{i}\\right)_{i\\in I}} (or more generally, the sum of elements in an abelian topological group, such as vectors in a topological vector space) as the limit of the net of partial sums F ∈ Finite ( I ) ↦ ∑ i ∈ F r i ; {\\displaystyle F\\in \\operatorname {Finite} (I)\\mapsto {\\textstyle \\sum \\limits _{i\\in F}}r_{i};} that is:", "title": "Examples" }, { "paragraph_id": 17, "text": "Directed set is a more general concept than (join) semilattice: every join semilattice is a directed set, as the join or least upper bound of two elements is the desired c . {\\displaystyle c.} The converse does not hold however, witness the directed set {1000,0001,1101,1011,1111} ordered bitwise (e.g. 1000 ≤ 1011 {\\displaystyle 1000\\leq 1011} holds, but 0001 ≤ 1000 {\\displaystyle 0001\\leq 1000} does not, since in the last bit 1 > 0), where {1000,0001} has three upper bounds but no least upper bound, cf. picture. (Also note that without 1111, the set is not directed.)", "title": "Contrast with semilattices" }, { "paragraph_id": 18, "text": "The order relation in a directed set is not required to be antisymmetric, and therefore directed sets are not always partial orders. However, the term directed set is also used frequently in the context of posets. In this setting, a subset A {\\displaystyle A} of a partially ordered set ( P , ≤ ) {\\displaystyle (P,\\leq )} is called a directed subset if it is a directed set according to the same partial order: in other words, it is not the empty set, and every pair of elements has an upper bound. Here the order relation on the elements of A {\\displaystyle A} is inherited from P {\\displaystyle P} ; for this reason, reflexivity and transitivity need not be required explicitly.", "title": "Directed subsets" }, { "paragraph_id": 19, "text": "A directed subset of a poset is not required to be downward closed; a subset of a poset is directed if and only if its downward closure is an ideal. While the definition of a directed set is for an \"upward-directed\" set (every pair of elements has an upper bound), it is also possible to define a downward-directed set in which every pair of elements has a common lower bound. A subset of a poset is downward-directed if and only if its upper closure is a filter.", "title": "Directed subsets" }, { "paragraph_id": 20, "text": "Directed subsets are used in domain theory, which studies directed-complete partial orders. These are posets in which every upward-directed set is required to have a least upper bound. In this context, directed subsets again provide a generalization of convergent sequences.", "title": "Directed subsets" } ]
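As a concrete illustration of the product direction described in paragraph 11, the following Python sketch (our own example; the helper names are hypothetical, not from the article) shows that the componentwise maximum always supplies the required upper bound in ℕ × ℕ.

# Product direction on pairs of naturals:
# (n_0, n_1) <= (m_0, m_1) iff n_0 <= m_0 and n_1 <= m_1.
def leq_pair(p, q):
    return p[0] <= q[0] and p[1] <= q[1]

def upper_bound(p, q):
    # the componentwise max is an upper bound of p and q in the product direction
    return (max(p[0], q[0]), max(p[1], q[1]))

p, q = (2, 7), (5, 3)        # incomparable under the product direction
c = upper_bound(p, q)        # (5, 7)
assert leq_pair(p, c) and leq_pair(q, c)
print(c)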
In mathematics, a directed set is a nonempty set A together with a reflexive and transitive binary relation ≤, with the additional property that every pair of elements has an upper bound. In other words, for any a and b in A there must exist c in A with a ≤ c and b ≤ c. A directed set's preorder is called a direction. The notion defined above is sometimes called an upward directed set. A downward directed set is defined analogously, meaning that every pair of elements is bounded below. Some authors assume that a directed set is directed upward, unless otherwise stated. Other authors call a set directed if and only if it is directed both upward and downward. Directed sets are a generalization of nonempty totally ordered sets. That is, all totally ordered sets are directed sets. Join-semilattices are directed sets as well, but not conversely. Likewise, lattices are directed sets both upward and downward. In topology, directed sets are used to define nets, which generalize sequences and unite the various notions of limit used in analysis. Directed sets also give rise to direct limits in abstract algebra and category theory.
2002-01-25T15:14:07Z
2023-09-24T07:13:07Z
[ "Template:Reflist", "Template:Order theory", "Template:Visible anchor", "Template:Pi", "Template:Annotated link", "Template:Explain", "Template:Cite book", "Template:ISBN", "Template:Short description", "Template:Em", "Template:Hairsp" ]
https://en.wikipedia.org/wiki/Directed_set
9,209
Edward Bellamy
Edward Bellamy (March 26, 1850 – May 22, 1898) was an American author, journalist, and political activist most famous for his utopian novel Looking Backward. Bellamy's vision of a harmonious future world inspired the formation of numerous "Nationalist Clubs" dedicated to the propagation of his political ideas. After working as a journalist and writing several unremarkable novels, Bellamy published Looking Backward in 1888. It was one of the most commercially successful books published in the United States in the 19th century, and it especially appealed to a generation of intellectuals alienated from the alleged dark side of the Gilded Age. In the early 1890s, Bellamy established a newspaper known as The New Nation and began to promote united action between the various Nationalist Clubs and the emerging Populist Party. He published Equality, a sequel to Looking Backward, in 1897, and died the following year. Edward Bellamy was born in Chicopee, Massachusetts. His father was Rufus King Bellamy (1816–1886), a Baptist minister and a descendant of Joseph Bellamy. His mother, Maria Louisa Putnam Bellamy, was a Calvinist. She was the daughter of a Baptist minister named Benjamin Putnam, who was forced to withdraw from the ministry in Salem, Massachusetts, following objections to his becoming a Freemason. Bellamy attended public school at Chicopee Falls before leaving for Union College of Schenectady, New York, where he studied for just two semesters. Upon leaving school, he made his way to Europe for a year, spending extensive time in Germany. He briefly studied law but abandoned that field without ever having practiced as a lawyer, instead entering the world of journalism. In this capacity Bellamy briefly served on the staff of the New York Post before returning to his native Massachusetts to take a position at the Springfield Union. At the age of 25, Bellamy developed tuberculosis, the disease that would ultimately kill him. He suffered with its effects throughout his adult life. In an effort to regain his health, Bellamy spent a year in the Hawaiian Islands (1877 to 1878). Returning to the United States, he decided to abandon the daily grind of journalism in favor of literary work, which put fewer demands upon his time and his health. Bellamy married Emma Augusta Sanderson in 1882. The couple had two children. Bellamy's early novels, including Six to One (1878), Dr. Heidenhoff's Process (1880), and Miss Ludington's Sister (1885), were unremarkable works, making use of standard psychological plots. A turn to utopian science fiction with Looking Backward, 2000–1887, published in January 1888, captured the public imagination and catapulted Bellamy to literary fame. Its publisher could scarcely keep up with demand. Within a year it had sold some 200,000 copies, and by the end of the 19th century had sold more copies than any other book published in America up to that time except for Uncle Tom's Cabin by Harriet Beecher Stowe and Ben-Hur: A Tale of the Christ by Lew Wallace. The book gained an extensive readership in the United Kingdom as well, more than 235,000 copies being sold there between 1890 and 1935. In Looking Backward, a non-violent revolution had transformed the American economy and thereby society; private property had been abolished in favor of state ownership of capital and the elimination of social classes and the ills of society that he thought inevitably followed from them. 
In the new world of the year 2000, there was no longer war, poverty, crime, prostitution, corruption, money, or taxes. Nor did there exist occupations seen by Bellamy as of dubious worth to society, such as politicians, lawyers, merchants, or soldiers. Instead, Bellamy's utopian society of the future was based upon the voluntary employment of all citizens between the ages of 21 and 45, after which time all would retire. Work was simple, aided by machine production, working hours short and vacation time long. The new economic basis of society effectively remade human nature itself in Bellamy's idyllic vision, with greed, maliciousness, untruthfulness, and insanity all relegated to the past. Bellamy's book inspired legions of readers to establish so-called Nationalist Clubs, beginning in Boston late in 1888. His vision of a country relieved of its social ills through abandonment of the principle of competition and establishment of state ownership of industry proved an appealing panacea to a generation of intellectuals alienated from the dark side of Gilded Age America. By 1891 it was reported that no fewer than 162 Nationalist Clubs were in existence. Bellamy's use of the term "Nationalism" rather than "socialism" as a descriptor of his governmental vision was calculated, as he did not want to limit either sales of his novel or the potential influence of its political ideas. In an 1888 letter to literary critic William Dean Howells, Bellamy wrote: Every sensible man will admit there is a big deal in a name, especially in making first impressions. In the radicalness of the opinions I have expressed, I may seem to out-socialize the socialists, yet the word socialist is one I never could well stomach. In the first place it is a foreign word in itself, and equally foreign in all its suggestions. It smells to the average American of petroleum, suggests the red flag, and with all manner of sexual novelties, and an abusive tone about God and religion, which in this country we at least treat with respect. [...] [W]hatever German and French reformers may choose to call themselves, socialist is not a good name for a party to succeed with in America. No such party can or ought to succeed that is not wholly and enthusiastically American and patriotic in spirit and suggestions. Bellamy himself came to actively participate in the political movement which emerged around his book, particularly after 1891 when he founded his own magazine, The New Nation, and began to promote united action between the various Nationalist Clubs and the emerging People's Party. For the next three and a half years, Bellamy gave his all to politics, publishing his magazine, working to influence the platform of the People's Party, and publicizing the Nationalist movement in the popular press. This phase of his life came to an end in 1894, when The New Nation was forced to suspend publication owing to financial difficulties. With the key activists of the Nationalist Clubs largely absorbed into the apparatus of the People's Party (although a Nationalist Party did run three candidates for office in Wisconsin as late as 1896), Bellamy abandoned politics for a return to literature. He set to work on a sequel to Looking Backward titled Equality, attempting to deal with the ideal society of the post-revolutionary future in greater detail. In this final work, he addressed the question of feminism, dealing with the taboo subject of female reproductive rights in a future, post-revolutionary America.
Other subjects overlooked in Looking Backward, such as animal rights and wilderness preservation, were dealt with in a similar context. The book saw print in 1897 and would prove to be Bellamy's final creation. Several short stories of Bellamy's were published in 1898, and The Duke of Stockbridge; a Romance of Shays' Rebellion was published in 1900. Edward Bellamy died of tuberculosis in Chicopee Falls, Massachusetts. He was 48 years old. His lifelong home in Chicopee Falls, built by his father, was designated a National Historic Landmark in 1971. Bellamy was the cousin of Francis Bellamy, famous for writing the original version of the Pledge of Allegiance. Bellamy Road, a residential road in Toronto, is named for the author.
[ { "paragraph_id": 0, "text": "Edward Bellamy (March 26, 1850 – May 22, 1898) was an American author, journalist, and political activist most famous for his utopian novel Looking Backward. Bellamy's vision of a harmonious future world inspired the formation of numerous \"Nationalist Clubs\" dedicated to the propagation of his political ideas.", "title": "" }, { "paragraph_id": 1, "text": "After working as a journalist and writing several unremarkable novels, Bellamy published Looking Backward in 1888. It was one of the most commercially successful books published in the United States in the 19th century, and it especially appealed to a generation of intellectuals alienated from the alleged dark side of the Gilded Age. In the early 1890s, Bellamy established a newspaper known as The New Nation and began to promote united action between the various Nationalist Clubs and the emerging Populist Party. He published Equality, a sequel to Looking Backward, in 1897, and died the following year.", "title": "" }, { "paragraph_id": 2, "text": "Edward Bellamy was born in Chicopee, Massachusetts. His father was Rufus King Bellamy (1816–1886), a Baptist minister and a descendant of Joseph Bellamy. His mother, Maria Louisa Putnam Bellamy, was a Calvinist. She was the daughter of a Baptist minister named Benjamin Putnam, who was forced to withdraw from the ministry in Salem, Massachusetts, following objections to his becoming a Freemason.", "title": "Biography" }, { "paragraph_id": 3, "text": "Bellamy attended public school at Chicopee Falls before leaving for Union College of Schenectady, New York, where he studied for just two semesters. Upon leaving school, he made his way to Europe for a year, spending extensive time in Germany. He briefly studied law but abandoned that field without ever having practiced as a lawyer, instead entering the world of journalism. In this capacity Bellamy briefly served on the staff of the New York Post before returning to his native Massachusetts to take a position at the Springfield Union.", "title": "Biography" }, { "paragraph_id": 4, "text": "At the age of 25, Bellamy developed tuberculosis, the disease that would ultimately kill him. He suffered with its effects throughout his adult life. In an effort to regain his health, Bellamy spent a year in the Hawaiian Islands (1877 to 1878). Returning to the United States, he decided to abandon the daily grind of journalism in favor of literary work, which put fewer demands upon his time and his health.", "title": "Biography" }, { "paragraph_id": 5, "text": "Bellamy married Emma Augusta Sanderson in 1882. The couple had two children.", "title": "Biography" }, { "paragraph_id": 6, "text": "Bellamy's early novels, including Six to One (1878), Dr. Heidenhoff's Process (1880), and Miss Ludington's Sister (1885), were unremarkable works, making use of standard psychological plots.", "title": "Biography" }, { "paragraph_id": 7, "text": "A turn to utopian science fiction with Looking Backward, 2000–1887, published in January 1888, captured the public imagination and catapulted Bellamy to literary fame. Its publisher could scarcely keep up with demand. Within a year it had sold some 200,000 copies, and by the end of the 19th century had sold more copies than any other book published in America up to that time except for Uncle Tom's Cabin by Harriet Beecher Stowe and Ben-Hur: A Tale of the Christ by Lew Wallace. 
The book gained an extensive readership in the United Kingdom as well, more than 235,000 copies being sold there between 1890 and 1935.", "title": "Biography" }, { "paragraph_id": 8, "text": "In Looking Backward, a non-violent revolution had transformed the American economy and thereby society; private property had been abolished in favor of state ownership of capital and the elimination of social classes and the ills of society that he thought inevitably followed from them. In the new world of the year 2000, there was no longer war, poverty, crime, prostitution, corruption, money, or taxes. Nor did there exist occupations seen by Bellamy as of dubious worth to society, such as politicians, lawyers, merchants, or soldiers. Instead, Bellamy's utopian society of the future was based upon the voluntary employment of all citizens between the ages of 21 and 45, after which time all would retire. Work was simple, aided by machine production, working hours short and vacation time long. The new economic basis of society effectively remade human nature itself in Bellamy's idyllic vision, with greed, maliciousness, untruthfulness, and insanity all relegated to the past.", "title": "Biography" }, { "paragraph_id": 9, "text": "Bellamy's book inspired legions of readers to establish so-called Nationalist Clubs, beginning in Boston late in 1888. His vision of a country relieved of its social ills through abandonment of the principle of competition and establishment of state ownership of industry proved an appealing panacea to a generation of intellectuals alienated from the dark side of Gilded Age America. By 1891 it was reported that no fewer than 162 Nationalist Clubs were in existence.", "title": "Bellamyite movement" }, { "paragraph_id": 10, "text": "Bellamy's use of the term \"Nationalism\" rather than \"socialism\" as a descriptor of his governmental vision was calculated, as he did not want to limit either sales of his novel or the potential influence of its political ideas. In an 1888 letter to literary critic William Dean Howells, Bellamy wrote:", "title": "Bellamyite movement" }, { "paragraph_id": 11, "text": "Every sensible man will admit there is a big deal in a name, especially in making first impressions. In the radicalness of the opinions I have expressed, I may seem to out-socialize the socialists, yet the word socialist is one I never could well stomach. In the first place it is a foreign word in itself, and equally foreign in all its suggestions. It smells to the average American of petroleum, suggests the red flag, and with all manner of sexual novelties, and an abusive tone about God and religion, which in this country we at least treat with respect. [...] [W]hatever German and French reformers may choose to call themselves, socialist is not a good name for a party to succeed with in America. No such party can or ought to succeed that is not wholly and enthusiastically American and patriotic in spirit and suggestions.", "title": "Bellamyite movement" }, { "paragraph_id": 12, "text": "Bellamy himself came to actively participate in the political movement which emerged around his book, particularly after 1891 when he founded his own magazine, The New Nation, and began to promote united action between the various Nationalist Clubs and the emerging People's Party. 
For the next three and a half years, Bellamy gave his all to politics, publishing his magazine, working to influence the platform of the People's Party, and publicizing the Nationalist movement in the popular press. This phase of his life came to an end in 1894, when The New Nation was forced to suspend publication owing to financial difficulties.", "title": "Bellamyite movement" }, { "paragraph_id": 13, "text": "With the key activists of the Nationalist Clubs largely absorbed into the apparatus of the People's Party (although a Nationalist Party did run three candidates for office in Wisconsin as late as 1896), Bellamy abandoned politics for a return to literature. He set to work on a sequel to Looking Backward titled Equality, attempting to deal with the ideal society of the post-revolutionary future in greater detail. In this final work, he addressed the question of feminism, dealing with the taboo subject of female reproductive rights in a future, post-revolutionary America. Other subjects overlooked in Looking Backward, such as animal rights and wilderness preservation, were dealt with in a similar context. The book saw print in 1897 and would prove to be Bellamy's final creation.", "title": "Bellamyite movement" }, { "paragraph_id": 14, "text": "Several short stories of Bellamy's were published in 1898, and The Duke of Stockbridge; a Romance of Shays' Rebellion was published in 1900.", "title": "Bellamyite movement" }, { "paragraph_id": 15, "text": "Edward Bellamy died of tuberculosis in Chicopee Falls, Massachusetts. He was 48 years old.", "title": "Bellamyite movement" }, { "paragraph_id": 16, "text": "His lifelong home in Chicopee Falls, built by his father, was designated a National Historic Landmark in 1971.", "title": "Bellamyite movement" }, { "paragraph_id": 17, "text": "Bellamy was the cousin of Francis Bellamy, famous for writing the original version of the Pledge of Allegiance.", "title": "Bellamyite movement" }, { "paragraph_id": 18, "text": "Bellamy Road, a residential road in Toronto, is named for the author.", "title": "Bellamyite movement" } ]
Edward Bellamy was an American author, journalist, and political activist most famous for his utopian novel Looking Backward. Bellamy's vision of a harmonious future world inspired the formation of numerous "Nationalist Clubs" dedicated to the propagation of his political ideas. After working as a journalist and writing several unremarkable novels, Bellamy published Looking Backward in 1888. It was one of the most commercially successful books published in the United States in the 19th century, and it especially appealed to a generation of intellectuals alienated from the alleged dark side of the Gilded Age. In the early 1890s, Bellamy established a newspaper known as The New Nation and began to promote united action between the various Nationalist Clubs and the emerging Populist Party. He published Equality, a sequel to Looking Backward, in 1897, and died the following year.
2002-02-25T15:51:15Z
2023-10-20T17:22:49Z
[ "Template:FadedPage", "Template:Internet Archive author", "Template:Librivox author", "Template:Short description", "Template:Use mdy dates", "Template:Gutenberg author", "Template:Authority control", "Template:Infobox writer", "Template:Refbegin", "Template:Sister project links", "Template:Reflist", "Template:Cite news", "Template:Webarchive", "Template:Refend", "Template:Edward Bellamy", "Template:For", "Template:Fact", "Template:Blockquote", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Edward_Bellamy
9,222
E
E, or e, is the fifth letter and the second vowel letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is e (pronounced /ˈiː/); plural es, Es or E's. It is the most commonly used letter in many languages, including Czech, Danish, Dutch, English, French, German, Hungarian, Latin, Latvian, Norwegian, Spanish, and Swedish. The Latin letter 'E' differs little from its source, the Greek letter epsilon, 'Ε'. This in turn comes from the Semitic letter hê, which has been suggested to have started as a praying or calling human figure (hillul 'jubilation'), and was most likely based on a similar Egyptian hieroglyph that indicated a different pronunciation. In Semitic, the letter represented /h/ (and /e/ in foreign words); in Greek, hê became the letter epsilon, used to represent /e/. The various forms of the Old Italic script and the Latin alphabet followed this usage. Although Middle English spelling used ⟨e⟩ to represent long and short /e/, the Great Vowel Shift changed long /eː/ (as in 'me' or 'bee') to /iː/ while short /ɛ/ (as in 'met' or 'bed') remained a mid vowel. In other cases, the letter is silent, generally at the end of words like queue. In the orthography of many languages it represents either [e], [e̞], [ɛ], or some variation (such as a nasalized version) of these sounds, often with diacritics (as: ⟨e ê é è ë ē ĕ ě ẽ ė ẹ ę ẻ⟩) to indicate contrasts. Less commonly, as in French, German, or Saanich, ⟨e⟩ represents a mid-central vowel /ə/. Digraphs with ⟨e⟩ are common to indicate either diphthongs or monophthongs, such as ⟨ea⟩ or ⟨ee⟩ for /iː/ or /eɪ/ in English, ⟨ei⟩ for /aɪ/ in German, and ⟨eu⟩ for /ø/ in French or /ɔɪ/ in German. The International Phonetic Alphabet uses ⟨e⟩ for the close-mid front unrounded vowel or the mid front unrounded vowel. 'E' is the most common (or highest-frequency) letter in the English language alphabet and several other European languages, which has implications in both cryptography and data compression. In the story "The Gold-Bug" by Edgar Allan Poe, a character figures out a random character code by remembering that the most used letter in English is E. This also makes E a challenging and popular letter to omit when writing lipograms. Ernest Vincent Wright's Gadsby (1939) is considered a "dreadful" novel, and supposedly "at least part of Wright's narrative issues were caused by language limitations imposed by the lack of E." Both Georges Perec's novel A Void (La Disparition) (1969) and its English translation by Gilbert Adair omit 'e' and are considered better works. In British Sign Language (BSL), the letter 'e' is signed by extending the index finger of the right hand touching the tip of the index finger of the left hand, with all fingers of the left hand open.
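The frequency claim above is easy to test empirically. The Python sketch below is illustrative only (the sample string is arbitrary and our own): it tallies letter counts the way a frequency analysis of a simple substitution cipher, as in "The Gold-Bug", would begin. Over a large, representative English corpus 'e' comes out on top, though tiny samples can deviate.

# Tally letter frequencies in a text sample. Small samples can deviate,
# but over large English corpora 'e' dominates, which is what makes
# frequency analysis of simple substitution ciphers work.
from collections import Counter

sample = (
    "Here is a sentence meant merely to demonstrate letter frequencies; "
    "the letter e appears repeatedly, as it tends to everywhere in English."
)
letters = [ch for ch in sample.lower() if ch.isalpha()]
for letter, count in Counter(letters).most_common(5):
    print(letter, count)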
[ { "paragraph_id": 0, "text": "E, or e, is the fifth letter and the second vowel letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is e (pronounced /ˈiː/); plural es, Es or E's. It is the most commonly used letter in many languages, including Czech, Danish, Dutch, English, French, German, Hungarian, Latin, Latvian, Norwegian, Spanish, and Swedish.", "title": "" }, { "paragraph_id": 1, "text": "The Latin letter 'E' differs little from its source, the Greek letter epsilon, 'Ε'. This in turn comes from the Semitic letter hê, which has been suggested to have started as a praying or calling human figure (hillul 'jubilation'), and was most likely based on a similar Egyptian hieroglyph that indicated a different pronunciation. In Semitic, the letter represented /h/ (and /e/ in foreign words); in Greek, hê became the letter epsilon, used to represent /e/. The various forms of the Old Italic script and the Latin alphabet followed this usage.", "title": "History" }, { "paragraph_id": 2, "text": "Although Middle English spelling used ⟨e⟩ to represent long and short /e/, the Great Vowel Shift changed long /eː/ (as in 'me' or 'bee') to /iː/ while short /ɛ/ (as in 'met' or 'bed') remained a mid vowel. In other cases, the letter is silent, generally at the end of words like queue.", "title": "Use in writing systems" }, { "paragraph_id": 3, "text": "In the orthography of many languages it represents either [e], [e̞], [ɛ], or some variation (such as a nasalized version) of these sounds, often with diacritics (as: ⟨e ê é è ë ē ĕ ě ẽ ė ẹ ę ẻ⟩) to indicate contrasts. Less commonly, as in French, German, or Saanich, ⟨e⟩ represents a mid-central vowel /ə/. Digraphs with ⟨e⟩ are common to indicate either diphthongs or monophthongs, such as ⟨ea⟩ or ⟨ee⟩ for /iː/ or /eɪ/ in English, ⟨ei⟩ for /aɪ/ in German, and ⟨eu⟩ for /ø/ in French or /ɔɪ/ in German.", "title": "Use in writing systems" }, { "paragraph_id": 4, "text": "The International Phonetic Alphabet uses ⟨e⟩ for the close-mid front unrounded vowel or the mid front unrounded vowel.", "title": "Use in writing systems" }, { "paragraph_id": 5, "text": "'E' is the most common (or highest-frequency) letter in the English language alphabet and several other European languages, which has implications in both cryptography and data compression. In the story \"The Gold-Bug\" by Edgar Allan Poe, a character figures out a random character code by remembering that the most used letter in English is E. This makes it a hard and popular letter to use when writing lipograms. Ernest Vincent Wright's Gadsby (1939) is considered a \"dreadful\" novel, and supposedly \"at least part of Wright's narrative issues were caused by language limitations imposed by the lack of E.\" Both Georges Perec's novel A Void (La Disparition) (1969) and its English translation by Gilbert Adair omit 'e' and are considered better works.", "title": "Most common letter" }, { "paragraph_id": 6, "text": "In British Sign Language (BSL), the letter 'e' is signed by extending the index finger of the right hand touching the tip of index on the left hand, with all fingers of left hand open.", "title": "Other representations" } ]
E, or e, is the fifth letter and the second vowel letter in the Latin alphabet, used in the modern English alphabet, the alphabets of other western European languages and others worldwide. Its name in English is e; plural es, Es or E's. It is the most commonly used letter in many languages, including Czech, Danish, Dutch, English, French, German, Hungarian, Latin, Latvian, Norwegian, Spanish, and Swedish.
2001-05-20T21:32:35Z
2023-12-08T21:46:17Z
[ "Template:Pp-move-indef", "Template:IPA", "Template:Letter other reps", "Template:Commons-inline", "Template:Pp-semi-indef", "Template:Latin letter info", "Template:Script", "Template:Cite dictionary", "Template:Short description", "Template:IPAslink", "Template:IPAblink", "Template:Anli", "Template:Wiktionary-inline", "Template:Angbr", "Template:Infobox grapheme", "Template:Charmap", "Template:About", "Template:Angbr IPA", "Template:Midsize", "Template:Cite web", "Template:IPAc-en", "Template:Clear", "Template:Unichar", "Template:Cite journal", "Template:Val", "Template:Reflist", "Template:Latin script", "Template:Technical reasons" ]
https://en.wikipedia.org/wiki/E
9,223
Economics
Economics (/ˌɛkəˈnɒmɪks, ˌiːkə-/) is a social science that studies the production, distribution, and consumption of goods and services. Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyzes what are viewed as the basic elements of the economy, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyzes the economy as a system where production, consumption, saving, and investment interact, and factors affecting it: employment of the resources of labour, capital, and land, currency inflation, economic growth, and public policies that have an impact on these elements. Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics. Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science and the environment. The earlier term for the discipline was 'political economy', but since the late 19th century, it has commonly been called 'economics'. The term is ultimately derived from Ancient Greek οἰκονομία (oikonomia) which is a term for the "way (nomos) to run a household (oikos)", or in other words the know-how of an οἰκονομικός (oikonomikos), or "household or homestead manager". Derived terms such as "economical" can therefore often mean "frugal" or "thrifty". By extension then, "political economy" was the way to manage a polis or state. There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as "an inquiry into the nature and causes of the wealth of nations", in particular as: a branch of the science of a statesman or legislator [with the twofold objectives of providing] a plentiful revenue or subsistence for the people ... [and] to supply the state or commonwealth with a revenue for the publick services. Jean-Baptiste Say (1803), distinguishing the subject matter from its public-policy uses, defined it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle (1849) coined "the dismal science" as an epithet for classical economics; in this context, it is commonly linked to the pessimistic analysis of Malthus (1798). John Stuart Mill (1844) delimited the subject matter further: The science which traces the laws of such of the phenomena of society as arise from the combined operations of mankind for the production of wealth, in so far as those phenomena are not modified by the pursuit of any other object. Alfred Marshall provided a still widely cited definition in his textbook Principles of Economics (1890) that extended analysis beyond wealth and from the societal to the microeconomic level: Economics is a study of man in the ordinary business of life. It enquires how he gets his income and how he uses it.
Thus, it is on the one side, the study of wealth and on the other and more important side, a part of the study of man. Lionel Robbins (1932) developed implications of what has been termed "[p]erhaps the most commonly accepted current definition of the subject": Economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses. Robbins described the definition as not classificatory in "pick[ing] out certain kinds of behaviour" but rather analytical in "focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity." He affirmed that previous economists had usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has winning as its goal (a sought-after end), generates both costs and benefits, and uses resources (human life and other costs) to attain that goal. If the war is not winnable or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. Economics cannot be defined as the science that studies wealth, war, crime, education, and every other field to which economic analysis can be applied; rather, it is the science that studies a particular common aspect of each of those subjects (they all use scarce resources to attain a sought-after end). Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields. There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment. Gary Becker, a contributor to the expansion of economics into new areas, described the approach he favoured as "combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly." One commentary characterizes the remark as making economics an approach rather than a subject matter but with great specificity as to the "choice process and the type of social interaction that [such] analysis involves." The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject-matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve. Many economists, including Nobel Prize winners James M. Buchanan and Ronald Coase, reject the method-based definition of Robbins and continue to prefer definitions like those of Say, in terms of its subject matter. Ha-Joon Chang has for example argued that the definition of Robbins would make economics very peculiar because all other sciences define themselves in terms of the area of inquiry or object of inquiry rather than the methodology. In the biology department, they do not say that all biology should be studied with DNA analysis.
People study living organisms in many different ways, so some people will do DNA analysis, others might do anatomy, and still others might build game theoretic models of animal behavior. But they are all called biology because they all study living organisms. According to Ha-Joon Chang, this view that the economy can and should be studied in only one way (for example, by studying only rational choices), and the further step of effectively redefining economics as a theory of everything, is very peculiar. Questions regarding distribution of resources are found throughout the writings of the Boeotian poet Hesiod, and several economic historians have described Hesiod himself as the "first economist". However, the word Oikos, the Greek word from which the word economy derives, was used for issues regarding how to manage a household (which was understood to be the landowner, his family, and his slaves) rather than to refer to some normative societal system of distribution of resources, which is a much more recent phenomenon. Xenophon, the author of the Oeconomicus, is credited by philologists as the source of the word economy. Other notable writers from Antiquity through to the Renaissance who wrote on the subject include Aristotle, Chanakya (also known as Kautilya), Qin Shi Huang, Ibn Khaldun, and Thomas Aquinas. Joseph Schumpeter described 16th and 17th century scholastic writers, including Tomás de Mercado, Luis de Molina, and Juan de Lugo, as "coming nearer than any other group to being the 'founders' of scientific economics" as to monetary, interest, and value theory within a natural-law perspective. Two groups, who later were called "mercantilists" and "physiocrats", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing cheap raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies. Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth. Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy. Adam Smith (1723–1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system "with all its imperfections" as "perhaps the purest approximation to the truth that has yet been published" on the subject.
The publication of Adam Smith's The Wealth of Nations in 1776 has been described as "the effective birth of economics as a separate discipline." The book identified land, labour, and capital as the three factors of production and the major contributors to a nation's wealth, as distinct from the physiocratic idea that only agriculture was productive. Smith discusses potential benefits of specialization by division of labour, including increased labour productivity and gains from trade, whether between town and country or across countries. His "theorem" that "the division of labor is limited by the extent of the market" has been described as the "core of a theory of the functions of firm and industry" and a "fundamental principle of economic organization." To Smith has also been ascribed "the most important substantive proposition in all of economics" and the foundation of resource-allocation theory: that, under competition, resource owners (of labour, land, and capital) seek their most profitable uses, resulting in an equal rate of return for all uses in equilibrium (adjusted for apparent differences arising from such factors as training and unemployment). In an argument that includes "one of the most famous passages in all economics," Smith represents every individual as trying to employ any capital they might command for their own advantage, not that of the society, and for the sake of profit, which is necessary at some level for employing capital in domestic industry, and positively related to the value of produce. In this: He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it. The Rev. Thomas Robert Malthus (1798) used the concept of diminishing returns to explain low living standards. Human population, he argued, tended to increase geometrically, outstripping the production of food, which increased arithmetically. The force of a rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Lincoln Simon has criticized Malthus's conclusions. While Adam Smith emphasized production and income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was also the first to state and prove the principle of comparative advantage, according to which each country should specialize in producing and exporting goods in which it has a lower relative cost of production, rather than relying only on its own production. It has been termed a "fundamental analytical explanation" for gains from trade.
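Ricardo's principle is easiest to see with numbers. The Python sketch below uses the labour costs from Ricardo's own England and Portugal illustration (the figures are his textbook example, not data from the passage above): Portugal is absolutely more productive in both goods, yet opportunity costs, not absolute productivity, determine who should export what.

# Two-country, two-good illustration with hypothetical labour costs
# (hours needed to produce one unit of each good).
hours = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

def opportunity_cost(country, good, other):
    # units of `other` forgone per unit of `good` produced
    return hours[country][good] / hours[country][other]

pt = opportunity_cost("Portugal", "wine", "cloth")  # ~0.89 cloth per wine
en = opportunity_cost("England", "wine", "cloth")   # 1.20 cloth per wine
print(f"Portugal: 1 wine costs {pt:.2f} cloth")
print(f"England:  1 wine costs {en:.2f} cloth")

# Portugal's opportunity cost of wine is lower, so it exports wine and
# England exports cloth. At any exchange ratio strictly between 0.89 and
# 1.20 cloth per wine, each country obtains the imported good for less
# forgone production than making it at home, so both countries gain.
assert pt < en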
Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene. Value theory was important in classical theory. Smith wrote that the "real price of every thing ... is the toil and trouble of acquiring it". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'. Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size. Marxist (later, Marxian) economics descends from classical economics and derives from the work of Karl Marx. The first volume of Marx's major work, Das Kapital, was published in 1867. Marx focused on the labour theory of value and theory of surplus value which, he believed, explained the exploitation of labour by capital. The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production, and the theory of surplus value demonstrated how workers were only paid a proportion of the value their work had created. Marxian economics was further developed by Karl Kautsky (1854–1938)'s The Economic Doctrines of Karl Marx and The Class Struggle (Erfurt Program), Rudolf Hilferding's (1877–1941) Finance Capital, Vladimir Lenin (1870–1924)'s The Development of Capitalism in Russia and Imperialism, the Highest Stage of Capitalism, and Rosa Luxemburg (1871–1919)'s The Accumulation of Capital. At its inception as a social science, economics was defined and discussed at length as the study of production, distribution, and consumption of wealth by Jean-Baptiste Say in his Treatise on Political Economy or, The Production, Distribution, and Consumption of Wealth (1803). These three items were considered only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. Say's definition has survived in part up to the present, modified by substituting "goods and services" for the word "wealth", meaning that wealth may include non-material objects as well. One hundred and thirty years later, Lionel Robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads in other areas of human activity. In his Essay on the Nature and Significance of Economic Science, he proposed a definition of economics as a study of human behaviour, subject to and constrained by scarcity, which forces people to choose, allocate scarce resources to competing ends, and economize (seeking the greatest welfare while avoiding the wasting of scarce resources). According to Robbins: "Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses". Robbins' definition eventually became widely accepted by mainstream economists, and found its way into current textbooks.
Although far from unanimous, most mainstream economists would accept some version of Robbins' definition, even though many have raised serious objections to the scope and method of economics, emanating from that definition. A body of theory later termed "neoclassical economics" formed from about 1870 to 1910. The term "economics" was popularized by such neoclassical economists as Alfred Marshall and Mary Paley Marshall as a concise synonym for "economic science" and a substitute for the earlier "political economy". This corresponded to the influence on the subject of mathematical methods used in the natural sciences. Neoclassical economics systematically integrated supply and demand as joint determinants of both price and quantity in market equilibrium, influencing the allocation of output and income distribution. It rejected classical economics' labour theory of value in favor of a marginal utility theory of value on the demand side and a more comprehensive theory of costs on the supply side. In the 20th century, neoclassical theorists departed from an earlier idea that suggested measuring total utility for a society, opting instead for ordinal utility, which posits behavior-based relations across individuals. In microeconomics, neoclassical economics represents incentives and costs as playing a pervasive role in shaping decision making. An immediate example of this is the consumer theory of individual demand, which isolates how prices (as costs) and income affect quantity demanded. In macroeconomics it is reflected in an early and lasting neoclassical synthesis with Keynesian macroeconomics. Neoclassical economics is occasionally referred to as orthodox economics, whether by its critics or sympathizers. Modern mainstream economics builds on neoclassical economics but with many refinements that either supplement or generalize earlier analysis, such as econometrics, game theory, analysis of market failure and imperfect competition, and the neoclassical model of economic growth for analysing long-run variables affecting national income. Neoclassical economics studies the behaviour of individuals, households, and organizations (called economic actors, players, or agents), when they manage or use scarce resources, which have alternative uses, to achieve desired ends. Agents are assumed to act rationally, have multiple desirable ends in sight, limited resources to obtain these ends, a set of stable preferences, a definite overall guiding objective, and the capability of making a choice. There exists an economic problem, subject to study by economic science, when a decision (choice) is made by one or more players to attain the best possible outcome. Keynesian economics derives from John Maynard Keynes, in particular his book The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book focused on determinants of national income in the short run when prices are relatively inflexible. Keynes attempted to explain in broad theoretical detail why high labour-market unemployment might not be self-correcting due to low "effective demand" and why even price flexibility and monetary policy might be unavailing. The term "revolutionary" has been applied to the book in its impact on economic analysis. During the following decades, many economists followed Keynes' ideas and expanded on his works.
John Hicks and Alvin Hansen developed the IS–LM model, a simple formalisation of some of Keynes' insights on the economy's short-run equilibrium. Franco Modigliani and James Tobin developed important theories of private consumption and investment, respectively, two major components of aggregate demand. Lawrence Klein built the first large-scale macroeconometric model, applying Keynesian thinking systematically to the US economy.

Immediately after World War II, Keynesianism was the dominant economic view of the United States establishment and its allies, while Marxian economics was the dominant economic view of the Soviet Union's nomenklatura and its allies.

Monetarism appeared in the 1950s and 1960s, its intellectual leader being Milton Friedman. Monetarists contended that monetary policy and other monetary shocks, as represented by the growth in the money stock, were an important cause of economic fluctuations, and consequently that monetary policy was more important than fiscal policy for purposes of stabilization. Friedman was also skeptical about the ability of central banks to conduct a sensible active monetary policy in practice, advocating instead using simple rules such as a steady rate of money growth. Monetarism rose to prominence in the 1970s and 1980s, when several major central banks followed a monetarist-inspired policy, but was later abandoned because the results turned out to be unsatisfactory.

A more fundamental challenge to the prevailing Keynesian paradigm came in the 1970s from new classical economists like Robert Lucas, Thomas Sargent and Edward Prescott. They introduced the notion of rational expectations in economics, which had profound implications for many economic discussions, among which were the so-called Lucas critique and the presentation of real business cycle models.

During the 1980s a group of researchers known as New Keynesian economists appeared, including among others George Akerlof, Janet Yellen, Gregory Mankiw and Olivier Blanchard. They adopted the principle of rational expectations and other monetarist or new classical ideas, such as building upon models employing micro foundations and optimizing behaviour, but simultaneously emphasized the importance of various market failures for the functioning of the economy, as had Keynes. Not least, they proposed various reasons that potentially explained the empirically observed features of price and wage rigidity, usually made to be endogenous features of the models rather than simply assumed as in older Keynesian-style ones.

After decades of often heated discussions between Keynesians, monetarists, new classical and new Keynesian economists, a synthesis emerged by the 2000s, often given the name the new neoclassical synthesis. It integrated the rational expectations and optimizing framework of the new classical theory with a new Keynesian role for nominal rigidities and other market imperfections like imperfect information in goods, labour and credit markets. The monetarist emphasis on the importance of monetary policy in stabilizing the economy, and in particular controlling inflation, was recognized, as was the traditional Keynesian insistence that fiscal policy could also play an influential role in affecting aggregate demand. Methodologically, the synthesis led to a new class of applied models, known as dynamic stochastic general equilibrium or DSGE models, descending from real business cycle models but extended with several new Keynesian and other features.
These models proved very useful and influential in the design of modern monetary policy and are now standard workhorses in most central banks.

Since the 2007–2008 financial crisis, macroeconomic research has put greater emphasis on understanding and integrating the financial system into models of the general economy, and on shedding light on the ways in which problems in the financial sector can turn into major macroeconomic recessions. In this and other research branches, inspiration from behavioural economics has started playing a more important role in mainstream economic theory. Heterogeneity among economic agents, e.g. differences in income, also plays an increasing role in recent economic research.

Other schools or trends of thought, referring to a particular style of economics practised at and disseminated from well-defined groups of academicians that have become known worldwide, include the Freiburg School, the School of Lausanne, the Stockholm School and the Chicago school of economics. During the 1970s and 1980s mainstream economics was sometimes separated into the Saltwater approach of those universities along the Eastern and Western coasts of the US and the Freshwater, or Chicago school, approach. Within macroeconomics, notable schools include, in general order of their historical appearance in the literature: classical economics, neoclassical economics, Keynesian economics, the neoclassical synthesis, monetarism, new classical economics, New Keynesian economics and the new neoclassical synthesis.

Beside the mainstream development of economic thought, various alternative or heterodox economic theories have evolved over time, positioning themselves in contrast to mainstream theory. These include Marxian economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, econodynamics, feminist economics and biophysical economics. Feminist economics emphasizes the role that gender plays in economies, challenging analyses that render gender invisible or support gender-oppressive economic systems. The goal is to create economic research and policy analysis that is inclusive and gender-aware, to encourage gender equality and improve the well-being of marginalized groups.

Mainstream economic theory relies upon analytical economic models. When creating theories, the objective is to find assumptions which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. While neoclassical economic theory constitutes the dominant, or orthodox, theoretical as well as methodological framework, economic theory can also take the form of other schools of thought, such as heterodox economic theories.

In microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. Early macroeconomic models focused on modelling the relationships between aggregate variables, but as the relationships appeared to change over time, macroeconomists, including new Keynesians, reformulated their models with microfoundations, in which microeconomic concepts play a major part. Sometimes an economic hypothesis is only qualitative, not quantitative. Expositions of economic reasoning often use two-dimensional graphs to illustrate theoretical relationships.
At a higher level of generality, mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Paul Samuelson's treatise Foundations of Economic Analysis (1947) exemplifies the method, particularly as to maximizing behavioural relations of agents reaching equilibrium. The book focused on examining the class of statements called operationally meaningful theorems in economics, which are theorems that can conceivably be refuted by empirical data.

Economic theories are frequently tested empirically, largely through the use of econometrics using economic data. The controlled experiments common to the physical sciences are difficult and uncommon in economics, and instead broad data are studied observationally; this type of testing is typically regarded as less rigorous than controlled experimentation, and the conclusions typically more tentative. However, the field of experimental economics is growing, and increasing use is being made of natural experiments. Statistical methods such as regression analysis are common. Practitioners use such methods to estimate the size, economic significance, and statistical significance ("signal strength") of the hypothesized relation(s) and to adjust for noise from other variables. By such means, a hypothesis may gain acceptance, although in a probabilistic, rather than certain, sense. Acceptance is dependent upon the falsifiable hypothesis surviving tests. Use of commonly accepted methods need not produce a final conclusion or even a consensus on a particular question, given different tests, data sets, and prior beliefs.

Experimental economics has promoted the use of scientifically controlled experiments. This has reduced the long-noted distinction of economics from natural sciences because it allows direct tests of what were previously taken as axioms. In some cases these have found that the axioms are not entirely correct. In behavioural economics, the psychologist Daniel Kahneman won the Nobel Memorial Prize in Economic Sciences in 2002 for his and Amos Tversky's empirical discovery of several cognitive biases and heuristics. Similar empirical testing occurs in neuroeconomics. Another example is the assumption of narrowly selfish preferences versus a model that tests for selfish, altruistic, and cooperative preferences. These techniques have led some to argue that economics is a "genuine science".

Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment. Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a "price taker", as no participant influences the price of a product. In the real world, markets often experience imperfect competition.
Forms of imperfect competition include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Firms under imperfect competition have the potential to be "price makers", which means that they can influence the prices of their products.

In the partial equilibrium method of analysis, it is assumed that activity in the market being analysed does not affect other markets. This method aggregates (the sum of all activity) in only one market. General-equilibrium theory studies various markets and their behaviour. It aggregates (the sum of all activity) across all markets. This method studies both changes in markets and their interactions leading towards equilibrium.

In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods (new computers, bananas, etc.), and "guns" vs. "butter". Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car.

Economic efficiency measures how well a system generates desired output with a given set of inputs and available technology. Efficiency is improved if more output is generated without changing inputs. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off.

The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case an economy can produce just two goods (say "guns" and "butter"). The PPF is a table or graph showing the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good.

Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve. If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter. The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more gun costs 100 units of butter, the opportunity cost of one gun is 100 butter.
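To make the opportunity-cost arithmetic concrete, here is a minimal sketch in Python of reading trade-offs off a PPF. The quadratic frontier and its coefficients are invented for illustration, not drawn from any particular economy, and the function names are likewise hypothetical.

```python
# A hypothetical concave PPF: the butter output attainable for a given guns output.
# The numbers are illustrative only; any decreasing concave function would do.

def butter_possible(guns: float) -> float:
    """Maximum butter output (in units) when `guns` units of guns are produced."""
    return 1000.0 - 0.1 * guns ** 2  # concave: opportunity cost rises with guns

def opportunity_cost_of_gun(guns: float, step: float = 1.0) -> float:
    """Butter forgone for one additional unit of guns, near the current output mix."""
    return butter_possible(guns) - butter_possible(guns + step)

for g in (10, 50, 90):
    print(f"at {g} guns, one more gun costs {opportunity_cost_of_gun(g):.1f} butter")
```

The rising cost of each additional gun as the economy moves along the frontier reflects the bowed-out shape of the PPF described above.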
Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents. By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. A point inside the curve (as at A) is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or the economic organization of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points.

Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organize society for the most efficient use of resources has been described as the "essence of economics", where the subject "makes its unique contribution".

Specialization is considered key to economic efficiency on both theoretical and empirical grounds. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input. Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialize in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else (see the two-good sketch below).

It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries. This has led to investigation of economies of scale and agglomeration to explain specialization in similar but differentiated product lines, to the overall benefit of the respective trading parties or regions. The general theory of specialization applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses. An example that combines features above is a country that specializes in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in differences in opportunity costs of production. More total output and utility thereby results from specializing in production and trading than if each country produced its own high-tech and low-tech products. Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs.
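The two-good sketch referred to above can be made explicit. The labour requirements below are invented for illustration (they echo the textbook Ricardo-style example); only the opportunity-cost comparison matters.

```python
# Hypothetical labour requirements (hours per unit of output).
# Country A has an absolute advantage in both goods (fewer hours everywhere),
# yet each country has a comparative advantage in exactly one good.
hours = {
    "A": {"cloth": 1.0, "wine": 2.0},
    "B": {"cloth": 6.0, "wine": 3.0},
}

for country, req in hours.items():
    # Opportunity cost of one unit of wine, in units of cloth forgone.
    wine_cost_in_cloth = req["wine"] / req["cloth"]
    print(f"{country}: 1 wine costs {wine_cost_in_cloth:.2f} cloth")
```

Country B gives up only 0.5 cloth per unit of wine, against 2.0 cloth for country A, so both can gain if B specializes in wine and A in cloth, even though A is absolutely more productive in both goods.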
In the process, aggregate output may increase as a by-product or by design. Such specialization of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate.

Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.

For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is "constrained utility maximization" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred.

The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect). In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All other determinants are predominantly taken as constant factors of demand and supply.

Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged. That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply curve can shift, say from a change in the price of a productive input or a technical improvement. The "law of supply" states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as the price of substitutes, cost of production, technology applied and various factor inputs of production, are all taken to be constant for a specific time period of evaluation of supply.

Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above.
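A minimal numerical sketch, with invented linear coefficients, shows how that intersection can be computed and previews the adjustment story discussed next.

```python
# Illustrative linear demand and supply schedules (made-up coefficients).
a, b = 100.0, 2.0   # demand: Qd = a - b*P
c, d = 10.0, 1.0    # supply: Qs = c + d*P

# Equilibrium: Qd = Qs  =>  a - b*P = c + d*P  =>  P* = (a - c) / (b + d)
p_star = (a - c) / (b + d)
q_star = a - b * p_star
print(f"equilibrium price {p_star:.2f}, quantity {q_star:.2f}")

# Away from equilibrium, the sign of excess demand gives the direction
# in which the price is pushed (shortage -> up, surplus -> down).
for p in (20.0, p_star, 40.0):
    excess_demand = (a - b * p) - (c + d * p)
    print(f"price {p:.2f}: excess demand {excess_demand:+.2f}")
```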
At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.

People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organize their production in firms when the costs of doing business become lower than doing it on the market. Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading.

In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organization generalizes from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions. Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly.

Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming, and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimize business decisions, including unit-cost minimization and profit maximization, given the firm's objectives and constraints imposed by technology and market conditions.

Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it.

Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organization, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but is equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own. In this, it generalizes maximization approaches developed to analyse market actors, such as in the supply and demand model, and allows for incomplete information of actors.
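As a minimal illustration of strategic interaction, the following sketch enumerates the pure-strategy Nash equilibria of a hypothetical two-player game. The payoff numbers are invented (a prisoner's-dilemma pattern), and Nash's solution concept itself postdates the 1944 founding work discussed next.

```python
from itertools import product

# A hypothetical two-player game in normal form: payoffs[(s1, s2)] = (u1, u2).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}
strategies = ["cooperate", "defect"]

def is_nash(s1: str, s2: str) -> bool:
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    u1, u2 = payoffs[(s1, s2)]
    best1 = all(payoffs[(alt, s2)][0] <= u1 for alt in strategies)
    best2 = all(payoffs[(s1, alt)][1] <= u2 for alt in strategies)
    return best1 and best2

equilibria = [s for s in product(strategies, strategies) if is_nash(*s)]
print(equilibria)  # [('defect', 'defect')] despite (3, 3) being jointly better
```

Mutual defection is the unique equilibrium even though mutual cooperation yields higher payoffs to both, a standard example of individually rational choices producing a collectively inferior outcome.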
The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as the formulation of nuclear strategies, ethics, political science, and evolutionary biology.

Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics, or simply finance, describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation.

Some market organizations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's "Market for Lemons" article, the paradigm example is of a dodgy second-hand car market. Customers without knowledge of whether a car is a "lemon" depress its price below what a quality second-hand car would fetch. Information asymmetry arises here if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving). Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market ("incomplete markets"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care. Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure.

The term "market failure" encompasses several problems which may undermine standard economic assumptions. Although economists categorize market failures differently, the following categories emerge in the main texts. Information asymmetries and incomplete markets may result in economic inefficiency, but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above. Natural monopoly, or the overlapping concepts of "practical" and "technical" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause. Public goods are goods which are under-supplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time. Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.).
Governments often tax and otherwise restrict the sale of goods that have negative externalities, and subsidize or otherwise promote the purchase of goods that have positive externalities, in an effort to correct the price distortions caused by these externalities.

Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply. In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesized long-run equilibrium. Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition.

Some specialized fields of economics deal in market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or "public bads". Policy options include regulations that reflect cost-benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights.

Welfare economics uses microeconomic techniques to evaluate well-being from allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium. It analyzes social welfare, however measured, in terms of economic activities of the individuals that compose the theoretical society considered. Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no "social welfare" apart from the "welfare" associated with its individual units.

Macroeconomics, another branch of economics, examines the economy as a whole to explain broad aggregates and their interactions "top down", that is, using a simplified form of general-equilibrium theory. Such aggregates include national income and output, the unemployment rate, and price inflation, and subaggregates like total consumption and investment spending and their components. It also studies the effects of monetary policy and fiscal policy. Since at least the 1960s, macroeconomics has been characterized by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition. This has addressed a long-standing concern about inconsistent developments of the same subject.

Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth. Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth. Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting.
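As one illustration of the neoclassical growth models just mentioned, here is a minimal discrete-time sketch of Solow-style capital accumulation. The parameter values are invented for the example, not estimates for any economy.

```python
# A minimal Solow-style accumulation sketch with illustrative parameters:
# savings rate s, depreciation rate delta, capital share alpha.
s, delta, alpha = 0.25, 0.05, 0.33

def next_capital(k: float) -> float:
    """Capital per worker next period: current capital plus saving minus depreciation."""
    y = k ** alpha                 # output per worker, y = k^alpha
    return k + s * y - delta * k

k = 1.0
for t in range(200):
    k = next_capital(k)

# The steady state satisfies s * k^alpha = delta * k, i.e. k* = (s/delta)^(1/(1-alpha)).
k_star = (s / delta) ** (1 / (1 - alpha))
print(f"simulated k after 200 periods: {k:.3f}; analytic steady state: {k_star:.3f}")
```

Starting from a low capital stock, the simulated path converges to the analytically derived steady state, mirroring the model's prediction that, absent technological change, growth in output per capita eventually levels off.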
The economics of a depression was the spur for the creation of "macroeconomics" as a separate discipline. During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money, outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output. He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilize output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS–LM model has been the most influential interpretation of The General Theory.

Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with classical economics, stating that Keynesianism is correct in the short run but qualified by classical-like considerations in the intermediate and long run. New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption, "rational expectations" theory led by Robert Lucas, and real business cycle theory. In contrast, the new Keynesian approach retains the rational expectations assumption but assumes a variety of market failures. In particular, New Keynesians assume prices and wages are "sticky", which means they do not adjust instantaneously to changes in economic conditions. Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the "long run" may be very long.

The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can be generally broken down into several types that are related to different causes. Classical unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment theory, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment. Structural unemployment covers a variety of possible causes of unemployment, including a mismatch between workers' skills and the skills required for open jobs. Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills are no longer in demand.
Structural unemployment is similar to frictional unemployment since both reflect the problem of matching workers with job vacancies, but structural unemployment covers the time needed to acquire new skills, not just the short-term search process. While some types of unemployment may occur regardless of the condition of the economy, cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and economic growth. The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment.

Money is a means of final payment for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, "Money is what money does" ("Money is that money does" in the original).

As a medium of exchange, money facilitates trade. It is a measure of value and, more importantly, a store of value, serving as a basis for credit creation. Its economic function can be contrasted with barter (non-monetary exchange). Given a diverse array of produced goods and specialized producers, barter may entail a hard-to-locate double coincidence of wants as to what is exchanged, say apples and a book. Money can reduce the transaction cost of exchange because of its ready acceptability. It is then less costly for the seller to accept money in exchange, rather than what the buyer produces.

Monetary policy is the policy that central banks conduct to accomplish their broader objectives. Most central banks in developed countries follow inflation targeting, whereas the main objective for many central banks in developing countries is to uphold a fixed exchange rate system. The primary monetary tool is normally the adjustment of interest rates, either directly via administratively changing the central bank's own interest rates or indirectly via open market operations. Via the monetary transmission mechanism, interest rate changes affect investment, consumption and net exports, and hence aggregate demand, output and employment, and ultimately the development of wages and inflation.

Governments implement fiscal policy to influence macroeconomic conditions by adjusting spending and taxation policies to alter aggregate demand. When aggregate demand falls below the potential output of the economy, there is an output gap where some productive capacity is left unemployed. Governments increase spending and cut taxes to boost aggregate demand. Resources that have been idled can be used by the government. For example, unemployed home builders can be hired to expand highways. Tax cuts allow consumers to increase their spending, which boosts aggregate demand. Both tax cuts and spending have multiplier effects where the initial increase in demand from the policy percolates through the economy and generates additional economic activity.

The effects of fiscal policy can be limited by crowding out. When there is no output gap, the economy is producing at full capacity and there are no excess productive resources.
If the government increases spending in this situation, the government uses resources that otherwise would have been used by the private sector, so there is no increase in overall output. Some economists think that crowding out is always an issue, while others do not think it is a major issue when output is depressed. Sceptics of fiscal policy also make the argument of Ricardian equivalence. They argue that an increase in debt will have to be paid for with future tax increases, which will cause people to reduce their consumption and save money to pay for the future tax increase. Under Ricardian equivalence, any boost in demand from tax cuts will be offset by the increased saving intended to pay for future higher taxes.

Economic inequality includes income inequality, measured using the distribution of income (the amount of money people receive); wealth inequality, measured using the distribution of wealth (the amount of wealth people own); and other measures such as consumption, land ownership, and human capital. Inequality exists to different extents between countries or states, groups of people, and individuals. There are many methods for measuring inequality, the Gini coefficient being widely used for income differences among individuals (a worked computation appears after this passage). An example measure of inequality between countries is the Inequality-adjusted Human Development Index, a composite index that takes inequality into account. Important concepts of equality include equity, equality of outcome, and equality of opportunity.

Research has linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict. Research suggests that greater inequality hinders economic growth and macroeconomic stability, and that land and human capital inequality reduce growth more than inequality of income. Inequality is at center stage in economic policy debate across the globe, as government tax and spending policies have significant effects on income distribution. In advanced economies, taxes and transfers decrease income inequality by one-third, with most of this being achieved via public social spending (such as pensions and family benefits).

Public economics is the field of economics that deals with economic activities of a public sector, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost-benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics. The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats.

Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like. Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society.

International trade studies determinants of goods-and-services flows across international boundaries. It also concerns the size and distribution of gains from trade. Policy applications include estimating the effects of changing tariff rates and trade quotas.
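Returning to the Gini coefficient flagged above: a minimal sketch of its computation from the mean absolute difference of incomes, using made-up income vectors, is as follows.

```python
# Gini coefficient via the mean-absolute-difference definition:
# G = sum over all pairs |x_i - x_j| / (2 * n^2 * mean).

def gini(incomes):
    """Gini coefficient of a list of non-negative incomes (O(n^2) for clarity)."""
    n = len(incomes)
    mean = sum(incomes) / n
    abs_diffs = sum(abs(x - y) for x in incomes for y in incomes)
    return abs_diffs / (2 * n * n * mean)

print(gini([1, 1, 1, 1]))     # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))   # 0.75 -> one person holds everything
```

A value of 0 indicates perfect equality; with this definition the maximum for four incomes is 0.75, reached when a single person holds all the income.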
International finance is a macroeconomic field which examines the flow of capital across international borders and the effects of these movements on exchange rates. Increased trade in goods, services and capital between countries is a major effect of contemporary globalization.

Labour economics seeks to understand the functioning and dynamics of the markets for wage labour. Labour markets function through the interaction of workers and employers. Labour economics looks at the suppliers of labour services (workers) and the demanders of labour services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labour is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. There are theories which have developed a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macroeconomic system theories that regard human capital as a contradiction in terms.

Development economics examines the economic aspects of the development process in relatively low-income countries, focusing on structural change, poverty, and economic growth. Approaches in development economics frequently incorporate social and political factors.

Economics has been subject to criticism that it relies on unrealistic, unverifiable, or highly simplified assumptions, in some cases because these assumptions simplify the proofs of desired conclusions. For example, the economist Friedrich Hayek claimed that economics (at least historically) used a scientistic approach which he claimed was "decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed". Latter-day examples of such assumptions include perfect information, profit maximization and rational choices, axioms of neoclassical economics. Such criticisms often conflate neoclassical economics with all of contemporary economics: the field of information economics includes both mathematical-economic research and behavioural economics, akin to studies in behavioural psychology, and confounding factors to the neoclassical assumptions are the subject of substantial study in many areas of economics.

Prominent historical mainstream economists such as Keynes and Joskow observed that much of the economics of their time was conceptual rather than quantitative, and difficult to model and formalize quantitatively. In a discussion on oligopoly research, Paul Joskow pointed out in 1975 that in practice, serious students of actual economies tended to use "informal models" based upon qualitative factors specific to particular industries. Joskow had a strong feeling that the important work in oligopoly was done through informal observations while formal models were "trotted out ex post". He argued that formal models were largely not important in the empirical work either, and that the fundamental factor behind the theory of the firm, behaviour, was neglected.

Deirdre McCloskey has argued that many empirical economic studies are poorly reported, and she and Stephen Ziliak argue that although her critique has been well received, practice has not improved.
The extent to which practice has improved since the early 2000s is contested: although economists have noted the discipline's adoption of increasingly rigorous modeling, others have criticized the field's focus on creating computer simulations detached from reality, as well as noting the loss of prestige suffered by the field for failing to anticipate the Great Recession.

Economics has been derogatorily dubbed "the dismal science", an epithet coined by the Victorian historian Thomas Carlyle in the 19th century. It is often stated that Carlyle gave it this nickname as a response to the work of Thomas Robert Malthus, who predicted widespread starvation resulting from projections that population growth would exceed the rate of increase in the food supply. However, the actual phrase was coined by Carlyle in the context of a debate with John Stuart Mill on slavery, in which Carlyle argued for slavery; the "dismal" nature of economics in Carlyle's view was that it "[found] the secret of this Universe in 'supply and demand', and reduc[ed] the duty of human governors to that of letting men alone".

Economics is one social science among several and has fields bordering on other areas, including economic geography, economic history, public choice, energy economics, cultural economics, family economics and institutional economics.

Law and economics, or economic analysis of law, is an approach to legal theory that applies methods of economics to law. It includes the use of economic concepts to explain the effects of legal rules, to assess which legal rules are economically efficient, and to predict what the legal rules will be. A seminal article by Ronald Coase, published in 1960, suggested that well-defined property rights could overcome the problems of externalities.

Political economy is the interdisciplinary study that combines economics, law, and political science in explaining how political institutions, the political environment, and the economic system (capitalist, socialist, mixed) influence each other. It studies questions such as how monopoly, rent-seeking behaviour, and externalities should impact government policy. Historians have employed political economy to explore the ways in which, in the past, persons and groups with common economic interests have used politics to effect changes beneficial to their interests.

Energy economics is a broad scientific subject area which includes topics related to energy supply and energy demand. Nicholas Georgescu-Roegen reintroduced the concept of entropy in relation to economics and energy from thermodynamics, as distinguished from what he viewed as the mechanistic foundation of neoclassical economics drawn from Newtonian physics. His work contributed significantly to thermoeconomics and to ecological economics. He also did foundational work which later developed into evolutionary economics.

The sociological subfield of economic sociology arose, primarily through the work of Émile Durkheim, Max Weber and Georg Simmel, as an approach to analysing the effects of economic phenomena in relation to the overarching social paradigm (i.e. modernity). Classic works include Max Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Georg Simmel's The Philosophy of Money (1900). More recently, the works of James S. Coleman, Mark Granovetter, Peter Hedstrom and Richard Swedberg have been influential in this field.
Gary Becker in 1974 presented an economic theory of social interactions, whose applications included the family, charity, merit goods and multiperson interactions, and envy and hatred. He and Kevin Murphy authored a book in 2001 that analyzed market behaviour in a social environment.

The professionalization of economics, reflected in the growth of graduate programmes on the subject, has been described as "the main change in economics since around 1900". Most major universities and many colleges have a major, school, or department in which academic degrees are awarded in the subject, whether in the liberal arts, business, or for professional study. See Bachelor of Economics and Master of Economics.

In the private sector, professional economists are employed as consultants and in industry, including banking and finance. Economists also work for various government departments and agencies, for example, the national treasury, central bank or national bureau of statistics. See Economic analyst.

There are dozens of prizes awarded to economists each year for outstanding intellectual contributions to the field, the most prominent of which is the Nobel Memorial Prize in Economic Sciences, though it is not a Nobel Prize.

Contemporary economics uses mathematics. Economists draw on the tools of calculus, linear algebra, statistics, game theory, and computer science. Professional economists are expected to be familiar with these tools, while a minority specialize in econometrics and mathematical methods.

Harriet Martineau (1802–1876) was a widely read populariser of classical economic thought. Mary Paley Marshall (1850–1944), the first woman lecturer at a British economics faculty, wrote The Economics of Industry with her husband Alfred Marshall. Joan Robinson (1903–1983) was an important post-Keynesian economist. The economic historian Anna Schwartz (1915–2012) coauthored A Monetary History of the United States, 1867–1960 with Milton Friedman. Three women have received the Nobel Memorial Prize in Economic Sciences: Elinor Ostrom (2009), Esther Duflo (2019) and Claudia Goldin (2023). Five have received the John Bates Clark Medal: Susan Athey (2007), Esther Duflo (2010), Amy Finkelstein (2012), Emi Nakamura (2019) and Melissa Dell (2020). Women's authorship share in prominent economic journals declined from the 1940s to the 1970s, but has subsequently risen, with different patterns of gendered coauthorship. Women remain globally under-represented in the profession (19% of authors in the RePEc database in 2018), with national variation.
[ { "paragraph_id": 0, "text": "Economics (/ˌɛkəˈnɒmɪks, ˌiːkə-/) is a social science that studies the production, distribution, and consumption of goods and services.", "title": "" }, { "paragraph_id": 1, "text": "Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyzes what's viewed as basic elements in the economy, including individual agents and markets, their interactions, and the outcomes of interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyzes the economy as a system where production, consumption, saving, and investment interact, and factors affecting it: employment of the resources of labour, capital, and land, currency inflation, economic growth, and public policies that have impact on these elements.", "title": "" }, { "paragraph_id": 2, "text": "Other broad distinctions within economics include those between positive economics, describing \"what is\", and normative economics, advocating \"what ought to be\"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics.", "title": "" }, { "paragraph_id": 3, "text": "Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science and the environment.", "title": "" }, { "paragraph_id": 4, "text": "", "title": "Definitions of economics" }, { "paragraph_id": 5, "text": "The earlier term for the discipline was 'political economy', but since the late 19th century, it has commonly been called 'economics'. The term is ultimately derived from Ancient Greek οἰκονομία (oikonomia) which is a term for the \"way (nomos) to run a household (oikos)\", or in other words the know-how of an οἰκονομικός (oikonomikos), or \"household or homestead manager\". Derived terms such as \"economy\" can therefore often mean \"frugal\" or \"thrifty\". By extension then, \"political economy\" was the way to manage a polis or state.", "title": "Definitions of economics" }, { "paragraph_id": 6, "text": "There are a variety of modern definitions of economics; some reflect evolving views of the subject or different views among economists. Scottish philosopher Adam Smith (1776) defined what was then called political economy as \"an inquiry into the nature and causes of the wealth of nations\", in particular as:", "title": "Definitions of economics" }, { "paragraph_id": 7, "text": "a branch of the science of a statesman or legislator [with the twofold objectives of providing] a plentiful revenue or subsistence for the people ... [and] to supply the state or commonwealth with a revenue for the publick services.", "title": "Definitions of economics" }, { "paragraph_id": 8, "text": "Jean-Baptiste Say (1803), distinguishing the subject matter from its public-policy uses, defined it as the science of production, distribution, and consumption of wealth. On the satirical side, Thomas Carlyle (1849) coined \"the dismal science\" as an epithet for classical economics, in this context, commonly linked to the pessimistic analysis of Malthus (1798). 
John Stuart Mill (1844) delimited the subject matter further:", "title": "Definitions of economics" }, { "paragraph_id": 9, "text": "The science which traces the laws of such of the phenomena of society as arise from the combined operations of mankind for the production of wealth, in so far as those phenomena are not modified by the pursuit of any other object.", "title": "Definitions of economics" }, { "paragraph_id": 10, "text": "Alfred Marshall provided a still widely cited definition in his textbook Principles of Economics (1890) that extended analysis beyond wealth and from the societal to the microeconomic level:", "title": "Definitions of economics" }, { "paragraph_id": 11, "text": "Economics is a study of man in the ordinary business of life. It enquires how he gets his income and how he uses it. Thus, it is on the one side, the study of wealth and on the other and more important side, a part of the study of man.", "title": "Definitions of economics" }, { "paragraph_id": 12, "text": "Lionel Robbins (1932) developed implications of what has been termed \"[p]erhaps the most commonly accepted current definition of the subject\":", "title": "Definitions of economics" }, { "paragraph_id": 13, "text": "Economics is the science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.", "title": "Definitions of economics" }, { "paragraph_id": 14, "text": "Robbins described the definition as not classificatory in \"pick[ing] out certain kinds of behaviour\" but rather analytical in \"focus[ing] attention on a particular aspect of behaviour, the form imposed by the influence of scarcity.\" He affirmed that previous economists have usually centred their studies on the analysis of wealth: how wealth is created (production), distributed, and consumed; and how wealth can grow. But he said that economics can be used to study other things, such as war, that are outside its usual focus. This is because war has as the goal winning it (as a sought after end), generates both cost and benefits; and, resources (human life and other costs) are used to attain the goal. If the war is not winnable or if the expected costs outweigh the benefits, the deciding actors (assuming they are rational) may never go to war (a decision) but rather explore other alternatives. Economics cannot be defined as the science that studies wealth, war, crime, education, and any other field economic analysis can be applied to; but, as the science that studies a particular common aspect of each of those subjects (they all use scarce resources to attain a sought after end).", "title": "Definitions of economics" }, { "paragraph_id": 15, "text": "Some subsequent comments criticized the definition as overly broad in failing to limit its subject matter to analysis of markets. From the 1960s, however, such comments abated as the economic theory of maximizing behaviour and rational-choice modelling expanded the domain of the subject to areas previously treated in other fields. 
There are other criticisms as well, such as scarcity not accounting for the macroeconomics of high unemployment.", "title": "Definitions of economics" }, { "paragraph_id": 16, "text": "Gary Becker, a contributor to the expansion of economics into new areas, described the approach he favoured as \"combin[ing the] assumptions of maximizing behaviour, stable preferences, and market equilibrium, used relentlessly and unflinchingly.\" One commentary characterizes the remark as making economics an approach rather than a subject matter but with great specificity as to the \"choice process and the type of social interaction that [such] analysis involves.\" The same source reviews a range of definitions included in principles of economics textbooks and concludes that the lack of agreement need not affect the subject-matter that the texts treat. Among economists more generally, it argues that a particular definition presented may reflect the direction toward which the author believes economics is evolving, or should evolve.", "title": "Definitions of economics" }, { "paragraph_id": 17, "text": "Many economists, including Nobel Prize winners James M. Buchanan and Ronald Coase, reject the method-based definition of Robbins and continue to prefer definitions like those of Say, in terms of subject matter. Ha-Joon Chang has, for example, argued that the definition of Robbins would make economics very peculiar because all other sciences define themselves in terms of the area of inquiry or object of inquiry rather than the methodology. In the biology department, they do not say that all biology should be studied with DNA analysis. People study living organisms in many different ways, so some people will do DNA analysis, others might do anatomy, and still others might build game theoretic models of animal behavior. But they are all called biology because they all study living organisms. According to Ha-Joon Chang, this view that the economy can and should be studied in only one way (for example by studying only rational choices), and going even one step further and basically redefining economics as a theory of everything, is very peculiar.", "title": "Definitions of economics" }, { "paragraph_id": 18, "text": "Questions regarding distribution of resources are found throughout the writings of the Boeotian poet Hesiod and several economic historians have described Hesiod himself as the \"first economist\". However, the word Oikos, the Greek word from which the word economy derives, was used for issues regarding how to manage a household (which was understood to be the landowner, his family, and his slaves) rather than to refer to some normative societal system of distribution of resources, which is a much more recent phenomenon. Xenophon, the author of the Oeconomicus, is credited by philologists as the source of the word economy. Other notable writers from antiquity through to the Renaissance who wrote on economic matters include Aristotle, Chanakya (also known as Kautilya), Qin Shi Huang, Ibn Khaldun, and Thomas Aquinas.
Joseph Schumpeter described 16th and 17th century scholastic writers, including Tomás de Mercado, Luis de Molina, and Juan de Lugo, as \"coming nearer than any other group to being the 'founders' of scientific economics\" as to monetary, interest, and value theory within a natural-law perspective.", "title": "History of economic thought" }, { "paragraph_id": 19, "text": "Two groups, who later were called \"mercantilists\" and \"physiocrats\", more directly influenced the subsequent development of the subject. Both groups were associated with the rise of economic nationalism and modern capitalism in Europe. Mercantilism was an economic doctrine that flourished from the 16th to 18th century in a prolific pamphlet literature, whether of merchants or statesmen. It held that a nation's wealth depended on its accumulation of gold and silver. Nations without access to mines could obtain gold and silver from trade only by selling goods abroad and restricting imports other than of gold and silver. The doctrine called for importing cheap raw materials to be used in manufacturing goods, which could be exported, and for state regulation to impose protective tariffs on foreign manufactured goods and prohibit manufacturing in the colonies.", "title": "History of economic thought" }, { "paragraph_id": 20, "text": "Physiocrats, a group of 18th-century French thinkers and writers, developed the idea of the economy as a circular flow of income and output. Physiocrats believed that only agricultural production generated a clear surplus over cost, so that agriculture was the basis of all wealth. Thus, they opposed the mercantilist policy of promoting manufacturing and trade at the expense of agriculture, including import tariffs. Physiocrats advocated replacing administratively costly tax collections with a single tax on income of land owners. In reaction against copious mercantilist trade regulations, the physiocrats advocated a policy of laissez-faire, which called for minimal government intervention in the economy.", "title": "History of economic thought" }, { "paragraph_id": 21, "text": "Adam Smith (1723–1790) was an early economic theorist. Smith was harshly critical of the mercantilists but described the physiocratic system \"with all its imperfections\" as \"perhaps the purest approximation to the truth that has yet been published\" on the subject.", "title": "History of economic thought" }, { "paragraph_id": 22, "text": "The publication of Adam Smith's The Wealth of Nations in 1776 has been described as \"the effective birth of economics as a separate discipline.\" The book identified land, labour, and capital as the three factors of production and the major contributors to a nation's wealth, as distinct from the physiocratic idea that only agriculture was productive.", "title": "History of economic thought" }, { "paragraph_id": 23, "text": "Smith discusses potential benefits of specialization by division of labour, including increased labour productivity and gains from trade, whether between town and country or across countries.
His \"theorem\" that \"the division of labor is limited by the extent of the market\" has been described as the \"core of a theory of the functions of firm and industry\" and a \"fundamental principle of economic organization.\" To Smith has also been ascribed \"the most important substantive proposition in all of economics\" and foundation of resource-allocation theory—that, under competition, resource owners (of labour, land, and capital) seek their most profitable uses, resulting in an equal rate of return for all uses in equilibrium (adjusted for apparent differences arising from such factors as training and unemployment).", "title": "History of economic thought" }, { "paragraph_id": 24, "text": "In an argument that includes \"one of the most famous passages in all economics,\" Smith represents every individual as trying to employ any capital they might command for their own advantage, not that of the society, and for the sake of profit, which is necessary at some level for employing capital in domestic industry, and positively related to the value of produce. In this:", "title": "History of economic thought" }, { "paragraph_id": 25, "text": "He generally, indeed, neither intends to promote the public interest, nor knows how much he is promoting it. By preferring the support of domestic to that of foreign industry, he intends only his own security; and by directing that industry in such a manner as its produce may be of the greatest value, he intends only his own gain, and he is in this, as in many other cases, led by an invisible hand to promote an end which was no part of his intention. Nor is it always the worse for the society that it was no part of it. By pursuing his own interest he frequently promotes that of the society more effectually than when he really intends to promote it.", "title": "History of economic thought" }, { "paragraph_id": 26, "text": "The Rev. Thomas Robert Malthus (1798) used the concept of diminishing returns to explain low living standards. Human population, he argued, tended to increase geometrically, outstripping the production of food, which increased arithmetically. The force of a rapidly growing population against a limited amount of land meant diminishing returns to labour. The result, he claimed, was chronically low wages, which prevented the standard of living for most of the population from rising above the subsistence level. Economist Julian Lincoln Simon has criticized Malthus's conclusions.", "title": "History of economic thought" }, { "paragraph_id": 27, "text": "While Adam Smith emphasized production and income, David Ricardo (1817) focused on the distribution of income among landowners, workers, and capitalists. Ricardo saw an inherent conflict between landowners on the one hand and labour and capital on the other. He posited that the growth of population and capital, pressing against a fixed supply of land, pushes up rents and holds down wages and profits. Ricardo was also the first to state and prove the principle of comparative advantage, according to which each country should specialize in producing and exporting goods in that it has a lower relative cost of production, rather relying only on its own production. 
It has been termed a \"fundamental analytical explanation\" for gains from trade.", "title": "History of economic thought" }, { "paragraph_id": 28, "text": "Coming at the end of the classical tradition, John Stuart Mill (1848) parted company with the earlier classical economists on the inevitability of the distribution of income produced by the market system. Mill pointed to a distinct difference between the market's two roles: allocation of resources and distribution of income. The market might be efficient in allocating resources but not in distributing income, he wrote, making it necessary for society to intervene.", "title": "History of economic thought" }, { "paragraph_id": 29, "text": "Value theory was important in classical theory. Smith wrote that the \"real price of every thing ... is the toil and trouble of acquiring it\". Smith maintained that, with rent and profit, other costs besides wages also enter the price of a commodity. Other classical economists presented variations on Smith, termed the 'labour theory of value'. Classical economics focused on the tendency of any market economy to settle in a final stationary state made up of a constant stock of physical wealth (capital) and a constant population size.", "title": "History of economic thought" }, { "paragraph_id": 30, "text": "Marxist (later, Marxian) economics descends from classical economics and it derives from the work of Karl Marx. The first volume of Marx's major work, Das Kapital, was published in 1867. Marx focused on the labour theory of value and theory of surplus value which, he believed, explained the exploitation of labour by capital. The labour theory of value held that the value of an exchanged commodity was determined by the labour that went into its production, and the theory of surplus value demonstrated how workers were only paid a proportion of the value their work had created.", "title": "History of economic thought" }, { "paragraph_id": 31, "text": "Marxian economics was further developed by Karl Kautsky (1854–1938)'s The Economic Doctrines of Karl Marx and The Class Struggle (Erfurt Program), Rudolf Hilferding's (1877–1941) Finance Capital, Vladimir Lenin (1870–1924)'s The Development of Capitalism in Russia and Imperialism, the Highest Stage of Capitalism, and Rosa Luxemburg (1871–1919)'s The Accumulation of Capital.", "title": "History of economic thought" }, { "paragraph_id": 32, "text": "At its inception as a social science, economics was defined and discussed at length as the study of production, distribution, and consumption of wealth by Jean-Baptiste Say in his Treatise on Political Economy or, The Production, Distribution, and Consumption of Wealth (1803). These three items were considered only in relation to the increase or diminution of wealth, and not in reference to their processes of execution. Say's definition has survived in part up to the present, modified by substituting the word \"wealth\" for \"goods and services\" meaning that wealth may include non-material objects as well. One hundred and thirty years later, Lionel Robbins noticed that this definition no longer sufficed, because many economists were making theoretical and philosophical inroads in other areas of human activity. 
In his Essay on the Nature and Significance of Economic Science, he proposed a definition of economics as a study of human behaviour, subject to and constrained by scarcity, which forces people to choose, allocate scarce resources to competing ends, and economize (seeking the greatest welfare while avoiding the wasting of scarce resources). According to Robbins: \"Economics is the science which studies human behavior as a relationship between ends and scarce means which have alternative uses\". Robbins' definition eventually became widely accepted by mainstream economists and found its way into current textbooks. Although far from unanimous, most mainstream economists would accept some version of Robbins' definition, even though many have raised serious objections to the scope and method of economics emanating from that definition.", "title": "History of economic thought" }, { "paragraph_id": 33, "text": "A body of theory later termed \"neoclassical economics\" formed from about 1870 to 1910. The term \"economics\" was popularized by such neoclassical economists as Alfred Marshall and Mary Paley Marshall as a concise synonym for \"economic science\" and a substitute for the earlier \"political economy\". This corresponded to the influence on the subject of mathematical methods used in the natural sciences.", "title": "History of economic thought" }, { "paragraph_id": 34, "text": "Neoclassical economics systematically integrated supply and demand as joint determinants of both price and quantity in market equilibrium, influencing the allocation of output and income distribution. It rejected classical economics' labour theory of value in favor of a marginal utility theory of value on the demand side and a more comprehensive theory of costs on the supply side. In the 20th century, neoclassical theorists departed from an earlier idea that suggested measuring total utility for a society, opting instead for ordinal utility, which posits behavior-based relations across individuals.", "title": "History of economic thought" }, { "paragraph_id": 35, "text": "In microeconomics, neoclassical economics represents incentives and costs as playing a pervasive role in shaping decision making. An immediate example of this is the consumer theory of individual demand, which isolates how prices (as costs) and income affect quantity demanded. In macroeconomics it is reflected in an early and lasting neoclassical synthesis with Keynesian macroeconomics.", "title": "History of economic thought" }, { "paragraph_id": 36, "text": "Neoclassical economics is occasionally referred to as orthodox economics, whether by its critics or sympathizers. Modern mainstream economics builds on neoclassical economics but with many refinements that either supplement or generalize earlier analysis, such as econometrics, game theory, analysis of market failure and imperfect competition, and the neoclassical model of economic growth for analysing long-run variables affecting national income.", "title": "History of economic thought" }, { "paragraph_id": 37, "text": "Neoclassical economics studies the behaviour of individuals, households, and organizations (called economic actors, players, or agents) when they manage or use scarce resources, which have alternative uses, to achieve desired ends. Agents are assumed to act rationally, have multiple desirable ends in sight, limited resources to obtain these ends, a set of stable preferences, a definite overall guiding objective, and the capability of making a choice.
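A minimal sketch of such constrained choice, assuming an invented Cobb–Douglas-style utility function and budget; a brute-force search over affordable bundles stands in for the calculus a textbook would use.

```python
# Choice under scarcity: pick the affordable bundle (x, y) with the
# highest utility. Preferences, prices, and income are all assumptions.

def utility(x: int, y: int) -> float:
    return (x ** 0.5) * (y ** 0.5)      # Cobb-Douglas-style preferences

income, px, py = 12, 1, 2               # budget constraint: 1*x + 2*y <= 12

best = max(
    ((x, y) for x in range(income + 1) for y in range(income + 1)
     if px * x + py * y <= income),
    key=lambda bundle: utility(*bundle),
)
print(best, round(utility(*best), 3))   # (6, 3) 4.243: half of income on each good
```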
There exists an economic problem, subject to study by economic science, when a decision (choice) is made by one or more players to attain the best possible outcome.", "title": "History of economic thought" }, { "paragraph_id": 38, "text": "Keynesian economics derives from John Maynard Keynes, in particular his book The General Theory of Employment, Interest and Money (1936), which ushered in contemporary macroeconomics as a distinct field. The book focused on determinants of national income in the short run when prices are relatively inflexible. Keynes attempted to explain in broad theoretical detail why high labour-market unemployment might not be self-correcting due to low \"effective demand\" and why even price flexibility and monetary policy might be unavailing. The term \"revolutionary\" has been applied to the book in its impact on economic analysis.", "title": "History of economic thought" }, { "paragraph_id": 39, "text": "During the following decades, many economists followed Keynes' ideas and expanded on his works. John Hicks and Alvin Hansen developed the IS–LM model, which was a simple formalisation of some of Keynes' insights on the economy's short-run equilibrium. Franco Modigliani and James Tobin developed important theories of private consumption and investment, respectively, two major components of aggregate demand. Lawrence Klein built the first large-scale macroeconometric model, applying the Keynesian thinking systematically to the US economy.", "title": "History of economic thought" }, { "paragraph_id": 40, "text": "Immediately after World War II, Keynesian economics was the dominant economic view of the United States establishment and its allies, while Marxian economics was the dominant economic view of the Soviet Union nomenklatura and its allies.", "title": "History of economic thought" }, { "paragraph_id": 41, "text": "Monetarism appeared in the 1950s and 1960s, its intellectual leader being Milton Friedman. Monetarists contended that monetary policy and other monetary shocks, as represented by the growth in the money stock, were an important cause of economic fluctuations, and consequently that monetary policy was more important than fiscal policy for purposes of stabilization. Friedman was also skeptical about the ability of central banks to conduct a sensible active monetary policy in practice, advocating instead using simple rules such as a steady rate of money growth.", "title": "History of economic thought" }, { "paragraph_id": 42, "text": "Monetarism rose to prominence in the 1970s and 1980s, when several major central banks followed a monetarist-inspired policy, but was later abandoned again because the results turned out to be unsatisfactory.", "title": "History of economic thought" }, { "paragraph_id": 43, "text": "A more fundamental challenge to the prevailing Keynesian paradigm came in the 1970s from new classical economists like Robert Lucas, Thomas Sargent and Edward Prescott. They introduced the notion of rational expectations in economics, which had profound implications for many economic discussions, among which were the so-called Lucas critique and the presentation of real business cycle models.", "title": "History of economic thought" }, { "paragraph_id": 44, "text": "During the 1980s, a group of researchers who came to be called New Keynesian economists appeared, including among others George Akerlof, Janet Yellen, Gregory Mankiw and Olivier Blanchard.
They adopted the principle of rational expectations and other monetarist or new classical ideas such as building upon models employing microfoundations and optimizing behaviour, but simultaneously emphasized the importance of various market failures for the functioning of the economy, as had Keynes. Not least, they proposed various reasons that potentially explained the empirically observed features of price and wage rigidity, usually made to be endogenous features of the models, rather than simply assumed as in older Keynesian-style ones.", "title": "History of economic thought" }, { "paragraph_id": 45, "text": "After decades of often heated discussions between Keynesians, monetarists, new classical and new Keynesian economists, a synthesis emerged by the 2000s, often given the name the new neoclassical synthesis. It integrated the rational expectations and optimizing framework of the new classical theory with a new Keynesian role for nominal rigidities and other market imperfections like imperfect information in goods, labour and credit markets. The monetarist importance of monetary policy in stabilizing the economy and in particular controlling inflation was recognized as well as the traditional Keynesian insistence that fiscal policy could also play an influential role in affecting aggregate demand. Methodologically, the synthesis led to a new class of applied models, known as dynamic stochastic general equilibrium or DSGE models, descending from real business cycles models, but extended with several new Keynesian and other features. These models proved very useful and influential in the design of modern monetary policy and are now standard workhorses in most central banks.", "title": "History of economic thought" }, { "paragraph_id": 46, "text": "After the 2007–2008 financial crisis, macroeconomic research has put greater emphasis on understanding and integrating the financial system into models of the general economy and shedding light on the ways in which problems in the financial sector can turn into major macroeconomic recessions. In this and other research branches, inspiration from behavioral economics has started playing a more important role in mainstream economic theory. Also, heterogeneity among the economic agents, e.g. differences in income, plays an increasing role in recent economic research.", "title": "History of economic thought" }, { "paragraph_id": 47, "text": "Other schools or trends of thought referring to a particular style of economics practised at and disseminated from well-defined groups of academicians that have become known worldwide include the Freiburg School, the School of Lausanne, the Stockholm school and the Chicago school of economics.
During the 1970s and 1980s mainstream economics was sometimes separated into the Saltwater approach of those universities along the Eastern and Western coasts of the US, and the Freshwater, or Chicago school approach.", "title": "History of economic thought" }, { "paragraph_id": 48, "text": "Within macroeconomics there are, in general order of their historical appearance in the literature: classical economics, neoclassical economics, Keynesian economics, the neoclassical synthesis, monetarism, new classical economics, New Keynesian economics and the new neoclassical synthesis.", "title": "History of economic thought" }, { "paragraph_id": 49, "text": "Besides the mainstream development of economic thought, various alternative or heterodox economic theories have evolved over time, positioning themselves in contrast to mainstream theory. These include:", "title": "History of economic thought" }, { "paragraph_id": 50, "text": "Additionally, alternative developments include Marxian economics, constitutional economics, institutional economics, evolutionary economics, dependency theory, structuralist economics, world systems theory, econophysics, econodynamics, feminist economics and biophysical economics.", "title": "History of economic thought" }, { "paragraph_id": 51, "text": "Feminist economics emphasizes the role that gender plays in economies, challenging analyses that render gender invisible or support gender-oppressive economic systems. The goal is to create economic research and policy analysis that is inclusive and gender-aware to encourage gender equality and improve the well-being of marginalized groups.", "title": "History of economic thought" }, { "paragraph_id": 52, "text": "Mainstream economic theory relies upon analytical economic models. When creating theories, the objective is to find assumptions which are at least as simple in information requirements, more precise in predictions, and more fruitful in generating additional research than prior theories. While neoclassical economic theory constitutes the dominant or orthodox theoretical and methodological framework, economic theory can also take the form of other schools of thought such as in heterodox economic theories.", "title": "Methodology" }, { "paragraph_id": 53, "text": "In microeconomics, principal concepts include supply and demand, marginalism, rational choice theory, opportunity cost, budget constraints, utility, and the theory of the firm. Early macroeconomic models focused on modelling the relationships between aggregate variables, but as the relationships appeared to change over time, macroeconomists, including new Keynesians, reformulated their models with microfoundations, in which microeconomic concepts play a major part.", "title": "Methodology" }, { "paragraph_id": 54, "text": "Sometimes an economic hypothesis is only qualitative, not quantitative.", "title": "Methodology" }, { "paragraph_id": 55, "text": "Expositions of economic reasoning often use two-dimensional graphs to illustrate theoretical relationships. At a higher level of generality, mathematical economics is the application of mathematical methods to represent theories and analyze problems in economics. Paul Samuelson's treatise Foundations of Economic Analysis (1947) exemplifies the method, particularly as to maximizing behavioral relations of agents reaching equilibrium.
The book focused on examining the class of statements called operationally meaningful theorems in economics, which are theorems that can conceivably be refuted by empirical data.", "title": "Methodology" }, { "paragraph_id": 56, "text": "Economic theories are frequently tested empirically, largely through the use of econometrics using economic data. The controlled experiments common to the physical sciences are difficult and uncommon in economics, and instead broad data is observationally studied; this type of testing is typically regarded as less rigorous than controlled experimentation, and the conclusions typically more tentative. However, the field of experimental economics is growing, and increasing use is being made of natural experiments.", "title": "Methodology" }, { "paragraph_id": 57, "text": "Statistical methods such as regression analysis are common. Practitioners use such methods to estimate the size, economic significance, and statistical significance (\"signal strength\") of the hypothesized relation(s) and to adjust for noise from other variables. By such means, a hypothesis may gain acceptance, although in a probabilistic, rather than certain, sense. Acceptance is dependent upon the falsifiable hypothesis surviving tests. Use of commonly accepted methods need not produce a final conclusion or even a consensus on a particular question, given different tests, data sets, and prior beliefs.", "title": "Methodology" }, { "paragraph_id": 58, "text": "Experimental economics has promoted the use of scientifically controlled experiments. This has reduced the long-noted distinction of economics from natural sciences because it allows direct tests of what were previously taken as axioms. In some cases these have found that the axioms are not entirely correct.", "title": "Methodology" }, { "paragraph_id": 59, "text": "In behavioural economics, psychologist Daniel Kahneman won the Nobel Prize in economics in 2002 for his and Amos Tversky's empirical discovery of several cognitive biases and heuristics. Similar empirical testing occurs in neuroeconomics. Another example is the assumption of narrowly selfish preferences versus a model that tests for selfish, altruistic, and cooperative preferences. These techniques have led some to argue that economics is a \"genuine science\".", "title": "Methodology" }, { "paragraph_id": 60, "text": "Microeconomics examines how entities, forming a market structure, interact within a market to create a market system. These entities include private and public players with various classifications, typically operating under scarcity of tradable units and regulation. The item traded may be a tangible product such as apples or a service such as repair services, legal counsel, or entertainment.", "title": "Microeconomics" }, { "paragraph_id": 61, "text": "Various market structures exist. In perfectly competitive markets, no participants are large enough to have the market power to set the price of a homogeneous product. In other words, every participant is a \"price taker\" as no participant influences the price of a product. 
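Price-taking has a sharp operational meaning: the firm expands output until the marginal cost of one more unit reaches the going price. A sketch under an assumed cost schedule (all numbers invented):

```python
# A competitive firm takes the market price as given and produces every
# unit whose marginal cost is below that price. Illustrative numbers only.

price = 10.0                             # market price the firm cannot influence

def marginal_cost(q: int) -> float:
    return 2.0 + 0.5 * q                 # assumed rising marginal cost

q = 0
while marginal_cost(q + 1) <= price:     # produce while the next unit pays for itself
    q += 1

variable_cost = sum(marginal_cost(i) for i in range(1, q + 1))
print(q, price * q - variable_cost)      # 16 units; revenue 160 - cost 100 = profit 60
```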
In the real world, markets often experience imperfect competition.", "title": "Microeconomics" }, { "paragraph_id": 62, "text": "Forms of imperfect competition include monopoly (in which there is only one seller of a good), duopoly (in which there are only two sellers of a good), oligopoly (in which there are few sellers of a good), monopolistic competition (in which there are many sellers producing highly differentiated goods), monopsony (in which there is only one buyer of a good), and oligopsony (in which there are few buyers of a good). Firms under imperfect competition have the potential to be \"price makers\", which means that they can influence the prices of their products.", "title": "Microeconomics" }, { "paragraph_id": 63, "text": "In the partial equilibrium method of analysis, it is assumed that activity in the market being analysed does not affect other markets. This method aggregates (the sum of all activity) in only one market. General-equilibrium theory studies various markets and their behaviour. It aggregates (the sum of all activity) across all markets. This method studies both changes in markets and their interactions leading towards equilibrium.", "title": "Microeconomics" }, { "paragraph_id": 64, "text": "In microeconomics, production is the conversion of inputs into outputs. It is an economic process that uses inputs to create a commodity or a service for exchange or direct use. Production is a flow and thus a rate of output per period of time. Distinctions include such production alternatives as for consumption (food, haircuts, etc.) vs. investment goods (new tractors, buildings, roads, etc.), public goods (national defence, smallpox vaccinations, etc.) or private goods (new computers, bananas, etc.), and \"guns\" vs \"butter\".", "title": "Microeconomics" }, { "paragraph_id": 65, "text": "Inputs used in the production process include such primary factors of production as labour services, capital (durable produced goods used in production, such as an existing factory), and land (including natural resources). Other inputs may include intermediate goods used in production of final goods, such as the steel in a new car.", "title": "Microeconomics" }, { "paragraph_id": 66, "text": "Economic efficiency measures how well a system generates desired output with a given set of inputs and available technology. Efficiency is improved if more output is generated without changing inputs. A widely accepted general standard is Pareto efficiency, which is reached when no further change can make someone better off without making someone else worse off.", "title": "Microeconomics" }, { "paragraph_id": 67, "text": "The production–possibility frontier (PPF) is an expository figure for representing scarcity, cost, and efficiency. In the simplest case an economy can produce just two goods (say \"guns\" and \"butter\"). The PPF is a table or graph (as at the right) showing the different quantity combinations of the two goods producible with a given technology and total factor inputs, which limit feasible total output. Each point on the curve shows potential total output for the economy, which is the maximum feasible output of one good, given a feasible output quantity of the other good.", "title": "Microeconomics" }, { "paragraph_id": 68, "text": "Scarcity is represented in the figure by people being willing but unable in the aggregate to consume beyond the PPF (such as at X) and by the negative slope of the curve.
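A short numerical sketch of these PPF properties, using an invented concave frontier between \"guns\" and \"butter\":

```python
# Production-possibility frontier: invented schedule of the maximum
# butter output attainable at each gun output.

ppf = {0: 100, 1: 95, 2: 85, 3: 70, 4: 50, 5: 0}    # guns -> max butter

for g in range(1, 6):
    # Opportunity cost of the g-th gun = butter forgone (the PPF's slope).
    print(f"gun #{g} costs {ppf[g - 1] - ppf[g]} butter")

# Costs of 5, 10, 15, 20, 50 butter: rising opportunity cost, i.e. the
# bowed-out shape. A point like (2 guns, 60 butter) lies inside the
# frontier (85 was feasible): productive inefficiency. (3, 80) lies
# outside it and is unattainable with the given inputs and technology.
```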
If production of one good increases along the curve, production of the other good decreases, an inverse relationship. This is because increasing output of one good requires transferring inputs to it from production of the other good, decreasing the latter.", "title": "Microeconomics" }, { "paragraph_id": 69, "text": "The slope of the curve at a point on it gives the trade-off between the two goods. It measures what an additional unit of one good costs in units forgone of the other good, an example of a real opportunity cost. Thus, if one more gun costs 100 units of butter, the opportunity cost of one gun is 100 butter. Along the PPF, scarcity implies that choosing more of one good in the aggregate entails doing with less of the other good. Still, in a market economy, movement along the curve may indicate that the choice of the increased output is anticipated to be worth the cost to the agents.", "title": "Microeconomics" }, { "paragraph_id": 70, "text": "By construction, each point on the curve shows productive efficiency in maximizing output for given total inputs. A point inside the curve (as at A) is feasible but represents production inefficiency (wasteful use of inputs), in that output of one or both goods could increase by moving in a northeast direction to a point on the curve. Examples cited of such inefficiency include high unemployment during a business-cycle recession or economic organization of a country that discourages full use of resources. Being on the curve might still not fully satisfy allocative efficiency (also called Pareto efficiency) if it does not produce a mix of goods that consumers prefer over other points.", "title": "Microeconomics" }, { "paragraph_id": 71, "text": "Much applied economics in public policy is concerned with determining how the efficiency of an economy can be improved. Recognizing the reality of scarcity and then figuring out how to organize society for the most efficient use of resources has been described as the \"essence of economics\", where the subject \"makes its unique contribution.\"", "title": "Microeconomics" }, { "paragraph_id": 72, "text": "Specialization is considered key to economic efficiency based on theoretical and empirical considerations. Different individuals or nations may have different real opportunity costs of production, say from differences in stocks of human capital per worker or capital/labour ratios. According to theory, this may give a comparative advantage in production of goods that make more intensive use of the relatively more abundant, thus relatively cheaper, input.", "title": "Microeconomics" }, { "paragraph_id": 73, "text": "Even if one region has an absolute advantage as to the ratio of its outputs to inputs in every type of output, it may still specialize in the output in which it has a comparative advantage and thereby gain from trading with a region that lacks any absolute advantage but has a comparative advantage in producing something else.", "title": "Microeconomics" }, { "paragraph_id": 74, "text": "It has been observed that a high volume of trade occurs among regions even with access to a similar technology and mix of factor inputs, including high-income countries.
This has led to investigation of economies of scale and agglomeration to explain specialization in similar but differentiated product lines, to the overall benefit of respective trading parties or regions.", "title": "Microeconomics" }, { "paragraph_id": 75, "text": "The general theory of specialization applies to trade among individuals, farms, manufacturers, service providers, and economies. Among each of these production systems, there may be a corresponding division of labour with different work groups specializing, or correspondingly different types of capital equipment and differentiated land uses.", "title": "Microeconomics" }, { "paragraph_id": 76, "text": "An example that combines features above is a country that specializes in the production of high-tech knowledge products, as developed countries do, and trades with developing nations for goods produced in factories where labour is relatively cheap and plentiful, resulting in differences in opportunity costs of production. More total output and utility thereby result from specializing in production and trading than if each country produced its own high-tech and low-tech products.", "title": "Microeconomics" }, { "paragraph_id": 77, "text": "Theory and observation set out the conditions such that market prices of outputs and productive inputs select an allocation of factor inputs by comparative advantage, so that (relatively) low-cost inputs go to producing low-cost outputs. In the process, aggregate output may increase as a by-product or by design. Such specialization of production creates opportunities for gains from trade whereby resource owners benefit from trade in the sale of one type of output for other, more highly valued goods. A measure of gains from trade is the increased income levels that trade may facilitate.", "title": "Microeconomics" }, { "paragraph_id": 78, "text": "Prices and quantities have been described as the most directly observable attributes of goods produced and exchanged in a market economy. The theory of supply and demand is an organizing principle for explaining how prices coordinate the amounts produced and consumed. In microeconomics, it applies to price and output determination for a market with perfect competition, which includes the condition of no buyers or sellers large enough to have price-setting power.", "title": "Microeconomics" }, { "paragraph_id": 79, "text": "For a given market of a commodity, demand is the relation of the quantity that all buyers would be prepared to purchase at each unit price of the good. Demand is often represented by a table or a graph showing price and quantity demanded (as in the figure). Demand theory describes individual consumers as rationally choosing the most preferred quantity of each good, given income, prices, tastes, etc. A term for this is \"constrained utility maximization\" (with income and wealth as the constraints on demand). Here, utility refers to the hypothesized relation of each individual consumer for ranking different commodity bundles as more or less preferred.", "title": "Microeconomics" }, { "paragraph_id": 80, "text": "The law of demand states that, in general, price and quantity demanded in a given market are inversely related. That is, the higher the price of a product, the less of it people would be prepared to buy (other things unchanged). As the price of a commodity falls, consumers move toward it from relatively more expensive goods (the substitution effect).
In addition, purchasing power from the price decline increases ability to buy (the income effect). Other factors can change demand; for example, an increase in income will shift the demand curve for a normal good outward relative to the origin, as in the figure. All determinants are predominantly taken as constant factors of demand and supply.", "title": "Microeconomics" }, { "paragraph_id": 81, "text": "Supply is the relation between the price of a good and the quantity available for sale at that price. It may be represented as a table or graph relating price and quantity supplied. Producers, for example business firms, are hypothesized to be profit maximizers, meaning that they attempt to produce and supply the amount of goods that will bring them the highest profit. Supply is typically represented as a function relating price and quantity, if other factors are unchanged.", "title": "Microeconomics" }, { "paragraph_id": 82, "text": "That is, the higher the price at which the good can be sold, the more of it producers will supply, as in the figure. The higher price makes it profitable to increase production. Just as on the demand side, the position of the supply can shift, say from a change in the price of a productive input or a technical improvement. The law of supply states that, in general, a rise in price leads to an expansion in supply and a fall in price leads to a contraction in supply. Here as well, the determinants of supply, such as price of substitutes, cost of production, technology applied and various factor inputs of production are all taken to be constant for a specific time period of evaluation of supply.", "title": "Microeconomics" }, { "paragraph_id": 83, "text": "Market equilibrium occurs where quantity supplied equals quantity demanded, the intersection of the supply and demand curves in the figure above. At a price below equilibrium, there is a shortage of quantity supplied compared to quantity demanded. This is posited to bid the price up. At a price above equilibrium, there is a surplus of quantity supplied compared to quantity demanded. This pushes the price down. The model of supply and demand predicts that for given supply and demand curves, price and quantity will stabilize at the price that makes quantity supplied equal to quantity demanded. Similarly, demand-and-supply theory predicts a new price-quantity combination from a shift in demand (as to the figure), or in supply.", "title": "Microeconomics" }, { "paragraph_id": 84, "text": "People frequently do not trade directly on markets. Instead, on the supply side, they may work in and produce through firms. The most obvious kinds of firms are corporations, partnerships and trusts. According to Ronald Coase, people begin to organize their production in firms when the costs of doing business become lower than those of doing it on the market. Firms combine labour and capital, and can achieve far greater economies of scale (when the average cost per unit declines as more units are produced) than individual market trading.", "title": "Microeconomics" }, { "paragraph_id": 85, "text": "In perfectly competitive markets studied in the theory of supply and demand, there are many producers, none of which significantly influence price. Industrial organization generalizes from that special case to study the strategic behaviour of firms that do have significant control of price. It considers the structure of such markets and their interactions.
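Before turning to those imperfectly competitive structures, the supply-and-demand mechanics above can be made concrete. Assuming linear schedules with invented coefficients (an illustration, not the general theory):

```python
# Linear demand and supply with illustrative coefficients:
#   demand: Qd = 100 - 2P   (law of demand: higher price, less bought)
#   supply: Qs = -20 + 4P   (law of supply: higher price, more offered)

def qd(p: float, intercept: float = 100.0) -> float:
    return intercept - 2.0 * p

def qs(p: float) -> float:
    return -20.0 + 4.0 * p

p_star = 120.0 / 6.0                    # 100 - 2P = -20 + 4P  =>  P* = 20
print(p_star, qd(p_star), qs(p_star))   # 20.0 60.0 60.0 -- the market clears

# An income rise for a normal good shifts demand outward (intercept 100 -> 130):
p_new = 150.0 / 6.0                     # 130 - 2P = -20 + 4P  =>  P* = 25
print(p_new, qd(p_new, 130.0))          # 25.0 80.0 -- higher price and quantity
```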
Common market structures studied besides perfect competition include monopolistic competition, various forms of oligopoly, and monopoly.", "title": "Microeconomics" }, { "paragraph_id": 86, "text": "Managerial economics applies microeconomic analysis to specific decisions in business firms or other management units. It draws heavily from quantitative methods such as operations research and programming and from statistical methods such as regression analysis in the absence of certainty and perfect knowledge. A unifying theme is the attempt to optimize business decisions, including unit-cost minimization and profit maximization, given the firm's objectives and constraints imposed by technology and market conditions.", "title": "Microeconomics" }, { "paragraph_id": 87, "text": "Uncertainty in economics is an unknown prospect of gain or loss, whether quantifiable as risk or not. Without it, household behaviour would be unaffected by uncertain employment and income prospects, financial and capital markets would reduce to exchange of a single instrument in each market period, and there would be no communications industry. Given its different forms, there are various ways of representing uncertainty and modelling economic agents' responses to it.", "title": "Microeconomics" }, { "paragraph_id": 88, "text": "Game theory is a branch of applied mathematics that considers strategic interactions between agents, one kind of uncertainty. It provides a mathematical foundation of industrial organization, discussed above, to model different types of firm behaviour, for example in an oligopolistic industry (few sellers), but equally applicable to wage negotiations, bargaining, contract design, and any situation where individual agents are few enough to have perceptible effects on each other. In behavioural economics, it has been used to model the strategies agents choose when interacting with others whose interests are at least partially adverse to their own.", "title": "Microeconomics" }, { "paragraph_id": 89, "text": "In this, it generalizes maximization approaches developed to analyse market actors such as in the supply and demand model and allows for incomplete information of actors. The field dates from the 1944 classic Theory of Games and Economic Behavior by John von Neumann and Oskar Morgenstern. It has significant applications seemingly outside of economics in such diverse subjects as the formulation of nuclear strategies, ethics, political science, and evolutionary biology.", "title": "Microeconomics" }, { "paragraph_id": 90, "text": "Risk aversion may stimulate activity that in well-functioning markets smooths out risk and communicates information about risk, as in markets for insurance, commodity futures contracts, and financial instruments. Financial economics or simply finance describes the allocation of financial resources. It also analyses the pricing of financial instruments, the financial structure of companies, the efficiency and fragility of financial markets, financial crises, and related government policy or regulation.", "title": "Microeconomics" }, { "paragraph_id": 91, "text": "Some market organizations may give rise to inefficiencies associated with uncertainty. Based on George Akerlof's \"Market for Lemons\" article, the paradigm example is that of a dodgy second-hand car market. Customers without knowledge of whether a car is a \"lemon\" depress its price below what a quality second-hand car would command.
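The unraveling logic of the lemons market can be sketched with invented quality numbers: buyers offer the average quality remaining on the market, and sellers of better-than-average cars withdraw.

```python
# Akerlof-style adverse-selection unraveling; valuations are invented.
import statistics

market = [1000, 2000, 3000, 4000, 5000]   # sellers' true car values

while market:
    offer = statistics.mean(market)       # buyers pay only the average quality
    staying = [q for q in market if q <= offer]   # better cars exit the market
    if staying == market:
        break
    market = staying

print(market)   # [1000]: only the worst car trades; the pooled price
                # undervalued every better car and drove it out
```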
Information asymmetry arises here if the seller has more relevant information than the buyer but no incentive to disclose it. Related problems in insurance are adverse selection, such that those at most risk are most likely to insure (say reckless drivers), and moral hazard, such that insurance results in riskier behaviour (say more reckless driving).", "title": "Microeconomics" }, { "paragraph_id": 92, "text": "Both problems may raise insurance costs and reduce efficiency by driving otherwise willing transactors from the market (\"incomplete markets\"). Moreover, attempting to reduce one problem, say adverse selection by mandating insurance, may add to another, say moral hazard. Information economics, which studies such problems, has relevance in subjects such as insurance, contract law, mechanism design, monetary economics, and health care. Applied subjects include market and legal remedies to spread or reduce risk, such as warranties, government-mandated partial insurance, restructuring or bankruptcy law, inspection, and regulation for quality and information disclosure.", "title": "Microeconomics" }, { "paragraph_id": 93, "text": "The term \"market failure\" encompasses several problems which may undermine standard economic assumptions. Although economists categorize market failures differently, the following categories emerge in the main texts.", "title": "Microeconomics" }, { "paragraph_id": 94, "text": "Information asymmetries and incomplete markets may result in economic inefficiency but also a possibility of improving efficiency through market, legal, and regulatory remedies, as discussed above.", "title": "Microeconomics" }, { "paragraph_id": 95, "text": "Natural monopoly, or the overlapping concepts of \"practical\" and \"technical\" monopoly, is an extreme case of failure of competition as a restraint on producers. Extreme economies of scale are one possible cause.", "title": "Microeconomics" }, { "paragraph_id": 96, "text": "Public goods are goods which are under-supplied in a typical market. The defining features are that people can consume public goods without having to pay for them and that more than one person can consume the good at the same time.", "title": "Microeconomics" }, { "paragraph_id": 97, "text": "Externalities occur where there are significant social costs or benefits from production or consumption that are not reflected in market prices. For example, air pollution may generate a negative externality, and education may generate a positive externality (less crime, etc.). Governments often tax and otherwise restrict the sale of goods that have negative externalities and subsidize or otherwise promote the purchase of goods that have positive externalities in an effort to correct the price distortions caused by these externalities. Elementary demand-and-supply theory predicts equilibrium but not the speed of adjustment for changes of equilibrium due to a shift in demand or supply.", "title": "Microeconomics" }, { "paragraph_id": 98, "text": "In many areas, some form of price stickiness is postulated to account for quantities, rather than prices, adjusting in the short run to changes on the demand side or the supply side. This includes standard analysis of the business cycle in macroeconomics. Analysis often revolves around causes of such price stickiness and their implications for reaching a hypothesized long-run equilibrium.
Examples of such price stickiness in particular markets include wage rates in labour markets and posted prices in markets deviating from perfect competition.", "title": "Microeconomics" }, { "paragraph_id": 99, "text": "Some specialized fields of economics deal with market failure more than others. The economics of the public sector is one example. Much environmental economics concerns externalities or \"public bads\".", "title": "Microeconomics" }, { "paragraph_id": 100, "text": "Policy options include regulations that reflect cost-benefit analysis or market solutions that change incentives, such as emission fees or redefinition of property rights.", "title": "Microeconomics" }, { "paragraph_id": 101, "text": "Welfare economics uses microeconomic techniques to evaluate well-being from allocation of productive factors as to desirability and economic efficiency within an economy, often relative to competitive general equilibrium. It analyzes social welfare, however measured, in terms of economic activities of the individuals that compose the theoretical society considered. Accordingly, individuals, with associated economic activities, are the basic units for aggregating to social welfare, whether of a group, a community, or a society, and there is no \"social welfare\" apart from the \"welfare\" associated with its individual units.", "title": "Microeconomics" }, { "paragraph_id": 102, "text": "Macroeconomics, another branch of economics, examines the economy as a whole to explain broad aggregates and their interactions \"top down\", that is, using a simplified form of general-equilibrium theory. Such aggregates include national income and output, the unemployment rate, and price inflation and subaggregates like total consumption and investment spending and their components. It also studies effects of monetary policy and fiscal policy.", "title": "Macroeconomics" }, { "paragraph_id": 103, "text": "Since at least the 1960s, macroeconomics has been characterized by further integration as to micro-based modelling of sectors, including rationality of players, efficient use of market information, and imperfect competition. This has addressed a long-standing concern about inconsistent developments of the same subject.", "title": "Macroeconomics" }, { "paragraph_id": 104, "text": "Macroeconomic analysis also considers factors affecting the long-term level and growth of national income. Such factors include capital accumulation, technological change and labour force growth.", "title": "Macroeconomics" }, { "paragraph_id": 105, "text": "Growth economics studies factors that explain economic growth – the increase in output per capita of a country over a long period of time. The same factors are used to explain differences in the level of output per capita between countries, in particular why some countries grow faster than others, and whether countries converge at the same rates of growth.", "title": "Macroeconomics" }, { "paragraph_id": 106, "text": "Much-studied factors include the rate of investment, population growth, and technological change. These are represented in theoretical and empirical forms (as in the neoclassical and endogenous growth models) and in growth accounting.", "title": "Macroeconomics" }, { "paragraph_id": 107, "text": "The economics of a depression was the spur for the creation of \"macroeconomics\" as a separate discipline.
During the Great Depression of the 1930s, John Maynard Keynes authored a book entitled The General Theory of Employment, Interest and Money outlining the key theories of Keynesian economics. Keynes contended that aggregate demand for goods might be insufficient during economic downturns, leading to unnecessarily high unemployment and losses of potential output.", "title": "Macroeconomics" }, { "paragraph_id": 108, "text": "He therefore advocated active policy responses by the public sector, including monetary policy actions by the central bank and fiscal policy actions by the government to stabilize output over the business cycle. Thus, a central conclusion of Keynesian economics is that, in some situations, no strong automatic mechanism moves output and employment towards full employment levels. John Hicks' IS/LM model has been the most influential interpretation of The General Theory.", "title": "Macroeconomics" }, { "paragraph_id": 109, "text": "Over the years, understanding of the business cycle has branched into various research programmes, mostly related to or distinct from Keynesianism. The neoclassical synthesis refers to the reconciliation of Keynesian economics with classical economics, stating that Keynesianism is correct in the short run but qualified by classical-like considerations in the intermediate and long run.", "title": "Macroeconomics" }, { "paragraph_id": 110, "text": "New classical macroeconomics, as distinct from the Keynesian view of the business cycle, posits market clearing with imperfect information. It includes Friedman's permanent income hypothesis on consumption and \"rational expectations\" theory, led by Robert Lucas, and real business cycle theory.", "title": "Macroeconomics" }, { "paragraph_id": 111, "text": "In contrast, the new Keynesian approach retains the rational expectations assumption; however, it assumes a variety of market failures. In particular, New Keynesians assume prices and wages are \"sticky\", which means they do not adjust instantaneously to changes in economic conditions.", "title": "Macroeconomics" }, { "paragraph_id": 112, "text": "Thus, the new classicals assume that prices and wages adjust automatically to attain full employment, whereas the new Keynesians see full employment as being automatically achieved only in the long run, and hence government and central-bank policies are needed because the \"long run\" may be very long.", "title": "Macroeconomics" }, { "paragraph_id": 113, "text": "The amount of unemployment in an economy is measured by the unemployment rate, the percentage of workers without jobs in the labour force. The labour force only includes workers actively looking for jobs. People who are retired, pursuing education, or discouraged from seeking work by a lack of job prospects are excluded from the labour force. Unemployment can be generally broken down into several types that are related to different causes.", "title": "Macroeconomics" }, { "paragraph_id": 114, "text": "In classical models, unemployment occurs when wages are too high for employers to be willing to hire more workers. Consistent with classical unemployment, frictional unemployment occurs when appropriate job vacancies exist for a worker, but the length of time needed to search for and find the job leads to a period of unemployment.", "title": "Macroeconomics" }, { "paragraph_id": 115, "text": "Structural unemployment covers a variety of possible causes of unemployment including a mismatch between workers' skills and the skills required for open jobs.
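The unemployment-rate definition above, and the Okun's-law link between unemployment and output that appears below, can both be sketched numerically; all figures here are invented.

```python
# Unemployment-rate arithmetic with invented figures.

labour_force = 50_000_000   # employed plus those actively seeking work
employed = 47_000_000       # retirees, students, discouraged workers excluded

rate = 100 * (labour_force - employed) / labour_force
print(rate)                 # 6.0 (percent)

# Okun's law, original form: ~3% of extra output per 1-point fall in
# unemployment, so cutting the rate from 6% to 4% would take roughly:
points_to_cut = 6.0 - 4.0
print(points_to_cut * 3, "percent extra output, on Okun's original coefficient")
```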
Large amounts of structural unemployment can occur when an economy is transitioning industries and workers find their previous set of skills are no longer in demand. Structural unemployment is similar to frictional unemployment since both reflect the problem of matching workers with job vacancies, but structural unemployment covers the time needed to acquire new skills, not just the short-term search process.", "title": "Macroeconomics" }, { "paragraph_id": 116, "text": "While some types of unemployment may occur regardless of the condition of the economy, cyclical unemployment occurs when growth stagnates. Okun's law represents the empirical relationship between unemployment and economic growth. The original version of Okun's law states that a 3% increase in output would lead to a 1% decrease in unemployment.", "title": "Macroeconomics" }, { "paragraph_id": 117, "text": "Money is a means of final payment for goods in most price system economies, and is the unit of account in which prices are typically stated. Money has general acceptability, relative consistency in value, divisibility, durability, portability, elasticity in supply, and longevity with mass public confidence. It includes currency held by the nonbank public and checkable deposits. It has been described as a social convention, like language, useful to one largely because it is useful to others. In the words of Francis Amasa Walker, a well-known 19th-century economist, \"Money is what money does\" (\"Money is that money does\" in the original).", "title": "Macroeconomics" }, { "paragraph_id": 118, "text": "As a medium of exchange, money facilitates trade. It is essentially a measure of value and, more importantly, a store of value, being a basis for credit creation. Its economic function can be contrasted with barter (non-monetary exchange). Given a diverse array of produced goods and specialized producers, barter may entail a hard-to-locate double coincidence of wants as to what is exchanged, say apples and a book. Money can reduce the transaction cost of exchange because of its ready acceptability. Then it is less costly for the seller to accept money in exchange, rather than what the buyer produces.", "title": "Macroeconomics" }, { "paragraph_id": 119, "text": "Monetary policy is the policy that central banks conduct to accomplish their broader objectives. Most central banks in developed countries follow inflation targeting, whereas the main objective for many central banks in developing countries is to uphold a fixed exchange rate system. The primary monetary tool is normally the adjustment of interest rates, either directly via administratively changing the central bank's own interest rates or indirectly via open market operations. Via the monetary transmission mechanism, interest rate changes affect investment, consumption and net exports, and hence aggregate demand, output and employment, and ultimately the development of wages and inflation.", "title": "Macroeconomics" }, { "paragraph_id": 120, "text": "Governments implement fiscal policy to influence macroeconomic conditions by adjusting spending and taxation policies to alter aggregate demand. When aggregate demand falls below the potential output of the economy, there is an output gap where some productive capacity is left unemployed. Governments increase spending and cut taxes to boost aggregate demand.
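The arithmetic behind that prescription is the spending multiplier, developed in the next paragraphs: each round of government spending becomes someone's income, part of which is spent again. A sketch assuming a made-up marginal propensity to consume (MPC):

```python
# Spending-multiplier sketch. Households are assumed to re-spend a fixed
# fraction (MPC) of each extra unit of income; the rest leaks to saving.

mpc = 0.75          # assumed marginal propensity to consume
stimulus = 100.0    # initial government purchase

total, spend = 0.0, stimulus
for _ in range(200):        # successive, ever-smaller rounds of re-spending
    total += spend
    spend *= mpc

print(round(total, 1))      # 400.0 = stimulus / (1 - mpc)
# Crowding out or Ricardian saving (discussed below) would shrink this:
# with no induced re-spending (mpc = 0), demand rises only by the initial 100.
```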
Resources that have been idled can be used by the government.", "title": "Macroeconomics" }, { "paragraph_id": 121, "text": "For example, unemployed home builders can be hired to expand highways. Tax cuts allow consumers to increase their spending, which boosts aggregate demand. Both tax cuts and spending have multiplier effects where the initial increase in demand from the policy percolates through the economy and generates additional economic activity.", "title": "Macroeconomics" }, { "paragraph_id": 122, "text": "The effects of fiscal policy can be limited by crowding out. When there is no output gap, the economy is producing at full capacity and there are no excess productive resources. If the government increases spending in this situation, the government uses resources that otherwise would have been used by the private sector, so there is no increase in overall output. Some economists think that crowding out is always an issue while others do not think it is a major issue when output is depressed.", "title": "Macroeconomics" }, { "paragraph_id": 123, "text": "Sceptics of fiscal policy also make the argument of Ricardian equivalence. They argue that an increase in debt will have to be paid for with future tax increases, which will cause people to reduce their consumption and save money to pay for the future tax increase. Under Ricardian equivalence, any boost in demand from tax cuts will be offset by the increased saving intended to pay for future higher taxes.", "title": "Macroeconomics" }, { "paragraph_id": 124, "text": "Economic inequality includes income inequality, measured using the distribution of income (the amount of money people receive), and wealth inequality, measured using the distribution of wealth (the amount of wealth people own), as well as other measures such as consumption, land ownership, and human capital. Inequality exists to different extents between countries or states, groups of people, and individuals. There are many methods for measuring inequality, the Gini coefficient being widely used for income differences among individuals (see the sketch below). An example measure of inequality between countries is the Inequality-adjusted Human Development Index, a composite index that takes inequality into account. Important concepts of equality include equity, equality of outcome, and equality of opportunity.", "title": "Macroeconomics" }, { "paragraph_id": 125, "text": "Research has linked economic inequality to political and social instability, including revolution, democratic breakdown and civil conflict. Research suggests that greater inequality hinders economic growth and macroeconomic stability, and that land and human capital inequality reduce growth more than inequality of income. Inequality is at center stage of economic policy debate across the globe, as government tax and spending policies have significant effects on income distribution. In advanced economies, taxes and transfers decrease income inequality by one-third, with most of this being achieved via public social spending (such as pensions and family benefits).", "title": "Macroeconomics" }, { "paragraph_id": 126, "text": "Public economics is the field of economics that deals with economic activities of a public sector, usually government. The subject addresses such matters as tax incidence (who really pays a particular tax), cost-benefit analysis of government programmes, effects on economic efficiency and income distribution of different kinds of spending and taxes, and fiscal politics.
The latter, an aspect of public choice theory, models public-sector behaviour analogously to microeconomics, involving interactions of self-interested voters, politicians, and bureaucrats.", "title": "Other branches of economics" }, { "paragraph_id": 127, "text": "Much of economics is positive, seeking to describe and predict economic phenomena. Normative economics seeks to identify what economies ought to be like.", "title": "Other branches of economics" }, { "paragraph_id": 128, "text": "Welfare economics is a normative branch of economics that uses microeconomic techniques to simultaneously determine the allocative efficiency within an economy and the income distribution associated with it. It attempts to measure social welfare by examining the economic activities of the individuals that comprise society.", "title": "Other branches of economics" }, { "paragraph_id": 129, "text": "International trade studies determinants of goods-and-services flows across international boundaries. It also concerns the size and distribution of gains from trade. Policy applications include estimating the effects of changing tariff rates and trade quotas. International finance is a macroeconomic field which examines the flow of capital across international borders, and the effects of these movements on exchange rates. Increased trade in goods, services and capital between countries is a major effect of contemporary globalization.", "title": "Other branches of economics" }, { "paragraph_id": 130, "text": "Labor economics seeks to understand the functioning and dynamics of the markets for wage labor. Labor markets function through the interaction of workers and employers. Labor economics looks at the suppliers of labor services (workers), the demanders of labor services (employers), and attempts to understand the resulting pattern of wages, employment, and income. In economics, labor is a measure of the work done by human beings. It is conventionally contrasted with such other factors of production as land and capital. There are theories which have developed a concept called human capital (referring to the skills that workers possess, not necessarily their actual work), although there are also counterposing macroeconomic system theories that regard human capital as a contradiction in terms.", "title": "Other branches of economics" }, { "paragraph_id": 131, "text": "Development economics examines economic aspects of the development process in relatively low-income countries, focusing on structural change, poverty, and economic growth. Approaches in development economics frequently incorporate social and political factors.", "title": "Other branches of economics" }, { "paragraph_id": 132, "text": "Economics has been subject to criticism that it relies on unrealistic, unverifiable, or highly simplified assumptions, in some cases because these assumptions simplify the proofs of desired conclusions. For example, the economist Friedrich Hayek claimed that economics (at least historically) used a scientistic approach which he claimed was \"decidedly unscientific in the true sense of the word, since it involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed\". Latter-day examples of such assumptions include perfect information, profit maximization and rational choices, axioms of neoclassical economics. Such criticisms often conflate neoclassical economics with all of contemporary economics.
The field of information economics includes both mathematical-economic research and behavioural economics, akin to studies in behavioural psychology, and confounding factors to the neoclassical assumptions are the subject of substantial study in many areas of economics.", "title": "Criticism" }, { "paragraph_id": 133, "text": "Prominent historical mainstream economists such as Keynes and Joskow observed that much of the economics of their time was conceptual rather than quantitative, and difficult to model and formalize quantitatively. In a discussion on oligopoly research, Paul Joskow pointed out in 1975 that in practice, serious students of actual economies tended to use \"informal models\" based upon qualitative factors specific to particular industries. Joskow had a strong feeling that the important work in oligopoly was done through informal observations while formal models were \"trotted out ex post\". He argued that formal models were largely not important in the empirical work, either, and that the fundamental factor behind the theory of the firm, behaviour, was neglected. Deirdre McCloskey has argued that many empirical economic studies are poorly reported, and she and Stephen Ziliak argue that although her critique has been well-received, practice has not improved. The extent to which practice has improved since the early 2000s is contested: although economists have noted the discipline's adoption of increasingly rigorous modeling, others have criticized the field's focus on creating computer simulations detached from reality, as well as noting the loss of prestige suffered by the field for failing to anticipate the Great Recession.", "title": "Criticism" }, { "paragraph_id": 134, "text": "Economics has been derogatorily dubbed \"the dismal science\", a phrase coined by the Victorian historian Thomas Carlyle in the 19th century. It is often stated that Carlyle gave it this nickname as a response to the work of Thomas Robert Malthus, who predicted widespread starvation resulting from projections that population growth would exceed the rate of increase in the food supply. However, the actual phrase was coined by Carlyle in the context of a debate with John Stuart Mill on slavery, in which Carlyle argued for slavery; the \"dismal\" nature of economics in Carlyle's view was that it \"[found] the secret of this Universe in 'supply and demand', and reduc[ed] the duty of human governors to that of letting men alone\".", "title": "Criticism" }, { "paragraph_id": 135, "text": "Economics is one social science among several and has fields bordering on other areas, including economic geography, economic history, public choice, energy economics, cultural economics, family economics and institutional economics.", "title": "Related subjects" }, { "paragraph_id": 136, "text": "Law and economics, or economic analysis of law, is an approach to legal theory that applies methods of economics to law. It includes the use of economic concepts to explain the effects of legal rules, to assess which legal rules are economically efficient, and to predict what the legal rules will be.
A seminal article by Ronald Coase published in 1961 suggested that well-defined property rights could overcome the problems of externalities.", "title": "Related subjects" }, { "paragraph_id": 137, "text": "Political economy is the interdisciplinary study that combines economics, law, and political science in explaining how political institutions, the political environment, and the economic system (capitalist, socialist, mixed) influence each other. It studies questions such as how monopoly, rent-seeking behaviour, and externalities should impact government policy. Historians have employed political economy to explore the ways in the past that persons and groups with common economic interests have used politics to effect changes beneficial to their interests.", "title": "Related subjects" }, { "paragraph_id": 138, "text": "Energy economics is a broad scientific subject area which includes topics related to energy supply and energy demand. Georgescu-Roegen reintroduced the concept of entropy in relation to economics and energy from thermodynamics, as distinguished from what he viewed as the mechanistic foundation of neoclassical economics drawn from Newtonian physics. His work contributed significantly to thermoeconomics and to ecological economics. He also did foundational work which later developed into evolutionary economics.", "title": "Related subjects" }, { "paragraph_id": 139, "text": "The sociological subfield of economic sociology arose, primarily through the work of Émile Durkheim, Max Weber and Georg Simmel, as an approach to analysing the effects of economic phenomena in relation to the overarching social paradigm (i.e. modernity). Classic works include Max Weber's The Protestant Ethic and the Spirit of Capitalism (1905) and Georg Simmel's The Philosophy of Money (1900). More recently, the works of James S. Coleman, Mark Granovetter, Peter Hedstrom and Richard Swedberg have been influential in this field.", "title": "Related subjects" }, { "paragraph_id": 140, "text": "Gary Becker in 1974 presented an economic theory of social interactions, whose applications included the family, charity, merit goods and multiperson interactions, and envy and hatred. He and Kevin Murphy authored a book in 2001 that analyzed market behavior in a social environment.", "title": "Related subjects" }, { "paragraph_id": 141, "text": "The professionalization of economics, reflected in the growth of graduate programmes on the subject, has been described as \"the main change in economics since around 1900\". Most major universities and many colleges have a major, school, or department in which academic degrees are awarded in the subject, whether in the liberal arts, business, or for professional study. See Bachelor of Economics and Master of Economics.", "title": "Profession" }, { "paragraph_id": 142, "text": "In the private sector, professional economists are employed as consultants and in industry, including banking and finance. Economists also work for various government departments and agencies, for example, the national treasury, central bank or National Bureau of Statistics. See Economic analyst.", "title": "Profession" }, { "paragraph_id": 143, "text": "There are dozens of prizes awarded to economists each year for outstanding intellectual contributions to the field, the most prominent of which is the Nobel Memorial Prize in Economic Sciences, though it is not a Nobel Prize.", "title": "Profession" }, { "paragraph_id": 144, "text": "Contemporary economics uses mathematics. 
Economists draw on the tools of calculus, linear algebra, statistics, game theory, and computer science. Professional economists are expected to be familiar with these tools, while a minority specialize in econometrics and mathematical methods.", "title": "Profession" }, { "paragraph_id": 145, "text": "Harriet Martineau (1802–1876) was a widely read populariser of classical economic thought. Mary Paley Marshall (1850–1944), the first woman lecturer at a British economics faculty, wrote The Economics of Industry with her husband Alfred Marshall. Joan Robinson (1903–1983) was an important post-Keynesian economist. The economic historian Anna Schwartz (1915–2012) coauthored A Monetary History of the United States, 1867–1960 with Milton Friedman. Three women have received the Nobel Prize in Economics: Elinor Ostrom (2009), Esther Duflo (2019) and Claudia Goldin (2023). Five have received the John Bates Clark Medal: Susan Athey (2007), Esther Duflo (2010), Amy Finkelstein (2012), Emi Nakamura (2019) and Melissa Dell (2020).", "title": "Profession" }, { "paragraph_id": 146, "text": "Women's authorship share in prominent economic journals declined from the 1940s to the 1970s, but has subsequently risen, with different patterns of gendered coauthorship. Women remain globally under-represented in the profession (19% of authors in the RePEc database in 2018), with national variation.", "title": "Profession" }, { "paragraph_id": 147, "text": "", "title": "External links" } ]
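Three of the quantitative notions in the paragraph list above lend themselves to a compact illustration: the unemployment rate (paragraph 113), the original Okun's law ratio (paragraph 116), and the Gini coefficient (paragraph 124). The following Python sketch is illustrative only; the counts are hypothetical, and the Okun coefficient is the stylised 3:1 ratio stated in the text rather than a fitted estimate.

# Illustrative sketches of three quantities discussed above. All numbers
# are hypothetical; none of this is fitted to real data.

def unemployment_rate(employed, unemployed_seeking):
    # The labour force excludes retirees, students, and discouraged workers.
    labour_force = employed + unemployed_seeking
    return 100 * unemployed_seeking / labour_force

def okun_unemployment_change(output_growth_pct):
    # Original Okun's law as stated above: 3% more output ~ 1% less unemployment.
    return -output_growth_pct / 3.0

def gini(incomes):
    # Gini coefficient via the mean-absolute-difference formula.
    n = len(incomes)
    mean = sum(incomes) / n
    return sum(abs(x - y) for x in incomes for y in incomes) / (2 * n * n * mean)

print(unemployment_rate(95_000, 5_000))   # 5.0 (percent)
print(okun_unemployment_change(3.0))      # -1.0 (percentage points)
print(gini([10, 10, 10, 10]))             # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))               # 0.75 -> highly unequal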
Economics is a social science that studies the production, distribution, and consumption of goods and services. Economics focuses on the behaviour and interactions of economic agents and how economies work. Microeconomics analyzes what are viewed as basic elements in the economy, including individual agents and markets, their interactions, and the outcomes of those interactions. Individual agents may include, for example, households, firms, buyers, and sellers. Macroeconomics analyzes the economy as a system where production, consumption, saving, and investment interact, and factors affecting it: employment of the resources of labour, capital, and land, currency inflation, economic growth, and public policies that have an impact on these elements. Other broad distinctions within economics include those between positive economics, describing "what is", and normative economics, advocating "what ought to be"; between economic theory and applied economics; between rational and behavioural economics; and between mainstream economics and heterodox economics. Economic analysis can be applied throughout society, including business, finance, cybersecurity, health care, engineering and government. It is also applied to such diverse subjects as crime, education, the family, feminism, law, philosophy, politics, religion, social institutions, war, science and the environment.
2001-10-07T15:50:29Z
2023-12-31T02:11:19Z
[ "Template:Harvp", "Template:Refend", "Template:Sfnp", "Template:Efn", "Template:Cite report", "Template:Pp-semi-indef", "Template:Use British English", "Template:IPAc-en", "Template:Unbulleted list citebundle", "Template:Economics", "Template:Notelist", "Template:Cite book", "Template:Curlie", "Template:Use Oxford spelling", "Template:Main", "Template:See also", "Template:Section link", "Template:Cite journal", "Template:Cite web", "Template:Cite encyclopedia", "Template:Cite news", "Template:Economics sidebar", "Template:Lang", "Template:Non-primary source needed", "Template:Use dmy dates", "Template:Cite conference", "Template:Social sciences", "Template:Authority control", "Template:Sfn", "Template:Portal", "Template:Div col end", "Template:Citation", "Template:Library resources box", "Template:Short description", "Template:Blockquote", "Template:Missing information", "Template:Librivox book", "Template:Sister project links", "Template:Refbegin", "Template:Anchor", "Template:Citation needed", "Template:Cite OED", "Template:Div col", "Template:Reflist", "Template:Cite dictionary", "Template:Webarchive", "Template:Other uses", "Template:Pp-move-indef", "Template:Redirect" ]
https://en.wikipedia.org/wiki/Economics
9,225
Electronic paper
Electronic paper, also known as electronic ink (e-ink) or intelligent paper, is a display device that mimics the appearance of ordinary ink on paper. Unlike conventional flat panel displays that emit light, an electronic paper display reflects ambient light, like paper. This may make such displays more comfortable to read, and provide a wider viewing angle than most light-emitting displays. The contrast ratio in electronic displays available as of 2008 approaches that of newspaper, and newly developed displays are slightly better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade. Technologies include Gyricon, electrophoretics, electrowetting, interferometry, and plasmonics. Many electronic paper technologies hold static text and images indefinitely without electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. Applications of electronic visual displays include electronic shelf labels and digital signage, bus station time tables, electronic billboards, smartphone displays, and e-readers able to display digital versions of books and magazines. Electronic paper was first developed in the 1970s by Nick Sheridon at Xerox's Palo Alto Research Center. The first electronic paper, called Gyricon, consisted of polyethylene spheres between 75 and 106 micrometers across. Each sphere is a Janus particle composed of negatively charged black plastic on one side and positively charged white plastic on the other (each bead is thus a dipole). The spheres are embedded in a transparent silicone sheet, with each sphere suspended in a bubble of oil so that it can rotate freely. The polarity of the voltage applied to each pair of electrodes then determines whether the white or black side is face-up, thus giving the pixel a white or black appearance. At the FPD 2008 exhibition, Japanese company Soken demonstrated a wall with electronic wall-paper using this technology. In 2007, the Estonian company Visitret Displays was developing this kind of display using polyvinylidene fluoride (PVDF) as the material for the spheres, dramatically improving the video speed and decreasing the control voltage needed. An electrophoretic display (EPD) forms images by rearranging charged pigment particles with an applied electric field. In the simplest implementation of an EPD, titanium dioxide (titania) particles approximately one micrometer in diameter are dispersed in a hydrocarbon oil. A dark-colored dye is also added to the oil, along with surfactants and charging agents that cause the particles to take on an electric charge. This mixture is placed between two parallel, conductive plates separated by a gap of 10 to 100 micrometres. When a voltage is applied across the two plates, the particles migrate electrophoretically to the plate that bears the opposite charge from that on the particles. When the particles are located at the front (viewing) side of the display, it appears white, because the light is scattered back to the viewer by the high-index titania particles. When the particles are located at the rear side of the display, it appears dark, because the light is absorbed by the colored dye. If the rear electrode is divided into a number of small picture elements (pixels), then an image can be formed by applying the appropriate voltage to each region of the display to create a pattern of reflecting and absorbing regions. EPDs are typically addressed using MOSFET-based thin-film transistor (TFT) technology.
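The voltage-to-appearance behaviour of the electrophoretic pixel just described can be caricatured in a few lines of Python. This is a toy sketch, not any vendor's driving scheme: the polarity convention is an assumption, and real controllers use calibrated multi-phase waveforms rather than a simple sign test.

# Toy model of an electrophoretic pixel: the sign of the drive voltage
# decides whether the white titania particles sit at the viewing side
# (pixel appears white) or behind the dyed oil (pixel appears dark).
def epd_pixel(voltage_volts):
    # Assumed convention: positive drive pulls the particles to the front.
    return "white" if voltage_volts > 0 else "dark"

drive_frame = [[+15, -15, +15],
               [-15, +15, -15]]   # hypothetical per-pixel drive voltages
print([[epd_pixel(v) for v in row] for row in drive_frame])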
TFTs are often used to form a high-density image in an EPD. A common application for TFT-based EPDs is e-readers. Electrophoretic displays are considered prime examples of the electronic paper category, because of their paper-like appearance and low power consumption. Examples of commercial electrophoretic displays include the high-resolution active matrix displays used in the Amazon Kindle, Barnes & Noble Nook, Sony Reader, Kobo eReader, and iRex iLiad e-readers. These displays are constructed from an electrophoretic imaging film manufactured by E Ink Corporation. A mobile phone that used the technology is the Motorola Fone. Electrophoretic display technology has also been developed by SiPix and Bridgestone/Delta. SiPix is now part of E Ink Corporation. The SiPix design uses a flexible 0.15 mm Microcup architecture, instead of E Ink's 0.04 mm diameter microcapsules. Bridgestone Corp.'s Advanced Materials Division cooperated with Delta Optoelectronics Inc. in developing Quick Response Liquid Powder Display technology. Electrophoretic displays can be manufactured using the Electronics on Plastic by Laser Release (EPLaR) process, developed by Philips Research, to enable existing AM-LCD manufacturing plants to create flexible plastic displays. In the 1990s another type of electronic ink based on a microencapsulated electrophoretic display was conceived and prototyped by a team of undergraduates at MIT as described in their Nature paper. J.D. Albert, Barrett Comiskey, Joseph Jacobson, Jeremy Rubin and Russ Wilcox co-founded E Ink Corporation in 1997 to commercialize the technology. E Ink subsequently formed a partnership with Philips Components two years later to develop and market the technology. In 2005, Philips sold the electronic paper business as well as its related patents to Prime View International. "It has for many years been an ambition of researchers in display media to create a flexible low-cost system that is the electronic analog of paper. In this context, microparticle-based displays have long intrigued researchers. Switchable contrast in such displays is achieved by the electromigration of highly scattering or absorbing microparticles (in the size range 0.1–5 μm), quite distinct from the molecular-scale properties that govern the behavior of the more familiar liquid-crystal displays. Micro-particle-based displays possess intrinsic bistability, exhibit extremely low power d.c. field addressing and have demonstrated high contrast and reflectivity. These features, combined with a near-lambertian viewing characteristic, result in an 'ink on paper' look. But such displays have to date suffered from short lifetimes and difficulty in manufacture. Here we report the synthesis of an electrophoretic ink based on the microencapsulation of an electrophoretic dispersion. The use of a microencapsulated electrophoretic medium solves the lifetime issues and permits the fabrication of a bistable electronic display solely by means of printing. This system may satisfy the practical requirements of electronic paper." This used tiny microcapsules filled with electrically charged white particles suspended in a colored oil. In early versions, the underlying circuitry controlled whether the white particles were at the top of the capsule (so it looked white to the viewer) or at the bottom of the capsule (so the viewer saw the color of the oil).
This was essentially a reintroduction of the well-known electrophoretic display technology, but microcapsules meant the display could be made on flexible plastic sheets instead of glass. One early version of the electronic paper consists of a sheet of very small transparent capsules, each about 40 micrometers across. Each capsule contains an oily solution containing black dye (the electronic ink), with numerous white titanium dioxide particles suspended within. The particles are slightly negatively charged, and each one is naturally white. The screen holds microcapsules in a layer of liquid polymer, sandwiched between two arrays of electrodes, the upper of which is transparent. The two arrays are aligned to divide the sheet into pixels, and each pixel corresponds to a pair of electrodes situated on either side of the sheet. The sheet is laminated with transparent plastic for protection, resulting in an overall thickness of 80 micrometers, or twice that of ordinary paper. The network of electrodes connects to display circuitry, which turns the electronic ink 'on' and 'off' at specific pixels by applying a voltage to specific electrode pairs. A negative charge to the surface electrode repels the particles to the bottom of local capsules, forcing the black dye to the surface and turning the pixel black. Reversing the voltage has the opposite effect. It forces the particles to the surface, turning the pixel white. A more recent implementation of this concept requires only one layer of electrodes beneath the microcapsules. These are commercially referred to as Active Matrix Electrophoretic Displays (AMEPD). Electrowetting display (EWD) is based on controlling the shape of a confined water/oil interface by an applied voltage. With no voltage applied, the (colored) oil forms a flat film between the water and a hydrophobic (water-repellent) insulating coating of an electrode, resulting in a colored pixel. When a voltage is applied between the electrode and the water, the interfacial tension between the water and the coating changes. As a result, the stacked state is no longer stable, causing the water to move the oil aside. This makes a partly transparent pixel, or, if a reflective white surface is under the switchable element, a white pixel. Because of the small pixel size, the user only experiences the average reflection, which provides a high-brightness, high-contrast switchable element. Displays based on electrowetting provide several attractive features. The switching between white and colored reflection is fast enough to display video content. It is a low-power, low-voltage technology, and displays based on the effect can be made flat and thin. The reflectivity and contrast are better than or equal to other reflective display types and approach the visual qualities of paper. In addition, the technology offers a unique path toward high-brightness full-color displays, leading to displays that are four times brighter than reflective LCDs and twice as bright as other emerging technologies. Instead of using red, green, and blue (RGB) filters or alternating segments of the three primary colors, which effectively result in only one-third of the display reflecting light in the desired color, electrowetting allows for a system in which one sub-pixel can switch two different colors independently. This results in the availability of two-thirds of the display area to reflect light in any desired color. 
This is achieved by building up a pixel with a stack of two independently controllable colored oil films plus a color filter. The colors are cyan, magenta, and yellow, which is a subtractive system, comparable to the principle used in inkjet printing. Compared to LCD, brightness is gained because no polarisers are required. Electrofluidic display is a variation of an electrowetting display that places an aqueous pigment dispersion inside a tiny reservoir. The reservoir comprises less than 5-10% of the viewable pixel area and therefore the pigment is substantially hidden from view. Voltage is used to electromechanically pull the pigment out of the reservoir and spread it as a film directly behind the viewing substrate. As a result, the display takes on color and brightness similar to that of conventional pigments printed on paper. When voltage is removed, liquid surface tension causes the pigment dispersion to rapidly recoil into the reservoir. The technology can potentially provide greater than 85% white state reflectance for electronic paper. The core technology was invented at the Novel Devices Laboratory at the University of Cincinnati and there are working prototypes developed by collaboration with Sun Chemical, Polymer Vision and Gamma Dynamics. It has a wide margin in critical aspects such as brightness, color saturation and response time. Because the optically active layer can be less than 15 micrometres thick, there is strong potential for rollable displays. Interferometric modulator technology is used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid-crystal displays (LCD). Plasmonic nanostructures with conductive polymers have also been suggested as one kind of electronic paper. The material has two parts. The first part is a highly reflective metasurface made of metal-insulator-metal films, tens of nanometers in thickness, that include nanoscale holes. The metasurfaces can reflect different colors depending on the thickness of the insulator. The standard RGB color schema can be used as pixels for full-color displays. The second part is a polymer with optical absorption controllable by an electrochemical potential. After growing the polymer on the plasmonic metasurfaces, the reflection of the metasurfaces can be modulated by the applied voltage. This technology offers a broad range of colors, high polarization-independent reflection (>50%), strong contrast (>30%), fast response times (hundreds of ms), and long-term stability. In addition, it has ultralow power consumption (<0.5 mW/cm2) and potential for high resolution (>10000 dpi). Since the ultrathin metasurfaces are flexible and the polymer is soft, the whole system can be bent. Desired future improvements for this technology include bistability, cheaper materials and implementation with TFT arrays. Other research efforts into e-paper have involved using organic transistors embedded into flexible substrates, including attempts to build them into conventional paper. Simple color e-paper consists of a thin colored optical filter added to the monochrome technology described above. The array of pixels is divided into triads, typically consisting of the standard cyan, magenta and yellow, in the same way as CRT monitors (although using subtractive primary colors as opposed to additive primary colors).
The display is then controlled like any other electronic color display. E Ink Corporation of E Ink Holdings Inc. released the first colored E Ink displays to be used in a marketed product. The Ectaco jetBook Color was released in 2012 as the first colored electronic ink device, which used E Ink's Triton display technology. E Ink in early 2015 also announced another color electronic ink technology called Prism. This new technology is a color changing film that can be used for e-readers, but Prism is also marketed as a film that can be integrated into architectural design such as "wall, ceiling panel, or entire room instantly." The disadvantage of these current color displays is that they are considerably more expensive than standard E Ink displays. The jetBook Color costs roughly nine times more than other popular e-readers such as the Amazon Kindle. As of January 2015, Prism had not been announced to be used in the plans for any e-reader devices. Several companies are simultaneously developing electronic paper and ink. While the technologies used by each company provide many of the same features, each has its own distinct technological advantages. All electronic paper technologies face the following general challenges: Electronic ink can be applied to flexible or rigid materials. For flexible displays, the base requires a thin, flexible material tough enough to withstand considerable wear, such as extremely thin plastic. The method of how the inks are encapsulated and then applied to the substrate is what distinguishes each company from others. These processes are complex and are carefully guarded industry secrets. Nevertheless, making electronic paper is less complex and costly than LCDs. There are many approaches to electronic paper, with many companies developing technology in this area. Other technologies being applied to electronic paper include modifications of liquid-crystal displays, electrochromic displays, and the electronic equivalent of an Etch A Sketch at Kyushu University. Advantages of electronic paper include low power usage (power is only drawn when the display is updated), flexibility and better readability than most displays. Electronic ink can be printed on any surface, including walls, billboards, product labels and T-shirts. The ink's flexibility would also make it possible to develop rollable displays for electronic devices. In December 2005, Seiko released the first electronic ink based watch called the Spectrum SVRD001 wristwatch, which has a flexible electrophoretic display and in March 2010 Seiko released a second generation of this famous electronic ink watch with an active matrix display. The Pebble smart watch (2013) uses a low-power memory LCD manufactured by Sharp for its e-paper display. In 2019, Fossil launched a hybrid smartwatch called the Hybrid HR, integrating an always on electronic ink display with physical hands and dial to simulate the look of a traditional analog watch. In 2004, Sony released the Librié in Japan, the first e-book reader with an electronic paper E Ink display. In September 2006, Sony released the PRS-500 Sony Reader e-book reader in the USA. On October 2, 2007, Sony announced the PRS-505, an updated version of the Reader. In November 2008, Sony released the PRS-700BC, which incorporated a backlight and a touchscreen. In late 2007, Amazon began producing and marketing the Amazon Kindle, an e-book reader with an e-paper display. In February 2009, Amazon released the Kindle 2 and in May 2009 the larger Kindle DX was announced. 
In July 2010 the third-generation Kindle was announced, with notable design changes. The fourth generation of Kindle, called Touch, was announced in September 2011; it was the Kindle's first departure from keyboards and page-turn buttons in favor of touchscreens. In September 2012, Amazon announced the fifth generation of the Kindle, called the Paperwhite, which incorporates an LED frontlight and a higher-contrast display. In November 2009, Barnes and Noble launched the Barnes & Noble Nook, running an Android operating system. It differs from other e-readers in having a replaceable battery, and a separate touch-screen color LCD below the main electronic paper reading screen. In 2017, Sony and reMarkable offered e-book devices tailored for writing with a smart stylus. In 2020, Onyx released the first frontlit 13.3 inch electronic paper Android tablet, the Boox Max Lumi. At the end of the same year, Bigme released the first 10.3 inch color electronic paper Android tablet, the Bigme B1 Pro. This was also the first large electronic paper tablet to support 4G cellular data. In February 2006, the Flemish daily De Tijd distributed an electronic version of the paper to select subscribers in a limited marketing study, using a pre-release version of the iRex iLiad. This was the first recorded application of electronic ink to newspaper publishing. The French daily Les Échos announced the official launch of an electronic version of the paper on a subscription basis in September 2007. Two offers were available, combining a one-year subscription and a reading device. The offer included either a light (176g) reading device (adapted for Les Echos by Ganaxa) or the iRex iLiad. Two different processing platforms were used to deliver readable information of the daily, one based on the newly developed GPP electronic ink platform from Ganaxa, and the other one developed internally by Les Echos. Flexible display cards enable financial payment cardholders to generate a one-time password to reduce online banking and transaction fraud. Electronic paper offers a flat and thin alternative to existing key fob tokens for data security. The world's first ISO-compliant smart card with an embedded display was developed by Innovative Card Technologies and nCryptone in 2005. The cards were manufactured by Nagra ID. Some devices, like USB flash drives, have used electronic paper to display status information, such as available storage space. Once the image on the electronic paper has been set, it requires no power to maintain, so the readout can be seen even when the flash drive is not plugged in. Motorola's low-cost mobile phone, the Motorola F3, uses an alphanumeric black-and-white electrophoretic display. The Samsung Alias 2 mobile phone incorporates electronic ink from E Ink into the keypad, which allows the keypad to change character sets and orientation while in different display modes. On December 12, 2012, Yota Devices announced the first "YotaPhone" prototype, a unique double-display smartphone, which was later released in December 2013. It has a 4.3-inch HD LCD on the front and an electronic ink display on the back. In May and June 2020, Hisense released the Hisense A5C and A5 Pro CC, the first color electronic ink smartphones, each with a single color display and a toggleable front light, running Android 9 and Android 10. E-paper based electronic shelf labels (ESL) are used to digitally display the prices of goods at retail stores.
Electronic-paper-based labels are updated via two-way infrared or radio technology and powered by a rechargeable coin cell. Some variants use ZBD (zenithal bistable display), which is more similar to LCD but does not need power to retain an image. E-paper displays at bus or tram stops can be remotely updated. Compared to LED or liquid-crystal displays (LCDs), they consume less energy, and the text or graphics stays visible during a power failure. Compared to LCDs, they also remain clearly visible in full sunlight. Because of its energy-saving properties, electronic paper has proved a technology suited to digital signage applications. Electronic paper is used on computer monitors like the 13.3 inch Dasung Paperlike 3 HD and 25.3 inch Paperlike 253. Some laptops, like the Lenovo ThinkBook Plus, use e-paper as a secondary screen. Typically, e-paper electronic tags integrate e-ink technology with wireless interfaces like NFC or UHF. They are most commonly used as employees' ID cards or as production labels to track manufacturing changes and status. E-paper tags are also increasingly being used as shipping labels, especially in the case of reusable boxes. An interesting feature provided by some e-paper tag manufacturers is a batteryless design. This means that the power needed for a display's content update is provided wirelessly and the module itself does not contain any battery. Other proposed applications include clothes, digital photo frames, information boards, and keyboards. Keyboards with dynamically changeable keys are useful for less represented languages, non-standard keyboard layouts such as Dvorak, or for special non-alphabetical applications such as video editing or games. The reMarkable is a writer tablet for reading and taking notes.
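As a back-of-envelope check on the electrowetting area argument made earlier in this article (one third of a pixel reflecting the desired colour under RGB filtering, versus two thirds with two independently switchable oil films per sub-pixel), the claimed fractions can be written out; this simply restates the text's own arithmetic:

A_{\mathrm{RGB}} = \tfrac{1}{3}\,A_{\mathrm{pixel}}, \qquad A_{\mathrm{EW}} = \tfrac{2}{3}\,A_{\mathrm{pixel}} = 2\,A_{\mathrm{RGB}}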
[ { "paragraph_id": 0, "text": "Electronic paper, also known as electronic ink (e-ink) or intelligent paper, is a display device that mimics the appearance of ordinary ink on paper. Unlike conventional flat panel displays that emit light, an electronic paper display reflects ambient light, like paper. This may make them more comfortable to read, and provide a wider viewing angle than most light-emitting displays. The contrast ratio in electronic displays available as of 2008 approaches newspaper, and newly developed displays are slightly better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade.", "title": "" }, { "paragraph_id": 1, "text": "Technologies include Gyricon, electrophoretics, electrowetting, interferometry, and plasmonics. Many electronic paper technologies hold static text and images indefinitely without electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. Applications of electronic visual displays include electronic shelf labels and digital signage, bus station time tables, electronic billboards, smartphone displays, and e-readers able to display digital versions of books and magazines.", "title": "" }, { "paragraph_id": 2, "text": "Electronic paper was first developed in the 1970s by Nick Sheridon at Xerox's Palo Alto Research Center. The first electronic paper, called Gyricon, consisted of polyethylene spheres between 75 and 106 micrometers across. Each sphere is a Janus particle composed of negatively charged black plastic on one side and positively charged white plastic on the other (each bead is thus a dipole). The spheres are embedded in a transparent silicone sheet, with each sphere suspended in a bubble of oil so that it can rotate freely. The polarity of the voltage applied to each pair of electrodes then determines whether the white or black side is face-up, thus giving the pixel a white or black appearance. At the FPD 2008 exhibition, Japanese company Soken demonstrated a wall with electronic wall-paper using this technology. In 2007, the Estonian company Visitret Displays was developing this kind of display using polyvinylidene fluoride (PVDF) as the material for the spheres, dramatically improving the video speed and decreasing the control voltage needed.", "title": "Technologies" }, { "paragraph_id": 3, "text": "An electrophoretic display (EPD) forms images by rearranging charged pigment particles with an applied electric field. In the simplest implementation of an EPD, titanium dioxide (titania) particles approximately one micrometer in diameter are dispersed in a hydrocarbon oil. A dark-colored dye is also added to the oil, along with surfactants and charging agents that cause the particles to take on an electric charge. This mixture is placed between two parallel, conductive plates separated by a gap of 10 to 100 micrometres. When a voltage is applied across the two plates, the particles migrate electrophoretically to the plate that bears the opposite charge from that on the particles. When the particles are located at the front (viewing) side of the display, it appears white, because the light is scattered back to the viewer by the high-index titania particles. When the particles are located at the rear side of the display, it appears dark, because the light is absorbed by the colored dye. 
If the rear electrode is divided into a number of small picture elements (pixels), then an image can be formed by applying the appropriate voltage to each region of the display to create a pattern of reflecting and absorbing regions.", "title": "Technologies" }, { "paragraph_id": 4, "text": "EPDs are typically addressed using MOSFET-based thin-film transistor (TFT) technology. TFTs are often used to form a high-density image in an EPD. A common application for TFT-based EPDs are e-readers. Electrophoretic displays are considered prime examples of the electronic paper category, because of their paper-like appearance and low power consumption. Examples of commercial electrophoretic displays include the high-resolution active matrix displays used in the Amazon Kindle, Barnes & Noble Nook, Sony Reader, Kobo eReader, and iRex iLiad e-readers. These displays are constructed from an electrophoretic imaging film manufactured by E Ink Corporation. A mobile phone that used the technology is the Motorola Fone.", "title": "Technologies" }, { "paragraph_id": 5, "text": "Electrophoretic Display technology has also been developed by SiPix and Bridgestone/Delta. SiPix is now part of E Ink Corporation. The SiPix design uses a flexible 0.15 mm Microcup architecture, instead of E Ink's 0.04 mm diameter microcapsules. Bridgestone Corp.'s Advanced Materials Division cooperated with Delta Optoelectronics Inc. in developing Quick Response Liquid Powder Display technology.", "title": "Technologies" }, { "paragraph_id": 6, "text": "Electrophoretic displays can be manufactured using the Electronics on Plastic by Laser Release (EPLaR) process, developed by Philips Research, to enable existing AM-LCD manufacturing plants to create flexible plastic displays.", "title": "Technologies" }, { "paragraph_id": 7, "text": "In the 1990s another type of electronic ink based on a microencapsulated electrophoretic display was conceived and prototyped by a team of undergraduates at MIT as described in their Nature paper. J.D. Albert, Barrett Comiskey, Joseph Jacobson, Jeremy Rubin and Russ Wilcox co-founded E Ink Corporation in 1997 to commercialize the technology. E Ink subsequently formed a partnership with Philips Components two years later to develop and market the technology. In 2005, Philips sold the electronic paper business as well as its related patents to Prime View International.", "title": "Technologies" }, { "paragraph_id": 8, "text": "\"It has for many years been an ambition of researchers in display media to create a flexible low-cost system that is the electronic analog of paper. In this context, microparticle-based displays have long intrigued researchers. Switchable contrast in such displays is achieved by the electromigration of highly scattering or absorbing microparticles (in the size range 0.1–5 μm), quite distinct from the molecular-scale properties that govern the behavior of the more familiar liquid-crystal displays. Micro-particle-based displays possess intrinsic bistability, exhibit extremely low power d.c. field addressing and have demonstrated high contrast and reflectivity. These features, combined with a near-lambertian viewing characteristic, result in an 'ink on paper' look. But such displays have to date suffered from short lifetimes and difficulty in manufacture. Here we report the synthesis of an electrophoretic ink based on the microencapsulation of an electrophoretic dispersion. 
The use of a microencapsulated electrophoretic medium solves the lifetime issues and permits the fabrication of a bistable electronic display solely by means of printing. This system may satisfy the practical requirements of electronic paper.\"", "title": "Technologies" }, { "paragraph_id": 9, "text": "This used tiny microcapsules filled with electrically charged white particles suspended in a colored oil. In early versions, the underlying circuitry controlled whether the white particles were at the top of the capsule (so it looked white to the viewer) or at the bottom of the capsule (so the viewer saw the color of the oil). This was essentially a reintroduction of the well-known electrophoretic display technology, but microcapsules meant the display could be made on flexible plastic sheets instead of glass. One early version of the electronic paper consists of a sheet of very small transparent capsules, each about 40 micrometers across. Each capsule contains an oily solution containing black dye (the electronic ink), with numerous white titanium dioxide particles suspended within. The particles are slightly negatively charged, and each one is naturally white. The screen holds microcapsules in a layer of liquid polymer, sandwiched between two arrays of electrodes, the upper of which is transparent. The two arrays are aligned to divide the sheet into pixels, and each pixel corresponds to a pair of electrodes situated on either side of the sheet. The sheet is laminated with transparent plastic for protection, resulting in an overall thickness of 80 micrometers, or twice that of ordinary paper. The network of electrodes connects to display circuitry, which turns the electronic ink 'on' and 'off' at specific pixels by applying a voltage to specific electrode pairs. A negative charge to the surface electrode repels the particles to the bottom of local capsules, forcing the black dye to the surface and turning the pixel black. Reversing the voltage has the opposite effect. It forces the particles to the surface, turning the pixel white. A more recent implementation of this concept requires only one layer of electrodes beneath the microcapsules. These are commercially referred to as Active Matrix Electrophoretic Displays (AMEPD).", "title": "Technologies" }, { "paragraph_id": 10, "text": "Electrowetting display (EWD) is based on controlling the shape of a confined water/oil interface by an applied voltage. With no voltage applied, the (colored) oil forms a flat film between the water and a hydrophobic (water-repellent) insulating coating of an electrode, resulting in a colored pixel. When a voltage is applied between the electrode and the water, the interfacial tension between the water and the coating changes. As a result, the stacked state is no longer stable, causing the water to move the oil aside. This makes a partly transparent pixel, or, if a reflective white surface is under the switchable element, a white pixel. Because of the small pixel size, the user only experiences the average reflection, which provides a high-brightness, high-contrast switchable element.", "title": "Technologies" }, { "paragraph_id": 11, "text": "Displays based on electrowetting provide several attractive features. The switching between white and colored reflection is fast enough to display video content. It is a low-power, low-voltage technology, and displays based on the effect can be made flat and thin. 
The reflectivity and contrast are better than or equal to other reflective display types and approach the visual qualities of paper. In addition, the technology offers a unique path toward high-brightness full-color displays, leading to displays that are four times brighter than reflective LCDs and twice as bright as other emerging technologies. Instead of using red, green, and blue (RGB) filters or alternating segments of the three primary colors, which effectively result in only one-third of the display reflecting light in the desired color, electrowetting allows for a system in which one sub-pixel can switch two different colors independently.", "title": "Technologies" }, { "paragraph_id": 12, "text": "This results in the availability of two-thirds of the display area to reflect light in any desired color. This is achieved by building up a pixel with a stack of two independently controllable colored oil films plus a color filter.", "title": "Technologies" }, { "paragraph_id": 13, "text": "The colors are cyan, magenta, and yellow, which is a subtractive system, comparable to the principle used in inkjet printing. Compared to LCD, brightness is gained because no polarisers are required.", "title": "Technologies" }, { "paragraph_id": 14, "text": "Electrofluidic display is a variation of an electrowetting display that place an aqueous pigment dispersion inside a tiny reservoir. The reservoir comprises less than 5-10% of the viewable pixel area and therefore the pigment is substantially hidden from view. Voltage is used to electromechanically pull the pigment out of the reservoir and spread it as a film directly behind the viewing substrate. As a result, the display takes on color and brightness similar to that of conventional pigments printed on paper. When voltage is removed liquid surface tension causes the pigment dispersion to rapidly recoil into the reservoir. The technology can potentially provide greater than 85% white state reflectance for electronic paper.", "title": "Technologies" }, { "paragraph_id": 15, "text": "The core technology was invented at the Novel Devices Laboratory at the University of Cincinnati and there are working prototypes developed by collaboration with Sun Chemical, Polymer Vision and Gamma Dynamics.", "title": "Technologies" }, { "paragraph_id": 16, "text": "It has a wide margin in critical aspects such as brightness, color saturation and response time. Because the optically active layer can be less than 15 micrometres thick, there is strong potential for rollable displays.", "title": "Technologies" }, { "paragraph_id": 17, "text": "The technology used in electronic visual displays that can create various colors via interference of reflected light. The color is selected with an electrically switched light modulator comprising a microscopic cavity that is switched on and off using driver integrated circuits similar to those used to address liquid-crystal displays (LCD).", "title": "Technologies" }, { "paragraph_id": 18, "text": "Plasmonic nanostructures with conductive polymers have also been suggested as one kind of electronic paper. The material has two parts. The first part is a highly reflective metasurface made by metal-insulator-metal films tens of nanometers in thickness including nanoscale holes. The metasurfaces can reflect different colors depending on the thickness of the insulator. The standard RGB color schema can be used as pixels for full-color displays. 
The second part is a polymer with optical absorption controllable by an electrochemical potential. After growing the polymer on the plasmonic metasurfaces, the reflection of the metasurfaces can be modulated by the applied voltage. This technology presents broad range colors, high polarization-independent reflection (>50 %), strong contrast (>30 %), the fast response time (hundreds of ms), and long-term stability. In addition, it has ultralow power consumption (< 0.5 mW/cm2) and potential for high resolution (>10000 dpi). Since the ultrathin metasurfaces are flexible and the polymer is soft, the whole system can be bent. Desired future improvements for this technology include bistability, cheaper materials and implementation with TFT arrays.", "title": "Technologies" }, { "paragraph_id": 19, "text": "Other research efforts into e-paper have involved using organic transistors embedded into flexible substrates, including attempts to build them into conventional paper. Simple color e-paper consists of a thin colored optical filter added to the monochrome technology described above. The array of pixels is divided into triads, typically consisting of the standard cyan, magenta and yellow, in the same way as CRT monitors (although using subtractive primary colors as opposed to additive primary colors). The display is then controlled like any other electronic color display.", "title": "Technologies" }, { "paragraph_id": 20, "text": "E Ink Corporation of E Ink Holdings Inc. released the first colored E Ink displays to be used in a marketed product. The Ectaco jetBook Color was released in 2012 as the first colored electronic ink device, which used E Ink's Triton display technology. E Ink in early 2015 also announced another color electronic ink technology called Prism. This new technology is a color changing film that can be used for e-readers, but Prism is also marketed as a film that can be integrated into architectural design such as \"wall, ceiling panel, or entire room instantly.\" The disadvantage of these current color displays is that they are considerably more expensive than standard E Ink displays. The jetBook Color costs roughly nine times more than other popular e-readers such as the Amazon Kindle. As of January 2015, Prism had not been announced to be used in the plans for any e-reader devices.", "title": "History" }, { "paragraph_id": 21, "text": "Several companies are simultaneously developing electronic paper and ink. While the technologies used by each company provide many of the same features, each has its own distinct technological advantages. All electronic paper technologies face the following general challenges:", "title": "Applications" }, { "paragraph_id": 22, "text": "Electronic ink can be applied to flexible or rigid materials. For flexible displays, the base requires a thin, flexible material tough enough to withstand considerable wear, such as extremely thin plastic. The method of how the inks are encapsulated and then applied to the substrate is what distinguishes each company from others. These processes are complex and are carefully guarded industry secrets. Nevertheless, making electronic paper is less complex and costly than LCDs.", "title": "Applications" }, { "paragraph_id": 23, "text": "There are many approaches to electronic paper, with many companies developing technology in this area. 
Other technologies being applied to electronic paper include modifications of liquid-crystal displays, electrochromic displays, and the electronic equivalent of an Etch A Sketch developed at Kyushu University. Advantages of electronic paper include low power usage (power is only drawn when the display is updated), flexibility and better readability than most displays. Electronic ink can be printed on any surface, including walls, billboards, product labels and T-shirts. The ink's flexibility would also make it possible to develop rollable displays for electronic devices.

In December 2005, Seiko released the first electronic ink-based watch, the Spectrum SVRD001 wristwatch, which has a flexible electrophoretic display; in March 2010, Seiko released a second generation of this watch with an active-matrix display. The Pebble smart watch (2013) uses a low-power memory LCD manufactured by Sharp for its e-paper display.

In 2019, Fossil launched a hybrid smartwatch called the Hybrid HR, integrating an always-on electronic ink display with physical hands and a dial to simulate the look of a traditional analog watch.

In 2004, Sony released the Librié in Japan, the first e-book reader with an electronic paper E Ink display. In September 2006, Sony released the PRS-500 Sony Reader e-book reader in the USA. On October 2, 2007, Sony announced the PRS-505, an updated version of the Reader. In November 2008, Sony released the PRS-700BC, which incorporated a backlight and a touchscreen.

In late 2007, Amazon began producing and marketing the Amazon Kindle, an e-book reader with an e-paper display. In February 2009, Amazon released the Kindle 2, and in May 2009 the larger Kindle DX was announced. In July 2010, the third-generation Kindle was announced, with notable design changes. The fourth generation of Kindle, called the Touch, announced in September 2011, was the Kindle's first departure from keyboards and page-turn buttons in favor of touchscreens. In September 2012, Amazon announced the fifth generation of the Kindle, called the Paperwhite, which incorporates an LED frontlight and a higher-contrast display.

In November 2009, Barnes and Noble launched the Barnes & Noble Nook, running an Android operating system. It differs from other e-readers in having a replaceable battery and a separate touch-screen color LCD below the main electronic paper reading screen.

In 2017, Sony and reMarkable offered e-paper devices tailored for writing with a smart stylus.

In 2020, Onyx released the first frontlit 13.3-inch electronic paper Android tablet, the Boox Max Lumi. At the end of the same year, Bigme released the first 10.3-inch color electronic paper Android tablet, the Bigme B1 Pro. This was also the first large electronic paper tablet to support 4G cellular data.

In February 2006, the Flemish daily De Tijd distributed an electronic version of the paper to select subscribers in a limited marketing study, using a pre-release version of the iRex iLiad.
This was the first recorded application of electronic ink to newspaper publishing.

The French daily Les Échos announced the official launch of an electronic version of the paper on a subscription basis in September 2007. Two offers were available, each combining a one-year subscription with a reading device: either a light (176 g) reading device adapted for Les Échos by Ganaxa, or the iRex iLiad. Two different processing platforms were used to deliver readable information of the daily, one based on the newly developed GPP electronic ink platform from Ganaxa, and the other developed internally by Les Échos.

Flexible display cards enable financial payment cardholders to generate a one-time password to reduce online banking and transaction fraud. Electronic paper offers a flat and thin alternative to existing key fob tokens for data security. The world's first ISO-compliant smart card with an embedded display was developed by Innovative Card Technologies and nCryptone in 2005. The cards were manufactured by Nagra ID.

Some devices, like USB flash drives, have used electronic paper to display status information, such as available storage space. Once the image on the electronic paper has been set, it requires no power to maintain, so the readout can be seen even when the flash drive is not plugged in.

Motorola's low-cost mobile phone, the Motorola F3, uses an alphanumeric black-and-white electrophoretic display.

The Samsung Alias 2 mobile phone incorporates electronic ink from E Ink into the keypad, which allows the keypad to change character sets and orientation across different display modes.

On December 12, 2012, Yota Devices announced the first "YotaPhone" prototype, a unique double-display smartphone, which was later released in December 2013. It has a 4.3-inch HD LCD on the front and an electronic ink display on the back.

In May and June 2020, Hisense released the A5C and A5 Pro CC, the first color electronic ink smartphones, featuring a color display with a toggleable front light and running Android 9 and Android 10 respectively.

E-paper-based electronic shelf labels (ESL) are used to digitally display the prices of goods at retail stores. Electronic-paper-based labels are updated via two-way infrared or radio technology and powered by a rechargeable coin cell. Some variants use ZBD (zenithal bistable display) technology, which is more similar to LCD but does not need power to retain an image.

E-paper displays at bus or tram stops can be remotely updated. Compared to LED or liquid-crystal displays (LCDs), they consume less energy, and the text or graphics stays visible during a power failure.
They are also clearly readable in full sunlight, unlike LCDs.

Because of its energy-saving properties, electronic paper has proved a technology suited to digital signage applications.

Electronic paper is used in computer monitors such as the 13.3-inch Dasung Paperlike 3 HD and the 25.3-inch Paperlike 253.

Some laptops, like the Lenovo ThinkBook Plus, use e-paper as a secondary screen.

Typically, e-paper electronic tags integrate e-ink technology with wireless interfaces like NFC or UHF. They are most commonly used as employees' ID cards or as production labels to track manufacturing changes and status. E-paper tags are also increasingly being used as shipping labels, especially for reusable boxes. Some e-paper tag manufacturers offer a batteryless design, in which the power needed to update the display's content is provided wirelessly and the module itself does not contain any battery.

Other proposed applications include clothes, digital photo frames, information boards, and keyboards. Keyboards with dynamically changeable keys are useful for less represented languages, non-standard keyboard layouts such as Dvorak, or special non-alphabetical applications such as video editing or games. The reMarkable is a writer tablet for reading and taking notes.
Electronic paper, also known as electronic ink (e-ink) or intelligent paper, is a display device that mimics the appearance of ordinary ink on paper. Unlike conventional flat panel displays that emit light, an electronic paper display reflects ambient light, like paper. This may make it more comfortable to read and provide a wider viewing angle than most light-emitting displays. The contrast ratio in electronic displays available as of 2008 approaches that of newspaper, and newly developed displays are slightly better. An ideal e-paper display can be read in direct sunlight without the image appearing to fade. Technologies include Gyricon, electrophoretics, electrowetting, interferometry, and plasmonics. Many electronic paper technologies hold static text and images indefinitely without electricity. Flexible electronic paper uses plastic substrates and plastic electronics for the display backplane. Applications of electronic paper include electronic shelf labels and digital signage, bus station timetables, electronic billboards, smartphone displays, and e-readers able to display digital versions of books and magazines.
2001-02-25T10:02:09Z
2023-12-05T22:41:23Z
[ "Template:HowStuffWorks", "Template:Authority control", "Template:Update", "Template:Cite book", "Template:Citation", "Template:Webarchive", "Template:Clarify", "Template:Reflist", "Template:Cite journal", "Template:Cite web", "Template:Display technology", "Template:Short description", "Template:Cn", "Template:By whom", "Template:Cite news", "Template:Commons category", "Template:Paper", "Template:Other uses", "Template:Main" ]
https://en.wikipedia.org/wiki/Electronic_paper
9,228
Earth
Earth is the third planet from the Sun and the only astronomical object known to harbor life. This is enabled by Earth being a water world, the only one in the Solar System sustaining liquid surface water. Almost all of Earth's water is contained in its global ocean, covering 70.8% of Earth's crust. The remaining 29.2% of Earth's crust is land, most of which is located in the form of continental landmasses within one hemisphere, Earth's land hemisphere. Most of Earth's land is somewhat humid and covered by vegetation, while large sheets of ice at Earth's polar deserts retain more water than Earth's groundwater, lakes, rivers and atmospheric water combined. Earth's crust consists of slowly moving tectonic plates, which interact to produce mountain ranges, volcanoes, and earthquakes. Earth has a liquid outer core that generates a magnetosphere capable of deflecting most of the destructive solar winds and cosmic radiation. Earth has a dynamic atmosphere, which sustains Earth's surface conditions and protects it from most meteoroids and UV-light at entry. It has a composition of primarily nitrogen and oxygen. Water vapor is widely present in the atmosphere, forming clouds that cover most of the planet. The water vapor acts as a greenhouse gas and, together with other greenhouse gases in the atmosphere, particularly carbon dioxide (CO2), creates the conditions for both liquid surface water and water vapor to persist via the capturing of energy from the Sun's light. This process maintains the current average surface temperature of 14.76 °C, at which water is liquid under atmospheric pressure. Differences in the amount of captured energy between geographic regions (as with the equatorial region receiving more sunlight than the polar regions) drive atmospheric and ocean currents, producing a global climate system with different climate regions, and a range of weather phenomena such as precipitation, allowing components such as nitrogen to cycle. Earth is rounded into an ellipsoid with a circumference of about 40,000 km. It is the densest planet in the Solar System. Of the four rocky planets, it is the largest and most massive. Earth is about eight light-minutes away from the Sun and orbits it, taking a year (about 365.25 days) to complete one revolution. Earth rotates around its own axis in slightly less than a day (in about 23 hours and 56 minutes). Earth's axis of rotation is tilted with respect to the perpendicular to its orbital plane around the Sun, producing seasons. Earth is orbited by one permanent natural satellite, the Moon, which orbits Earth at 384,400 km (1.28 light seconds) and is roughly a quarter as wide as Earth. The Moon's gravity helps stabilize Earth's axis, and also causes tides which gradually slow Earth's rotation. As a result of tidal locking, the same side of the Moon always faces Earth. Earth, like most other bodies in the Solar System, formed 4.5 billion years ago from gas in the early Solar System. During the first billion years of Earth's history, the ocean formed and then life developed within it. Life spread globally and has been altering Earth's atmosphere and surface, leading to the Great Oxidation Event two billion years ago. Humans emerged 300,000 years ago in Africa and have spread across every continent on Earth with the exception of Antarctica. Humans depend on Earth's biosphere and natural resources for their survival, but have increasingly impacted the planet's environment. 
Humanity's current impact on Earth's climate and biosphere is unsustainable, threatening the livelihood of humans and many other forms of life, and causing widespread extinctions.

The Modern English word Earth developed, via Middle English, from an Old English noun most often spelled eorðe. It has cognates in every Germanic language, and their ancestral root has been reconstructed as *erþō. In its earliest attestation, the word eorðe was used to translate the many senses of Latin terra and Greek γῆ gē: the ground, its soil, dry land, the human world, the surface of the world (including the sea), and the globe itself. As with Roman Terra/Tellūs and Greek Gaia, Earth may have been a personified goddess in Germanic paganism: late Norse mythology included Jörð ("Earth"), a giantess often given as the mother of Thor.

Historically, "earth" has been written in lowercase. Beginning with the use of Early Middle English, its definite sense as "the globe" was expressed as "the earth". By the era of Early Modern English, capitalization of nouns began to prevail, and the earth was also written the Earth, particularly when referenced along with other heavenly bodies. More recently, the name is sometimes simply given as Earth, by analogy with the names of the other planets, though "earth" and forms with "the earth" remain common. House styles now vary: Oxford spelling recognizes the lowercase form as the most common, with the capitalized form an acceptable variant. Another convention capitalizes "Earth" when appearing as a name, such as a description of the "Earth's atmosphere", but employs the lowercase when it is preceded by "the", as in "the atmosphere of the earth". It almost always appears in lowercase in colloquial expressions such as "what on earth are you doing?"

The name Terra /ˈtɛrə/ is occasionally used in scientific writing and especially in science fiction to distinguish humanity's inhabited planet from others, while in poetry Tellus /ˈtɛləs/ has been used to denote personification of the Earth. Terra is also the name of the planet in some Romance languages, languages that evolved from Latin, like Italian and Portuguese, while in other Romance languages the word gave rise to names with slightly altered spellings, like the Spanish Tierra and the French Terre. The Latinate form Gæa or Gaea (English: /ˈdʒiː.ə/) of the Greek poetic name Gaia (Γαῖα; Ancient Greek: [ɡâi̯.a] or [ɡâj.ja]) is rare, though the alternative spelling Gaia has become common due to the Gaia hypothesis, in which case its pronunciation is /ˈɡaɪ.ə/ rather than the more classical English /ˈɡeɪ.ə/.

There are a number of adjectives for the planet Earth. The word "earthly" is derived from "Earth". From the Latin Terra come "terran" /ˈtɛrən/ and "terrestrial" /təˈrɛstriəl/, and, via French, "terrene" /təˈriːn/; from the Latin Tellus come "tellurian" /tɛˈlʊəriən/ and "telluric".

The oldest material found in the Solar System is dated to 4.5682 (+0.0002/−0.0004) Ga (billion years) ago. By 4.54±0.04 Ga the primordial Earth had formed. The bodies in the Solar System formed and evolved with the Sun. In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disk, and then the planets grow out of that disk with the Sun. A nebula contains gas, ice grains, and dust (including primordial nuclides).
According to nebular theory, planetesimals formed by accretion, with the primordial Earth being estimated as likely taking anywhere from 70 to 100 million years to form. Estimates of the age of the Moon range from 4.5 Ga to significantly younger. A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object with about 10% of Earth's mass, named Theia, collided with Earth. It hit Earth with a glancing blow and some of its mass merged with Earth. Between approximately 4.1 and 3.8 Ga, numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth. Earth's atmosphere and oceans were formed by volcanic activity and outgassing. Water vapor from these sources condensed into the oceans, augmented by water and ice from asteroids, protoplanets, and comets. Sufficient water to fill the oceans may have been on Earth since it formed. In this model, atmospheric greenhouse gases kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity. By 3.5 Ga, Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind. As the molten outer layer of Earth cooled it formed the first solid crust, which is thought to have been mafic in composition. The first continental crust, which was more felsic in composition, formed by the partial melting of this mafic crust. The presence of grains of the mineral zircon of Hadean age in Eoarchean sedimentary rocks suggests that at least some felsic crust existed as early as 4.4 Ga, only 140 Ma after Earth's formation. There are two main models of how this initial small volume of continental crust evolved to reach its current abundance: (1) a relatively steady growth up to the present day, which is supported by the radiometric dating of continental crust globally and (2) an initial rapid growth in the volume of continental crust during the Archean, forming the bulk of the continental crust that now exists, which is supported by isotopic evidence from hafnium in zircons and neodymium in sedimentary rocks. The two models and the data that support them can be reconciled by large-scale recycling of the continental crust, particularly during the early stages of Earth's history. New continental crust forms as a result of plate tectonics, a process ultimately driven by the continuous loss of heat from Earth's interior. Over the period of hundreds of millions of years, tectonic forces have caused areas of continental crust to group together to form supercontinents that have subsequently broken apart. At approximately 750 Ma, one of the earliest known supercontinents, Rodinia, began to break apart. The continents later recombined to form Pannotia at 600–540 Ma, then finally Pangaea, which also began to break apart at 180 Ma. The most recent pattern of ice ages began about 40 Ma, and then intensified during the Pleistocene about 3 Ma. High- and middle-latitude regions have since undergone repeated cycles of glaciation and thaw, repeating about every 21,000, 41,000 and 100,000 years. The Last Glacial Period, colloquially called the "last ice age", covered large parts of the continents, to the middle latitudes, in ice and ended about 11,700 years ago. Chemical reactions led to the first self-replicating molecules about four billion years ago. A half billion years later, the last common ancestor of all current life arose. 
The evolution of photosynthesis allowed the Sun's energy to be harvested directly by life forms. The resultant molecular oxygen (O2) accumulated in the atmosphere and, due to interaction with ultraviolet solar radiation, formed a protective ozone layer (O3) in the upper atmosphere. The incorporation of smaller cells within larger ones resulted in the development of complex cells called eukaryotes. True multicellular organisms formed as cells within colonies became increasingly specialized. Aided by the absorption of harmful ultraviolet radiation by the ozone layer, life colonized Earth's surface. Among the earliest fossil evidence for life is microbial mat fossils found in 3.48 billion-year-old sandstone in Western Australia, biogenic graphite found in 3.7 billion-year-old metasedimentary rocks in Western Greenland, and remains of biotic material found in 4.1 billion-year-old rocks in Western Australia. The earliest direct evidence of life on Earth is contained in 3.45 billion-year-old Australian rocks showing fossils of microorganisms.

During the Neoproterozoic, 1000 to 539 Ma, much of Earth might have been covered in ice. This hypothesis has been termed "Snowball Earth", and it is of particular interest because it preceded the Cambrian explosion, when multicellular life forms significantly increased in complexity. Following the Cambrian explosion, 535 Ma, there have been at least five major mass extinctions and many minor ones. Apart from the proposed current Holocene extinction event, the most recent was 66 Ma, when an asteroid impact triggered the extinction of the non-avian dinosaurs and other large reptiles, but largely spared small animals such as insects, mammals, lizards and birds. Mammalian life has diversified over the past 66 million years, and several million years ago an African ape species gained the ability to stand upright. This facilitated tool use and encouraged communication that provided the nutrition and stimulation needed for a larger brain, which led to the evolution of humans. The development of agriculture, and then civilization, led to humans having an influence on Earth and on the nature and quantity of other life forms that continues to this day.

Earth's expected long-term future is tied to that of the Sun. Over the next 1.1 billion years, solar luminosity will increase by 10%, and over the next 3.5 billion years by 40%. Earth's increasing surface temperature will accelerate the inorganic carbon cycle, reducing CO2 concentration to levels lethally low for plants (10 ppm for C4 photosynthesis) in approximately 100–900 million years. The lack of vegetation will result in the loss of oxygen in the atmosphere, making animal life impossible. Due to the increased luminosity, Earth's mean temperature may reach 100 °C (212 °F) in 1.5 billion years, and all ocean water will evaporate and be lost to space, which may trigger a runaway greenhouse effect within an estimated 1.6 to 3 billion years. Even if the Sun were stable, a fraction of the water in the modern oceans would still descend to the mantle, due to reduced steam venting from mid-ocean ridges.

The Sun will evolve to become a red giant in about 5 billion years. Models predict that the Sun will expand to roughly 1 AU (150 million km; 93 million mi), about 250 times its present radius. Earth's fate is less clear.
As a red giant, the Sun will lose roughly 30% of its mass, so, without tidal effects, Earth will move to an orbit 1.7 AU (250 million km; 160 million mi) from the Sun when the star reaches its maximum radius; otherwise, with tidal effects, it may enter the Sun's atmosphere and be vaporized.

Earth has a rounded shape, through hydrostatic equilibrium, with an average diameter of 12,742 kilometers (7,918 mi), making it the fifth-largest planetary-sized object of the Solar System and its largest terrestrial object. Due to Earth's rotation it has the shape of an ellipsoid, bulging at its Equator; its diameter is 43 kilometers (27 mi) longer there than at its poles. Earth's shape also has local topographic variations, though the largest of these are small in relative terms: the Mariana Trench (10,925 meters or 35,843 feet below local sea level) shortens Earth's average radius by only 0.17%, and Mount Everest (8,848 meters or 29,029 feet above local sea level) lengthens it by only 0.14%. Since Earth's surface is farthest from Earth's center of mass at its equatorial bulge, the summit of the volcano Chimborazo in Ecuador (6,384.4 km or 3,967.1 mi from the center) is its farthest point. In parallel to the rigid land topography, the ocean exhibits a more dynamic topography.

To measure the local variation of Earth's topography, geodesy employs an idealized Earth producing a shape called a geoid. Such a geoid shape is obtained if the ocean is idealized to cover Earth completely and without any perturbations such as tides and winds. The result is a smooth but gravitationally irregular geoid surface, providing a mean sea level (MSL) as a reference level for topographic measurements.

Earth's surface is the boundary between the atmosphere, and the solid Earth and oceans. Defined in this way, Earth's shape is an idealized spheroid – a squashed sphere – with a surface area of about 510 million km² (197 million sq mi). Earth can be divided into two hemispheres: by latitude into the polar Northern and Southern hemispheres; or by longitude into the continental Eastern and Western hemispheres.

Most of Earth's surface is ocean water: 70.8% or 361 million km² (139 million sq mi). This vast pool of salty water is often called the world ocean, and makes Earth with its dynamic hydrosphere a water world or ocean world. Indeed, in Earth's early history the ocean may have covered Earth completely. The world ocean is commonly divided into the Pacific Ocean, Atlantic Ocean, Indian Ocean, Antarctic or Southern Ocean, and Arctic Ocean, from largest to smallest. The ocean covers Earth's oceanic crust, and with shelf seas to a lesser extent also shelves of the continental crust. The oceanic crust forms large oceanic basins with features like abyssal plains, seamounts, submarine volcanoes, oceanic trenches, submarine canyons, oceanic plateaus, and a globe-spanning mid-ocean ridge system. At Earth's polar regions, the ocean surface is covered by seasonally variable amounts of sea ice that often connects with polar land, permafrost and ice sheets, forming polar ice caps.

Earth's land covers 29.2%, or 149 million km² (58 million sq mi), of Earth's surface. The land surface includes many islands around the globe, but most of the land surface is taken by the four continental landmasses, which are (in descending order): Africa-Eurasia, America, Antarctica, and Australia. These landmasses are further broken down and grouped into the continents.
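The quoted areas are consistent with simple spherical geometry. As a rough check, using Earth's mean radius of about 6,371 km (half the average diameter given above):

\[ A = 4\pi R^2 = 4\pi \,(6371\ \mathrm{km})^2 \approx 5.1 \times 10^8\ \mathrm{km}^2, \qquad 0.708 \times A \approx 3.6 \times 10^8\ \mathrm{km}^2, \]

recovering both the total surface area and the ocean-covered area stated above.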
The terrain of the land surface varies greatly and consists of mountains, deserts, plains, plateaus, and other landforms. The elevation of the land surface varies from a low point of −418 m (−1,371 ft) at the Dead Sea to a maximum altitude of 8,848 m (29,029 ft) at the top of Mount Everest. The mean height of land above sea level is about 797 m (2,615 ft). Land can be covered by surface water, snow, ice, artificial structures or vegetation. Most of Earth's land hosts vegetation, but ice sheets (10%, not including the equally large land under permafrost) and cold as well as hot deserts (33%) also occupy considerable amounts of it.

The pedosphere is the outermost layer of Earth's land surface; it is composed of soil and subject to soil formation processes. Soil is crucial for land to be arable. Earth's total arable land is 10.7% of the land surface, with 1.3% being permanent cropland. Earth has an estimated 16.7 million km² (6.4 million sq mi) of cropland and 33.5 million km² (12.9 million sq mi) of pastureland.

The land surface and the ocean floor form the top of Earth's crust, which together with parts of the upper mantle forms Earth's lithosphere. Earth's crust may be divided into oceanic and continental crust. Beneath the ocean-floor sediments, the oceanic crust is predominantly basaltic, while the continental crust may include lower-density materials such as granite, sediments and metamorphic rocks. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form only about 5% of the mass of the crust.

Earth's surface topography comprises both the topography of the ocean surface and the shape of Earth's land surface. The submarine terrain of the ocean floor has an average bathymetric depth of 4 km and is as varied as the terrain above sea level. Earth's surface is continually being shaped by internal plate tectonic processes including earthquakes and volcanism; by weathering and erosion driven by ice, water, wind and temperature; and by biological processes including the growth and decomposition of biomass into soil.

Earth's mechanically rigid outer layer of crust and upper mantle, the lithosphere, is divided into tectonic plates. These plates are rigid segments that move relative to each other at one of three boundary types: at convergent boundaries, two plates come together; at divergent boundaries, two plates are pulled apart; and at transform boundaries, two plates slide past one another laterally. Along these plate boundaries, earthquakes, volcanic activity, mountain-building, and oceanic trench formation can occur. The tectonic plates ride on top of the asthenosphere, the solid but less-viscous part of the upper mantle that can flow and move along with the plates.

As the tectonic plates migrate, oceanic crust is subducted under the leading edges of the plates at convergent boundaries. At the same time, the upwelling of mantle material at divergent boundaries creates mid-ocean ridges. The combination of these processes recycles the oceanic crust back into the mantle. Due to this recycling, most of the ocean floor is less than 100 Ma old. The oldest oceanic crust is located in the Western Pacific and is estimated to be 200 Ma old. By comparison, the oldest dated continental crust is 4,030 Ma, although zircons have been found preserved as clasts within Eoarchean sedimentary rocks that give ages up to 4,400 Ma, indicating that at least some continental crust existed at that time.
The seven major plates are the Pacific, North American, Eurasian, African, Antarctic, Indo-Australian, and South American. Other notable plates include the Arabian Plate, the Caribbean Plate, the Nazca Plate off the west coast of South America and the Scotia Plate in the southern Atlantic Ocean. The Australian Plate fused with the Indian Plate between 50 and 55 Ma. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of 75 mm/a (3.0 in/year) and the Pacific Plate moving 52–69 mm/a (2.0–2.7 in/year). At the other extreme, the slowest-moving plate is the South American Plate, progressing at a typical rate of 10.6 mm/a (0.42 in/year).

Earth's interior, like that of the other terrestrial planets, is divided into layers by their chemical or physical (rheological) properties. The outer layer is a chemically distinct silicate solid crust, which is underlain by a highly viscous solid mantle. The crust is separated from the mantle by the Mohorovičić discontinuity. The thickness of the crust varies from about 6 kilometers (3.7 mi) under the oceans to 30–50 km (19–31 mi) for the continents. The crust and the cold, rigid top of the upper mantle are collectively known as the lithosphere, which is divided into independently moving tectonic plates. Beneath the lithosphere is the asthenosphere, a relatively low-viscosity layer on which the lithosphere rides. Important changes in crystal structure within the mantle occur at 410 and 660 km (250 and 410 mi) below the surface, spanning a transition zone that separates the upper and lower mantle. Beneath the mantle, an extremely low-viscosity liquid outer core lies above a solid inner core. Earth's inner core may be rotating at a slightly higher angular velocity than the remainder of the planet, advancing by 0.1–0.5° per year, although both somewhat higher and much lower rates have also been proposed. The radius of the inner core is about one-fifth of that of Earth. Density increases with depth, and among the Solar System's planetary-sized objects, Earth is the densest.

Earth's mass is approximately 5.97×10²⁴ kg (5,970 Yg). It is composed mostly of iron (32.1% by mass), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminum (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to gravitational separation, the core is primarily composed of the denser elements: iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements. The most common rock constituents of the crust are oxides. Over 99% of the crust is composed of various oxides of eleven elements, principally oxides containing silicon (the silicate minerals), aluminum, iron, calcium, magnesium, potassium, or sodium.

The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232. At the center, the temperature may be up to 6,000 °C (10,830 °F), and the pressure could reach 360 GPa (52 million psi). Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. Approximately 3 billion years ago, twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today.
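These plate speeds can be checked against the crust ages given above. Taking the Pacific Plate's motion of roughly 60 mm/a (that is, 60 km per million years) and an ocean basin of order 10,000 km across (a round figure assumed here purely for illustration):

\[ \frac{10{,}000\ \mathrm{km}}{60\ \mathrm{km/Myr}} \approx 170\ \mathrm{Myr}, \]

the same order as the roughly 200 Ma age of the oldest oceanic crust, consistent with the ocean floor being continuously created at ridges and recycled at trenches.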
The mean heat loss from Earth is 87 mW/m², for a global heat loss of 4.42×10¹³ W. A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is conduction through the lithosphere, the majority of which occurs under the oceans because the crust there is much thinner than that of the continents.

The gravity of Earth is the acceleration imparted to objects due to the distribution of mass within Earth. Near Earth's surface, gravitational acceleration is approximately 9.8 m/s² (32 ft/s²). Local differences in topography, geology, and deeper tectonic structure cause local and broad regional differences in Earth's gravitational field, known as gravity anomalies.

The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the equator of the magnetic field, the magnetic-field strength at the surface is 3.05×10⁻⁵ T, with a magnetic dipole moment of 7.79×10²² A·m² at epoch 2000, decreasing nearly 6% per century (although it still remains stronger than its long-time average). The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago.

The extent of Earth's magnetic field in space defines the magnetosphere. Ions and electrons of the solar wind are deflected by the magnetosphere; solar wind pressure compresses the dayside of the magnetosphere, to about 10 Earth radii, and extends the nightside magnetosphere into a long tail. Because the velocity of the solar wind is greater than the speed at which waves propagate through the solar wind, a supersonic bow shock precedes the dayside magnetosphere within the solar wind. Charged particles are contained within the magnetosphere; the plasmasphere is defined by low-energy particles that essentially follow magnetic field lines as Earth rotates. The ring current is defined by medium-energy particles that drift relative to the geomagnetic field, but with paths that are still dominated by the magnetic field, and the Van Allen radiation belts are formed by high-energy particles whose motion is essentially random, but contained in the magnetosphere. During magnetic storms and substorms, charged particles can be deflected from the outer magnetosphere and especially the magnetotail, directed along field lines into Earth's ionosphere, where atmospheric atoms can be excited and ionized, causing the aurora.

Earth's rotation period relative to the Sun—its mean solar day—is 86,400 seconds of mean solar time (86,400.0025 SI seconds). Because Earth's solar day is now slightly longer than it was during the 19th century due to tidal deceleration, each day varies between 0 and 2 ms longer than the mean solar day.
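The solar and stellar rotation periods are related by Earth's orbital motion: over one year, Earth completes one more rotation relative to the stars than the number of solar days the year contains, so

\[ T_{\text{stellar}} \approx 86{,}400\ \mathrm{s} \times \frac{365.25}{366.25} \approx 86{,}164\ \mathrm{s}, \]

which matches the stellar-day figure quoted next.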
Earth's rotation period relative to the fixed stars, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is 86,164.0989 seconds of mean solar time (UT1), or 23 h 56 min 4.0989 s. Earth's rotation period relative to the precessing mean March equinox is 86,164.0905 seconds of mean solar time (UT1), or 23 h 56 min 4.0905 s. Thus the sidereal day is shorter than the stellar day by about 8.4 ms.

Apart from meteors within the atmosphere and low-orbiting satellites, the main apparent motion of celestial bodies in Earth's sky is to the west at a rate of 15°/h = 15′/min. For bodies near the celestial equator, this is equivalent to an apparent diameter of the Sun or the Moon every two minutes; from Earth's surface, the apparent sizes of the Sun and the Moon are approximately the same.

Earth orbits the Sun, making Earth the third-closest planet to the Sun and part of the inner Solar System. Earth's average orbital distance is about 150 million km (93 million mi), which is the basis for the astronomical unit and is equal to roughly 8.3 light-minutes or 380 times Earth's distance to the Moon. Earth orbits the Sun every 365.2564 mean solar days, or one sidereal year, with an apparent movement of the Sun eastward in Earth's sky at a rate of about 1°/day, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours—a solar day—for Earth to complete a full rotation about its axis so that the Sun returns to the meridian. The orbital speed of Earth averages about 29.78 km/s (107,200 km/h; 66,600 mph), which is fast enough to travel a distance equal to Earth's diameter, about 12,742 km (7,918 mi), in seven minutes, and the distance to the Moon, 384,000 km (239,000 mi), in about 3.5 hours.

The Moon and Earth orbit a common barycenter every 27.32 days relative to the background stars. When combined with the Earth–Moon system's common orbit around the Sun, the period of the synodic month, from new moon to new moon, is 29.53 days. Viewed from the celestial north pole, the motion of Earth, the Moon, and their axial rotations are all counterclockwise. Viewed from a vantage point above the Sun and Earth's north poles, Earth orbits in a counterclockwise direction about the Sun. The orbital and axial planes are not precisely aligned: Earth's axis is tilted some 23.44 degrees from the perpendicular to the Earth–Sun plane (the ecliptic), and the Earth–Moon plane is tilted up to ±5.1 degrees against the Earth–Sun plane. Without this tilt, there would be an eclipse every two weeks, alternating between lunar eclipses and solar eclipses.

The Hill sphere, or the sphere of gravitational influence, of Earth is about 1.5 million km (930,000 mi) in radius. This is the maximum distance at which Earth's gravitational influence is stronger than that of the more distant Sun and planets. Objects must orbit Earth within this radius, or they can become unbound by the gravitational perturbation of the Sun. Earth, along with the Solar System, is situated in the Milky Way and orbits about 28,000 light-years from its center. It is about 20 light-years above the galactic plane in the Orion Arm.

Earth's axis is tilted approximately 23.439281° from the perpendicular to its orbital plane and always points towards the celestial poles. Due to Earth's axial tilt, the amount of sunlight reaching any given point on the surface varies over the course of the year.
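The effect of the tilt on daylight can be made concrete with a short calculation. The following Python sketch uses the standard sunrise-equation approximation (day 80 is taken as the March equinox); it is a simplified model that ignores refraction and orbital eccentricity, so its outputs are approximate:

import math

TILT_DEG = 23.44  # Earth's axial tilt, degrees

def solar_declination(day_of_year):
    """Approximate solar declination (degrees) for a given day of the year."""
    return TILT_DEG * math.sin(2 * math.pi * (day_of_year - 80) / 365.25)

def day_length_hours(latitude_deg, day_of_year):
    """Hours of daylight; 0 or 24 inside the polar circles when the
    sunrise equation has no solution (polar night or midnight sun)."""
    phi = math.radians(latitude_deg)
    delta = math.radians(solar_declination(day_of_year))
    x = -math.tan(phi) * math.tan(delta)
    if x >= 1.0:
        return 0.0    # polar night
    if x <= -1.0:
        return 24.0   # midnight sun
    return 24.0 * math.acos(x) / math.pi

# June solstice (~day 172): long days in the north, continuous daylight
# above the Arctic Circle.
for lat in (0, 45, 67, 80):
    print(lat, round(day_length_hours(lat, 172), 1))

At the June solstice this gives about 12 hours of daylight at the equator, roughly 15.4 hours at 45°N, and continuous daylight above the Arctic Circle, matching the seasonal behavior described next.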
The tilt causes the seasonal change in climate, with summer in the Northern Hemisphere occurring when the Tropic of Cancer is facing the Sun, and in the Southern Hemisphere when the Tropic of Capricorn faces the Sun. In each instance, winter occurs simultaneously in the opposite hemisphere. During the summer, the day lasts longer, and the Sun climbs higher in the sky. In winter, the climate becomes cooler and the days shorter. Above the Arctic Circle and below the Antarctic Circle there is no daylight at all for part of the year, causing a polar night, and this night extends for several months at the poles themselves. These same latitudes also experience a midnight sun during the opposite part of the year, when the Sun remains visible all day.

By astronomical convention, the four seasons can be determined by the solstices—the points in the orbit of maximum axial tilt toward or away from the Sun—and the equinoxes, when the tilt of Earth's rotational axis is perpendicular to the direction of the Sun. In the Northern Hemisphere, winter solstice currently occurs around 21 December; summer solstice is near 21 June, spring equinox is around 20 March and autumnal equinox is about 22 or 23 September. In the Southern Hemisphere, the situation is reversed, with the summer and winter solstices exchanged and the spring and autumnal equinox dates swapped.

The angle of Earth's axial tilt is relatively stable over long periods of time. Its axial tilt does undergo nutation, a slight, irregular motion with a main period of 18.6 years. The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800-year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component to this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation.

In modern times, Earth's perihelion occurs around 3 January, and its aphelion around 4 July. These dates change over time due to precession and other orbital factors, which follow cyclical patterns known as Milankovitch cycles. The changing Earth–Sun distance causes an increase of about 6.8% in solar energy reaching Earth at perihelion relative to aphelion. Because the Southern Hemisphere is tilted toward the Sun at about the same time that Earth reaches the closest approach to the Sun, the Southern Hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. This effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of water in the Southern Hemisphere.

The Moon is a relatively large, terrestrial, planet-like natural satellite, with a diameter about one-quarter of Earth's. It is the largest moon in the Solar System relative to the size of its planet, although Charon is larger relative to the dwarf planet Pluto. The natural satellites of other planets are also referred to as "moons", after Earth's. The most widely accepted theory of the Moon's origin, the giant-impact hypothesis, states that it formed from the collision of a Mars-size protoplanet called Theia with the early Earth.
This hypothesis explains the Moon's relative lack of iron and volatile elements and the fact that its composition is nearly identical to that of Earth's crust.

The gravitational attraction between Earth and the Moon causes tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases. Due to their tidal interaction, the Moon recedes from Earth at the rate of approximately 38 mm/a (1.5 in/year). Over millions of years, these tiny modifications—and the lengthening of Earth's day by about 23 µs/yr—add up to significant changes. During the Ediacaran period (approximately 620 Ma), for example, there were 400±7 days in a year, with each day lasting 21.9±0.4 hours.

The Moon may have dramatically affected the development of life by moderating the planet's climate. Paleontological evidence and computer simulations show that Earth's axial tilt is stabilized by tidal interactions with the Moon. Some theorists think that without this stabilization against the torques applied by the Sun and planets to Earth's equatorial bulge, the rotational axis might be chaotically unstable, exhibiting large changes over millions of years, as is the case for Mars, though this is disputed.

Viewed from Earth, the Moon is just far enough away to have almost the same apparent-sized disk as the Sun. The angular sizes of the two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant. This allows total and annular solar eclipses to occur on Earth. On 1 November 2023, scientists reported that, according to computer simulations, remnants of the protoplanet Theia could remain inside Earth, left over from the ancient collision that formed the Moon.

Earth's co-orbital asteroid population consists of quasi-satellites, objects with a horseshoe orbit, and trojans. There are at least five quasi-satellites, including 469219 Kamoʻoalewa. A trojan asteroid companion, 2010 TK7, is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun. The tiny near-Earth asteroid 2006 RH120 makes close approaches to the Earth–Moon system roughly every twenty years. During these approaches, it can orbit Earth for brief periods of time. As of September 2021, there were 4,550 operational, human-made satellites orbiting Earth. There are also inoperative satellites, including Vanguard 1, the oldest satellite currently in orbit, and over 16,000 pieces of tracked space debris. Earth's largest artificial satellite is the International Space Station.

Earth's hydrosphere is the sum of Earth's water and its distribution. Most of Earth's hydrosphere consists of Earth's global ocean. Earth's hydrosphere also consists of water in the atmosphere and on land, including clouds, inland seas, lakes, rivers, and underground waters down to a depth of 2,000 m (6,600 ft). The mass of the oceans is approximately 1.35×10¹⁸ metric tons, or about 1/4400 of Earth's total mass. The oceans cover an area of 361.8 million km² (139.7 million sq mi) with a mean depth of 3,682 m (12,080 ft), resulting in an estimated volume of 1.332 billion km³ (320 million cu mi).
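Dividing the ocean's volume by Earth's total surface area shows the scale involved:

\[ \frac{1.332 \times 10^9\ \mathrm{km}^3}{5.1 \times 10^8\ \mathrm{km}^2} \approx 2.6\ \mathrm{km}, \]

a back-of-the-envelope figure consistent with the more careful estimate quoted next.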
If all of Earth's crustal surface were at the same elevation as a smooth sphere, the depth of the resulting world ocean would be 2.7 to 2.8 km (1.68 to 1.74 mi). About 97.5% of the water is saline; the remaining 2.5% is fresh water. Most fresh water, about 68.7%, is present as ice in ice caps and glaciers. Of the rest, about 30% is ground water and 1% is surface water (covering only 2.8% of Earth's land), with other small forms of fresh water deposits such as permafrost, water vapor in the atmosphere, and biological binding making up the remainder.

In Earth's coldest regions, snow survives over the summer and changes into ice. This accumulated snow and ice eventually forms into glaciers, bodies of ice that flow under the influence of their own gravity. Alpine glaciers form in mountainous areas, whereas vast ice sheets form over land in polar regions. The flow of glaciers erodes the surface, changing it dramatically, with the formation of U-shaped valleys and other landforms. Sea ice in the Arctic covers an area about as big as the United States, although it is quickly retreating as a consequence of climate change.

The average salinity of Earth's oceans is about 35 grams of salt per kilogram of seawater (3.5% salt). Most of this salt was released from volcanic activity or extracted from cool igneous rocks. The oceans are also a reservoir of dissolved atmospheric gases, which are essential for the survival of many aquatic life forms. Sea water has an important influence on the world's climate, with the oceans acting as a large heat reservoir. Shifts in the oceanic temperature distribution can cause significant weather shifts, such as the El Niño–Southern Oscillation.

The abundance of water, particularly liquid water, on Earth's surface is a unique feature that distinguishes it from other planets in the Solar System. Solar System planets with considerable atmospheres do partly host atmospheric water vapor, but they lack surface conditions for stable surface water. Despite some moons showing signs of large reservoirs of extraterrestrial liquid water, with possibly even more volume than Earth's ocean, all of them are large bodies of water under a kilometers-thick frozen surface layer.

The atmospheric pressure at Earth's sea level averages 101.325 kPa (14.696 psi), with a scale height of about 8.5 km (5.3 mi). A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules. Water vapor content varies between 0.01% and 4% but averages about 1%. Clouds cover around two-thirds of Earth's surface, more so over oceans than land. The height of the troposphere varies with latitude, ranging from 8 km (5 mi) at the poles to 17 km (11 mi) at the equator, with some variation resulting from weather and seasonal factors.

Earth's biosphere has significantly altered its atmosphere. Oxygenic photosynthesis evolved 2.7 Gya, forming the primarily nitrogen–oxygen atmosphere of today. This change enabled the proliferation of aerobic organisms and, indirectly, the formation of the ozone layer due to the subsequent conversion of atmospheric O2 into O3. The ozone layer blocks ultraviolet solar radiation, permitting life on land. Other atmospheric functions important to life include transporting water vapor, providing useful gases, causing small meteors to burn up before they strike the surface, and moderating temperature.
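The quoted scale height summarizes how pressure falls off with altitude. In the standard isothermal approximation,

\[ p(h) \approx p_0\, e^{-h/H}, \]

so with p₀ = 101.325 kPa and H ≈ 8.5 km, pressure drops to p₀/e ≈ 37 kPa at 8.5 km and to about 14 kPa at 17 km. This is an idealization; real pressure profiles vary with temperature and humidity.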
The last of these, temperature moderation, works through the greenhouse effect: trace molecules within the atmosphere serve to capture thermal energy emitted from the surface, thereby raising the average temperature. Water vapor, carbon dioxide, methane, nitrous oxide, and ozone are the primary greenhouse gases in the atmosphere. Without this heat-retention effect, the average surface temperature would be −18 °C (0 °F), in contrast to the current +15 °C (59 °F), and life on Earth probably would not exist in its current form.

Earth's atmosphere has no definite boundary, gradually becoming thinner and fading into outer space. Three-quarters of the atmosphere's mass is contained within the first 11 km (6.8 mi) of the surface; this lowest layer is called the troposphere. Energy from the Sun heats this layer, and the surface below, causing expansion of the air. This lower-density air then rises and is replaced by cooler, higher-density air. The result is atmospheric circulation that drives the weather and climate through redistribution of thermal energy. The primary atmospheric circulation bands consist of the trade winds in the equatorial region below 30° latitude and the westerlies in the mid-latitudes between 30° and 60°. Ocean heat content and currents are also important factors in determining climate, particularly the thermohaline circulation that distributes thermal energy from the equatorial oceans to the polar regions.

Earth receives 1361 W/m² of solar irradiance. The amount of solar energy that reaches Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere. As a result, the mean annual air temperature at sea level decreases by about 0.4 °C (0.7 °F) per degree of latitude from the equator. Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates.

Further factors that affect a location's climate are its proximity to oceans, the oceanic and atmospheric circulation, and topography. Places close to oceans typically have cooler summers and warmer winters, because oceans can store large amounts of heat and the wind transports the cold or the heat of the ocean to the land. Atmospheric circulation also plays an important role: San Francisco and Washington, D.C. are both coastal cities at about the same latitude, yet San Francisco's climate is significantly more moderate because the prevailing wind direction there is from sea to land. Finally, temperatures decrease with height, causing mountainous areas to be colder than low-lying areas.

Water vapor generated through surface evaporation is transported by circulatory patterns in the atmosphere. When atmospheric conditions permit an uplift of warm, humid air, this water condenses and falls to the surface as precipitation. Most of the water is then transported to lower elevations by river systems and usually returned to the oceans or deposited into lakes. This water cycle is a vital mechanism for supporting life on land and is a primary factor in the erosion of surface features over geological periods. Precipitation patterns vary widely, ranging from several meters of water per year to less than a millimeter. Atmospheric circulation, topographic features, and temperature differences determine the average precipitation that falls in each region.
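The −18 °C figure quoted above follows from a simple radiative balance. Equating absorbed sunlight with blackbody emission gives the effective temperature

\[ T_{\mathrm{eff}} = \left( \frac{S(1-A)}{4\sigma} \right)^{1/4} \approx \left( \frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right)^{1/4} \mathrm{K} \approx 255\ \mathrm{K} \approx -18\ ^{\circ}\mathrm{C}, \]

where S = 1361 W/m² is the solar irradiance given above, A ≈ 0.3 is an assumed round value for Earth's Bond albedo, and σ is the Stefan–Boltzmann constant. The roughly 33 °C gap between this and the observed mean surface temperature is the greenhouse effect.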
The commonly used Köppen climate classification system has five broad groups (humid tropics, arid, humid middle latitudes, continental and cold polar), which are further divided into more specific subtypes. The Köppen system rates regions based on observed temperature and precipitation. Surface air temperature can rise to around 55 °C (131 °F) in hot deserts, such as Death Valley, and can fall as low as −89 °C (−128 °F) in Antarctica.

The upper atmosphere, the atmosphere above the troposphere, is usually divided into the stratosphere, mesosphere, and thermosphere. Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind. Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. The Kármán line, defined as 100 km (62 mi) above Earth's surface, is a working definition for the boundary between the atmosphere and outer space.

Thermal energy causes some of the molecules at the outer edge of the atmosphere to increase their velocity to the point where they can escape from Earth's gravity. This causes a slow but steady loss of the atmosphere into space. Because unfixed hydrogen has a low molecular mass, it can achieve escape velocity more readily, and it leaks into outer space at a greater rate than other gases (a rough numerical comparison is sketched at the end of this section). The leakage of hydrogen into space contributes to the shifting of Earth's atmosphere and surface from an initially reducing state to its current oxidizing one. Photosynthesis provided a source of free oxygen, but the loss of reducing agents such as hydrogen is thought to have been a necessary precondition for the widespread accumulation of oxygen in the atmosphere. Hence the ability of hydrogen to escape from the atmosphere may have influenced the nature of life that developed on Earth. In the current, oxygen-rich atmosphere most hydrogen is converted into water before it has an opportunity to escape. Instead, most of the hydrogen loss comes from the destruction of methane in the upper atmosphere.

Earth is the only known place that has ever been habitable for life. Earth's life developed in Earth's early bodies of water a few hundred million years after Earth formed. Earth's life has been shaping and inhabiting many particular ecosystems on Earth and has eventually expanded globally, forming an overarching biosphere. Life has in turn impacted Earth, significantly altering Earth's atmosphere and surface over long periods of time, causing changes like the Great Oxidation Event.

Earth's life has over time greatly diversified, allowing the biosphere to have different biomes, which are inhabited by comparatively similar plants and animals. The different biomes developed at distinct elevations or water depths, planetary temperature latitudes, and, on land, with different humidity. Earth's species diversity and biomass reach a peak in shallow waters and in forests, particularly in equatorial, warm and humid conditions, while freezing polar regions, high altitudes, and extremely arid areas are relatively barren of plant and animal life.

Earth provides liquid water—an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain a metabolism. Plants and other organisms take up nutrients from water, soils and the atmosphere. These nutrients are constantly recycled between different species.
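The atmospheric escape flagged above can be sketched numerically. The following Python snippet compares Earth's escape velocity with root-mean-square thermal speeds of common atmospheric species; the roughly 1000 K exosphere temperature is an assumed round value used here purely for illustration:

import math

G   = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M   = 5.97e24     # Earth mass, kg (as given above)
R   = 6.371e6     # Earth mean radius, m
KB  = 1.381e-23   # Boltzmann constant, J/K
AMU = 1.661e-27   # atomic mass unit, kg

# Escape velocity from Earth's surface: sqrt(2GM/R) ~ 11.2 km/s
v_escape = math.sqrt(2 * G * M / R)

def rms_speed(molar_mass_amu, temperature_k=1000.0):
    """Root-mean-square thermal speed of a molecule at the given temperature."""
    m = molar_mass_amu * AMU
    return math.sqrt(3 * KB * temperature_k / m)

print(f"escape velocity: {v_escape / 1000:.1f} km/s")
for name, mass in [("H", 1), ("H2", 2), ("He", 4), ("N2", 28), ("O2", 32)]:
    print(f"{name:>3}: {rms_speed(mass) / 1000:.2f} km/s")

Atomic hydrogen's thermal speed (about 5 km/s here) comes within a factor of a few of the 11.2 km/s escape velocity, so the fast tail of its speed distribution can escape, while N2 and O2 fall short by more than a factor of ten. This is the sense in which light hydrogen preferentially leaks to space.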
Extreme weather, such as tropical cyclones (including hurricanes and typhoons), occurs over most of Earth's surface and has a large impact on life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year. Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, blizzards, floods, droughts, wildfires, and other calamities and disasters.

Human impact is felt in many areas due to pollution of the air and water, acid rain, loss of vegetation (overgrazing, deforestation, desertification), loss of wildlife, species extinction, soil degradation, soil depletion and erosion. Human activities release greenhouse gases into the atmosphere, causing global warming. This is driving changes such as the melting of glaciers and ice sheets, a global rise in average sea levels, increased risk of drought and wildfires, and the migration of species to colder areas.

Humans emerged from earlier primates in Eastern Africa about 300,000 years ago and have since migrated across the planet, increasingly settling Earth's land after the advent of agriculture in the 10th millennium BC. In the 20th century, Antarctica became the last continent to see a human presence, one that remains limited to this day.

Since the 19th century the human population has grown exponentially, reaching seven billion in the early 2010s; it is projected to peak at around ten billion in the second half of the 21st century, with most of the growth expected to take place in sub-Saharan Africa.

The distribution and density of the human population vary greatly around the world. The majority live in South and East Asia, and 90% inhabit the Northern Hemisphere, partly because 68% of the world's land mass lies there. Since the 19th century, humans have also increasingly converged on urban areas, where the majority lived by the 21st century.

Human presence beyond Earth's surface has been temporary and special-purpose, limited to deep underground and underwater installations and a few space stations; the human population remains almost entirely on Earth's surface, fully dependent on Earth and the environment it sustains. Since the second half of the 20th century, some hundreds of humans have temporarily stayed beyond Earth, a tiny fraction of whom have reached another celestial body, the Moon.

Earth has been subject to extensive human settlement, and humans have developed diverse societies and cultures. Most of Earth's land has been territorially claimed since the 19th century by sovereign states (countries) separated by political borders, and more than 200 such states exist today, with only parts of Antarctica and a few small regions remaining unclaimed. Most of these states together form the United Nations, the leading worldwide intergovernmental organization, which extends human governance over the ocean and Antarctica, and therefore over all of Earth.

Earth has resources that have been exploited by humans. Those termed non-renewable resources, such as fossil fuels, are replenished only over geological timescales. Large deposits of fossil fuels (coal, petroleum, and natural gas) are obtained from Earth's crust and are used by humans both for energy production and as feedstock for chemical production. Mineral ore bodies have also formed within the crust through a process of ore genesis, resulting from the actions of magmatism, erosion, and plate tectonics.
These metals and other elements are extracted by mining, a process that often brings environmental and health damage. Earth's biosphere produces many useful biological products for humans, including food, wood, pharmaceuticals, oxygen, and the recycling of organic waste. The land-based ecosystem depends upon topsoil and fresh water, and the oceanic ecosystem depends on dissolved nutrients washed down from the land. In 2019, 39 million km² (15 million sq mi) of Earth's land surface consisted of forest and woodlands, 12 million km² (4.6 million sq mi) was shrub and grassland, 40 million km² (15 million sq mi) was used for animal feed production and grazing, and 11 million km² (4.2 million sq mi) was cultivated as cropland. Of the 12–14% of ice-free land that is used for croplands, 2 percentage points were irrigated in 2015. Humans use building materials to construct shelters.

Human activities have impacted Earth's environments. Through activities such as the burning of fossil fuels, humans have been increasing the amount of greenhouse gases in the atmosphere, altering Earth's energy budget and climate. Global temperatures in the year 2020 are estimated to have been 1.2 °C (2.2 °F) warmer than the preindustrial baseline. This increase in temperature, known as global warming, has contributed to the melting of glaciers, rising sea levels, increased risk of drought and wildfires, and the migration of species to colder areas.

The concept of planetary boundaries was introduced to quantify humanity's impact on Earth. Of the nine identified boundaries, five are thought to have been crossed: biosphere integrity, climate change, chemical pollution, destruction of wild habitats and the nitrogen cycle have passed the safe threshold. As of 2018, no country meets the basic needs of its population without transgressing planetary boundaries, though it is thought possible to provide all basic physical needs globally within sustainable levels of resource use.

Human cultures have developed many views of the planet. The standard astronomical symbols of Earth are a quartered circle, 🜨, representing the four corners of the world, and a globus cruciger, ♁. Earth is sometimes personified as a deity, in many cultures a mother goddess that is also the primary fertility deity, and creation myths in many religions involve the creation of Earth by a supernatural deity or deities. The Gaia hypothesis, developed in the mid-20th century, proposed that Earth's environments and life behave as a single self-regulating organism, leading to a broad stabilization of the conditions of habitability.

Images of Earth taken from space, particularly during the Apollo program, have been credited with altering the way people viewed the planet they lived on, an experience called the overview effect, emphasizing its beauty, uniqueness and apparent fragility. In particular, this caused a realization of the scope of the effects of human activity on Earth's environment. Enabled by science, particularly Earth observation, humans have started to take action on environmental issues globally, acknowledging the impact of humans and the interconnectedness of Earth's environments.

Scientific investigation has resulted in several culturally transformative shifts in people's view of the planet. Initial belief in a flat Earth was gradually displaced in Ancient Greece by the idea of a spherical Earth, attributed to both the philosophers Pythagoras and Parmenides.
Earth was generally believed to be the center of the universe until the 16th century, when scientists first concluded that it was a moving object, one of the planets of the Solar System. It was only during the 19th century that geologists realized Earth's age was at least many millions of years. In 1864, Lord Kelvin used thermodynamics to estimate the age of Earth at between 20 million and 400 million years, sparking a vigorous debate on the subject; it was only when radioactivity and radiometric dating were discovered in the late 19th and early 20th centuries that a reliable mechanism for determining Earth's age was established, proving the planet to be billions of years old.
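Kelvin's bound can be reproduced with the classic conductive-cooling argument. The sketch below assumes a half-space that starts uniformly hot and cools by conduction alone; the initial temperature, thermal diffusivity, and near-surface gradient are illustrative values of the kind Kelvin worked with:

\[
T(z,t) = T_0 \,\mathrm{erf}\!\left(\frac{z}{2\sqrt{\kappa t}}\right)
\;\Rightarrow\;
\left.\frac{\partial T}{\partial z}\right|_{z=0} = \frac{T_0}{\sqrt{\pi \kappa t}}
\;\Rightarrow\;
t = \frac{T_0^{2}}{\pi \kappa G^{2}}.
\]

With an initial temperature T₀ ≈ 3900 °C, a diffusivity κ ≈ 1.2 × 10⁻⁶ m²/s, and a measured near-surface gradient G ≈ 37 °C/km, this gives t ≈ 10⁸ years, squarely within Kelvin's range. The estimate fails because it ignores radiogenic heating and mantle convection, which is why the discovery of radioactivity dissolved the apparent paradox.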
[ { "paragraph_id": 0, "text": "Earth is the third planet from the Sun and the only astronomical object known to harbor life. This is enabled by Earth being a water world, the only one in the Solar System sustaining liquid surface water. Almost all of Earth's water is contained in its global ocean, covering 70.8% of Earth's crust. The remaining 29.2% of Earth's crust is land, most of which is located in the form of continental landmasses within one hemisphere, Earth's land hemisphere. Most of Earth's land is somewhat humid and covered by vegetation, while large sheets of ice at Earth's polar deserts retain more water than Earth's groundwater, lakes, rivers and atmospheric water combined. Earth's crust consists of slowly moving tectonic plates, which interact to produce mountain ranges, volcanoes, and earthquakes. Earth has a liquid outer core that generates a magnetosphere capable of deflecting most of the destructive solar winds and cosmic radiation.", "title": "" }, { "paragraph_id": 1, "text": "Earth has a dynamic atmosphere, which sustains Earth's surface conditions and protects it from most meteoroids and UV-light at entry. It has a composition of primarily nitrogen and oxygen. Water vapor is widely present in the atmosphere, forming clouds that cover most of the planet. The water vapor acts as a greenhouse gas and, together with other greenhouse gases in the atmosphere, particularly carbon dioxide (CO2), creates the conditions for both liquid surface water and water vapor to persist via the capturing of energy from the Sun's light. This process maintains the current average surface temperature of 14.76 °C, at which water is liquid under atmospheric pressure. Differences in the amount of captured energy between geographic regions (as with the equatorial region receiving more sunlight than the polar regions) drive atmospheric and ocean currents, producing a global climate system with different climate regions, and a range of weather phenomena such as precipitation, allowing components such as nitrogen to cycle.", "title": "" }, { "paragraph_id": 2, "text": "Earth is rounded into an ellipsoid with a circumference of about 40,000 km. It is the densest planet in the Solar System. Of the four rocky planets, it is the largest and most massive. Earth is about eight light-minutes away from the Sun and orbits it, taking a year (about 365.25 days) to complete one revolution. Earth rotates around its own axis in slightly less than a day (in about 23 hours and 56 minutes). Earth's axis of rotation is tilted with respect to the perpendicular to its orbital plane around the Sun, producing seasons. Earth is orbited by one permanent natural satellite, the Moon, which orbits Earth at 384,400 km (1.28 light seconds) and is roughly a quarter as wide as Earth. The Moon's gravity helps stabilize Earth's axis, and also causes tides which gradually slow Earth's rotation. As a result of tidal locking, the same side of the Moon always faces Earth.", "title": "" }, { "paragraph_id": 3, "text": "Earth, like most other bodies in the Solar System, formed 4.5 billion years ago from gas in the early Solar System. During the first billion years of Earth's history, the ocean formed and then life developed within it. Life spread globally and has been altering Earth's atmosphere and surface, leading to the Great Oxidation Event two billion years ago. Humans emerged 300,000 years ago in Africa and have spread across every continent on Earth with the exception of Antarctica. 
Humans depend on Earth's biosphere and natural resources for their survival, but have increasingly impacted the planet's environment. Humanity's current impact on Earth's climate and biosphere is unsustainable, threatening the livelihood of humans and many other forms of life, and causing widespread extinctions.", "title": "" }, { "paragraph_id": 4, "text": "The Modern English word Earth developed, via Middle English, from an Old English noun most often spelled eorðe. It has cognates in every Germanic language, and their ancestral root has been reconstructed as *erþō. In its earliest attestation, the word eorðe was used to translate the many senses of Latin terra and Greek γῆ gē: the ground, its soil, dry land, the human world, the surface of the world (including the sea), and the globe itself. As with Roman Terra/Tellūs and Greek Gaia, Earth may have been a personified goddess in Germanic paganism: late Norse mythology included Jörð (\"Earth\"), a giantess often given as the mother of Thor.", "title": "Etymology" }, { "paragraph_id": 5, "text": "Historically, \"earth\" has been written in lowercase. Beginning with the use of Early Middle English, its definite sense as \"the globe\" was expressed as \"the earth\". By the era of Early Modern English, capitalization of nouns began to prevail, and the earth was also written the Earth, particularly when referenced along with other heavenly bodies. More recently, the name is sometimes simply given as Earth, by analogy with the names of the other planets, though \"earth\" and forms with \"the earth\" remain common. House styles now vary: Oxford spelling recognizes the lowercase form as the most common, with the capitalized form an acceptable variant. Another convention capitalizes \"Earth\" when appearing as a name, such as a description of the \"Earth's atmosphere\", but employs the lowercase when it is preceded by \"the\", such as \"the atmosphere of the earth\"). It almost always appears in lowercase in colloquial expressions such as \"what on earth are you doing?\"", "title": "Etymology" }, { "paragraph_id": 6, "text": "The name Terra /ˈtɛrə/ occasionally is used in scientific writing and especially in science fiction to distinguish humanity's inhabited planet from others, while in poetry Tellus /ˈtɛləs/ has been used to denote personification of the Earth. Terra is also the name of the planet in some Romance languages, languages that evolved from Latin, like Italian and Portuguese, while in other Romance languages the word gave rise to names with slightly altered spellings, like the Spanish Tierra and the French Terre. The Latinate form Gæa or Gaea (English: /ˈdʒiː.ə/) of the Greek poetic name Gaia (Γαῖα; Ancient Greek: [ɡâi̯.a] or [ɡâj.ja]) is rare, though the alternative spelling Gaia has become common due to the Gaia hypothesis, in which case its pronunciation is /ˈɡaɪ.ə/ rather than the more classical English /ˈɡeɪ.ə/.", "title": "Etymology" }, { "paragraph_id": 7, "text": "There are a number of adjectives for the planet Earth. The word \"earthly\" is derived from \"Earth\". The word \"Terra\" is derived from the Latin word \"terran\" /ˈtɛrən/. The word \"terrestrial\" /təˈrɛstriəl/, is derived from the French word \"terrene\" /təˈriːn/. The world \"tellurian\" is derived from the Latin word \"Tellus\" /tɛˈlʊəriən/ and \"telluric\".", "title": "Etymology" }, { "paragraph_id": 8, "text": "The oldest material found in the Solar System is dated to 4.5682+0.0002−0.0004 Ga (billion years) ago. By 4.54±0.04 Ga the primordial Earth had formed. 
The bodies in the Solar System formed and evolved with the Sun. In theory, a solar nebula partitions a volume out of a molecular cloud by gravitational collapse, which begins to spin and flatten into a circumstellar disk, and then the planets grow out of that disk with the Sun. A nebula contains gas, ice grains, and dust (including primordial nuclides). According to nebular theory, planetesimals formed by accretion, with the primordial Earth being estimated as likely taking anywhere from 70 to 100 million years to form.", "title": "Natural history" }, { "paragraph_id": 9, "text": "Estimates of the age of the Moon range from 4.5 Ga to significantly younger. A leading hypothesis is that it was formed by accretion from material loosed from Earth after a Mars-sized object with about 10% of Earth's mass, named Theia, collided with Earth. It hit Earth with a glancing blow and some of its mass merged with Earth. Between approximately 4.1 and 3.8 Ga, numerous asteroid impacts during the Late Heavy Bombardment caused significant changes to the greater surface environment of the Moon and, by inference, to that of Earth.", "title": "Natural history" }, { "paragraph_id": 10, "text": "Earth's atmosphere and oceans were formed by volcanic activity and outgassing. Water vapor from these sources condensed into the oceans, augmented by water and ice from asteroids, protoplanets, and comets. Sufficient water to fill the oceans may have been on Earth since it formed. In this model, atmospheric greenhouse gases kept the oceans from freezing when the newly forming Sun had only 70% of its current luminosity. By 3.5 Ga, Earth's magnetic field was established, which helped prevent the atmosphere from being stripped away by the solar wind.", "title": "Natural history" }, { "paragraph_id": 11, "text": "As the molten outer layer of Earth cooled it formed the first solid crust, which is thought to have been mafic in composition. The first continental crust, which was more felsic in composition, formed by the partial melting of this mafic crust. The presence of grains of the mineral zircon of Hadean age in Eoarchean sedimentary rocks suggests that at least some felsic crust existed as early as 4.4 Ga, only 140 Ma after Earth's formation. There are two main models of how this initial small volume of continental crust evolved to reach its current abundance: (1) a relatively steady growth up to the present day, which is supported by the radiometric dating of continental crust globally and (2) an initial rapid growth in the volume of continental crust during the Archean, forming the bulk of the continental crust that now exists, which is supported by isotopic evidence from hafnium in zircons and neodymium in sedimentary rocks. The two models and the data that support them can be reconciled by large-scale recycling of the continental crust, particularly during the early stages of Earth's history.", "title": "Natural history" }, { "paragraph_id": 12, "text": "New continental crust forms as a result of plate tectonics, a process ultimately driven by the continuous loss of heat from Earth's interior. Over the period of hundreds of millions of years, tectonic forces have caused areas of continental crust to group together to form supercontinents that have subsequently broken apart. At approximately 750 Ma, one of the earliest known supercontinents, Rodinia, began to break apart. 
The continents later recombined to form Pannotia at 600–540 Ma, then finally Pangaea, which also began to break apart at 180 Ma.", "title": "Natural history" }, { "paragraph_id": 13, "text": "The most recent pattern of ice ages began about 40 Ma, and then intensified during the Pleistocene about 3 Ma. High- and middle-latitude regions have since undergone repeated cycles of glaciation and thaw, repeating about every 21,000, 41,000 and 100,000 years. The Last Glacial Period, colloquially called the \"last ice age\", covered large parts of the continents, to the middle latitudes, in ice and ended about 11,700 years ago.", "title": "Natural history" }, { "paragraph_id": 14, "text": "Chemical reactions led to the first self-replicating molecules about four billion years ago. A half billion years later, the last common ancestor of all current life arose. The evolution of photosynthesis allowed the Sun's energy to be harvested directly by life forms. The resultant molecular oxygen (O2) accumulated in the atmosphere and due to interaction with ultraviolet solar radiation, formed a protective ozone layer (O3) in the upper atmosphere. The incorporation of smaller cells within larger ones resulted in the development of complex cells called eukaryotes. True multicellular organisms formed as cells within colonies became increasingly specialized. Aided by the absorption of harmful ultraviolet radiation by the ozone layer, life colonized Earth's surface. Among the earliest fossil evidence for life is microbial mat fossils found in 3.48 billion-year-old sandstone in Western Australia, biogenic graphite found in 3.7 billion-year-old metasedimentary rocks in Western Greenland, and remains of biotic material found in 4.1 billion-year-old rocks in Western Australia. The earliest direct evidence of life on Earth is contained in 3.45 billion-year-old Australian rocks showing fossils of microorganisms.", "title": "Natural history" }, { "paragraph_id": 15, "text": "During the Neoproterozoic, 1000 to 539 Ma, much of Earth might have been covered in ice. This hypothesis has been termed \"Snowball Earth\", and it is of particular interest because it preceded the Cambrian explosion, when multicellular life forms significantly increased in complexity. Following the Cambrian explosion, 535 Ma, there have been at least five major mass extinctions and many minor ones. Apart from the proposed current Holocene extinction event, the most recent was 66 Ma, when an asteroid impact triggered the extinction of the non-avian dinosaurs and other large reptiles, but largely spared small animals such as insects, mammals, lizards and birds. Mammalian life has diversified over the past 66 Mys, and several million years ago an African ape species gained the ability to stand upright. This facilitated tool use and encouraged communication that provided the nutrition and stimulation needed for a larger brain, which led to the evolution of humans. The development of agriculture, and then civilization, led to humans having an influence on Earth and the nature and quantity of other life forms that continues to this day.", "title": "Natural history" }, { "paragraph_id": 16, "text": "Earth's expected long-term future is tied to that of the Sun. Over the next 1.1 billion years, solar luminosity will increase by 10%, and over the next 3.5 billion years by 40%. 
Earth's increasing surface temperature will accelerate the inorganic carbon cycle, reducing CO2 concentration to levels lethally low for plants (10 ppm for C4 photosynthesis) in approximately 100–900 million years. The lack of vegetation will result in the loss of oxygen in the atmosphere, making animal life impossible. Due to the increased luminosity, Earth's mean temperature may reach 100 °C (212 °F) in 1.5 billion years, and all ocean water will evaporate and be lost to space, which may trigger a runaway greenhouse effect, within an estimated 1.6 to 3 billion years. Even if the Sun were stable, a fraction of the water in the modern oceans will descend to the mantle, due to reduced steam venting from mid-ocean ridges.", "title": "Natural history" }, { "paragraph_id": 17, "text": "The Sun will evolve to become a red giant in about 5 billion years. Models predict that the Sun will expand to roughly 1 AU (150 million km; 93 million mi), about 250 times its present radius. Earth's fate is less clear. As a red giant, the Sun will lose roughly 30% of its mass, so, without tidal effects, Earth will move to an orbit 1.7 AU (250 million km; 160 million mi) from the Sun when the star reaches its maximum radius, otherwise, with tidal effects, it may enter the Sun's atmosphere and be vaporized.", "title": "Natural history" }, { "paragraph_id": 18, "text": "Earth has a rounded shape, through hydrostatic equilibrium, with an average diameter of 12,742 kilometers (7,918 mi), making it the fifth largest planetary sized and largest terrestrial object of the Solar System.", "title": "Physical characteristics" }, { "paragraph_id": 19, "text": "Due to Earth's rotation it has the shape of an ellipsoid, bulging at its Equator; its diameter is 43 kilometers (27 mi) longer there than at its poles. Earth's shape furthermore has local topographic variations. Though the largest local variations, like the Mariana Trench (10,925 meters or 35,843 feet below local sea level), only shortens Earth's average radius by 0.17% and Mount Everest (8,848 meters or 29,029 feet above local sea level) lengthens it by only 0.14%. Since Earth's surface is farthest out from Earth's center of mass at its equatorial bulge, the summit of the volcano Chimborazo in Ecuador (6,384.4 km or 3,967.1 mi) is its farthest point out. Parallel to the rigid land topography the Ocean exhibits a more dynamic topography.", "title": "Physical characteristics" }, { "paragraph_id": 20, "text": "To measure the local variation of Earth's topography, geodesy employs an idealized Earth producing a shape called a geoid. Such a geoid shape is gained if the ocean is idealized, covering Earth completely and without any perturbations such as tides and winds. The result is a smooth but gravitational irregular geoid surface, providing a mean sea level (MSL) as a reference level for topographic measurements.", "title": "Physical characteristics" }, { "paragraph_id": 21, "text": "Earth's surface is the boundary between the atmosphere, and the solid Earth and oceans. Defined in this way, Earth's shape is an idealized spheroid – a squashed sphere – with a surface area of about 510 million km (197 million sq mi). Earth can be divided into two hemispheres: by latitude into the polar Northern and Southern hemispheres; or by longitude into the continental Eastern and Western hemispheres.", "title": "Physical characteristics" }, { "paragraph_id": 22, "text": "Most of Earth's surface is ocean water: 70.8% or 361 million km (139 million sq mi). 
This vast pool of salty water is often called the world ocean, and makes Earth with its dynamic hydrosphere a water world or ocean world. Indeed, in Earth's early history the ocean may have covered Earth completely. The world ocean is commonly divided into the Pacific Ocean, Atlantic Ocean, Indian Ocean, Antarctic or Southern Ocean, and Arctic Ocean, from largest to smallest. The ocean covers Earth's oceanic crust, but to a lesser extent with shelf seas also shelves of the continental crust. The oceanic crust forms large oceanic basins with features like abyssal plains, seamounts, submarine volcanoes, oceanic trenches, submarine canyons, oceanic plateaus, and a globe-spanning mid-ocean ridge system.", "title": "Physical characteristics" }, { "paragraph_id": 23, "text": "At Earth's polar regions, the ocean surface is covered by seasonally variable amounts of sea ice that often connects with polar land, permafrost and ice sheets, forming polar ice caps.", "title": "Physical characteristics" }, { "paragraph_id": 24, "text": "Earth's land covers 29.2%, or 149 million km (58 million sq mi) of Earth's surface. The land surface includes many islands around the globe, but most of the land surface is taken by the four continental landmasses, which are (in descending order): Africa-Eurasia, America (landmass), Antarctica, and Australia (landmass). These landmasses are further broken down and grouped into the continents. The terrain of the land surface varies greatly and consists of mountains, deserts, plains, plateaus, and other landforms. The elevation of the land surface varies from a low point of −418 m (−1,371 ft) at the Dead Sea, to a maximum altitude of 8,848 m (29,029 ft) at the top of Mount Everest. The mean height of land above sea level is about 797 m (2,615 ft).", "title": "Physical characteristics" }, { "paragraph_id": 25, "text": "Land can be covered by surface water, snow, ice, artificial structures or vegetation. Most of Earth's land hosts vegetation, but ice sheets (10%, not including the equally large land under permafrost) or cold as well as hot deserts (33%) occupy also considerable amounts of it.", "title": "Physical characteristics" }, { "paragraph_id": 26, "text": "The pedosphere is the outermost layer of Earth's land surface and is composed of soil and subject to soil formation processes. Soil is crucial for land to be arable. Earth's total arable land is 10.7% of the land surface, with 1.3% being permanent cropland. Earth has an estimated 16.7 million km (6.4 million sq mi) of cropland and 33.5 million km (12.9 million sq mi) of pastureland.", "title": "Physical characteristics" }, { "paragraph_id": 27, "text": "The land surface and the ocean floor form the top of Earth's crust, which together with parts of the upper mantle form Earth's lithosphere. Earth's crust may be divided into oceanic and continental crust. Beneath the ocean-floor sediments, the oceanic crust is predominantly basaltic, while the continental crust may include lower density materials such as granite, sediments and metamorphic rocks. Nearly 75% of the continental surfaces are covered by sedimentary rocks, although they form about 5% of the mass of the crust.", "title": "Physical characteristics" }, { "paragraph_id": 28, "text": "Earth's surface topography comprises both the topography of the ocean surface, and the shape of Earth's land surface. 
The submarine terrain of the ocean floor has an average bathymetric depth of 4 km, and is as varied as the terrain above sea level.", "title": "Physical characteristics" }, { "paragraph_id": 29, "text": "Earth's surface is continually being shaped by internal plate tectonic processes including earthquakes and volcanism; by weathering and erosion driven by ice, water, wind and temperature; and by biological processes including the growth and decomposition of biomass into soil.", "title": "Physical characteristics" }, { "paragraph_id": 30, "text": "Earth's mechanically rigid outer layer of Earth's crust and upper mantle, the lithosphere, is divided into tectonic plates. These plates are rigid segments that move relative to each other at one of three boundaries types: at convergent boundaries, two plates come together; at divergent boundaries, two plates are pulled apart; and at transform boundaries, two plates slide past one another laterally. Along these plate boundaries, earthquakes, volcanic activity, mountain-building, and oceanic trench formation can occur. The tectonic plates ride on top of the asthenosphere, the solid but less-viscous part of the upper mantle that can flow and move along with the plates.", "title": "Physical characteristics" }, { "paragraph_id": 31, "text": "As the tectonic plates migrate, oceanic crust is subducted under the leading edges of the plates at convergent boundaries. At the same time, the upwelling of mantle material at divergent boundaries creates mid-ocean ridges. The combination of these processes recycles the oceanic crust back into the mantle. Due to this recycling, most of the ocean floor is less than 100 Ma old. The oldest oceanic crust is located in the Western Pacific and is estimated to be 200 Ma old. By comparison, the oldest dated continental crust is 4,030 Ma, although zircons have been found preserved as clasts within Eoarchean sedimentary rocks that give ages up to 4,400 Ma, indicating that at least some continental crust existed at that time.", "title": "Physical characteristics" }, { "paragraph_id": 32, "text": "The seven major plates are the Pacific, North American, Eurasian, African, Antarctic, Indo-Australian, and South American. Other notable plates include the Arabian Plate, the Caribbean Plate, the Nazca Plate off the west coast of South America and the Scotia Plate in the southern Atlantic Ocean. The Australian Plate fused with the Indian Plate between 50 and 55 Ma. The fastest-moving plates are the oceanic plates, with the Cocos Plate advancing at a rate of 75 mm/a (3.0 in/year) and the Pacific Plate moving 52–69 mm/a (2.0–2.7 in/year). At the other extreme, the slowest-moving plate is the South American Plate, progressing at a typical rate of 10.6 mm/a (0.42 in/year).", "title": "Physical characteristics" }, { "paragraph_id": 33, "text": "Earth's interior, like that of the other terrestrial planets, is divided into layers by their chemical or physical (rheological) properties. The outer layer is a chemically distinct silicate solid crust, which is underlain by a highly viscous solid mantle. The crust is separated from the mantle by the Mohorovičić discontinuity. The thickness of the crust varies from about 6 kilometers (3.7 mi) under the oceans to 30–50 km (19–31 mi) for the continents. 
The crust and the cold, rigid, top of the upper mantle are collectively known as the lithosphere, which is divided into independently moving tectonic plates.", "title": "Physical characteristics" }, { "paragraph_id": 34, "text": "Beneath the lithosphere is the asthenosphere, a relatively low-viscosity layer on which the lithosphere rides. Important changes in crystal structure within the mantle occur at 410 and 660 km (250 and 410 mi) below the surface, spanning a transition zone that separates the upper and lower mantle. Beneath the mantle, an extremely low viscosity liquid outer core lies above a solid inner core. Earth's inner core may be rotating at a slightly higher angular velocity than the remainder of the planet, advancing by 0.1–0.5° per year, although both somewhat higher and much lower rates have also been proposed. The radius of the inner core is about one-fifth of that of Earth. Density increases with depth, as described in the table on the right.", "title": "Physical characteristics" }, { "paragraph_id": 35, "text": "Among the Solar System's planetary-sized objects Earth is the object with the highest density.", "title": "Physical characteristics" }, { "paragraph_id": 36, "text": "Earth's mass is approximately 5.97×10 kg (5,970 Yg). It is composed mostly of iron (32.1% by mass), oxygen (30.1%), silicon (15.1%), magnesium (13.9%), sulfur (2.9%), nickel (1.8%), calcium (1.5%), and aluminum (1.4%), with the remaining 1.2% consisting of trace amounts of other elements. Due to gravitational separation, the core is primarily composed of the denser elements: iron (88.8%), with smaller amounts of nickel (5.8%), sulfur (4.5%), and less than 1% trace elements. The most common rock constituents of the crust are oxides. Over 99% of the crust is composed of various oxides of eleven elements, principally oxides containing silicon (the silicate minerals), aluminum, iron, calcium, magnesium, potassium, or sodium.", "title": "Physical characteristics" }, { "paragraph_id": 37, "text": "The major heat-producing isotopes within Earth are potassium-40, uranium-238, and thorium-232. At the center, the temperature may be up to 6,000 °C (10,830 °F), and the pressure could reach 360 GPa (52 million psi). Because much of the heat is provided by radioactive decay, scientists postulate that early in Earth's history, before isotopes with short half-lives were depleted, Earth's heat production was much higher. At approximately 3 Gyr, twice the present-day heat would have been produced, increasing the rates of mantle convection and plate tectonics, and allowing the production of uncommon igneous rocks such as komatiites that are rarely formed today.", "title": "Physical characteristics" }, { "paragraph_id": 38, "text": "The mean heat loss from Earth is 87 mW m, for a global heat loss of 4.42×10 W. A portion of the core's thermal energy is transported toward the crust by mantle plumes, a form of convection consisting of upwellings of higher-temperature rock. These plumes can produce hotspots and flood basalts. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. The final major mode of heat loss is through conduction through the lithosphere, the majority of which occurs under the oceans because the crust there is much thinner than that of the continents.", "title": "Physical characteristics" }, { "paragraph_id": 39, "text": "The gravity of Earth is the acceleration that is imparted to objects due to the distribution of mass within Earth. 
Near Earth's surface, gravitational acceleration is approximately 9.8 m/s (32 ft/s). Local differences in topography, geology, and deeper tectonic structure cause local and broad regional differences in Earth's gravitational field, known as gravity anomalies.", "title": "Physical characteristics" }, { "paragraph_id": 40, "text": "The main part of Earth's magnetic field is generated in the core, the site of a dynamo process that converts the kinetic energy of thermally and compositionally driven convection into electrical and magnetic field energy. The field extends outwards from the core, through the mantle, and up to Earth's surface, where it is, approximately, a dipole. The poles of the dipole are located close to Earth's geographic poles. At the equator of the magnetic field, the magnetic-field strength at the surface is 3.05×10 T, with a magnetic dipole moment of 7.79×10 Am at epoch 2000, decreasing nearly 6% per century (although it still remains stronger than its long time average). The convection movements in the core are chaotic; the magnetic poles drift and periodically change alignment. This causes secular variation of the main field and field reversals at irregular intervals averaging a few times every million years. The most recent reversal occurred approximately 700,000 years ago.", "title": "Physical characteristics" }, { "paragraph_id": 41, "text": "The extent of Earth's magnetic field in space defines the magnetosphere. Ions and electrons of the solar wind are deflected by the magnetosphere; solar wind pressure compresses the dayside of the magnetosphere, to about 10 Earth radii, and extends the nightside magnetosphere into a long tail. Because the velocity of the solar wind is greater than the speed at which waves propagate through the solar wind, a supersonic bow shock precedes the dayside magnetosphere within the solar wind. Charged particles are contained within the magnetosphere; the plasmasphere is defined by low-energy particles that essentially follow magnetic field lines as Earth rotates. The ring current is defined by medium-energy particles that drift relative to the geomagnetic field, but with paths that are still dominated by the magnetic field, and the Van Allen radiation belts are formed by high-energy particles whose motion is essentially random, but contained in the magnetosphere.", "title": "Physical characteristics" }, { "paragraph_id": 42, "text": "During magnetic storms and substorms, charged particles can be deflected from the outer magnetosphere and especially the magnetotail, directed along field lines into Earth's ionosphere, where atmospheric atoms can be excited and ionized, causing the aurora.", "title": "Physical characteristics" }, { "paragraph_id": 43, "text": "Earth's rotation period relative to the Sun—its mean solar day—is 86,400 seconds of mean solar time (86,400.0025 SI seconds). Because Earth's solar day is now slightly longer than it was during the 19th century due to tidal deceleration, each day varies between 0 and 2 ms longer than the mean solar day.", "title": "Orbit and rotation" }, { "paragraph_id": 44, "text": "Earth's rotation period relative to the fixed stars, called its stellar day by the International Earth Rotation and Reference Systems Service (IERS), is 86,164.0989 seconds of mean solar time (UT1), or 23 56 4.0989. Earth's rotation period relative to the precessing or moving mean March equinox (when the Sun is at 90° on the equator), is 86,164.0905 seconds of mean solar time (UT1) (23 56 4.0905). 
Thus the sidereal day is shorter than the stellar day by about 8.4 ms.", "title": "Orbit and rotation" }, { "paragraph_id": 45, "text": "Apart from meteors within the atmosphere and low-orbiting satellites, the main apparent motion of celestial bodies in Earth's sky is to the west at a rate of 15°/h = 15'/min. For bodies near the celestial equator, this is equivalent to an apparent diameter of the Sun or the Moon every two minutes; from Earth's surface, the apparent sizes of the Sun and the Moon are approximately the same.", "title": "Orbit and rotation" }, { "paragraph_id": 46, "text": "Earth orbits the Sun, making Earth the third-closest planet to the Sun and part of the inner Solar System. Earth's average orbital distance is about 150 million km (93 million mi), which is the basis for the Astronomical Unit and is equal to roughly 8.3 light minutes or 380 times Earth's distance to the Moon.", "title": "Orbit and rotation" }, { "paragraph_id": 47, "text": "Earth orbits the Sun every 365.2564 mean solar days, or one sidereal year. With an apparent movement of the Sun in Earth's sky at a rate of about 1°/day eastward, which is one apparent Sun or Moon diameter every 12 hours. Due to this motion, on average it takes 24 hours—a solar day—for Earth to complete a full rotation about its axis so that the Sun returns to the meridian.", "title": "Orbit and rotation" }, { "paragraph_id": 48, "text": "The orbital speed of Earth averages about 29.78 km/s (107,200 km/h; 66,600 mph), which is fast enough to travel a distance equal to Earth's diameter, about 12,742 km (7,918 mi), in seven minutes, and the distance to the Moon, 384,000 km (239,000 mi), in about 3.5 hours.", "title": "Orbit and rotation" }, { "paragraph_id": 49, "text": "The Moon and Earth orbit a common barycenter every 27.32 days relative to the background stars. When combined with the Earth–Moon system's common orbit around the Sun, the period of the synodic month, from new moon to new moon, is 29.53 days. Viewed from the celestial north pole, the motion of Earth, the Moon, and their axial rotations are all counterclockwise. Viewed from a vantage point above the Sun and Earth's north poles, Earth orbits in a counterclockwise direction about the Sun. The orbital and axial planes are not precisely aligned: Earth's axis is tilted some 23.44 degrees from the perpendicular to the Earth–Sun plane (the ecliptic), and the Earth-Moon plane is tilted up to ±5.1 degrees against the Earth–Sun plane. Without this tilt, there would be an eclipse every two weeks, alternating between lunar eclipses and solar eclipses.", "title": "Orbit and rotation" }, { "paragraph_id": 50, "text": "The Hill sphere, or the sphere of gravitational influence, of Earth is about 1.5 million km (930,000 mi) in radius. This is the maximum distance at which Earth's gravitational influence is stronger than the more distant Sun and planets. Objects must orbit Earth within this radius, or they can become unbound by the gravitational perturbation of the Sun. Earth, along with the Solar System, is situated in the Milky Way and orbits about 28,000 light-years from its center. It is about 20 light-years above the galactic plane in the Orion Arm.", "title": "Orbit and rotation" }, { "paragraph_id": 51, "text": "The axial tilt of Earth is approximately 23.439281° with the axis of its orbit plane, always pointing towards the Celestial Poles. Due to Earth's axial tilt, the amount of sunlight reaching any given point on the surface varies over the course of the year. 
This causes the seasonal change in climate, with summer in the Northern Hemisphere occurring when the Tropic of Cancer is facing the Sun, and in the Southern Hemisphere when the Tropic of Capricorn faces the Sun. In each instance, winter occurs simultaneously in the opposite hemisphere.", "title": "Orbit and rotation" }, { "paragraph_id": 52, "text": "During the summer, the day lasts longer, and the Sun climbs higher in the sky. In winter, the climate becomes cooler and the days shorter. Above the Arctic Circle and below the Antarctic Circle there is no daylight at all for part of the year, causing a polar night, and this night extends for several months at the poles themselves. These same latitudes also experience a midnight sun, where the sun remains visible all day.", "title": "Orbit and rotation" }, { "paragraph_id": 53, "text": "By astronomical convention, the four seasons can be determined by the solstices—the points in the orbit of maximum axial tilt toward or away from the Sun—and the equinoxes, when Earth's rotational axis is aligned with its orbital axis. In the Northern Hemisphere, winter solstice currently occurs around 21 December; summer solstice is near 21 June, spring equinox is around 20 March and autumnal equinox is about 22 or 23 September. In the Southern Hemisphere, the situation is reversed, with the summer and winter solstices exchanged and the spring and autumnal equinox dates swapped.", "title": "Orbit and rotation" }, { "paragraph_id": 54, "text": "The angle of Earth's axial tilt is relatively stable over long periods of time. Its axial tilt does undergo nutation; a slight, irregular motion with a main period of 18.6 years. The orientation (rather than the angle) of Earth's axis also changes over time, precessing around in a complete circle over each 25,800-year cycle; this precession is the reason for the difference between a sidereal year and a tropical year. Both of these motions are caused by the varying attraction of the Sun and the Moon on Earth's equatorial bulge. The poles also migrate a few meters across Earth's surface. This polar motion has multiple, cyclical components, which collectively are termed quasiperiodic motion. In addition to an annual component to this motion, there is a 14-month cycle called the Chandler wobble. Earth's rotational velocity also varies in a phenomenon known as length-of-day variation.", "title": "Orbit and rotation" }, { "paragraph_id": 55, "text": "In modern times, Earth's perihelion occurs around 3 January, and its aphelion around 4 July. These dates change over time due to precession and other orbital factors, which follow cyclical patterns known as Milankovitch cycles. The changing Earth–Sun distance causes an increase of about 6.8% in solar energy reaching Earth at perihelion relative to aphelion. Because the Southern Hemisphere is tilted toward the Sun at about the same time that Earth reaches the closest approach to the Sun, the Southern Hemisphere receives slightly more energy from the Sun than does the northern over the course of a year. This effect is much less significant than the total energy change due to the axial tilt, and most of the excess energy is absorbed by the higher proportion of water in the Southern Hemisphere.", "title": "Orbit and rotation" }, { "paragraph_id": 56, "text": "The Moon is a relatively large, terrestrial, planet-like natural satellite, with a diameter about one-quarter of Earth's. 
It is the largest moon in the Solar System relative to the size of its planet, although Charon is larger relative to the dwarf planet Pluto. The natural satellites of other planets are also referred to as \"moons\", after Earth's. The most widely accepted theory of the Moon's origin, the giant-impact hypothesis, states that it formed from the collision of a Mars-size protoplanet called Theia with the early Earth. This hypothesis explains the Moon's relative lack of iron and volatile elements and the fact that its composition is nearly identical to that of Earth's crust.", "title": "Earth–Moon system" }, { "paragraph_id": 57, "text": "The gravitational attraction between Earth and the Moon causes tides on Earth. The same effect on the Moon has led to its tidal locking: its rotation period is the same as the time it takes to orbit Earth. As a result, it always presents the same face to the planet. As the Moon orbits Earth, different parts of its face are illuminated by the Sun, leading to the lunar phases. Due to their tidal interaction, the Moon recedes from Earth at the rate of approximately 38 mm/a (1.5 in/year). Over millions of years, these tiny modifications—and the lengthening of Earth's day by about 23 µs/yr—add up to significant changes. During the Ediacaran period, for example, (approximately 620 Ma) there were 400±7 days in a year, with each day lasting 21.9±0.4 hours.", "title": "Earth–Moon system" }, { "paragraph_id": 58, "text": "The Moon may have dramatically affected the development of life by moderating the planet's climate. Paleontological evidence and computer simulations show that Earth's axial tilt is stabilized by tidal interactions with the Moon. Some theorists think that without this stabilization against the torques applied by the Sun and planets to Earth's equatorial bulge, the rotational axis might be chaotically unstable, exhibiting large changes over millions of years, as is the case for Mars, though this is disputed.", "title": "Earth–Moon system" }, { "paragraph_id": 59, "text": "Viewed from Earth, the Moon is just far enough away to have almost the same apparent-sized disk as the Sun. The angular size (or solid angle) of these two bodies match because, although the Sun's diameter is about 400 times as large as the Moon's, it is also 400 times more distant. This allows total and annular solar eclipses to occur on Earth.", "title": "Earth–Moon system" }, { "paragraph_id": 60, "text": "On 1 November 2023, scientists reported that, according to computer simulations, remnants of a protoplanet, named Theia, could be inside the Earth, left over from a collision with the Earth in ancient times, and afterwards becoming the Moon.", "title": "Earth–Moon system" }, { "paragraph_id": 61, "text": "Earth's co-orbital asteroids population consists of quasi-satellites, objects with a horseshoe orbit and trojans. There are at least five quasi-satellites, including 469219 Kamoʻoalewa. A trojan asteroid companion, 2010 TK7, is librating around the leading Lagrange triangular point, L4, in Earth's orbit around the Sun. The tiny near-Earth asteroid 2006 RH120 makes close approaches to the Earth–Moon system roughly every twenty years. During these approaches, it can orbit Earth for brief periods of time.", "title": "Earth–Moon system" }, { "paragraph_id": 62, "text": "As of September 2021, there are 4,550 operational, human-made satellites orbiting Earth. 
There are also inoperative satellites, including Vanguard 1, the oldest satellite currently in orbit, and over 16,000 pieces of tracked space debris. Earth's largest artificial satellite is the International Space Station.", "title": "Earth–Moon system" }, { "paragraph_id": 63, "text": "Earth's hydrosphere is the sum of Earth's water and its distribution. Most of Earth's hydrosphere consists of Earth's global ocean. Earth's hydrosphere also consists of water in the atmosphere and on land, including clouds, inland seas, lakes, rivers, and underground waters down to a depth of 2,000 m (6,600 ft).", "title": "Hydrosphere" }, { "paragraph_id": 64, "text": "The mass of the oceans is approximately 1.35×10 metric tons or about 1/4400 of Earth's total mass. The oceans cover an area of 361.8 million km (139.7 million sq mi) with a mean depth of 3,682 m (12,080 ft), resulting in an estimated volume of 1.332 billion km (320 million cu mi). If all of Earth's crustal surface were at the same elevation as a smooth sphere, the depth of the resulting world ocean would be 2.7 to 2.8 km (1.68 to 1.74 mi). About 97.5% of the water is saline; the remaining 2.5% is fresh water. Most fresh water, about 68.7%, is present as ice in ice caps and glaciers. The remaining 30% is ground water, 1% surface water (covering only 2.8% of Earth's land) and other small forms of fresh water deposits such as permafrost, water vapor in the atmosphere, biological binding, etc. .", "title": "Hydrosphere" }, { "paragraph_id": 65, "text": "In Earth's coldest regions, snow survives over the summer and changes into ice. This accumulated snow and ice eventually forms into glaciers, bodies of ice that flow under the influence of their own gravity. Alpine glaciers form in mountainous areas, whereas vast ice sheets form over land in polar regions. The flow of glaciers erodes the surface changing it dramatically, with the formation of U-shaped valleys and other landforms. Sea ice in the Arctic covers an area about as big as the United States, although it is quickly retreating as a consequence of climate change.", "title": "Hydrosphere" }, { "paragraph_id": 66, "text": "The average salinity of Earth's oceans is about 35 grams of salt per kilogram of seawater (3.5% salt). Most of this salt was released from volcanic activity or extracted from cool igneous rocks. The oceans are also a reservoir of dissolved atmospheric gases, which are essential for the survival of many aquatic life forms. Sea water has an important influence on the world's climate, with the oceans acting as a large heat reservoir. Shifts in the oceanic temperature distribution can cause significant weather shifts, such as the El Niño–Southern Oscillation.", "title": "Hydrosphere" }, { "paragraph_id": 67, "text": "The abundance of water, particularly liquid water, on Earth's surface is a unique feature that distinguishes it from other planets in the Solar System. Solar System planets with considerable atmospheres do partly host atmospheric water vapor, but they lack surface conditions for stable surface water. Despite some moons showing signs of large reservoirs of extraterrestrial liquid water, with possibly even more volume than Earth's ocean, all of them are large bodies of water under a kilometers thick frozen surface layer.", "title": "Hydrosphere" }, { "paragraph_id": 68, "text": "The atmospheric pressure at Earth's sea level averages 101.325 kPa (14.696 psi), with a scale height of about 8.5 km (5.3 mi). 
A dry atmosphere is composed of 78.084% nitrogen, 20.946% oxygen, 0.934% argon, and trace amounts of carbon dioxide and other gaseous molecules. Water vapor content varies between 0.01% and 4% but averages about 1%. Clouds cover around two thirds of Earth's surface, more so over oceans than land. The height of the troposphere varies with latitude, ranging between 8 km (5 mi) at the poles to 17 km (11 mi) at the equator, with some variation resulting from weather and seasonal factors.", "title": "Atmosphere" }, { "paragraph_id": 69, "text": "Earth's biosphere has significantly altered its atmosphere. Oxygenic photosynthesis evolved 2.7 Gya, forming the primarily nitrogen–oxygen atmosphere of today. This change enabled the proliferation of aerobic organisms and, indirectly, the formation of the ozone layer due to the subsequent conversion of atmospheric O2 into O3. The ozone layer blocks ultraviolet solar radiation, permitting life on land. Other atmospheric functions important to life include transporting water vapor, providing useful gases, causing small meteors to burn up before they strike the surface, and moderating temperature. This last phenomenon is the greenhouse effect: trace molecules within the atmosphere serve to capture thermal energy emitted from the surface, thereby raising the average temperature. Water vapor, carbon dioxide, methane, nitrous oxide, and ozone are the primary greenhouse gases in the atmosphere. Without this heat-retention effect, the average surface temperature would be −18 °C (0 °F), in contrast to the current +15 °C (59 °F), and life on Earth probably would not exist in its current form.", "title": "Atmosphere" }, { "paragraph_id": 70, "text": "Earth's atmosphere has no definite boundary, gradually becoming thinner and fading into outer space. Three-quarters of the atmosphere's mass is contained within the first 11 km (6.8 mi) of the surface; this lowest layer is called the troposphere. Energy from the Sun heats this layer, and the surface below, causing expansion of the air. This lower-density air then rises and is replaced by cooler, higher-density air. The result is atmospheric circulation that drives the weather and climate through redistribution of thermal energy.", "title": "Atmosphere" }, { "paragraph_id": 71, "text": "The primary atmospheric circulation bands consist of the trade winds in the equatorial region below 30° latitude and the westerlies in the mid-latitudes between 30° and 60°. Ocean heat content and currents are also important factors in determining climate, particularly the thermohaline circulation that distributes thermal energy from the equatorial oceans to the polar regions.", "title": "Atmosphere" }, { "paragraph_id": 72, "text": "Earth receives 1361 W/m of solar irradiance. The amount of solar energy that reaches Earth's surface decreases with increasing latitude. At higher latitudes, the sunlight reaches the surface at lower angles, and it must pass through thicker columns of the atmosphere. As a result, the mean annual air temperature at sea level decreases by about 0.4 °C (0.7 °F) per degree of latitude from the equator. Earth's surface can be subdivided into specific latitudinal belts of approximately homogeneous climate. 
Ranging from the equator to the polar regions, these are the tropical (or equatorial), subtropical, temperate and polar climates.", "title": "Atmosphere" }, { "paragraph_id": 73, "text": "Further factors that affect a location's climates are its proximity to oceans, the oceanic and atmospheric circulation, and topology. Places close to oceans typically have colder summers and warmer winters, due to the fact that oceans can store large amounts of heat. The wind transports the cold or the heat of the ocean to the land. Atmospheric circulation also plays an important role: San Francisco and Washington DC are both coastal cities at about the same latitude. San Francisco's climate is significantly more moderate as the prevailing wind direction is from sea to land. Finally, temperatures decrease with height causing mountainous areas to be colder than low-lying areas.", "title": "Atmosphere" }, { "paragraph_id": 74, "text": "Water vapor generated through surface evaporation is transported by circulatory patterns in the atmosphere. When atmospheric conditions permit an uplift of warm, humid air, this water condenses and falls to the surface as precipitation. Most of the water is then transported to lower elevations by river systems and usually returned to the oceans or deposited into lakes. This water cycle is a vital mechanism for supporting life on land and is a primary factor in the erosion of surface features over geological periods. Precipitation patterns vary widely, ranging from several meters of water per year to less than a millimeter. Atmospheric circulation, topographic features, and temperature differences determine the average precipitation that falls in each region.", "title": "Atmosphere" }, { "paragraph_id": 75, "text": "The commonly used Köppen climate classification system has five broad groups (humid tropics, arid, humid middle latitudes, continental and cold polar), which are further divided into more specific subtypes. The Köppen system rates regions based on observed temperature and precipitation. Surface air temperature can rise to around 55 °C (131 °F) in hot deserts, such as Death Valley, and can fall as low as −89 °C (−128 °F) in Antarctica.", "title": "Atmosphere" }, { "paragraph_id": 76, "text": "The upper atmosphere, the atmosphere above the troposphere, is usually divided into the stratosphere, mesosphere, and thermosphere. Each layer has a different lapse rate, defining the rate of change in temperature with height. Beyond these, the exosphere thins out into the magnetosphere, where the geomagnetic fields interact with the solar wind. Within the stratosphere is the ozone layer, a component that partially shields the surface from ultraviolet light and thus is important for life on Earth. The Kármán line, defined as 100 km (62 mi) above Earth's surface, is a working definition for the boundary between the atmosphere and outer space.", "title": "Atmosphere" }, { "paragraph_id": 77, "text": "Thermal energy causes some of the molecules at the outer edge of the atmosphere to increase their velocity to the point where they can escape from Earth's gravity. This causes a slow but steady loss of the atmosphere into space. Because unfixed hydrogen has a low molecular mass, it can achieve escape velocity more readily, and it leaks into outer space at a greater rate than other gases. The leakage of hydrogen into space contributes to the shifting of Earth's atmosphere and surface from an initially reducing state to its current oxidizing one. 
Photosynthesis provided a source of free oxygen, but the loss of reducing agents such as hydrogen is thought to have been a necessary precondition for the widespread accumulation of oxygen in the atmosphere. Hence the ability of hydrogen to escape from the atmosphere may have influenced the nature of life that developed on Earth. In the current, oxygen-rich atmosphere most hydrogen is converted into water before it has an opportunity to escape. Instead, most of the hydrogen loss comes from the destruction of methane in the upper atmosphere.", "title": "Atmosphere" }, { "paragraph_id": 78, "text": "Earth is the only place known to have ever been habitable for life. Life developed in Earth's early bodies of water a few hundred million years after the planet formed.", "title": "Life on Earth" }, { "paragraph_id": 79, "text": "Life has shaped and inhabited many particular ecosystems on Earth and eventually expanded globally, forming an overarching biosphere. In doing so, life has significantly altered Earth's atmosphere and surface over long periods of time, causing changes such as the Great Oxidation Event.", "title": "Life on Earth" }, { "paragraph_id": 80, "text": "Life on Earth has greatly diversified over time, allowing the biosphere to contain different biomes, which are inhabited by comparatively similar plants and animals. The different biomes developed at distinct elevations or water depths, at latitudes of differing temperature, and on land also under differing levels of humidity. Earth's species diversity and biomass peak in shallow waters and forests, particularly under equatorial, warm and humid conditions, while freezing polar regions, high altitudes and extremely arid areas are relatively barren of plant and animal life.", "title": "Life on Earth" }, { "paragraph_id": 81, "text": "Earth provides liquid water, an environment where complex organic molecules can assemble and interact, and sufficient energy to sustain a metabolism. Plants and other organisms take up nutrients from water, soils and the atmosphere. These nutrients are constantly recycled between different species.", "title": "Life on Earth" }, { "paragraph_id": 82, "text": "Extreme weather, such as tropical cyclones (including hurricanes and typhoons), occurs over most of Earth's surface and has a large impact on life in those areas. From 1980 to 2000, these events caused an average of 11,800 human deaths per year. Many places are subject to earthquakes, landslides, tsunamis, volcanic eruptions, tornadoes, blizzards, floods, droughts, wildfires, and other calamities and disasters. Human impact is felt in many areas due to pollution of the air and water, acid rain, loss of vegetation (overgrazing, deforestation, desertification), loss of wildlife, species extinction, soil degradation, soil depletion and erosion. Human activities release greenhouse gases into the atmosphere which cause global warming. This is driving changes such as the melting of glaciers and ice sheets, a global rise in average sea levels, increased risk of drought and wildfires, and migration of species to colder areas.", "title": "Life on Earth" }, { "paragraph_id": 83, "text": "Having originated from earlier primates in Eastern Africa 300,000 years ago, humans have since been migrating and, with the advent of agriculture in the 10th millennium BC, increasingly settling Earth's land. 
In the 20th century, Antarctica became the last continent to see a human presence, which remains limited to this day.", "title": "Human geography" }, { "paragraph_id": 84, "text": "Since the 19th century, the human population has grown exponentially, reaching seven billion in the early 2010s, and is projected to peak at around ten billion in the second half of the 21st century. Most of the growth is expected to take place in sub-Saharan Africa.", "title": "Human geography" }, { "paragraph_id": 85, "text": "The distribution and density of the human population varies greatly around the world, with the majority living in southern and eastern Asia and 90% inhabiting the Northern Hemisphere, partly due to the hemispherical predominance of the world's land mass, 68% of which lies in the Northern Hemisphere. Furthermore, since the 19th century humans have increasingly converged into urban areas, where the majority lived by the 21st century.", "title": "Human geography" }, { "paragraph_id": 86, "text": "Beyond Earth's surface, humans have lived only on a temporary basis, in a few special-purpose deep underground and underwater installations and a few space stations. The human population remains virtually entirely on Earth's surface, fully dependent on Earth and the environment it sustains. Since the second half of the 20th century, some hundreds of humans have temporarily stayed beyond Earth, a tiny fraction of whom have reached another celestial body, the Moon.", "title": "Human geography" }, { "paragraph_id": 87, "text": "Earth has been subject to extensive human settlement, and humans have developed diverse societies and cultures. Most of Earth's land has been territorially claimed since the 19th century by sovereign states (countries) separated by political borders, and more than 200 such states exist today, with only parts of Antarctica and a few small regions remaining unclaimed. Most of these states together form the United Nations, the leading worldwide intergovernmental organization, which extends human governance over the ocean and Antarctica, and therefore all of Earth.", "title": "Human geography" }, { "paragraph_id": 88, "text": "Earth has resources that have been exploited by humans. Those termed non-renewable resources, such as fossil fuels, are only replenished over geological timescales. Large deposits of fossil fuels are obtained from Earth's crust, consisting of coal, petroleum, and natural gas. These deposits are used by humans both for energy production and as feedstock for chemical production. Mineral ore bodies have also been formed within the crust through a process of ore genesis, resulting from actions of magmatism, erosion, and plate tectonics. These metals and other elements are extracted by mining, a process which often brings environmental and health damage.", "title": "Human geography" }, { "paragraph_id": 89, "text": "Earth's biosphere produces many useful biological products for humans, including food, wood, pharmaceuticals, oxygen, and the recycling of organic waste. The land-based ecosystem depends upon topsoil and fresh water, and the oceanic ecosystem depends on dissolved nutrients washed down from the land. In 2019, 39 million km² (15 million sq mi) of Earth's land surface consisted of forest and woodlands, 12 million km² (4.6 million sq mi) was shrub and grassland, 40 million km² (15 million sq mi) was used for animal feed production and grazing, and 11 million km² (4.2 million sq mi) was cultivated as croplands. 
Of the 12–14% of ice-free land that is used for croplands, 2 percentage points were irrigated in 2015. Humans use building materials to construct shelters.", "title": "Human geography" }, { "paragraph_id": 90, "text": "Human activities have impacted Earth's environments. Through activities such as the burning of fossil fuels, humans have been increasing the amount of greenhouse gases in the atmosphere, altering Earth's energy budget and climate. It is estimated that global temperatures in the year 2020 were 1.2 °C (2.2 °F) warmer than the preindustrial baseline. This increase in temperature, known as global warming, has contributed to the melting of glaciers, rising sea levels, increased risk of drought and wildfires, and migration of species to colder areas.", "title": "Human geography" }, { "paragraph_id": 91, "text": "The concept of planetary boundaries was introduced to quantify humanity's impact on Earth. Of the nine identified boundaries, five have been crossed: biosphere integrity, climate change, chemical pollution, destruction of wild habitats and the nitrogen cycle are thought to have passed the safe threshold. As of 2018, no country meets the basic needs of its population without transgressing planetary boundaries. It is thought possible to provide all basic physical needs globally within sustainable levels of resource use.", "title": "Human geography" }, { "paragraph_id": 92, "text": "Human cultures have developed many views of the planet. The standard astronomical symbols of Earth are a quartered circle, ⊕, representing the four corners of the world, and a globus cruciger, ♁. Earth is sometimes personified as a deity. In many cultures it is a mother goddess that is also the primary fertility deity. Creation myths in many religions involve the creation of Earth by a supernatural deity or deities. The Gaia hypothesis, developed in the mid-20th century, described Earth's environments and life as a single self-regulating organism, leading to broad stabilization of the conditions of habitability.", "title": "Cultural and historical viewpoint" }, { "paragraph_id": 93, "text": "Images of Earth taken from space, particularly during the Apollo program, have been credited with altering the way that people viewed the planet that they lived on, called the overview effect, emphasizing its beauty, uniqueness and apparent fragility. In particular, this caused a realization of the scope of effects from human activity on Earth's environment. Enabled by science, particularly Earth observation, humans have started to take action on environmental issues globally, acknowledging the impact of humans and the interconnectedness of Earth's environments.", "title": "Cultural and historical viewpoint" }, { "paragraph_id": 94, "text": "Scientific investigation has resulted in several culturally transformative shifts in people's view of the planet. Initial belief in a flat Earth was gradually displaced in Ancient Greece by the idea of a spherical Earth, which was attributed to both the philosophers Pythagoras and Parmenides. Earth was generally believed to be the center of the universe until the 16th century, when scientists first concluded that it was a moving object, one of the planets of the Solar System.", "title": "Cultural and historical viewpoint" }, { "paragraph_id": 95, "text": "It was only during the 19th century that geologists realized Earth's age was at least many millions of years. 
Lord Kelvin used thermodynamics to estimate the age of Earth to be between 20 million and 400 million years in 1864, sparking a vigorous debate on the subject; it was only when radioactivity and radioactive dating were discovered in the late 19th and early 20th centuries that a reliable mechanism for determining Earth's age was established, proving the planet to be billions of years old.", "title": "Cultural and historical viewpoint" } ]
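As a worked check of the −18 °C figure quoted in the Atmosphere paragraphs above, here is a minimal sketch of the standard zero-dimensional energy-balance estimate, using the quoted solar irradiance S = 1361 W/m² and assuming a Bond albedo of A ≈ 0.3 (the albedo value is an assumption, not stated in the text):

\[ T_{\mathrm{eff}} = \left( \frac{S(1-A)}{4\sigma} \right)^{1/4} = \left( \frac{1361 \times 0.7}{4 \times 5.67 \times 10^{-8}} \right)^{1/4} \approx 255\ \mathrm{K} \approx -18\ ^{\circ}\mathrm{C}. \]

The factor of 4 is the ratio of Earth's total surface area to the cross-sectional disc intercepting sunlight; the roughly 33 °C gap between this effective temperature and the observed +15 °C average is the greenhouse effect described above.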
Earth is the third planet from the Sun and the only astronomical object known to harbor life. This is enabled by Earth being a water world, the only one in the Solar System sustaining liquid surface water. Almost all of Earth's water is contained in its global ocean, covering 70.8% of Earth's crust. The remaining 29.2% of Earth's crust is land, most of which is located in the form of continental landmasses within one hemisphere, Earth's land hemisphere. Most of Earth's land is somewhat humid and covered by vegetation, while large sheets of ice at Earth's polar deserts retain more water than Earth's groundwater, lakes, rivers and atmospheric water combined. Earth's crust consists of slowly moving tectonic plates, which interact to produce mountain ranges, volcanoes, and earthquakes. Earth has a liquid outer core that generates a magnetosphere capable of deflecting most of the destructive solar winds and cosmic radiation. Earth has a dynamic atmosphere, which sustains Earth's surface conditions and protects it from most meteoroids and UV-light at entry. It has a composition of primarily nitrogen and oxygen. Water vapor is widely present in the atmosphere, forming clouds that cover most of the planet. The water vapor acts as a greenhouse gas and, together with other greenhouse gases in the atmosphere, particularly carbon dioxide (CO2), creates the conditions for both liquid surface water and water vapor to persist via the capturing of energy from the Sun's light. This process maintains the current average surface temperature of 14.76 °C, at which water is liquid under atmospheric pressure. Differences in the amount of captured energy between geographic regions (as with the equatorial region receiving more sunlight than the polar regions) drive atmospheric and ocean currents, producing a global climate system with different climate regions, and a range of weather phenomena such as precipitation, allowing components such as nitrogen to cycle. Earth is rounded into an ellipsoid with a circumference of about 40,000 km. It is the densest planet in the Solar System. Of the four rocky planets, it is the largest and most massive. Earth is about eight light-minutes away from the Sun and orbits it, taking a year (about 365.25 days) to complete one revolution. Earth rotates around its own axis in slightly less than a day (in about 23 hours and 56 minutes). Earth's axis of rotation is tilted with respect to the perpendicular to its orbital plane around the Sun, producing seasons. Earth is orbited by one permanent natural satellite, the Moon, which orbits Earth at 384,400 km (1.28 light seconds) and is roughly a quarter as wide as Earth. The Moon's gravity helps stabilize Earth's axis, and also causes tides which gradually slow Earth's rotation. As a result of tidal locking, the same side of the Moon always faces Earth. Earth, like most other bodies in the Solar System, formed 4.5 billion years ago from gas in the early Solar System. During the first billion years of Earth's history, the ocean formed and then life developed within it. Life spread globally and has been altering Earth's atmosphere and surface, leading to the Great Oxidation Event two billion years ago. Humans emerged 300,000 years ago in Africa and have spread across every continent on Earth with the exception of Antarctica. Humans depend on Earth's biosphere and natural resources for their survival, but have increasingly impacted the planet's environment. 
Humanity's current impact on Earth's climate and biosphere is unsustainable, threatening the livelihood of humans and many other forms of life, and causing widespread extinctions.
2001-11-06T03:00:43Z
2023-12-26T21:27:41Z
[ "Template:Further", "Template:Cite encyclopedia", "Template:Solar System", "Template:Navboxes", "Template:IPA", "Template:As of", "Template:E", "Template:En dash", "Template:Pp-semi-indef", "Template:Pp-move", "Template:Infobox planet", "Template:Lang", "Template:Cite book", "Template:Cite news", "Template:Earth", "Template:IPA-el", "Template:Nowrap", "Template:Columns list", "Template:Cite web", "Template:See also", "Template:Mpl", "Template:OED", "Template:Cbignore", "Template:Redirect", "Template:Featured article", "Template:Use American English", "Template:Linktext", "Template:Refn", "Template:Short description", "Template:Use dmy dates", "Template:IPAc-en", "Template:Main", "Template:Convert", "Template:Cite journal", "Template:Val", "Template:Better source needed", "Template:Hlist", "Template:Spoken Wikipedia", "Template:Subject bar", "Template:Authority control", "Template:Anchor", "Template:Chem2", "Template:Multiple image", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Earth
9,230
English Channel
The English Channel, also known as the Channel, is an arm of the Atlantic Ocean that separates Southern England from northern France. It links to the southern part of the North Sea through the Strait of Dover at its northeastern end. It is the busiest shipping area in the world. It is about 560 kilometres (300 nautical miles; 350 statute miles) long and varies in width from 240 km (130 nmi; 150 mi) at its widest to 34 km (18 nmi; 21 mi) at its narrowest in the Strait of Dover. It is the smallest of the shallow seas around the continental shelf of Europe, covering an area of some 75,000 square kilometres (22,000 square nautical miles; 29,000 square miles). The Channel was a key factor in Britain becoming a naval superpower and has been utilised by Britain as a natural defence mechanism to halt attempted invasions, such as in the Napoleonic Wars and in the Second World War. The population around the English Channel is predominantly located on the English coast and the major languages spoken in this region are English and French. The name first appears in Roman sources as Oceanus Britannicus (or Mare Britannicum, meaning the British Ocean or British Sea). Variations of this term were used by influential writers such as Ptolemy, and remained popular with British and continental authors well into the modern era. Other Latin names for the sea include Oceanus Gallicus (the Gaulish Ocean) which was used by Isidore of Seville in the sixth century. The term British Sea is still used by speakers of Cornish and Breton, with the sea known to them as Mor Bretannek and Mor Breizh respectively. While it is likely that these names derive from the Latin term, it is possible that they predate the arrival of the Romans in the area. The modern Welsh is often given as Môr Udd (the Lord's or Prince's Sea); however, this name originally described both the Channel and the North Sea combined. Anglo-Saxon texts make reference to the sea as Sūð-sǣ (South Sea), but this term fell out of favour, as later English authors followed the same conventions as their Latin and Norman contemporaries. One English name that did persist was the Narrow Seas, a collective term for the channel and North Sea. As England (followed by Great Britain and the United Kingdom) claimed sovereignty over the sea, a Royal Navy admiral was appointed with the duty of maintaining control over the two seas. The office was maintained until 1822, when several European nations (including the United Kingdom) adopted a three-mile (4.8 km) limit to territorial waters. The word channel was first recorded in Middle English in the 13th century and was borrowed from the Old French word chanel (a variant form of chenel 'canal'). By the middle of the fifteenth century, an Italian map based on Ptolemy's description named the sea as Britanicus Oceanus nunc Canalites Anglie (British Ocean but now English Channel). The map is possibly the first recorded use of the term English Channel and the description suggests the name had recently been adopted. In the sixteenth century, Dutch maps referred to the sea as the Engelse Kanaal (English Channel) and by the 1590s, William Shakespeare used the word Channel in his history plays of Henry VI, suggesting that by that time, the name was popularly understood by English people. By the eighteenth century, the name English Channel was in common usage in England. Following the Acts of Union 1707, this was replaced in official maps and documents with British Channel or British Sea for much of the next century. 
However, the term English Channel remained popular and was finally in official usage by the nineteenth century. The French name la Manche has been used since at least the 17th century. The name is usually said to refer to the sleeve (French: la manche) shape of the Channel. Folk etymology has derived it from a Celtic word meaning 'channel' that is also the source of the name for the Minch in Scotland, but this name is not attested before the 17th century, and French and British sources of that time are clear about its etymology. The name in French has been directly adapted in other languages as either a calque, such as Canale della Manica in Italian, or a direct borrowing, such as Canal de la Mancha in Spanish. The International Hydrographic Organization defines the limits of the English Channel as: The Strait of Dover (French: Pas de Calais), at the Channel's eastern end, is its narrowest point, while its widest point lies between Lyme Bay and the Gulf of Saint Malo, near its midpoint. Lying well on the continental shelf, it has an average depth of about 120 m (390 ft) at its widest, yet it averages about 45 m (148 ft) between Dover and Calais, its notable sandbank hazard being Goodwin Sands. Eastwards from there the adjoining North Sea reduces to about 26 m (85 ft) across the Broad Fourteens (14 fathoms) where it lies over the southern cusp of the former land bridge between East Anglia and the Low Countries. The North Sea reaches much greater depths east of northern Britain. The Channel descends briefly to 180 m (590 ft) in the submerged valley of Hurd's Deep, 48 km (30 mi) west-northwest of Guernsey. There are several major islands in the Channel, the most notable being the Isle of Wight off the English coast, and the Channel Islands, British Crown Dependencies off the coast of France. The coastline, particularly on the French shore, is deeply indented, with several small islands close to the coastline, including Chausey and Mont Saint-Michel. The Cotentin Peninsula on the French coast juts out into the Channel, with the wide Bay of the Seine (French: Baie de Seine) to its east. On the English side there is a small parallel strait, the Solent, between the Isle of Wight and the mainland. The Celtic Sea is to the west of the Channel. The Channel acts as a funnel that amplifies the tidal range from less than a metre at sea in eastern places to more than 6 metres in the Channel Islands, on the west coast of the Cotentin Peninsula and on the north coast of Brittany during monthly spring tides. The time difference of about six hours between high water at the eastern and western limits of the Channel is indicative of the tidal range being amplified further by resonance. Amphidromic points lie in the Bay of Biscay and, varying more in precise location, in the far south of the North Sea, meaning that both of the associated eastern coasts effectively repel the tides, leaving the Strait of Dover every six hours as the natural bottleneck between the southward tide (surge) of the North Sea and the tide arriving equally from the Atlantic. The Channel does not experience such surges itself, but its existence is necessary to explain the extent of North Sea storm surges, such as those that necessitate the Thames Barrier, the Delta Works and the Zuiderzee Works (the Afsluitdijk and other dams). In the UK Shipping Forecast the Channel is divided into the following areas, from the east: The Channel is of geologically recent origin, having been land for most of the Pleistocene period. 
Before the Devensian glaciation (the most recent glacial period, which ended around 10,000 years ago), Britain and Ireland were part of continental Europe, linked by an unbroken Weald–Artois anticline, a ridge that acted as a natural dam holding back a large freshwater pro-glacial lake in the Doggerland region, now submerged under the North Sea. During this period the North Sea and almost all of the British Isles were covered by ice. The lake was fed by meltwater from the Baltic and from the Caledonian and Scandinavian ice sheets that joined to the north, blocking its exit. The sea level was about 120 m (390 ft) lower than it is today. Then, between 450,000 and 180,000 years ago, at least two catastrophic glacial lake outburst floods breached the Weald–Artois anticline. These contributed to creating some of the deepest parts of the Channel, such as Hurd's Deep. The first flood, around 450,000 years ago, would have lasted for several months, releasing as much as one million cubic metres of water per second. The flood started with large but localised waterfalls over the ridge, which excavated depressions now known as the Fosses Dangeard. The flow eroded the retaining ridge, causing the rock dam to fail and releasing lake water into the Atlantic. After multiple episodes of changing sea level, during which the Fosses Dangeard were largely infilled by various layers of sediment, another catastrophic flood some 180,000 years ago carved a large bedrock-floored valley, the Lobourg Channel, some 500 m wide and 25 m deep, from the southern North Sea basin through the centre of the Straits of Dover and into the English Channel. It left streamlined islands, longitudinal erosional grooves, and other features characteristic of catastrophic megaflood events, still present on the sea floor and now revealed by high-resolution sonar. Through the scoured channel passed a river, the Channel River, which drained the combined Rhine and Thames westwards to the Atlantic. The flooding destroyed the ridge that connected Britain to continental Europe, although a land connection across the southern North Sea would have existed intermittently at later times when periods of glaciation resulted in lowering of sea levels. At the end of the last glacial period, rising sea levels finally severed the last land connection. As a busy shipping lane, the Channel experiences environmental problems following accidents involving ships with toxic cargo and oil spills. Indeed, over 40% of the UK incidents threatening pollution occur in or very near the Channel. One occurrence was the MSC Napoli, which on 18 January 2007 was beached with nearly 1700 tonnes of dangerous cargo in Lyme Bay, a protected World Heritage Site coastline. The ship had been damaged and was en route to Portland Harbour. The English Channel, despite being a busy shipping lane, remains in part a haven for wildlife. Atlantic oceanic species are more common in the westernmost parts of the channel, particularly to the west of Start Point, Devon, but can sometimes be found further east towards Dorset and the Isle of Wight. Seal sightings are becoming more common along the English Channel, with both grey seals and harbour seals recorded frequently. The Channel, which delayed human reoccupation of Great Britain for more than 100,000 years, has in historic times been both an easy entry for seafaring people and a key natural defence, halting invading armies while, in conjunction with control of the North Sea, allowing Britain to blockade the continent. 
The most significant failed invasion threats came when the Dutch and Belgian ports were held by a major continental power, e.g. from the Spanish Armada in 1588, Napoleon during the Napoleonic Wars, and Nazi Germany during World War II. Successful invasions include the Roman conquest of Britain, the Norman Conquest in 1066 and the Glorious Revolution of 1688, while the concentration of excellent harbours in the Western Channel on Britain's south coast made possible the largest amphibious invasion in history, the Normandy Landings in 1944. Channel naval battles include the Battle of the Downs (1639), Battle of Dover (1652), the Battle of Portland (1653) and the Battle of La Hougue (1692). In more peaceful times the Channel served as a link joining shared cultures and political structures, particularly the huge Angevin Empire from 1135 to 1217. For nearly a thousand years, the Channel also provided a link between the Modern Celtic regions and languages of Cornwall and Brittany. Brittany was founded by Britons who fled Cornwall and Devon after Anglo-Saxon encroachment. In Brittany, there is a region known as "Cornouaille" (Cornwall) in French and "Kernev" in Breton. In ancient times there was also a "Domnonia" (Devon) in Brittany as well. In February 1684, ice formed on the sea in a belt 4.8 km (3.0 mi) wide off the coast of Kent and 3.2 km (2.0 mi) wide on the French side. Remnants of a mesolithic boatyard have been found on the Isle of Wight. Wheat was traded across the Channel about 8,000 years ago. "... Sophisticated social networks linked the Neolithic front in southern Europe to the Mesolithic peoples of northern Europe." The Ferriby Boats, Hanson Log Boats and the later Dover Bronze Age Boat could carry a substantial cross-Channel cargo. Diodorus Siculus and Pliny both suggest trade between the rebel Celtic tribes of Armorica and Iron Age Britain flourished. In 55 BC Julius Caesar invaded, claiming that the Britons had aided the Veneti against him the previous year. He was more successful in 54 BC, but Britain was not fully established as part of the Roman Empire until completion of the invasion by Aulus Plautius in 43 AD. A brisk and regular trade began between ports in Roman Gaul and those in Britain. This traffic continued until the end of Roman rule in Britain in 410 AD, after which the early Anglo-Saxons left less clear historical records. In the power vacuum left by the retreating Romans, the Germanic Angles, Saxons, and Jutes began the next great migration across the North Sea. Having already been used as mercenaries in Britain by the Romans, many people from these tribes crossed during the Migration Period, conquering and perhaps displacing the native Celtic populations. The attack on Lindisfarne in 793 is generally considered the beginning of the Viking Age. For the next 250 years the Scandinavian raiders of Norway, Sweden, and Denmark dominated the North Sea, raiding monasteries, homes, and towns along the coast and along the rivers that ran inland. According to the Anglo-Saxon Chronicle they began to settle in Britain in 851. They continued to settle in the British Isles and the continent until around 1050, with some raids recorded along the channel coast of England, including at Wareham, Portland, near Weymouth and along the river Teign in Devon. The fiefdom of Normandy was created for the Viking leader Rollo (also known as Robert of Normandy). 
Rollo had besieged Paris but in 911 entered vassalage to the king of the West Franks Charles the Simple through the Treaty of St.-Claire-sur-Epte. In exchange for his homage and fealty, Rollo legally gained the territory he and his Viking allies had previously conquered. The name "Normandy" reflects Rollo's Viking (i.e. "Northman") origins. The descendants of Rollo and his followers adopted the local Gallo-Romance language and intermarried with the area's inhabitants and became the Normans – a Norman French-speaking mixture of Scandinavians, Hiberno-Norse, Orcadians, Anglo-Danish, and indigenous Franks and Gauls. Rollo's descendant William, Duke of Normandy became king of England in 1066 in the Norman Conquest beginning with the Battle of Hastings, while retaining the fiefdom of Normandy for himself and his descendants. In 1204, during the reign of King John, mainland Normandy was taken from England by France under Philip II, while insular Normandy (the Channel Islands) remained under English control. In 1259, Henry III of England recognised the legality of French possession of mainland Normandy under the Treaty of Paris. His successors, however, often fought to regain control of mainland Normandy. With the rise of William the Conqueror the North Sea and Channel began to lose some of their importance. The new order oriented most of England and Scandinavia's trade south, toward the Mediterranean and the Orient. Although the British surrendered claims to mainland Normandy and other French possessions in 1801, the monarch of the United Kingdom retains the title Duke of Normandy in respect to the Channel Islands. The Channel Islands (except for Chausey) are Crown Dependencies of the British Crown. Thus the Loyal toast in the Channel Islands is Le roi, notre Duc ("The King, our Duke"). The British monarch is understood to not be the Duke of Normandy in regards of the French region of Normandy described herein, by virtue of the Treaty of Paris of 1259, the surrender of French possessions in 1801, and the belief that the rights of succession to that title are subject to Salic Law which excludes inheritance through female heirs. French Normandy was occupied by English forces during the Hundred Years' War in 1346–1360 and again in 1415–1450. From the reign of Elizabeth I, English foreign policy concentrated on preventing invasion across the Channel by ensuring no major European power controlled the potential Dutch and Flemish invasion ports. Her climb to the pre-eminent sea power of the world began in 1588 as the attempted invasion of the Spanish Armada was defeated by the combination of outstanding naval tactics by the English and the Dutch under command of Charles Howard, 1st Earl of Nottingham with Sir Francis Drake second in command, and the following stormy weather. Over the centuries the Royal Navy slowly grew to be the most powerful in the world. The building of the British Empire was possible only because the Royal Navy eventually managed to exercise unquestioned control over the seas around Europe, especially the Channel and the North Sea. During the Seven Years' War, France attempted to launch an invasion of Britain. To achieve this France needed to gain control of the Channel for several weeks, but was thwarted following the British naval victory at the Battle of Quiberon Bay in 1759 and was unsuccessful (The last French landing on English soil being in 1690 with a raid on Teignmouth, although the last French raid on British soil was a raid on Fishguard, Wales in 1797). 
Another significant challenge to British domination of the seas came during the Napoleonic Wars. The Battle of Trafalgar took place off the coast of Spain against a combined French and Spanish fleet and was won by Admiral Horatio Nelson, ending Napoleon's plans for a cross-Channel invasion and securing British dominance of the seas for over a century. The exceptional strategic importance of the Channel as a tool for blockading was recognised by the First Sea Lord Admiral Fisher in the years before World War I. "Five keys lock up the world! Singapore, the Cape, Alexandria, Gibraltar, Dover." However, on 25 July 1909 Louis Blériot made the first Channel crossing from Calais to Dover in an aeroplane. Blériot's crossing signalled a change in the function of the Channel as a barrier-moat for England against foreign enemies. Because the Kaiserliche Marine surface fleet could not match the British Grand Fleet, the Germans developed submarine warfare, which was to become a far greater threat to Britain. The Dover Patrol, set up just before the war started, escorted cross-Channel troopships and prevented submarines from sailing in the Channel, obliging them to travel to the Atlantic via the much longer route around Scotland. On land, the German army attempted to capture French Channel ports in the Race to the Sea but although the trenches are often said to have stretched "from the frontier of Switzerland to the English Channel", they reached the coast at the North Sea. Much of the British war effort in Flanders was a bloody but successful strategy to prevent the Germans reaching the Channel coast. At the outset of the war, an attempt was made to block the path of U-boats through the Dover Strait with naval minefields. By February 1915, this had been augmented by a 25 kilometres (16 mi) stretch of light steel netting called the Dover Barrage, which it was hoped would ensnare submerged submarines. After initial success, the Germans learned how to pass through the barrage, aided by the unreliability of British mines. On 31 January 1917, the Germans resumed unrestricted submarine warfare leading to dire Admiralty predictions that submarines would defeat Britain by November, the most dangerous situation Britain faced in either world war. The Battle of Passchendaele in 1917 was fought to reduce the threat by capturing the submarine bases on the Belgian coast, though it was the introduction of convoys and not capture of the bases that averted defeat. In April 1918 the Dover Patrol carried out the Zeebrugge Raid against the U-boat bases. During 1917, the Dover Barrage was re-sited with improved mines and more effective nets, aided by regular patrols by small warships equipped with powerful searchlights. A German attack on these vessels resulted in the Battle of Dover Strait in 1917. A much more ambitious attempt to improve the barrage, by installing eight massive concrete towers across the strait was called the Admiralty M-N Scheme but only two towers were nearing completion at the end of the war and the project was abandoned. The naval blockade in the Channel and North Sea was one of the decisive factors in the German defeat in 1918. During the Second World War, naval activity in the European theatre was primarily limited to the Atlantic. During the Battle of France in May 1940, the German forces succeeded in capturing both Boulogne and Calais, thereby threatening the line of retreat for the British Expeditionary Force. 
By a combination of hard fighting and German indecision, the port of Dunkirk was kept open, allowing 338,000 Allied troops to be evacuated in Operation Dynamo. More than 11,000 were evacuated from Le Havre during Operation Cycle and a further 192,000 were evacuated from ports further down the coast in Operation Aerial in June 1940. The early stages of the Battle of Britain featured German air attacks on Channel shipping and ports; despite these early successes against shipping, the Germans did not win the air supremacy necessary for Operation Sealion, the projected cross-Channel invasion. The Channel subsequently became the stage for an intensive coastal war, featuring submarines, minesweepers, and Fast Attack Craft. The narrow waters of the Channel were considered too dangerous for major warships until the Normandy Landings, with the exception, for the German Kriegsmarine, of the Channel Dash (Operation Cerberus) in February 1942, and this required the support of the Luftwaffe in Operation Thunderbolt. Dieppe was the site of the ill-fated Dieppe Raid by Canadian and British armed forces. More successful was the later Operation Overlord (D-Day), a massive invasion of German-occupied France by Allied troops. Caen, Cherbourg, Carentan, Falaise and other Norman towns endured many casualties in the fight for the province, which continued until the closing of the so-called Falaise gap between Chambois and Montormel, then the liberation of Le Havre. The Channel Islands were the only part of the British Commonwealth occupied by Germany (excepting the part of Egypt occupied by the Afrika Korps at the time of the Second Battle of El Alamein, which was a protectorate and not part of the Commonwealth). The German occupation of 1940–1945 was harsh, with some island residents being taken for slave labour on the Continent; native Jews sent to concentration camps; partisan resistance and retribution; accusations of collaboration; and slave labour (primarily Russians and eastern Europeans) being brought to the islands to build fortifications. The Royal Navy blockaded the islands from time to time, particularly following the liberation of mainland Normandy in 1944. Intense negotiations resulted in some Red Cross humanitarian aid, but there was considerable hunger and privation during the occupation, particularly in the final months, when the population was close to starvation. The German troops on the islands surrendered on 9 May 1945, a day after the final surrender in mainland Europe. There is significant public concern in the UK about illegal immigrants coming on small boats from France. Since 2018, the English Channel has seen a major increase in the number of crossings. The English Channel coast is far more densely populated on the English shore. The most significant towns and cities along both the English and French sides of the Channel (each with more than 20,000 inhabitants, ranked in descending order; populations are the urban area populations from the 1999 French census, 2001 UK census, and 2001 Jersey census) are as follows: The two dominant cultures are English on the north shore of the Channel, French on the south. However, there are also a number of minority languages that are or were found on the shores and islands of the English Channel, which are listed here, with the Channel's name in the specific language following them. Most other languages tend towards variants of the French and English forms, but notably Welsh has Môr Udd. 
The Channel has traffic on both the UK–Europe and North Sea–Atlantic routes, and is the world's busiest seaway, with over 500 ships per day. Following an accident in January 1971 and a series of disastrous collisions with wreckage in February, the Dover TSS, the world's first radar-controlled traffic separation scheme, was set up by the International Maritime Organization. The scheme mandates that vessels travelling north use the French side and those travelling south the English side. There is a separation zone between the two lanes. In December 2002 the MV Tricolor, carrying £30m of luxury cars, sank 32 km (20 mi) northwest of Dunkirk after a collision in fog with the container ship Kariba. The cargo ship Nicola ran into the wreckage the next day. There was no loss of life. The shore-based long-range traffic control system was updated in 2003 and there is a series of traffic separation systems in operation. Though the system is inherently incapable of reaching the levels of safety obtained from aviation systems such as the traffic collision avoidance system, it has reduced accidents to one or two per year. Marine GPS systems allow ships to be preprogrammed to follow navigational channels accurately and automatically, further avoiding risk of running aground, but following the fatal collision between Dutch Aquamarine and Ash in October 2001, Britain's Marine Accident Investigation Branch (MAIB) issued a safety bulletin saying it believed that in these most unusual circumstances GPS use had actually contributed to the collision. The ships were maintaining a very precise automated course, one directly behind the other, rather than making use of the full width of the traffic lanes as a human navigator would. A combination of radar difficulties in monitoring areas near cliffs, a failure of a CCTV system, incorrect operation of the anchor, the inability of the crew to follow standard procedures of using a GPS to provide early warning of the ship dragging the anchor, and reluctance to admit the mistake and start the engine led to the MV Willy running aground in Cawsand Bay, Cornwall, in January 2002. The MAIB report makes it clear that the harbour controllers were informed of impending disaster by shore observers before the crew were themselves aware. The village of Kingsand was evacuated for three days because of the risk of explosion, and the ship was stranded for 11 days. Ferry routes crossing the English Channel include (or have included): Many travellers cross beneath the Channel using the Channel Tunnel, first proposed in the early 19th century and finally opened in 1994, connecting the UK and France by rail. It is now routine to travel between Paris or Brussels and London on the Eurostar train. Freight trains also use the tunnel. Cars, coaches and lorries are carried on Eurotunnel Shuttle trains between Folkestone and Calais. The coastal resorts of the Channel, such as Brighton and Deauville, inaugurated an era of aristocratic tourism in the early 19th century. Short trips across the Channel for leisure purposes are often referred to as Channel Hopping. The Rampion Wind Farm is an offshore wind farm located in the Channel, off the coast of West Sussex. Other offshore wind farms are planned on the French side of the Channel. As one of the narrowest and most well-known international waterways lacking dangerous currents, the Channel has been the first objective of numerous innovative sea, air, and human-powered crossing technologies. 
Prehistoric people sailed from the mainland to England for millennia. At the end of the last Ice Age, lower sea levels even permitted walking across. Pierre Andriel crossed the English Channel aboard the Élise, formerly the Scottish paddle steamer "Margery", in March 1816, one of the earliest seagoing voyages by steam ship. The paddle steamer Defiance, Captain William Wager, was the first steamer to cross the Channel to Holland, arriving there on 9 May 1816. On 10 June 1821, the English-built paddle steamer Rob Roy became the first passenger ferry to cross the Channel. The steamer was subsequently purchased by the French postal administration, renamed Henri IV and put into regular passenger service a year later. It was able to make the journey across the Straits of Dover in around three hours. In June 1843, because of difficulties with Dover harbour, the South Eastern Railway company developed the Boulogne-sur-Mer-Folkestone route as an alternative to Calais-Dover. The first ferry crossed under the command of Captain Hayward. In 1974 a Welsh coracle piloted by Bernard Thomas of Llechryd crossed the English Channel to France in 13½ hours. The journey was undertaken to demonstrate how the Bull Boats of the Mandan Indians of North Dakota could have been copied from coracles introduced by Prince Madog in the 12th century. The Mountbatten class hovercraft (MCH) entered commercial service in August 1968, initially between Dover and Boulogne but later also Ramsgate (Pegwell Bay) to Calais. The journey time from Dover to Boulogne was roughly 35 minutes, with six trips per day at peak times. The fastest crossing of the English Channel by a commercial car-carrying hovercraft was 22 minutes, recorded by the Princess Anne MCH SR-N4 Mk3 on 14 September 1995. The first aircraft to cross the Channel was a balloon in 1785, piloted by Jean Pierre François Blanchard (France) and John Jeffries (US). Louis Blériot (France) piloted the first airplane to cross in 1909. On 26 September 2008, the Swiss pilot Yves Rossy, known as Jetman, became the first person to cross the English Channel with a jet-powered wing. He jumped from a Pilatus Porter over Calais, France, crossed the Channel, deployed his parachute and landed at Dover. The first flying car to cross the English Channel was a Pégase, designed by the French company Vaylon, on 14 June 2017. It was piloted by the Franco-Italian pilot Bruno Vezzoli. This crossing was carried out as part of the first road and air trip from Paris to London in a flying car. The Pégase is a two-seat, road-approved dune buggy combined with a powered paraglider. Takeoff was at 8:03 a.m. from Ambleteuse in northern France, and landing was at East Studdal, near Dover. The flight was completed in 1 hour and 15 minutes for a total distance of 72.5 km (45.0 mi), including 33.3 km (20.7 mi) over the English Channel at an altitude of 1,240 metres (4,070 ft). On 12 June 1979, the Gossamer Albatross became the first human-powered aircraft to cross the English Channel; it was built by American aeronautical engineer Dr. Paul B. MacCready's company AeroVironment and piloted by Bryan Allen. The 35.7 km (22.2 mi) crossing was completed in 2 hours and 49 minutes. On 4 August 2019, Frenchman Franky Zapata became the first person to cross the English Channel on a jet-powered Flyboard Air. The board was powered by a kerosene-filled backpack. Zapata made the 35.4 km (22.0 mi) journey in 22 minutes, landing on a boat halfway across to refuel. 
The sport of Channel swimming traces its origins to the latter part of the 19th century when Captain Matthew Webb made the first observed and unassisted swim across the Strait of Dover, swimming from England to France on 24–25 August 1875 in 21 hours 45 minutes. Up to 1927, fewer than ten swimmers (including the first woman, Gertrude Ederle in 1926) had managed to successfully swim the English Channel, and many dubious claims had been made. The Channel Swimming Association (CSA) was founded to authenticate and ratify swimmers' claims to have swum the Channel and to verify crossing times. The CSA was dissolved in 1999 and was succeeded by two separate organisations: CSA Ltd (CSA) and the Channel Swimming and Piloting Federation (CSPF); both observe and authenticate cross-Channel swims in the Strait of Dover. The Channel Crossing Association was also set up to cater for unorthodox crossings. The team with the most Channel swims to its credit is the Serpentine Swimming Club in London, followed by the international Sri Chinmoy Marathon Team. As of 2023, 1,881 people had completed 2,428 verified solo crossings under the rules of the CSA and the CSPF. This includes 24 two-way crossings and three three-way crossings. The Strait of Dover is the busiest stretch of water in the world. It is governed by international law as described in the Unorthodox Crossing of the Dover Strait Traffic Separation Scheme, which states: "[In] exceptional cases the French Maritime Authorities may grant authority for unorthodox craft to cross French territorial waters within the Traffic Separation Scheme when these craft set off from the British coast, on condition that the request for authorisation is sent to them with the opinion of the British Maritime Authorities." The fastest verified swim of the Channel was by the Australian Trent Grimsey on 8 September 2012, in 6 hours 55 minutes, beating the previous record, set in 2007. The female record of 7 hours 25 minutes is held by Yvetta Hlavacova of Czechia, set on 5 August 2006. Both records were from England to France. There may have been some unreported swims of the Channel by people intent on entering Britain in circumvention of immigration controls. A failed attempt to cross the Channel by two Syrian refugees in October 2014 came to light when their bodies were discovered on the shores of the North Sea in Norway and the Netherlands. On 16 September 1965, two Amphicars crossed from Dover to Calais. PLUTO was a wartime fuel-delivery project of "pipelines under the ocean" from England to France. Though plagued with technical difficulties during the Battle of Normandy, the pipelines delivered about 8% of the fuel requirements of the Allied forces between D-Day and VE-Day.
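As a rough consistency check of the tidal timing given in the Nature section above, here is a minimal sketch assuming the shallow-water wave speed c = √(gh), the quoted Channel length of about 560 km, and a representative depth of h ≈ 60 m (an assumed intermediate value between the quoted 45 m and 120 m depths):

\[ c = \sqrt{gh} \approx \sqrt{9.81 \times 60} \approx 24\ \mathrm{m/s}, \qquad t \approx \frac{560\,000\ \mathrm{m}}{24\ \mathrm{m/s}} \approx 2.3 \times 10^{4}\ \mathrm{s} \approx 6.5\ \mathrm{h}, \]

which is consistent with the stated difference of about six hours between high water at the Channel's eastern and western limits.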
[ { "paragraph_id": 0, "text": "The English Channel, also known as the Channel, is an arm of the Atlantic Ocean that separates Southern England from northern France. It links to the southern part of the North Sea by the Strait of Dover at its northeastern end. It is the busiest shipping area in the world.", "title": "" }, { "paragraph_id": 1, "text": "It is about 560 kilometres (300 nautical miles; 350 statute miles) long and varies in width from 240 km (130 nmi; 150 mi) at its widest to 34 km (18 nmi; 21 mi) at its narrowest in the Strait of Dover. It is the smallest of the shallow seas around the continental shelf of Europe, covering an area of some 75,000 square kilometres (22,000 square nautical miles; 29,000 square miles).", "title": "" }, { "paragraph_id": 2, "text": "The Channel was a key factor in Britain becoming a naval superpower and has been utilised by Britain as a natural defence mechanism to halt attempted invasions, such as in the Napoleonic Wars and in the Second World War.", "title": "" }, { "paragraph_id": 3, "text": "The population around the English Channel is predominantly located on the English coast and the major languages spoken in this region are English and French.", "title": "" }, { "paragraph_id": 4, "text": "The name first appears in Roman sources as Oceanus Britannicus (or Mare Britannicum, meaning the British Ocean or British Sea). Variations of this term were used by influential writers such as Ptolemy, and remained popular with British and continental authors well into the modern era. Other Latin names for the sea include Oceanus Gallicus (the Gaulish Ocean) which was used by Isidore of Seville in the sixth century.", "title": "Names" }, { "paragraph_id": 5, "text": "The term British Sea is still used by speakers of Cornish and Breton, with the sea known to them as Mor Bretannek and Mor Breizh respectively. While it is likely that these names derive from the Latin term, it is possible that they predate the arrival of the Romans in the area. The modern Welsh is often given as Môr Udd (the Lord's or Prince's Sea); however, this name originally described both the Channel and the North Sea combined.", "title": "Names" }, { "paragraph_id": 6, "text": "Anglo-Saxon texts make reference to the sea as Sūð-sǣ (South Sea), but this term fell out of favour, as later English authors followed the same conventions as their Latin and Norman contemporaries. One English name that did persist was the Narrow Seas, a collective term for the channel and North Sea. As England (followed by Great Britain and the United Kingdom) claimed sovereignty over the sea, a Royal Navy Admiral was appointed with maintaining duties in the two seas. The office was maintained until 1822, when several European nations (including the United Kingdom) adopted a three-mile (4.8 km) limit to territorial waters.", "title": "Names" }, { "paragraph_id": 7, "text": "The word channel was first recorded in Middle English in the 13th century and was borrowed from the Old French word chanel (a variant form of chenel 'canal'). By the middle of the fifteenth century, an Italian map based on Ptolemy's description named the sea as Britanicus Oceanus nunc Canalites Anglie (British Ocean but now English Channel). 
The map is possibly the first recorded use of the term English Channel and the description suggests the name had recently been adopted.", "title": "Names" }, { "paragraph_id": 8, "text": "In the sixteenth century, Dutch maps referred to the sea as the Engelse Kanaal (English Channel) and by the 1590s, William Shakespeare used the word Channel in his history plays of Henry VI, suggesting that by that time, the name was popularly understood by English people.", "title": "Names" }, { "paragraph_id": 9, "text": "By the eighteenth century, the name English Channel was in common usage in England. Following the Acts of Union 1707, this was replaced in official maps and documents with British Channel or British Sea for much of the next century. However, the term English Channel remained popular and was finally in official usage by the nineteenth century.", "title": "Names" }, { "paragraph_id": 10, "text": "The French name la Manche has been used since at least the 17th century. The name is usually said to refer to the sleeve (French: la manche) shape of the Channel. Folk etymology has derived it from a Celtic word meaning 'channel' that is also the source of the name for the Minch in Scotland, but this name is not attested before the 17th century, and French and British sources of that time are clear about its etymology. The name in French has been directly adapted in other languages as either a calque, such as Canale della Manica in Italian, or a direct borrowing, such as Canal de la Mancha in Spanish.", "title": "Names" }, { "paragraph_id": 11, "text": "", "title": "Nature" }, { "paragraph_id": 12, "text": "The International Hydrographic Organization defines the limits of the English Channel as:", "title": "Nature" }, { "paragraph_id": 13, "text": "The Strait of Dover (French: Pas de Calais), at the Channel's eastern end, is its narrowest point, while its widest point lies between Lyme Bay and the Gulf of Saint Malo, near its midpoint. Well on the continental shelf, it has an average depth of about 120 m (390 ft) at its widest; yet averages about 45 m (148 ft) between Dover and Calais, its notable sandbank hazard being Goodwin Sands. Eastwards from there the adjoining North Sea reduces to about 26 m (85 ft) across the Broad Fourteens (14 fathoms) where it lies over the southern cusp of the former land bridge between East Anglia and the Low Countries. The North Sea reaches much greater depths east of northern Britain. The Channel descends briefly to 180 m (590 ft) in the submerged valley of Hurd's Deep, 48 km (30 mi) west-northwest of Guernsey.", "title": "Nature" }, { "paragraph_id": 14, "text": "", "title": "Nature" }, { "paragraph_id": 15, "text": "There are several major islands in the Channel, the most notable being the Isle of Wight off the English coast, and the Channel Islands, British Crown Dependencies off the coast of France. The coastline, particularly on the French shore, is deeply indented, with several small islands close to the coastline, including Chausey and Mont Saint-Michel. The Cotentin Peninsula on the French coast juts out into the Channel, with the wide Bay of the Seine (French: Baie de Seine) to its east. On the English side there is a small parallel strait, the Solent, between the Isle of Wight and the mainland. 
The Celtic Sea is to the west of the Channel.", "title": "Nature" }, { "paragraph_id": 16, "text": "The Channel acts as a funnel that amplifies the tidal range from less than a metre at sea in eastern places to more than 6 metres in the Channel Islands, on the west coast of the Cotentin Peninsula and on the north coast of Brittany during monthly spring tides. The time difference of about six hours between high water at the eastern and western limits of the Channel is indicative of the tidal range being amplified further by resonance. Amphidromic points lie in the Bay of Biscay and, varying more in precise location, in the far south of the North Sea, meaning that both of the associated eastern coasts effectively repel the tides, leaving the Strait of Dover every six hours as the natural bottleneck between the southward tide (surge) of the North Sea and the tide arriving equally from the Atlantic. The Channel does not experience such surges itself, but its existence is necessary to explain the extent of North Sea storm surges, such as those that necessitate the Thames Barrier, the Delta Works and the Zuiderzee Works (the Afsluitdijk and other dams).", "title": "Nature" }, { "paragraph_id": 17, "text": "In the UK Shipping Forecast the Channel is divided into the following areas, from the east:", "title": "Nature" }, { "paragraph_id": 18, "text": "The Channel is of geologically recent origin, having been land for most of the Pleistocene period. Before the Devensian glaciation (the most recent glacial period, which ended around 10,000 years ago), Britain and Ireland were part of continental Europe, linked by an unbroken Weald–Artois anticline, a ridge that acted as a natural dam holding back a large freshwater pro-glacial lake in the Doggerland region, now submerged under the North Sea. During this period the North Sea and almost all of the British Isles were covered by ice. The lake was fed by meltwater from the Baltic and from the Caledonian and Scandinavian ice sheets that joined to the north, blocking its exit. The sea level was about 120 m (390 ft) lower than it is today. Then, between 450,000 and 180,000 years ago, at least two catastrophic glacial lake outburst floods breached the Weald–Artois anticline. These contributed to creating some of the deepest parts of the Channel, such as Hurd's Deep.", "title": "Nature" }, { "paragraph_id": 19, "text": "The first flood, around 450,000 years ago, would have lasted for several months, releasing as much as one million cubic metres of water per second. The flood started with large but localised waterfalls over the ridge, which excavated depressions now known as the Fosses Dangeard. The flow eroded the retaining ridge, causing the rock dam to fail and releasing lake water into the Atlantic. After multiple episodes of changing sea level, during which the Fosses Dangeard were largely infilled by various layers of sediment, another catastrophic flood some 180,000 years ago carved a large bedrock-floored valley, the Lobourg Channel, some 500 m wide and 25 m deep, from the southern North Sea basin through the centre of the Straits of Dover and into the English Channel. It left streamlined islands, longitudinal erosional grooves, and other features characteristic of catastrophic megaflood events, still present on the sea floor and now revealed by high-resolution sonar. 
Through the scoured channel passed a river, the Channel River, which drained the combined Rhine and Thames westwards to the Atlantic.", "title": "Nature" }, { "paragraph_id": 20, "text": "The flooding destroyed the ridge that connected Britain to continental Europe, although a land connection across the southern North Sea would have existed intermittently at later times when periods of glaciation resulted in lowering of sea levels. At the end of the last glacial period, rising sea levels finally severed the last land connection.", "title": "Nature" }, { "paragraph_id": 21, "text": "As a busy shipping lane, the Channel experiences environmental problems following accidents involving ships with toxic cargo and oil spills. Indeed, over 40% of the UK incidents threatening pollution occur in or very near the Channel. One occurrence was the MSC Napoli, which on 18 January 2007 was beached with nearly 1700 tonnes of dangerous cargo in Lyme Bay, a protected World Heritage Site coastline. The ship had been damaged and was en route to Portland Harbour.", "title": "Nature" }, { "paragraph_id": 22, "text": "The English Channel, despite being a busy shipping lane, remains in part a haven for wildlife. Atlantic oceanic species are more common in the westernmost parts of the channel, particularly to the west of Start Point, Devon, but can sometimes be found further east towards Dorset and the Isle of Wight. Seal sightings are becoming more common along the English Channel, with both Grey Seal and Harbour Seal recorded frequently.", "title": "Nature" }, { "paragraph_id": 23, "text": "The Channel, which delayed human reoccupation of Great Britain for more than 100,000 years, has in historic times been both an easy entry for seafaring people and a key natural defence, halting invading armies while in conjunction with control of the North Sea allowing Britain to blockade the continent. The most significant failed invasion threats came when the Dutch and Belgian ports were held by a major continental power, e.g. from the Spanish Armada in 1588, Napoleon during the Napoleonic Wars, and Nazi Germany during World War II. Successful invasions include the Roman conquest of Britain, the Norman Conquest in 1066 and the Glorious Revolution of 1688, while the concentration of excellent harbours in the Western Channel on Britain's south coast made possible the largest amphibious invasion in history, the Normandy Landings in 1944. Channel naval battles include the Battle of the Downs (1639), Battle of Dover (1652), the Battle of Portland (1653) and the Battle of La Hougue (1692).", "title": "Human history" }, { "paragraph_id": 24, "text": "In more peaceful times the Channel served as a link joining shared cultures and political structures, particularly the huge Angevin Empire from 1135 to 1217. For nearly a thousand years, the Channel also provided a link between the Modern Celtic regions and languages of Cornwall and Brittany. Brittany was founded by Britons who fled Cornwall and Devon after Anglo-Saxon encroachment. In Brittany, there is a region known as \"Cornouaille\" (Cornwall) in French and \"Kernev\" in Breton. In ancient times there was also a \"Domnonia\" (Devon) in Brittany as well.", "title": "Human history" }, { "paragraph_id": 25, "text": "In February 1684, ice formed on the sea in a belt 4.8 km (3.0 mi) wide off the coast of Kent and 3.2 km (2.0 mi) wide on the French side.", "title": "Human history" }, { "paragraph_id": 26, "text": "Remnants of a mesolithic boatyard have been found on the Isle of Wight. 
Wheat was traded across the Channel about 8,000 years ago. \"... Sophisticated social networks linked the Neolithic front in southern Europe to the Mesolithic peoples of northern Europe.\" The Ferriby Boats, Hanson Log Boats and the later Dover Bronze Age Boat could carry a substantial cross-Channel cargo.", "title": "Human history" }, { "paragraph_id": 27, "text": "Diodorus Siculus and Pliny both suggest that trade between the rebel Celtic tribes of Armorica and Iron Age Britain flourished. In 55 BC Julius Caesar invaded, claiming that the Britons had aided the Veneti against him the previous year. He was more successful in 54 BC, but Britain was not fully established as part of the Roman Empire until completion of the invasion by Aulus Plautius in 43 AD. A brisk and regular trade began between ports in Roman Gaul and those in Britain. This traffic continued until the end of Roman rule in Britain in 410 AD, after which the early Anglo-Saxons left less clear historical records.", "title": "Human history" }, { "paragraph_id": 28, "text": "In the power vacuum left by the retreating Romans, the Germanic Angles, Saxons, and Jutes began the next great migration across the North Sea. Having already been used as mercenaries in Britain by the Romans, many people from these tribes crossed during the Migration Period, conquering and perhaps displacing the native Celtic populations.", "title": "Human history" }, { "paragraph_id": 29, "text": "The attack on Lindisfarne in 793 is generally considered the beginning of the Viking Age. For the next 250 years the Scandinavian raiders of Norway, Sweden, and Denmark dominated the North Sea, raiding monasteries, homes, and towns along the coast and along the rivers that ran inland. According to the Anglo-Saxon Chronicle they began to settle in Britain in 851. They continued to settle in the British Isles and the continent until around 1050, with some raids recorded along the channel coast of England, including at Wareham, Portland, near Weymouth and along the river Teign in Devon.", "title": "Human history" }, { "paragraph_id": 30, "text": "The fiefdom of Normandy was created for the Viking leader Rollo (also known as Robert of Normandy). Rollo had besieged Paris but in 911 entered vassalage to the king of the West Franks, Charles the Simple, through the Treaty of Saint-Clair-sur-Epte. In exchange for his homage and fealty, Rollo legally gained the territory he and his Viking allies had previously conquered. The name \"Normandy\" reflects Rollo's Viking (i.e. \"Northman\") origins.", "title": "Human history" }, { "paragraph_id": 31, "text": "The descendants of Rollo and his followers adopted the local Gallo-Romance language and intermarried with the area's inhabitants and became the Normans – a Norman French-speaking mixture of Scandinavians, Hiberno-Norse, Orcadians, Anglo-Danish, and indigenous Franks and Gauls.", "title": "Human history" }, { "paragraph_id": 32, "text": "Rollo's descendant William, Duke of Normandy became king of England in 1066 in the Norman Conquest beginning with the Battle of Hastings, while retaining the fiefdom of Normandy for himself and his descendants. In 1204, during the reign of King John, mainland Normandy was taken from England by France under Philip II, while insular Normandy (the Channel Islands) remained under English control. In 1259, Henry III of England recognised the legality of French possession of mainland Normandy under the Treaty of Paris. 
His successors, however, often fought to regain control of mainland Normandy.", "title": "Human history" }, { "paragraph_id": 33, "text": "With the rise of William the Conqueror the North Sea and Channel began to lose some of their importance. The new order oriented most of England and Scandinavia's trade south, toward the Mediterranean and the Orient.", "title": "Human history" }, { "paragraph_id": 34, "text": "Although the British surrendered claims to mainland Normandy and other French possessions in 1801, the monarch of the United Kingdom retains the title Duke of Normandy in respect of the Channel Islands. The Channel Islands (except for Chausey) are Crown Dependencies of the British Crown. Thus the Loyal toast in the Channel Islands is Le roi, notre Duc (\"The King, our Duke\"). The British monarch is understood not to be the Duke of Normandy in regard to the French region of Normandy described herein, by virtue of the Treaty of Paris of 1259, the surrender of French possessions in 1801, and the belief that the rights of succession to that title are subject to Salic Law, which excludes inheritance through female heirs.", "title": "Human history" }, { "paragraph_id": 35, "text": "French Normandy was occupied by English forces during the Hundred Years' War in 1346–1360 and again in 1415–1450.", "title": "Human history" }, { "paragraph_id": 36, "text": "From the reign of Elizabeth I, English foreign policy concentrated on preventing invasion across the Channel by ensuring no major European power controlled the potential Dutch and Flemish invasion ports. England's rise to become the world's pre-eminent sea power began in 1588, when the attempted invasion by the Spanish Armada was defeated by a combination of outstanding naval tactics by the English and the Dutch, under the command of Charles Howard, 1st Earl of Nottingham, with Sir Francis Drake second in command, and the stormy weather that followed. Over the centuries the Royal Navy slowly grew to be the most powerful in the world.", "title": "Human history" }, { "paragraph_id": 37, "text": "The building of the British Empire was possible only because the Royal Navy eventually managed to exercise unquestioned control over the seas around Europe, especially the Channel and the North Sea. During the Seven Years' War, France attempted to launch an invasion of Britain. To achieve this, France needed to gain control of the Channel for several weeks, but the plan was thwarted by the British naval victory at the Battle of Quiberon Bay in 1759. (The last French landing on English soil was a raid on Teignmouth in 1690, although the last French raid on British soil was at Fishguard, Wales, in 1797.)", "title": "Human history" }, { "paragraph_id": 38, "text": "Another significant challenge to British domination of the seas came during the Napoleonic Wars. The Battle of Trafalgar took place off the coast of Spain against a combined French and Spanish fleet and was won by Admiral Horatio Nelson, ending Napoleon's plans for a cross-Channel invasion and securing British dominance of the seas for over a century.", "title": "Human history" }, { "paragraph_id": 39, "text": "The exceptional strategic importance of the Channel as a tool for blockading was recognised by the First Sea Lord Admiral Fisher in the years before World War I. \"Five keys lock up the world! Singapore, the Cape, Alexandria, Gibraltar, Dover.\" However, on 25 July 1909 Louis Blériot made the first Channel crossing from Calais to Dover in an aeroplane. 
Blériot's crossing signalled a change in the function of the Channel as a barrier-moat for England against foreign enemies.", "title": "Human history" }, { "paragraph_id": 40, "text": "Because the Kaiserliche Marine surface fleet could not match the British Grand Fleet, the Germans developed submarine warfare, which was to become a far greater threat to Britain. The Dover Patrol, set up just before the war started, escorted cross-Channel troopships and prevented submarines from sailing in the Channel, obliging them to travel to the Atlantic via the much longer route around Scotland.", "title": "Human history" }, { "paragraph_id": 41, "text": "On land, the German army attempted to capture French Channel ports in the Race to the Sea, but although the trenches are often said to have stretched \"from the frontier of Switzerland to the English Channel\", they in fact reached the coast at the North Sea. Much of the British war effort in Flanders was a bloody but successful strategy to prevent the Germans reaching the Channel coast.", "title": "Human history" }, { "paragraph_id": 42, "text": "At the outset of the war, an attempt was made to block the path of U-boats through the Dover Strait with naval minefields. By February 1915, this had been augmented by a 25 kilometres (16 mi) stretch of light steel netting called the Dover Barrage, which it was hoped would ensnare submerged submarines. After initial success, the Germans learned how to pass through the barrage, aided by the unreliability of British mines. On 31 January 1917, the Germans resumed unrestricted submarine warfare, leading to dire Admiralty predictions that submarines would defeat Britain by November, the most dangerous situation Britain faced in either world war.", "title": "Human history" }, { "paragraph_id": 43, "text": "The Battle of Passchendaele in 1917 was fought to reduce the threat by capturing the submarine bases on the Belgian coast, though it was the introduction of convoys and not capture of the bases that averted defeat. In April 1918 the Dover Patrol carried out the Zeebrugge Raid against the U-boat bases. During 1917, the Dover Barrage was re-sited with improved mines and more effective nets, aided by regular patrols by small warships equipped with powerful searchlights. A German attack on these vessels resulted in the Battle of Dover Strait in 1917. A much more ambitious attempt to improve the barrage by installing eight massive concrete towers across the strait, known as the Admiralty M-N Scheme, was begun, but only two towers were nearing completion at the end of the war and the project was abandoned.", "title": "Human history" }, { "paragraph_id": 44, "text": "The naval blockade in the Channel and North Sea was one of the decisive factors in the German defeat in 1918.", "title": "Human history" }, { "paragraph_id": 45, "text": "During the Second World War, naval activity in the European theatre was primarily limited to the Atlantic. During the Battle of France in May 1940, the German forces succeeded in capturing both Boulogne and Calais, thereby threatening the line of retreat for the British Expeditionary Force. By a combination of hard fighting and German indecision, the port of Dunkirk was kept open allowing 338,000 Allied troops to be evacuated in Operation Dynamo. More than 11,000 were evacuated from Le Havre during Operation Cycle and a further 192,000 were evacuated from ports further down the coast in Operation Aerial in June 1940. 
The early stages of the Battle of Britain featured German air attacks on Channel shipping and ports; despite these early successes against shipping, the Germans did not win the air supremacy necessary for Operation Sealion, the projected cross-Channel invasion.", "title": "Human history" }, { "paragraph_id": 46, "text": "The Channel subsequently became the stage for an intensive coastal war, featuring submarines, minesweepers, and Fast Attack Craft.", "title": "Human history" }, { "paragraph_id": 47, "text": "The narrow waters of the Channel were considered too dangerous for major warships until the Normandy Landings; the exception, for the German Kriegsmarine, was the Channel Dash (Operation Cerberus) in February 1942, which required the support of the Luftwaffe in Operation Thunderbolt.", "title": "Human history" }, { "paragraph_id": 48, "text": "Dieppe was the site of the ill-fated Dieppe Raid by Canadian and British armed forces. More successful was the later Operation Overlord (D-Day), a massive invasion of German-occupied France by Allied troops. Caen, Cherbourg, Carentan, Falaise and other Norman towns suffered heavy casualties in the fight for the province, which continued until the closing of the so-called Falaise gap between Chambois and Montormel, then liberation of Le Havre.", "title": "Human history" }, { "paragraph_id": 49, "text": "The Channel Islands were the only part of the British Commonwealth occupied by Germany (excepting the part of Egypt occupied by the Afrika Korps at the time of the Second Battle of El Alamein, which was a protectorate and not part of the Commonwealth). The German occupation of 1940–1945 was harsh, with some island residents being taken for slave labour on the Continent; native Jews sent to concentration camps; partisan resistance and retribution; accusations of collaboration; and slave labour (primarily Russians and eastern Europeans) being brought to the islands to build fortifications. The Royal Navy blockaded the islands from time to time, particularly following the liberation of mainland Normandy in 1944. Intense negotiations resulted in some Red Cross humanitarian aid, but there was considerable hunger and privation during the occupation, particularly in the final months, when the population was close to starvation. The German troops on the islands surrendered on 9 May 1945, a day after the final surrender in mainland Europe.", "title": "Human history" }, { "paragraph_id": 50, "text": "There is significant public concern in the UK about illegal immigrants coming on small boats from France. Since 2018, the English Channel has seen a major increase in the number of crossings.", "title": "Human history" }, { "paragraph_id": 51, "text": "The English Channel coast is far more densely populated on the English shore. The most significant towns and cities along both the English and French sides of the Channel (each with more than 20,000 inhabitants, ranked in descending order; populations are the urban area populations from the 1999 French census, 2001 UK census, and 2001 Jersey census) are as follows:", "title": "Population" }, { "paragraph_id": 52, "text": "The two dominant cultures are English on the north shore of the Channel, French on the south. 
However, there are also a number of minority languages that are or were found on the shores and islands of the English Channel, which are listed here, with the Channel's name in the specific language following them.", "title": "Population" }, { "paragraph_id": 53, "text": "Most other languages tend towards variants of the French and English forms, but notably Welsh has Môr Udd.", "title": "Population" }, { "paragraph_id": 54, "text": "The Channel has traffic on both the UK–Europe and North Sea–Atlantic routes, and is the world's busiest seaway, with over 500 ships per day. Following an accident in January 1971 and a series of disastrous collisions with wreckage in February, the Dover TSS, the world's first radar-controlled traffic separation scheme, was set up by the International Maritime Organization. The scheme mandates that vessels travelling north use the French side and vessels travelling south the English side. There is a separation zone between the two lanes.", "title": "Economy" }, { "paragraph_id": 55, "text": "In December 2002 the MV Tricolor, carrying £30m of luxury cars, sank 32 km (20 mi) northwest of Dunkirk after collision in fog with the container ship Kariba. The cargo ship Nicola ran into the wreckage the next day. There was no loss of life.", "title": "Economy" }, { "paragraph_id": 56, "text": "The shore-based long-range traffic control system was updated in 2003 and there is a series of traffic separation systems in operation. Though the system is inherently incapable of reaching the levels of safety obtained from aviation systems such as the traffic collision avoidance system, it has reduced accidents to one or two per year.", "title": "Economy" }, { "paragraph_id": 57, "text": "Marine GPS systems allow ships to be preprogrammed to follow navigational channels accurately and automatically, further avoiding risk of running aground, but following the fatal collision between Dutch Aquamarine and Ash in October 2001, Britain's Marine Accident Investigation Branch (MAIB) issued a safety bulletin saying it believed that in these most unusual circumstances GPS use had actually contributed to the collision. The ships were maintaining a very precise automated course, one directly behind the other, rather than making use of the full width of the traffic lanes as a human navigator would.", "title": "Economy" }, { "paragraph_id": 58, "text": "A combination of radar difficulties in monitoring areas near cliffs, a failure of a CCTV system, incorrect operation of the anchor, the inability of the crew to follow standard procedures of using a GPS to provide early warning of the ship dragging the anchor and reluctance to admit the mistake and start the engine led to the MV Willy running aground in Cawsand Bay, Cornwall, in January 2002. The MAIB report makes it clear that the harbour controllers were informed of impending disaster by shore observers before the crew were themselves aware. The village of Kingsand was evacuated for three days because of the risk of explosion, and the ship was stranded for 11 days.", "title": "Economy" }, { "paragraph_id": 59, "text": "Ferry routes crossing the English Channel include (or have included):", "title": "Economy" }, { "paragraph_id": 60, "text": "Many travellers cross beneath the Channel using the Channel Tunnel, first proposed in the early 19th century and finally opened in 1994, connecting the UK and France by rail. It is now routine to travel between Paris or Brussels and London on the Eurostar train. Freight trains also use the tunnel. 
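The MAIB finding described above, that GPS-steered vessels followed identical tracks one directly behind the other rather than spreading across the traffic lane, suggests the kind of mitigation a human navigator applies instinctively: offsetting the planned track within the lane. The sketch below is a hypothetical illustration of that idea only, not a real navigation product; the lane width and helper name are invented for the example.

```python
import random

LANE_HALF_WIDTH_M = 1500.0  # assumed usable half-width of a traffic lane, metres

def cross_track_offset(vessel_id: str, margin_m: float = 200.0) -> float:
    """Return a per-vessel cross-track offset in metres (+ = starboard).

    Seeding the generator with the vessel's identifier keeps the offset
    stable for a given ship between voyages while spreading different
    ships across the lane, so precisely automated courses no longer
    coincide one directly behind the other.
    """
    rng = random.Random(vessel_id)
    usable = LANE_HALF_WIDTH_M - margin_m
    return rng.uniform(-usable, usable)

# Two ships planning the same passage get distinct tracks within the lane:
for ship in ("vessel A", "vessel B"):
    print(ship, f"offset {cross_track_offset(ship):+.0f} m")
```

This is only a sketch of the design point the MAIB bulletin makes: exact GPS tracking removes the natural scatter that once kept following ships clear of a stopped vessel ahead.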
Cars, coaches and lorries are carried on Eurotunnel Shuttle trains between Folkestone and Calais.", "title": "Economy" }, { "paragraph_id": 61, "text": "The coastal resorts of the Channel, such as Brighton and Deauville, inaugurated an era of aristocratic tourism in the early 19th century. Short trips across the Channel for leisure purposes are often referred to as Channel Hopping.", "title": "Economy" }, { "paragraph_id": 62, "text": "The Rampion Wind Farm is an offshore wind farm located in the Channel, off the coast of West Sussex. Other offshore wind farms are planned on the French side of the Channel.", "title": "Economy" }, { "paragraph_id": 63, "text": "As one of the narrowest and most well-known international waterways lacking dangerous currents, the Channel has been the first objective of numerous innovative sea, air, and human-powered crossing technologies. Pre-historic people sailed from the mainland to England for millennia. At the end of the last Ice Age, lower sea levels even permitted walking across.", "title": "History of Channel crossings" }, { "paragraph_id": 64, "text": "Pierre Andriel crossed the English Channel aboard the Élise, formerly the Scottish paddle steamer Margery, in March 1816, one of the earliest seagoing voyages by steamship.", "title": "History of Channel crossings" }, { "paragraph_id": 65, "text": "The paddle steamer Defiance, Captain William Wager, was the first steamer to cross the Channel to Holland, arriving there on 9 May 1816.", "title": "History of Channel crossings" }, { "paragraph_id": 66, "text": "On 10 June 1821, the English-built paddle steamer Rob Roy became the first passenger ferry to cross the Channel. The steamer was subsequently purchased by the French postal administration, renamed Henri IV, and put into regular passenger service a year later. It was able to make the journey across the Straits of Dover in around three hours.", "title": "History of Channel crossings" }, { "paragraph_id": 67, "text": "In June 1843, because of difficulties with Dover harbour, the South Eastern Railway company developed the Boulogne-sur-Mer-Folkestone route as an alternative to Calais-Dover. The first ferry crossed under the command of Captain Hayward.", "title": "History of Channel crossings" }, { "paragraph_id": 68, "text": "In 1974 a Welsh coracle piloted by Bernard Thomas of Llechryd crossed the English Channel to France in 13½ hours. The journey was undertaken to demonstrate how the Bull Boats of the Mandan Indians of North Dakota could have been copied from coracles introduced by Prince Madog in the 12th century.", "title": "History of Channel crossings" }, { "paragraph_id": 69, "text": "The Mountbatten class hovercraft (MCH) entered commercial service in August 1968, initially between Dover and Boulogne but later also Ramsgate (Pegwell Bay) to Calais. The journey time Dover to Boulogne was roughly 35 minutes, with six trips per day at peak times. 
The fastest crossing of the English Channel by a commercial car-carrying hovercraft was 22 minutes, recorded by the Princess Anne MCH SR-N4 Mk3 on 14 September 1995.", "title": "History of Channel crossings" }, { "paragraph_id": 70, "text": "The first aircraft to cross the Channel was a balloon in 1785, piloted by Jean Pierre François Blanchard (France) and John Jeffries (US).", "title": "History of Channel crossings" }, { "paragraph_id": 71, "text": "Louis Blériot (France) piloted the first airplane to cross in 1909.", "title": "History of Channel crossings" }, { "paragraph_id": 72, "text": "On 26 September 2008, the Swiss pilot Yves Rossy, known as Jetman, became the first person to cross the English Channel with a jet-powered wing. He jumped from a Pilatus Porter over Calais, France, crossed the Channel, then deployed his parachute and landed at Dover.", "title": "History of Channel crossings" }, { "paragraph_id": 73, "text": "The first flying car to cross the English Channel was a Pégase, designed by the French company Vaylon, on 14 June 2017. It was piloted by the Franco-Italian pilot Bruno Vezzoli. The crossing was carried out as part of the first road and air trip from Paris to London in a flying car. The Pégase is a two-seat, road-approved dune buggy combined with a powered paraglider. Takeoff was at 8:03 a.m. from Ambleteuse in the north of France and landing was at East Studdal, near Dover. The flight was completed in 1 hour and 15 minutes for a total distance covered of 72.5 km (45.0 mi), including 33.3 km (20.7 mi) over the English Channel, at an altitude of 1,240 metres (4,070 ft).", "title": "History of Channel crossings" }, { "paragraph_id": 74, "text": "On 12 June 1979, the first human-powered aircraft to cross the English Channel was the Gossamer Albatross, built by American aeronautical engineer Dr. Paul B. MacCready's company AeroVironment, and piloted by Bryan Allen. The 35.7 km (22.2 mi) crossing was completed in 2 hours and 49 minutes.", "title": "History of Channel crossings" }, { "paragraph_id": 75, "text": "On 4 August 2019, Frenchman Franky Zapata became the first person to cross the English Channel on a jet-powered Flyboard Air. The board was powered by a kerosene-filled backpack. Zapata made the 35.4 km (22.0 mi) journey in 22 minutes, having landed on a boat half-way across to refuel.", "title": "History of Channel crossings" }, { "paragraph_id": 76, "text": "The sport of Channel swimming traces its origins to the latter part of the 19th century when Captain Matthew Webb made the first observed and unassisted swim across the Strait of Dover, swimming from England to France on 24–25 August 1875 in 21 hours 45 minutes.", "title": "History of Channel crossings" }, { "paragraph_id": 77, "text": "Up to 1927, fewer than ten swimmers (including the first woman, Gertrude Ederle in 1926) had managed to successfully swim the English Channel, and many dubious claims had been made. The Channel Swimming Association (CSA) was founded to authenticate and ratify swimmers' claims to have swum the Channel and to verify crossing times. The CSA was dissolved in 1999 and was succeeded by two separate organisations: CSA Ltd (CSA) and the Channel Swimming and Piloting Federation (CSPF), both of which observe and authenticate cross-Channel swims in the Strait of Dover. 
The Channel Crossing Association was also set up to cater for unorthodox crossings.", "title": "History of Channel crossings" }, { "paragraph_id": 78, "text": "The team with the most Channel swims to its credit is the Serpentine Swimming Club in London, followed by the international Sri Chinmoy Marathon Team.", "title": "History of Channel crossings" }, { "paragraph_id": 79, "text": "As of 2023, 1,881 people had completed 2,428 verified solo crossings under the rules of the CSA and the CSPF. This includes 24 two-way crossings and three three-way crossings.", "title": "History of Channel crossings" }, { "paragraph_id": 80, "text": "The Strait of Dover is the busiest stretch of water in the world. It is governed by International Law as described in Unorthodox Crossing of the Dover Strait Traffic Separation Scheme. It states: \"[In] exceptional cases the French Maritime Authorities may grant authority for unorthodox craft to cross French territorial waters within the Traffic Separation Scheme when these craft set off from the British coast, on condition that the request for authorisation is sent to them with the opinion of the British Maritime Authorities.\"", "title": "History of Channel crossings" }, { "paragraph_id": 81, "text": "The fastest verified swim of the Channel was by the Australian Trent Grimsey on 8 September 2012, in 6 hours 55 minutes, beating the previous record set in 2007. The female record of 7 hours 25 minutes is held by Yvetta Hlavacova of Czechia, set on 5 August 2006. Both records were from England to France.", "title": "History of Channel crossings" }, { "paragraph_id": 82, "text": "There may have been some unreported swims of the Channel, by people intent on entering Britain in circumvention of immigration controls. A failed attempt to cross the Channel by two Syrian refugees in October 2014 came to light when their bodies were discovered on the shores of the North Sea in Norway and the Netherlands.", "title": "History of Channel crossings" }, { "paragraph_id": 83, "text": "On 16 September 1965, two Amphicars crossed from Dover to Calais.", "title": "History of Channel crossings" }, { "paragraph_id": 84, "text": "PLUTO was a wartime fuel-delivery project of \"pipelines under the ocean\" from England to France. Though plagued with technical difficulties during the Battle of Normandy, the pipelines delivered about 8% of the fuel requirements of the allied forces between D-Day and VE-Day.", "title": "History of Channel crossings" } ]
The English Channel, also known as the Channel, is an arm of the Atlantic Ocean that separates Southern England from northern France. It links to the southern part of the North Sea by the Strait of Dover at its northeastern end. It is the busiest shipping area in the world. It is about 560 kilometres long and varies in width from 240 km at its widest to 34 km at its narrowest in the Strait of Dover. It is the smallest of the shallow seas around the continental shelf of Europe, covering an area of some 75,000 square kilometres. The Channel was a key factor in Britain becoming a naval superpower and has been utilised by Britain as a natural defence mechanism to halt attempted invasions, such as in the Napoleonic Wars and in the Second World War. The population around the English Channel is predominantly located on the English coast and the major languages spoken in this region are English and French.
2001-05-24T14:08:07Z
2023-12-05T14:32:27Z
[ "Template:Use dmy dates", "Template:Blockquote", "Template:Citation needed", "Template:Main", "Template:Lang-ga", "Template:Notelist", "Template:Cite book", "Template:Borders of France", "Template:Redirect", "Template:Use British English", "Template:Lang", "Template:Legend", "Template:Lang-br", "Template:Lang-nl", "Template:Short description", "Template:Infobox body of water", "Template:Convert", "Template:Reflist", "Template:List of seas", "Template:Portal", "Template:Cite web", "Template:Cite journal", "Template:Cite news", "Template:Borders of the United Kingdom", "Template:Efn", "Template:Lang-it", "Template:Cite encyclopedia", "Template:Dead link", "Template:Webarchive", "Template:Citation", "Template:Lang-es", "Template:Anchor", "Template:Wikivoyage", "Template:Commons", "Template:Authority control", "Template:Lang-fr", "Template:More citations needed section", "Template:Lang-kw", "Template:Frac", "Template:Cbignore" ]
https://en.wikipedia.org/wiki/English_Channel
9,232
Eiffel Tower
The Eiffel Tower (/ˈaɪfəl/ EYE-fəl; French: Tour Eiffel [tuʁ ɛfɛl] ) is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower from 1887 to 1889. Locally nicknamed "La dame de fer" (French for "Iron Lady"), it was constructed as the centerpiece of the 1889 World's Fair, and to crown the centennial of the French Revolution. Although initially criticised by some of France's leading artists and intellectuals for its design, it has since become a global cultural icon of France and one of the most recognisable structures in the world. The tower received 5,889,000 visitors in 2022. The Eiffel Tower is the most visited monument with an entrance fee in the world: 6.91 million people ascended it in 2015. It was designated a monument historique in 1964, and was named part of a UNESCO World Heritage Site ("Paris, Banks of the Seine") in 1991. The tower is 330 metres (1,083 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. The tower has three levels for visitors, with restaurants on the first and second levels. The top level's upper platform is 276 m (906 ft) above the ground – the highest observation deck accessible to the public in the European Union. Tickets can be purchased to ascend by stairs or lift to the first and second levels. The climb from ground level to the first level is over 300 steps, as is the climb from the first level to the second, making the entire ascent a 600-step climb. Although there is a staircase to the top level, it is usually accessible only by lift. On this top, third level there is a private apartment built for Gustave Eiffel's own use. He decorated it with furniture by Jean Lachaise and invited friends such as Thomas Edison. The design of the Eiffel Tower is attributed to Maurice Koechlin and Émile Nouguier, two senior engineers working for the Compagnie des Établissements Eiffel. It was envisioned after discussion about a suitable centerpiece for the proposed 1889 Exposition Universelle, a world's fair to celebrate the centennial of the French Revolution. In May 1884, working at home, Koechlin made a sketch of their idea, described by him as "a great pylon, consisting of four lattice girders standing apart at the base and coming together at the top, joined together by metal trusses at regular intervals". Eiffel initially showed little enthusiasm, but he did approve further study, and the two engineers then asked Stephen Sauvestre, the head of the company's architectural department, to contribute to the design. Sauvestre added decorative arches to the base of the tower, a glass pavilion to the first level, and other embellishments. 
The new version gained Eiffel's support: he bought the rights to the patent on the design which Koechlin, Nougier, and Sauvestre had taken out, and the design was put on display at the Exhibition of Decorative Arts in the autumn of 1884 under the company name. On 30 March 1885, Eiffel presented his plans to the Société des Ingénieurs Civils; after discussing the technical problems and emphasising the practical uses of the tower, he finished his talk by saying the tower would symbolise [n]ot only the art of the modern engineer, but also the century of Industry and Science in which we are living, and for which the way was prepared by the great scientific movement of the eighteenth century and by the Revolution of 1789, to which this monument will be built as an expression of France's gratitude. Little progress was made until 1886, when Jules Grévy was re-elected as president of France and Édouard Lockroy was appointed as minister for trade. A budget for the exposition was passed and, on 1 May, Lockroy announced an alteration to the terms of the open competition being held for a centrepiece to the exposition, which effectively made the selection of Eiffel's design a foregone conclusion, as entries had to include a study for a 300 m (980 ft) four-sided metal tower on the Champ de Mars. (A 300-metre tower was then considered a herculean engineering effort.) On 12 May, a commission was set up to examine Eiffel's scheme and its rivals, which, a month later, decided that all the proposals except Eiffel's were either impractical or lacking in details. After some debate about the exact location of the tower, a contract was signed on 8 January 1887. Eiffel signed it acting in his own capacity rather than as the representative of his company, the contract granting him 1.5 million francs toward the construction costs: less than a quarter of the estimated 6.5 million francs. Eiffel was to receive all income from the commercial exploitation of the tower during the exhibition and for the next 20 years. He later established a separate company to manage the tower, putting up half the necessary capital himself. A French bank, the Crédit Industriel et Commercial (CIC), helped finance the construction of the Eiffel Tower. During the period of the tower's construction, the CIC was acquiring funds from predatory loans to the National Bank of Haiti, some of which went towards the financing of the tower. These loans were connected to an indemnity controversy which saw France force Haiti's government to financially compensate French slaveowners for lost income as a result of the Haitian Revolution, and required Haiti to pay the CIC and its partner nearly half of all taxes collected on exports, "effectively choking off the nation's primary source of income". According to The New York Times, "[at] a time when the [CIC] was helping finance one of the world's best-known landmarks, the Eiffel Tower, as a monument to French liberty, it was choking Haiti's economy, taking much of the young nation's income back to Paris and impairing its ability to start schools, hospitals and the other building blocks of an independent country." The proposed tower had been a subject of controversy, drawing criticism from those who did not believe it was feasible and those who objected on artistic grounds. Prior to the Eiffel Tower's construction, no structure had ever been constructed to a height of 300 m, or even 200 m for that matter, and many people believed it was impossible. 
These objections were an expression of a long-standing debate in France about the relationship between architecture and engineering. It came to a head as work began at the Champ de Mars: a "Committee of Three Hundred" (one member for each metre of the tower's height) was formed, led by the prominent architect Charles Garnier and including some of the most important figures of the arts, such as William-Adolphe Bouguereau, Guy de Maupassant, Charles Gounod and Jules Massenet. A petition called "Artists against the Eiffel Tower" was sent to the Minister of Works and Commissioner for the Exposition, Adolphe Alphand, and it was published by Le Temps on 14 February 1887: We, writers, painters, sculptors, architects and passionate devotees of the hitherto untouched beauty of Paris, protest with all our strength, with all our indignation in the name of slighted French taste, against the erection ... of this useless and monstrous Eiffel Tower ... To bring our arguments home, imagine for a moment a giddy, ridiculous tower dominating Paris like a gigantic black smokestack, crushing under its barbaric bulk Notre Dame, the Tour Saint-Jacques, the Louvre, the Dome of les Invalides, the Arc de Triomphe, all of our humiliated monuments will disappear in this ghastly dream. And for twenty years ... we shall see stretching like a blot of ink the hateful shadow of the hateful column of bolted sheet metal. Gustave Eiffel responded to these criticisms by comparing his tower to the Egyptian pyramids: "My tower will be the tallest edifice ever erected by man. Will it not also be grandiose in its way? And why would something admirable in Egypt become hideous and ridiculous in Paris?" These criticisms were also dealt with by Édouard Lockroy in a letter of support written to Alphand, sardonically saying, "Judging by the stately swell of the rhythms, the beauty of the metaphors, the elegance of its delicate and precise style, one can tell this protest is the result of collaboration of the most famous writers and poets of our time", and he explained that the protest was irrelevant since the project had been decided upon months before, and construction on the tower was already under way. Indeed, Garnier was a member of the Tower Commission that had examined the various proposals, and had raised no objection. Eiffel was similarly unworried, pointing out to a journalist that it was premature to judge the effect of the tower solely on the basis of the drawings, that the Champ de Mars was distant enough from the monuments mentioned in the protest for there to be little risk of the tower overwhelming them, and putting the aesthetic argument for the tower: "Do not the laws of natural forces always conform to the secret laws of harmony?" Some of the protesters changed their minds when the tower was built; others remained unconvinced. Guy de Maupassant supposedly ate lunch in the tower's restaurant every day because it was the one place in Paris where the tower was not visible. By 1918, it had become a symbol of Paris and of France after Guillaume Apollinaire wrote a nationalist poem in the shape of the tower (a calligram) to express his feelings about the war against Germany. Today, it is widely considered to be a remarkable piece of structural art, and is often featured in films and literature. Work on the foundations started on 28 January 1887. Those for the east and south legs were straightforward, with each leg resting on four 2 m (6.6 ft) concrete slabs, one for each of the principal girders of each leg. 
The west and north legs, being closer to the river Seine, were more complicated: each slab needed two piles installed by using compressed-air caissons 15 m (49 ft) long and 6 m (20 ft) in diameter driven to a depth of 22 m (72 ft) to support the concrete slabs, which were 6 m (20 ft) thick. Each of these slabs supported a block of limestone with an inclined top to bear a supporting shoe for the ironwork. Each shoe was anchored to the stonework by a pair of bolts 10 cm (4 in) in diameter and 7.5 m (25 ft) long. The foundations were completed on 30 June, and the erection of the ironwork began. The visible work on-site was complemented by the enormous amount of exacting preparatory work that took place behind the scenes: the drawing office produced 1,700 general drawings and 3,629 detailed drawings of the 18,038 different parts needed. The task of drawing the components was complicated by the complex angles involved in the design and the degree of precision required: the position of rivet holes was specified to within 1 mm (0.04 in) and angles worked out to one second of arc. The finished components, some already riveted together into sub-assemblies, arrived on horse-drawn carts from a factory in the nearby Parisian suburb of Levallois-Perret and were first bolted together, with the bolts being replaced with rivets as construction progressed. No drilling or shaping was done on site: if any part did not fit, it was sent back to the factory for alteration. In all, 18,038 pieces were joined using 2.5 million rivets. At first, the legs were constructed as cantilevers, but about halfway to the first level construction was paused to create a substantial timber scaffold. This renewed concerns about the structural integrity of the tower, and sensational headlines such as "Eiffel Suicide!" and "Gustave Eiffel Has Gone Mad: He Has Been Confined in an Asylum" appeared in the tabloid press. Several famous artists of the time, including Charles Garnier and Alexandre Dumas, thought poorly of the new tower. Garnier called it a "truly tragic street lamp", while Dumas likened it to an "odious shadow of the odious column built of rivets and iron plates, extending like a black blot". There were multiple protests over both its style and the decision to place it in the middle of Paris. At this stage, a small "creeper" crane designed to move up the tower was installed in each leg. They made use of the guides for the lifts which were to be fitted in the four legs. The critical stage of joining the legs at the first level was completed by the end of March 1888. Although the metalwork had been prepared with the utmost attention to detail, provision had been made to carry out small adjustments to precisely align the legs; hydraulic jacks were fitted to the shoes at the base of each leg, capable of exerting a force of 800 tonnes, and the legs were intentionally constructed at a slightly steeper angle than necessary, being supported by sandboxes on the scaffold. Although construction involved 300 on-site employees, due to Eiffel's safety precautions and the use of movable gangways, guardrails and screens, only one person died. The main structural work was completed at the end of March 1889 and, on 31 March, Eiffel celebrated by leading a group of government officials, accompanied by representatives of the press, to the top of the tower. Because the lifts were not yet in operation, the ascent was made on foot, and took over an hour, with Eiffel stopping frequently to explain various features. 
Most of the party chose to stop at the lower levels, but a few, including the structural engineer, Émile Nouguier, the head of construction, Jean Compagnon, the President of the City Council, and reporters from Le Figaro and Le Monde Illustré, completed the ascent. At 2:35 pm, Eiffel hoisted a large Tricolour to the accompaniment of a 25-gun salute fired at the first level. There was still work to be done, particularly on the lifts and facilities, and the tower was not opened to the public until nine days after the opening of the exposition on 6 May; even then, the lifts had not been completed. The tower was an instant success with the public, and nearly 30,000 visitors made the 1,710-step climb to the top before the lifts entered service on 26 May. Tickets cost 2 francs for the first level, 3 for the second, and 5 for the top, with half-price admission on Sundays, and by the end of the exhibition there had been 1,896,987 visitors. After dark, the tower was lit by hundreds of gas lamps, and a beacon sent out three beams of red, white and blue light. Two searchlights mounted on a circular rail were used to illuminate various buildings of the exposition. The daily opening and closing of the exposition were announced by a cannon at the top. On the second level, the French newspaper Le Figaro had an office and a printing press, where a special souvenir edition, Le Figaro de la Tour, was made. There was also a pâtisserie. At the top, there was a post office where visitors could send letters and postcards as a memento of their visit. Graffitists were also catered for: sheets of paper were mounted on the walls each day for visitors to record their impressions of the tower. Gustave Eiffel described the collection of responses as "truly curious". Famous visitors to the tower included the Prince of Wales, Sarah Bernhardt, "Buffalo Bill" Cody (his Wild West show was an attraction at the exposition) and Thomas Edison. Eiffel invited Edison to his private apartment at the top of the tower, where Edison presented him with one of his phonographs, a new invention and one of the many highlights of the exposition. Edison signed the guestbook with this message: To M Eiffel the Engineer the brave builder of so gigantic and original specimen of modern Engineering from one who has the greatest respect and admiration for all Engineers including the Great Engineer the Bon Dieu, Thomas Edison. Eiffel made use of his apartment at the top of the tower to carry out meteorological observations, and also used the tower to perform experiments on the action of air resistance on falling bodies. Eiffel had a permit for the tower to stand for 20 years. It was to be dismantled in 1909, when its ownership would revert to the City of Paris. The city had planned to tear it down (part of the original contest rules for designing a tower was that it should be easy to dismantle) but as the tower proved to be valuable for many innovations in the early 20th century, particularly radio telegraphy, it was allowed to remain after the expiry of the permit, and from 1910 it also became part of the International Time Service. For the 1900 Exposition Universelle, the lifts in the east and west legs were replaced by lifts running as far as the second level constructed by the French firm Fives-Lille. These had a compensating mechanism to keep the floor level as the angle of ascent changed at the first level, and were driven by a similar hydraulic mechanism as the Otis lifts, although this was situated at the base of the tower. 
Hydraulic pressure was provided by pressurised accumulators located near this mechanism. At the same time the lift in the north pillar was removed and replaced by a staircase to the first level. The layout of both first and second levels was modified, increasing the space available for visitors on the second level. The original lift in the south pillar was removed 13 years later. On 19 October 1901, Alberto Santos-Dumont, flying his No.6 airship, won a 100,000-franc prize offered by Henri Deutsch de la Meurthe for the first person to make a flight from St. Cloud to the Eiffel Tower and back in less than half an hour. In 1910, Father Theodor Wulf measured radiant energy at the top and bottom of the tower. He found more at the top than expected, incidentally discovering what are known today as cosmic rays. Two years later, on 4 February 1912, Austrian tailor Franz Reichelt died after jumping from the first level of the tower (a height of 57 m) to demonstrate his parachute design. In 1914, at the outbreak of World War I, a radio transmitter located in the tower jammed German radio communications, seriously hindering their advance on Paris and contributing to the Allied victory at the First Battle of the Marne. From 1925 to 1934, illuminated signs for Citroën adorned three of the tower's sides, making it the tallest advertising space in the world at the time. In April 1935, the tower was used to make experimental low-resolution television transmissions, using a shortwave transmitter of 200 watts power. On 17 November, an improved 180-line transmitter was installed. On two separate but related occasions in 1925, the con artist Victor Lustig "sold" the tower for scrap metal. A year later, in February 1926, pilot Leon Collet was killed trying to fly under the tower. His aircraft became entangled in an aerial belonging to a wireless station. A bust of Gustave Eiffel by Antoine Bourdelle was unveiled at the base of the north leg on 2 May 1929. In 1930, the tower lost the title of the world's tallest structure when the Chrysler Building in New York City was completed. In 1938, the decorative arcade around the first level was removed. Upon the German occupation of Paris in 1940, the lift cables were cut by the French. The tower was closed to the public during the occupation and the lifts were not repaired until 1946. In 1940, German soldiers had to climb the tower to hoist a swastika-centered Reichskriegsflagge, but the flag was so large it blew away just a few hours later, and was replaced by a smaller one. When visiting Paris, Hitler chose to stay on the ground. When the Allies were nearing Paris in August 1944, Hitler ordered General Dietrich von Choltitz, the military governor of Paris, to demolish the tower along with the rest of the city. Von Choltitz disobeyed the order. On 25 August, before the Germans had been driven out of Paris, the German flag was replaced with a Tricolour by two men from the French Naval Museum, who narrowly beat three men led by Lucien Sarniguet, who had lowered the Tricolour on 13 June 1940 when Paris fell to the Germans. A fire started in the television transmitter on 3 January 1956, damaging the top of the tower. Repairs took a year, and in 1957, the present radio aerial was added to the top. In 1964, the Eiffel Tower was officially declared to be a historical monument by the Minister of Cultural Affairs, André Malraux. A year later, an additional lift system was installed in the north pillar. 
According to interviews, in 1967, Montreal Mayor Jean Drapeau negotiated a secret agreement with Charles de Gaulle for the tower to be dismantled and temporarily relocated to Montreal to serve as a landmark and tourist attraction during Expo 67. The plan was allegedly vetoed by the company operating the tower out of fear that the French government could refuse permission for the tower to be restored in its original location. In 1982, the original lifts between the second and third levels were replaced after 97 years in service. These had been closed to the public between November and March because the water in the hydraulic drive tended to freeze. The new cars operate in pairs, with one counterbalancing the other, and perform the journey in one stage, reducing the journey time from eight minutes to less than two minutes. At the same time, two new emergency staircases were installed, replacing the original spiral staircases. In 1983, the south pillar was fitted with an electrically driven Otis lift to serve the Jules Verne restaurant. The Fives-Lille lifts in the east and west legs, fitted in 1899, were extensively refurbished in 1986. The cars were replaced, and a computer system was installed to completely automate the lifts. The motive power was moved from the water hydraulic system to a new electrically driven oil-filled hydraulic system, and the original water hydraulics were retained solely as a counterbalance system. A service lift was added to the south pillar for moving small loads and maintenance personnel three years later. Robert Moriarty flew a Beechcraft Bonanza under the tower on 31 March 1984. In 1987, A. J. Hackett made one of his first bungee jumps from the top of the Eiffel Tower, using a special cord he had helped develop. Hackett was arrested by the police. On 27 October 1991, Thierry Devaux, along with mountain guide Hervé Calvayrac, performed a series of acrobatic figures while bungee jumping from the second floor of the tower. Facing the Champ de Mars, Devaux used an electric winch between figures to go back up to the second floor. When firemen arrived, he stopped after the sixth jump. For its "Countdown to the Year 2000" celebration on 31 December 1999, flashing lights and high-powered searchlights were installed on the tower. During the last three minutes of the year, the lights were turned on starting from the base of the tower and continuing to the top to welcome 2000 with a huge fireworks show. An exhibition above a cafeteria on the first floor commemorates this event. The searchlights on top of the tower made it a beacon in Paris's night sky, and 20,000 flashing bulbs gave the tower a sparkly appearance for five minutes every hour on the hour. The lights sparkled blue for several nights to herald the new millennium on 31 December 2000. The sparkly lighting continued for 18 months until July 2001. The sparkling lights were turned on again on 21 June 2003, and the display was planned to last for 10 years before they needed replacing. The tower received its 200,000,000th guest on 28 November 2002. The tower has operated at its maximum capacity of about 7 million visitors per year since 2003. In 2004, the Eiffel Tower began hosting a seasonal ice rink on the first level. A glass floor was installed on the first level during the 2014 refurbishment. The puddle iron (wrought iron) of the Eiffel Tower weighs 7,300 tonnes, and the addition of lifts, shops and antennae have brought the total weight to approximately 10,100 tonnes. 
As a demonstration of the economy of design, if the 7,300 tonnes of metal in the structure were melted down, it would fill the square base, 125 metres (410 ft) on each side, to a depth of only 6.25 cm (2.46 in), assuming the density of the metal to be 7.8 tonnes per cubic metre. Additionally, a box surrounding the tower (324 m × 125 m × 125 m) would contain 6,200 tonnes of air, weighing almost as much as the iron itself. Depending on the ambient temperature, the top of the tower may shift away from the sun by up to 18 cm (7 in) due to thermal expansion of the metal on the side facing the sun. When it was built, many were shocked by the tower's daring form. Eiffel was accused of trying to create something artistic with no regard to the principles of engineering. However, Eiffel and his team – experienced bridge builders – understood the importance of wind forces, and knew that if they were going to build the tallest structure in the world, they had to be sure it could withstand them. In an interview with the newspaper Le Temps published on 14 February 1887, Eiffel said: Is it not true that the very conditions which give strength also conform to the hidden rules of harmony? ... Now to what phenomenon did I have to give primary concern in designing the Tower? It was wind resistance. Well then! I hold that the curvature of the monument's four outer edges, which is as mathematical calculation dictated it should be ... will give a great impression of strength and beauty, for it will reveal to the eyes of the observer the boldness of the design as a whole. He used graphical methods to determine the strength of the tower and empirical evidence to account for the effects of wind, rather than a mathematical formula. Close examination of the tower reveals a basically exponential shape. All parts of the tower were overdesigned to ensure maximum resistance to wind forces. The top half was even assumed to have no gaps in the latticework. In the years since it was completed, engineers have put forward various mathematical hypotheses in an attempt to explain the success of the design. The most recent, devised in 2004 after letters sent by Eiffel to the French Society of Civil Engineers in 1885 were translated into English, is described as a non-linear integral equation based on counteracting the wind pressure on any point of the tower with the tension between the construction elements at that point. The Eiffel Tower sways by up to 9 cm (3.5 in) in the wind. The four columns of the tower each house access stairs and elevators to the first two floors, while at the south column only the elevator to the second floor restaurant is publicly accessible. The first floor is publicly accessible by elevator or stairs. When originally built, the first level contained three restaurants – one French, one Russian and one Flemish – and an "Anglo-American Bar". After the exposition closed, the Flemish restaurant was converted to a 250-seat theatre. Today there is the Le 58 Tour Eiffel restaurant and other facilities. The second floor is publicly accessible by elevator or stairs and has a restaurant called Le Jules Verne, a gourmet restaurant with its own lift going up from the south column to the second level. This restaurant has one star in the Michelin Red Guide. It was run by the multi-Michelin star chef Alain Ducasse from 2007 to 2017. As of May 2019, it is managed by three-star chef Frédéric Anton. It owes its name to the famous science-fiction writer Jules Verne. 
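The two weight comparisons quoted earlier in this section, the melted-down iron filling the 125-metre square base to a few centimetres and the air in a 324 m × 125 m × 125 m box weighing nearly as much as the iron, are easy to verify. The check below uses the iron density stated in the text (7.8 t/m³) plus an assumed sea-level air density of about 1.2 kg/m³, which is the only figure added here:

```python
# Verify the melted-iron and bounding-box-of-air comparisons.
iron_mass_t = 7300            # tonnes of puddle iron (from the text)
iron_density_t_m3 = 7.8       # tonnes per cubic metre (from the text)
base_side_m = 125.0           # side of the square base, metres

iron_volume_m3 = iron_mass_t / iron_density_t_m3        # ~936 m^3
depth_cm = iron_volume_m3 / base_side_m**2 * 100        # ~6 cm
print(f"melted iron depth ~ {depth_cm:.1f} cm")

air_density_kg_m3 = 1.225     # assumed sea-level air density
box_volume_m3 = 324 * 125 * 125                         # ~5.06 million m^3
air_mass_t = box_volume_m3 * air_density_kg_m3 / 1000   # ~6,200 t
print(f"air in bounding box ~ {air_mass_t:,.0f} tonnes")
```

The air figure reproduces the quoted 6,200 tonnes almost exactly, and the iron depth comes out near 6 cm, in line with the quoted 6.25 cm once rounding of the inputs is allowed for.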
The four columns of the tower each house access stairs and elevators to the first two floors, while at the south column only the elevator to the second floor restaurant is publicly accessible.

The first floor is publicly accessible by elevator or stairs. When originally built, the first level contained three restaurants (one French, one Russian and one Flemish) and an "Anglo-American Bar". After the exposition closed, the Flemish restaurant was converted to a 250-seat theatre. Today there is the Le 58 Tour Eiffel restaurant and other facilities.

The second floor is publicly accessible by elevator or stairs and has a restaurant called Le Jules Verne, a gourmet restaurant with its own lift going up from the south column to the second level. The restaurant, named after the science-fiction writer Jules Verne, has one star in the Michelin Red Guide. It was run by the multi-Michelin star chef Alain Ducasse from 2007 to 2017; as of May 2019, it is managed by three-star chef Frédéric Anton.

The third floor is the top floor, publicly accessible by elevator. Originally there were laboratories for various experiments, and a small apartment reserved for Gustave Eiffel to entertain guests, which is now open to the public, complete with period decorations and lifelike mannequins of Eiffel and some of his notable guests.

From 1937 until 1981, there was a restaurant near the top of the tower. It was removed due to structural considerations: engineers had determined it was too heavy and was causing the tower to sag. The restaurant was sold to an American restaurateur and transported first to New York and then to New Orleans, where it was rebuilt on the edge of the Garden District as a restaurant and later an event hall. Today there is a champagne bar.

The arrangement of the lifts has been changed several times during the tower's history. Given the elasticity of the cables and the time taken to align the cars with the landings, each lift, in normal service, takes an average of 8 minutes and 50 seconds to do the round trip, spending an average of 1 minute and 15 seconds at each level; the average journey time between levels is 1 minute. The original hydraulic mechanism is on public display in a small museum at the base of the east and west legs. Because the mechanism requires frequent lubrication and maintenance, public access is often restricted. The rope mechanism of the north tower can be seen as visitors exit the lift.

Equipping the tower with adequate and safe passenger lifts was a major concern of the government commission overseeing the Exposition. Although some visitors could be expected to climb to the first level, or even the second, lifts clearly had to be the main means of ascent.

Constructing lifts to reach the first level was relatively straightforward: the legs were wide enough at the bottom and so nearly straight that they could contain a straight track, and a contract was given to the French company Roux, Combaluzier & Lepape for two lifts to be fitted in the east and west legs. Roux, Combaluzier & Lepape used a pair of endless chains with rigid, articulated links to which the car was attached. Lead weights on some links of the upper or return sections of the chains counterbalanced most of the car's weight. The car was pushed up from below, not pulled up from above: to prevent the chain buckling, it was enclosed in a conduit. At the bottom of the run, the chains passed around sprockets 3.9 m (12 ft 10 in) in diameter; smaller sprockets at the top guided the chains.

Installing lifts to the second level was more of a challenge, because a straight track was impossible. No French company wanted to undertake the work. The European branch of Otis Brothers & Company submitted a proposal, but this was rejected: the fair's charter ruled out the use of any foreign material in the construction of the tower. The deadline for bids was extended, but still no French companies put themselves forward, and eventually the contract was given to Otis in July 1887. Otis, confident that they would win the contract in the end, had already started creating designs.

The car was divided into two superimposed compartments, each holding 25 passengers, with the lift operator occupying an exterior platform on the first level. Motive power was provided by an inclined hydraulic ram 12.67 m (41 ft 7 in) long and 96.5 cm (38.0 in) in diameter in the tower leg, with a stroke of 10.83 m (35 ft 6 in): this moved a carriage carrying six sheaves. Five fixed sheaves were mounted higher up the leg, producing an arrangement similar to a block and tackle but acting in reverse, multiplying the stroke of the piston rather than the force generated. The hydraulic pressure in the driving cylinder was produced by a large open reservoir on the second level. After being exhausted from the cylinder, the water was pumped back up to the reservoir by two pumps in the machinery room at the base of the south leg. This reservoir also provided power to the lifts to the first level.
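A reversed block and tackle trades force for travel: with n load-bearing rope falls between the moving carriage and the fixed sheaves, the car moves roughly n times the ram's stroke. The snippet below is an illustrative sketch of the scale of that multiplication; the text gives the sheave counts but not the reeving, so the fall counts tried here are assumptions, not figures from the original drawings.

```python
# Reversed block and tackle: car travel = ram stroke x number of rope falls.
RAM_STROKE_M = 10.83  # stroke of the inclined hydraulic ram (from the text)

# Six moving and five fixed sheaves allow a reeving with on the order of
# a dozen falls; the exact number is an assumption here.
for falls in (10, 11, 12):
    print(f"{falls} falls -> car travel ~ {RAM_STROKE_M * falls:.0f} m")

# The second-level platform sits about 116 m above the ground, and the
# track along the curved leg is longer still, so a multiplication of this
# order is what the run to the second level requires.
```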
The original lifts for the journey between the second and third levels were supplied by Léon Edoux. A pair of 81 m (266 ft) hydraulic rams were mounted on the second level, reaching nearly halfway up to the third level. One lift car was mounted on top of these rams; cables ran from the top of this car up to sheaves on the third level and back down to a second car. Each car travelled only half the distance between the second and third levels, and passengers were required to change lifts halfway by means of a short gangway. The 10-ton cars each held 65 passengers.
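The geometry is easy to verify from the platform heights, which are well-known published figures rather than values given in this text:

```python
# Each Edoux car covered half the run between the second and third levels.
SECOND_LEVEL_M = 115.7  # platform height (published figure, assumed here)
THIRD_LEVEL_M = 276.1   # platform height (published figure, assumed here)

half_run_m = (THIRD_LEVEL_M - SECOND_LEVEL_M) / 2
print(f"half journey: {half_run_m:.1f} m")  # ~80 m, matching the 81 m rams
```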
Gustave Eiffel engraved on the tower the names of 72 French scientists, engineers and mathematicians in recognition of their contributions to the building of the tower. Eiffel chose this "invocation of science" because of his concern over the artists' protest. At the beginning of the 20th century the engravings were painted over, but they were restored in 1986–87 by the Société Nouvelle d'exploitation de la Tour Eiffel, a company operating the tower.

The tower is painted in three shades, lighter at the top and getting progressively darker towards the bottom to complement the Parisian sky. It was originally reddish brown; this changed in 1968 to a bronze colour known as "Eiffel Tower Brown". In what is expected to be a temporary change, the tower is being painted gold in commemoration of the upcoming 2024 Summer Olympics in Paris.

The only non-structural elements are the four decorative grill-work arches, added in Sauvestre's sketches, which served to make the tower look more substantial and to make a more impressive entrance to the exposition.

A pop-culture movie cliché is that the view from a Parisian window always includes the tower. In reality, since zoning restrictions limit the height of most buildings in Paris to seven storeys, only a small number of tall buildings have a clear view of the tower.

Maintenance of the tower includes applying 60 tonnes of paint every seven years to prevent it from rusting. The tower has been completely repainted at least 19 times since it was built. Lead paint was still being used as recently as 2001, when the practice was stopped out of concern for the environment.

The tower has been used for making radio transmissions since the beginning of the 20th century. Until the 1950s, sets of aerial wires ran from the cupola to anchors on the Avenue de Suffren and Champ de Mars; these were connected to longwave transmitters in small bunkers. In 1909, a permanent underground radio centre was built near the south pillar, which still exists today. On 20 November 1913, the Paris Observatory, using the Eiffel Tower as an aerial, exchanged wireless signals with the United States Naval Observatory, which used an aerial in Arlington County, Virginia; the object of the transmissions was to measure the difference in longitude between Paris and Washington, D.C. Today, radio and digital television signals are transmitted from the Eiffel Tower.

A television antenna was first installed on the tower in 1957, increasing its height by 18.7 m (61 ft). Work carried out in 2000 added a further 5.3 m (17 ft), giving a height of 324 m (1,063 ft), and the installation of a new digital radio antenna in March 2022 brought the tower to its present height of 330 m (1,083 ft). Analogue television signals from the Eiffel Tower ceased on 8 March 2011. The pinnacle height of the tower has changed multiple times over the years as flagpoles and antennas have been added, removed and replaced.

The Eiffel Tower was the world's tallest structure when completed in 1889, a distinction it retained until 1929, when the Chrysler Building in New York City was topped out. The tower also lost its standing as the world's tallest tower to the Tokyo Tower in 1958, but retains its status as the tallest freestanding (non-guyed) structure in France.

The nearest Paris Métro station is Bir-Hakeim and the nearest RER station is Champ de Mars-Tour Eiffel. The tower itself is located at the intersection of the quai Branly and the Pont d'Iéna.

More than 300 million people have visited the tower since it was completed in 1889. In 2015, there were 6.91 million visitors. The tower is the most-visited paid monument in the world. An average of 25,000 people ascend the tower every day, which can result in long queues.

The tower and its image have been in the public domain since 1993, 70 years after Eiffel's death. In June 1990, however, a French court ruled that a special lighting display on the tower in 1989, marking the tower's 100th anniversary, was an "original visual creation" protected by copyright. The Court of Cassation, France's judicial court of last resort, upheld the ruling in March 1992. The Société d'Exploitation de la Tour Eiffel (SETE) now considers any illumination of the tower to be a separate work of art that falls under copyright. As a result, SETE alleges that it is illegal to publish, for commercial use, contemporary photographs of the lit tower at night without permission in France and some other countries. For this reason, it is rare to find images or videos of the lit tower at night on stock image sites, and media outlets rarely broadcast images or videos of it.

The imposition of copyright has been controversial. The Director of Documentation for what was then called the Société Nouvelle d'exploitation de la Tour Eiffel (SNTE), Stéphane Dieu, commented in 2005: "It is really just a way to manage commercial use of the image, so that it isn't used in ways [of which] we don't approve". SNTE made over €1 million from copyright fees in 2002. However, the copyright could also be used to restrict the publication of tourist photographs of the tower at night, as well as to hinder non-profit and semi-commercial publication of images of the illuminated tower.

According to a 2014 article in the Art Law Journal, the copyright claim itself has never been tested in court, and there has never been an attempt to track down the millions of people who have posted and shared images of the illuminated tower on the Internet worldwide. The article added, however, that the situation may be different for commercial use of such images, for example in a magazine, on a film poster, or on product packaging.

French doctrine and jurisprudence allow pictures incorporating a copyrighted work as long as their presence is incidental or accessory to the subject being represented, a reasoning akin to the de minimis rule. Therefore, SETE may be unable to claim copyright on photographs of Paris which happen to include the lit tower.
As one of the most famous landmarks in the world, the Eiffel Tower has been the inspiration for the creation of many replicas and similar towers. An early example is Blackpool Tower in England: the mayor of Blackpool, Sir John Bickerstaffe, was so impressed on seeing the Eiffel Tower at the 1889 exposition that he commissioned a similar tower to be built in his town. It opened in 1894 and is 158.1 m (519 ft) tall. Tokyo Tower in Japan, built as a communications tower in 1958, was also inspired by the Eiffel Tower.

There are various scale models of the tower in the United States, including a half-scale version at the Paris Las Vegas in Nevada, one in Paris, Texas built in 1993, and two 1:3 scale models at the Kings Island (Mason, Ohio) and Kings Dominion (Virginia) amusement parks, opened in 1972 and 1975 respectively. Two further 1:3 scale models can be found in China; there is also one in Durango, Mexico, donated by the local French community, and several more stand across Europe.

In 2011, the TV show Pricing the Priceless on the National Geographic Channel speculated that a full-size replica of the tower would cost approximately US$480 million to build. This would be more than ten times the cost of the original (nearly 8 million in 1890 francs; roughly US$40 million in 2018 dollars).
[ { "paragraph_id": 0, "text": "The Eiffel Tower (/ˈaɪfəl/ EYE-fəl; French: Tour Eiffel [tuʁ ɛfɛl] ) is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower from 1887 to 1889.", "title": "" }, { "paragraph_id": 1, "text": "Locally nicknamed \"La dame de fer\" (French for \"Iron Lady\"), it was constructed as the centerpiece of the 1889 World's Fair, and to crown the centennial anniversary of the French Revolution. Although initially criticised by some of France's leading artists and intellectuals for its design, it has since become a global cultural icon of France and one of the most recognisable structures in the world. The tower received 5,889,000 visitors in 2022. The Eiffel Tower is the most visited monument with an entrance fee in the world: 6.91 million people ascended it in 2015. It was designated a monument historique in 1964, and was named part of a UNESCO World Heritage Site (\"Paris, Banks of the Seine\") in 1991.", "title": "" }, { "paragraph_id": 2, "text": "The tower is 330 metres (1,083 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct.", "title": "" }, { "paragraph_id": 3, "text": "The tower has three levels for visitors, with restaurants on the first and second levels. The top level's upper platform is 276 m (906 ft) above the ground – the highest observation deck accessible to the public in the European Union. Tickets can be purchased to ascend by stairs or lift to the first and second levels. The climb from ground level to the first level is over 300 steps, as is the climb from the first level to the second, making the entire ascent a 600 step climb. Although there is a staircase to the top level, it is usually accessible only by lift. On this top, third level is a private apartment built for Gustave Eiffel's private use. He decorated it with furniture by Jean Lachaise and invited friends such as Thomas Edison.", "title": "" }, { "paragraph_id": 4, "text": "The design of the Eiffel Tower is attributed to Maurice Koechlin and Émile Nouguier, two senior engineers working for the Compagnie des Établissements Eiffel. It was envisioned after discussion about a suitable centerpiece for the proposed 1889 Exposition Universelle, a world's fair to celebrate the centennial of the French Revolution. In May 1884, working at home, Koechlin made a sketch of their idea, described by him as \"a great pylon, consisting of four lattice girders standing apart at the base and coming together at the top, joined together by metal trusses at regular intervals\". Eiffel initially showed little enthusiasm, but he did approve further study, and the two engineers then asked Stephen Sauvestre, the head of the company's architectural department, to contribute to the design. 
Sauvestre added decorative arches to the base of the tower, a glass pavilion to the first level, and other embellishments.", "title": "History" }, { "paragraph_id": 5, "text": "The new version gained Eiffel's support: he bought the rights to the patent on the design which Koechlin, Nougier, and Sauvestre had taken out, and the design was put on display at the Exhibition of Decorative Arts in the autumn of 1884 under the company name. On 30 March 1885, Eiffel presented his plans to the Société des Ingénieurs Civils; after discussing the technical problems and emphasising the practical uses of the tower, he finished his talk by saying the tower would symbolise", "title": "History" }, { "paragraph_id": 6, "text": "[n]ot only the art of the modern engineer, but also the century of Industry and Science in which we are living, and for which the way was prepared by the great scientific movement of the eighteenth century and by the Revolution of 1789, to which this monument will be built as an expression of France's gratitude.", "title": "History" }, { "paragraph_id": 7, "text": "Little progress was made until 1886, when Jules Grévy was re-elected as president of France and Édouard Lockroy was appointed as minister for trade. A budget for the exposition was passed and, on 1 May, Lockroy announced an alteration to the terms of the open competition being held for a centrepiece to the exposition, which effectively made the selection of Eiffel's design a foregone conclusion, as entries had to include a study for a 300 m (980 ft) four-sided metal tower on the Champ de Mars. (A 300-metre tower was then considered a herculean engineering effort.) On 12 May, a commission was set up to examine Eiffel's scheme and its rivals, which, a month later, decided that all the proposals except Eiffel's were either impractical or lacking in details.", "title": "History" }, { "paragraph_id": 8, "text": "After some debate about the exact location of the tower, a contract was signed on 8 January 1887. Eiffel signed it acting in his own capacity rather than as the representative of his company, the contract granting him 1.5 million francs toward the construction costs: less than a quarter of the estimated 6.5 million francs. Eiffel was to receive all income from the commercial exploitation of the tower during the exhibition and for the next 20 years. He later established a separate company to manage the tower, putting up half the necessary capital himself.", "title": "History" }, { "paragraph_id": 9, "text": "A French bank, the Crédit Industriel et Commercial (CIC), helped finance the construction of the Eiffel Tower. During the period of the tower's construction, the CIC was acquiring funds from predatory loans to the National Bank of Haiti, some of which went towards the financing of the tower. These loans were connected to an indemnity controversy which saw France force Haiti's government to financially compensate French slaveowners for lost income as a result of the Haitian Revolution, and required Haiti to pay the CIC and its partner nearly half of all taxes collected on exports, \"effectively choking off the nation's primary source of income\". 
According to The New York Times, \"[at] a time when the [CIC] was helping finance one of the world's best-known landmarks, the Eiffel Tower, as a monument to French liberty, it was choking Haiti's economy, taking much of the young nation's income back to Paris and impairing its ability to start schools, hospitals and the other building blocks of an independent country.\"", "title": "History" }, { "paragraph_id": 10, "text": "The proposed tower had been a subject of controversy, drawing criticism from those who did not believe it was feasible and those who objected on artistic grounds. Prior to the Eiffel Tower's construction, no structure had ever been constructed to a height of 300 m, or even 200 m for that matter, and many people believed it was impossible. These objections were an expression of a long-standing debate in France about the relationship between architecture and engineering. It came to a head as work began at the Champ de Mars: a \"Committee of Three Hundred\" (one member for each metre of the tower's height) was formed, led by the prominent architect Charles Garnier and including some of the most important figures of the arts, such as William-Adolphe Bouguereau, Guy de Maupassant, Charles Gounod and Jules Massenet. A petition called \"Artists against the Eiffel Tower\" was sent to the Minister of Works and Commissioner for the Exposition, Adolphe Alphand, and it was published by Le Temps on 14 February 1887:", "title": "History" }, { "paragraph_id": 11, "text": "We, writers, painters, sculptors, architects and passionate devotees of the hitherto untouched beauty of Paris, protest with all our strength, with all our indignation in the name of slighted French taste, against the erection ... of this useless and monstrous Eiffel Tower ... To bring our arguments home, imagine for a moment a giddy, ridiculous tower dominating Paris like a gigantic black smokestack, crushing under its barbaric bulk Notre Dame, the Tour Saint-Jacques, the Louvre, the Dome of les Invalides, the Arc de Triomphe, all of our humiliated monuments will disappear in this ghastly dream. And for twenty years ... we shall see stretching like a blot of ink the hateful shadow of the hateful column of bolted sheet metal.", "title": "History" }, { "paragraph_id": 12, "text": "Gustave Eiffel responded to these criticisms by comparing his tower to the Egyptian pyramids: \"My tower will be the tallest edifice ever erected by man. Will it not also be grandiose in its way? And why would something admirable in Egypt become hideous and ridiculous in Paris?\" These criticisms were also dealt with by Édouard Lockroy in a letter of support written to Alphand, sardonically saying, \"Judging by the stately swell of the rhythms, the beauty of the metaphors, the elegance of its delicate and precise style, one can tell this protest is the result of collaboration of the most famous writers and poets of our time\", and he explained that the protest was irrelevant since the project had been decided upon months before, and construction on the tower was already under way.", "title": "History" }, { "paragraph_id": 13, "text": "Indeed, Garnier was a member of the Tower Commission that had examined the various proposals, and had raised no objection. 
Eiffel was similarly unworried, pointing out to a journalist that it was premature to judge the effect of the tower solely on the basis of the drawings, that the Champ de Mars was distant enough from the monuments mentioned in the protest for there to be little risk of the tower overwhelming them, and putting the aesthetic argument for the tower: \"Do not the laws of natural forces always conform to the secret laws of harmony?\"", "title": "History" }, { "paragraph_id": 14, "text": "Some of the protesters changed their minds when the tower was built; others remained unconvinced. Guy de Maupassant supposedly ate lunch in the tower's restaurant every day because it was the one place in Paris where the tower was not visible.", "title": "History" }, { "paragraph_id": 15, "text": "By 1918, it had become a symbol of Paris and of France after Guillaume Apollinaire wrote a nationalist poem in the shape of the tower (a calligram) to express his feelings about the war against Germany. Today, it is widely considered to be a remarkable piece of structural art, and is often featured in films and literature.", "title": "History" }, { "paragraph_id": 16, "text": "Work on the foundations started on 28 January 1887. Those for the east and south legs were straightforward, with each leg resting on four 2 m (6.6 ft) concrete slabs, one for each of the principal girders of each leg. The west and north legs, being closer to the river Seine, were more complicated: each slab needed two piles installed by using compressed-air caissons 15 m (49 ft) long and 6 m (20 ft) in diameter driven to a depth of 22 m (72 ft) to support the concrete slabs, which were 6 m (20 ft) thick. Each of these slabs supported a block of limestone with an inclined top to bear a supporting shoe for the ironwork.", "title": "History" }, { "paragraph_id": 17, "text": "Each shoe was anchored to the stonework by a pair of bolts 10 cm (4 in) in diameter and 7.5 m (25 ft) long. The foundations were completed on 30 June, and the erection of the ironwork began. The visible work on-site was complemented by the enormous amount of exacting preparatory work that took place behind the scenes: the drawing office produced 1,700 general drawings and 3,629 detailed drawings of the 18,038 different parts needed. The task of drawing the components was complicated by the complex angles involved in the design and the degree of precision required: the position of rivet holes was specified to within 1 mm (0.04 in) and angles worked out to one second of arc. The finished components, some already riveted together into sub-assemblies, arrived on horse-drawn carts from a factory in the nearby Parisian suburb of Levallois-Perret and were first bolted together, with the bolts being replaced with rivets as construction progressed. No drilling or shaping was done on site: if any part did not fit, it was sent back to the factory for alteration. In all, 18,038 pieces were joined using 2.5 million rivets.", "title": "History" }, { "paragraph_id": 18, "text": "At first, the legs were constructed as cantilevers, but about halfway to the first level construction was paused to create a substantial timber scaffold. This renewed concerns about the structural integrity of the tower, and sensational headlines such as \"Eiffel Suicide!\" and \"Gustave Eiffel Has Gone Mad: He Has Been Confined in an Asylum\" appeared in the tabloid press. Multiple famous artists of that time, Charles Garnier and Alexander Dumas, thought poorly of the newly made tower. 
Charles Garnier thought it was a \"truly tragic street lamp\". Alexander Dumas said that it was like \"Odius shadow of the odious column built of rivets and iron plates extending like a black blot\". There was multiple protests over the style and the reasoning of placing it in the middle of Paris. At this stage, a small \"creeper\" crane designed to move up the tower was installed in each leg. They made use of the guides for the lifts which were to be fitted in the four legs. The critical stage of joining the legs at the first level was completed by the end of March 1888. Although the metalwork had been prepared with the utmost attention to detail, provision had been made to carry out small adjustments to precisely align the legs; hydraulic jacks were fitted to the shoes at the base of each leg, capable of exerting a force of 800 tonnes, and the legs were intentionally constructed at a slightly steeper angle than necessary, being supported by sandboxes on the scaffold. Although construction involved 300 on-site employees, due to Eiffel's safety precautions and the use of movable gangways, guardrails and screens, only one person died.", "title": "History" }, { "paragraph_id": 19, "text": "The main structural work was completed at the end of March 1889 and, on 31 March, Eiffel celebrated by leading a group of government officials, accompanied by representatives of the press, to the top of the tower. Because the lifts were not yet in operation, the ascent was made by foot, and took over an hour, with Eiffel stopping frequently to explain various features. Most of the party chose to stop at the lower levels, but a few, including the structural engineer, Émile Nouguier, the head of construction, Jean Compagnon, the President of the City Council, and reporters from Le Figaro and Le Monde Illustré, completed the ascent. At 2:35 pm, Eiffel hoisted a large Tricolour to the accompaniment of a 25-gun salute fired at the first level.", "title": "History" }, { "paragraph_id": 20, "text": "There was still work to be done, particularly on the lifts and facilities, and the tower was not opened to the public until nine days after the opening of the exposition on 6 May; even then, the lifts had not been completed. The tower was an instant success with the public, and nearly 30,000 visitors made the 1,710-step climb to the top before the lifts entered service on 26 May. Tickets cost 2 francs for the first level, 3 for the second, and 5 for the top, with half-price admission on Sundays, and by the end of the exhibition there had been 1,896,987 visitors.", "title": "History" }, { "paragraph_id": 21, "text": "After dark, the tower was lit by hundreds of gas lamps, and a beacon sent out three beams of red, white and blue light. Two searchlights mounted on a circular rail were used to illuminate various buildings of the exposition. The daily opening and closing of the exposition were announced by a cannon at the top.", "title": "History" }, { "paragraph_id": 22, "text": "On the second level, the French newspaper Le Figaro had an office and a printing press, where a special souvenir edition, Le Figaro de la Tour, was made. There was also a pâtisserie.", "title": "History" }, { "paragraph_id": 23, "text": "At the top, there was a post office where visitors could send letters and postcards as a memento of their visit. Graffitists were also catered for: sheets of paper were mounted on the walls each day for visitors to record their impressions of the tower. 
Gustave Eiffel described the collection of responses as \"truly curious\".", "title": "History" }, { "paragraph_id": 24, "text": "Famous visitors to the tower included the Prince of Wales, Sarah Bernhardt, \"Buffalo Bill\" Cody (his Wild West show was an attraction at the exposition) and Thomas Edison. Eiffel invited Edison to his private apartment at the top of the tower, where Edison presented him with one of his phonographs, a new invention and one of the many highlights of the exposition. Edison signed the guestbook with this message:", "title": "History" }, { "paragraph_id": 25, "text": "To M Eiffel the Engineer the brave builder of so gigantic and original specimen of modern Engineering from one who has the greatest respect and admiration for all Engineers including the Great Engineer the Bon Dieu, Thomas Edison.", "title": "History" }, { "paragraph_id": 26, "text": "Eiffel made use of his apartment at the top of the tower to carry out meteorological observations, and also used the tower to perform experiments on the action of air resistance on falling bodies.", "title": "History" }, { "paragraph_id": 27, "text": "Eiffel had a permit for the tower to stand for 20 years. It was to be dismantled in 1909, when its ownership would revert to the City of Paris. The city had planned to tear it down (part of the original contest rules for designing a tower was that it should be easy to dismantle) but as the tower proved to be valuable for many innovations in the early 20th century, particularly radio telegraphy, it was allowed to remain after the expiry of the permit, and from 1910 it also became part of the International Time Service.", "title": "History" }, { "paragraph_id": 28, "text": "For the 1900 Exposition Universelle, the lifts in the east and west legs were replaced by lifts running as far as the second level constructed by the French firm Fives-Lille. These had a compensating mechanism to keep the floor level as the angle of ascent changed at the first level, and were driven by a similar hydraulic mechanism as the Otis lifts, although this was situated at the base of the tower. Hydraulic pressure was provided by pressurised accumulators located near this mechanism. At the same time the lift in the north pillar was removed and replaced by a staircase to the first level. The layout of both first and second levels was modified, with the space available for visitors on the second level. The original lift in the south pillar was removed 13 years later.", "title": "History" }, { "paragraph_id": 29, "text": "On 19 October 1901, Alberto Santos-Dumont, flying his No.6 airship, won a 100,000-franc prize offered by Henri Deutsch de la Meurthe for the first person to make a flight from St. Cloud to the Eiffel Tower and back in less than half an hour.", "title": "History" }, { "paragraph_id": 30, "text": "In 1910, Father Theodor Wulf measured radiant energy at the top and bottom of the tower. He found more at the top than expected, incidentally discovering what are known today as cosmic rays. Two years later, on 4 February 1912, Austrian tailor Franz Reichelt died after jumping from the first level of the tower (a height of 57 m) to demonstrate his parachute design. In 1914, at the outbreak of World War I, a radio transmitter located in the tower jammed German radio communications, seriously hindering their advance on Paris and contributing to the Allied victory at the First Battle of the Marne. 
From 1925 to 1934, illuminated signs for Citroën adorned three of the tower's sides, making it the tallest advertising space in the world at the time. In April 1935, the tower was used to make experimental low-resolution television transmissions, using a shortwave transmitter of 200 watts power. On 17 November, an improved 180-line transmitter was installed.", "title": "History" }, { "paragraph_id": 31, "text": "On two separate but related occasions in 1925, the con artist Victor Lustig \"sold\" the tower for scrap metal. A year later, in February 1926, pilot Leon Collet was killed trying to fly under the tower. His aircraft became entangled in an aerial belonging to a wireless station. A bust of Gustave Eiffel by Antoine Bourdelle was unveiled at the base of the north leg on 2 May 1929. In 1930, the tower lost the title of the world's tallest structure when the Chrysler Building in New York City was completed. In 1938, the decorative arcade around the first level was removed.", "title": "History" }, { "paragraph_id": 32, "text": "Upon the German occupation of Paris in 1940, the lift cables were cut by the French. The tower was closed to the public during the occupation and the lifts were not repaired until 1946. In 1940, German soldiers had to climb the tower to hoist a swastika-centered Reichskriegsflagge, but the flag was so large it blew away just a few hours later, and was replaced by a smaller one. When visiting Paris, Hitler chose to stay on the ground. When the Allies were nearing Paris in August 1944, Hitler ordered General Dietrich von Choltitz, the military governor of Paris, to demolish the tower along with the rest of the city. Von Choltitz disobeyed the order. On 25 August, before the Germans had been driven out of Paris, the German flag was replaced with a Tricolour by two men from the French Naval Museum, who narrowly beat three men led by Lucien Sarniguet, who had lowered the Tricolour on 13 June 1940 when Paris fell to the Germans.", "title": "History" }, { "paragraph_id": 33, "text": "A fire started in the television transmitter on 3 January 1956, damaging the top of the tower. Repairs took a year, and in 1957, the present radio aerial was added to the top. In 1964, the Eiffel Tower was officially declared to be a historical monument by the Minister of Cultural Affairs, André Malraux. A year later, an additional lift system was installed in the north pillar.", "title": "History" }, { "paragraph_id": 34, "text": "According to interviews, in 1967, Montreal Mayor Jean Drapeau negotiated a secret agreement with Charles de Gaulle for the tower to be dismantled and temporarily relocated to Montreal to serve as a landmark and tourist attraction during Expo 67. The plan was allegedly vetoed by the company operating the tower out of fear that the French government could refuse permission for the tower to be restored in its original location.", "title": "History" }, { "paragraph_id": 35, "text": "In 1982, the original lifts between the second and third levels were replaced after 97 years in service. These had been closed to the public between November and March because the water in the hydraulic drive tended to freeze. The new cars operate in pairs, with one counterbalancing the other, and perform the journey in one stage, reducing the journey time from eight minutes to less than two minutes. At the same time, two new emergency staircases were installed, replacing the original spiral staircases. 
In 1983, the south pillar was fitted with an electrically driven Otis lift to serve the Jules Verne restaurant. The Fives-Lille lifts in the east and west legs, fitted in 1899, were extensively refurbished in 1986. The cars were replaced, and a computer system was installed to completely automate the lifts. The motive power was moved from the water hydraulic system to a new electrically driven oil-filled hydraulic system, and the original water hydraulics were retained solely as a counterbalance system. A service lift was added to the south pillar for moving small loads and maintenance personnel three years later.", "title": "History" }, { "paragraph_id": 36, "text": "Robert Moriarty flew a Beechcraft Bonanza under the tower on 31 March 1984. In 1987, A. J. Hackett made one of his first bungee jumps from the top of the Eiffel Tower, using a special cord he had helped develop. Hackett was arrested by the police. On 27 October 1991, Thierry Devaux, along with mountain guide Hervé Calvayrac, performed a series of acrobatic figures while bungee jumping from the second floor of the tower. Facing the Champ de Mars, Devaux used an electric winch between figures to go back up to the second floor. When firemen arrived, he stopped after the sixth jump.", "title": "History" }, { "paragraph_id": 37, "text": "For its \"Countdown to the Year 2000\" celebration on 31 December 1999, flashing lights and high-powered searchlights were installed on the tower. During the last three minutes of the year, the lights were turned on starting from the base of the tower and continuing to the top to welcome 2000 with a huge fireworks show. An exhibition above a cafeteria on the first floor commemorates this event. The searchlights on top of the tower made it a beacon in Paris's night sky, and 20,000 flashing bulbs gave the tower a sparkly appearance for five minutes every hour on the hour.", "title": "History" }, { "paragraph_id": 38, "text": "The lights sparkled blue for several nights to herald the new millennium on 31 December 2000. The sparkly lighting continued for 18 months until July 2001. The sparkling lights were turned on again on 21 June 2003, and the display was planned to last for 10 years before they needed replacing.", "title": "History" }, { "paragraph_id": 39, "text": "The tower received its 200,000,000th guest on 28 November 2002. The tower has operated at its maximum capacity of about 7 million visitors per year since 2003. In 2004, the Eiffel Tower began hosting a seasonal ice rink on the first level. A glass floor was installed on the first level during the 2014 refurbishment.", "title": "History" }, { "paragraph_id": 40, "text": "The puddle iron (wrought iron) of the Eiffel Tower weighs 7,300 tonnes, and the addition of lifts, shops and antennae have brought the total weight to approximately 10,100 tonnes. As a demonstration of the economy of design, if the 7,300 tonnes of metal in the structure were melted down, it would fill the square base, 125 metres (410 ft) on each side, to a depth of only 6.25 cm (2.46 in) assuming the density of the metal to be 7.8 tonnes per cubic metre. Additionally, a cubic box surrounding the tower (324 m × 125 m × 125 m) would contain 6,200 tonnes of air, weighing almost as much as the iron itself. 
Depending on the ambient temperature, the top of the tower may shift away from the sun by up to 18 cm (7 in) due to thermal expansion of the metal on the side facing the sun.", "title": "Design" }, { "paragraph_id": 41, "text": "When it was built, many were shocked by the tower's daring form. Eiffel was accused of trying to create something artistic with no regard to the principles of engineering. However, Eiffel and his team – experienced bridge builders – understood the importance of wind forces, and knew that if they were going to build the tallest structure in the world, they had to be sure it could withstand them. In an interview with the newspaper Le Temps published on 14 February 1887, Eiffel said:", "title": "Design" }, { "paragraph_id": 42, "text": "Is it not true that the very conditions which give strength also conform to the hidden rules of harmony? ... Now to what phenomenon did I have to give primary concern in designing the Tower? It was wind resistance. Well then! I hold that the curvature of the monument's four outer edges, which is as mathematical calculation dictated it should be ... will give a great impression of strength and beauty, for it will reveal to the eyes of the observer the boldness of the design as a whole.", "title": "Design" }, { "paragraph_id": 43, "text": "He used graphical methods to determine the strength of the tower and empirical evidence to account for the effects of wind, rather than a mathematical formula. Close examination of the tower reveals a basically exponential shape. All parts of the tower were overdesigned to ensure maximum resistance to wind forces. The top half was even assumed to have no gaps in the latticework. In the years since it was completed, engineers have put forward various mathematical hypotheses in an attempt to explain the success of the design. The most recent, devised in 2004 after letters sent by Eiffel to the French Society of Civil Engineers in 1885 were translated into English, is described as a non-linear integral equation based on counteracting the wind pressure on any point of the tower with the tension between the construction elements at that point.", "title": "Design" }, { "paragraph_id": 44, "text": "The Eiffel Tower sways by up to 9 cm (3.5 in) in the wind.", "title": "Design" }, { "paragraph_id": 45, "text": "The four columns of the tower each house access stairs and elevators to the first two floors, while at the south column only the elevator to the second floor restaurant is publicly accessible.", "title": "Design" }, { "paragraph_id": 46, "text": "The first floor is publicly accessible by elevator or stairs. When originally built, the first level contained three restaurants – one French, one Russian and one Flemish — and an \"Anglo-American Bar\". After the exposition closed, the Flemish restaurant was converted to a 250-seat theatre. Today there is the Le 58 Tour Eiffel restaurant and other facilities.", "title": "Design" }, { "paragraph_id": 47, "text": "The second floor is publicly accessible by elevator or stairs and has a restaurant called Le Jules Verne, a gourmet restaurant with its own lift going up from the south column to the second level. This restaurant has one star in the Michelin Red Guide. It was run by the multi-Michelin star chef Alain Ducasse from 2007 to 2017. As of May 2019, it is managed by three-star chef Frédéric Anton. 
It owes its name to the famous science-fiction writer Jules Verne.", "title": "Design" }, { "paragraph_id": 48, "text": "The third floor is the top floor, publicly accessible by elevator.", "title": "Design" }, { "paragraph_id": 49, "text": "Originally there were laboratories for various experiments, and a small apartment reserved for Gustave Eiffel to entertain guests, which is now open to the public, complete with period decorations and lifelike mannequins of Eiffel and some of his notable guests.", "title": "Design" }, { "paragraph_id": 50, "text": "From 1937 until 1981, there was a restaurant near the top of the tower. It was removed due to structural considerations; engineers had determined it was too heavy and was causing the tower to sag. This restaurant was sold to an American restaurateur and transported to New York and then New Orleans. It was rebuilt on the edge of New Orleans' Garden District as a restaurant and later event hall. Today there is a champagne bar.", "title": "Design" }, { "paragraph_id": 51, "text": "The arrangement of the lifts has been changed several times during the tower's history. Given the elasticity of the cables and the time taken to align the cars with the landings, each lift, in normal service, takes an average of 8 minutes and 50 seconds to do the round trip, spending an average of 1 minute and 15 seconds at each level. The average journey time between levels is 1 minute. The original hydraulic mechanism is on public display in a small museum at the base of the east and west legs. Because the mechanism requires frequent lubrication and maintenance, public access is often restricted. The rope mechanism of the north tower can be seen as visitors exit the lift.", "title": "Design" }, { "paragraph_id": 52, "text": "Equipping the tower with adequate and safe passenger lifts was a major concern of the government commission overseeing the Exposition. Although some visitors could be expected to climb to the first level, or even the second, lifts clearly had to be the main means of ascent.", "title": "Design" }, { "paragraph_id": 53, "text": "Constructing lifts to reach the first level was relatively straightforward: the legs were wide enough at the bottom and so nearly straight that they could contain a straight track, and a contract was given to the French company Roux, Combaluzier & Lepape for two lifts to be fitted in the east and west legs. Roux, Combaluzier & Lepape used a pair of endless chains with rigid, articulated links to which the car was attached. Lead weights on some links of the upper or return sections of the chains counterbalanced most of the car's weight. The car was pushed up from below, not pulled up from above: to prevent the chain buckling, it was enclosed in a conduit. At the bottom of the run, the chains passed around 3.9 m (12 ft 10 in) diameter sprockets. Smaller sprockets at the top guided the chains.", "title": "Design" }, { "paragraph_id": 54, "text": "Installing lifts to the second level was more of a challenge because a straight track was impossible. No French company wanted to undertake the work. The European branch of Otis Brothers & Company submitted a proposal but this was rejected: the fair's charter ruled out the use of any foreign material in the construction of the tower. The deadline for bids was extended but still no French companies put themselves forward, and eventually the contract was given to Otis in July 1887. 
Otis were confident they would eventually be given the contract and had already started creating designs.", "title": "Design" }, { "paragraph_id": 55, "text": "The car was divided into two superimposed compartments, each holding 25 passengers, with the lift operator occupying an exterior platform on the first level. Motive power was provided by an inclined hydraulic ram 12.67 m (41 ft 7 in) long and 96.5 cm (38.0 in) in diameter in the tower leg with a stroke of 10.83 m (35 ft 6 in): this moved a carriage carrying six sheaves. Five fixed sheaves were mounted higher up the leg, producing an arrangement similar to a block and tackle but acting in reverse, multiplying the stroke of the piston rather than the force generated. The hydraulic pressure in the driving cylinder was produced by a large open reservoir on the second level. After being exhausted from the cylinder, the water was pumped back up to the reservoir by two pumps in the machinery room at the base of the south leg. This reservoir also provided power to the lifts to the first level.", "title": "Design" }, { "paragraph_id": 56, "text": "The original lifts for the journey between the second and third levels were supplied by Léon Edoux. A pair of 81 m (266 ft) hydraulic rams were mounted on the second level, reaching nearly halfway up to the third level. One lift car was mounted on top of these rams: cables ran from the top of this car up to sheaves on the third level and back down to a second car. Each car travelled only half the distance between the second and third levels and passengers were required to change lifts halfway by means of a short gangway. The 10-ton cars each held 65 passengers.", "title": "Design" }, { "paragraph_id": 57, "text": "Gustave Eiffel engraved on the tower the names of 72 French scientists, engineers and mathematicians in recognition of their contributions to the building of the tower. Eiffel chose this \"invocation of science\" because of his concern over the artists' protest. At the beginning of the 20th century, the engravings were painted over, but they were restored in 1986–87 by the Société Nouvelle d'exploitation de la Tour Eiffel, a company operating the tower.", "title": "Design" }, { "paragraph_id": 58, "text": "The tower is painted in three shades: lighter at the top, getting progressively darker towards the bottom to complement the Parisian sky. It was originally reddish brown; this changed in 1968 to a bronze colour known as \"Eiffel Tower Brown\". In what is expected to be a temporary change, the tower is being painted gold in commemoration of the upcoming 2024 Summer Olympics in Paris.", "title": "Design" }, { "paragraph_id": 59, "text": "The only non-structural elements are the four decorative grill-work arches, added in Sauvestre's sketches, which served to make the tower look more substantial and to make a more impressive entrance to the exposition.", "title": "Design" }, { "paragraph_id": 60, "text": "A pop-culture movie cliché is that the view from a Parisian window always includes the tower. In reality, since zoning restrictions limit the height of most buildings in Paris to seven storeys, only a small number of tall buildings have a clear view of the tower.", "title": "Design" }, { "paragraph_id": 61, "text": "Maintenance of the tower includes applying 60 tons of paint every seven years to prevent it from rusting. The tower has been completely repainted at least 19 times since it was built. 
Lead paint was still being used as recently as 2001 when the practice was stopped out of concern for the environment.", "title": "Design" }, { "paragraph_id": 62, "text": "The tower has been used for making radio transmissions since the beginning of the 20th century. Until the 1950s, sets of aerial wires ran from the cupola to anchors on the Avenue de Suffren and Champ de Mars. These were connected to longwave transmitters in small bunkers. In 1909, a permanent underground radio centre was built near the south pillar, which still exists today. On 20 November 1913, the Paris Observatory, using the Eiffel Tower as an aerial, exchanged wireless signals with the United States Naval Observatory, which used an aerial in Arlington County, Virginia. The object of the transmissions was to measure the difference in longitude between Paris and Washington, D.C. Today, radio and digital television signals are transmitted from the Eiffel Tower.", "title": "Communications" }, { "paragraph_id": 63, "text": "A television antenna was first installed on the tower in 1957, increasing its height by 18.7 m (61 ft). Work carried out in 2000 added a further 5.3 m (17 ft), giving the current height of 324 m (1,063 ft). Analogue television signals from the Eiffel Tower ceased on 8 March 2011.", "title": "Communications" }, { "paragraph_id": 64, "text": "The pinnacle height of the Eiffel Tower has changed multiple times over the years as described in the chart below.", "title": "Dimensions" }, { "paragraph_id": 65, "text": "The Eiffel Tower was the world's tallest structure when completed in 1889, a distinction it retained until 1929 when the Chrysler Building in New York City was topped out. The tower also lost its standing as the world's tallest tower to the Tokyo Tower in 1958 but retains its status as the tallest freestanding (non-guyed) structure in France.", "title": "Taller structures" }, { "paragraph_id": 66, "text": "The nearest Paris Métro station is Bir-Hakeim and the nearest RER station is Champ de Mars-Tour Eiffel. The tower itself is located at the intersection of the quai Branly and the Pont d'Iéna.", "title": "Tourism" }, { "paragraph_id": 67, "text": "More than 300 million people have visited the tower since it was completed in 1889. In 2015, there were 6.91 million visitors. The tower is the most-visited paid monument in the world. An average of 25,000 people ascend the tower every day (which can result in long queues).", "title": "Tourism" }, { "paragraph_id": 68, "text": "The tower and its image have been in the public domain since 1993, 70 years after Eiffel's death. In June 1990 a French court ruled that a special lighting display on the tower in 1989 to mark the tower's 100th anniversary was an \"original visual creation\" protected by copyright. The Court of Cassation, France's judicial court of last resort, upheld the ruling in March 1992. The Société d'Exploitation de la Tour Eiffel (SETE) now considers any illumination of the tower to be a separate work of art that falls under copyright. As a result, the SNTE alleges that it is illegal to publish contemporary photographs of the lit tower at night without permission in France and some other countries for commercial use. For this reason, it is often rare to find images or videos of the lit tower at night on stock image sites, and media outlets rarely broadcast images or videos of it.", "title": "Illumination copyright" }, { "paragraph_id": 69, "text": "The imposition of copyright has been controversial. 
The Director of Documentation for what was then called the Société Nouvelle d'exploitation de la Tour Eiffel (SNTE), Stéphane Dieu, commented in 2005: \"It is really just a way to manage commercial use of the image, so that it isn't used in ways [of which] we don't approve\". SNTE made over €1 million from copyright fees in 2002. However, it could also be used to restrict the publication of tourist photographs of the tower at night, as well as hindering non-profit and semi-commercial publication of images of the illuminated tower.", "title": "Illumination copyright" }, { "paragraph_id": 70, "text": "The copyright claim itself has never been tested in courts to date, according to a 2014 article in the Art Law Journal, and there has never been an attempt to track down millions of people who have posted and shared their images of the illuminated tower on the Internet worldwide. It added, however, that permissive situation may arise on commercial use of such images, like in a magazine, on a film poster, or on product packaging.", "title": "Illumination copyright" }, { "paragraph_id": 71, "text": "French doctrine and jurisprudence allows pictures incorporating a copyrighted work as long as their presence is incidental or accessory to the subject being represented, a reasoning akin to the de minimis rule. Therefore, SETE may be unable to claim copyright on photographs of Paris which happen to include the lit tower.", "title": "Illumination copyright" }, { "paragraph_id": 72, "text": "As one of the most famous landmarks in the world, the Eiffel Tower has been the inspiration for the creation of many replicas and similar towers. An early example is Blackpool Tower in England. The mayor of Blackpool, Sir John Bickerstaffe, was so impressed on seeing the Eiffel Tower at the 1889 exposition that he commissioned a similar tower to be built in his town. It opened in 1894 and is 158.1 m (519 ft) tall. Tokyo Tower in Japan, built as a communications tower in 1958, was also inspired by the Eiffel Tower.", "title": "Replicas" }, { "paragraph_id": 73, "text": "There are various scale models of the tower in the United States, including a half-scale version at the Paris Las Vegas, Nevada, one in Paris, Texas built in 1993, and two 1:3 scale models at Kings Island, located in Mason, Ohio, and Kings Dominion, Virginia, amusement parks opened in 1972 and 1975 respectively. Two 1:3 scale models can be found in China, one in Durango, Mexico that was donated by the local French community, and several across Europe.", "title": "Replicas" }, { "paragraph_id": 74, "text": "In 2011, the TV show Pricing the Priceless on the National Geographic Channel speculated that a full-size replica of the tower would cost approximately US$480 million to build. This would be more than ten times the cost of the original (nearly 8 million in 1890 Francs; ~US$40 million in 2018 dollars).", "title": "Replicas" } ]
The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, France. It is named after the engineer Gustave Eiffel, whose company designed and built the tower from 1887 to 1889. Locally nicknamed "La dame de fer", it was constructed as the centerpiece of the 1889 World's Fair, and to crown the centennial anniversary of the French Revolution. Although initially criticised by some of France's leading artists and intellectuals for its design, it has since become a global cultural icon of France and one of the most recognisable structures in the world. The tower received 5,889,000 visitors in 2022. The Eiffel Tower is the most visited monument with an entrance fee in the world: 6.91 million people ascended it in 2015. It was designated a monument historique in 1964, and was named part of a UNESCO World Heritage Site in 1991. The tower is 330 metres (1,083 ft) tall, about the same height as an 81-storey building, and the tallest structure in Paris. Its base is square, measuring 125 metres (410 ft) on each side. During its construction, the Eiffel Tower surpassed the Washington Monument to become the tallest human-made structure in the world, a title it held for 41 years until the Chrysler Building in New York City was finished in 1930. It was the first structure in the world to surpass both the 200-metre and 300-metre mark in height. Due to the addition of a broadcasting aerial at the top of the tower in 1957, it is now taller than the Chrysler Building by 5.2 metres (17 ft). Excluding transmitters, the Eiffel Tower is the second tallest free-standing structure in France after the Millau Viaduct. The tower has three levels for visitors, with restaurants on the first and second levels. The top level's upper platform is 276 m (906 ft) above the ground – the highest observation deck accessible to the public in the European Union. Tickets can be purchased to ascend by stairs or lift to the first and second levels. The climb from ground level to the first level is over 300 steps, as is the climb from the first level to the second, making the entire ascent a 600 step climb. Although there is a staircase to the top level, it is usually accessible only by lift. On this top, third level is a private apartment built for Gustave Eiffel's private use. He decorated it with furniture by Jean Lachaise and invited friends such as Thomas Edison.
2001-04-23T16:45:46Z
2023-12-27T18:58:14Z
[ "Template:Redirect-multi", "Template:Ill", "Template:Sister project links", "Template:Visitor attractions in Paris", "Template:Structurae", "Template:Supertall", "Template:About", "Template:Use British English", "Template:IPAc-en", "Template:Not a typo", "Template:Portal", "Template:Official website", "Template:Use dmy dates", "Template:Infobox building", "Template:Lang", "Template:Cite journal", "Template:S-start", "Template:Convert", "Template:Blockquote", "Template:Citation needed", "Template:Citation", "Template:Cite book", "Template:Cbignore", "Template:S-end", "Template:1900 Paris Exposition", "Template:Olympic venues volleyball", "Template:Cite episode", "Template:S-aft", "Template:Short description", "Template:Lang-fr", "Template:Nowrap", "Template:Main", "Template:Reflist", "Template:Cite web", "Template:1889 Paris Universal Exposition", "Template:Authority control", "Template:Pp-semi-indef", "Template:Respell", "Template:IPA-fr", "Template:Cite newspaper The Times", "Template:1937 Paris International Exposition of Arts and Techniques Applied to Modern Life", "Template:2024 Summer Olympic Venues", "Template:Further", "Template:Cite news", "Template:Cite magazine", "Template:S-ttl", "Template:7th arrondissement of Paris", "Template:Pp-move", "Template:Clear", "Template:Panorama", "Template:S-ach", "Template:S-bef" ]
https://en.wikipedia.org/wiki/Eiffel_Tower
9,235
Ethical egoism
In ethical philosophy, ethical egoism is the normative position that moral agents ought to act in their own self-interest. It differs from psychological egoism, which claims that people can only act in their self-interest. Ethical egoism also differs from rational egoism, which holds that it is rational to act in one's self-interest. Ethical egoism holds, therefore, that actions whose consequences will benefit the doer are ethical. Ethical egoism contrasts with ethical altruism, which holds that moral agents have an obligation to help others. Egoism and altruism both contrast with ethical utilitarianism, which holds that a moral agent should treat one's self (also known as the subject) with no higher regard than one has for others, unlike egoism, which elevates self-interests and "the self" to a status not granted to others. Utilitarianism also holds, however, that one is not obligated to sacrifice one's own interests to help others' interests, as altruism demands, so long as one's own interests (i.e., one's own desires or well-being) are substantially equivalent to the others' interests and well-being; one nevertheless has the choice to do so. Egoism, utilitarianism, and altruism are all forms of consequentialism, but egoism and altruism contrast with utilitarianism in that egoism and altruism are both agent-focused forms of consequentialism (i.e., subject-focused or subjective). Utilitarianism, by contrast, is held to be agent-neutral (i.e., objective and impartial): it does not treat the subject's (i.e., the moral agent's) own interests as being more or less important than the interests, desires, or well-being of others. Ethical egoism does not, however, require moral agents to harm the interests and well-being of others in moral deliberation; e.g., what is in an agent's self-interest may be incidentally detrimental, beneficial, or neutral in its effect on others. Individualism allows for others' interests and well-being to be disregarded or not, as long as what is chosen is efficacious in satisfying the self-interest of the agent. Nor does ethical egoism necessarily entail that, in pursuing self-interest, one ought always to do what one wants to do; e.g., in the long term, the fulfillment of short-term desires may prove detrimental to the self. Fleeting pleasure, then, takes a back seat to protracted eudaimonia. In the words of James Rachels, "Ethical egoism ... endorses selfishness, but it doesn't endorse foolishness." Ethical egoism is often used as the philosophical basis for support of right-libertarianism and individualist anarchism. These are political positions based partly on a belief that individuals should not coercively prevent others from exercising freedom of action. Ethical egoism can be broadly divided into three categories: individual, personal, and universal. An individual ethical egoist would hold that all people should do whatever benefits "my" (the individual's) self-interest; a personal ethical egoist would hold that they should act in their self-interest, but would make no claims about what anyone else ought to do; a universal ethical egoist would argue that everyone should act in ways that are in their self-interest. Ethical egoism was introduced by the philosopher Henry Sidgwick in his book The Methods of Ethics, written in 1874. Sidgwick compared egoism to the philosophy of utilitarianism, writing that whereas utilitarianism sought to maximize overall pleasure, egoism focused only on maximizing individual pleasure. 
Philosophers before Sidgwick have also retroactively been identified as ethical egoists. One ancient example is the philosophy of Yang Zhu (4th century BC), whose doctrine of Yangism views wei wo, or "everything for myself", as the only virtue necessary for self-cultivation. Ancient Greek philosophers like Plato, Aristotle and the Stoics were exponents of virtue ethics, and "did not accept the formal principle that whatever the good is, we should seek only our own good, or prefer it to the good of others." However, the beliefs of the Cyrenaics have been referred to as a "form of egoistic hedonism", and while some refer to Epicurus' hedonism as a form of virtue ethics, others argue his ethics are more properly described as ethical egoism. Philosopher James Rachels, in an essay that takes as its title the theory's name, outlines the three arguments most commonly touted in its favor. It has been argued that extreme ethical egoism is self-defeating. Faced with a situation of limited resources, egoists would consume as much of the resource as they could, making the overall situation worse for everybody. Egoists may respond that if the situation becomes worse for everybody, that would include the egoist, so it is not, in fact, in their rational self-interest to take things to such extremes. However, the (unregulated) tragedy of the commons and the (one-off) prisoner's dilemma are cases in which, on the one hand, it is rational for an individual to seek to take as much as possible even though that makes things worse for everybody, and on the other hand, those cases are not self-refuting since that behaviour remains rational even though it is ultimately self-defeating; that is, being self-defeating does not imply being self-refuting. Egoists might respond that a tragedy of the commons, however, assumes some degree of public land. That is, a commons forbidding homesteading requires regulation. Thus, an argument against the tragedy of the commons, in this belief system, is fundamentally an argument for private property rights and the system that recognizes both property rights and rational self-interest—capitalism. More generally, egoists might say that an increasing respect for individual rights uniquely allows for increasing wealth creation and increasing usable resources despite a fixed amount of raw materials (e.g. the West pre-1776 versus post-1776, East versus West Germany, Hong Kong versus mainland China, North versus South Korea, etc.). It is not clear how to apply a private ownership model to many examples of "commons", however. Examples include large fisheries, the atmosphere and the ocean. Some perhaps decisive problems with ethical egoism have been pointed out. One is that an ethical egoist would not want ethical egoism to be universalized: as it would be in the egoist's best self-interest if others acted altruistically towards them, they wouldn't want them to act egoistically; however, that is what they consider to be morally binding. Their moral principles would demand of others not to follow them, which can be considered self-defeating and leads to the question: "How can ethical egoism be considered morally binding if its advocates do not want it to be universally applied?" Another objection (e.g. 
by James Rachels) states that the distinction ethical egoism makes between "yourself" and "the rest" – demanding to view the interests of "yourself" as more important – is arbitrary, as no justification for it can be offered; considering that the merits and desires of "the rest" are comparable to those of "yourself" while lacking a justifiable distinction, Rachels concludes that "the rest" should be given the same moral consideration as "yourself". The term ethical egoism has been applied retroactively to philosophers such as Bernard de Mandeville and to many other materialists of his generation, although none of them declared themselves to be egoists. Note that materialism does not necessarily imply egoism, as indicated by Karl Marx, and the many other materialists who espoused forms of collectivism. It has been argued that ethical egoism can lend itself to individualist anarchism such as that of Benjamin Tucker, or the combined anarcho-communism and egoism of Emma Goldman, both of whom were proponents of many egoist ideas put forward by Max Stirner. In this context, egoism is another way of describing the sense that the common good should be enjoyed by all. However, most notable anarchists in history have been less radical, retaining altruism and a sense of the importance of the individual that is appreciable but does not go as far as egoism. Recent trends to greater appreciation of egoism within anarchism tend to come from less classical directions such as post-left anarchy or Situationism (e.g. Raoul Vaneigem). Egoism has also been referenced by anarcho-capitalists, such as Murray Rothbard. Philosopher Max Stirner, in his book The Ego and Its Own, was the first philosopher to call himself an egoist, though his writing makes clear that he desired not a new idea of morality (ethical egoism), but rather a rejection of morality (amoralism), as a nonexistent and limiting "spook"; for this, Stirner has been described as the first individualist anarchist. Other philosophers, such as Thomas Hobbes and David Gauthier, have argued that the conflicts which arise when people each pursue their own ends can be resolved for the best of each individual only if they all voluntarily forgo some of their aims—that is, one's self-interest is often best pursued by allowing others to pursue their self-interest as well so that liberty is equal among individuals. Sacrificing one's short-term self-interest to maximize one's long-term self-interest is one form of "rational self-interest" which is the idea behind most philosophers' advocacy of ethical egoism. Egoists have also argued that one's actual interests are not immediately obvious, and that the pursuit of self-interest involves more than merely the acquisition of some good, but the maximizing of one's chances of survival and/or happiness. Philosopher Friedrich Nietzsche suggested that egoistic or "life-affirming" behavior stimulates jealousy or "ressentiment" in others, and that this is the psychological motive for the altruism in Christianity. Sociologist Helmut Schoeck similarly considered envy the motive of collective efforts by society to reduce the disproportionate gains of successful individuals through moral or legal constraints, with altruism being primary among these. In addition, Nietzsche (in Beyond Good and Evil) and Alasdair MacIntyre (in After Virtue) have pointed out that the ancient Greeks did not associate morality with altruism in the way that post-Christian Western civilization has done. 
Aristotle's view is that we have duties to ourselves as well as to other people (e.g. friends) and to the polis as a whole. The same is true for Thomas Aquinas, Christian Wolff and Immanuel Kant, who, like Aristotle, claim that there are duties to ourselves, although it has been argued that, for Aristotle, the duty to one's self is primary. Ayn Rand argued that there is a positive harmony of interests among free, rational humans, such that no moral agent can rationally coerce another person consistently with their own long-term self-interest. Rand argued that other people are an enormous value to an individual's well-being (through education, trade and affection), but also that this value could be fully realized only under conditions of political and economic freedom. According to Rand, voluntary trade alone can assure that human interaction is mutually beneficial. Rand's student Leonard Peikoff has argued that the identification of one's interests itself is impossible absent the use of principles, and that self-interest cannot be consistently pursued absent a consistent adherence to certain ethical principles. Recently, Rand's position has also been defended by such writers as Tara Smith, Tibor Machan, Allan Gotthelf, David Kelley, Douglas Rasmussen, Nathaniel Branden, Harry Binswanger, Andrew Bernstein, and Craig Biddle. Philosopher David L. Norton identified himself as an "ethical individualist", and, like Rand, saw a harmony between an individual's fidelity to their own self-actualization, or "personal destiny", and the achievement of society's well-being.
[ { "paragraph_id": 0, "text": "In ethical philosophy, ethical egoism is the normative position that moral agents ought to act in their own self-interest. It differs from psychological egoism, which claims that people can only act in their self-interest. Ethical egoism also differs from rational egoism, which holds that it is rational to act in one's self-interest. Ethical egoism holds, therefore, that actions whose consequences will benefit the doer are ethical.", "title": "" }, { "paragraph_id": 1, "text": "Ethical egoism contrasts with ethical altruism, which holds that moral agents have an obligation to help others. Egoism and altruism both contrast with ethical utilitarianism, which holds that a moral agent should treat one's self (also known as the subject) with no higher regard than one has for others (as egoism does, by elevating self-interests and \"the self\" to a status not granted to others). But it also holds that one is not obligated to sacrifice one's own interests (as altruism does) to help others' interests, so long as one's own interests (i.e., one's own desires or well-being) are substantially equivalent to the others' interests and well-being, but they have the choice to do so. Egoism, utilitarianism, and altruism are all forms of consequentialism, but egoism and altruism contrast with utilitarianism, in that egoism and altruism are both agent-focused forms of consequentialism (i.e., subject-focused or subjective). However, utilitarianism is held to be agent-neutral (i.e., objective and impartial): it does not treat the subject's (i.e., the self's, i.e., the moral \"agent's\") own interests as being more or less important than the interests, desires, or well-being of others.", "title": "" }, { "paragraph_id": 2, "text": "Ethical egoism does not, however, require moral agents to harm the interests and well-being of others when making moral deliberation; e.g., what is in an agent's self-interest may be incidentally detrimental, beneficial, or neutral in its effect on others. Individualism allows for others' interest and well-being to be disregarded or not, as long as what is chosen is efficacious in satisfying the self-interest of the agent. Nor does ethical egoism necessarily entail that, in pursuing self-interest, one ought always to do what one wants to do; e.g., in the long term, the fulfillment of short-term desires may prove detrimental to the self. Fleeting pleasure, then, takes a back seat to protracted eudaimonia. In the words of James Rachels, \"Ethical egoism ... endorses selfishness, but it doesn't endorse foolishness.\"", "title": "" }, { "paragraph_id": 3, "text": "Ethical egoism is often used as the philosophical basis for support of right-libertarianism and individualist anarchism. These are political positions based partly on a belief that individuals should not coercively prevent others from exercising freedom of action.", "title": "" }, { "paragraph_id": 4, "text": "Ethical egoism can be broadly divided into three categories: individual, personal, and universal. 
An individual ethical egoist would hold that all people should do whatever benefits \"my\" (the individual's) self-interest; a personal ethical egoist would hold that they should act in their self-interest, but would make no claims about what anyone else ought to do; a universal ethical egoist would argue that everyone should act in ways that are in their self-interest.", "title": "Forms" }, { "paragraph_id": 5, "text": "Ethical egoism was introduced by the philosopher Henry Sidgwick in his book The Methods of Ethics, written in 1874. Sidgwick compared egoism to the philosophy of utilitarianism, writing that whereas utilitarianism sought to maximize overall pleasure, egoism focused only on maximizing individual pleasure.", "title": "History" }, { "paragraph_id": 6, "text": "Philosophers before Sidgwick have also retroactively been identified as ethical egoists. One ancient example is the philosophy of Yang Zhu (4th century BC), Yangism, who views wei wo, or \"everything for myself\", as the only virtue necessary for self-cultivation. Ancient Greek philosophers like Plato, Aristotle and the Stoics were exponents of virtue ethics, and \"did not accept the formal principle that whatever the good is, we should seek only our own good, or prefer it to the good of others.\" However, the beliefs of the Cyrenaics have been referred to as a \"form of egoistic hedonism\", and while some refer to Epicurus' hedonism as a form of virtue ethics, others argue his ethics are more properly described as ethical egoism.", "title": "History" }, { "paragraph_id": 7, "text": "Philosopher James Rachels, in an essay that takes as its title the theory's name, outlines the three arguments most commonly touted in its favor:", "title": "Justifications" }, { "paragraph_id": 8, "text": "It has been argued that extreme ethical egoism is self-defeating. Faced with a situation of limited resources, egoists would consume as much of the resource as they could, making the overall situation worse for everybody. Egoists may respond that if the situation becomes worse for everybody, that would include the egoist, so it is not, in fact, in their rational self-interest to take things to such extremes. However, the (unregulated) tragedy of the commons and the (one off) prisoner's dilemma are cases in which, on the one hand, it is rational for an individual to seek to take as much as possible even though that makes things worse for everybody, and on the other hand, those cases are not self-refuting since that behaviour remains rational even though it is ultimately self-defeating, i.e. self-defeating does not imply self-refuting. Egoists might respond that a tragedy of the commons, however, assumes some degree of public land. That is, a commons forbidding homesteading requires regulation. Thus, an argument against the tragedy of the commons, in this belief system, is fundamentally an argument for private property rights and the system that recognizes both property rights and rational self-interest—capitalism. More generally, egoists might say that an increasing respect for individual rights uniquely allows for increasing wealth creation and increasing usable resources despite a fixed amount of raw materials (e.g. the West pre-1776 versus post-1776, East versus West Germany, Hong Kong versus mainland China, North versus South Korea, etc.).", "title": "Criticism" }, { "paragraph_id": 9, "text": "It is not clear how to apply a private ownership model to many examples of \"commons\", however. 
Examples include large fisheries, the atmosphere and the ocean.", "title": "Criticism" }, { "paragraph_id": 10, "text": "Some perhaps decisive problems with ethical egoism have been pointed out.", "title": "Criticism" }, { "paragraph_id": 11, "text": "One is that an ethical egoist would not want ethical egoism to be universalized: as it would be in the egoist's best self-interest if others acted altruistically towards them, they wouldn't want them to act egoistically; however, that is what they consider to be morally binding. Their moral principles would demand of others not to follow them, which can be considered self-defeating and leads to the question: \"How can ethical egoism be considered morally binding if its advocates do not want it to be universally applied?\"", "title": "Criticism" }, { "paragraph_id": 12, "text": "Another objection (e.g. by James Rachels) states that the distinction ethical egoism makes between \"yourself\" and \"the rest\" – demanding to view the interests of \"yourself\" as more important – is arbitrary, as no justification for it can be offered; considering that the merits and desires of \"the rest\" are comparable to those of \"yourself\" while lacking a justifiable distinction, Rachels concludes that \"the rest\" should be given the same moral consideration as \"yourself\".", "title": "Criticism" }, { "paragraph_id": 13, "text": "The term ethical egoism has been applied retroactively to philosophers such as Bernard de Mandeville and to many other materialists of his generation, although none of them declared themselves to be egoists. Note that materialism does not necessarily imply egoism, as indicated by Karl Marx, and the many other materialists who espoused forms of collectivism. It has been argued that ethical egoism can lend itself to individualist anarchism such as that of Benjamin Tucker, or the combined anarcho-communism and egoism of Emma Goldman, both of whom were proponents of many egoist ideas put forward by Max Stirner. In this context, egoism is another way of describing the sense that the common good should be enjoyed by all. However, most notable anarchists in history have been less radical, retaining altruism and a sense of the importance of the individual that is appreciable but does not go as far as egoism. Recent trends to greater appreciation of egoism within anarchism tend to come from less classical directions such as post-left anarchy or Situationism (e.g. Raoul Vaneigem). Egoism has also been referenced by anarcho-capitalists, such as Murray Rothbard.", "title": "Notable proponents" }, { "paragraph_id": 14, "text": "Philosopher Max Stirner, in his book The Ego and Its Own, was the first philosopher to call himself an egoist, though his writing makes clear that he desired not a new idea of morality (ethical egoism), but rather a rejection of morality (amoralism), as a nonexistent and limiting \"spook\"; for this, Stirner has been described as the first individualist anarchist. Other philosophers, such as Thomas Hobbes and David Gauthier, have argued that the conflicts which arise when people each pursue their own ends can be resolved for the best of each individual only if they all voluntarily forgo some of their aims—that is, one's self-interest is often best pursued by allowing others to pursue their self-interest as well so that liberty is equal among individuals. 
Sacrificing one's short-term self-interest to maximize one's long-term self-interest is one form of \"rational self-interest\" which is the idea behind most philosophers' advocacy of ethical egoism. Egoists have also argued that one's actual interests are not immediately obvious, and that the pursuit of self-interest involves more than merely the acquisition of some good, but the maximizing of one's chances of survival and/or happiness.", "title": "Notable proponents" }, { "paragraph_id": 15, "text": "Philosopher Friedrich Nietzsche suggested that egoistic or \"life-affirming\" behavior stimulates jealousy or \"ressentiment\" in others, and that this is the psychological motive for the altruism in Christianity. Sociologist Helmut Schoeck similarly considered envy the motive of collective efforts by society to reduce the disproportionate gains of successful individuals through moral or legal constraints, with altruism being primary among these. In addition, Nietzsche (in Beyond Good and Evil) and Alasdair MacIntyre (in After Virtue) have pointed out that the ancient Greeks did not associate morality with altruism in the way that post-Christian Western civilization has done. Aristotle's view is that we have duties to ourselves as well as to other people (e.g. friends) and to the polis as a whole. The same is true for Thomas Aquinas, Christian Wolff and Immanuel Kant, who claim that there are duties to ourselves as Aristotle did, although it has been argued that, for Aristotle, the duty to one's self is primary.", "title": "Notable proponents" }, { "paragraph_id": 16, "text": "Ayn Rand argued that there is a positive harmony of interests among free, rational humans, such that no moral agent can rationally coerce another person consistently with their own long-term self-interest. Rand argued that other people are an enormous value to an individual's well-being (through education, trade and affection), but also that this value could be fully realized only under conditions of political and economic freedom. According to Rand, voluntary trade alone can assure that human interaction is mutually beneficial. Rand's student, Leonard Peikoff has argued that the identification of one's interests itself is impossible absent the use of principles, and that self-interest cannot be consistently pursued absent a consistent adherence to certain ethical principles. Recently, Rand's position has also been defended by such writers as Tara Smith, Tibor Machan, Allan Gotthelf, David Kelley, Douglas Rasmussen, Nathaniel Branden, Harry Binswanger, Andrew Bernstein, and Craig Biddle.", "title": "Notable proponents" }, { "paragraph_id": 17, "text": "Philosopher David L. Norton identified himself as an \"ethical individualist\", and, like Rand, saw a harmony between an individual's fidelity to their own self-actualization, or \"personal destiny\", and the achievement of society's well-being.", "title": "Notable proponents" } ]
In ethical philosophy, ethical egoism is the normative position that moral agents ought to act in their own self-interest. It differs from psychological egoism, which claims that people can only act in their self-interest. Ethical egoism also differs from rational egoism, which holds that it is rational to act in one's self-interest. Ethical egoism holds, therefore, that actions whose consequences will benefit the doer are ethical. Ethical egoism contrasts with ethical altruism, which holds that moral agents have an obligation to help others. Egoism and altruism both contrast with ethical utilitarianism, which holds that a moral agent should treat one's self with no higher regard than one has for others. But it also holds that one is not obligated to sacrifice one's own interests to help others' interests, so long as one's own interests are substantially equivalent to the others' interests and well-being, but they have the choice to do so. Egoism, utilitarianism, and altruism are all forms of consequentialism, but egoism and altruism contrast with utilitarianism, in that egoism and altruism are both agent-focused forms of consequentialism. However, utilitarianism is held to be agent-neutral: it does not treat the subject's own interests as being more or less important than the interests, desires, or well-being of others. Ethical egoism does not, however, require moral agents to harm the interests and well-being of others when making moral deliberation; e.g., what is in an agent's self-interest may be incidentally detrimental, beneficial, or neutral in its effect on others. Individualism allows for others' interest and well-being to be disregarded or not, as long as what is chosen is efficacious in satisfying the self-interest of the agent. Nor does ethical egoism necessarily entail that, in pursuing self-interest, one ought always to do what one wants to do; e.g., in the long term, the fulfillment of short-term desires may prove detrimental to the self. Fleeting pleasure, then, takes a back seat to protracted eudaimonia. In the words of James Rachels, "Ethical egoism ... endorses selfishness, but it doesn't endorse foolishness." Ethical egoism is often used as the philosophical basis for support of right-libertarianism and individualist anarchism. These are political positions based partly on a belief that individuals should not coercively prevent others from exercising freedom of action.
2001-03-02T22:32:41Z
2023-08-18T19:07:49Z
[ "Template:Cite web", "Template:Cite IEP", "Template:ISBN", "Template:Cite SEP", "Template:Philosophy topics", "Template:Cite book", "Template:Cite journal", "Template:Colbegin", "Template:Reflist", "Template:Citation", "Template:Short description", "Template:Individualism sidebar", "Template:For", "Template:Colend" ]
https://en.wikipedia.org/wiki/Ethical_egoism
9,236
Evolution
Evolution is the change in the heritable characteristics of biological populations over successive generations. Evolution occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation. The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment. In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow. All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today. Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science. Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the "brown-eye trait" from one of their parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype. The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. 
Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner. Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species. An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. One particular allele will become more or less prevalent relative to other forms of that gene. Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely. Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect. About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial. 
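The way point mutations generate new alleles at a locus can be made concrete with a short simulation. The following Python sketch is purely illustrative: the sequence is invented, and the per-base mutation rate is exaggerated by many orders of magnitude so that new alleles appear within a few simulated generations.

import random

BASES = "ACGT"

def point_mutate(sequence, rate, rng):
    # Each base independently mutates to one of the three other bases
    # with probability `rate` per copying event.
    result = []
    for base in sequence:
        if rng.random() < rate:
            result.append(rng.choice([b for b in BASES if b != base]))
        else:
            result.append(base)
    return "".join(result)

rng = random.Random(42)
ancestral = "ATGGCCATTGTAATGGGCCGC"  # invented sequence at a hypothetical locus

allele = ancestral
for generation in range(1, 6):
    allele = point_mutate(allele, rate=0.02, rng=rng)
    status = "new allele" if allele != ancestral else "unchanged"
    print(f"generation {generation}: {allele} ({status})")

Any run of the sketch shows the key point of the paragraph above: copying with occasional errors turns one ancestral sequence into a family of variant alleles.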
Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene. New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth. The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line. One example of a mutation's phenotypic effect can be seen in wild boar piglets. They are camouflage-coloured and show a characteristic pattern of dark and light longitudinal stripes. However, mutations in the melanocortin 1 receptor (MC1R) disrupt the pattern. The majority of pig breeds carry MC1R mutations disrupting wild-type colour and different mutations causing dominant black colouring. 
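The duplication-and-divergence and exon-shuffling routes to new genes described above lend themselves to a similar toy sketch. The domain names and the divergence probability below are invented for illustration only.

import itertools
import random

rng = random.Random(1)

# Hypothetical protein domains treated as modules with independent functions.
domains = ["DNA-binding", "catalytic", "membrane-anchor", "regulatory"]

# Exon shuffling, viewed abstractly: recombining existing domains yields
# candidate genes with new combinations of functions.
for combo in itertools.combinations(domains, 2):
    print("candidate gene:", " + ".join(combo))

# Duplication and divergence: one copy keeps the original function while
# the redundant duplicate is free to accumulate changes (a redrawn base
# may occasionally match the original, which is harmless for the toy).
original = "ATGGTTCCAGAA"
duplicate = "".join(
    rng.choice("ACGT") if rng.random() < 0.15 else base for base in original
)
print("original copy :", original)
print("diverged copy :", duplicate)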
Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial. Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses. Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacteria acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains. Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea. Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlay some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis. From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias. Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. 
It embodies three principles: variation exists within populations of organisms with respect to morphology, physiology, and behaviour; different traits confer different rates of survival and reproduction (differential fitness); and these traits can be passed from generation to generation (heritability of fitness). More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking. The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness. If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be "selected for." Examples of traits that can increase fitness are enhanced survival and increased fecundity. Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele likely becoming rarer—it is "selected against." Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. However, a re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed perhaps for hundreds of generations, can lead to the recurrence of traits thought to be lost, such as hind legs in dolphins, teeth in chickens, wings in wingless stick insects, and tails and additional nipples in humans. "Throwbacks" such as these are known as atavisms. Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height. 
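How differential fitness translates into changing allele frequencies can be sketched with the standard one-locus haploid selection recursion, p' = p*wA / (p*wA + (1 - p)*wa). The 5% fitness advantage, starting frequency, and generation count below are illustrative assumptions, not measured values.

def select(p, w_advantaged, w_other):
    # One generation of selection at a haploid locus with two alleles:
    # the new frequency is the allele's share of fitness-weighted offspring.
    mean_fitness = p * w_advantaged + (1 - p) * w_other
    return p * w_advantaged / mean_fitness

# An allele with an illustrative 5% fitness advantage, starting rare.
p = 0.01
for generation in range(301):
    if generation % 50 == 0:
        print(f"generation {generation:3d}: frequency {p:.3f}")
    p = select(p, w_advantaged=1.05, w_other=1.00)

Run over a few hundred generations, the frequency traces the familiar S-shaped curve: the favoured allele spreads slowly while rare, rapidly at intermediate frequencies, and approaches fixation at the end.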
"Nature" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: "Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system...." Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection. Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. Selection at a level above the individual, such as group selection, may allow the evolution of cooperation. Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles. According to the neutral theory of molecular evolution most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities. The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. The number of individuals in a population is not critical, but instead a measure known as the effective population size. 
The number of individuals in a population is not the critical quantity, however; what matters is a measure known as the effective population size. The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population. It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research. Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias. Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution. Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature. For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size. However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation. Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original "pressures" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates. Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation. Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes close together on a chromosome may not always be shuffled away from each other, and genes that are close together tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. 
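Linkage disequilibrium can be computed directly from haplotype counts as D = p(AB) - p(A)p(B), often reported alongside the normalised statistic r-squared. The haplotype counts below are invented purely to show the arithmetic.

from collections import Counter

# Hypothetical sample of two-locus haplotypes (alleles A/a and B/b).
haplotypes = ["AB"] * 45 + ["Ab"] * 5 + ["aB"] * 5 + ["ab"] * 45

counts = Counter(haplotypes)
total = sum(counts.values())

p_A = (counts["AB"] + counts["Ab"]) / total   # frequency of allele A
p_B = (counts["AB"] + counts["aB"]) / total   # frequency of allele B
p_AB = counts["AB"] / total                   # frequency of the AB haplotype

# D measures how far the observed haplotype frequency departs from the
# frequency expected if the two loci assorted independently.
D = p_AB - p_A * p_B
r_squared = D**2 / (p_A * (1 - p_A) * p_B * (1 - p_B))

print(f"D = {D:.3f}, r^2 = {r_squared:.3f}")  # D = 0 would mean no disequilibrium

With these invented counts, D = 0.2 and r-squared = 0.64: the A and B alleles co-occur far more often than independent assortment would predict, the signature of tight linkage.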
A set of alleles that is usually inherited in a group is called a haplotype. This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size. A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits. Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction; whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction. A common misconception is that evolution has goals, long-term plans, or an innate tendency for "progress", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity. 
Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.
Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival; an example is the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. Theodosius Dobzhansky captured this distinction by defining adaptation as the evolutionary process whereby an organism becomes better able to live in its habitat, and an adaptive trait as an aspect of the developmental pattern of an organism that enables or enhances its probability of surviving and reproducing.
Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance either by modifying the target of the drug or by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).
Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.
During evolution, some structures may lose their original function and become vestigial structures. Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species.
Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx, the vermiform appendix, and behavioural vestiges such as goose bumps and primitive reflexes.
However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree, an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.
An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals but instead form part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.
Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.
Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved. For instance, an extreme cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship, as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.
Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.
Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on; the condition under which this occurs is conventionally summarised by Hamilton's rule, illustrated in the sketch at the end of this passage. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.
Speciation is the process where a species diverges into two or more descendant species. There are multiple ways to define the concept of "species". The choice of definition depends on the particularities of the species concerned. For example, some species concepts apply more readily to sexually reproducing organisms while others lend themselves better to asexual organisms. Despite this diversity, species concepts can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that "species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups." Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.
Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations. Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.
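Hamilton's rule, referenced above, is the standard population-genetics condition for kin selection (it is not spelled out in this article): an allele for helping behaviour can spread when r × B > C, where r is the genetic relatedness between helper and recipient, B the reproductive benefit to the recipient, and C the cost to the helper. A minimal sketch in Python, with the benefit and cost values invented purely for illustration:

    # Hamilton's rule: helping is favoured by kin selection when
    # r * B > C (r = relatedness, B = benefit to the recipient's
    # reproduction, C = cost to the helper's own reproduction).
    def helping_favoured(r, benefit, cost):
        return r * benefit > cost

    # Full siblings share r = 0.5 on average; first cousins r = 0.125.
    print(helping_favoured(r=0.5, benefit=3.0, cost=1.0))    # True  (1.5 > 1)
    print(helping_favoured(r=0.125, benefit=3.0, cost=1.0))  # False (0.375 < 1)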
Speciation has been observed multiple times under both controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.
The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.
The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.
Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population. Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating, to allow reproductive isolation to evolve.
One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica.
This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.
Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short "bursts" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution live in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.
Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction.
The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1,000 times greater than the background rate, and up to 30% of current species may be extinct by the mid-21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future. Although more than 99% of all species that ever lived on Earth are estimated to be extinct, about 1 trillion species are thought to inhabit the Earth currently, with only one-thousandth of 1% described.
The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous "low-level" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.
Concepts and models used in evolutionary biology, such as natural selection, have many applications. Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals.
More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.
Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.
Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.
In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems. Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems; a minimal genetic-algorithm sketch follows at the end of this passage.
The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era, after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland, as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: "If life arose relatively quickly on Earth, then it could be common in the universe." In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
More than 99% of all species, amounting to over five billion species, that ever lived on Earth are estimated to be extinct. Estimates of the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.
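The genetic-algorithm sketch promised above (in Python; the fitness function, population size, generation count and mutation rate are arbitrary illustrative choices, not drawn from any work cited here) evolves a bit string toward an all-ones target through repeated selection, crossover and mutation:

    import random

    random.seed(1)
    GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 60, 0.01

    def fitness(genome):
        # Toy objective: count of 1-bits (the "ones-max" problem).
        return sum(genome)

    def mutate(genome):
        # Flip each bit independently with a small probability.
        return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

    def crossover(a, b):
        # Single-point recombination of two parent genomes.
        point = random.randrange(1, GENOME_LEN)
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for generation in range(GENERATIONS):
        # Truncation selection: the fitter half of the population breed.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        population = [mutate(crossover(*random.sample(parents, 2)))
                      for _ in range(POP_SIZE)]
    print(max(fitness(g) for g in population))  # approaches GENOME_LEN

Truncation selection is used here only for brevity; practical genetic algorithms more often use fitness-proportionate or tournament selection, but the selection-variation loop is the same.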
Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.
All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.
Due to horizontal gene transfer, this "tree of life" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the "Coral of life" as a metaphor or a mathematical model to illustrate the evolution of life. This view dates back to an idea briefly mentioned by Darwin but later abandoned.
Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.
More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes, and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed (a toy version of such a clock calculation is sketched at the end of this passage).
Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.
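The molecular-clock calculation promised above reduces to simple arithmetic under the assumption of a constant clock: the divergence time of two lineages is their sequence difference divided by twice the per-lineage substitution rate, since changes accumulate independently along both branches. In the Python sketch below, the divergence and rate figures are invented round numbers for illustration, not measurements from this article:

    # Toy molecular-clock estimate: T = K / (2 * r), where K is the
    # fraction of sites that differ between two species and r is the
    # substitution rate per site per year along each lineage.
    def divergence_time(K, rate_per_site_per_year):
        return K / (2 * rate_per_site_per_year)

    # Hypothetical numbers: 1.2% sequence divergence and a rate of
    # 1e-9 substitutions per site per year give ~6 million years.
    print(f"{divergence_time(0.012, 1e-9):.2e} years")  # 6.00e+06 years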
The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago, when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single-celled organism to one of many cells.
Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct. Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.
About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from "reptile"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.
The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).
In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms, and specifically gave examples of how new types of living things could come to be.
A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from "the world of the monkeys", in a process by which "species become more numerous".
The "New Science" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences, which remained the last bastion of the concept of fixed natural types.
John Ray applied one of the previously more general terms for fixed natural types, "species", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation.
The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan. Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or "filament").
The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's "transmutation" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.
The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Russel Wallace in terms of variable populations. Darwin used the expression "descent with modification" rather than "evolution".
Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a "struggle for existence" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism.
Darwin developed his theory of "natural selection" from 1838 onwards and was writing up his "big book" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his "abstract" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry.
Some were disturbed by this since it implied that humans did not have a special place in the universe. The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis.
In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and that, when expressed, they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory, which led to a temporary rift between the biometricians, who defended Darwinian gradual evolution, and the Mendelians, who allied with de Vries.
In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolution on a robust statistical basis. The apparent contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.
In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained patterns observed across species in populations, as well as fossil transitions in palaeontology. Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.
The publication of the structure of DNA by James Watson and Francis Crick, with the contribution of Rosalind Franklin, in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees. In 1973, evolutionary biologist Theodosius Dobzhansky wrote that "nothing in biology makes sense except in the light of evolution", because it has brought to light the relations of what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.
One extension, known as evolutionary developmental biology and informally called "evo-devo", emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development).
Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.
In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.
While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes, and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.
The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.
[ { "paragraph_id": 0, "text": "Evolution is the change in the heritable characteristics of biological populations over successive generations. Evolution occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation.", "title": "" }, { "paragraph_id": 1, "text": "The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction (differential fitness); and (4) traits can be passed from generation to generation (heritability of fitness). In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment.", "title": "" }, { "paragraph_id": 2, "text": "In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow.", "title": "" }, { "paragraph_id": 3, "text": "All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today.", "title": "" }, { "paragraph_id": 4, "text": "Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.", "title": "" }, { "paragraph_id": 5, "text": "Evolution in organisms occurs through changes in heritable characteristics—the inherited characteristics of an organism. In humans, for example, eye colour is an inherited characteristic and an individual might inherit the \"brown-eye trait\" from one of their parents. 
Inherited traits are controlled by genes and the complete set of genes within an organism's genome (genetic material) is called its genotype.", "title": "Heredity" }, { "paragraph_id": 6, "text": "The complete set of observable traits that make up the structure and behaviour of an organism is called its phenotype. Some of these traits come from the interaction of its genotype with the environment while others are neutral. Some observable characteristics are not inherited. For example, suntanned skin comes from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. The phenotype is the ability of the skin to tan when exposed to sunlight. However, some people tan more easily than others, due to differences in genotypic variation; a striking example are people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn.", "title": "Heredity" }, { "paragraph_id": 7, "text": "Heritable characteristics are passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long biopolymer composed of four types of bases. The sequence of bases along a particular DNA molecule specifies the genetic information, in a manner similar to a sequence of letters spelling out a sentence. Before a cell divides, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. Portions of a DNA molecule that specify a single functional unit are called genes; different genes have different sequences of bases. Within cells, each long strand of DNA is called a chromosome. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are influenced by multiple genes in a quantitative or epistatic manner.", "title": "Heredity" }, { "paragraph_id": 8, "text": "Evolution can occur if there is genetic variation within a population. Variation comes from mutations in the genome, reshuffling of genes through sexual reproduction and migration between populations (gene flow). Despite the constant introduction of new variation through mutation and gene flow, most of the genome of a species is very similar among all individuals of that species. However, discoveries in the field of evolutionary developmental biology have demonstrated that even relatively small differences in genotype can lead to dramatic differences in phenotype both within and between species.", "title": "Sources of variation" }, { "paragraph_id": 9, "text": "An individual organism's phenotype results from both its genotype and the influence of the environment it has lived in. The modern evolutionary synthesis defines evolution as the change over time in this genetic variation. The frequency of one particular allele will become more or less prevalent relative to other forms of that gene. 
Variation disappears when a new allele reaches the point of fixation—when it either disappears from the population or replaces the ancestral allele entirely.", "title": "Sources of variation" }, { "paragraph_id": 10, "text": "Mutations are changes in the DNA sequence of a cell's genome and are the ultimate source of genetic variation in all organisms. When mutations occur, they may alter the product of a gene, or prevent the gene from functioning, or have no effect.", "title": "Sources of variation" }, { "paragraph_id": 11, "text": "About half of the mutations in the coding regions of protein-coding genes are deleterious — the other half are neutral. A small percentage of the total mutations in this region confer a fitness benefit. Some of the mutations in other parts of the genome are deleterious but the vast majority are neutral. A few are beneficial.", "title": "Sources of variation" }, { "paragraph_id": 12, "text": "Mutations can involve large sections of a chromosome becoming duplicated (usually by genetic recombination), which can introduce extra copies of a gene into a genome. Extra copies of genes are a major source of the raw material needed for new genes to evolve. This is important because most new genes evolve within gene families from pre-existing genes that share common ancestors. For example, the human eye uses four genes to make structures that sense light: three for colour vision and one for night vision; all four are descended from a single ancestral gene.", "title": "Sources of variation" }, { "paragraph_id": 13, "text": "New genes can be generated from an ancestral gene when a duplicate copy mutates and acquires a new function. This process is easier once a gene has been duplicated because it increases the redundancy of the system; one gene in the pair can acquire a new function while the other copy continues to perform its original function. Other types of mutations can even generate entirely new genes from previously noncoding DNA, a phenomenon termed de novo gene birth.", "title": "Sources of variation" }, { "paragraph_id": 14, "text": "The generation of new genes can also involve small parts of several genes being duplicated, with these fragments then recombining to form new combinations with new functions (exon shuffling). When new genes are assembled from shuffling pre-existing parts, domains act as modules with simple independent functions, which can be mixed together to produce new combinations with new and complex functions. For example, polyketide synthases are large enzymes that make antibiotics; they contain up to 100 independent domains that each catalyse one step in the overall process, like a step in an assembly line.", "title": "Sources of variation" }, { "paragraph_id": 15, "text": "One example of mutation is wild boar piglets. They are camouflage coloured and show a characteristic pattern of dark and light longitudinal stripes. However, mutations in the melanocortin 1 receptor (MC1R) disrupt the pattern. The majority of pig breeds carry MC1R mutations disrupting wild-type colour and different mutations causing dominant black colouring.", "title": "Sources of variation" }, { "paragraph_id": 16, "text": "In asexual organisms, genes are inherited together, or linked, as they cannot mix with genes of other organisms during reproduction. In contrast, the offspring of sexual organisms contain random mixtures of their parents' chromosomes that are produced through independent assortment. 
In a related process called homologous recombination, sexual organisms exchange DNA between two matching chromosomes. Recombination and reassortment do not alter allele frequencies, but instead change which alleles are associated with each other, producing offspring with new combinations of alleles. Sex usually increases genetic variation and may increase the rate of evolution.", "title": "Sources of variation" }, { "paragraph_id": 17, "text": "The two-fold cost of sex was first described by John Maynard Smith. The first cost is that in sexually dimorphic species only one of the two sexes can bear young. This cost does not apply to hermaphroditic species, like most plants and many invertebrates. The second cost is that any individual who reproduces sexually can only pass on 50% of its genes to any individual offspring, with even less passed on as each new generation passes. Yet sexual reproduction is the more common means of reproduction among eukaryotes and multicellular organisms. The Red Queen hypothesis has been used to explain the significance of sexual reproduction as a means to enable continual evolution and adaptation in response to coevolution with other species in an ever-changing environment. Another hypothesis is that sexual reproduction is primarily an adaptation for promoting accurate recombinational repair of damage in germline DNA, and that increased diversity is a byproduct of this process that may sometimes be adaptively beneficial.", "title": "Sources of variation" }, { "paragraph_id": 18, "text": "Gene flow is the exchange of genes between populations and between species. It can therefore be a source of variation that is new to a population or to a species. Gene flow can be caused by the movement of individuals between separate populations of organisms, as might be caused by the movement of mice between inland and coastal populations, or the movement of pollen between heavy-metal-tolerant and heavy-metal-sensitive populations of grasses.", "title": "Sources of variation" }, { "paragraph_id": 19, "text": "Gene transfer between species includes the formation of hybrid organisms and horizontal gene transfer. Horizontal gene transfer is the transfer of genetic material from one organism to another organism that is not its offspring; this is most common among bacteria. In medicine, this contributes to the spread of antibiotic resistance, as when one bacteria acquires resistance genes it can rapidly transfer them to other species. Horizontal transfer of genes from bacteria to eukaryotes such as the yeast Saccharomyces cerevisiae and the adzuki bean weevil Callosobruchus chinensis has occurred. An example of larger-scale transfers are the eukaryotic bdelloid rotifers, which have received a range of genes from bacteria, fungi and plants. Viruses can also carry DNA between organisms, allowing transfer of genes even across biological domains.", "title": "Sources of variation" }, { "paragraph_id": 20, "text": "Large-scale gene transfer has also occurred between the ancestors of eukaryotic cells and bacteria, during the acquisition of chloroplasts and mitochondria. It is possible that eukaryotes themselves originated from horizontal gene transfers between bacteria and archaea.", "title": "Sources of variation" }, { "paragraph_id": 21, "text": "Some heritable changes cannot be explained by changes to the sequence of nucleotides in the DNA. These phenomena are classed as epigenetic inheritance systems. 
DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference and the three-dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlay some of the mechanics in developmental plasticity and canalisation. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effects that modify and feed back into the selection regime of subsequent generations. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits and symbiogenesis.", "title": "Sources of variation" }, { "paragraph_id": 22, "text": "From a neo-Darwinian perspective, evolution occurs when there are changes in the frequencies of alleles within a population of interbreeding organisms, for example, the allele for black colour in a population of moths becoming more common. Mechanisms that can lead to changes in allele frequencies include natural selection, genetic drift, and mutation bias.", "title": "Evolutionary forces" }, { "paragraph_id": 23, "text": "Evolution by natural selection is the process by which traits that enhance survival and reproduction become more common in successive generations of a population. It embodies three principles:", "title": "Evolutionary forces" }, { "paragraph_id": 24, "text": "More offspring are produced than can possibly survive, and these conditions produce competition between organisms for survival and reproduction. Consequently, organisms with traits that give them an advantage over their competitors are more likely to pass on their traits to the next generation than those with traits that do not confer an advantage. This teleonomy is the quality whereby the process of natural selection creates and preserves traits that are seemingly fitted for the functional roles they perform. Consequences of selection include nonrandom mating and genetic hitchhiking.", "title": "Evolutionary forces" }, { "paragraph_id": 25, "text": "The central concept of natural selection is the evolutionary fitness of an organism. Fitness is measured by an organism's ability to survive and reproduce, which determines the size of its genetic contribution to the next generation. However, fitness is not the same as the total number of offspring: instead fitness is indicated by the proportion of subsequent generations that carry an organism's genes. For example, if an organism could survive well and reproduce rapidly, but its offspring were all too small and weak to survive, this organism would make little genetic contribution to future generations and would thus have low fitness.", "title": "Evolutionary forces" }, { "paragraph_id": 26, "text": "If an allele increases fitness more than the other alleles of that gene, then with each generation this allele has a higher probability of becoming common within the population. These traits are said to be \"selected for.\" Examples of traits that can increase fitness are enhanced survival and increased fecundity. 
Conversely, the lower fitness caused by having a less beneficial or deleterious allele results in this allele likely becoming rarer—they are \"selected against.\"", "title": "Evolutionary forces" }, { "paragraph_id": 27, "text": "Importantly, the fitness of an allele is not a fixed characteristic; if the environment changes, previously neutral or harmful traits may become beneficial and previously beneficial traits become harmful. However, even if the direction of selection does reverse in this way, traits that were lost in the past may not re-evolve in an identical form. However, a re-activation of dormant genes, as long as they have not been eliminated from the genome and were only suppressed perhaps for hundreds of generations, can lead to the re-occurrence of traits thought to be lost like hindlegs in dolphins, teeth in chickens, wings in wingless stick insects, tails and additional nipples in humans etc. \"Throwbacks\" such as these are known as atavisms.", "title": "Evolutionary forces" }, { "paragraph_id": 28, "text": "Natural selection within a population for a trait that can vary across a range of values, such as height, can be categorised into three different types. The first is directional selection, which is a shift in the average value of a trait over time—for example, organisms slowly getting taller. Secondly, disruptive selection is selection for extreme trait values and often results in two different values becoming most common, with selection against the average value. This would be when either short or tall organisms had an advantage, but not those of medium height. Finally, in stabilising selection there is selection against extreme trait values on both ends, which causes a decrease in variance around the average value and less diversity. This would, for example, cause organisms to eventually have a similar height.", "title": "Evolutionary forces" }, { "paragraph_id": 29, "text": "Natural selection most generally makes nature the measure against which individuals and individual traits, are more or less likely to survive. \"Nature\" in this sense refers to an ecosystem, that is, a system in which organisms interact with every other element, physical as well as biological, in their local environment. Eugene Odum, a founder of ecology, defined an ecosystem as: \"Any unit that includes all of the organisms...in a given area interacting with the physical environment so that a flow of energy leads to clearly defined trophic structure, biotic diversity, and material cycles (i.e., exchange of materials between living and nonliving parts) within the system....\" Each population within an ecosystem occupies a distinct niche, or position, with distinct relationships to other parts of the system. These relationships involve the life history of the organism, its position in the food chain and its geographic range. This broad understanding of nature enables scientists to delineate specific forces which, together, comprise natural selection.", "title": "Evolutionary forces" }, { "paragraph_id": 30, "text": "Natural selection can act at different levels of organisation, such as genes, cells, individual organisms, groups of organisms and species. Selection can act at multiple levels simultaneously. An example of selection occurring below the level of the individual organism are genes called transposons, which can replicate and spread throughout a genome. 
Selection at a level above the individual, such as group selection, may allow the evolution of cooperation.", "title": "Evolutionary forces" }, { "paragraph_id": 31, "text": "Genetic drift is the random fluctuation of allele frequencies within a population from one generation to the next. When selective forces are absent or relatively weak, allele frequencies are equally likely to drift upward or downward in each successive generation because the alleles are subject to sampling error. This drift halts when an allele eventually becomes fixed, either by disappearing from the population or by replacing the other alleles entirely. Genetic drift may therefore eliminate some alleles from a population due to chance alone. Even in the absence of selective forces, genetic drift can cause two separate populations that begin with the same genetic structure to drift apart into two divergent populations with different sets of alleles.", "title": "Evolutionary forces" }, { "paragraph_id": 32, "text": "According to the neutral theory of molecular evolution, most evolutionary changes are the result of the fixation of neutral mutations by genetic drift. In this model, most genetic changes in a population are thus the result of constant mutation pressure and genetic drift. This form of the neutral theory has been debated since it does not seem to fit some genetic variation seen in nature. A better-supported version of this model is the nearly neutral theory, according to which a mutation that would be effectively neutral in a small population is not necessarily neutral in a large population. Other theories propose that genetic drift is dwarfed by other stochastic forces in evolution, such as genetic hitchhiking, also known as genetic draft. Another concept is constructive neutral evolution (CNE), which explains that complex systems can emerge and spread into a population through neutral transitions due to the principles of excess capacity, presuppression, and ratcheting, and it has been applied in areas ranging from the origins of the spliceosome to the complex interdependence of microbial communities.", "title": "Evolutionary forces" }, { "paragraph_id": 33, "text": "The time it takes a neutral allele to become fixed by genetic drift depends on population size; fixation is more rapid in smaller populations. What matters is not the total number of individuals in a population but a measure known as the effective population size. The effective population size is usually smaller than the total population since it takes into account factors such as the level of inbreeding and the stage of the lifecycle in which the population is the smallest. The effective population size may not be the same for every gene in the same population.", "title": "Evolutionary forces" }, { "paragraph_id": 34, "text": "It is usually difficult to measure the relative importance of selection and neutral processes, including drift. The comparative importance of adaptive and non-adaptive forces in driving evolutionary change is an area of current research.", "title": "Evolutionary forces" }, { "paragraph_id": 35, "text": "Mutation bias is usually conceived as a difference in expected rates for two different kinds of mutation, e.g., transition-transversion bias, GC-AT bias, deletion-insertion bias. This is related to the idea of developmental bias.
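To see what mutation pressure can do on its own, consider a minimal two-state sketch (not from the article; the rates and function name are illustrative assumptions): each site is either GC or AT, AT sites mutate to GC at per-generation rate u, and GC sites mutate back at rate v. With no selection at all, the expected GC content settles at the equilibrium u / (u + v), which is the kind of compositional effect at issue here.

    def gc_content(u, v, g=0.5, generations=10_000):
        # Iterate the expected GC fraction g under mutation pressure alone:
        # AT sites gain GC at rate u, GC sites revert to AT at rate v.
        for _ in range(generations):
            g = g + u * (1 - g) - v * g
        return g

    # With v three times larger than u, composition settles at
    # u / (u + v) = 0.25, i.e. an AT-rich genome, without any selection.
    print(gc_content(u=1e-3, v=3e-3))

The historical question, taken up next, is whether such a weak pressure can still matter once selection opposes it.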
Haldane and Fisher argued that, because mutation is a weak pressure easily overcome by selection, tendencies of mutation would be ineffectual except under conditions of neutral evolution or extraordinarily high mutation rates. This opposing-pressures argument was long used to dismiss the possibility of internal tendencies in evolution, until the molecular era prompted renewed interest in neutral evolution.", "title": "Evolutionary forces" }, { "paragraph_id": 36, "text": "Noboru Sueoka and Ernst Freese proposed that systematic biases in mutation might be responsible for systematic differences in genomic GC composition between species. The identification of a GC-biased E. coli mutator strain in 1967, along with the proposal of the neutral theory, established the plausibility of mutational explanations for molecular patterns, which are now common in the molecular evolution literature.", "title": "Evolutionary forces" }, { "paragraph_id": 37, "text": "For instance, mutation biases are frequently invoked in models of codon usage. Such models also include effects of selection, following the mutation-selection-drift model, which allows both for mutation biases and differential selection based on effects on translation. Hypotheses of mutation bias have played an important role in the development of thinking about the evolution of genome composition, including isochores. Different insertion vs. deletion biases in different taxa can lead to the evolution of different genome sizes. The hypothesis of Lynch regarding genome size relies on mutational biases toward increase or decrease in genome size.", "title": "Evolutionary forces" }, { "paragraph_id": 38, "text": "However, mutational hypotheses for the evolution of composition suffered a reduction in scope when it was discovered that (1) GC-biased gene conversion makes an important contribution to composition in diploid organisms such as mammals and (2) bacterial genomes frequently have AT-biased mutation.", "title": "Evolutionary forces" }, { "paragraph_id": 39, "text": "Contemporary thinking about the role of mutation biases reflects a different theory from that of Haldane and Fisher. More recent work showed that the original \"pressures\" theory assumes that evolution is based on standing variation: when evolution depends on events of mutation that introduce new alleles, mutational and developmental biases in the introduction of variation (arrival biases) can impose biases on evolution without requiring neutral evolution or high mutation rates. Several studies report that the mutations implicated in adaptation reflect common mutation biases, though others dispute this interpretation.", "title": "Evolutionary forces" }, { "paragraph_id": 40, "text": "Recombination allows alleles on the same strand of DNA to become separated. However, the rate of recombination is low (approximately two events per chromosome per generation). As a result, genes that are close together on a chromosome are not always shuffled away from each other and tend to be inherited together, a phenomenon known as linkage. This tendency is measured by finding how often two alleles occur together on a single chromosome compared to expectations, which is called their linkage disequilibrium. A set of alleles that is usually inherited in a group is called a haplotype.
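Linkage disequilibrium has a simple arithmetic core, sketched below with made-up frequencies (the function name and numbers are illustrative, not from the article): D is the observed frequency of a two-allele haplotype minus the frequency expected if the alleles assorted independently.

    def linkage_disequilibrium(p_ab, p_a, p_b):
        # D = observed haplotype frequency minus the product of the
        # individual allele frequencies (the independence expectation).
        return p_ab - p_a * p_b

    # Alleles A and B each occur at frequency 0.5, so independence
    # predicts the AB haplotype 25% of the time; observing it 40% of
    # the time gives D = 0.15, meaning A and B travel together.
    print(linkage_disequilibrium(p_ab=0.40, p_a=0.5, p_b=0.5))

A nonzero D is exactly the tendency for a haplotype to be passed on as a block.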
This can be important when one allele in a particular haplotype is strongly beneficial: natural selection can drive a selective sweep that will also cause the other alleles in the haplotype to become more common in the population; this effect is called genetic hitchhiking or genetic draft. Genetic draft caused by the fact that some neutral genes are genetically linked to others that are under selection can be partially captured by an appropriate effective population size.", "title": "Evolutionary forces" }, { "paragraph_id": 41, "text": "A special case of natural selection is sexual selection, which is selection for any trait that increases mating success by increasing the attractiveness of an organism to potential mates. Traits that evolved through sexual selection are particularly prominent among males of several animal species. Although sexually favoured, traits such as cumbersome antlers, mating calls, large body size and bright colours often attract predation, which compromises the survival of individual males. This survival disadvantage is balanced by higher reproductive success in males that show these hard-to-fake, sexually selected traits.", "title": "Evolutionary forces" }, { "paragraph_id": 42, "text": "Evolution influences every aspect of the form and behaviour of organisms. Most prominent are the specific behavioural and physical adaptations that are the outcome of natural selection. These adaptations increase fitness by aiding activities such as finding food, avoiding predators or attracting mates. Organisms can also respond to selection by cooperating with each other, usually by aiding their relatives or engaging in mutually beneficial symbiosis. In the longer term, evolution produces new species through splitting ancestral populations of organisms into new groups that cannot or will not interbreed. These outcomes of evolution are distinguished based on time scale as macroevolution versus microevolution. Macroevolution refers to evolution that occurs at or above the level of species, in particular speciation and extinction; whereas microevolution refers to smaller evolutionary changes within a species or population, in particular shifts in allele frequency and adaptation. Macroevolution is the outcome of long periods of microevolution. Thus, the distinction between micro- and macroevolution is not a fundamental one—the difference is simply the time involved. However, in macroevolution, the traits of the entire species may be important. For instance, a large amount of variation among individuals allows a species to rapidly adapt to new habitats, lessening the chance of it going extinct, while a wide geographic range increases the chance of speciation, by making it more likely that part of the population will become isolated. In this sense, microevolution and macroevolution might involve selection at different levels—with microevolution acting on genes and organisms, versus macroevolutionary processes such as species selection acting on entire species and affecting their rates of speciation and extinction.", "title": "Natural outcomes" }, { "paragraph_id": 43, "text": "A common misconception is that evolution has goals, long-term plans, or an innate tendency for \"progress\", as expressed in beliefs such as orthogenesis and evolutionism; realistically, however, evolution has no long-term goal and does not necessarily produce greater complexity.
Although complex species have evolved, they occur as a side effect of the overall number of organisms increasing, and simple forms of life still remain more common in the biosphere. For example, the overwhelming majority of species are microscopic prokaryotes, which form about half the world's biomass despite their small size, and constitute the vast majority of Earth's biodiversity. Simple organisms have therefore been the dominant form of life on Earth throughout its history and continue to be the main form of life up to the present day, with complex life only appearing more diverse because it is more noticeable. Indeed, the evolution of microorganisms is particularly important to evolutionary research, since their rapid reproduction allows the study of experimental evolution and the observation of evolution and adaptation in real time.", "title": "Natural outcomes" }, { "paragraph_id": 44, "text": "Adaptation is the process that makes organisms better suited to their habitat. The term adaptation may also refer to a trait that is important for an organism's survival, for example the adaptation of horses' teeth to the grinding of grass. By using the term adaptation for the evolutionary process and adaptive trait for the product (the bodily part or function), the two senses of the word may be distinguished. Adaptations are produced by natural selection. The following definitions are due to Theodosius Dobzhansky:", "title": "Natural outcomes" }, { "paragraph_id": 45, "text": "Adaptation may cause either the gain of a new feature, or the loss of an ancestral feature. An example that shows both types of change is bacterial adaptation to antibiotic selection, with genetic changes causing antibiotic resistance by modifying the target of the drug or by increasing the activity of transporters that pump the drug out of the cell. Other striking examples are the bacteria Escherichia coli evolving the ability to use citric acid as a nutrient in a long-term laboratory experiment, Flavobacterium evolving a novel enzyme that allows these bacteria to grow on the by-products of nylon manufacturing, and the soil bacterium Sphingobium evolving an entirely new metabolic pathway that degrades the synthetic pesticide pentachlorophenol. An interesting but still controversial idea is that some adaptations might increase the ability of organisms to generate genetic diversity and adapt by natural selection (increasing organisms' evolvability).", "title": "Natural outcomes" }, { "paragraph_id": 46, "text": "Adaptation occurs through the gradual modification of existing structures. Consequently, structures with similar internal organisation may have different functions in related organisms. This is the result of a single ancestral structure being adapted to function in different ways. The bones within bat wings, for example, are very similar to those in mice feet and primate hands, due to the descent of all these structures from a common mammalian ancestor. However, since all living organisms are related to some extent, even organs that appear to have little or no structural similarity, such as arthropod, squid and vertebrate eyes, or the limbs and wings of arthropods and vertebrates, can depend on a common set of homologous genes that control their assembly and function; this is called deep homology.", "title": "Natural outcomes" }, { "paragraph_id": 47, "text": "During evolution, some structures may lose their original function and become vestigial structures.
Such structures may have little or no function in a current species, yet have a clear function in ancestral species, or other closely related species. Examples include pseudogenes, the non-functional remains of eyes in blind cave-dwelling fish, wings in flightless birds, the presence of hip bones in whales and snakes, and sexual traits in organisms that reproduce via asexual reproduction. Examples of vestigial structures in humans include wisdom teeth, the coccyx and the vermiform appendix, as well as behavioural vestiges such as goose bumps and primitive reflexes.", "title": "Natural outcomes" }, { "paragraph_id": 48, "text": "However, many traits that appear to be simple adaptations are in fact exaptations: structures originally adapted for one function, but which coincidentally became somewhat useful for some other function in the process. One example is the African lizard Holaspis guentheri, which developed an extremely flat head for hiding in crevices, as can be seen by looking at its near relatives. However, in this species, the head has become so flattened that it assists in gliding from tree to tree—an exaptation. Within cells, molecular machines such as the bacterial flagella and protein sorting machinery evolved by the recruitment of several pre-existing proteins that previously had different functions. Another example is the recruitment of enzymes from glycolysis and xenobiotic metabolism to serve as structural proteins called crystallins within the lenses of organisms' eyes.", "title": "Natural outcomes" }, { "paragraph_id": 49, "text": "An area of current investigation in evolutionary developmental biology is the developmental basis of adaptations and exaptations. This research addresses the origin and evolution of embryonic development and how modifications of development and developmental processes produce novel features. These studies have shown that evolution can alter development to produce new structures, such as embryonic bone structures that develop into the jaw in other animals but instead form part of the middle ear in mammals. It is also possible for structures that have been lost in evolution to reappear due to changes in developmental genes, such as a mutation in chickens causing embryos to grow teeth similar to those of crocodiles. It is now becoming clear that most alterations in the form of organisms are due to changes in a small set of conserved genes.", "title": "Natural outcomes" }, { "paragraph_id": 50, "text": "Interactions between organisms can produce both conflict and cooperation. When the interaction is between pairs of species, such as a pathogen and a host, or a predator and its prey, these species can develop matched sets of adaptations. Here, the evolution of one species causes adaptations in a second species. These changes in the second species then, in turn, cause new adaptations in the first species. This cycle of selection and response is called coevolution. An example is the production of tetrodotoxin in the rough-skinned newt and the evolution of tetrodotoxin resistance in its predator, the common garter snake. In this predator-prey pair, an evolutionary arms race has produced high levels of toxin in the newt and correspondingly high levels of toxin resistance in the snake.", "title": "Natural outcomes" }, { "paragraph_id": 51, "text": "Not all co-evolved interactions between species involve conflict. Many cases of mutually beneficial interactions have evolved.
For instance, an extreme form of cooperation exists between plants and the mycorrhizal fungi that grow on their roots and aid the plant in absorbing nutrients from the soil. This is a reciprocal relationship as the plants provide the fungi with sugars from photosynthesis. Here, the fungi actually grow inside plant cells, allowing them to exchange nutrients with their hosts, while sending signals that suppress the plant immune system.", "title": "Natural outcomes" }, { "paragraph_id": 52, "text": "Coalitions between organisms of the same species have also evolved. An extreme case is the eusociality found in social insects, such as bees, termites and ants, where sterile insects feed and guard the small number of organisms in a colony that are able to reproduce. On an even smaller scale, the somatic cells that make up the body of an animal limit their reproduction so they can maintain a stable organism, which then supports a small number of the animal's germ cells to produce offspring. Here, somatic cells respond to specific signals that instruct them whether to grow, remain as they are, or die. If cells ignore these signals and multiply inappropriately, their uncontrolled growth causes cancer.", "title": "Natural outcomes" }, { "paragraph_id": 53, "text": "Such cooperation within species may have evolved through the process of kin selection, which is where one organism acts to help raise a relative's offspring. This activity is selected for because if the helping individual contains alleles which promote the helping activity, it is likely that its kin will also contain these alleles and thus those alleles will be passed on. Other processes that may promote cooperation include group selection, where cooperation provides benefits to a group of organisms.", "title": "Natural outcomes" }, { "paragraph_id": 54, "text": "Speciation is the process where a species diverges into two or more descendant species.", "title": "Natural outcomes" }, { "paragraph_id": 55, "text": "There are multiple ways to define the concept of \"species.\" The choice of definition is dependent on the particularities of the species concerned. For example, some species concepts apply more readily to sexually reproducing organisms while others lend themselves better to asexual organisms. Despite the diversity of species concepts, they can be placed into one of three broad philosophical approaches: interbreeding, ecological and phylogenetic. The Biological Species Concept (BSC) is a classic example of the interbreeding approach. Defined by evolutionary biologist Ernst Mayr in 1942, the BSC states that \"species are groups of actually or potentially interbreeding natural populations, which are reproductively isolated from other such groups.\" Despite its wide and long-term use, the BSC, like other species concepts, is not without controversy, for example, because genetic recombination among prokaryotes is not an intrinsic aspect of reproduction; this is called the species problem. Some researchers have attempted a unifying monistic definition of species, while others adopt a pluralistic approach and suggest that there may be different ways to logically interpret the definition of a species.", "title": "Natural outcomes" }, { "paragraph_id": 56, "text": "Barriers to reproduction between two diverging sexual populations are required for the populations to become new species. Gene flow may slow this process by spreading new genetic variants to the other populations as well.
Depending on how far two species have diverged since their most recent common ancestor, it may still be possible for them to produce offspring, as with horses and donkeys mating to produce mules. Such hybrids are generally infertile. In this case, closely related species may regularly interbreed, but hybrids will be selected against and the species will remain distinct. However, viable hybrids are occasionally formed and these new species can either have properties intermediate between their parent species, or possess a totally new phenotype. The importance of hybridisation in producing new species of animals is unclear, although cases have been seen in many types of animals, with the gray tree frog being a particularly well-studied example.", "title": "Natural outcomes" }, { "paragraph_id": 57, "text": "Speciation has been observed multiple times both under controlled laboratory conditions and in nature. In sexually reproducing organisms, speciation results from reproductive isolation followed by genealogical divergence. There are four primary geographic modes of speciation. The most common in animals is allopatric speciation, which occurs in populations initially isolated geographically, such as by habitat fragmentation or migration. Selection under these conditions can produce very rapid changes in the appearance and behaviour of organisms. As selection and drift act independently on populations isolated from the rest of their species, separation may eventually produce organisms that cannot interbreed.", "title": "Natural outcomes" }, { "paragraph_id": 58, "text": "The second mode of speciation is peripatric speciation, which occurs when small populations of organisms become isolated in a new environment. This differs from allopatric speciation in that the isolated populations are numerically much smaller than the parental population. Here, the founder effect causes rapid speciation after an increase in inbreeding increases selection on homozygotes, leading to rapid genetic change.", "title": "Natural outcomes" }, { "paragraph_id": 59, "text": "The third mode is parapatric speciation. This is similar to peripatric speciation in that a small population enters a new habitat, but differs in that there is no physical separation between these two populations. Instead, speciation results from the evolution of mechanisms that reduce gene flow between the two populations. Generally this occurs when there has been a drastic change in the environment within the parental species' habitat. One example is the grass Anthoxanthum odoratum, which can undergo parapatric speciation in response to localised metal pollution from mines. Here, plants evolve that have resistance to high levels of metals in the soil. Selection against interbreeding with the metal-sensitive parental population produced a gradual change in the flowering time of the metal-resistant plants, which eventually produced complete reproductive isolation. Selection against hybrids between the two populations may cause reinforcement, which is the evolution of traits that promote mating within a species, as well as character displacement, which is when two species become more distinct in appearance.", "title": "Natural outcomes" }, { "paragraph_id": 60, "text": "Finally, in sympatric speciation species diverge without geographic isolation or changes in habitat. This form is rare since even a small amount of gene flow may remove genetic differences between parts of a population.
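How quickly even modest gene flow erases such differences can be sketched with a toy two-deme model (illustrative only; the migration rate, starting frequencies, and function name are assumptions, not from the article): each generation, the two populations exchange a fraction m of their gene copies, and with no selection their allele frequencies converge.

    def gene_flow(p1, p2, m, generations):
        # Two demes swap a fraction m of gene copies per generation;
        # their frequency difference shrinks by (1 - 2m) each time.
        for _ in range(generations):
            p1, p2 = (1 - m) * p1 + m * p2, (1 - m) * p2 + m * p1
        return p1, p2

    # Just 1% migration per generation shrinks an initial 0.9 vs 0.1
    # difference to well under 0.01 within 300 generations.
    print(gene_flow(p1=0.9, p2=0.1, m=0.01, generations=300))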
Generally, sympatric speciation in animals requires the evolution of both genetic differences and nonrandom mating to allow reproductive isolation to evolve.", "title": "Natural outcomes" }, { "paragraph_id": 61, "text": "One type of sympatric speciation involves crossbreeding of two related species to produce a new hybrid species. This is not common in animals as animal hybrids are usually sterile. This is because during meiosis the homologous chromosomes from each parent are from different species and cannot successfully pair. However, it is more common in plants because plants often double their number of chromosomes, to form polyploids. This allows the chromosomes from each parental species to form matching pairs during meiosis, since each parent's chromosomes are represented by a pair already. An example of such a speciation event is when the plant species Arabidopsis thaliana and Arabidopsis arenosa crossbred to give the new species Arabidopsis suecica. This happened about 20,000 years ago, and the speciation process has been repeated in the laboratory, which allows the study of the genetic mechanisms involved in this process. Indeed, chromosome doubling within a species may be a common cause of reproductive isolation, as half the doubled chromosomes will be unmatched when breeding with undoubled organisms.", "title": "Natural outcomes" }, { "paragraph_id": 62, "text": "Speciation events are important in the theory of punctuated equilibrium, which accounts for the pattern in the fossil record of short \"bursts\" of evolution interspersed with relatively long periods of stasis, where species remain relatively unchanged. In this theory, speciation and rapid evolution are linked, with natural selection and genetic drift acting most strongly on organisms undergoing speciation in novel habitats or small populations. As a result, the periods of stasis in the fossil record correspond to the parental population, while the organisms undergoing speciation and rapid evolution are found in small populations or geographically restricted habitats and are therefore rarely preserved as fossils.", "title": "Natural outcomes" }, { "paragraph_id": 63, "text": "Extinction is the disappearance of an entire species. Extinction is not an unusual event, as species regularly appear through speciation and disappear through extinction. Nearly all animal and plant species that have lived on Earth are now extinct, and extinction appears to be the ultimate fate of all species. These extinctions have happened continuously throughout the history of life, although the rate of extinction spikes in occasional mass extinction events. The Cretaceous–Paleogene extinction event, during which the non-avian dinosaurs became extinct, is the most well-known, but the earlier Permian–Triassic extinction event was even more severe, with approximately 96% of all marine species driven to extinction. The Holocene extinction event is an ongoing mass extinction associated with humanity's expansion across the globe over the past few thousand years. Present-day extinction rates are 100–1000 times greater than the background rate and up to 30% of current species may be extinct by the mid-21st century. Human activities are now the primary cause of the ongoing extinction event; global warming may further accelerate it in the future.
Although more than 99% of all species that ever lived on Earth are estimated to be extinct, about 1 trillion species are thought to exist on Earth currently, with only one-thousandth of 1% described.", "title": "Natural outcomes" }, { "paragraph_id": 64, "text": "The role of extinction in evolution is not very well understood and may depend on which type of extinction is considered. The causes of the continuous \"low-level\" extinction events, which form the majority of extinctions, may be the result of competition between species for limited resources (the competitive exclusion principle). If one species can out-compete another, this could produce species selection, with the fitter species surviving and the other species being driven to extinction. The intermittent mass extinctions are also important, but instead of acting as a selective force, they drastically reduce diversity in a nonspecific manner and promote bursts of rapid evolution and speciation in survivors.", "title": "Natural outcomes" }, { "paragraph_id": 65, "text": "Concepts and models used in evolutionary biology, such as natural selection, have many applications.", "title": "Applications" }, { "paragraph_id": 66, "text": "Artificial selection is the intentional selection of traits in a population of organisms. This has been used for thousands of years in the domestication of plants and animals. More recently, such selection has become a vital part of genetic engineering, with selectable markers such as antibiotic resistance genes being used to manipulate DNA. Proteins with valuable properties have evolved by repeated rounds of mutation and selection (for example modified enzymes and new antibodies) in a process called directed evolution.", "title": "Applications" }, { "paragraph_id": 67, "text": "Understanding the changes that have occurred during an organism's evolution can reveal the genes needed to construct parts of the body, genes which may be involved in human genetic disorders. For example, the Mexican tetra is an albino cavefish that lost its eyesight during evolution. Breeding together different populations of this blind fish produced some offspring with functional eyes, since different mutations had occurred in the isolated populations that had evolved in different caves. This helped identify genes required for vision and pigmentation.", "title": "Applications" }, { "paragraph_id": 68, "text": "Evolutionary theory has many applications in medicine. Many human diseases are not static phenomena, but capable of evolution. Viruses, bacteria, fungi and cancers evolve to be resistant to host immune defences, as well as to pharmaceutical drugs. These same problems occur in agriculture with pesticide and herbicide resistance. It is possible that we are facing the end of the effective life of most available antibiotics; predicting the evolution and evolvability of our pathogens, and devising strategies to slow or circumvent them, requires deeper knowledge of the complex forces driving evolution at the molecular level.", "title": "Applications" }, { "paragraph_id": 69, "text": "In computer science, simulations of evolution using evolutionary algorithms and artificial life started in the 1960s and were extended with simulation of artificial selection. Artificial evolution became a widely recognised optimisation method as a result of the work of Ingo Rechenberg in the 1960s. He used evolution strategies to solve complex engineering problems.
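The core loop of such an optimiser is very small. The sketch below is a generic illustration in the spirit of a (1+1) evolution strategy, not code from Rechenberg or from the article; the function names, step size, and toy cost function are all assumptions: mutate the current candidate with random noise, and keep the mutant only when it improves the objective.

    import random

    def evolve(objective, x, step=0.5, generations=1000, seed=1):
        # A minimal (1+1) evolution strategy: one parent, one mutant
        # per generation, survival of whichever scores better.
        rng = random.Random(seed)
        best = objective(x)
        for _ in range(generations):
            candidate = [xi + rng.gauss(0, step) for xi in x]
            score = objective(candidate)
            if score < best:  # minimisation: lower cost is fitter
                x, best = candidate, score
        return x, best

    # Toy engineering problem: drive a quadratic "cost surface" to zero.
    cost = lambda v: sum(vi * vi for vi in v)
    print(evolve(cost, x=[3.0, -2.0, 5.0]))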
Genetic algorithms in particular became popular through the writing of John Henry Holland. Practical applications also include automatic evolution of computer programmes. Evolutionary algorithms are now used to solve multi-dimensional problems more efficiently than software produced by human designers and also to optimise the design of systems.", "title": "Applications" }, { "paragraph_id": 70, "text": "The Earth is about 4.54 billion years old. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, during the Eoarchean Era after a geological crust started to solidify following the earlier molten Hadean Eon. Microbial mat fossils have been found in 3.48 billion-year-old sandstone in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old metasedimentary rocks discovered in Western Greenland as well as \"remains of biotic life\" found in 4.1 billion-year-old rocks in Western Australia. Commenting on the Australian findings, Stephen Blair Hedges wrote: \"If life arose relatively quickly on Earth, then it could be common in the universe.\" In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.", "title": "Evolutionary history of life" }, { "paragraph_id": 71, "text": "More than 99% of all species that ever lived on Earth, amounting to over five billion species, are estimated to be extinct. Estimates of the number of Earth's current species range from 10 million to 14 million, of which about 1.9 million are estimated to have been named and 1.6 million documented in a central database to date, leaving at least 80% not yet described.", "title": "Evolutionary history of life" }, { "paragraph_id": 72, "text": "Highly energetic chemistry is thought to have produced a self-replicating molecule around 4 billion years ago, and half a billion years later the last common ancestor of all life existed. The current scientific consensus is that the complex biochemistry that makes up life came from simpler chemical reactions. The beginning of life may have included self-replicating molecules such as RNA and the assembly of simple cells.", "title": "Evolutionary history of life" }, { "paragraph_id": 73, "text": "All organisms on Earth are descended from a common ancestor or ancestral gene pool. Current species are a stage in the process of evolution, with their diversity the product of a long series of speciation and extinction events. The common descent of organisms was first deduced from four simple facts about organisms: First, they have geographic distributions that cannot be explained by local adaptation. Second, the diversity of life is not a set of completely unique organisms, but organisms that share morphological similarities. Third, vestigial traits with no clear purpose resemble functional ancestral traits. Fourth, organisms can be classified using these similarities into a hierarchy of nested groups, similar to a family tree.", "title": "Evolutionary history of life" }, { "paragraph_id": 74, "text": "Due to horizontal gene transfer, this \"tree of life\" may be more complicated than a simple branching tree, since some genes have spread independently between distantly related species. To solve this problem and others, some authors prefer to use the \"Coral of life\" as a metaphor or a mathematical model to illustrate the evolution of life.
This view dates back to an idea briefly mentioned by Darwin but later abandoned.", "title": "Evolutionary history of life" }, { "paragraph_id": 75, "text": "Past species have also left records of their evolutionary history. Fossils, along with the comparative anatomy of present-day organisms, constitute the morphological, or anatomical, record. By comparing the anatomies of both modern and extinct species, palaeontologists can infer the lineages of those species. However, this approach is most successful for organisms that had hard body parts, such as shells, bones or teeth. Further, as prokaryotes such as bacteria and archaea share a limited set of common morphologies, their fossils do not provide information on their ancestry.", "title": "Evolutionary history of life" }, { "paragraph_id": 76, "text": "More recently, evidence for common descent has come from the study of biochemical similarities between organisms. For example, all living cells use the same basic set of nucleotides and amino acids. The development of molecular genetics has revealed the record of evolution left in organisms' genomes: dating when species diverged through the molecular clock produced by mutations. For example, these DNA sequence comparisons have revealed that humans and chimpanzees share 98% of their genomes and analysing the few areas where they differ helps shed light on when the common ancestor of these species existed.", "title": "Evolutionary history of life" }, { "paragraph_id": 77, "text": "Prokaryotes inhabited the Earth from approximately 3–4 billion years ago. No obvious changes in morphology or cellular organisation occurred in these organisms over the next few billion years. Eukaryotic cells emerged between 1.6 and 2.7 billion years ago. The next major change in cell structure came when bacteria were engulfed by eukaryotic cells, in a cooperative association called endosymbiosis. The engulfed bacteria and the host cell then underwent coevolution, with the bacteria evolving into either mitochondria or hydrogenosomes. Another engulfment of cyanobacterial-like organisms led to the formation of chloroplasts in algae and plants.", "title": "Evolutionary history of life" }, { "paragraph_id": 78, "text": "The history of life was that of the unicellular eukaryotes, prokaryotes and archaea until about 610 million years ago when multicellular organisms began to appear in the oceans in the Ediacaran period. The evolution of multicellularity occurred in multiple independent events, in organisms as diverse as sponges, brown algae, cyanobacteria, slime moulds and myxobacteria. In January 2016, scientists reported that, about 800 million years ago, a minor genetic change in a single molecule called GK-PID may have allowed organisms to go from a single-celled form to one of many cells.", "title": "Evolutionary history of life" }, { "paragraph_id": 79, "text": "Soon after the emergence of these first multicellular organisms, a remarkable amount of biological diversity appeared over approximately 10 million years, in an event called the Cambrian explosion. Here, the majority of types of modern animals appeared in the fossil record, as well as unique lineages that subsequently became extinct.
Various triggers for the Cambrian explosion have been proposed, including the accumulation of oxygen in the atmosphere from photosynthesis.", "title": "Evolutionary history of life" }, { "paragraph_id": 80, "text": "About 500 million years ago, plants and fungi colonised the land and were soon followed by arthropods and other animals. Insects were particularly successful and even today make up the majority of animal species. Amphibians first appeared around 364 million years ago, followed by early amniotes and birds around 155 million years ago (both from \"reptile\"-like lineages), mammals around 129 million years ago, Homininae around 10 million years ago and modern humans around 250,000 years ago. However, despite the evolution of these large animals, smaller organisms similar to the types that evolved early in this process continue to be highly successful and dominate the Earth, with the majority of both biomass and species being prokaryotes.", "title": "Evolutionary history of life" }, { "paragraph_id": 81, "text": "The proposal that one type of organism could descend from another type goes back to some of the first pre-Socratic Greek philosophers, such as Anaximander and Empedocles. Such proposals survived into Roman times. The poet and philosopher Lucretius followed Empedocles in his masterwork De rerum natura (On the Nature of Things).", "title": "History of evolutionary thought" }, { "paragraph_id": 82, "text": "In contrast to these materialistic views, Aristotelianism had considered all natural things as actualisations of fixed natural possibilities, known as forms. This became part of a medieval teleological understanding of nature in which all things have an intended role to play in a divine cosmic order. Variations of this idea became the standard understanding of the Middle Ages and were integrated into Christian learning, but Aristotle did not demand that real types of organisms always correspond one-for-one with exact metaphysical forms and specifically gave examples of how new types of living things could come to be.", "title": "History of evolutionary thought" }, { "paragraph_id": 83, "text": "A number of Arab Muslim scholars wrote about evolution, most notably Ibn Khaldun, who wrote the book Muqaddimah in 1377 AD, in which he asserted that humans developed from \"the world of the monkeys\", in a process by which \"species become more numerous\".", "title": "History of evolutionary thought" }, { "paragraph_id": 84, "text": "The \"New Science\" of the 17th century rejected the Aristotelian approach. It sought to explain natural phenomena in terms of physical laws that were the same for all visible things and that did not require the existence of any fixed natural categories or divine cosmic order. However, this new approach was slow to take root in the biological sciences, which remained the last bastion of the concept of fixed natural types. John Ray applied one of the previously more general terms for fixed natural types, \"species\", to plant and animal types, but he strictly identified each type of living thing as a species and proposed that each species could be defined by the features that perpetuated themselves generation after generation.
The biological classification introduced by Carl Linnaeus in 1735 explicitly recognised the hierarchical nature of species relationships, but still viewed species as fixed according to a divine plan.", "title": "History of evolutionary thought" }, { "paragraph_id": 85, "text": "Other naturalists of this time speculated on the evolutionary change of species over time according to natural laws. In 1751, Pierre Louis Maupertuis wrote of natural modifications occurring during reproduction and accumulating over many generations to produce new species. Georges-Louis Leclerc, Comte de Buffon, suggested that species could degenerate into different organisms, and Erasmus Darwin proposed that all warm-blooded animals could have descended from a single microorganism (or \"filament\"). The first full-fledged evolutionary scheme was Jean-Baptiste Lamarck's \"transmutation\" theory of 1809, which envisaged spontaneous generation continually producing simple forms of life that developed greater complexity in parallel lineages with an inherent progressive tendency, and postulated that on a local level, these lineages adapted to the environment by inheriting changes caused by their use or disuse in parents. (The latter process was later called Lamarckism.) These ideas were condemned by established naturalists as speculation lacking empirical support. In particular, Georges Cuvier insisted that species were unrelated and fixed, their similarities reflecting divine design for functional needs. In the meantime, Ray's ideas of benevolent design had been developed by William Paley into the Natural Theology or Evidences of the Existence and Attributes of the Deity (1802), which proposed complex adaptations as evidence of divine design and which was admired by Charles Darwin.", "title": "History of evolutionary thought" }, { "paragraph_id": 86, "text": "The crucial break from the concept of constant typological classes or types in biology came with the theory of evolution through natural selection, which was formulated by Charles Darwin and Alfred Wallace in terms of variable populations. Darwin used the expression \"descent with modification\" rather than \"evolution\". Partly influenced by An Essay on the Principle of Population (1798) by Thomas Robert Malthus, Darwin noted that population growth would lead to a \"struggle for existence\" in which favourable variations prevailed as others perished. In each generation, many offspring fail to survive to an age of reproduction because of limited resources. This could explain the diversity of plants and animals from a common ancestry through the working of natural laws in the same way for all types of organism. Darwin developed his theory of \"natural selection\" from 1838 onwards and was writing up his \"big book\" on the subject when Alfred Russel Wallace sent him a version of virtually the same theory in 1858. Their separate papers were presented together at an 1858 meeting of the Linnean Society of London. At the end of 1859, Darwin's publication of his \"abstract\" as On the Origin of Species explained natural selection in detail and in a way that led to an increasingly wide acceptance of Darwin's concepts of evolution at the expense of alternative theories. Thomas Henry Huxley applied Darwin's ideas to humans, using paleontology and comparative anatomy to provide strong evidence that humans and apes shared a common ancestry. 
Some were disturbed by this since it implied that humans did not have a special place in the universe.", "title": "History of evolutionary thought" }, { "paragraph_id": 87, "text": "The mechanisms of reproductive heritability and the origin of new traits remained a mystery. Towards this end, Darwin developed his provisional theory of pangenesis. In 1865, Gregor Mendel reported that traits were inherited in a predictable manner through the independent assortment and segregation of elements (later known as genes). Mendel's laws of inheritance eventually supplanted most of Darwin's pangenesis theory. August Weismann made the important distinction between germ cells that give rise to gametes (such as sperm and egg cells) and the somatic cells of the body, demonstrating that heredity passes through the germ line only. Hugo de Vries connected Darwin's pangenesis theory to Weismann's germ/soma cell distinction and proposed that Darwin's pangenes were concentrated in the cell nucleus and when expressed they could move into the cytoplasm to change the cell's structure. De Vries was also one of the researchers who made Mendel's work well known, believing that Mendelian traits corresponded to the transfer of heritable variations along the germline. To explain how new variants originate, de Vries developed a mutation theory that led to a temporary rift between the Mendelians, who allied with de Vries, and the biometricians, who defended Darwinian gradual evolution. In the 1930s, pioneers in the field of population genetics, such as Ronald Fisher, Sewall Wright and J. B. S. Haldane, set the foundations of evolutionary theory on a robust statistical footing. The apparent contradiction between Darwin's theory, genetic mutations, and Mendelian inheritance was thus resolved.", "title": "History of evolutionary thought" }, { "paragraph_id": 88, "text": "In the 1920s and 1930s, the modern synthesis connected natural selection and population genetics, based on Mendelian inheritance, into a unified theory that included random genetic drift, mutation, and gene flow. This new version of evolutionary theory focused on changes in allele frequencies in populations. It explained both patterns observed across species in populations and fossil transitions in palaeontology.", "title": "History of evolutionary thought" }, { "paragraph_id": 89, "text": "Since then, further syntheses have extended evolution's explanatory power in the light of numerous discoveries, to cover biological phenomena across the whole of the biological hierarchy from genes to populations.", "title": "History of evolutionary thought" }, { "paragraph_id": 90, "text": "The publication of the structure of DNA by James Watson and Francis Crick with the contribution of Rosalind Franklin in 1953 demonstrated a physical mechanism for inheritance. Molecular biology improved understanding of the relationship between genotype and phenotype. Advances were also made in phylogenetic systematics, mapping the transition of traits into a comparative and testable framework through the publication and use of evolutionary trees.
In 1973, evolutionary biologist Theodosius Dobzhansky penned that \"nothing in biology makes sense except in the light of evolution\", because evolution has brought together what first seemed disjointed facts in natural history into a coherent explanatory body of knowledge that describes and predicts many observable facts about life on this planet.", "title": "History of evolutionary thought" }, { "paragraph_id": 91, "text": "One extension, known as evolutionary developmental biology and informally called \"evo-devo,\" emphasises how changes between generations (evolution) act on patterns of change within individual organisms (development). Since the beginning of the 21st century, some biologists have argued for an extended evolutionary synthesis, which would account for the effects of non-genetic inheritance modes, such as epigenetics, parental effects, ecological inheritance and cultural inheritance, and evolvability.", "title": "History of evolutionary thought" }, { "paragraph_id": 92, "text": "In the 19th century, particularly after the publication of On the Origin of Species in 1859, the idea that life had evolved was an active source of academic debate centred on the philosophical, social and religious implications of evolution. Today, the modern evolutionary synthesis is accepted by a vast majority of scientists. However, evolution remains a contentious concept for some theists.", "title": "Social and cultural responses" }, { "paragraph_id": 93, "text": "While various religions and denominations have reconciled their beliefs with evolution through concepts such as theistic evolution, there are creationists who believe that evolution is contradicted by the creation myths found in their religions and who raise various objections to evolution. As had been demonstrated by responses to the publication of Vestiges of the Natural History of Creation in 1844, the most controversial aspect of evolutionary biology is the implication of human evolution that humans share common ancestry with apes and that the mental and moral faculties of humanity have the same types of natural causes as other inherited traits in animals. In some countries, notably the United States, these tensions between science and religion have fuelled the current creation–evolution controversy, a religious conflict focusing on politics and public education. While other scientific fields such as cosmology and Earth science also conflict with literal interpretations of many religious texts, evolutionary biology experiences significantly more opposition from religious literalists.", "title": "Social and cultural responses" }, { "paragraph_id": 94, "text": "The teaching of evolution in American secondary school biology classes was uncommon in most of the first half of the 20th century. The Scopes Trial decision of 1925 caused the subject to become very rare in American secondary biology textbooks for a generation, but it was gradually re-introduced later and became legally protected with the 1968 Epperson v. Arkansas decision. Since then, the competing religious belief of creationism was legally disallowed in secondary school curricula in various decisions in the 1970s and 1980s, but it returned in pseudoscientific form as intelligent design (ID), to be excluded once again in the 2005 Kitzmiller v. Dover Area School District case. The debate over Darwin's ideas did not generate significant controversy in China.", "title": "Social and cultural responses" } ]
Evolution is the change in the heritable characteristics of biological populations over successive generations. Evolution occurs when evolutionary processes such as natural selection and genetic drift act on genetic variation, resulting in certain characteristics becoming more or less common within a population over successive generations. The process of evolution has given rise to biodiversity at every level of biological organisation. The theory of evolution by natural selection was conceived independently by Charles Darwin and Alfred Russel Wallace in the mid-19th century as an explanation for why organisms are adapted to their physical and biological environments. The theory was first set out in detail in Darwin's book On the Origin of Species. Evolution by natural selection is established by observable facts about living organisms: (1) more offspring are often produced than can possibly survive; (2) traits vary among individuals with respect to their morphology, physiology, and behaviour; (3) different traits confer different rates of survival and reproduction; and (4) traits can be passed from generation to generation. In successive generations, members of a population are therefore more likely to be replaced by the offspring of parents with favourable characteristics for that environment. In the early 20th century, competing ideas of evolution were refuted and evolution was combined with Mendelian inheritance and population genetics to give rise to modern evolutionary theory. In this synthesis the basis for heredity is in DNA molecules that pass information from generation to generation. The processes that change DNA in a population include natural selection, genetic drift, mutation, and gene flow. All life on Earth—including humanity—shares a last universal common ancestor (LUCA), which lived approximately 3.5–3.8 billion years ago. The fossil record includes a progression from early biogenic graphite to microbial mat fossils to fossilised multicellular organisms. Existing patterns of biodiversity have been shaped by repeated formations of new species (speciation), changes within species (anagenesis), and loss of species (extinction) throughout the evolutionary history of life on Earth. Morphological and biochemical traits tend to be more similar among species that share a more recent common ancestor, which historically was used to reconstruct phylogenetic trees, although direct comparison of genetic sequences is a more common method today. Evolutionary biologists have continued to study various aspects of evolution by forming and testing hypotheses as well as constructing theories based on evidence from the field or laboratory and on data generated by the methods of mathematical and theoretical biology. Their discoveries have influenced not just the development of biology but also other fields including agriculture, medicine, and computer science.
2001-11-06T00:39:57Z
2023-12-28T14:23:19Z
[ "Template:About", "Template:Use British English", "Template:Clarify", "Template:Reflist", "Template:Authority control", "Template:Short description", "Template:Imagefact", "Template:Cite news", "Template:Internet Archive", "Template:Refend", "Template:Featured article", "Template:Cite book", "Template:See also", "Template:PhylomapA", "Template:Harvnb", "Template:Cite web", "Template:Cite journal", "Template:Webarchive", "Template:Sister project links", "Template:Evolution sidebar", "Template:Spoken Wikipedia", "Template:Cite encyclopedia", "Template:Doi", "Template:Toclimit", "Template:Multiple image", "Template:Sfn", "Template:Annotated link", "Template:In Our Time", "Template:Evolution", "Template:Pp-semi-protected", "Template:Use dmy dates", "Template:Further", "Template:Main", "Template:Clear", "Template:Align", "Template:Page needed", "Template:Refbegin", "Template:See introduction", "Template:YouTube", "Template:Library resources box" ]
https://en.wikipedia.org/wiki/Evolution
9,238
Ernst Mayr
Ernst Walter Mayr (/ˈmaɪər/; 5 July 1904 – 3 February 2005) was one of the 20th century's leading evolutionary biologists. He was also a renowned taxonomist, tropical explorer, ornithologist, philosopher of biology, and historian of science. His work contributed to the conceptual revolution that led to the modern evolutionary synthesis of Mendelian genetics, systematics, and Darwinian evolution, and to the development of the biological species concept. Although Charles Darwin and others posited that multiple species could evolve from a single common ancestor, the mechanism by which this occurred was not understood, creating the species problem. Ernst Mayr approached the problem with a new definition for species. In his book Systematics and the Origin of Species (1942) he wrote that a species is not just a group of morphologically similar individuals, but a group that can breed only among themselves, excluding all others. When populations within a species become isolated by geography, feeding strategy, mate choice, or other means, they may start to differ from other populations through genetic drift and natural selection, and over time may evolve into new species. The most significant and rapid genetic reorganization occurs in extremely small populations that have been isolated (as on islands). His theory of peripatric speciation (a more precise form of allopatric speciation which he advanced), based on his work on birds, is still considered a leading mode of speciation, and was the theoretical underpinning for the theory of punctuated equilibrium, proposed by Niles Eldredge and Stephen Jay Gould. Mayr is sometimes credited with inventing modern philosophy of biology, particularly the part related to evolutionary biology, which he distinguished from physics due to its introduction of (natural) history into science. Mayr was the second son of Helene Pusinelli and Otto Mayr. His father was a district prosecuting attorney at Würzburg but took an interest in natural history and took the children out on field trips. Mayr learnt all the local birds in Würzburg from his elder brother Otto. He also had access to a natural history magazine for amateurs, Kosmos. His father died just before he was thirteen. The family then moved to Dresden, where he studied at the Staatsgymnasium in Dresden-Neustadt and completed his high school education. In April 1922, while still in high school, he joined the newly founded Saxony Ornithologists' Association. There he met Rudolf Zimmermann, who became his ornithological mentor. In February 1923, Mayr passed his high school examination (Abitur) and his mother rewarded him with a pair of binoculars. On 23 March 1923 on one of the lakes of Moritzburg, the Frauenteich, he spotted what he identified as a red-crested pochard. The species had not been seen in Saxony since 1845 and the local club argued about the identity. Raimund Schelcher (1891–1979) of the club then suggested that Mayr visit his classmate Erwin Stresemann on his way to Greifswald, where Mayr was to begin his medical studies. After a tough interrogation, Stresemann accepted and published the sighting as authentic. Stresemann was very impressed and suggested that, between semesters, Mayr could work as a volunteer in the ornithological section of the museum. Mayr wrote about this event, "It was as if someone had given me the key to heaven." 
He entered the University of Greifswald in 1923 and, according to Mayr himself, "took the medical curriculum (to satisfy a family tradition)", but after only a year he decided to leave medicine and enrolled at the Faculty of Biological Sciences. Mayr was endlessly interested in ornithology and "chose Greifswald at the Baltic for my studies for no other reason than that ... it was situated in the ornithologically most interesting area." Although he ostensibly planned to become a physician, he was "first and foremost an ornithologist." During the first semester break, Stresemann gave him a test to identify treecreepers and Mayr was able to identify most of the specimens correctly. Stresemann declared that Mayr "was a born systematist". In 1925, Stresemann suggested that he give up his medical studies altogether, leave the faculty of medicine, enrol in the faculty of biology, and then join the Berlin Museum, with the prospect of bird-collecting trips to the tropics on the condition that he completed his doctoral studies in 16 months. Mayr completed his doctorate in ornithology at the University of Berlin under Dr. Carl Zimmer, who was a full professor (Ordentlicher Professor), on 24 June 1926 at the age of 21. On 1 July he accepted the position offered to him at the museum for a monthly salary of 330.54 Reichsmark. At the International Zoological Congress at Budapest in 1927, Mayr was introduced by Stresemann to banker and naturalist Walter Rothschild, who asked him to undertake an expedition to New Guinea on behalf of himself and the American Museum of Natural History in New York. In New Guinea, Mayr collected several thousand bird skins (he named 26 new bird species during his lifetime) and, in the process, also named 38 new orchid species. During his stay in New Guinea, he was invited to accompany the Whitney South Sea Expedition to the Solomon Islands. Also, while in New Guinea, he visited the Lutheran missionaries Otto Thiele and Christian Keyser in the Finschhafen district; there, while in conversation with his hosts, he uncovered the discrepancies in Hermann Detzner's popular book Four Years among Cannibals: New Guinea, in which Detzner claimed to have seen the interior and discovered several species of flora and fauna, all while remaining only steps ahead of the Australian patrols sent to capture him. He returned to Germany in 1930. Mayr moved to the United States in 1931 to take up a curatorial position at the American Museum of Natural History, where he played the important role of brokering and acquiring the Walter Rothschild collection of bird skins, which was being sold in order to pay off a blackmailer. During his time at the museum he produced numerous publications on bird taxonomy, and in 1942 published his first book, Systematics and the Origin of Species, which completed the evolutionary synthesis started by Darwin. After Mayr was appointed at the American Museum of Natural History, he influenced American ornithological research by mentoring young birdwatchers. Mayr was surprised at the differences between American and German birding societies. He noted that the German society was "far more scientific, far more interested in life histories and breeding bird species, as well as in reports on recent literature." Mayr organized a monthly seminar under the auspices of the Linnaean Society of New York. Under the influence of J.A. Allen, Frank Chapman, and Jonathan Dwight, the society concentrated on taxonomy and later became a clearing house for bird banding and sight records.
Mayr encouraged his Linnaean Society seminar participants to take up a specific research project of their own. Under Mayr's influence one of them, Joseph Hickey, went on to write A Guide to Birdwatching (1943). Hickey remembered later, "Mayr was our age and invited on all our field trips. The heckling of this German foreigner was tremendous, but he gave tit for tat, and any modern picture of Dr E. Mayr as a very formal person does not square with my memory of the 1930s. He held his own." A group of eight young birdwatchers from The Bronx later became the Bronx County Bird Club, led by Ludlow Griscom. "Everyone should have a problem" was the way one Bronx County Bird Club member recalled Mayr's refrain. Mayr said of his own involvement with the local birdwatchers: "In those early years in New York when I was a stranger in a big city, it was the companionship and later friendship which I was offered in the Linnean Society that was the most important thing in my life." Mayr also greatly influenced the American ornithologist Margaret Morse Nice. Mayr encouraged her to correspond with European ornithologists and helped her in her landmark study on song sparrows. Nice wrote to Joseph Grinnell in 1932, trying to get foreign literature reviewed in the Condor: "Too many American ornithologists have despised the study of the living bird; the magazines and books that deal with the subject abound in careless statements, anthropomorphic interpretations, repetition of ancient errors, and sweeping conclusions from a pitiful array of facts. ... in Europe the study of the living bird is taken seriously. We could learn a great deal from their writing." Mayr ensured that Nice could publish her two-volume Studies in the Life History of the Song Sparrow. He found her a publisher, and her book was reviewed by Aldo Leopold, Joseph Grinnell, and Jean Delacour. Nice dedicated her book to "My Friend Ernst Mayr." Mayr joined the faculty of Harvard University in 1953, where he also served as director of the Museum of Comparative Zoology from 1961 to 1970. He retired in 1975 as emeritus professor of zoology, showered with honors. Following his retirement, he went on to publish more than 200 articles, in a variety of journals—more than some reputable scientists publish in their entire careers; 14 of his 25 books were published after he was 65. Even as a centenarian, he continued to write books. On his 100th birthday, he was interviewed by Scientific American magazine. Mayr died on 3 February 2005 in his retirement home in Bedford, Massachusetts, after a short illness. He had married fellow German Margarete "Gretel" Simon in May 1935 (they had met at a party in Manhattan in 1932), and she assisted Mayr in some of his work. Margarete died in 1990. He was survived by two daughters (Christa Menzel and Susanne Harrison), five grandchildren and 10 great-grandchildren. The awards that Mayr received include the National Medal of Science, the Balzan Prize, the Sarton Medal of the History of Science Society, the International Prize for Biology, the Loye and Alden Miller Research Award, and the Lewis Thomas Prize for Writing about Science. In 1939 he was elected a Corresponding Member of the Royal Australasian Ornithologists Union. He was awarded the 1946 Leidy Award from the Academy of Natural Sciences of Philadelphia. He was awarded the Linnean Society of London's prestigious Darwin-Wallace Medal in 1958 and the Linnaean Society of New York's inaugural Eisenmann Medal in 1983. 
For his work Animal Species and Evolution, he was awarded the Daniel Giraud Elliot Medal from the National Academy of Sciences in 1967. Mayr was elected a Foreign Member of the Royal Society (ForMemRS) in 1988. In 1995 he received the Benjamin Franklin Medal for Distinguished Achievement in the Sciences of the American Philosophical Society, of which he was already a member. Mayr never won a Nobel Prize, but he noted that there is no prize for evolutionary biology and that Darwin would not have received one, either. (In fact, there is no Nobel Prize for biology.) Mayr did, however, win the 1999 Crafoord Prize. It honors basic research in fields that do not qualify for Nobel Prizes and is administered by the same organization as the Nobel Prize. In 2001, Mayr received the Golden Plate Award of the American Academy of Achievement. Mayr was co-author of six global reviews of bird species new to science (listed below). Mayr said he was an atheist in regard to "the idea of a personal God" because "there is nothing that supports [it]". As a traditionally trained biologist, Mayr was often highly critical of early mathematical approaches to evolution, such as those of J.B.S. Haldane, and famously called such approaches "beanbag genetics" in 1959. He maintained that factors such as reproductive isolation had to be taken into account. In a similar fashion, Mayr was also quite critical of molecular evolution studies such as those of Carl Woese. Current molecular studies in evolution and speciation indicate that although allopatric speciation is the norm, there are numerous cases of sympatric speciation in groups with greater mobility, such as birds. The precise mechanism of sympatric speciation, however, is usually a form of microallopatry enabled by variations in niche occupancy among individuals within a population. In many of his writings, Mayr rejected reductionism in evolutionary biology, arguing that evolutionary pressures act on the whole organism, not on single genes, and that genes can have different effects depending on the other genes present. He advocated a study of the whole genome, rather than of only isolated genes. After articulating the biological species concept in 1942, Mayr played a central role in the species problem debate over what was the best species concept. He staunchly defended the biological species concept against the many definitions of "species" that others proposed. Mayr was an outspoken defender of the scientific method and was known to sharply critique science on the fringe. As a notable example, in 1995, he criticized the Search for Extra-Terrestrial Intelligence (SETI), as conducted by fellow Harvard professor Paul Horowitz, as a waste of university and student resources for its inability to address and answer a scientific question. Over 60 eminent scientists, led by Carl Sagan, rebutted the criticism. Mayr rejected the gene-centered view of evolution and starkly but politely criticised Richard Dawkins's ideas: The funny thing is if in England, you ask a man in the street who the greatest living Darwinian is, he will say Richard Dawkins. And indeed, Dawkins has done a marvelous job of popularizing Darwinism. But Dawkins' basic theory of the gene being the object of evolution is totally non-Darwinian. I would not call him the greatest Darwinian.
Mayr insisted that the entire genome should be considered as the target of selection, rather than individual genes: The idea that a few people have about the gene being the target of selection is completely impractical; a gene is never visible to natural selection, and in the genotype, it is always in the context with other genes, and the interaction with those other genes make a particular gene either more favorable or less favorable. In fact, Dobzhansky, for instance, worked quite a bit on so-called lethal chromosomes which are highly successful in one combination, and lethal in another. Therefore people like Dawkins in England who still think the gene is the target of selection are evidently wrong. In the 30s and 40s, it was widely accepted that genes were the target of selection, because that was the only way they could be made accessible to mathematics, but now we know that it is really the whole genotype of the individual, not the gene. Except for that slight revision, the basic Darwinian theory hasn't changed in the last 50 years. Darwin's theory of evolution is based on key facts and the inferences drawn from them, which Mayr summarised as follows: In relation to the publication of Darwin's Origin of Species, Mayr identified philosophical implications of evolution:
[ { "paragraph_id": 0, "text": "Ernst Walter Mayr (/ˈmaɪər/; 5 July 1904 – 3 February 2005) was one of the 20th century's leading evolutionary biologists. He was also a renowned taxonomist, tropical explorer, ornithologist, philosopher of biology, and historian of science. His work contributed to the conceptual revolution that led to the modern evolutionary synthesis of Mendelian genetics, systematics, and Darwinian evolution, and to the development of the biological species concept.", "title": "" }, { "paragraph_id": 1, "text": "Although Charles Darwin and others posited that multiple species could evolve from a single common ancestor, the mechanism by which this occurred was not understood, creating the species problem. Ernst Mayr approached the problem with a new definition for species. In his book Systematics and the Origin of Species (1942) he wrote that a species is not just a group of morphologically similar individuals, but a group that can breed only among themselves, excluding all others. When populations within a species become isolated by geography, feeding strategy, mate choice, or other means, they may start to differ from other populations through genetic drift and natural selection, and over time may evolve into new species. The most significant and rapid genetic reorganization occurs in extremely small populations that have been isolated (as on islands).", "title": "" }, { "paragraph_id": 2, "text": "His theory of peripatric speciation (a more precise form of allopatric speciation which he advanced), based on his work on birds, is still considered a leading mode of speciation, and was the theoretical underpinning for the theory of punctuated equilibrium, proposed by Niles Eldredge and Stephen Jay Gould. Mayr is sometimes credited with inventing modern philosophy of biology, particularly the part related to evolutionary biology, which he distinguished from physics due to its introduction of (natural) history into science.", "title": "" }, { "paragraph_id": 3, "text": "Mayr was the second son of Helene Pusinelli and Otto Mayr. His father was a district prosecuting attorney at Würzburg but took an interest in natural history and took the children out on field trips. Mayr learnt all the local birds in Würzburg from his elder brother Otto. He also had access to a natural history magazine for amateurs, Kosmos. His father died just before he was thirteen. The family then moved to Dresden, where he studied at the Staatsgymnasium in Dresden-Neustadt and completed his high school education. In April 1922, while still in high school, he joined the newly founded Saxony Ornithologists' Association. There he met Rudolf Zimmermann, who became his ornithological mentor. In February 1923, Mayr passed his high school examination (Abitur) and his mother rewarded him with a pair of binoculars.", "title": "Biography" }, { "paragraph_id": 4, "text": "On 23 March 1923 on one of the lakes of Moritzburg, the Frauenteich, he spotted what he identified as a red-crested pochard. The species had not been seen in Saxony since 1845 and the local club argued about the identity. Raimund Schelcher (1891–1979) of the club then suggested that Mayr visit his classmate Erwin Stresemann on his way to Greifswald, where Mayr was to begin his medical studies. After a tough interrogation, Stresemann accepted and published the sighting as authentic. Stresemann was very impressed and suggested that, between semesters, Mayr could work as a volunteer in the ornithological section of the museum. 
Mayr wrote about this event, \"It was as if someone had given me the key to heaven.\" He entered the University of Greifswald in 1923 and, according to Mayr himself, \"took the medical curriculum (to satisfy a family tradition) but after only a year, he decided to leave medicine and enrolled at the Faculty of Biological Sciences.\" Mayr was endlessly interested in ornithology and \"chose Greifswald at the Baltic for my studies for no other reason than that ... it was situated in the ornithologically most interesting area.\" Although he ostensibly planned to become a physician, he was \"first and foremost an ornithologist.\" During the first semester break Stresemann gave him a test to identify treecreepers and Mayr was able to identify most of the specimens correctly. Stresemann declared that Mayr \"was a born systematist\". In 1925, Stresemann suggested that he give up his medical studies, in fact he should leave the faculty of medicine and enrol into the faculty of Biology and then join the Berlin Museum with the prospect of bird-collecting trips to the tropics, on the condition that he completed his doctoral studies in 16 months. Mayr completed his doctorate in ornithology at the University of Berlin under Dr. Carl Zimmer, who was a full professor (Ordentlicher Professor), on 24 June 1926 at the age of 21. On 1 July he accepted the position offered to him at the museum for a monthly salary of 330.54 Reichsmark.", "title": "Biography" }, { "paragraph_id": 5, "text": "At the International Zoological Congress at Budapest in 1927, Mayr was introduced by Stresemann to banker and naturalist Walter Rothschild, who asked him to undertake an expedition to New Guinea on behalf of himself and the American Museum of Natural History in New York. In New Guinea, Mayr collected several thousand bird skins (he named 26 new bird species during his lifetime) and, in the process also named 38 new orchid species. During his stay in New Guinea, he was invited to accompany the Whitney South Sea Expedition to the Solomon Islands. Also, while in New Guinea, he visited the Lutheran missionaries Otto Thiele and Christian Keyser, in the Finschhafen district; there, while in conversation with his hosts, he uncovered the discrepancies in Hermann Detzner's popular book Four Years among Cannibals: New Guinea, in which Detzner claimed to have seen the interior, discovered several species of flora and fauna, while remaining only steps ahead of the Australian patrols sent to capture him. He returned to Germany in 1930.", "title": "Biography" }, { "paragraph_id": 6, "text": "Mayr moved to the United States in 1931 to take up a curatorial position at the American Museum of Natural History, where he played the important role of brokering and acquiring the Walter Rothschild collection of bird skins, which was being sold in order to pay off a blackmailer. During his time at the museum he produced numerous publications on bird taxonomy, and in 1942 his first book Systematics and the Origin of Species, which completed the evolutionary synthesis started by Darwin.", "title": "Biography" }, { "paragraph_id": 7, "text": "After Mayr was appointed at the American Museum of Natural History, he influenced American ornithological research by mentoring young birdwatchers. Mayr was surprised at the differences between American and German birding societies. 
He noted that the German society was \"far more scientific, far more interested in life histories and breeding bird species, as well as in reports on recent literature.\"", "title": "Biography" }, { "paragraph_id": 8, "text": "Mayr organized a monthly seminar under the auspices of the Linnean Society of New York. Under the influence of J.A. Allen, Frank Chapman, and Jonathan Dwight, the society concentrated on taxonomy and later became a clearing house for bird banding and sight records.", "title": "Biography" }, { "paragraph_id": 9, "text": "Mayr encouraged his Linnaean Society seminar participants to take up a specific research project of their own. Under Mayr's influence one of them, Joseph Hickey, went on to write A Guide to Birdwatching (1943). Hickey remembered later, \"Mayr was our age and invited on all our field trips. The heckling of this German foreigner was tremendous, but he gave tit for tat, and any modern picture of Dr E. Mayr as a very formal person does not square with my memory of the 1930s. He held his own.\" A group of eight young birdwatchers from The Bronx later became the Bronx County Bird Club, led by Ludlow Griscom. \"Everyone should have a problem\" was the way one Bronx County Bird Club member recalled Mayr's refrain. Mayr said of his own involvement with the local birdwatchers: \"In those early years in New York when I was a stranger in a big city, it was the companionship and later friendship which I was offered in the Linnean Society that was the most important thing in my life.\"", "title": "Biography" }, { "paragraph_id": 10, "text": "Mayr also greatly influenced the American ornithologist Margaret Morse Nice. Mayr encouraged her to correspond with European ornithologists and helped her in her landmark study on song sparrows. Nice wrote to Joseph Grinnell in 1932, trying to get foreign literature reviewed in the Condor: \"Too many American ornithologists have despised the study of the living bird; the magazines and books that deal with the subject abound in careless statements, anthropomorphic interpretations, repetition of ancient errors, and sweeping conclusions from a pitiful array of facts. ... in Europe the study of the living bird is taken seriously. We could learn a great deal from their writing.\" Mayr ensured that Nice could publish her two-volume Studies in the Life History of the Song Sparrow. He found her a publisher, and her book was reviewed by Aldo Leopold, Joseph Grinnell, and Jean Delacour. Nice dedicated her book to \"My Friend Ernst Mayr.\"", "title": "Biography" }, { "paragraph_id": 11, "text": "Mayr joined the faculty of Harvard University in 1953, where he also served as director of the Museum of Comparative Zoology from 1961 to 1970. He retired in 1975 as emeritus professor of zoology, showered with honors. Following his retirement, he went on to publish more than 200 articles, in a variety of journals—more than some reputable scientists publish in their entire careers; 14 of his 25 books were published after he was 65. Even as a centenarian, he continued to write books. On his 100th birthday, he was interviewed by Scientific American magazine.", "title": "Biography" }, { "paragraph_id": 12, "text": "Mayr died on 3 February 2005 in his retirement home in Bedford, Massachusetts, after a short illness. He had married fellow German Margarete \"Gretel\" Simon in May 1935 (they had met at a party in Manhattan in 1932), and she assisted Mayr in some of his work.", "title": "Biography" }, { "paragraph_id": 13, "text": "Margarete died in 1990. 
He was survived by two daughters (Christa Menzel and Susanne Harrison), five grandchildren and 10 great-grandchildren.", "title": "Biography" }, { "paragraph_id": 14, "text": "The awards that Mayr received include the National Medal of Science, the Balzan Prize, the Sarton Medal of the History of Science Society, the International Prize for Biology, the Loye and Alden Miller Research Award, and the Lewis Thomas Prize for Writing about Science. In 1939 he was elected a Corresponding Member of the Royal Australasian Ornithologists Union. He was awarded the 1946 Leidy Award from the Academy of Natural Sciences of Philadelphia. He was awarded the Linnean Society of London's prestigious Darwin-Wallace Medal in 1958 and the Linnaean Society of New York's inaugural Eisenmann Medal in 1983. For his work, Animal Species and Evolution, he was awarded the Daniel Giraud Elliot Medal from the National Academy of Sciences in 1967. Mayr was elected a Foreign Member of the Royal Society (ForMemRS) in 1988. In 1995 he received the Benjamin Franklin Medal for Distinguished Achievement in the Sciences of the American Philosophical Society, of which he was already a member. Mayr never won a Nobel Prize, but he noted that there is no prize for evolutionary biology and that Darwin would not have received one, either. (In fact, there is no Nobel Prize for biology.) Mayr did win a 1999 Crafoord Prize. It honors basic research in fields that do not qualify for Nobel Prizes and is administered by the same organization as the Nobel Prize. In 2001, Mayr received the Golden Plate Award of the American Academy of Achievement.", "title": "Biography" }, { "paragraph_id": 15, "text": "Mayr was co-author of six global reviews of bird species new to science (listed below).", "title": "Biography" }, { "paragraph_id": 16, "text": "Mayr said he was an atheist in regards to \"the idea of a personal God\" because \"there is nothing that supports [it]\".", "title": "Biography" }, { "paragraph_id": 17, "text": "As a traditionally-trained biologist, Mayr was often highly critical of early mathematical approaches to evolution, such as those of J.B.S. Haldane, and famously called such approaches \"beanbag genetics\" in 1959. He maintained that factors such as reproductive isolation had to be taken into account. In a similar fashion, Mayr was also quite critical of molecular evolution studies such as those of Carl Woese. Current molecular studies in evolution and speciation indicate that although allopatric speciation is the norm, there are numerous cases of sympatric speciation in groups with greater mobility, such as birds. The precise mechanisms of sympatric speciation, however, are usually a form of microallopatry enabled by variations in niche occupancy among individuals within a population.", "title": "Ideas" }, { "paragraph_id": 18, "text": "In many of his writings, Mayr rejected reductionism in evolutionary biology, arguing that evolutionary pressures act on the whole organism, not on single genes, and that genes can have different effects depending on the other genes present. He advocated a study of the whole genome, rather than of only isolated genes. After articulating the biological species concept in 1942, Mayr played a central role in the species problem debate over what was the best species concept. 
He staunchly defended the biological species concept against the many definitions of \"species\" that others proposed.", "title": "Ideas" }, { "paragraph_id": 19, "text": "Mayr was an outspoken defender of the scientific method and was known to critique sharply science on the edge. As a notable example, in 1995, he criticized the Search for Extra-Terrestrial Intelligence (SETI), as conducted by fellow Harvard professor Paul Horowitz, as being a waste of university and student resources for its inability to address and answer a scientific question. Over 60 eminent scientists, led by Carl Sagan, rebutted the criticism.", "title": "Ideas" }, { "paragraph_id": 20, "text": "Mayr rejected the idea of a gene-centered view of evolution and starkly but politely criticised Richard Dawkins's ideas:", "title": "Ideas" }, { "paragraph_id": 21, "text": "The funny thing is if in England, you ask a man in the street who the greatest living Darwinian is, he will say Richard Dawkins. And indeed, Dawkins has done a marvelous job of popularizing Darwinism. But Dawkins' basic theory of the gene being the object of evolution is totally non-Darwinian. I would not call him the greatest Darwinian.", "title": "Ideas" }, { "paragraph_id": 22, "text": "Mayr insisted that the entire genome should be considered as the target of selection, rather than individual genes:", "title": "Ideas" }, { "paragraph_id": 23, "text": "The idea that a few people have about the gene being the target of selection is completely impractical; a gene is never visible to natural selection, and in the genotype, it is always in the context with other genes, and the interaction with those other genes make a particular gene either more favorable or less favorable. In fact, Dobzhansky, for instance, worked quite a bit on so-called lethal chromosomes which are highly successful in one combination, and lethal in another. Therefore people like Dawkins in England who still think the gene is the target of selection are evidently wrong. In the 30s and 40s, it was widely accepted that genes were the target of selection, because that was the only way they could be made accessible to mathematics, but now we know that it is really the whole genotype of the individual, not the gene. Except for that slight revision, the basic Darwinian theory hasn't changed in the last 50 years.", "title": "Ideas" }, { "paragraph_id": 24, "text": "Darwin's theory of evolution is based on key facts and the inferences drawn from them, which Mayr summarised as follows:", "title": "Summary of Darwin's theory" }, { "paragraph_id": 25, "text": "In relation to the publication of Darwin's Origins of Species, Mayr identified philosophical implications of evolution:", "title": "Summary of Darwin's theory" } ]
Ernst Walter Mayr was one of the 20th century's leading evolutionary biologists. He was also a renowned taxonomist, tropical explorer, ornithologist, philosopher of biology, and historian of science. His work contributed to the conceptual revolution that led to the modern evolutionary synthesis of Mendelian genetics, systematics, and Darwinian evolution, and to the development of the biological species concept. Although Charles Darwin and others posited that multiple species could evolve from a single common ancestor, the mechanism by which this occurred was not understood, creating the species problem. Ernst Mayr approached the problem with a new definition for species. In his book Systematics and the Origin of Species (1942) he wrote that a species is not just a group of morphologically similar individuals, but a group that can breed only among themselves, excluding all others. When populations within a species become isolated by geography, feeding strategy, mate choice, or other means, they may start to differ from other populations through genetic drift and natural selection, and over time may evolve into new species. The most significant and rapid genetic reorganization occurs in extremely small populations that have been isolated. His theory of peripatric speciation, based on his work on birds, is still considered a leading mode of speciation, and was the theoretical underpinning for the theory of punctuated equilibrium, proposed by Niles Eldredge and Stephen Jay Gould. Mayr is sometimes credited with inventing modern philosophy of biology, particularly the part related to evolutionary biology, which he distinguished from physics due to its introduction of (natural) history into science.
2001-01-30T23:28:10Z
2023-12-10T19:07:14Z
[ "Template:Authority control", "Template:Blockquote", "Template:Quote", "Template:Cite book", "Template:Cite journal", "Template:Webarchive", "Template:Citation", "Template:Refbegin", "Template:Reflist", "Template:ISBN", "Template:Cite web", "Template:Short description", "Template:Hatnote", "Template:Infobox scientist", "Template:IPAc-en", "Template:Refend", "Template:Wikiquote", "Template:Harvnb", "Template:Winners of the National Medal of Science" ]
https://en.wikipedia.org/wiki/Ernst_Mayr
9,239
Europe
Europe is a continent comprising the westernmost peninsulas of Eurasia, located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It shares the continental landmass of Afro-Eurasia with both Africa and Asia. It is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea and the waterways of the Turkish straits. Europe covers about 10.18 million km² (3.93 million sq mi), or 2% of Earth's surface (6.8% of land area), making it the second-smallest continent (using the seven-continent model). Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 745 million (about 10% of the world population) in 2021, the third-largest after Asia and Africa. The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast. European culture is the root of Western civilisation, which traces its lineage back to ancient Greece and ancient Rome. The fall of the Western Roman Empire in 476 CE and the related Migration Period marked the end of Europe's ancient history, and the beginning of the Middle Ages. The Italian Renaissance began in Florence and spread to the rest of the continent, bringing a renewed interest in humanism, exploration, art, and science which contributed to the beginning of the modern era. Since the Age of Discovery, led by Spain and Portugal, Europe played a predominant role in global affairs with multiple explorations and conquests around the world. Between the 16th and 20th centuries, European powers colonised at various times the Americas, almost all of Africa and Oceania, and the majority of Asia. The Age of Enlightenment, the French Revolution, and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars began and were fought to a great extent in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence. During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the Revolutions of 1989, the fall of the Berlin Wall, and the dissolution of the Soviet Union. The European Union (EU) and the Council of Europe are two important international organisations aiming to represent the European continent on a political level. The Council of Europe was founded in 1948 with the idea of unifying Europe to achieve common goals and prevent future wars. Further European integration by some states led to the formation of the European Union, a separate supranational political entity based on a system of European law that lies between a confederation and a federation.
The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. A majority of its members have adopted a common currency, the euro, and participate in the European single market and a customs union. A large bloc of countries, the Schengen Area, have also abolished internal border and immigration controls. Regular popular elections take place every five years within the EU; they are considered to be the second-largest democratic elections in the world after India's. In classical Greek mythology, Europa (Ancient Greek: Εὐρώπη, Eurṓpē) was a Phoenician princess. One view is that her name derives from the Ancient Greek elements εὐρύς (eurús) 'wide, broad', and ὤψ (ōps, gen. ὠπός, ōpós) 'eye, face, countenance', hence their composite Eurṓpē would mean 'wide-gazing' or 'broad of aspect'. Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it. An alternative view is that of Robert Beekes, who has argued in favour of a Pre-Indo-European origin for the name, explaining that a derivation from eurus would yield a different toponym than Europa. Beekes has located toponyms related to that of Europa in the territory of ancient Greece, and localities such as that of Europos in ancient Macedonia. There have been attempts to connect Eurṓpē to a Semitic term for west, this being either Akkadian erebu meaning 'to go down, set' (said of the sun) or Phoenician 'ereb 'evening, west', which is at the origin of Arabic maghreb and Hebrew ma'arav. Martin Litchfield West stated that "phonologically, the match between Europa's name and any form of the Semitic word is very poor", while Beekes considers a connection to Semitic languages improbable. Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. Chinese, for example, uses the word Ōuzhōu (歐洲/欧洲), which is an abbreviation of the transliterated name Ōuluóbā zhōu (歐羅巴洲) (zhōu means "continent"); a similar Chinese-derived term Ōshū (欧州) is also sometimes used in Japanese such as in the Japanese name of the European Union, Ōshū Rengō (欧州連合), despite the katakana Yōroppa (ヨーロッパ) being more commonly used. In some Turkic languages, the originally Persian name Frangistan ('land of the Franks') is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa. [Map: clickable map of Europe, showing one of the most commonly used continental boundaries. Key: blue: states which straddle the border between Europe and Asia; green: countries not geographically in Europe, but closely associated with the continent.] The prevalent definition of Europe as a geographical term has been in use since the mid-19th century. Europe is taken to be bounded by large bodies of water to the north, west and south; Europe's limits to the east and north-east are usually taken to be the Ural Mountains, the Ural River, and the Caspian Sea; to the south-east, the Caucasus Mountains, the Black Sea, and the waterways connecting the Black Sea to the Mediterranean Sea. Islands are generally grouped with the nearest continental landmass, hence Iceland is considered to be part of Europe, while the nearby island of Greenland is usually assigned to North America, although politically belonging to Denmark. Nevertheless, there are some exceptions based on sociopolitical and cultural differences. Cyprus is closest to Anatolia (or Asia Minor), but is considered part of Europe politically and it is a member state of the EU.
Malta was considered an island of North-western Africa for centuries, but now it is considered to be part of Europe as well. "Europe", as used specifically in British English, may also refer to Continental Europe exclusively. The term "continent" usually implies the physical geography of a large land mass completely or almost completely surrounded by water at its borders. Prior to the adoption of the current convention that includes mountain divides, the border between Europe and Asia had been redefined several times since its first conception in classical antiquity, but always as a series of rivers, seas and straits that were believed to extend an unknown distance east and north from the Mediterranean Sea without the inclusion of any mountain ranges. Cartographer Herman Moll suggested in 1715 that Europe was bounded by a series of partly-joined waterways directed towards the Turkish straits, and the Irtysh River draining into the upper part of the Ob River and the Arctic Ocean. In contrast, the present eastern boundary of Europe partially adheres to the Ural and Caucasus Mountains, which is somewhat arbitrary and inconsistent compared to any clear-cut definition of the term "continent". The current division of Eurasia into two continents reflects East-West cultural, linguistic and ethnic differences which vary on a spectrum rather than with a sharp dividing line. The geographic border between Europe and Asia does not follow any state boundaries and now only follows a few bodies of water. Turkey is generally considered a transcontinental country divided entirely by water, while Russia and Kazakhstan are only partly divided by waterways. France, the Netherlands, Portugal and Spain are also transcontinental (or more properly, intercontinental, when oceans or large seas are involved) in that their main land areas are in Europe while pockets of their territories are located on other continents separated from Europe by large bodies of water. Spain, for example, has territories south of the Mediterranean Sea—namely, Ceuta and Melilla—which are parts of Africa and share a border with Morocco. According to the current convention, Georgia and Azerbaijan are transcontinental countries where waterways have been completely replaced by mountains as the divide between continents. The first recorded usage of Eurṓpē as a geographic term is in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. As a name for a part of the known world, it is first used in the 6th century BCE by Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni River on the territory of Georgia) in the Caucasus, a convention still followed by Herodotus in the 5th century BCE. Herodotus mentioned that the world had been divided by unknown persons into three parts—Europe, Asia, and Libya (Africa)—with the Nile and the Phasis forming their boundaries—though he also states that some considered the River Don, rather than the Phasis, as the boundary between Europe and Asia. Europe's eastern frontier was defined in the 1st century by geographer Strabo at the River Don. The Book of Jubilees described the continents as the lands given by Noah to his three sons; Europe was defined as stretching from the Pillars of Hercules at the Strait of Gibraltar, separating it from Northwest Africa, to the Don, separating it from Asia.
The convention received by the Middle Ages and surviving into modern usage is that of the Roman era, used by authors such as Posidonius, Strabo and Ptolemy, who took the Tanais (the modern Don River) as the boundary. The Roman Empire did not attach a strong identity to the concept of continental divisions. However, following the fall of the Western Roman Empire, the culture that developed in its place, linked to Latin and the Catholic church, began to associate itself with the concept of "Europe". The term "Europe" is first used for a cultural sphere in the Carolingian Renaissance of the 9th century. From that time, the term designated the sphere of influence of the Western Church, as opposed to both the Eastern Orthodox churches and to the Islamic world. A cultural definition of Europe as the lands of Latin Christendom coalesced in the 8th century, signifying the new cultural condominium created through the confluence of Germanic traditions and Christian-Latin culture, defined partly in contrast with Byzantium and Islam, and limited to northern Iberia, the British Isles, France, Christianised western Germany, the Alpine regions and northern and central Italy. The concept is one of the lasting legacies of the Carolingian Renaissance: Europa often figures in the letters of Charlemagne's court scholar, Alcuin. The transition of Europe to being a cultural term as well as a geographic one led to the borders of Europe being affected by cultural considerations in the East, especially relating to areas under Byzantine, Ottoman, and Russian influence. Such questions were affected by the positive connotations associated with the term Europe by its users. Such cultural considerations were not applied to the Americas, despite their conquest and settlement by European states. Instead, the concept of "Western civilization" emerged as a way of grouping together Europe and these colonies. The question of defining a precise eastern boundary of Europe arises in the Early Modern period, as the eastern extension of Muscovy began to include North Asia. Throughout the Middle Ages and into the 18th century, the traditional division of the landmass of Eurasia into two continents, Europe and Asia, followed Ptolemy, with the boundary following the Turkish Straits, the Black Sea, the Kerch Strait, the Sea of Azov and the Don (ancient Tanais). But maps produced during the 16th to 18th centuries tended to differ in how to continue the boundary beyond the Don bend at Kalach-na-Donu (where it is closest to the Volga, now joined with it by the Volga–Don Canal), into territory not described in any detail by the ancient geographers. Around 1715, Herman Moll produced a map showing the northern part of the Ob River and the Irtysh River, a major tributary of the Ob, as components of a series of partly-joined waterways taking the boundary between Europe and Asia from the Turkish Straits, and the Don River all the way to the Arctic Ocean. In 1721, he produced a more up-to-date map that was easier to read. However, his proposal to adhere to major rivers as the line of demarcation was never taken up by other geographers who were beginning to move away from the idea of water boundaries as the only legitimate divides between Europe and Asia. Four years later, in 1725, Philip Johan von Strahlenberg was the first to depart from the classical Don boundary.
He drew a new line along the Volga, following the Volga north until the Samara Bend, along Obshchy Syrt (the drainage divide between the Volga and Ural Rivers), then north and east along the latter waterway to its source in the Ural Mountains. At this point he proposed that mountain ranges could be included as boundaries between continents as alternatives to nearby waterways. Accordingly, he drew the new boundary north along the Ural Mountains rather than the nearby and parallel-running Ob and Irtysh rivers. This was endorsed by the Russian Empire and introduced the convention that would eventually become commonly accepted. However, this did not come without criticism. Voltaire, writing in 1760 about Peter the Great's efforts to make Russia more European, ignored the whole boundary question with his claim that neither Russia, Scandinavia, northern Germany, nor Poland were fully part of Europe. Since then, many modern analytical geographers like Halford Mackinder have declared that they see little validity in the Ural Mountains as a boundary between continents. The mapmakers continued to differ on the boundary between the lower Don and Samara well into the 19th century. The 1745 atlas published by the Russian Academy of Sciences has the boundary follow the Don beyond Kalach as far as Serafimovich before cutting north towards Arkhangelsk, while other 18th- to 19th-century mapmakers such as John Cary followed Strahlenberg's prescription. To the south, the Kuma–Manych Depression was identified c. 1773 by a German naturalist, Peter Simon Pallas, as a valley that once connected the Black Sea and the Caspian Sea, and was subsequently proposed as a natural boundary between continents. By the mid-19th century, there were three main conventions, one following the Don, the Volga–Don Canal and the Volga, another following the Kuma–Manych Depression to the Caspian and then the Ural River, and the third abandoning the Don altogether, following the Greater Caucasus watershed to the Caspian. The question was still treated as a "controversy" in geographical literature of the 1860s, with Douglas Freshfield advocating the Caucasus crest boundary as the "best possible", citing support from various "modern geographers". In Russia and the Soviet Union, the boundary along the Kuma–Manych Depression was the most commonly used as early as 1906. In 1958, the Soviet Geographical Society formally recommended that the boundary between Europe and Asia be drawn in textbooks from Baydaratskaya Bay, on the Kara Sea, along the eastern foot of the Ural Mountains, then following the Ural River until the Mugodzhar Hills, then the Emba River and the Kuma–Manych Depression, thus placing the Caucasus entirely in Asia and the Urals entirely in Europe. However, most geographers in the Soviet Union favoured the boundary along the Caucasus crest, and this became the common convention in the later 20th century, although the Kuma–Manych boundary remained in use in some 20th-century maps. Some view the separation of Eurasia into Asia and Europe as a residue of Eurocentrism: "In physical, cultural and historical diversity, China and India are comparable to the entire European landmass, not to a single European country. [...]." During the 2.5 million years of the Pleistocene, numerous cold phases called glacials (the Quaternary ice age), or significant advances of continental ice sheets, occurred in Europe and North America at intervals of approximately 40,000 to 100,000 years.
The long glacial periods were separated by more temperate and shorter interglacials which lasted about 10,000–15,000 years. The last cold episode of the last glacial period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene. Homo erectus georgicus, which lived roughly 1.8 million years ago in Georgia, is the earliest hominin to have been discovered in Europe. Other hominin remains, dating back roughly 1 million years, have been discovered in Atapuerca, Spain. Neanderthal man (named after the Neandertal valley in Germany) appeared in Europe 150,000 years ago (it was already present in the territory of present-day Poland 115,000 years ago) and disappeared from the fossil record about 40,000 years ago, with its final refuge being the Iberian Peninsula. The Neanderthals were supplanted by modern humans (Cro-Magnons), who appeared in Europe around 43,000 to 40,000 years ago. Homo sapiens arrived in Europe around 54,000 years ago, some 10,000 years earlier than previously thought. The earliest sites in Europe dated to 48,000 years ago are Riparo Mochi (Italy), Geissenklösterle (Germany) and Isturitz (France). The European Neolithic period—marked by the cultivation of crops and the raising of livestock, increased numbers of settlements and the widespread use of pottery—began around 7000 BCE in Greece and the Balkans, probably influenced by earlier farming practices in Anatolia and the Near East. It spread from the Balkans along the valleys of the Danube and the Rhine (Linear Pottery culture), and along the Mediterranean coast (Cardial culture). Between 4500 and 3000 BCE, these central European neolithic cultures developed further to the west and the north, transmitting newly acquired skills in producing copper artifacts. In Western Europe the Neolithic period was characterised not by large agricultural settlements but by field monuments, such as causewayed enclosures, burial mounds and megalithic tombs. The Corded Ware cultural horizon flourished at the transition from the Neolithic to the Chalcolithic. During this period, giant megalithic monuments, such as the Megalithic Temples of Malta and Stonehenge, were constructed throughout Western and Southern Europe. The modern native populations of Europe largely descend from three distinct lineages: Mesolithic hunter-gatherers, descended from populations associated with the Paleolithic Epigravettian culture; Neolithic Early European Farmers who migrated from Anatolia during the Neolithic Revolution 9,000 years ago; and Yamnaya Steppe herders who expanded into Europe from the Pontic–Caspian steppe of Ukraine and southern Russia in the context of Indo-European migrations 5,000 years ago. The European Bronze Age began c. 3200 BCE in Greece with the Minoan civilisation on Crete, the first advanced civilisation in Europe. The Minoans were followed by the Mycenaeans, who collapsed suddenly around 1200 BCE, ushering in the European Iron Age. Iron Age colonisation by the Greeks and Phoenicians gave rise to early Mediterranean cities. Early Iron Age Italy and Greece from around the 8th century BCE gradually gave rise to historical Classical antiquity, whose beginning is sometimes dated to 776 BCE, the year of the first Olympic Games. Ancient Greece was the founding culture of Western civilisation. Western democratic and rationalist culture is often attributed to Ancient Greece. The Greek city-state, the polis, was the fundamental political unit of classical Greece.
In 508 BCE, Cleisthenes instituted the world's first democratic system of government in Athens. The Greek political ideals were rediscovered in the late 18th century by European philosophers and idealists. Greece also generated many cultural contributions: in philosophy, humanism and rationalism under Aristotle, Socrates and Plato; in history with Herodotus and Thucydides; in dramatic and narrative verse, starting with the epic poems of Homer; in drama with Sophocles and Euripides; in medicine with Hippocrates and Galen; and in science with Pythagoras, Euclid and Archimedes. In the course of the 5th century BCE, several of the Greek city-states would ultimately check the Achaemenid Persian advance in Europe through the Greco-Persian Wars, considered a pivotal moment in world history, as the 50 years of peace that followed are known as the Golden Age of Athens, the seminal period of ancient Greece that laid many of the foundations of Western civilisation. Greece was followed by Rome, which left its mark on law, politics, language, engineering, architecture, government and many more key aspects of western civilisation. By 200 BCE, Rome had conquered Italy and over the following two centuries it conquered Greece and Hispania (Spain and Portugal), the North African coast, much of the Middle East, Gaul (France and Belgium) and Britannia (England and Wales). From their base in central Italy, beginning in the third century BCE, the Romans gradually expanded to eventually rule the entire Mediterranean Basin and Western Europe by the turn of the millennium. The Roman Republic ended in 27 BCE, when Augustus proclaimed the Roman Empire. The two centuries that followed are known as the Pax Romana, a period of unprecedented peace, prosperity and political stability in most of Europe. The empire continued to expand under emperors such as Antoninus Pius and Marcus Aurelius, who spent time on the Empire's northern border fighting Germanic, Pictish and Scottish tribes. Christianity was legalised by Constantine I in 313 CE after three centuries of imperial persecution. Constantine also permanently moved the capital of the empire from Rome to the city of Byzantium (modern-day Istanbul) which was renamed Constantinople in his honour in 330 CE. Christianity became the sole official religion of the empire in 380 CE and in 391–392 CE, the emperor Theodosius outlawed pagan religions. This is sometimes considered to mark the end of antiquity; alternatively antiquity is considered to end with the fall of the Western Roman Empire in 476 CE; the closure of the pagan Platonic Academy of Athens in 529 CE; or the rise of Islam in the early 7th century CE. During the decline of the Roman Empire, Europe entered a long period of change arising from what historians call the "Age of Migrations". There were numerous invasions and migrations amongst the Ostrogoths, Visigoths, Goths, Vandals, Huns, Franks, Angles, Saxons, Slavs, Avars, Bulgars and, later on, the Vikings, Pechenegs, Cumans and Magyars. Renaissance thinkers such as Petrarch would later refer to this as the "Dark Ages".
Isolated monastic communities were the only places to safeguard and compile written knowledge accumulated previously; apart from this, very few written records survive and much literature, philosophy, mathematics and other thinking from the classical period disappeared from Western Europe, though they were preserved in the east, in the Byzantine Empire. While the Roman Empire in the west continued to decline, Roman traditions and the Roman state remained strong in the predominantly Greek-speaking Eastern Roman Empire, also known as the Byzantine Empire. During most of its existence, the Byzantine Empire was the most powerful economic, cultural and military force in Europe. Emperor Justinian I presided over Constantinople's first golden age: he established a legal code that forms the basis of many modern legal systems, funded the construction of the Hagia Sophia and brought the Christian church under state control. From the 7th century onwards, as the Byzantines and neighbouring Sasanid Persians were severely weakened by the protracted, centuries-long and frequent Byzantine–Sasanian wars, the Muslim Arabs began to make inroads into historically Roman territory, taking the Levant and North Africa and pushing into Asia Minor. In the mid-7th century, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region. Over the next centuries Muslim forces took Cyprus, Malta, Crete, Sicily and parts of southern Italy. Between 711 and 720, most of the lands of the Visigothic Kingdom of Iberia were brought under Muslim rule—save for small areas in the north-west (Asturias) and largely Basque regions in the Pyrenees. This territory, under the Arabic name Al-Andalus, became part of the expanding Umayyad Caliphate. The unsuccessful second siege of Constantinople (717) weakened the Umayyad dynasty and reduced its prestige. The Umayyads were then defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732, which ended their northward advance. In the remote regions of north-western Iberia and the middle Pyrenees the power of the Muslims in the south was scarcely felt. It was here that the foundations of the Christian kingdoms of Asturias, Leon and Galicia were laid and from where the reconquest of the Iberian Peninsula would start. However, no coordinated attempt would be made to drive the Moors out. The Christian kingdoms were mainly focused on their own internal power struggles. As a result, the Reconquista took the greater part of eight hundred years, in which period a long list of Alfonsos, Sanchos, Ordoños, Ramiros, Fernandos and Bermudos would be fighting their Christian rivals as much as the Muslim invaders. During the Dark Ages, the Western Roman Empire fell under the control of various tribes. The Germanic and Slav tribes established their domains over Western and Eastern Europe, respectively. Eventually the Frankish tribes were united under Clovis I. Charlemagne, a Frankish king of the Carolingian dynasty who had conquered most of Western Europe, was anointed "Holy Roman Emperor" by the Pope in 800. This led in 962 to the founding of the Holy Roman Empire, which eventually became centred in the German principalities of central Europe. East Central Europe saw the creation of the first Slavic states and the adoption of Christianity (c. 1000 CE).
The powerful West Slavic state of Great Moravia spread its territory all the way south to the Balkans, reaching its largest territorial extent under Svatopluk I and causing a series of armed conflicts with East Francia. Further south, the first South Slavic states emerged in the late 7th and 8th centuries and adopted Christianity: the First Bulgarian Empire, the Serbian Principality (later Kingdom and Empire) and the Duchy of Croatia (later Kingdom of Croatia). To the east, Kievan Rus' expanded from its capital in Kiev to become the largest state in Europe by the 10th century. In 988, Vladimir the Great adopted Orthodox Christianity as the religion of state. Further east, Volga Bulgaria became an Islamic state in the 10th century, but was eventually absorbed into Russia several centuries later. The period between 1000 and 1250 is known as the High Middle Ages, followed by the Late Middle Ages until c. 1500. During the High Middle Ages the population of Europe experienced significant growth, culminating in the Renaissance of the 12th century. Economic growth, together with the lack of safety on the mainland trading routes, made possible the development of major commercial routes along the coasts of the Mediterranean and Baltic Seas. The growing wealth and independence acquired by some coastal cities gave the Maritime Republics a leading role in the European scene. The Middle Ages on the mainland were dominated by the two upper echelons of the social structure: the nobility and the clergy. Feudalism developed in France in the Early Middle Ages, and soon spread throughout Europe. A struggle for influence between the nobility and the monarchy in England led to the writing of the Magna Carta and the establishment of a parliament. The primary source of culture in this period came from the Roman Catholic Church. Through monasteries and cathedral schools, the Church was responsible for education in much of Europe. The Papacy reached the height of its power during the High Middle Ages. The East–West Schism of 1054 split the former Roman Empire religiously, with the Eastern Orthodox Church in the Byzantine Empire and the Roman Catholic Church in the former Western Roman Empire. In 1095 Pope Urban II called for a crusade against Muslims occupying Jerusalem and the Holy Land. In Europe itself, the Church organised the Inquisition against heretics. In the Iberian Peninsula, the Reconquista concluded with the fall of Granada in 1492, ending over seven centuries of Islamic rule in the south-western peninsula. In the east, a resurgent Byzantine Empire recaptured Crete and Cyprus from the Muslims, and reconquered the Balkans. Constantinople was the largest and wealthiest city in Europe from the 9th to the 12th centuries, with a population of approximately 400,000. The Empire was weakened by the defeat at Manzikert in 1071 and considerably more by the sack of Constantinople in 1204, during the Fourth Crusade. Although it would recover Constantinople in 1261, Byzantium fell in 1453 when Constantinople was taken by the Ottoman Empire. In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Pechenegs and the Cuman-Kipchaks, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north, and temporarily halted the expansion of the Rus' state to the south and east. Like many other parts of Eurasia, these territories were overrun by the Mongols. 
The invaders, who became known as Tatars, were mostly Turkic-speaking peoples under Mongol suzerainty. They established the state of the Golden Horde with headquarters in Crimea, which later adopted Islam as a religion and ruled over modern-day southern and central Russia for more than three centuries. After the collapse of Mongol dominions, the first Romanian states (principalities) emerged in the 14th century: Moldavia and Wallachia. Previously, these territories were under the successive control of the Pechenegs and Cumans. From the 12th to the 15th centuries, the Grand Duchy of Moscow grew from a small principality under Mongol rule to the largest state in Europe, overthrowing the Mongols in 1480 and eventually becoming the Tsardom of Russia. The state was consolidated under Ivan III the Great and Ivan the Terrible, steadily expanding to the east and south over the next centuries. The Great Famine of 1315–1317 was the first crisis to strike Europe in the late Middle Ages. The period between 1348 and 1420 witnessed the heaviest loss. The population of France was reduced by half. Medieval Britain was afflicted by 95 famines, and France suffered the effects of 75 or more in the same period. Europe was devastated in the mid-14th century by the Black Death, one of the deadliest pandemics in human history, which killed an estimated 25 million people in Europe alone—a third of the European population at the time. The plague had a devastating effect on Europe's social structure; it induced people to live for the moment, as illustrated by Giovanni Boccaccio in The Decameron (1353). It was a serious blow to the Roman Catholic Church and led to increased persecution of Jews, beggars and lepers. The plague is thought to have returned every generation with varying virulence and mortality until the 18th century. During this period, more than 100 plague epidemics swept across Europe. The Renaissance was a period of cultural change originating in Florence and later spreading to the rest of Europe. The rise of a new humanism was accompanied by the recovery of forgotten classical Greek and Arabic knowledge from monastic libraries, often translated from Arabic into Latin. The Renaissance spread across Europe between the 14th and 16th centuries: it saw the flowering of art, philosophy, music, and the sciences, under the joint patronage of royalty, the nobility, the Roman Catholic Church and an emerging merchant class. Patrons in Italy, including the Medici family of Florentine bankers and the Popes in Rome, funded prolific quattrocento and cinquecento artists such as Raphael, Michelangelo and Leonardo da Vinci. Political intrigue within the Church in the late 14th century caused the Western Schism. During this forty-year period, two popes—one in Avignon and one in Rome—claimed rulership over the Church. Although the schism was eventually healed in 1417, the papacy's spiritual authority had suffered greatly. In the 15th century, Europe started to extend itself beyond its geographic frontiers. Spain and Portugal, the greatest naval powers of the time, took the lead in exploring the world. Exploration reached the Southern Hemisphere in the Atlantic and the southern tip of Africa. Christopher Columbus reached the New World in 1492, and Vasco da Gama opened the ocean route to the East linking the Atlantic and Indian Oceans in 1498. 
The Portuguese-born explorer Ferdinand Magellan reached Asia westward across the Atlantic and the Pacific Oceans in a Spanish expedition, resulting in the first circumnavigation of the globe, completed by the Spaniard Juan Sebastián Elcano (1519–1522). Soon after, the Spanish and Portuguese began establishing large global empires in the Americas, Asia, Africa and Oceania. France, the Netherlands and England soon followed in building large colonial empires with vast holdings in Africa, the Americas and Asia. In 1588, a Spanish armada failed to invade England; a year later, England tried unsuccessfully to invade Spain. This English disaster allowed Philip II of Spain to maintain his dominant war capacity in Europe and the Spanish fleet to retain its capability to wage war for decades to come, although two more Spanish armadas also failed to invade England (the 2nd and 3rd Spanish Armadas). The Church's power was further weakened by the Protestant Reformation in 1517, when the German theologian Martin Luther nailed his Ninety-five Theses, criticising the selling of indulgences, to the church door. He was subsequently excommunicated in the papal bull Exsurge Domine in 1520, and his followers were condemned in the 1521 Diet of Worms, which divided German princes between Protestant and Roman Catholic faiths. Religious fighting and warfare spread with Protestantism. The plunder of the empires of the Americas allowed Spain to finance religious persecution in Europe for over a century. The Thirty Years' War (1618–1648) crippled the Holy Roman Empire and devastated much of Germany, killing between 25 and 40 percent of its population. In the aftermath of the Peace of Westphalia, France rose to predominance within Europe. The defeat of the Ottoman Turks at the Battle of Vienna in 1683 marked the historic end of Ottoman expansion into Europe. The 17th century in Central and parts of Eastern Europe was a period of general decline; the region experienced more than 150 famines in the 200-year period between 1501 and 1700. From the Union of Krewo (1385), east-central Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania. The hegemony of the vast Polish–Lithuanian Commonwealth ended with the devastation brought by the Second Northern War (Deluge) and subsequent conflicts; the state itself was partitioned and ceased to exist at the end of the 18th century. From the 15th to 18th centuries, when the disintegrating khanates of the Golden Horde were conquered by Russia, Tatars from the Crimean Khanate frequently raided Eastern Slavic lands to capture slaves. Further east, the Nogai Horde and Kazakh Khanate frequently raided the Slavic-speaking areas of contemporary Russia and Ukraine for hundreds of years, until the Russian expansion and conquest of most of northern Eurasia (i.e. Eastern Europe, Central Asia and Siberia). The Renaissance and the New Monarchs marked the start of an Age of Discovery, a period of exploration, invention and scientific development. Among the great figures of the Western scientific revolution of the 16th and 17th centuries were Copernicus, Kepler, Galileo and Isaac Newton. According to Peter Barrett, "It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world." The Seven Years' War brought to an end the "Old System" of alliances in Europe. 
Consequently, when the American Revolutionary War turned into a global war between 1778 and 1783, Britain found itself opposed by a strong coalition of European powers and lacking any substantial ally. The Age of Enlightenment was a powerful intellectual movement during the 18th century promoting scientific and reason-based thought. Discontent with the aristocracy's and clergy's monopoly on political power in France resulted in the French Revolution and the establishment of the First Republic, during whose initial Reign of Terror the monarchy and many of the nobility perished. Napoleon Bonaparte rose to power in the aftermath of the French Revolution and established the First French Empire that, during the Napoleonic Wars, grew to encompass large parts of Europe before collapsing in 1815 with the Battle of Waterloo. Napoleonic rule resulted in the further dissemination of the ideals of the French Revolution, including that of the nation state, as well as the widespread adoption of the French models of administration, law and education. The Congress of Vienna, convened after Napoleon's downfall, established a new balance of power in Europe centred on the five "Great Powers": the UK, France, Prussia, Austria and Russia. This balance would remain in place until the Revolutions of 1848, during which liberal uprisings affected all of Europe except for Russia and the UK. These revolutions were eventually put down by conservative elements and few reforms resulted. The year 1859 saw the unification of Romania, as a nation state, from smaller principalities. In 1867, the Austro-Hungarian Empire was formed; 1871 saw the unifications of both Italy and Germany as nation-states from smaller principalities. In parallel, the Eastern Question had grown more complex since the Ottoman defeat in the Russo-Turkish War (1768–1774). As the dissolution of the Ottoman Empire seemed imminent, the Great Powers struggled to safeguard their strategic and commercial interests in the Ottoman domains. The Russian Empire stood to benefit from the decline, whereas the Habsburg Empire and Britain perceived the preservation of the Ottoman Empire to be in their best interests. Meanwhile, the Serbian Revolution (1804) and the Greek War of Independence (1821) marked the beginning of the end of Ottoman rule in the Balkans, which ended with the Balkan Wars of 1912–1913. Formal recognition of the de facto independent principalities of Montenegro, Serbia and Romania ensued at the Congress of Berlin in 1878. The Industrial Revolution started in Great Britain in the last part of the 18th century and spread throughout Europe. The invention and implementation of new technologies resulted in rapid urban growth, mass employment and the rise of a new working class. Reforms in social and economic spheres followed, including the first laws on child labour, the legalisation of trade unions, and the abolition of slavery. In Britain, the Public Health Act of 1875 was passed, significantly improving living conditions in many British cities. Europe's population increased from about 100 million in 1700 to 400 million by 1900. The last major famine recorded in Western Europe, the Great Famine of Ireland, caused the death and mass emigration of millions of Irish people. In the 19th century, 70 million people left Europe in migrations to various European colonies abroad and to the United States. 
The industrial revolution also led to large population growth, and the share of the world population living in Europe reached a peak of slightly above 25% around the year 1913. Two world wars and an economic depression dominated the first half of the 20th century. The First World War was fought between 1914 and 1918. It started when Archduke Franz Ferdinand of Austria was assassinated by the Yugoslav nationalist Gavrilo Princip. Most European nations were drawn into the war, which was fought between the Entente Powers (France, Belgium, Serbia, Portugal, Russia, the United Kingdom, and later Italy, Greece, Romania, and the United States) and the Central Powers (Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire). The war left more than 16 million people, civilian and military, dead. Over 60 million European soldiers were mobilised from 1914 to 1918. Russia was plunged into the Russian Revolution, which overthrew the Tsarist monarchy and replaced it with the communist Soviet Union, leading also to the independence of many former Russian governorates, such as Finland, Estonia, Latvia and Lithuania, as new European countries. Austria-Hungary and the Ottoman Empire collapsed and broke up into separate nations, and many other nations had their borders redrawn. The Treaty of Versailles, which officially ended the First World War in 1919, was harsh towards Germany, upon which it placed full responsibility for the war and imposed heavy sanctions. Excess deaths in Russia over the course of the First World War and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million. In 1932–1933, under Stalin's leadership, confiscations of grain by the Soviet authorities contributed to the second Soviet famine, which caused millions of deaths; surviving kulaks were persecuted, and many were sent to Gulags to do forced labour. Stalin was also responsible for the Great Purge of 1937–38, in which the NKVD executed 681,692 people; millions of people were deported and exiled to remote areas of the Soviet Union. The social revolutions sweeping through Russia also affected other European nations following the Great War: in 1919, with the Weimar Republic in Germany and the First Austrian Republic; and in 1922, with Mussolini's one-party fascist government in the Kingdom of Italy and with Atatürk's Turkish Republic, which adopted the Western alphabet and state secularism. Economic instability, caused in part by debts incurred in the First World War and 'loans' to Germany, wreaked havoc in Europe in the late 1920s and 1930s. This, and the Wall Street Crash of 1929, brought about the worldwide Great Depression. Helped by the economic crisis, social instability and the threat of communism, fascist movements developed throughout Europe, bringing Adolf Hitler to power in what became Nazi Germany. In 1933, Hitler became the leader of Germany and began to work towards his goal of building a Greater Germany. Germany re-expanded, taking back the Saarland and the Rhineland in 1935 and 1936. In 1938, Austria became a part of Germany following the Anschluss. Later that year, following the Munich Agreement signed by Germany, France, the United Kingdom, and Italy, Germany annexed the Sudetenland, a part of Czechoslovakia inhabited by ethnic Germans, and in early 1939 the remainder of Czechoslovakia was split into the Protectorate of Bohemia and Moravia, controlled by Germany, and the Slovak Republic. At the time, the United Kingdom and France preferred a policy of appeasement. 
With tensions mounting between Germany and Poland over the future of Danzig, the Germans turned to the Soviets and signed the Molotov–Ribbentrop Pact, which allowed the Soviets to invade the Baltic states and parts of Poland and Romania. Germany invaded Poland on 1 September 1939, prompting France and the United Kingdom to declare war on Germany on 3 September, opening the European Theatre of the Second World War. The Soviet invasion of Poland started on 17 September, and Poland fell soon thereafter. On 24 September, the Soviet Union attacked the Baltic countries and, on 30 November, Finland, beginning the Winter War, which proved devastating for the Red Army. The British hoped to land at Narvik and send troops to aid Finland, but their primary objective in the landing was to encircle Germany and cut the Germans off from Scandinavian resources. Around the same time, Germany moved troops into Denmark, and the Phoney War continued. In May 1940, Germany attacked France through the Low Countries. France capitulated in June 1940. By August, Germany had begun a bombing offensive against the United Kingdom but failed to convince the British to give up. In 1941, Germany invaded the Soviet Union in Operation Barbarossa. On 7 December 1941, Japan's attack on Pearl Harbor drew the United States into the conflict as an ally of the British Empire and the other Allied forces. After the staggering Battle of Stalingrad in 1943, the German offensive in the Soviet Union turned into a continual retreat. The Battle of Kursk, which involved the largest tank battle in history, was the last major German offensive on the Eastern Front. In June 1944, British and American forces invaded France in the D-Day landings, opening a new front against Germany. Berlin finally fell in 1945, ending the Second World War in Europe. The war was the largest and most destructive in human history, with 60 million dead across the world. More than 40 million people in Europe had died as a result of the Second World War, including between 11 and 17 million people who perished during the Holocaust. The Soviet Union lost around 27 million people (mostly civilians) during the war, about half of all Second World War casualties. By the end of the Second World War, Europe had more than 40 million refugees. Several post-war expulsions in Central and Eastern Europe displaced a total of about 20 million people. The First World War, and especially the Second World War, diminished the eminence of Western Europe in world affairs. After the Second World War the map of Europe was redrawn at the Yalta Conference and divided into two blocs, the Western countries and the communist Eastern bloc, separated by what was later called by Winston Churchill an "Iron Curtain". The United States and Western Europe established the NATO alliance and, later, the Soviet Union and Central Europe established the Warsaw Pact. Particular hot spots after the Second World War were Berlin and Trieste; the Free Territory of Trieste, founded in 1947 under the UN, was de facto divided in 1954 and formally dissolved in 1975. The Berlin blockade of 1948–1949 and the construction of the Berlin Wall in 1961 were two of the great international crises of the Cold War. The two new superpowers, the United States and the Soviet Union, became locked in a fifty-year-long Cold War, centred on nuclear proliferation. 
At the same time decolonisation, which had already started after the First World War, gradually resulted in the independence of most of the European colonies in Asia and Africa. In the 1980s, the reforms of Mikhail Gorbachev and the Solidarity movement in Poland weakened the previously rigid communist system. The opening of the Iron Curtain at the Pan-European Picnic then set in motion a peaceful chain reaction, at the end of which the Eastern bloc, the Warsaw Pact and other communist states collapsed, and the Cold War ended. Germany was reunited after the symbolic fall of the Berlin Wall in 1989, and the maps of Central and Eastern Europe were redrawn once more. This made it possible to restore old, previously interrupted cultural and economic relationships, and previously isolated cities such as Berlin, Prague, Vienna, Budapest and Trieste were once again at the centre of Europe. European integration also grew after the Second World War. In 1949 the Council of Europe was founded, following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus, Russia, and Vatican City. The Treaty of Rome in 1957 established the European Economic Community between six Western European states, with the goal of a unified economic policy and common market. In 1967 the EEC, the European Coal and Steel Community and Euratom formed the European Community, which in 1993 became the European Union. The EU established a parliament, court and central bank, and introduced the euro as a unified currency. Between 2004 and 2013, more Central European countries began joining, expanding the EU to 28 European countries and once more making Europe a major economic and political centre of power. However, the United Kingdom withdrew from the EU on 31 January 2020, as a result of a June 2016 referendum on EU membership. The Russo-Ukrainian conflict, which has been ongoing since 2014, steeply escalated when Russia launched a full-scale invasion of Ukraine on 24 February 2022, triggering the largest humanitarian and refugee crisis in Europe since the Second World War and the Yugoslav Wars. Europe makes up the western fifth of the Eurasian landmass. It has a higher ratio of coast to landmass than any other continent or subcontinent. Its maritime borders consist of the Arctic Ocean to the north, the Atlantic Ocean to the west and the Mediterranean, Black and Caspian Seas to the south. Land relief in Europe shows great variation within relatively small areas. The southern regions are more mountainous, while moving north the terrain descends from the high Alps, Pyrenees and Carpathians, through hilly uplands, into broad, low northern plains, which are vast in the east. This extended lowland is known as the Great European Plain, and at its heart lies the North German Plain. An arc of uplands also exists along the north-western seaboard, beginning in the western parts of the islands of Britain and Ireland and continuing along the mountainous, fjord-cut spine of Norway. This description is simplified. Subregions such as the Iberian Peninsula and the Italian Peninsula contain their own complex features, as does mainland Central Europe itself, where the relief contains many plateaus, river valleys and basins that complicate the general trend. Subregions like Iceland, Britain and Ireland are special cases. 
The former is a land unto itself in the northern ocean that is counted as part of Europe, while the latter are upland areas that were once joined to the mainland until rising sea levels cut them off. Europe lies mainly in the temperate climate zone of the northern hemisphere, where the prevailing wind direction is from the west. The climate is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream, an ocean current which carries warm water from the Gulf of Mexico across the Atlantic Ocean to Europe. The Gulf Stream is nicknamed "Europe's central heating", because it makes Europe's climate warmer and wetter than it would otherwise be. The Gulf Stream not only carries warm water to Europe's coast but also warms up the prevailing westerly winds that blow across the continent from the Atlantic Ocean. Therefore, the average year-round temperature in Aveiro is 16 °C (61 °F), while it is only 13 °C (55 °F) in New York City, which lies at almost the same latitude and borders the same ocean. Berlin, Germany; Calgary, Canada; and Irkutsk, in far south-eastern Russia, lie at around the same latitude; January temperatures in Berlin average around 8 °C (14 °F) higher than those in Calgary, and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk. The large water mass of the Mediterranean Sea, which moderates temperatures on both an annual and a daily basis, is also of particular importance. The Mediterranean extends from the Sahara desert to the Alpine arc, reaching its northernmost point in the Adriatic Sea near Trieste. In general, Europe is not just colder towards the north compared to the south, but it also gets colder from the west towards the east. The climate is more oceanic in the west and less so in the east. This can be illustrated by the following table of average temperatures at locations roughly following the 64th, 60th, 55th, 50th, 45th and 40th latitudes. None of them is located at high altitude; most of them are close to the sea. It is notable how the average temperatures for the coldest month, as well as the annual average temperatures, drop from the west to the east. For instance, Edinburgh is warmer than Belgrade during the coldest month of the year, although Belgrade is around 10° of latitude farther south. The geological history of Europe traces back to the formation of the Baltic Shield (Fennoscandia) and the Sarmatian craton, both around 2.25 billion years ago, followed by the Volgo–Uralia shield; the three together led to the East European craton (≈ Baltica), which became a part of the supercontinent Columbia. Around 1.1 billion years ago, Baltica and Arctica (as part of the Laurentia block) became joined to Rodinia, later resplitting around 550 million years ago to reform as Baltica. Around 440 million years ago Euramerica was formed from Baltica and Laurentia; a further joining with Gondwana then led to the formation of Pangea. Around 190 million years ago, Gondwana and Laurasia split apart due to the widening of the Atlantic Ocean. Soon afterwards, Laurasia itself split up again, into Laurentia (North America) and the Eurasian continent. The land connection between the two persisted for a considerable time, via Greenland, leading to interchange of animal species. From around 50 million years ago, rising and falling sea levels have determined the actual shape of Europe and its connections with continents such as Asia. 
Europe's present shape dates to the late Tertiary period, about five million years ago. The geology of Europe is hugely varied and complex and gives rise to the wide variety of landscapes found across the continent, from the Scottish Highlands to the rolling plains of Hungary. Europe's most significant feature is the dichotomy between highland and mountainous Southern Europe and a vast, partially underwater, northern plain ranging from Ireland in the west to the Ural Mountains in the east. These two halves are separated by the mountain chains of the Pyrenees and the Alps/Carpathians. The northern plains are delimited in the west by the Scandinavian Mountains and the mountainous parts of the British Isles. Major shallow water bodies submerging parts of the northern plains are the Celtic Sea, the North Sea, the Baltic Sea complex and the Barents Sea. The northern plain contains the old geological continent of Baltica and so may be regarded geologically as the "main continent", while peripheral highlands and mountainous regions in the south and west constitute fragments from various other geological continents. Most of the older geology of western Europe existed as part of the ancient microcontinent Avalonia. Having lived side by side with agricultural peoples for millennia, Europe's animals and plants have been profoundly affected by the presence and activities of humans. With the exception of Fennoscandia and northern Russia, few areas of untouched wilderness are currently found in Europe outside various national parks. The main natural vegetation cover in Europe is mixed forest. The conditions for growth are very favourable. In the north, the Gulf Stream and North Atlantic Drift warm the continent. Southern Europe has a warm but mild climate. There are frequent summer droughts in this region. Mountain ridges also affect the conditions. Some of these, such as the Alps and the Pyrenees, are oriented east–west and allow the wind to carry large masses of water from the ocean into the interior. Others are oriented south–north (the Scandinavian Mountains, Dinarides, Carpathians, Apennines), and because the rain falls primarily on the side of the mountains that is oriented towards the sea, forests grow well on this side, while on the other side the conditions are much less favourable. Few corners of mainland Europe have not been grazed by livestock at some point in time, and the cutting down of the pre-agricultural forest habitat caused disruption to the original plant and animal ecosystems. Possibly 80 to 90 percent of Europe was once covered by forest. It stretched from the Mediterranean Sea to the Arctic Ocean. Although over half of Europe's original forests disappeared through the centuries of deforestation, Europe still has over one quarter of its land area as forest, such as the broadleaf and mixed forests, the taiga of Scandinavia and Russia, the mixed rainforests of the Caucasus and the cork oak forests in the western Mediterranean. In recent times, deforestation has been slowed and many trees have been planted. However, in many cases monoculture plantations of conifers have replaced the original mixed natural forest, because these grow more quickly. The plantations now cover vast areas of land but offer poorer habitats for many European forest-dwelling species which require a mixture of tree species and diverse forest structure. The amount of natural forest in Western Europe is just 2–3% or less, while in western Russia it is 5–10%. 
The European country with the smallest percentage of forested area is Iceland (1%), while the most forested country is Finland (77%). In temperate Europe, mixed forest with both broadleaf and coniferous trees dominates. The most important species in central and western Europe are beech and oak. In the north, the taiga is a mixed spruce–pine–birch forest; further north, within Russia and extreme northern Scandinavia, the taiga gives way to tundra as the Arctic is approached. In the Mediterranean, many olive trees have been planted, as they are very well adapted to its arid climate; the Mediterranean cypress is also widely planted in southern Europe. The semi-arid Mediterranean region hosts much scrub forest. A narrow east–west tongue of Eurasian grassland (the steppe) extends westwards from Ukraine and southern Russia, ending in Hungary, and grades into taiga to the north. Glaciation during the most recent ice age and the presence of humans affected the distribution of European fauna. In many parts of Europe, most large animals and top predator species have been hunted to extinction. The woolly mammoth was extinct before the end of the Neolithic period. Today wolves (carnivores) and bears (omnivores) are endangered. Once they were found in most parts of Europe; however, deforestation and hunting caused these animals to withdraw further and further. By the Middle Ages the bears' habitats were limited to more or less inaccessible mountains with sufficient forest cover. Today, the brown bear lives primarily in the Balkan peninsula, Scandinavia and Russia; a small number also persist in other countries across Europe (Austria, the Pyrenees etc.), but in these areas brown bear populations are fragmented and marginalised because of the destruction of their habitat. In addition, polar bears may be found on Svalbard, a Norwegian archipelago far north of Scandinavia. The wolf, the second largest predator in Europe after the brown bear, can be found primarily in Central and Eastern Europe and in the Balkans, with a handful of packs in pockets of Western Europe (Scandinavia, Spain, etc.). Other carnivores include the European wildcat, the red fox and arctic fox, the golden jackal, various species of martens, the European hedgehog, various species of reptiles (such as vipers and grass snakes) and amphibians, as well as various birds (owls, hawks and other birds of prey). Important European herbivores include snails, larvae, fish, various birds, and mammals such as rodents, deer, roe deer and boars, as well as mountain-dwelling marmots, steinbocks and chamois, among others. A number of insects, such as the small tortoiseshell butterfly, add to the biodiversity. Sea creatures are also an important part of European flora and fauna. The sea flora is mainly phytoplankton. Important animals that live in European seas include zooplankton, molluscs, echinoderms, various crustaceans, squid and octopuses, fish, dolphins and whales. Biodiversity is protected in Europe through the Council of Europe's Bern Convention, which has also been signed by the European Community as well as non-European states. The political map of Europe is substantially derived from the re-organisation of Europe following the Napoleonic Wars in 1815. The prevalent form of government in Europe is parliamentary democracy, in most cases in the form of a republic; in 1815, the prevalent form of government was still the monarchy. Europe's remaining eleven monarchies are constitutional. 
European integration is the process of political, legal, economic (and in some cases social and cultural) integration of European states as it has been pursued by the powers sponsoring the Council of Europe since the end of the Second World War. The European Union has been the focus of economic integration on the continent since its foundation in 1993. More recently, the Eurasian Economic Union has been established as a counterpart comprising former Soviet states. Twenty-seven European states are members of the politico-economic European Union, 26 of the border-free Schengen Area and 20 of the monetary union Eurozone. Among the smaller European organisations are the Nordic Council, the Benelux, the Baltic Assembly and the Visegrád Group. This list includes all internationally recognised sovereign countries falling even partially under any common geographical or political definitions of Europe. Within the above-mentioned states are several de facto independent countries with limited to no international recognition; none of them are members of the UN. Several dependencies and similar territories with broad autonomy are also found within or close to Europe. These include Åland (an autonomous county of Finland), two autonomous territories of the Kingdom of Denmark (other than Denmark proper), three Crown Dependencies and two British Overseas Territories. Svalbard is also included due to its unique status within Norway, although it is not autonomous. Not included are the three countries of the United Kingdom with devolved powers and the two Autonomous Regions of Portugal, which, despite having a unique degree of autonomy, are not largely self-governing in matters other than international affairs. Areas with little more than a unique tax status, such as the Canary Islands and Heligoland, are also not included for this reason. The economy of Europe is currently the largest of any continent on Earth, and Europe is the richest region as measured by assets under management, with over $32.7 trillion compared to North America's $27.1 trillion in 2008. In 2009 Europe remained the wealthiest region. Its $37.1 trillion in assets under management represented one-third of the world's wealth. It was one of several regions where wealth surpassed its pre-crisis year-end peak. As with other continents, Europe has a large wealth gap among its countries. The richer states tend to be in the north-west and west in general, followed by Central Europe, while most economies of Eastern and Southeastern Europe are still re-emerging from the collapse of the Soviet Union and the breakup of Yugoslavia. The model of the Blue Banana was designed as an economic geographic representation of the respective economic power of the regions, and was further developed into the Golden Banana or Blue Star. The trade between East and West, as well as towards Asia, which had been disrupted for a long time by the two world wars, new borders and the Cold War, increased sharply after 1989. In addition, there is new impetus from the Chinese Belt and Road Initiative across the Suez Canal towards Africa and Asia. The European Union, a political entity composed of 27 European states, comprises the largest single economic area in the world. Twenty EU countries share the euro as a common currency. Five European countries rank in the top fifteen of the world's largest national economies by GDP (PPP). These include (ranks according to the CIA): Germany (6), Russia (7), the United Kingdom (10), France (11) and Italy (13). 
Some European countries are much richer than others. The richest in terms of nominal GDP per capita is Monaco, at US$185,829 (2018), and the poorest is Ukraine, at US$3,659 (2019). As a whole, Europe's GDP per capita is US$21,767, according to a 2016 International Monetary Fund assessment. Capitalism has been dominant in the Western world since the end of feudalism. From Britain, it gradually spread throughout Europe. The Industrial Revolution started in Europe, specifically the United Kingdom, in the late 18th century, and the 19th century saw Western Europe industrialise. Economies were disrupted by the First World War, but by the beginning of the Second World War they had recovered and were having to compete with the growing economic strength of the United States. The Second World War, again, damaged much of Europe's industries. After the Second World War the economy of the UK was in a state of ruin and continued to suffer relative economic decline in the following decades. Italy was also in a poor economic condition but regained a high level of growth by the 1950s. West Germany recovered quickly and had doubled production from pre-war levels by the 1950s. France also staged a remarkable comeback, enjoying rapid growth and modernisation; later on Spain, under the leadership of Franco, also recovered, and the nation recorded huge, unprecedented economic growth beginning in the 1960s in what is called the Spanish miracle. The majority of Central and Eastern European states came under the control of the Soviet Union and thus were members of the Council for Mutual Economic Assistance (COMECON). The states which retained a free-market system were given a large amount of aid by the United States under the Marshall Plan. The western states moved to link their economies together, providing the basis for the EU and increasing cross-border trade. This helped them to enjoy rapidly improving economies, while those states in COMECON were struggling, in large part due to the cost of the Cold War. By 1990, the European Community had expanded from 6 founding members to 12. The emphasis placed on resurrecting the West German economy led to it overtaking the UK as Europe's largest economy. With the fall of communism in Central and Eastern Europe in 1991, the post-socialist states underwent shock therapy measures to liberalise their economies and implement free market reforms. After East and West Germany were reunited in 1990, the economy of West Germany struggled as it had to support and largely rebuild the infrastructure of East Germany, while the latter experienced sudden mass unemployment and plummeting industrial production. By the turn of the millennium, the EU dominated the economy of Europe, comprising the five largest European economies of the time: Germany, the United Kingdom, France, Italy, and Spain. In 1999, 12 of the 15 members of the EU joined the Eurozone, replacing their national currencies with the euro. Figures released by Eurostat in 2009 confirmed that the Eurozone had gone into recession in 2008; the recession impacted much of the region. In 2010, fears of a sovereign debt crisis developed concerning some countries in Europe, especially Greece, Ireland, Spain and Portugal. As a result, measures were taken, especially for Greece, by the leading countries of the Eurozone. The EU-27 unemployment rate was 10.3% in 2012; for those aged 15–24 it was 22.4%. The population of Europe was about 742 million in 2023, according to UN estimates. 
This is slightly more than one ninth of the world's population. The population density of Europe (the number of people per unit area) is the second highest of any continent, behind Asia. The population of Europe is currently slowly decreasing, by about 0.2% per year, because there are fewer births than deaths. This natural decrease in population is partly offset by the fact that more people migrate to Europe from other continents than vice versa. Southern Europe and Western Europe are the regions with the highest average number of elderly people in the world. In 2021, the percentage of people over 65 years old was 21% in Western Europe and Southern Europe, compared to 19% in all of Europe and 10% in the world. Projections suggest that by 2050 the share in Europe will reach 30%. This is because the population has been having children below replacement level since the 1970s. The United Nations predicts that Europe's population will decline by about 7 per cent between 2022 and 2050, assuming unchanged immigration movements. According to a population projection of the UN Population Division, Europe's population may fall to between 680 and 720 million people by 2050, which would be 7% of the world population at that time. Within this context, significant disparities exist between regions in relation to fertility rates. The average number of children per female of child-bearing age is 1.52, far below the replacement rate. The UN predicts a steady population decline in Central and Eastern Europe as a result of emigration and low birth rates. Pan and Pfeil (2004) count 87 distinct "peoples of Europe", of which 33 form the majority population in at least one sovereign state, while the remaining 54 constitute ethnic minorities. Europe is home to the highest number of migrants of all global regions at nearly 87 million people in 2020, according to the International Organisation for Migration. In 2005, the EU had an overall net gain from immigration of 1.8 million people. This accounted for almost 85% of Europe's total population growth. In 2021, 827,000 persons were given citizenship of an EU member state, an increase of about 14% compared with 2020; 2.3 million immigrants from non-EU countries entered the EU in 2021. Early modern emigration from Europe began with Spanish and Portuguese settlers in the 16th century, and French and English settlers in the 17th century. But numbers remained relatively small until waves of mass emigration in the 19th century, when millions of poor families left Europe. Today, large populations of European descent are found on every continent. European ancestry predominates in North America and to a lesser degree in South America (particularly in Uruguay, Argentina, Chile and Brazil), while most of the other Latin American countries also have a considerable population of European origin. Australia and New Zealand have large European-derived populations. Africa has no countries with European-derived majorities (with the possible exceptions of Cape Verde and São Tomé and Príncipe, depending on context), but there are significant minorities, such as the White South Africans in South Africa. In Asia, European-derived populations, specifically Russians, predominate in North Asia and some parts of northern Kazakhstan. 
Europe has about 225 indigenous languages, mostly falling within three Indo-European language groups: the Romance languages, derived from the Latin of the Roman Empire; the Germanic languages, whose ancestor language came from southern Scandinavia; and the Slavic languages. Slavic languages are mostly spoken in Southern, Central and Eastern Europe. Romance languages are spoken primarily in Western and Southern Europe, as well as in Switzerland in Central Europe and Romania and Moldova in Eastern Europe. Germanic languages are spoken in Western, Northern and Central Europe as well as in Gibraltar and Malta in Southern Europe. Languages in adjacent areas show significant overlaps (English, for example). Other Indo-European languages outside the three main groups include the Baltic group (Latvian and Lithuanian), the Celtic group (Irish, Scottish Gaelic, Manx, Welsh, Cornish and Breton), Greek, Armenian and Albanian. A distinct non-Indo-European family of Uralic languages (Estonian, Finnish, Hungarian, Erzya, Komi, Mari, Moksha and Udmurt) is spoken mainly in Estonia, Finland, Hungary and parts of Russia. Turkic languages include Azerbaijani, Kazakh and Turkish, in addition to smaller languages in Eastern and Southeast Europe (Balkan Gagauz Turkish, Bashkir, Chuvash, Crimean Tatar, Karachay-Balkar, Kumyk, Nogai and Tatar). Kartvelian languages (Georgian, Mingrelian and Svan) are spoken primarily in Georgia. Two other language families reside in the North Caucasus (termed Northeast Caucasian, most notably including Chechen, Avar and Lezgin; and Northwest Caucasian, most notably including Adyghe). Maltese is the only Semitic language that is official within the EU, while Basque is the only European language isolate. Multilingualism and the protection of regional and minority languages are recognised political goals in Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.
[Chart: Religion in Europe according to the Global Religious Landscape survey by the Pew Forum, 2016]
The largest religion in Europe is Christianity, with 76.2% of Europeans considering themselves Christians, including Catholic, Eastern Orthodox and various Protestant denominations. Among Protestants, the most popular denominations are Lutheranism, Anglicanism and the Reformed faith. Smaller Protestant denominations include the Anabaptists as well as denominations centred in the United States, such as Pentecostalism, Methodism, and Evangelicalism. Although Christianity originated in the Middle East, its centre of mass shifted to Europe when it became the official religion of the Roman Empire in the late 4th century. Christianity played a prominent role in the development of European culture and identity. Today, slightly over 25% of the world's Christians live in Europe. Islam is the second most popular religion in Europe. Over 25 million, or roughly 5% of the population, adhere to it. In Albania and Bosnia and Herzegovina, two countries in the Balkan peninsula in Southeastern Europe, Islam, rather than Christianity, is the majority religion. This is also the case in Turkey and in certain parts of Russia, as well as in Azerbaijan and Kazakhstan, all of which lie on the border with Asia. Many countries in Europe are home to a sizeable Muslim minority, and immigration to Europe has increased the number of Muslim people in Europe in recent years. 
The Jewish population in Europe was about 1.4 million people in 2020 (about 0.2% of the population). There is a long history of Jewish life in Europe, beginning in antiquity. During the late 19th and early 20th centuries, the Russian Empire had the majority of the world's Jews living within its borders. According to the Russian census of 1897, the total Jewish population of Russia was 5.1 million people, which was 4.13% of the total population. Of this total, the vast majority lived within the Pale of Settlement. In 1933, there were about 9.5 million Jewish people in Europe, representing 1.7% of the population, but most were killed or displaced across Europe during the Holocaust. In the 21st century, France has the largest Jewish population in Europe, followed by the United Kingdom, Germany and Russia. Other religions practised in Europe include Hinduism and Buddhism, which are minority religions, except in Russia's Republic of Kalmykia, where Tibetan Buddhism is the majority religion. A large and increasing number of people in Europe are irreligious, atheist or agnostic; they are currently estimated to make up about 18.3% of Europe's population. The three largest urban areas of Europe are Moscow, London and Paris. All have over 10 million residents, and as such have been described as megacities. While Istanbul has the highest total city population, it lies partly in Asia; 64.9% of its residents live on the European side and 35.1% on the Asian side. The next largest cities in order of population are Madrid, Saint Petersburg, Milan, Barcelona, Berlin, and Rome, each having over three million residents. When considering the commuter belts or metropolitan areas within Europe (for which comparable data is available), Moscow covers the largest population, followed in order by Istanbul, London, Paris, Madrid, Milan, the Ruhr Area, Saint Petersburg, Rhein-Süd, Barcelona and Berlin. "Europe" as a cultural concept is substantially derived from the shared heritage of ancient Greece and the Roman Empire and its cultures. The boundaries of Europe were historically understood as those of Christendom (or more specifically Latin Christendom), as established or defended throughout the medieval and early modern history of Europe, especially against Islam, as in the Reconquista and the Ottoman wars in Europe. This shared cultural heritage is combined with overlapping indigenous national cultures and folklores, roughly divided into Slavic, Latin (Romance) and Germanic, but with several components not part of any of these groups (notably Greek, Basque and Celtic). Historically, special examples of overlapping cultures are Strasbourg, with Latin (Romance) and Germanic roots, and Trieste, with Latin, Slavic and Germanic roots. Cultural contacts and mixtures shape a large part of the regional cultures of Europe. Europe is often described as "maximum cultural diversity with minimal geographical distances". Various cultural events are organised in Europe with the aim of bringing different cultures closer together and raising awareness of their importance, such as the European Capital of Culture, the European Region of Gastronomy, the European Youth Capital and the European Capital of Sport. Sport in Europe tends to be highly organised, with many sports having professional leagues. In Europe many people are unable to access basic social conditions, which makes it harder for them to thrive and flourish. 
Access to basic necessities can be compromised; for example, 10% of Europeans spend at least 40% of household income on housing, and 75 million Europeans feel socially isolated. Since the 1980s, income inequality has been rising and wage shares have been falling. In 2016, the richest 20% of households earned over five times as much as the poorest 20%. Many workers experience stagnant real wages, and precarious work is common even for essential workers.
[ { "paragraph_id": 0, "text": "Europe is a continent comprising the westernmost peninsulas of Eurasia, located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It shares the continental landmass of Afro-Eurasia with both Africa and Asia. It is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea and the waterways of the Turkish straits.", "title": "" }, { "paragraph_id": 1, "text": "Europe covers about 10.18 million km (3.93 million sq mi), or 2% of Earth's surface (6.8% of land area), making it the second-smallest continent (using the seven-continent model). Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 745 million (about 10% of the world population) in 2021; the third-largest after Asia and Africa. The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast.", "title": "" }, { "paragraph_id": 2, "text": "European culture is the root of Western civilisation, which traces its lineage back to ancient Greece and ancient Rome. The fall of the Western Roman Empire in 476 CE and the related Migration Period marked the end of Europe's ancient history, and the beginning of the Middle Ages. The Italian Renaissance began in Florence and spread to the rest of the continent, bringing a renewed interest in humanism, exploration, art, and science which contributed to the beginning of the modern era. Since the Age of Discovery, led by Spain and Portugal, Europe played a predominant role in global affairs with multiple explorations and conquests around the world. Between the 16th and 20th centuries, European powers colonised at various times the Americas, almost all of Africa and Oceania, and the majority of Asia.", "title": "" }, { "paragraph_id": 3, "text": "The Age of Enlightenment, the French Revolution, and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars began and were fought to a great extent in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence. During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the Revolutions of 1989, the fall of the Berlin Wall, and the dissolution of the Soviet Union.", "title": "" }, { "paragraph_id": 4, "text": "The European Union (EU) and the Council of Europe are two important international organisations aiming to represent the European continent on a political level. The Council of Europe was founded in 1948 with the idea of unifying Europe to achieve common goals and prevent future wars. 
Further European integration by some states led to the formation of the European Union, a separate supranational political entity based on a system of European law that lies between a confederation and a federation. The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. A majority of its members have adopted a common currency, the euro, and participate in the European single market and a customs union. A large bloc of countries, the Schengen Area, have also abolished internal border and immigration controls. Regular popular elections take place every five years within the EU; they are considered to be the second largest democratic elections in the world after India's.", "title": "" }, { "paragraph_id": 5, "text": "", "title": "Name" }, { "paragraph_id": 6, "text": "In classical Greek mythology, Europa (Ancient Greek: Εὐρώπη, Eurṓpē) was a Phoenician princess. One view is that her name derives from the Ancient Greek elements εὐρύς (eurús) 'wide, broad', and ὤψ (ōps, gen. ὠπός, ōpós) 'eye, face, countenance', hence their composite Eurṓpē would mean 'wide-gazing' or 'broad of aspect'. Broad has been an epithet of Earth herself in the reconstructed Proto-Indo-European religion and the poetry devoted to it. An alternative view is that of Robert Beekes, who has argued in favour of a Pre-Indo-European origin for the name, explaining that a derivation from eurus would yield a different toponym than Europa. Beekes has located toponyms related to that of Europa in the territory of ancient Greece, and localities such as that of Europos in ancient Macedonia.", "title": "Name" }, { "paragraph_id": 7, "text": "There have been attempts to connect Eurṓpē to a Semitic term for west, this being either Akkadian erebu meaning 'to go down, set' (said of the sun) or Phoenician 'ereb 'evening, west', which is at the origin of Arabic maghreb and Hebrew ma'arav. Martin Litchfield West stated that \"phonologically, the match between Europa's name and any form of the Semitic word is very poor\", while Beekes considers a connection to Semitic languages improbable.", "title": "Name" }, { "paragraph_id": 8, "text": "Most major world languages use words derived from Eurṓpē or Europa to refer to the continent. Chinese, for example, uses the word Ōuzhōu (歐洲/欧洲), which is an abbreviation of the transliterated name Ōuluóbā zhōu (歐羅巴洲) (zhōu means \"continent\"); a similar Chinese-derived term Ōshū (欧州) is also sometimes used in Japanese such as in the Japanese name of the European Union, Ōshū Rengō (欧州連合), despite the katakana Yōroppa (ヨーロッパ) being more commonly used. In some Turkic languages, the originally Persian name Frangistan ('land of the Franks') is used casually in referring to much of Europe, besides official names such as Avrupa or Evropa.", "title": "Name" }, { "paragraph_id": 9, "text": "Clickable map of Europe, showing one of the most commonly used continental boundaries Key: blue: states which straddle the border between Europe and Asia; green: countries not geographically in Europe, but closely associated with the continent", "title": "Definition" }, { "paragraph_id": 10, "text": "The prevalent definition of Europe as a geographical term has been in use since the mid-19th century. 
Europe is taken to be bounded by large bodies of water to the north, west and south; Europe's limits to the east and north-east are usually taken to be the Ural Mountains, the Ural River, and the Caspian Sea; to the south-east, the Caucasus Mountains, the Black Sea, and the waterways connecting the Black Sea to the Mediterranean Sea.", "title": "Definition" }, { "paragraph_id": 11, "text": "Islands are generally grouped with the nearest continental landmass, hence Iceland is considered to be part of Europe, while the nearby island of Greenland is usually assigned to North America, although politically belonging to Denmark. Nevertheless, there are some exceptions based on sociopolitical and cultural differences. Cyprus is closest to Anatolia (or Asia Minor), but is considered part of Europe politically and it is a member state of the EU. Malta was considered an island of North-western Africa for centuries, but now it is considered to be part of Europe as well. \"Europe\", as used specifically in British English, may also refer to Continental Europe exclusively.", "title": "Definition" }, { "paragraph_id": 12, "text": "The term \"continent\" usually implies the physical geography of a large land mass completely or almost completely surrounded by water at its borders. Prior to the adoption of the current convention that includes mountain divides, the border between Europe and Asia had been redefined several times since its first conception in classical antiquity, but always as a series of rivers, seas and straits that were believed to extend an unknown distance east and north from the Mediterranean Sea without the inclusion of any mountain ranges. Cartographer Herman Moll suggested in 1715 that Europe was bounded by a series of partly-joined waterways directed towards the Turkish straits, and the Irtysh River draining into the upper part of the Ob River and the Arctic Ocean. In contrast, the present eastern boundary of Europe partially adheres to the Ural and Caucasus Mountains, which is somewhat arbitrary and inconsistent compared to any clear-cut definition of the term \"continent\".", "title": "Definition" }, { "paragraph_id": 13, "text": "The current division of Eurasia into two continents now reflects East-West cultural, linguistic and ethnic differences which vary on a spectrum rather than with a sharp dividing line. The geographic border between Europe and Asia does not follow any state boundaries and now only follows a few bodies of water. Turkey is generally considered a transcontinental country divided entirely by water, while Russia and Kazakhstan are only partly divided by waterways. France, the Netherlands, Portugal and Spain are also transcontinental (or more properly, intercontinental, when oceans or large seas are involved) in that their main land areas are in Europe while pockets of their territories are located on other continents separated from Europe by large bodies of water. Spain, for example, has territories south of the Mediterranean Sea—namely, Ceuta and Melilla—which are parts of Africa and share a border with Morocco. According to the current convention, Georgia and Azerbaijan are transcontinental countries where waterways have been completely replaced by mountains as the divide between continents.", "title": "Definition" }, { "paragraph_id": 14, "text": "The first recorded usage of Eurṓpē as a geographic term is in the Homeric Hymn to Delian Apollo, in reference to the western shore of the Aegean Sea. 
As a name for a part of the known world, it is first used in the 6th century BCE by Anaximander and Hecataeus. Anaximander placed the boundary between Asia and Europe along the Phasis River (the modern Rioni River on the territory of Georgia) in the Caucasus, a convention still followed by Herodotus in the 5th century BCE. Herodotus mentioned that the world had been divided by unknown persons into three parts—Europe, Asia, and Libya (Africa)—with the Nile and the Phasis forming their boundaries—though he also states that some considered the River Don, rather than the Phasis, as the boundary between Europe and Asia. Europe's eastern frontier was defined in the 1st century by geographer Strabo at the River Don. The Book of Jubilees described the continents as the lands given by Noah to his three sons; Europe was defined as stretching from the Pillars of Hercules at the Strait of Gibraltar, separating it from Northwest Africa, to the Don, separating it from Asia.", "title": "Definition" }, { "paragraph_id": 15, "text": "The convention received by the Middle Ages and surviving into modern usage is that of the Roman era used by Roman-era authors such as Posidonius, Strabo and Ptolemy, who took the Tanais (the modern Don River) as the boundary.", "title": "Definition" }, { "paragraph_id": 16, "text": "The Roman Empire did not attach a strong identity to the concept of continental divisions. However, following the fall of the Western Roman Empire, the culture that developed in its place, linked to Latin and the Catholic church, began to associate itself with the concept of \"Europe\". The term \"Europe\" is first used for a cultural sphere in the Carolingian Renaissance of the 9th century. From that time, the term designated the sphere of influence of the Western Church, as opposed to both the Eastern Orthodox churches and to the Islamic world.", "title": "Definition" }, { "paragraph_id": 17, "text": "A cultural definition of Europe as the lands of Latin Christendom coalesced in the 8th century, signifying the new cultural condominium created through the confluence of Germanic traditions and Christian-Latin culture, defined partly in contrast with Byzantium and Islam, and limited to northern Iberia, the British Isles, France, Christianised western Germany, the Alpine regions and northern and central Italy. The concept is one of the lasting legacies of the Carolingian Renaissance: Europa often figures in the letters of Charlemagne's court scholar, Alcuin. The transition of Europe to being a cultural term as well as a geographic one led to the borders of Europe being affected by cultural considerations in the East, especially relating to areas under Byzantine, Ottoman, and Russian influence. Such questions were affected by the positive connotations associated with the term Europe by its users. Such cultural considerations were not applied to the Americas, despite their conquest and settlement by European states. Instead, the concept of \"Western civilization\" emerged as a way of grouping together Europe and these colonies.", "title": "Definition" }, { "paragraph_id": 18, "text": "The question of defining a precise eastern boundary of Europe arises in the Early Modern period, as the eastern extension of Muscovy began to include North Asia. 
Throughout the Middle Ages and into the 18th century, the traditional division of the landmass of Eurasia into two continents, Europe and Asia, followed Ptolemy, with the boundary following the Turkish Straits, the Black Sea, the Kerch Strait, the Sea of Azov and the Don (ancient Tanais). But maps produced during the 16th to 18th centuries tended to differ in how to continue the boundary beyond the Don bend at Kalach-na-Donu (where it is closest to the Volga, now joined with it by the Volga–Don Canal), into territory not described in any detail by the ancient geographers.", "title": "Definition" }, { "paragraph_id": 19, "text": "Around 1715, Herman Moll produced a map showing the northern part of the Ob River and the Irtysh River, a major tributary of the Ob, as components of a series of partly-joined waterways taking the boundary between Europe and Asia from the Turkish Straits, and the Don River all the way to the Arctic Ocean. In 1721, he produced a more up-to-date map that was easier to read. However, his proposal to adhere to major rivers as the line of demarcation was never taken up by other geographers who were beginning to move away from the idea of water boundaries as the only legitimate divides between Europe and Asia.", "title": "Definition" }, { "paragraph_id": 20, "text": "Four years later, in 1725, Philip Johan von Strahlenberg was the first to depart from the classical Don boundary. He drew a new line along the Volga, following the Volga north until the Samara Bend, along Obshchy Syrt (the drainage divide between the Volga and Ural Rivers), then north and east along the latter waterway to its source in the Ural Mountains. At this point he proposed that mountain ranges could be included as boundaries between continents as alternatives to nearby waterways. Accordingly, he drew the new boundary north along the Ural Mountains rather than the nearby and parallel-running Ob and Irtysh rivers. This was endorsed by the Russian Empire and introduced the convention that would eventually become commonly accepted. However, this did not come without criticism. Voltaire, writing in 1760 about Peter the Great's efforts to make Russia more European, ignored the whole boundary question with his claim that neither Russia, Scandinavia, northern Germany, nor Poland were fully part of Europe. Since then, many modern analytical geographers like Halford Mackinder have declared that they see little validity in the Ural Mountains as a boundary between continents.", "title": "Definition" }, { "paragraph_id": 21, "text": "The mapmakers continued to differ on the boundary between the lower Don and Samara well into the 19th century. The 1745 atlas published by the Russian Academy of Sciences has the boundary follow the Don beyond Kalach as far as Serafimovich before cutting north towards Arkhangelsk, while other 18th- to 19th-century mapmakers such as John Cary followed Strahlenberg's prescription. To the south, the Kuma–Manych Depression was identified c. 1773 by a German naturalist, Peter Simon Pallas, as a valley that once connected the Black Sea and the Caspian Sea, and subsequently was proposed as a natural boundary between continents.", "title": "Definition" }, { "paragraph_id": 22, "text": "By the mid-19th century, there were three main conventions, one following the Don, the Volga–Don Canal and the Volga, the other following the Kuma–Manych Depression to the Caspian and then the Ural River, and the third abandoning the Don altogether, following the Greater Caucasus watershed to the Caspian. 
The question was still treated as a \"controversy\" in geographical literature of the 1860s, with Douglas Freshfield advocating the Caucasus crest boundary as the \"best possible\", citing support from various \"modern geographers\".", "title": "Definition" }, { "paragraph_id": 23, "text": "In Russia and the Soviet Union, the boundary along the Kuma–Manych Depression was the most commonly used as early as 1906. In 1958, the Soviet Geographical Society formally recommended that the boundary between Europe and Asia be drawn in textbooks from Baydaratskaya Bay, on the Kara Sea, along the eastern foot of the Ural Mountains, then following the Ural River until the Mugodzhar Hills, then the Emba River and the Kuma–Manych Depression, thus placing the Caucasus entirely in Asia and the Urals entirely in Europe. However, most geographers in the Soviet Union favoured the boundary along the Caucasus crest, and this became the common convention in the later 20th century, although the Kuma–Manych boundary remained in use in some 20th-century maps.", "title": "Definition" }, { "paragraph_id": 24, "text": "Some view the separation of Eurasia into Asia and Europe as a residue of Eurocentrism: \"In physical, cultural and historical diversity, China and India are comparable to the entire European landmass, not to a single European country. [...].\"", "title": "Definition" }, { "paragraph_id": 25, "text": "During the 2.5 million years of the Pleistocene, numerous cold phases called glacials (the Quaternary ice age), or significant advances of continental ice sheets in Europe and North America, occurred at intervals of approximately 40,000 to 100,000 years. The long glacial periods were separated by more temperate and shorter interglacials which lasted about 10,000–15,000 years. The last cold episode of the last glacial period ended about 10,000 years ago. Earth is currently in an interglacial period of the Quaternary, called the Holocene.", "title": "History" }, { "paragraph_id": 26, "text": "Homo erectus georgicus, which lived roughly 1.8 million years ago in Georgia, is the earliest hominin to have been discovered in Europe. Other hominin remains, dating back roughly 1 million years, have been discovered in Atapuerca, Spain. Neanderthal man (named after the Neandertal valley in Germany) appeared in Europe 150,000 years ago (by 115,000 years ago it was already present in the territory of present-day Poland) and disappeared from the fossil record about 40,000 years ago, with their final refuge being the Iberian Peninsula. The Neanderthals were supplanted by modern humans (Cro-Magnons), who appeared in Europe around 43,000 to 40,000 years ago. Homo sapiens arrived in Europe around 54,000 years ago, some 10,000 years earlier than previously thought. The earliest sites in Europe dated to 48,000 years ago are Riparo Mochi (Italy), Geissenklösterle (Germany) and Isturitz (France).", "title": "History" }, { "paragraph_id": 27, "text": "The European Neolithic period—marked by the cultivation of crops and the raising of livestock, increased numbers of settlements and the widespread use of pottery—began around 7000 BCE in Greece and the Balkans, probably influenced by earlier farming practices in Anatolia and the Near East. It spread from the Balkans along the valleys of the Danube and the Rhine (Linear Pottery culture), and along the Mediterranean coast (Cardial culture). 
Between 4500 and 3000 BCE, these central European neolithic cultures developed further to the west and the north, transmitting newly acquired skills in producing copper artifacts. In Western Europe the Neolithic period was characterised not by large agricultural settlements but by field monuments, such as causewayed enclosures, burial mounds and megalithic tombs. The Corded Ware cultural horizon flourished at the transition from the Neolithic to the Chalcolithic. During this period giant megalithic monuments, such as the Megalithic Temples of Malta and Stonehenge, were constructed throughout Western and Southern Europe.", "title": "History" }, { "paragraph_id": 28, "text": "The modern native populations of Europe largely descend from three distinct lineages: Mesolithic hunter-gatherers, descended from populations associated with the Paleolithic Epigravettian culture; Neolithic Early European Farmers who migrated from Anatolia during the Neolithic Revolution 9,000 years ago; and Yamnaya Steppe herders who expanded into Europe from the Pontic–Caspian steppe of Ukraine and southern Russia in the context of Indo-European migrations 5,000 years ago. The European Bronze Age began c. 3200 BCE in Greece with the Minoan civilisation on Crete, the first advanced civilisation in Europe. The Minoans were followed by the Mycenaeans, who collapsed suddenly around 1200 BCE, ushering in the European Iron Age. Iron Age colonisation by the Greeks and Phoenicians gave rise to early Mediterranean cities. Early Iron Age Italy and Greece from around the 8th century BCE gradually gave rise to historical Classical antiquity, whose beginning is sometimes dated to 776 BCE, the year of the first Olympic Games.", "title": "History" }, { "paragraph_id": 29, "text": "Ancient Greece was the founding culture of Western civilisation. Western democratic and rationalist culture are often attributed to Ancient Greece. The Greek city-state, the polis, was the fundamental political unit of classical Greece. In 508 BCE, Cleisthenes instituted the world's first democratic system of government in Athens. The Greek political ideals were rediscovered in the late 18th century by European philosophers and idealists. Greece also generated many cultural contributions: in philosophy, humanism and rationalism under Aristotle, Socrates and Plato; in history with Herodotus and Thucydides; in dramatic and narrative verse, starting with the epic poems of Homer; in drama with Sophocles and Euripides; in medicine with Hippocrates and Galen; and in science with Pythagoras, Euclid and Archimedes. In the course of the 5th century BCE, several of the Greek city states would ultimately check the Achaemenid Persian advance in Europe through the Greco-Persian Wars, considered a pivotal moment in world history, as the 50 years of peace that followed are known as the Golden Age of Athens, the seminal period of ancient Greece that laid many of the foundations of Western civilisation.", "title": "History" }, { "paragraph_id": 30, "text": "Greece was followed by Rome, which left its mark on law, politics, language, engineering, architecture, government and many more key aspects of western civilisation. 
By 200 BCE, Rome had conquered Italy and over the following two centuries it conquered Greece and Hispania (Spain and Portugal), the North African coast, much of the Middle East, Gaul (France and Belgium) and Britannia (England and Wales).", "title": "History" }, { "paragraph_id": 31, "text": "From their base in central Italy, beginning in the third century BCE, the Romans gradually expanded to eventually rule the entire Mediterranean Basin and Western Europe by the turn of the millennium. The Roman Republic ended in 27 BCE, when Augustus proclaimed the Roman Empire. The two centuries that followed are known as the pax romana, a period of unprecedented peace, prosperity and political stability in most of Europe. The empire continued to expand under emperors such as Antoninus Pius and Marcus Aurelius, who spent time on the Empire's northern border fighting Germanic, Pictish and Scottish tribes. Christianity was legalised by Constantine I in 313 CE after three centuries of imperial persecution. Constantine also permanently moved the capital of the empire from Rome to the city of Byzantium (modern-day Istanbul), which was renamed Constantinople in his honour in 330 CE. Christianity became the sole official religion of the empire in 380 CE, and in 391–392 CE the emperor Theodosius outlawed pagan religions. This is sometimes considered to mark the end of antiquity; alternatively antiquity is considered to end with the fall of the Western Roman Empire in 476 CE; the closure of the pagan Platonic Academy of Athens in 529 CE; or the rise of Islam in the early 7th century CE. During most of its existence, the Byzantine Empire was one of the most powerful economic, cultural, and military forces in Europe.", "title": "History" }, { "paragraph_id": 32, "text": "During the decline of the Roman Empire, Europe entered a long period of change arising from what historians call the \"Age of Migrations\". There were numerous invasions and migrations amongst the Ostrogoths, Visigoths, Goths, Vandals, Huns, Franks, Angles, Saxons, Slavs, Avars, Bulgars and, later on, the Vikings, Pechenegs, Cumans and Magyars. Renaissance thinkers such as Petrarch would later refer to this as the \"Dark Ages\".", "title": "History" }, { "paragraph_id": 33, "text": "Isolated monastic communities were the only places to safeguard and compile written knowledge accumulated previously; apart from this, very few written records survive and much literature, philosophy, mathematics and other thinking from the classical period disappeared from Western Europe, though they were preserved in the east, in the Byzantine Empire.", "title": "History" }, { "paragraph_id": 34, "text": "While the Roman empire in the west continued to decline, Roman traditions and the Roman state remained strong in the predominantly Greek-speaking Eastern Roman Empire, also known as the Byzantine Empire. During most of its existence, the Byzantine Empire was the most powerful economic, cultural and military force in Europe. 
Emperor Justinian I presided over Constantinople's first golden age: he established a legal code that forms the basis of many modern legal systems, funded the construction of the Hagia Sophia and brought the Christian church under state control.", "title": "History" }, { "paragraph_id": 35, "text": "From the 7th century onwards, as the Byzantines and neighbouring Sasanid Persians were severely weakened due to the protracted, centuries-long and frequent Byzantine–Sasanian wars, the Muslim Arabs began to make inroads into historically Roman territory, taking the Levant and North Africa and making inroads into Asia Minor. In the mid-7th century, following the Muslim conquest of Persia, Islam penetrated into the Caucasus region. Over the next centuries Muslim forces took Cyprus, Malta, Crete, Sicily and parts of southern Italy. Between 711 and 720, most of the lands of the Visigothic Kingdom of Iberia were brought under Muslim rule—save for small areas in the north-west (Asturias) and largely Basque regions in the Pyrenees. This territory, under the Arabic name Al-Andalus, became part of the expanding Umayyad Caliphate. The unsuccessful second siege of Constantinople (717) weakened the Umayyad dynasty and reduced their prestige. The Umayyads were then defeated by the Frankish leader Charles Martel at the Battle of Poitiers in 732, which ended their northward advance. In the remote regions of north-western Iberia and the middle Pyrenees the power of the Muslims in the south was scarcely felt. It was here that the foundations of the Christian kingdoms of Asturias, Leon and Galicia were laid and from where the reconquest of the Iberian Peninsula would start. However, no coordinated attempt would be made to drive the Moors out. The Christian kingdoms were mainly focused on their own internal power struggles. As a result, the Reconquista took the greater part of eight hundred years, in which period a long list of Alfonsos, Sanchos, Ordoños, Ramiros, Fernandos and Bermudos would be fighting their Christian rivals as much as the Muslim invaders.", "title": "History" }, { "paragraph_id": 36, "text": "During the Dark Ages, the Western Roman Empire fell under the control of various tribes. The Germanic and Slav tribes established their domains over Western and Eastern Europe, respectively. Eventually the Frankish tribes were united under Clovis I. Charlemagne, a Frankish king of the Carolingian dynasty who had conquered most of Western Europe, was anointed \"Holy Roman Emperor\" by the Pope in 800. This led in 962 to the founding of the Holy Roman Empire, which eventually became centred in the German principalities of central Europe.", "title": "History" }, { "paragraph_id": 37, "text": "East Central Europe saw the creation of the first Slavic states and the adoption of Christianity (c. 1000 CE). The powerful West Slavic state of Great Moravia spread its territory all the way south to the Balkans, reaching its largest territorial extent under Svatopluk I and causing a series of armed conflicts with East Francia. Further south, the first South Slavic states emerged in the late 7th and 8th century and adopted Christianity: the First Bulgarian Empire, the Serbian Principality (later Kingdom and Empire) and the Duchy of Croatia (later Kingdom of Croatia). To the East, Kievan Rus' expanded from its capital in Kiev to become the largest state in Europe by the 10th century. In 988, Vladimir the Great adopted Orthodox Christianity as the religion of state. 
Further East, Volga Bulgaria became an Islamic state in the 10th century, but was eventually absorbed into Russia several centuries later.", "title": "History" }, { "paragraph_id": 38, "text": "The period between the years 1000 and 1250 is known as the High Middle Ages, followed by the Late Middle Ages until c. 1500.", "title": "History" }, { "paragraph_id": 39, "text": "During the High Middle Ages the population of Europe experienced significant growth, culminating in the Renaissance of the 12th century. Economic growth, together with the lack of safety on the mainland trading routes, made possible the development of major commercial routes along the coast of the Mediterranean and Baltic Seas. The growing wealth and independence acquired by some coastal cities gave the Maritime Republics a leading role in the European scene.", "title": "History" }, { "paragraph_id": 40, "text": "The Middle Ages on the mainland were dominated by the two upper echelons of the social structure: the nobility and the clergy. Feudalism developed in France in the Early Middle Ages, and soon spread throughout Europe. A struggle for influence between the nobility and the monarchy in England led to the writing of the Magna Carta and the establishment of a parliament. The primary source of culture in this period came from the Roman Catholic Church. Through monasteries and cathedral schools, the Church was responsible for education in much of Europe.", "title": "History" }, { "paragraph_id": 41, "text": "The Papacy reached the height of its power during the High Middle Ages. The East–West Schism in 1054 split the former Roman Empire religiously, with the Eastern Orthodox Church in the Byzantine Empire and the Roman Catholic Church in the former Western Roman Empire. In 1095 Pope Urban II called for a crusade against Muslims occupying Jerusalem and the Holy Land. In Europe itself, the Church organised the Inquisition against heretics. In the Iberian Peninsula, the Reconquista concluded with the fall of Granada in 1492, ending over seven centuries of Islamic rule in the south-western peninsula.", "title": "History" }, { "paragraph_id": 42, "text": "In the east, a resurgent Byzantine Empire recaptured Crete and Cyprus from the Muslims, and reconquered the Balkans. Constantinople was the largest and wealthiest city in Europe from the 9th to the 12th centuries, with a population of approximately 400,000. The Empire was weakened following the defeat at Manzikert, and considerably further by the sack of Constantinople in 1204, during the Fourth Crusade. Although it would recover Constantinople in 1261, Byzantium fell in 1453 when Constantinople was taken by the Ottoman Empire.", "title": "History" }, { "paragraph_id": 43, "text": "In the 11th and 12th centuries, constant incursions by nomadic Turkic tribes, such as the Pechenegs and the Cuman-Kipchaks, caused a massive migration of Slavic populations to the safer, heavily forested regions of the north, and temporarily halted the expansion of the Rus' state to the south and east. Like many other parts of Eurasia, these territories were overrun by the Mongols. The invaders, who became known as Tatars, were mostly Turkic-speaking peoples under Mongol suzerainty. They established the state of the Golden Horde with headquarters in Crimea, which later adopted Islam as a religion, and ruled over modern-day southern and central Russia for more than three centuries. 
After the collapse of Mongol dominions, the first Romanian states (principalities) emerged in the 14th century: Moldavia and Walachia. Previously, these territories were under the successive control of Pechenegs and Cumans. From the 12th to the 15th centuries, the Grand Duchy of Moscow grew from a small principality under Mongol rule to the largest state in Europe, overthrowing the Mongols in 1480, and eventually becoming the Tsardom of Russia. The state was consolidated under Ivan III the Great and Ivan the Terrible, steadily expanding to the east and south over the next centuries.", "title": "History" }, { "paragraph_id": 44, "text": "The Great Famine of 1315–1317 was the first crisis that would strike Europe in the late Middle Ages. The period between 1348 and 1420 witnessed the heaviest loss. The population of France was reduced by half. Medieval Britain was afflicted by 95 famines, and France suffered the effects of 75 or more in the same period. Europe was devastated in the mid-14th century by the Black Death, one of the most deadly pandemics in human history which killed an estimated 25 million people in Europe alone—a third of the European population at the time.", "title": "History" }, { "paragraph_id": 45, "text": "The plague had a devastating effect on Europe's social structure; it induced people to live for the moment as illustrated by Giovanni Boccaccio in The Decameron (1353). It was a serious blow to the Roman Catholic Church and led to increased persecution of Jews, beggars and lepers. The plague is thought to have returned every generation with varying virulence and mortalities until the 18th century. During this period, more than 100 plague epidemics swept across Europe.", "title": "History" }, { "paragraph_id": 46, "text": "The Renaissance was a period of cultural change originating in Florence, and later spreading to the rest of Europe. The rise of a new humanism was accompanied by the recovery of forgotten classical Greek and Arabic knowledge from monastic libraries, often translated from Arabic into Latin. The Renaissance spread across Europe between the 14th and 16th centuries: it saw the flowering of art, philosophy, music, and the sciences, under the joint patronage of royalty, the nobility, the Roman Catholic Church and an emerging merchant class. Patrons in Italy, including the Medici family of Florentine bankers and the Popes in Rome, funded prolific quattrocento and cinquecento artists such as Raphael, Michelangelo and Leonardo da Vinci.", "title": "History" }, { "paragraph_id": 47, "text": "Political intrigue within the Church in the mid-14th century caused the Western Schism. During this forty-year period, two popes—one in Avignon and one in Rome—claimed rulership over the Church. Although the schism was eventually healed in 1417, the papacy's spiritual authority had suffered greatly. In the 15th century, Europe started to extend itself beyond its geographic frontiers. Spain and Portugal, the greatest naval powers of the time, took the lead in exploring the world. Exploration reached the Southern Hemisphere in the Atlantic and the Southern tip of Africa. Christopher Columbus reached the New World in 1492, and Vasco da Gama opened the ocean route to the East linking the Atlantic and Indian Oceans in 1498. The Portuguese-born explorer Ferdinand Magellan reached Asia westward across the Atlantic and the Pacific Oceans in a Spanish expedition, resulting in the first circumnavigation of the globe, completed by the Spaniard Juan Sebastián Elcano (1519–1522). 
Soon after, the Spanish and Portuguese began establishing large global empires in the Americas, Asia, Africa and Oceania. France, the Netherlands and England soon followed in building large colonial empires with vast holdings in Africa, the Americas and Asia. In 1588, a Spanish armada failed to invade England. A year later, England tried unsuccessfully to invade Spain; this English disaster allowed Philip II of Spain to maintain his dominant war capacity in Europe and the Spanish fleet to retain its capability to wage war for the following decades. However, two more Spanish armadas failed to invade England (2nd Spanish Armada and 3rd Spanish Armada).", "title": "History" }, { "paragraph_id": 48, "text": "The Church's power was further weakened by the Protestant Reformation in 1517 when German theologian Martin Luther nailed his Ninety-five Theses, criticising the selling of indulgences, to the church door. He was subsequently excommunicated in the papal bull Exsurge Domine in 1520 and his followers were condemned in the 1521 Diet of Worms, which divided German princes between Protestant and Roman Catholic faiths. Religious fighting and warfare spread with Protestantism. The plunder of the empires of the Americas allowed Spain to finance religious persecution in Europe for over a century. The Thirty Years' War (1618–1648) crippled the Holy Roman Empire and devastated much of Germany, killing between 25 and 40 percent of its population. In the aftermath of the Peace of Westphalia, France rose to predominance within Europe. The defeat of the Ottoman Turks at the Battle of Vienna in 1683 marked the historic end of Ottoman expansion into Europe.", "title": "History" }, { "paragraph_id": 49, "text": "The 17th century in Central and parts of Eastern Europe was a period of general decline; the region experienced more than 150 famines in a 200-year period between 1501 and 1700. From the Union of Krewo (1385) east-central Europe was dominated by the Kingdom of Poland and the Grand Duchy of Lithuania. The hegemony of the vast Polish–Lithuanian Commonwealth had ended with the devastation brought by the Second Northern War (Deluge) and subsequent conflicts; the state itself was partitioned and ceased to exist at the end of the 18th century.", "title": "History" }, { "paragraph_id": 50, "text": "From the 15th to 18th centuries, when the disintegrating khanates of the Golden Horde were conquered by Russia, Tatars from the Crimean Khanate frequently raided Eastern Slavic lands to capture slaves. Further east, the Nogai Horde and Kazakh Khanate frequently raided the Slavic-speaking areas of contemporary Russia and Ukraine for hundreds of years, until the Russian expansion and conquest of most of northern Eurasia (i.e. Eastern Europe, Central Asia and Siberia).", "title": "History" }, { "paragraph_id": 51, "text": "The Renaissance and the New Monarchs marked the start of an Age of Discovery, a period of exploration, invention and scientific development. Among the great figures of the Western scientific revolution of the 16th and 17th centuries were Copernicus, Kepler, Galileo and Isaac Newton. According to Peter Barrett, \"It is widely accepted that 'modern science' arose in the Europe of the 17th century (towards the end of the Renaissance), introducing a new understanding of the natural world.\"", "title": "History" }, { "paragraph_id": 52, "text": "The Seven Years' War brought to an end the \"Old System\" of alliances in Europe. 
Consequently, when the American Revolutionary War turned into a global war between 1778 and 1783, Britain found itself opposed by a strong coalition of European powers and lacking any substantial ally.", "title": "History" }, { "paragraph_id": 53, "text": "The Age of Enlightenment was a powerful intellectual movement during the 18th century promoting scientific and reason-based thought. Discontent with the aristocracy and clergy's monopoly on political power in France resulted in the French Revolution and the establishment of the First Republic, as a result of which the monarchy and many of the nobility perished during the initial Reign of Terror. Napoleon Bonaparte rose to power in the aftermath of the French Revolution, and established the First French Empire that, during the Napoleonic Wars, grew to encompass large parts of Europe before collapsing in 1815 with the Battle of Waterloo. Napoleonic rule resulted in the further dissemination of the ideals of the French Revolution, including that of the nation state, as well as the widespread adoption of the French models of administration, law and education. The Congress of Vienna, convened after Napoleon's downfall, established a new balance of power in Europe centred on the five \"Great Powers\": the UK, France, Prussia, Austria and Russia. This balance would remain in place until the Revolutions of 1848, during which liberal uprisings affected all of Europe except for Russia and the UK. These revolutions were eventually put down by conservative elements and few reforms resulted. The year 1859 saw the unification of Romania, as a nation state, from smaller principalities. In 1867, the Austro-Hungarian empire was formed; 1871 saw the unifications of both Italy and Germany as nation-states from smaller principalities.", "title": "History" }, { "paragraph_id": 54, "text": "In parallel, the Eastern Question grew more complex after the Ottoman defeat in the Russo-Turkish War (1768–1774). As the dissolution of the Ottoman Empire seemed imminent, the Great Powers struggled to safeguard their strategic and commercial interests in the Ottoman domains. The Russian Empire stood to benefit from the decline, whereas the Habsburg Empire and Britain perceived the preservation of the Ottoman Empire to be in their best interests. Meanwhile, the Serbian Revolution (1804) and Greek War of Independence (1821) marked the beginning of the end of Ottoman rule in the Balkans, which ended with the Balkan Wars in 1912–1913. Formal recognition of the de facto independent principalities of Montenegro, Serbia and Romania ensued at the Congress of Berlin in 1878.", "title": "History" }, { "paragraph_id": 55, "text": "The Industrial Revolution started in Great Britain in the last part of the 18th century and spread throughout Europe. The invention and implementation of new technologies resulted in rapid urban growth, mass employment and the rise of a new working class. Reforms in social and economic spheres followed, including the first laws on child labour, the legalisation of trade unions, and the abolition of slavery. In Britain, the Public Health Act of 1875 was passed, which significantly improved living conditions in many British cities. Europe's population increased from about 100 million in 1700 to 400 million by 1900. The last major famine recorded in Western Europe, the Great Famine of Ireland, caused death and mass emigration of millions of Irish people. 
In the 19th century, 70 million people left Europe in migrations to various European colonies abroad and to the United States. The industrial revolution also led to large population growth, and the share of the world population living in Europe reached a peak of slightly above 25% around the year 1913.", "title": "History" }, { "paragraph_id": 56, "text": "Two world wars and an economic depression dominated the first half of the 20th century. The First World War was fought between 1914 and 1918. It started when Archduke Franz Ferdinand of Austria was assassinated by the Yugoslav nationalist Gavrilo Princip. Most European nations were drawn into the war, which was fought between the Entente Powers (France, Belgium, Serbia, Portugal, Russia, the United Kingdom, and later Italy, Greece, Romania, and the United States) and the Central Powers (Austria-Hungary, Germany, Bulgaria, and the Ottoman Empire). The war left more than 16 million civilians and soldiers dead. Over 60 million European soldiers were mobilised from 1914 to 1918.", "title": "History" }, { "paragraph_id": 57, "text": "Russia was plunged into the Russian Revolution, which overthrew the Tsarist monarchy and replaced it with the communist Soviet Union, leading also to the independence of many former Russian governorates, such as Finland, Estonia, Latvia and Lithuania, as new European countries. The Treaty of Versailles, which officially ended the First World War in 1919, was harsh towards Germany, on which it placed full responsibility for the war and imposed heavy sanctions. Excess deaths in Russia over the course of the First World War and the Russian Civil War (including the postwar famine) amounted to a combined total of 18 million. In 1932–1933, under Stalin's leadership, confiscations of grain by the Soviet authorities contributed to the second Soviet famine, which caused millions of deaths; surviving kulaks were persecuted and many sent to Gulags to do forced labour. Stalin was also responsible for the Great Purge of 1937–38 in which the NKVD executed 681,692 people; millions of people were deported and exiled to remote areas of the Soviet Union.", "title": "History" }, { "paragraph_id": 58, "text": "The social revolutions sweeping through Russia also affected other European nations following the Great War: in 1919, with the Weimar Republic in Germany and the First Austrian Republic; in 1922, with Mussolini's one-party fascist government in the Kingdom of Italy and in Atatürk's Turkish Republic, adopting the Western alphabet and state secularism. Economic instability, caused in part by debts incurred in the First World War and 'loans' to Germany, played havoc in Europe in the late 1920s and 1930s. This, and the Wall Street Crash of 1929, brought about the worldwide Great Depression. Helped by the economic crisis, social instability and the threat of communism, fascist movements developed throughout Europe, bringing Adolf Hitler to power in what became Nazi Germany.", "title": "History" }, { "paragraph_id": 59, "text": "In 1933, Hitler became the leader of Germany and began to work towards his goal of building Greater Germany. Germany re-expanded and took back the Saarland and Rhineland in 1935 and 1936. In 1938, Austria became a part of Germany following the Anschluss. 
Later that year, following the Munich Agreement signed by Germany, France, the United Kingdom, and Italy, Germany annexed the Sudetenland, which was a part of Czechoslovakia inhabited by ethnic Germans, and in early 1939, the remainder of Czechoslovakia was split into the Protectorate of Bohemia and Moravia, controlled by Germany, and the Slovak Republic. At the time, the United Kingdom and France preferred a policy of appeasement.", "title": "History" }, { "paragraph_id": 60, "text": "With tensions mounting between Germany and Poland over the future of Danzig, the Germans turned to the Soviets and signed the Molotov–Ribbentrop Pact, which allowed the Soviets to invade the Baltic states and parts of Poland and Romania. Germany invaded Poland on 1 September 1939, prompting France and the United Kingdom to declare war on Germany on 3 September, opening the European Theatre of the Second World War. The Soviet invasion of Poland started on 17 September and Poland fell soon thereafter. On 24 September, the Soviet Union attacked the Baltic countries and, on 30 November, Finland; the latter attack was followed by the Winter War, which proved devastating for the Red Army. The British hoped to land at Narvik and send troops to aid Finland, but their primary objective in the landing was to encircle Germany and cut the Germans off from Scandinavian resources. Around the same time, Germany moved troops into Denmark. The Phoney War continued.", "title": "History" }, { "paragraph_id": 61, "text": "In May 1940, Germany attacked France through the Low Countries. France capitulated in June 1940. By August, Germany had begun a bombing offensive against the United Kingdom but failed to convince the British to give up. In 1941, Germany invaded the Soviet Union in Operation Barbarossa. On 7 December 1941 Japan's attack on Pearl Harbor drew the United States into the conflict as an ally of the British Empire and other Allied forces.", "title": "History" }, { "paragraph_id": 62, "text": "After the staggering Battle of Stalingrad in 1943, the German offensive in the Soviet Union turned into a continual retreat. The Battle of Kursk, which involved the largest tank battle in history, was the last major German offensive on the Eastern Front. In June 1944, British and American forces invaded France in the D-Day landings, opening a new front against Germany. Berlin finally fell in 1945, ending the Second World War in Europe. The war was the largest and most destructive in human history, with 60 million dead across the world. More than 40 million people in Europe had died as a result of the Second World War, including between 11 and 17 million people who perished during the Holocaust. The Soviet Union lost around 27 million people (mostly civilians) during the war, about half of all Second World War casualties. By the end of the Second World War, Europe had more than 40 million refugees. Several post-war expulsions in Central and Eastern Europe displaced a total of about 20 million people.", "title": "History" }, { "paragraph_id": 63, "text": "The First World War, and especially the Second World War, diminished the eminence of Western Europe in world affairs. After the Second World War the map of Europe was redrawn at the Yalta Conference and divided into two blocs, the Western countries and the communist Eastern bloc, separated by what was later called by Winston Churchill an \"Iron Curtain\". 
The United States and Western Europe established the NATO alliance and, later, the Soviet Union and Central Europe established the Warsaw Pact. Particular hot spots after the Second World War were Berlin and Trieste; the Free Territory of Trieste, founded in 1947 under the UN, was dissolved in stages in 1954 and 1975. The Berlin blockade in 1948 and 1949 and the construction of the Berlin Wall in 1961 were among the great international crises of the Cold War.", "title": "History" }, { "paragraph_id": 64, "text": "The two new superpowers, the United States and the Soviet Union, became locked in a fifty-year-long Cold War, centred on nuclear proliferation. At the same time decolonisation, which had already started after the First World War, gradually resulted in the independence of most of the European colonies in Asia and Africa.", "title": "History" }, { "paragraph_id": 65, "text": "In the 1980s the reforms of Mikhail Gorbachev and the Solidarity movement in Poland weakened the previously rigid communist system. The opening of the Iron Curtain at the Pan-European Picnic then set in motion a peaceful chain reaction, at the end of which the Eastern bloc, the Warsaw Pact and other communist states collapsed, and the Cold War ended. Germany was reunited after the symbolic fall of the Berlin Wall in 1989, and the maps of Central and Eastern Europe were redrawn once more. This restored previously interrupted cultural and economic relationships, and previously isolated cities such as Berlin, Prague, Vienna, Budapest and Trieste were once again at the centre of Europe.", "title": "History" }, { "paragraph_id": 66, "text": "European integration also grew after the Second World War. In 1949 the Council of Europe was founded, following a speech by Sir Winston Churchill, with the idea of unifying Europe to achieve common goals. It includes all European states except for Belarus, Russia, and Vatican City. The Treaty of Rome in 1957 established the European Economic Community between six Western European states with the goal of a unified economic policy and common market. In 1967 the EEC, the European Coal and Steel Community and Euratom formed the European Community, which in 1993 became the European Union. The EU established a parliament, court and central bank, and introduced the euro as a unified currency. Between 2004 and 2013, more Central European countries began joining, expanding the EU to 28 European countries and once more making Europe a major economic and political centre of power. However, the United Kingdom withdrew from the EU on 31 January 2020, as a result of a June 2016 referendum on EU membership. The Russo-Ukrainian conflict, which has been ongoing since 2014, steeply escalated when Russia launched a full-scale invasion of Ukraine on 24 February 2022, marking the largest humanitarian and refugee crisis in Europe since the Second World War and the Yugoslav Wars.", "title": "History" }, { "paragraph_id": 67, "text": "Europe makes up the western fifth of the Eurasian landmass. It has a higher ratio of coast to landmass than any other continent or subcontinent. Its maritime borders consist of the Arctic Ocean to the north, the Atlantic Ocean to the west and the Mediterranean, Black and Caspian Seas to the south. Land relief in Europe shows great variation within relatively small areas. 
The southern regions are more mountainous, while moving north the terrain descends from the high Alps, Pyrenees and Carpathians, through hilly uplands, into broad, low northern plains, which are vast in the east. This extended lowland is known as the Great European Plain and at its heart lies the North German Plain. An arc of uplands also exists along the north-western seaboard, which begins in the western parts of the islands of Britain and Ireland, and then continues along the mountainous, fjord-cut spine of Norway.", "title": "Geography" }, { "paragraph_id": 68, "text": "This description is simplified. Subregions such as the Iberian Peninsula and the Italian Peninsula contain their own complex features, as does mainland Central Europe itself, where the relief contains many plateaus, river valleys and basins that complicate the general trend. Sub-regions like Iceland, Britain and Ireland are special cases. The former is a land unto itself in the northern ocean that is counted as part of Europe, while the latter are upland areas that were once joined to the mainland until rising sea levels cut them off.", "title": "Geography" }, { "paragraph_id": 69, "text": "Europe lies mainly in the temperate climate zone of the northern hemisphere, where the prevailing wind direction is from the west. The climate is milder in comparison to other areas of the same latitude around the globe due to the influence of the Gulf Stream, an ocean current which carries warm water from the Gulf of Mexico across the Atlantic ocean to Europe. The Gulf Stream is nicknamed \"Europe's central heating\", because it makes Europe's climate warmer and wetter than it would otherwise be. The Gulf Stream not only carries warm water to Europe's coast but also warms up the prevailing westerly winds that blow across the continent from the Atlantic Ocean.", "title": "Geography" }, { "paragraph_id": 70, "text": "Therefore, the average year-round temperature in Aveiro is 16 °C (61 °F), while it is only 13 °C (55 °F) in New York City, which is almost on the same latitude and borders the same ocean. Berlin, Germany; Calgary, Canada; and Irkutsk, in far south-eastern Russia, lie at around the same latitude; January temperatures in Berlin average around 8 °C (14 °F) higher than those in Calgary and they are almost 22 °C (40 °F) higher than average temperatures in Irkutsk.", "title": "Geography" }, { "paragraph_id": 71, "text": "The large water masses of the Mediterranean Sea, which equalise the temperatures on an annual and daily average, are also of particular importance. The waters of the Mediterranean extend from the Sahara desert to the Alpine arc, reaching their northernmost point in the Adriatic Sea near Trieste.", "title": "Geography" }, { "paragraph_id": 72, "text": "In general, Europe is not just colder towards the north compared to the south, but it also gets colder from the west towards the east. The climate is more oceanic in the west and less so in the east. This can be illustrated by the following table of average temperatures at locations roughly following the 64th, 60th, 55th, 50th, 45th and 40th latitudes. None of them is located at high altitude; most of them are close to the sea.", "title": "Geography" }, { "paragraph_id": 73, "text": "It is notable how the average temperatures for the coldest month, as well as the annual average temperatures, drop from the west to the east. 
For instance, Edinburgh is warmer than Belgrade during the coldest month of the year, although Belgrade is around 10° of latitude farther south.", "title": "Geography" }, { "paragraph_id": 74, "text": "The geological history of Europe traces back to the formation of the Baltic Shield (Fennoscandia) and the Sarmatian craton, both around 2.25 billion years ago, followed by the Volgo–Uralia shield, the three together leading to the East European craton (≈ Baltica) which became a part of the supercontinent Columbia. Around 1.1 billion years ago, Baltica and Arctica (as part of the Laurentia block) became joined to Rodinia, later resplitting around 550 million years ago to reform as Baltica. Around 440 million years ago Euramerica was formed from Baltica and Laurentia; a further joining with Gondwana then leading to the formation of Pangea. Around 190 million years ago, Gondwana and Laurasia split apart due to the widening of the Atlantic Ocean. Finally and very soon afterwards, Laurasia itself split up again, into Laurentia (North America) and the Eurasian continent. The land connection between the two persisted for a considerable time, via Greenland, leading to interchange of animal species. From around 50 million years ago, rising and falling sea levels have determined the actual shape of Europe and its connections with continents such as Asia. Europe's present shape dates to the late Tertiary period about five million years ago.", "title": "Geography" }, { "paragraph_id": 75, "text": "The geology of Europe is hugely varied and complex and gives rise to the wide variety of landscapes found across the continent, from the Scottish Highlands to the rolling plains of Hungary. Europe's most significant feature is the dichotomy between highland and mountainous Southern Europe and a vast, partially underwater, northern plain ranging from Ireland in the west to the Ural Mountains in the east. These two halves are separated by the mountain chains of the Pyrenees and Alps/Carpathians. The northern plains are delimited in the west by the Scandinavian Mountains and the mountainous parts of the British Isles. Major shallow water bodies submerging parts of the northern plains are the Celtic Sea, the North Sea, the Baltic Sea complex and Barents Sea.", "title": "Geography" }, { "paragraph_id": 76, "text": "The northern plain contains the old geological continent of Baltica and so may be regarded geologically as the \"main continent\", while peripheral highlands and mountainous regions in the south and west constitute fragments from various other geological continents. Most of the older geology of western Europe existed as part of the ancient microcontinent Avalonia.", "title": "Geography" }, { "paragraph_id": 77, "text": "Having lived side by side with agricultural peoples for millennia, Europe's animals and plants have been profoundly affected by the presence and activities of humans. With the exception of Fennoscandia and northern Russia, few areas of untouched wilderness are currently found in Europe, except for various national parks.", "title": "Geography" }, { "paragraph_id": 78, "text": "The main natural vegetation cover in Europe is mixed forest. The conditions for growth are very favourable. In the north, the Gulf Stream and North Atlantic Drift warm the continent. Southern Europe has a warm but mild climate. There are frequent summer droughts in this region. Mountain ridges also affect the conditions. 
Some of these, such as the Alps and the Pyrenees, are oriented east–west and allow the wind to carry large masses of water from the ocean into the interior. Others are oriented north–south (Scandinavian Mountains, Dinarides, Carpathians, Apennines) and because the rain falls primarily on the side of mountains that is oriented towards the sea, forests grow well on this side, while on the other side, the conditions are much less favourable. Few corners of mainland Europe have not been grazed by livestock at some point in time, and the cutting down of the preagricultural forest habitat caused disruption to the original plant and animal ecosystems.", "title": "Geography" }, { "paragraph_id": 79, "text": "Possibly 80 to 90 percent of Europe was once covered by forest. It stretched from the Mediterranean Sea to the Arctic Ocean. Although over half of Europe's original forests disappeared through the centuries of deforestation, Europe still has over one quarter of its land area as forest, such as the broadleaf and mixed forests, taiga of Scandinavia and Russia, mixed rainforests of the Caucasus and the Cork oak forests in the western Mediterranean. During recent times, deforestation has been slowed and many trees have been planted. However, in many cases monoculture plantations of conifers have replaced the original mixed natural forest, because they grow more quickly. The plantations now cover vast areas of land, but offer poorer habitats for many European forest dwelling species which require a mixture of tree species and diverse forest structure. The amount of natural forest in Western Europe is just 2–3% or less, while in western Russia it is 5–10%. The European country with the smallest percentage of forested area is Iceland (1%), while the most forested country is Finland (77%).", "title": "Geography" }, { "paragraph_id": 80, "text": "In temperate Europe, mixed forest with both broadleaf and coniferous trees dominate. The most important species in central and western Europe are beech and oak. In the north, the taiga is a mixed spruce–pine–birch forest; further north within Russia and extreme northern Scandinavia, the taiga gives way to tundra as the Arctic is approached. In the Mediterranean, many olive trees have been planted, which are very well adapted to its arid climate; Mediterranean Cypress is also widely planted in southern Europe. The semi-arid Mediterranean region hosts much scrub forest. A narrow east–west tongue of Eurasian grassland (the steppe) extends westwards from Ukraine and southern Russia, ends in Hungary, and gives way to taiga to the north.", "title": "Geography" }, { "paragraph_id": 81, "text": "Glaciation during the most recent ice age and the presence of humans affected the distribution of European fauna. In many parts of Europe, most large animals and top predator species have been hunted to extinction. The woolly mammoth was extinct before the end of the Neolithic period. Today wolves (carnivores) and bears (omnivores) are endangered. Once they were found in most parts of Europe. However, deforestation and hunting caused these animals to withdraw further and further. By the Middle Ages the bears' habitats were limited to more or less inaccessible mountains with sufficient forest cover. 
Today, the brown bear lives primarily in the Balkan peninsula, Scandinavia and Russia; a small number also persist in other countries across Europe (Austria, the Pyrenees, etc.), but in these areas brown bear populations are fragmented and marginalised because of the destruction of their habitat. In addition, polar bears may be found on Svalbard, a Norwegian archipelago far north of Scandinavia. The wolf, the second-largest predator in Europe after the brown bear, can be found primarily in Central and Eastern Europe and in the Balkans, with a handful of packs in pockets of Western Europe (Scandinavia, Spain, etc.).", "title": "Geography" }, { "paragraph_id": 82, "text": "Other carnivores include the European wildcat, red fox and arctic fox, the golden jackal, various species of martens, the European hedgehog, various species of reptiles (like snakes such as vipers and grass snakes) and amphibians, as well as various birds (owls, hawks and other birds of prey).", "title": "Geography" }, { "paragraph_id": 83, "text": "Important European herbivores include snails, larvae, fish, various birds, and mammals such as rodents, deer and roe deer, boars, and, in the mountains, marmots, steinbocks and chamois, among others. A number of insects, such as the small tortoiseshell butterfly, add to the biodiversity.", "title": "Geography" }, { "paragraph_id": 84, "text": "Sea creatures are also an important part of European flora and fauna. The sea flora is mainly phytoplankton. Important animals that live in European seas are zooplankton, molluscs, echinoderms, different crustaceans, squids and octopuses, fish, dolphins and whales.", "title": "Geography" }, { "paragraph_id": 85, "text": "Biodiversity is protected in Europe through the Council of Europe's Bern Convention, which has also been signed by the European Community as well as non-European states.", "title": "Geography" }, { "paragraph_id": 86, "text": "", "title": "Geography" }, { "paragraph_id": 87, "text": "The political map of Europe is substantially derived from the re-organisation of Europe following the Napoleonic Wars in 1815. The prevalent form of government in Europe is parliamentary democracy, in most cases in the form of a republic; in 1815, the prevalent form of government was still the monarchy. Europe's eleven remaining monarchies are constitutional.", "title": "Politics" }, { "paragraph_id": 88, "text": "European integration is the process of political, legal, economic (and in some cases social and cultural) integration of European states as it has been pursued by the powers sponsoring the Council of Europe since the end of the Second World War. The European Union has been the focus of economic integration on the continent since its foundation in 1993. More recently, the Eurasian Economic Union has been established as a counterpart comprising former Soviet states.", "title": "Politics" }, { "paragraph_id": 89, "text": "27 European states are members of the politico-economic European Union, 26 of the border-free Schengen Area and 20 of the monetary union Eurozone. 
Among the smaller European organisations are the Nordic Council, the Benelux, the Baltic Assembly and the Visegrád Group.", "title": "Politics" }, { "paragraph_id": 90, "text": "This list includes all internationally recognised sovereign countries falling even partially under any common geographical or political definitions of Europe.", "title": "List of states and territories" }, { "paragraph_id": 91, "text": "Within the above-mentioned states are several de facto independent countries with limited to no international recognition. None of them are members of the UN:", "title": "List of states and territories" }, { "paragraph_id": 92, "text": "Several dependencies and similar territories with broad autonomy are also found within or close to Europe. This includes Åland (an autonomous county of Finland), two autonomous territories of the Kingdom of Denmark (other than Denmark proper), three Crown Dependencies and two British Overseas Territories. Svalbard is also included due to its unique status within Norway, although it is not autonomous. Not included are the three countries of the United Kingdom with devolved powers and the two Autonomous Regions of Portugal, which, despite having a unique degree of autonomy, are not largely self-governing in matters other than international affairs. Areas with little more than a unique tax status, such as the Canary Islands and Heligoland, are also not included for this reason.", "title": "List of states and territories" }, { "paragraph_id": 93, "text": "The economy of Europe is currently the largest on Earth, and Europe is the richest region as measured by assets under management, with over $32.7 trillion compared to North America's $27.1 trillion in 2008. In 2009 Europe remained the wealthiest region. Its $37.1 trillion in assets under management represented one-third of the world's wealth. It was one of several regions where wealth surpassed its pre-crisis year-end peak. As with other continents, Europe has a large wealth gap among its countries. The richer states tend to be in the Northwest and West in general, followed by Central Europe, while most economies of Eastern and Southeastern Europe are still re-emerging from the collapse of the Soviet Union and the breakup of Yugoslavia.", "title": "Economy" }, { "paragraph_id": 94, "text": "The model of the Blue Banana was designed as an economic geographic representation of the respective economic power of the regions, which was further developed into the Golden Banana or Blue Star. The trade between East and West, as well as towards Asia, which had been disrupted for a long time by the two world wars, new borders and the Cold War, increased sharply after 1989. In addition, there is new impetus from the Chinese Belt and Road Initiative across the Suez Canal towards Africa and Asia.", "title": "Economy" }, { "paragraph_id": 95, "text": "The European Union, a political entity composed of 27 European states, comprises the largest single economic area in the world. Twenty EU countries share the euro as a common currency. Five European countries rank among the world's largest national economies in GDP (PPP). These include (ranks according to the CIA): Germany (6), Russia (7), the United Kingdom (10), France (11) and Italy (13).", "title": "Economy" }, { "paragraph_id": 96, "text": "Some European countries are much richer than others. 
The richest in terms of nominal GDP is Monaco, at US$185,829 per capita (2018), and the poorest is Ukraine, at US$3,659 per capita (2019).", "title": "Economy" }, { "paragraph_id": 97, "text": "As a whole, Europe's GDP per capita is US$21,767 according to a 2016 International Monetary Fund assessment.", "title": "Economy" }, { "paragraph_id": 98, "text": "Capitalism has been dominant in the Western world since the end of feudalism. From Britain, it gradually spread throughout Europe. The Industrial Revolution started in Europe, specifically the United Kingdom in the late 18th century, and the 19th century saw Western Europe industrialise. Economies were disrupted by the First World War, but by the beginning of the Second World War, they had recovered and were having to compete with the growing economic strength of the United States. The Second World War, again, damaged much of Europe's industries.", "title": "Economy" }, { "paragraph_id": 99, "text": "After the Second World War the economy of the UK was in a state of ruin, and continued to suffer relative economic decline in the following decades. Italy was also in a poor economic condition but regained a high level of growth by the 1950s. West Germany recovered quickly and had doubled production from pre-war levels by the 1950s. France also staged a remarkable comeback, enjoying rapid growth and modernisation; later on Spain, under the leadership of Franco, also recovered, and the nation recorded unprecedented economic growth beginning in the 1960s in what is called the Spanish miracle. The majority of Central and Eastern European states came under the control of the Soviet Union and thus were members of the Council for Mutual Economic Assistance (COMECON).", "title": "Economy" }, { "paragraph_id": 100, "text": "The states which retained a free-market system were given a large amount of aid by the United States under the Marshall Plan. The western states moved to link their economies together, providing the basis for the EU and increasing cross-border trade. This helped them to enjoy rapidly improving economies, while those states in COMECON were struggling, in large part due to the cost of the Cold War. By 1990, the European Community had expanded from 6 founding members to 12. The emphasis placed on resurrecting the West German economy led to it overtaking the UK as Europe's largest economy.", "title": "Economy" }, { "paragraph_id": 101, "text": "With the fall of communism in Central and Eastern Europe in 1991, the post-socialist states underwent shock therapy measures to liberalise their economies and implement free market reforms.", "title": "Economy" }, { "paragraph_id": 102, "text": "After East and West Germany were reunited in 1990, the economy of West Germany struggled as it had to support and largely rebuild the infrastructure of East Germany, while the latter experienced sudden mass unemployment and plummeting industrial production.", "title": "Economy" }, { "paragraph_id": 103, "text": "By the millennium change, the EU dominated the economy of Europe, comprising the five largest European economies of the time: Germany, the United Kingdom, France, Italy, and Spain. In 1999, 12 of the 15 members of the EU joined the Eurozone, replacing their national currencies with the euro.", "title": "Economy" }, { "paragraph_id": 104, "text": "Figures released by Eurostat in 2009 confirmed that the Eurozone had gone into recession in 2008. The recession impacted much of the region. 
In 2010, fears of a sovereign debt crisis developed concerning some countries in Europe, especially Greece, Ireland, Spain and Portugal. As a result, measures were taken, especially for Greece, by the leading countries of the Eurozone. The EU-27 unemployment rate was 10.3% in 2012. For those aged 15–24 it was 22.4%.", "title": "Economy" }, { "paragraph_id": 105, "text": "The population of Europe was about 742 million in 2023 according to UN estimates. This is slightly more than one ninth of the world's population. The population density of Europe (the number of people per area) is the second-highest of any continent, behind Asia. The population of Europe is currently slowly decreasing, by about 0.2% per year, because there are fewer births than deaths. This natural decrease in population is reduced by the fact that more people migrate to Europe from other continents than leave it.", "title": "Demographics" }, { "paragraph_id": 106, "text": "Southern Europe and Western Europe are the regions with the highest average number of elderly people in the world. In 2021, the percentage of people over 65 years old was 21% in Western Europe and Southern Europe, compared to 19% in all of Europe and 10% in the world. Projections suggest that by 2050 the share of people over 65 in Europe will reach 30%. This is because fertility has been below replacement level since the 1970s. The United Nations predicts that Europe's population will decline by about 7 per cent between 2022 and 2050, assuming unchanged migration patterns.", "title": "Demographics" }, { "paragraph_id": 107, "text": "According to a population projection of the UN Population Division, Europe's population may fall to between 680 and 720 million people by 2050, which would be 7% of the world population at that time. Within this context, significant disparities exist between regions in relation to fertility rates. The average number of children per female of child-bearing age is 1.52, far below the replacement rate. The UN predicts a steady population decline in Central and Eastern Europe as a result of emigration and low birth rates.", "title": "Demographics" }, { "paragraph_id": 108, "text": "Pan and Pfeil (2004) count 87 distinct \"peoples of Europe\", of which 33 form the majority population in at least one sovereign state, while the remaining 54 constitute ethnic minorities.", "title": "Demographics" }, { "paragraph_id": 109, "text": "Europe is home to the highest number of migrants of all global regions, at nearly 87 million people in 2020, according to the International Organisation for Migration. In 2005, the EU had an overall net gain from immigration of 1.8 million people. This accounted for almost 85% of Europe's total population growth. In 2021, 827,000 persons were given citizenship of an EU member state, an increase of about 14% compared with 2020. 2.3 million immigrants from non-EU countries entered the EU in 2021.", "title": "Demographics" }, { "paragraph_id": 110, "text": "Early modern emigration from Europe began with Spanish and Portuguese settlers in the 16th century, and French and English settlers in the 17th century. But numbers remained relatively small until waves of mass emigration in the 19th century, when millions of poor families left Europe.", "title": "Demographics" }, { "paragraph_id": 111, "text": "Today, large populations of European descent are found on every continent. 
European ancestry predominates in North America and, to a lesser degree, in South America (particularly in Uruguay, Argentina, Chile and Brazil, while most of the other Latin American countries also have a considerable population of European origins). Australia and New Zealand have large European-derived populations. Africa has no countries with European-derived majorities (with the possible exceptions of Cape Verde and São Tomé and Príncipe, depending on context), but there are significant minorities, such as the White South Africans in South Africa. In Asia, European-derived populations, specifically Russians, predominate in North Asia and some parts of Northern Kazakhstan.", "title": "Demographics" }, { "paragraph_id": 112, "text": "Europe has about 225 indigenous languages, mostly falling within three Indo-European language groups: the Romance languages, derived from the Latin of the Roman Empire; the Germanic languages, whose ancestor language came from southern Scandinavia; and the Slavic languages. Slavic languages are mostly spoken in Southern, Central and Eastern Europe. Romance languages are spoken primarily in Western and Southern Europe, as well as in Switzerland in Central Europe and Romania and Moldova in Eastern Europe. Germanic languages are spoken in Western, Northern and Central Europe as well as in Gibraltar and Malta in Southern Europe. Languages in adjacent areas show significant overlaps (as in English, for example). Other Indo-European languages outside the three main groups include the Baltic group (Latvian and Lithuanian), the Celtic group (Irish, Scottish Gaelic, Manx, Welsh, Cornish and Breton), Greek, Armenian and Albanian.", "title": "Demographics" }, { "paragraph_id": 113, "text": "A distinct non-Indo-European family of Uralic languages (Estonian, Finnish, Hungarian, Erzya, Komi, Mari, Moksha and Udmurt) is spoken mainly in Estonia, Finland, Hungary and parts of Russia. Turkic languages include Azerbaijani, Kazakh and Turkish, in addition to smaller languages in Eastern and Southeast Europe (Balkan Gagauz Turkish, Bashkir, Chuvash, Crimean Tatar, Karachay-Balkar, Kumyk, Nogai and Tatar). Kartvelian languages (Georgian, Mingrelian and Svan) are spoken primarily in Georgia. Two other language families reside in the North Caucasus (termed Northeast Caucasian, most notably including Chechen, Avar and Lezgin; and Northwest Caucasian, most notably including Adyghe). Maltese is the only Semitic language that is official within the EU, while Basque is the only European language isolate.", "title": "Demographics" }, { "paragraph_id": 114, "text": "Multilingualism and the protection of regional and minority languages are recognised political goals in Europe today. The Council of Europe Framework Convention for the Protection of National Minorities and the Council of Europe's European Charter for Regional or Minority Languages set up a legal framework for language rights in Europe.", "title": "Demographics" }, { "paragraph_id": 115, "text": "Religion in Europe according to the Global Religious Landscape survey by the Pew Forum, 2016", "title": "Demographics" }, { "paragraph_id": 116, "text": "The largest religion in Europe is Christianity, with 76.2% of Europeans considering themselves Christians, including Catholic, Eastern Orthodox and various Protestant denominations. Among Protestants, the most popular are Lutheranism, Anglicanism and the Reformed faith. 
Smaller Protestant denominations include Anabaptists as well as denominations centered in the United States such as Pentecostalism, Methodism, and Evangelicalism. Although Christianity originated in the Middle East, its centre of mass shifted to Europe when it became the official religion of the Roman Empire in the late 4th century. Christianity played a prominent role in the development of European culture and identity. Today, slightly over 25% of the world's Christians live in Europe.", "title": "Demographics" }, { "paragraph_id": 117, "text": "Islam is the second most popular religion in Europe. Over 25 million, or roughly 5% of the population, adhere to it. In Albania and Bosnia and Herzegovina, two countries in the Balkan peninsula in Southeastern Europe, Islam, rather than Christianity, is the majority religion. This is also the case in Turkey and in certain parts of Russia, as well as in Azerbaijan and Kazakhstan, all of which lie on the border with Asia. Many countries in Europe are home to a sizeable Muslim minority, and immigration to Europe has increased the number of Muslim people in Europe in recent years.", "title": "Demographics" }, { "paragraph_id": 118, "text": "The Jewish population in Europe was about 1.4 million people in 2020 (about 0.2% of the population). There is a long history of Jewish life in Europe, beginning in antiquity. During the late 19th and early 20th centuries, the Russian Empire had the majority of the world's Jews living within its borders. According to the Russian census of 1897, the total Jewish population of Russia was 5.1 million people, or 4.13% of the total population. Of this total, the vast majority lived within the Pale of Settlement. In 1933, there were about 9.5 million Jewish people in Europe, representing 1.7% of the population, but most were killed or displaced across Europe during the Holocaust. In the 21st century, France has the largest Jewish population in Europe, followed by the United Kingdom, Germany and Russia.", "title": "Demographics" }, { "paragraph_id": 119, "text": "Other religions practiced in Europe include Hinduism and Buddhism, which are minority religions, except in Russia's Republic of Kalmykia, where Tibetan Buddhism is the majority religion.", "title": "Demographics" }, { "paragraph_id": 120, "text": "A large and increasing number of people in Europe are irreligious, atheist and agnostic. They are currently estimated to make up about 18.3% of Europe's population.", "title": "Demographics" }, { "paragraph_id": 121, "text": "The three largest urban areas of Europe are Moscow, London and Paris. All have over 10 million residents, and as such have been described as megacities. While Istanbul has the highest total city population, it lies partly in Asia. 64.9% of the residents live on the European side and 35.1% on the Asian side. The next largest cities in order of population are Madrid, Saint Petersburg, Milan, Barcelona, Berlin, and Rome, each with over three million residents.", "title": "Demographics" }, { "paragraph_id": 122, "text": "When considering the commuter belts or metropolitan areas within Europe (for which comparable data is available), Moscow has the largest population, followed in order by Istanbul, London, Paris, Madrid, Milan, Ruhr Area, Saint Petersburg, Rhein-Süd, Barcelona and Berlin.", "title": "Demographics" }, { "paragraph_id": 123, "text": "\"Europe\" as a cultural concept is substantially derived from the shared heritage of ancient Greece and the Roman Empire and its cultures. 
The boundaries of Europe were historically understood as those of Christendom (or more specifically Latin Christendom), as established or defended throughout the medieval and early modern history of Europe, especially against Islam, as in the Reconquista and the Ottoman wars in Europe.", "title": "Culture" }, { "paragraph_id": 124, "text": "This shared cultural heritage is combined with overlapping indigenous national cultures and folklores, roughly divided into Slavic, Latin (Romance) and Germanic, but with several components not part of any of these groups (notably Greek, Basque and Celtic). Historically, special examples with overlapping cultures are Strasbourg with Latin (Romance) and Germanic, or Trieste with Latin, Slavic and Germanic roots. Cultural contacts and mixtures shape a large part of the regional cultures of Europe. Europe is often described as \"maximum cultural diversity with minimal geographical distances\".", "title": "Culture" }, { "paragraph_id": 125, "text": "Various cultural events are organised in Europe, with the aim of bringing different cultures closer together and raising awareness of their importance, such as the European Capital of Culture, the European Region of Gastronomy, the European Youth Capital and the European Capital of Sport.", "title": "Culture" }, { "paragraph_id": 126, "text": "Sport in Europe tends to be highly organised, with many sports having professional leagues.", "title": "Culture" }, { "paragraph_id": 127, "text": "In Europe many people are unable to access basic social conditions, which makes it harder for them to thrive and flourish. Access to basic necessities can be compromised: for example, 10% of Europeans spend at least 40% of household income on housing. 75 million Europeans feel socially isolated. Since the 1980s, income inequality has been rising and wage shares have been falling. In 2016, the richest 20% of households earned over five times more than the poorest 20%. Many workers experience stagnant real wages, and precarious work is common even for essential workers.", "title": "Culture" }, { "paragraph_id": 128, "text": "Historical Maps", "title": "External links" } ]
Europe is a continent comprising the westernmost peninsulas of Eurasia, located entirely in the Northern Hemisphere and mostly in the Eastern Hemisphere. It shares the continental landmass of Afro-Eurasia with both Africa and Asia. It is bordered by the Arctic Ocean to the north, the Atlantic Ocean to the west, the Mediterranean Sea to the south, and Asia to the east. Europe is commonly considered to be separated from Asia by the watershed of the Ural Mountains, the Ural River, the Caspian Sea, the Greater Caucasus, the Black Sea and the waterways of the Turkish straits. Europe covers about 10.18 million km2 (3.93 million sq mi), or 2% of Earth's surface, making it the second-smallest continent. Politically, Europe is divided into about fifty sovereign states, of which Russia is the largest and most populous, spanning 39% of the continent and comprising 15% of its population. Europe had a total population of about 745 million in 2021, the third-largest of any continent after Asia and Africa. The European climate is largely affected by warm Atlantic currents that temper winters and summers on much of the continent, even at latitudes along which the climate in Asia and North America is severe. Further from the sea, seasonal differences are more noticeable than close to the coast. European culture is the root of Western civilisation, which traces its lineage back to ancient Greece and ancient Rome. The fall of the Western Roman Empire in 476 CE and the related Migration Period marked the end of Europe's ancient history, and the beginning of the Middle Ages. The Italian Renaissance began in Florence and spread to the rest of the continent, bringing a renewed interest in humanism, exploration, art, and science, which contributed to the beginning of the modern era. Since the Age of Discovery, led by Spain and Portugal, Europe played a predominant role in global affairs with multiple explorations and conquests around the world. Between the 16th and 20th centuries, European powers at various times colonised the Americas, almost all of Africa and Oceania, and the majority of Asia. The Age of Enlightenment, the French Revolution, and the Napoleonic Wars shaped the continent culturally, politically and economically from the end of the 17th century until the first half of the 19th century. The Industrial Revolution, which began in Great Britain at the end of the 18th century, gave rise to radical economic, cultural and social change in Western Europe and eventually the wider world. Both world wars began and were fought to a great extent in Europe, contributing to a decline in Western European dominance in world affairs by the mid-20th century as the Soviet Union and the United States took prominence. During the Cold War, Europe was divided along the Iron Curtain between NATO in the West and the Warsaw Pact in the East, until the Revolutions of 1989, the fall of the Berlin Wall, and the dissolution of the Soviet Union. The European Union (EU) and the Council of Europe are two important international organisations aiming to represent the European continent on a political level. The Council of Europe was founded in 1949 with the idea of unifying Europe to achieve common goals and prevent future wars. Further European integration by some states led to the formation of the European Union, a separate supranational political entity based on a system of European law that lies between a confederation and a federation. The EU originated in Western Europe but has been expanding eastward since the fall of the Soviet Union in 1991. 
A majority of its members have adopted a common currency, the euro, and participate in the European single market and a customs union. A large bloc of countries, the Schengen Area, has also abolished internal border and immigration controls. Regular popular elections take place every five years within the EU; they are considered to be the second-largest democratic elections in the world after India's.
2001-10-01T09:20:59Z
2023-12-31T14:27:24Z
[ "Template:Pp-vandalism", "Template:Flagicon", "Template:Div col end", "Template:Cnote2 End", "Template:Webarchive", "Template:GovPubs", "Template:Authority control", "Template:Main", "Template:Nowrap", "Template:Cref2", "Template:Anchor", "Template:Europe and seas labelled map", "Template:Navboxes", "Template:Legend-table2", "Template:Cite news", "Template:Abbr", "Template:Nihongo", "Template:Multiple image", "Template:Sfn", "Template:Cite web", "Template:Lang", "Template:Not typo", "Template:Clear", "Template:Dead link", "Template:Transliteration", "Template:Legend0", "Template:Supranational European Bodies", "Template:Cite journal", "Template:Subject bar", "Template:Further", "Template:C.", "Template:Cite encyclopedia", "Template:About", "Template:Legend", "Template:Flagg", "Template:Interlanguage link", "Template:Cite magazine", "Template:Britannica", "Template:Infobox continent", "Template:Dubious", "Template:Sfnp", "Template:Cvt", "Template:Pie chart", "Template:Excerpt", "Template:Div col", "Template:Reflist", "Template:Cite book", "Template:Curlie", "Template:Europefooter", "Template:See also", "Template:Harvnb", "Template:Cite EB1911", "Template:UN Population", "Template:Use dmy dates", "Template:Use British English", "Template:TOC limit", "Template:Cnote2", "Template:ISBN", "Template:Citation", "Template:Cbignore", "Template:Circa", "Template:Flag", "Template:Short description", "Template:Convert", "Template:Lang-grc", "Template:Coat of arms", "Template:Cnote2 Begin" ]
https://en.wikipedia.org/wiki/Europe
9,240
Europa
Europa may refer to:
[ { "paragraph_id": 0, "text": "Europa may refer to:", "title": "" } ]
Europa may refer to:
2001-10-17T16:22:45Z
2023-12-08T22:28:01Z
[ "Template:Disambiguation", "Template:Wiktionary", "Template:TOC right", "Template:MS", "Template:HMS", "Template:Canned search", "Template:Look from", "Template:In title" ]
https://en.wikipedia.org/wiki/Europa
9,241
Euglenozoa
Euglenozoa are a large group of flagellate Discoba. They include a variety of common free-living species, as well as a few important parasites, some of which infect humans. Euglenozoa are represented by four major groups, i.e., Kinetoplastea, Diplonemea, Euglenida, and Symbiontida. Euglenozoa are unicellular, mostly around 15–40 μm (0.00059–0.00157 in) in size, although some euglenids get up to 500 μm (0.020 in) long. Most euglenozoa have two flagella, which are inserted parallel to one another in an apical or subapical pocket. In some, these are associated with a cytostome or mouth, used to ingest bacteria or other small organisms. This is supported by one of three sets of microtubules that arise from the flagellar bases; the other two support the dorsal and ventral surfaces of the cell. Some other euglenozoa feed through absorption, and many euglenids possess chloroplasts, making them the only eukaryotes outside Diaphoretickes to do so without performing kleptoplasty, and so obtain energy through photosynthesis. These chloroplasts are surrounded by three membranes and contain chlorophylls A and B, along with other pigments, so are probably derived from a green alga, captured long ago in an endosymbiosis by a basal euglenozoan. Reproduction occurs exclusively through cell division. During mitosis, the nuclear membrane remains intact, and the spindle microtubules form inside of it. The group is characterized by the ultrastructure of the flagella. In addition to the normal supporting microtubules or axoneme, each contains a rod (called paraxonemal), which has a tubular structure in one flagellum and a latticed structure in the other. Based on this, two smaller groups have been included here: the diplonemids and Postgaardi. Historically, euglenozoans have been treated as either plants or animals, depending on whether they belong to largely photosynthetic groups or not. Hence they have names based on either the International Code of Nomenclature for algae, fungi, and plants (ICNafp) or the International Code of Zoological Nomenclature (ICZN). For example, one family has the name Euglenaceae under the ICNafp and the name Euglenidae under the ICZN. As another example, the genus name Dinema is acceptable under the ICZN, but illegitimate under the ICNafp, as it is a later homonym of an orchid genus, so that the synonym Dinematomonas must be used instead. The Euglenozoa are generally accepted as monophyletic. They are related to Percolozoa; the two share mitochondria with disk-shaped cristae, a feature that occurs in only a few other groups. Both probably belong to a larger group of eukaryotes called the Excavata. This grouping, though, has been challenged. The phylogeny based on the work of Cavalier-Smith (2016): A consensus phylogeny following the review by Kostygov et al. (2021): The following classification of Euglenozoa is as described by Cavalier-Smith in 2016, modified to include the new subphylum Plicomonada according to Cavalier-Smith et al. (2017). Phylum Euglenozoa Cavalier-Smith 1981 emend. Simpson 1997 [Euglenobionta] Phylum Euglenozoa Cavalier-Smith 1981 emend. Simpson 1997
[ { "paragraph_id": 0, "text": "Euglenozoa are a large group of flagellate Discoba. They include a variety of common free-living species, as well as a few important parasites, some of which infect humans. Euglenozoa are represented by four major groups, i.e., Kinetoplastea, Diplonemea, Euglenida, and Symbiontida. Euglenozoa are unicellular, mostly around 15–40 μm (0.00059–0.00157 in) in size, although some euglenids get up to 500 μm (0.020 in) long.", "title": "" }, { "paragraph_id": 1, "text": "Most euglenozoa have two flagella, which are inserted parallel to one another in an apical or subapical pocket. In some these are associated with a cytostome or mouth, used to ingest bacteria or other small organisms. This is supported by one of three sets of microtubules that arise from the flagellar bases; the other two support the dorsal and ventral surfaces of the cell.", "title": "Structure" }, { "paragraph_id": 2, "text": "Some other euglenozoa feed through absorption, and many euglenids possess chloroplasts, the only eukaryotes outside Diaphoretickes to do so without performing kleptoplasty, and so obtain energy through photosynthesis. These chloroplasts are surrounded by three membranes and contain chlorophylls A and B, along with other pigments, so are probably derived from a green alga, captured long ago in an endosymbiosis by a basal euglenozoan. Reproduction occurs exclusively through cell division. During mitosis, the nuclear membrane remains intact, and the spindle microtubules form inside of it.", "title": "Structure" }, { "paragraph_id": 3, "text": "The group is characterized by the ultrastructure of the flagella. In addition to the normal supporting microtubules or axoneme, each contains a rod (called paraxonemal), which has a tubular structure in one flagellum and a latticed structure in the other. Based on this, two smaller groups have been included here: the diplonemids and Postgaardi.", "title": "Structure" }, { "paragraph_id": 4, "text": "Historically, euglenozoans have been treated as either plants or animals, depending on whether they belong to largely photosynthetic groups or not. Hence they have names based on either the International Code of Nomenclature for algae, fungi, and plants (ICNafp) or the International Code of Zoological Nomenclature (ICZN). For example, one family has the name Euglenaceae under the ICNafp and the name Euglenidae under the ICZN. As another example, the genus name Dinema is acceptable under the ICZN, but illegitimate under the ICNafp, as it is a later homonym of an orchid genus, so that the synonym Dinematomonas must be used instead.", "title": "Classification" }, { "paragraph_id": 5, "text": "The Euglenozoa are generally accepted as monophyletic. They are related to Percolozoa; the two share mitochondria with disk-shaped cristae, which only occurs in a few other groups. Both probably belong to a larger group of eukaryotes called the Excavata. This grouping, though, has been challenged.", "title": "Classification" }, { "paragraph_id": 6, "text": "The phylogeny based on the work of Cavalier-Smith (2016):", "title": "Classification" }, { "paragraph_id": 7, "text": "A consensus phylogeny following the review by Kostygov et al. 
(2021):", "title": "Classification" }, { "paragraph_id": 8, "text": "The following classification of Euglenozoa is as described by Cavalier-Smith in 2016, modified to include the new subphylum Plicomonada according to Cavalier-Smith et al (2017).", "title": "Classification" }, { "paragraph_id": 9, "text": "Phylum Euglenozoa Cavalier-Smith 1981 emend. Simpson 1997 [Euglenobionta]", "title": "Classification" }, { "paragraph_id": 10, "text": "Phylum Euglenozoa Cavalier-Smith 1981 emend. Simpson 1997", "title": "Classification" } ]
Euglenozoa are a large group of flagellate Discoba. They include a variety of common free-living species, as well as a few important parasites, some of which infect humans. Euglenozoa are represented by four major groups, i.e., Kinetoplastea, Diplonemea, Euglenida, and Symbiontida. Euglenozoa are unicellular, mostly around 15–40 μm (0.00059–0.00157 in) in size, although some euglenids get up to 500 μm (0.020 in) long.
2001-03-06T06:57:27Z
2023-10-04T19:51:59Z
[ "Template:Failed verification", "Template:Cite web", "Template:Cite journal", "Template:Excavata", "Template:Taxonbar", "Template:Short description", "Template:Automatic taxobox", "Template:Convert", "Template:Barlabel", "Template:Clade", "Template:Reflist" ]
https://en.wikipedia.org/wiki/Euglenozoa
9,247
Epistemology
Epistemology (/ɪˌpɪstəˈmɒlədʒi/; from Ancient Greek ἐπιστήμη (epistḗmē) 'knowledge', and -logy) is the branch of philosophy concerned with knowledge. Epistemologists study the nature, origin, and scope of knowledge, epistemic justification, the rationality of belief, and various related issues. Debates in contemporary epistemology are generally clustered around four core areas: the philosophical analysis of the nature of knowledge and the conditions required for a belief to constitute knowledge, such as truth and justification; potential sources of knowledge and justified belief, such as perception, reason, memory, and testimony; the structure of a body of knowledge or justified belief, including whether all justified beliefs must be derived from justified foundational beliefs or whether justification requires only a coherent set of beliefs; and philosophical skepticism, which questions the possibility of knowledge. In these debates and others, epistemology aims to answer questions such as "What do people know?", "What does it mean to say that people know something?", "What makes justified beliefs justified?", and "How do people know that they know?" Specialties in epistemology ask questions such as "How can people create formal models about issues related to knowledge?" (in formal epistemology), "What are the historical conditions of changes in different kinds of knowledge?" (in historical epistemology), "What are the methods, aims, and subject matter of epistemological inquiry?" (in metaepistemology), and "How do people know together?" (in social epistemology). The etymology of the word epistemology is derived from the ancient Greek epistēmē, meaning "knowledge, understanding, skill, scientific knowledge", and the English suffix -ology, meaning "the science or discipline of (what is indicated by the first element)". The word "epistemology" first appeared in 1847, in a review in New York's Eclectic Magazine: The title of one of the principal works of Fichte is 'Wissenschaftslehre,' which, after the analogy of technology ... we render epistemology. The word was first used to present a philosophy in English by Scottish philosopher James Frederick Ferrier in 1854. It was the title of the first section of his Institutes of Metaphysics: This section of the science is properly termed the Epistemology—the doctrine or theory of knowing, just as ontology is the science of being... It answers the general question, 'What is knowing and the known?'—or more shortly, 'What is knowledge?' Introductory classes to epistemology often start their analysis of knowledge by pointing out three different senses of "knowing" something: "knowing that" (knowing the truth of propositions), "knowing how" (understanding how to perform certain actions), and "knowing by acquaintance" (directly perceiving an object, being familiar with it, or otherwise coming into contact with it). Epistemology is primarily concerned with the first of these forms of knowledge, propositional knowledge. All three senses of "knowing" can be seen in our ordinary use of the word. In mathematics, you can know that 2 + 2 = 4, but there is also knowing how to add two numbers, and knowing a person (e.g., knowing other persons, or knowing oneself), place (e.g., one's hometown), thing (e.g., cars), or activity (e.g., addition). While these distinctions are not explicit in English, they are explicitly made in other languages, including French, Italian, Portuguese, Spanish, Romanian, German and Dutch (although some languages closely related to English have been said to retain these verbs, such as Scots). In French, Portuguese, Spanish, Romanian, German, and Dutch 'to know (a person)' is translated using connaître, conhecer, conocer, a cunoaște and kennen (both German and Dutch) respectively, whereas 'to know (how to do something)' is translated using savoir, saber (both Portuguese and Spanish), a şti, wissen, and weten. Modern Greek has the verbs γνωρίζω (gnorízo) and ξέρω (kséro). Italian has the verbs conoscere and sapere, and the nouns for 'knowledge' are conoscenza and sapienza. 
German has the verbs wissen and kennen; the former implies knowing a fact, the latter knowing in the sense of being acquainted with and having a working knowledge of; there is also a verb derived from kennen, namely erkennen, which has been said to imply knowledge in the form of recognition or acknowledgment. The verb itself implies a process: you have to go from one state to another, from a state of "not-erkennen" to a state of true erkennen. This verb seems the most appropriate in terms of describing the "episteme" in one of the modern European languages, hence the German name "Erkenntnistheorie". The theoretical interpretation and significance of these linguistic issues remain controversial. In his paper On Denoting and his later book The Problems of Philosophy, Bertrand Russell brought a great deal of attention to the distinction between "knowledge by description" and "knowledge by acquaintance". Gilbert Ryle is similarly credited with bringing more attention to the distinction between knowing how and knowing that in The Concept of Mind. In Personal Knowledge, Michael Polanyi argues for the epistemological relevance of knowledge how and knowledge that; using the example of the act of balance involved in riding a bicycle, he suggests that the theoretical knowledge of the physics involved in maintaining a state of balance cannot substitute for the practical knowledge of how to ride, and that it is important to understand how both are established and grounded. This position is essentially Ryle's, who argued that a failure to acknowledge the distinction between "knowledge that" and "knowledge how" leads to infinite regress. One of the most important distinctions in epistemology is between what can be known a priori (independently of experience) and what can be known a posteriori (through experience). The terms originate from the Analytic methods of Aristotle's Organon, and may be roughly defined as follows: a priori knowledge is knowledge that is known independently of experience (that is, it is non-empirical, usually arrived at by reason alone), while a posteriori knowledge is knowledge that is known through experience (that is, it is empirical). Views that emphasize the importance of a priori knowledge are generally classified as rationalist. Views that emphasize the importance of a posteriori knowledge are generally classified as empiricist. One of the core concepts in epistemology is belief. A belief is an attitude that a person holds regarding anything that they take to be true. For instance, to believe that snow is white is comparable to accepting the truth of the proposition "snow is white". Beliefs can be occurrent (e.g. a person actively thinking "snow is white"), or they can be dispositional (e.g. a person who, if asked about the color of snow, would assert "snow is white"). While there is not universal agreement about the nature of belief, most contemporary philosophers hold the view that a disposition to express belief B qualifies as holding the belief B. There are various ways that contemporary philosophers have tried to describe beliefs, including as representations of ways that the world could be (Jerry Fodor), as dispositions to act as if certain things are true (Roderick Chisholm), as interpretive schemes for making sense of someone's actions (Daniel Dennett and Donald Davidson), or as mental states that fill a particular function (Hilary Putnam). 
Some have also attempted to offer significant revisions to our notion of belief, including eliminativists about belief who argue that there is no phenomenon in the natural world which corresponds to our folk psychological concept of belief (Paul Churchland) and formal epistemologists who aim to replace our bivalent notion of belief ("either I have a belief or I don't have a belief") with the more permissive, probabilistic notion of credence ("there is an entire spectrum of degrees of belief, not a simple dichotomy between belief and non-belief"). While belief plays a significant role in epistemological debates surrounding knowledge and justification, it is also the subject of many philosophical debates in its own right. Notable debates include: "What is the rational way to revise one's beliefs when presented with various sorts of evidence?"; "Is the content of our beliefs entirely determined by our mental states, or do the relevant facts have any bearing on our beliefs (e.g. if I believe that I'm holding a glass of water, is the non-mental fact that water is H2O part of the content of that belief)?"; "How fine-grained or coarse-grained are our beliefs?"; and "Must it be possible for a belief to be expressible in language, or are there non-linguistic beliefs?" Truth is the property or state of being in accordance with facts or reality. On most views, truth is the correspondence of language or thought to a mind-independent world. This is called the correspondence theory of truth. Among philosophers who think that it is possible to analyze the conditions necessary for knowledge, virtually all accept that truth is such a condition. There is much less agreement about the extent to which a knower must know why something is true in order to know. On such views, something's being known implies that it is true. However, this should not be confused for the more contentious view that one must know that one knows in order to know (the KK principle). Epistemologists disagree about whether belief is the only truth-bearer. Other common suggestions for things that can bear the property of being true include propositions, sentences, thoughts, utterances, and judgments. Plato, in his Gorgias, argues that belief is the most commonly invoked truth-bearer. Many of the debates regarding truth are at the crossroads of epistemology and logic. Some contemporary debates regarding truth include: How do we define truth? Is it even possible to give an informative definition of truth? What things are truth-bearers and are therefore capable of being true or false? Are truth and falsity bivalent, or are there other truth values? What are the criteria of truth that allow us to identify it and to distinguish it from falsity? What role does truth play in constituting knowledge? And is truth absolute, or is it merely relative to one's perspective? As the term "justification" is used in epistemology, a belief is justified if one has good reason for holding it. Loosely speaking, justification is the reason that someone holds a rationally admissible belief, on the assumption that it is a good reason for holding it. Sources of justification might include perceptual experience (the evidence of the senses), reason, and authoritative testimony, among others. Importantly however, a belief being justified does not guarantee that the belief is true, since a person could be justified in forming beliefs based on very convincing evidence that was nonetheless deceiving. 
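A worked example may make the probabilistic notion of credence mentioned above more concrete. What follows is a minimal sketch in the style of Bayesian formal epistemology, not a formula drawn from this article: a credence is a probability P(H) assigned to a hypothesis H, and a rational agent who receives evidence E revises that credence by conditionalisation,

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}.$$

For instance, with a prior credence $P(H) = 0.5$ and assumed likelihoods $P(E \mid H) = 0.9$ and $P(E \mid \neg H) = 0.2$, we get $P(E) = 0.9 \cdot 0.5 + 0.2 \cdot 0.5 = 0.55$, so the updated credence is $P(H \mid E) = 0.45/0.55 \approx 0.82$. The evidence raises the degree of belief gradually, rather than flipping the all-or-nothing switch that the bivalent picture assumes.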
A central debate about the nature of justification is a debate between epistemological externalists on the one hand and epistemological internalists on the other. While epistemic externalism first arose in attempts to overcome the Gettier problem, it has flourished in the time since as an alternative way of conceiving of epistemic justification. The initial development of epistemic externalism is often attributed to Alvin Goldman, although numerous other philosophers have worked on the topic in the time since. Externalists hold that factors deemed "external", meaning outside of the psychological states of those who gain knowledge, can be conditions of justification. For example, an externalist response to the Gettier problem is to say that for a justified true belief to count as knowledge, there must be a link or dependency between the belief and the state of the external world. Usually, this is understood to be a causal link. Such causation, to the extent that it is "outside" the mind, would count as an external, knowledge-yielding condition. Internalists, on the other hand, assert that all knowledge-yielding conditions are within the psychological states of those who gain knowledge. Though Descartes himself was unfamiliar with the internalist/externalist debate, many point to him as an early example of the internalist path to justification. He wrote that, because the only method by which we perceive the external world is through our senses, and because the senses are not infallible, we should not consider our concept of knowledge infallible. The only way to find anything that could be described as "indubitably true", he advocates, would be to see things "clearly and distinctly". He argued that if there is an omnipotent, good being who made the world, then it is reasonable to believe that people are made with the ability to know. However, this does not mean that man's ability to know is perfect. God gave man the ability to know, but not omniscience. Descartes said that man must use his capacities for knowledge correctly and carefully through methodological doubt. The dictum "Cogito ergo sum" (I think, therefore I am) is also commonly associated with Descartes's theory. In his own methodological doubt, doubting everything he previously knew so he could start from a blank slate, the first thing that he could not logically bring himself to doubt was his own existence: "I do not exist" would be a contradiction in terms. The act of saying that one does not exist assumes that someone must be making the statement in the first place. Descartes could doubt his senses, his body, and the world around him, but he could not deny his own existence, because he was able to doubt and must exist to manifest that doubt. Even if some "evil genius" were deceiving him, he would have to exist to be deceived. This one sure point provided him with what he called his Archimedean point, in order to further develop his foundation for knowledge. Simply put, Descartes's epistemological justification depended on his indubitable belief in his own existence and his clear and distinct knowledge of God. A central issue in epistemology is the question of what the nature of knowledge is or how to define it. Sometimes the expressions "theory of knowledge" and "analysis of knowledge" are used specifically for this form of inquiry. The term "knowledge" has various meanings in natural language. 
It can refer to an awareness of facts, as in knowing that Mars is a planet, to a possession of skills, as in knowing how to swim, or to an experiential acquaintance, as in knowing Daniel Craig personally. Factual knowledge, also referred to as propositional knowledge or descriptive knowledge, plays a special role in epistemology. On the linguistic level, it is distinguished from the other forms of knowledge since it can be expressed through a that-clause, i.e. using a formulation like "They know that..." followed by the known proposition. Some features of factual knowledge are widely accepted: it is a form of cognitive success that establishes epistemic contact with reality. However, there are still various disagreements about its exact nature even though it has been studied intensely. Different factors are responsible for these disagreements. Some theorists try to furnish a practically useful definition by describing its most noteworthy and easily identifiable features. Others engage in an analysis of knowledge, which aims to provide a theoretically precise definition that identifies the set of essential features characteristic for all instances of knowledge and only for them. Differences in the methodology may also cause disagreements. In this regard, some epistemologists use abstract and general intuitions in order to arrive at their definitions. A different approach is to start from concrete individual cases of knowledge to determine what all of them have in common. Yet another method is to focus on linguistic evidence by studying how the term "knowledge" is commonly used. Different standards of knowledge are further sources of disagreement. A few theorists set these standards very high by demanding that absolute certainty or infallibility is necessary. On such a view, knowledge is a very rare thing. Theorists more in tune with ordinary language usually demand lower standards and see knowledge as something commonly found in everyday life. The historically most influential definition, discussed since ancient Greek philosophy, characterizes knowledge in relation to three essential features: as (1) a belief that is (2) true and (3) justified. There is still wide acceptance that the first two features are correct, i.e. that knowledge is a mental state that affirms a true proposition. However, there is a lot of dispute about the third feature: justification. This feature is usually included to distinguish knowledge from true beliefs that rest on superstition, lucky guesses, or faulty reasoning. This expresses the idea that knowledge is not the same as being right about something. Traditionally, justification is understood as the possession of evidence: a belief is justified if the believer has good evidence supporting it. Such evidence could be a perceptual experience, a memory, or a second belief. The justified-true-belief account of knowledge came under severe criticism in the second half of the 20th century, when Edmund Gettier proposed various counterexamples. In a famous so-called Gettier-case, a person is driving on a country road. There are many barn façades along this road and only one real barn. But it is not possible to tell the difference between them from the road. The person then stops by a fortuitous coincidence in front of the only real barn and forms the belief that it is a barn. The idea behind this thought experiment is that this is not knowledge even though the belief is both justified and true. 
The reason is that it is just a lucky accident since the person cannot tell the difference: they would have formed exactly the same justified belief if they had stopped at another site, in which case the belief would have been false. Various additional examples were proposed along similar lines. Most of them involve a justified true belief that apparently fails to amount to knowledge because the belief's justification is in some sense not relevant to its truth. These counterexamples have provoked very diverse responses. Some theorists think that one only needs to modify one's conception of justification to avoid them. But the more common approach is to search for an additional criterion. On this view, all cases of knowledge involve a justified true belief, but some justified true beliefs do not amount to knowledge since they lack this additional feature. There are diverse suggestions for this fourth criterion. Some epistemologists require that no false belief is involved in the justification or that no defeater of the belief is present. A different approach is to require that the belief tracks truth, i.e. that the person would not have the belief if it were false. Some even require that the justification be infallible, i.e. that it necessitates the belief's truth. A quite different approach is to affirm that the justified-true-belief account of knowledge is deeply flawed and to seek a complete reconceptualization of knowledge. These reconceptualizations often do not require justification at all. One such approach is to require that the true belief was produced by a reliable process. Naturalized epistemologists often hold that the believed fact has to cause the belief. Virtue theorists are also interested in how the belief is produced. For them, the belief must be a manifestation of a cognitive virtue. The primary value problem is to determine why knowledge should be more valuable than simply true belief. In Plato's Meno, Socrates points out to Meno that a man who knew the way to Larissa could lead others there correctly. But so, too, could a man who had true beliefs about how to get there, even if he had not gone there or had any knowledge of Larissa. Socrates says that it seems that both knowledge and true opinion can guide action. Meno then wonders why knowledge is valued more than true belief and why knowledge and true belief are different. Socrates responds that knowledge, unlike belief, must be ‘tied down’ to the truth, like the mythical tethered statues of Daedalus. More generally, the problem is to identify what (if anything) makes knowledge more valuable than a mere minimal conjunction of its components such as mere true belief or justified true belief. Other components considered besides belief, truth and justification are safety, sensitivity, statistical likelihood, and any anti-Gettier condition. This is done within analyses that conceive of knowledge as divided into components. Knowledge-first epistemological theories, which posit knowledge as fundamental, are notable exceptions to these kinds of analyses. The value problem re-emerged in the philosophical literature on epistemology in the twenty-first century following the rise of virtue epistemology in the 1980s, partly because of the obvious link to the concept of value in ethics. In contemporary philosophy, epistemologists including Ernest Sosa, John Greco, Jonathan Kvanvig, Linda Zagzebski, and Duncan Pritchard have defended virtue epistemology as a solution to the value problem. 
They argue that epistemology should also evaluate the "properties" of people as epistemic agents (i.e. intellectual virtues), rather than merely the properties of propositions and propositional mental attitudes. The value problem has been presented as an argument against epistemic reliabilism by Linda Zagzebski, Wayne Riggs, and Richard Swinburne, among others. Zagzebski analogizes the value of knowledge to the value of espresso produced by an espresso maker: "The liquid in this cup is not improved by the fact that it comes from a reliable espresso maker. If the espresso tastes good, it makes no difference if it comes from an unreliable machine." For Zagzebski, the value of knowledge deflates to the value of mere true belief. She assumes that reliability in itself has no value or disvalue, but Goldman and Olsson disagree. They point out that Zagzebski's conclusion rests on the assumption of veritism: all that matters is the acquisition of true belief. To the contrary, they argue that a reliable process for acquiring a true belief adds value to the mere true belief by making it more likely that future beliefs of a similar kind will be true. By analogy, having a reliable espresso maker that produced a good cup of espresso would be more valuable than having an unreliable one that luckily produced a good cup, because the reliable one would be more likely to produce good cups in the future. The value problem is important to assessing the adequacy of theories of knowledge that conceive of knowledge as consisting of true belief and other components. According to Kvanvig, an adequate account of knowledge should resist counterexamples and allow an explanation of the value of knowledge over mere true belief. Should a theory of knowledge fail to do so, it would prove inadequate. One of the more influential responses to the problem is that knowledge is not particularly valuable and is not what ought to be the main focus of epistemology. Instead, epistemologists ought to focus on other mental states, such as understanding. Advocates of virtue epistemology have argued that the value of knowledge comes from an internal relationship between the knower and the mental state of believing. There are many proposed sources of knowledge and justified belief which we take to be actual sources of knowledge in our everyday lives. Some of the most commonly discussed include perception, reason, memory, and testimony. As mentioned above, epistemologists draw a distinction between what can be known a priori (independently of experience) and what can only be known a posteriori (through experience). Much of what we call a priori knowledge is thought to be attained through reason alone, as featured prominently in rationalism. This might also include a non-rational faculty of intuition, as defended by proponents of innatism. In contrast, a posteriori knowledge is derived entirely through experience or as a result of experience, as emphasized in empiricism. This also includes cases where knowledge can be traced back to an earlier experience, as in memory or testimony. A way to look at the difference between the two is through an example. Bruce Russell gives two propositions and asks the reader to decide which one they believe more. Option A: All crows are birds. Option B: All crows are black. If you believe option A, then you are a priori justified in believing it because you do not have to see a crow to know it is a bird. 
If you believe option B, then you are a posteriori justified in believing it because you have seen many crows and therefore know that they are black. He goes on to say that it does not matter whether the statements are true; what matters is only which one you believe. The idea of a priori knowledge is that it is based on intuition or rational insights. Laurence BonJour says in his book The Structure of Empirical Knowledge that a "rational insight is an immediate, non-inferential grasp, apprehension or 'seeing' that some proposition is necessarily true." (3) Going back to the crow example, on BonJour's definition the reason you would believe option A is that you have immediate knowledge that a crow is a bird, without ever having experienced one. Evolutionary psychology takes a novel approach to the problem. It says that there is an innate predisposition for certain types of learning. "Only small parts of the brain resemble a tabula rasa; this is true even for human beings. The remainder is more like an exposed negative waiting to be dipped into a developer fluid". Immanuel Kant, in his Critique of Pure Reason, drew a distinction between "analytic" and "synthetic" propositions. He contended that some propositions are such that we can know they are true just by understanding their meaning. For example, consider, "My father's brother is my uncle." We can know it is true solely by virtue of our understanding of what its terms mean. Philosophers call such propositions "analytic". Synthetic propositions, on the other hand, have distinct subjects and predicates. An example would be, "My father's brother has black hair." Kant stated that all mathematical and scientific statements are synthetic a priori propositions: they are necessarily true, yet our knowledge of the attributes of their mathematical or physical subjects can be obtained only by logical inference. While this distinction is first and foremost about meaning and is therefore most relevant to the philosophy of language, the distinction has significant epistemological consequences, seen most prominently in the works of the logical positivists. In particular, if the set of propositions which can only be known a posteriori is coextensive with the set of propositions which are synthetically true, and if the set of propositions which can be known a priori is coextensive with the set of propositions which are analytically true (or in other words, which are true by definition), then there can only be two kinds of successful inquiry: logico-mathematical inquiry, which investigates what is true by definition, and empirical inquiry, which investigates what is true in the world. Most notably, this would exclude the possibility that branches of philosophy like metaphysics could ever provide informative accounts of what actually exists. The American philosopher W. V. O. Quine, in his paper "Two Dogmas of Empiricism", famously challenged the analytic-synthetic distinction, arguing that the boundary between the two is too blurry to provide a clear division between propositions that are true by definition and propositions that are not. While some contemporary philosophers take themselves to have offered more sustainable accounts of the distinction that are not vulnerable to Quine's objections, there is no consensus about whether or not these succeed. Science is often considered to be a refined, formalized, systematic, institutionalized form of the pursuit and acquisition of empirical knowledge.
As such, the philosophy of science may be viewed as an application of the principles of epistemology and as a foundation for epistemological inquiry. The regress problem (also known as Agrippa's trilemma) is the problem of providing a complete logical foundation for human knowledge. The traditional way of supporting a rational argument is to appeal to other rational arguments, typically using chains of reason and rules of logic. A classic example that goes back to Aristotle is deducing that Socrates is mortal. We have a logical rule that says "All humans are mortal" and an assertion that "Socrates is human", and we deduce that Socrates is mortal. In this example, how do we know that Socrates is human? Presumably we apply other rules, such as "All born from human females are human". This then leaves open the question of how we know that all born from human females are human. This is the regress problem: how can we eventually terminate a logical argument with some statements that do not require further justification but can still be considered rational and justified? As John Pollock stated: ... to justify a belief one must appeal to a further justified belief. This means that one of two things can be the case. Either there are some beliefs that we can be justified for holding, without being able to justify them on the basis of any other belief, or else for each justified belief there is an infinite regress of (potential) justification [the nebula theory]. On this theory there is no rock bottom of justification. Justification just meanders in and out through our network of beliefs, stopping nowhere. The apparent impossibility of completing an infinite chain of reasoning is thought by some to support skepticism. It is also the impetus for Descartes's famous dictum: I think, therefore I am. Descartes was looking for some logical statement that could be true without appeal to other statements. Many epistemologists studying justification have attempted to argue for various types of chains of reasoning that can escape the regress problem. Foundationalists respond to the regress problem by asserting that certain "foundations" or "basic beliefs" support other beliefs but do not themselves require justification from other beliefs. These beliefs might be justified because they are self-evident, infallible, or derive from reliable cognitive mechanisms. Perception, memory, and a priori intuition are often considered possible examples of basic beliefs. The chief criticism of foundationalism is that if a belief is not supported by other beliefs, accepting it may be arbitrary or unjustified. Another response to the regress problem is coherentism, which is the rejection of the assumption that the regress proceeds according to a pattern of linear justification. To avoid the charge of circularity, coherentists hold that an individual belief is justified circularly by the way it fits together (coheres) with the rest of the belief system of which it is a part. This theory has the advantage of avoiding the infinite regress without claiming special, possibly arbitrary status for some particular class of beliefs. Yet, since a system can be coherent while also being wrong, coherentists face the difficulty of ensuring that the whole system corresponds to reality. Additionally, most logicians agree that any argument that is circular is, at best, only trivially valid. That is, to be illuminating, arguments must operate with information from multiple premises, not simply conclude by reiterating a premise.
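The claim of trivial validity can be made precise. As a sketch in standard model-theoretic notation (an illustration, not a formula drawn from the authors discussed here): if the conclusion of an argument already appears among its premises, then no interpretation can make all the premises true while making the conclusion false, so the entailment holds; the argument is valid but establishes nothing new.

```latex
% Sketch: an argument whose conclusion p_i is itself among its
% premises p_1, ..., p_n is valid in the model-theoretic sense,
% since no interpretation makes every premise true and the
% conclusion false; but it is uninformative for the same reason.
\[
  p_1,\, p_2,\, \ldots,\, p_n \;\models\; p_i
  \qquad \text{for any } i \in \{1, \ldots, n\}
\]
```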
Nigel Warburton writes in Thinking from A to Z that "[c]ircular arguments are not invalid; in other words, from a logical point of view there is nothing intrinsically wrong with them. However, they are, when viciously circular, spectacularly uninformative." An alternative resolution to the regress problem is known as "infinitism". Infinitists take the infinite series to be merely potential, in the sense that an individual may have indefinitely many reasons available to them, without having consciously thought through all of these reasons when the need arises. This position is motivated in part by the desire to avoid what is seen as the arbitrariness and circularity of its chief competitors, foundationalism and coherentism. The most prominent defense of infinitism has been given by Peter Klein. An intermediate position, known as "foundherentism", is advanced by Susan Haack. Foundherentism is meant to unify foundationalism and coherentism. Haack explains the view by using a crossword puzzle as an analogy. Whereas, for example, infinitists regard the regress of reasons as taking the form of a single line that continues indefinitely, Haack has argued that chains of properly justified beliefs look more like a crossword puzzle, with various different lines mutually supporting each other. Thus, Haack's view leaves room for both chains of beliefs that are "vertical" (terminating in foundational beliefs) and chains that are "horizontal" (deriving their justification from coherence with beliefs that are also members of foundationalist chains of belief). Empiricism is a view in the theory of knowledge which focuses on the role of experience, especially experience based on perceptual observations by the senses, in the generation of knowledge. Certain forms of empiricism exempt disciplines such as mathematics and logic from these requirements. There are many variants of empiricism, including British empiricism, logical empiricism, phenomenalism, and some versions of common sense philosophy. Most forms of empiricism give epistemologically privileged status to sensory impressions or sense data, although this plays out very differently in different cases. Some of the most famous historical empiricists include John Locke, David Hume, George Berkeley, Francis Bacon, John Stuart Mill, Rudolf Carnap, and Bertrand Russell. Rationalism is the epistemological view that reason is the chief source of knowledge and the main determinant of what constitutes knowledge. More broadly, it can also refer to any view which appeals to reason as a source of knowledge or justification. Rationalism is one of the two classical views in epistemology, the other being empiricism. Rationalists claim that the mind, through the use of reason, can directly grasp certain truths in various domains, including logic, mathematics, ethics, and metaphysics. Rationalist views can range from modest positions in mathematics and logic (such as that of Gottlob Frege) to ambitious metaphysical systems (such as that of Baruch Spinoza). Some of the most famous rationalists include Plato, René Descartes, Baruch Spinoza, and Gottfried Leibniz. Skepticism is a position that questions the possibility of human knowledge, either in particular domains or on a general level. Skepticism does not refer to any one specific school of philosophy, but is rather a thread that runs through many epistemological debates.
Ancient Greek skepticism began during the Hellenistic period in philosophy, which featured both Pyrrhonism (notably defended by Pyrrho, Sextus Empiricus, and Aenesidemus) and Academic skepticism (notably defended by Arcesilaus and Carneades). Among ancient Indian philosophers, skepticism was notably defended by the Ajñana school and by the Buddhist Madhyamika tradition. In modern philosophy, René Descartes's famous inquiry into mind and body began as an exercise in skepticism, in which he started by trying to doubt all purported cases of knowledge in order to search for something that was known with absolute certainty. Epistemic skepticism questions whether knowledge is possible at all. Generally speaking, skeptics argue that knowledge requires certainty, and that most or all of our beliefs are fallible (meaning that our grounds for holding them always, or almost always, fall short of certainty), which would together entail that knowledge is always or almost always impossible for us. Whether knowledge is characterized as strong or weak depends on a person's viewpoint and on how they characterize knowledge. Much of modern epistemology is derived from attempts to better understand and address philosophical skepticism. One of the oldest forms of epistemic skepticism can be found in Agrippa's trilemma (named after the Pyrrhonist philosopher Agrippa the Skeptic), which demonstrates that certainty cannot be achieved with regard to beliefs. Pyrrhonism dates back to Pyrrho of Elis from the 4th century BCE, although most of what we know about Pyrrhonism today is from the surviving works of Sextus Empiricus. Pyrrhonists claim that for any argument for a non-evident proposition, an equally convincing argument for a contradictory proposition can be produced. Pyrrhonists do not dogmatically deny the possibility of knowledge, but instead point out that beliefs about non-evident matters cannot be substantiated. The Cartesian evil demon problem, first raised by René Descartes, supposes that our sensory impressions may be controlled by some external power rather than the result of ordinary veridical perception. In such a scenario, nothing we sense would actually exist; it would instead be mere illusion. As a result, we would never be able to know anything about the world, since we would be systematically deceived about everything. The conclusion often drawn from evil demon skepticism is that even if we are not completely deceived, all of the information provided by our senses is still compatible with skeptical scenarios in which we are completely deceived, and that we must therefore either be able to exclude the possibility of deception or else must deny the possibility of infallible knowledge (that is, knowledge which is completely certain) beyond our immediate sensory impressions. While the view that no beliefs are beyond doubt other than our immediate sensory impressions is often ascribed to Descartes, he in fact thought that we can exclude the possibility that we are systematically deceived, although his reasons for thinking this are based on a highly contentious ontological argument for the existence of a benevolent God who would not allow such deception to occur. Epistemological skepticism can be classified as either "mitigated" or "unmitigated" skepticism. Mitigated skepticism rejects "strong" or "strict" knowledge claims but approves of weaker ones, which can be considered "virtual knowledge", but only with regard to justified beliefs.
Unmitigated skepticism rejects claims of both virtual and strong knowledge. Whether knowledge is characterized as strong, weak, virtual, or genuine can be determined differently depending on a person's viewpoint as well as on their characterization of knowledge. Some of the most notable attempts to respond to unmitigated skepticism include direct realism, disjunctivism, common sense philosophy, pragmatism, fideism, and fictionalism. Pragmatism is a fallibilist epistemology that emphasizes the role of action in knowing. Different interpretations of pragmatism variously emphasize: truth as the final outcome of ideal scientific inquiry and experimentation, truth as closely related to usefulness, experience as transacting with (instead of representing) nature, and human practices as the foundation of language. Pragmatism's origins are often attributed to Charles Sanders Peirce, William James, and John Dewey. In 1878, Peirce formulated the maxim: "Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object." William James suggested that through a pragmatist epistemology, theories "become instruments, not answers to enigmas in which we can rest". In James's pragmatic method, which he adapted from Peirce, metaphysical disputes can be settled by tracing the practical consequences of the different sides of the argument. If this process does not resolve the dispute, then "the dispute is idle". Contemporary versions of pragmatism have been developed by thinkers such as Richard Rorty and Hilary Putnam. Rorty proposed that values were historically contingent and dependent upon their utility within a given historical period. Contemporary philosophers working in pragmatism are called neopragmatists, and also include Nicholas Rescher, Robert Brandom, Susan Haack, and Cornel West. In certain respects an intellectual descendant of pragmatism, naturalized epistemology considers the evolutionary role of knowledge for agents living and evolving in the world. It de-emphasizes the questions around justification and truth, and instead asks, empirically, how reliable beliefs are formed and what role evolution played in the development of such processes. It suggests a more empirical approach to the subject as a whole, leaving behind philosophical definitions and consistency arguments, and instead using psychological methods to study and understand how "knowledge" is actually formed and used in the natural world. As such, it does not attempt to answer the analytic questions of traditional epistemology, but rather to replace them with new empirical ones. Naturalized epistemology was first proposed in "Epistemology Naturalized", a seminal paper by W. V. O. Quine. A less radical view has been defended by Hilary Kornblith in Knowledge and its Place in Nature, in which he seeks to turn epistemology towards empirical investigation without completely abandoning traditional epistemic concepts. Epistemic relativism is the view that what is true, rational, or justified for one person need not be true, rational, or justified for another person. Epistemic relativists therefore assert that while there are relative facts about truth, rationality, justification, and so on, there is no perspective-independent fact of the matter. Note that this is distinct from epistemic contextualism, which holds that the meaning of epistemic terms varies across contexts (e.g.
"I know" might mean something different in everyday contexts and skeptical contexts). In contrast, epistemic relativism holds that the relevant facts vary, not just linguistic meaning. Relativism about truth may also be a form of ontological relativism, insofar as relativists about truth hold that facts about what exists vary based on perspective. Constructivism is a view in philosophy according to which all "knowledge is a compilation of human-made constructions", "not the neutral discovery of an objective truth". Whereas objectivism is concerned with the "object of our knowledge", constructivism emphasizes "how we construct knowledge". Constructivism proposes new definitions for knowledge and truth, which emphasize intersubjectivity rather than objectivity, and viability rather than truth. The constructivist point of view is in many ways comparable to certain forms of pragmatism. Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs. Bayesians have applied these fundamental principles to various epistemological topics but Bayesianism does not cover all topics of traditional epistemology. Feminist epistemology is a subfield of epistemology which applies feminist theory to epistemological questions. It began to emerge as a distinct subfield in the 20th century. Prominent feminist epistemologists include Miranda Fricker (who developed the concept of epistemic injustice), Donna Haraway (who first proposed the concept of situated knowledge), Sandra Harding, and Elizabeth Anderson. Harding proposes that feminist epistemology can be broken into three distinct categories: feminist empiricism, standpoint epistemology, and postmodern epistemology. Feminist epistemology has also played a significant role in the development of many debates in social epistemology. Epistemicide is a term used in decolonisation studies that describes the killing of knowledge systems under systemic oppression such as colonisation and slavery. The term was coined by Boaventura de Sousa Santos, who presented the significance of such physical violence creating the centering of Western knowledge in the current world. This term challenges the thought of what is seen as knowledge in academia today. Indian schools of philosophy, such as the Hindu Nyaya and Carvaka schools, and the Jain and Buddhist philosophical schools, developed an epistemological tradition independently of the Western philosophical tradition called "pramana". Pramana can be translated as "instrument of knowledge" and refers to various means or sources of knowledge that Indian philosophers held to be reliable. 
Each school of Indian philosophy had its own theories about which pramanas were valid means to knowledge and which were unreliable (and why). A Vedic text, Taittirīya Āraṇyaka (c. 9th–6th centuries BCE), lists "four means of attaining correct knowledge": smṛti ("tradition" or "scripture"), pratyakṣa ("perception"), aitihya ("communication by one who is expert", or "tradition"), and anumāna ("reasoning" or "inference"). In the Indian traditions, the most widely discussed pramanas are: Pratyakṣa (perception), Anumāṇa (inference), Upamāṇa (comparison and analogy), Arthāpatti (postulation, derivation from circumstances), Anupalabdi (non-perception, negative/cognitive proof) and Śabda (word, testimony of past or present reliable experts). While the Nyaya school (beginning with the Nyāya Sūtras of Gotama, between the 6th century BCE and the 2nd century CE) was a proponent of realism and supported four pramanas (perception, inference, comparison/analogy and testimony), the Buddhist epistemologists (Dignaga and Dharmakirti) generally accepted only perception and inference. The Carvaka school of materialists accepted only the pramana of perception, and hence were among the first empiricists in the Indian traditions. Another school, the Ajñana, included notable proponents of philosophical skepticism. The theory of knowledge of the Buddha in the early Buddhist texts has been interpreted as a form of pragmatism as well as a form of correspondence theory. Likewise, the Buddhist philosopher Dharmakirti has been interpreted both as holding a form of pragmatism and as holding a correspondence theory, on account of his view that what is true is what has effective power (arthakriya). The Buddhist Madhyamika school's theory of emptiness (shunyata), meanwhile, has been interpreted as a form of philosophical skepticism. The main contribution to epistemology by the Jains has been their theory of "many-sidedness" or "multi-perspectivism" (Anekantavada), which says that since the world is multifaceted, any single viewpoint is limited (naya, a partial standpoint). This has been interpreted as a kind of pluralism or perspectivism. According to Jain epistemology, none of the pramanas gives absolute or perfect knowledge, since they are each limited points of view. Formal epistemology uses formal tools and methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification. Historical epistemology is the study of the historical conditions of, and changes in, different kinds of knowledge. There are many versions of or approaches to historical epistemology, which is distinct from the history of epistemology. Twentieth-century French historical epistemologists like Abel Rey, Gaston Bachelard, Jean Cavaillès, and Georges Canguilhem focused specifically on changes in scientific discourse. Metaepistemology is the metaphilosophical study of the methods, aims, and subject matter of epistemology. In general, metaepistemology aims to better understand our first-order epistemological inquiry.
Some goals of metaepistemology are identifying inaccurate assumptions made in epistemological debates and determining whether the questions asked in mainline epistemology are the right epistemological questions to be asking. Social epistemology deals with questions about knowledge in contexts where our knowledge attributions cannot be explained by simply examining individuals in isolation from one another, meaning that the scope of our knowledge attributions must be widened to include broader social contexts. It also explores the ways in which interpersonal beliefs can be justified in social contexts. The most common topics discussed in contemporary social epistemology are testimony, which deals with the conditions under which a belief "x is true" which resulted from being told "x is true" constitutes knowledge; peer disagreement, which deals with when and how one should revise one's beliefs in light of other people holding beliefs that contradict one's own; and group epistemology, which deals with what it means to attribute knowledge to groups rather than individuals, and when group knowledge attributions are appropriate. Contemporary philosophers consider epistemology a major subfield of philosophy, along with ethics, logic, and metaphysics, which are more ancient subdivisions of philosophy. But in the early and mid 20th century, epistemology was not seen as a field in its own right. Quine viewed epistemology as a chapter of psychology. Russell viewed it as a mix of psychology and logic. William Alston presents a similar contemporary perspective, but in a historically oriented manner: for him, epistemology has historically been a part of cognitive psychology. The claim that psychology is a background for epistemology is often called its naturalization. The epistemologies of Russell and Quine in the 20th century were naturalized in that way. More recently, Laurence BonJour has rejected the idea that this kind of psychologism is needed in contemporary epistemology. His argument is that the psychological elements that epistemology requires, which he refers to as minimal psychologism, conceptual psychologism, and meliorative psychologism, are self-evident within contemporary (traditional) epistemology: such psychologism "involves at most a quite minor departure from traditional, nonnaturalized epistemology" or "poses no real threat to traditional epistemology". On this view, epistemology has integrated all the required psychological aspects, which are considered noncontroversial, and can be severed from psychologism. For Luciano Floridi, "at the turn of the [20th] century there had been a resurgence of interest in epistemology through an anti-metaphysical, naturalist, reaction against the nineteenth-century development of Neo-Kantian and Neo-Hegelian idealism." On that reading, contemporary epistemology, which for BonJour no longer needs to be "naturalized", emerged from a naturalization that rejected the metaphysical perspectives associated with Kant and Hegel. Historians of philosophy traditionally divide the modern period into a dispute between empiricists (including Francis Bacon, John Locke, David Hume, and George Berkeley) and rationalists (including René Descartes, Baruch Spinoza, and Gottfried Leibniz). The debate between them has often been framed using the question of whether knowledge comes primarily from sensory experience (empiricism), or whether a significant portion of our knowledge is derived entirely from our faculty of reason (rationalism).
According to some scholars, this dispute was resolved in the late 18th century by Immanuel Kant, whose transcendental idealism famously made room for the view that "though all our knowledge begins with experience, it by no means follows that all [knowledge] arises out of experience". In the Meno, the definition of knowledge as justified true belief appears for the first time. In other words, a belief must be accompanied by an account of why it is true in order to count as knowledge, beyond just happening to be right. A number of important epistemological concerns also appeared in the works of Aristotle. During the subsequent Hellenistic period, philosophical schools began to appear which had a greater focus on epistemological questions, often in the form of philosophical skepticism. For instance, the Hellenistic Skeptics, especially Sextus Empiricus of the Pyrrhonian school, rejected justification on the basis of Agrippa's trilemma and so, in the view of Irwin (2010), rejected the possibility of knowledge as well. The Pyrrhonian school of Pyrrho and Sextus Empiricus held that eudaimonia (flourishing, happiness, or "the good life") could be attained through the application of epoché (suspension of judgment) regarding all non-evident matters. Pyrrhonism was particularly concerned with undermining the epistemological dogmas of Stoicism and Epicureanism. The other major school of Hellenistic skepticism was Academic skepticism, most notably defended by Carneades and Arcesilaus, which predominated in the Platonic Academy for almost two centuries. In ancient India, the Ajñana school of ancient Indian philosophy promoted skepticism. Ajñana was a Śramaṇa movement and a major rival of early Buddhism, Jainism and the Ājīvika school. They held that it was impossible to obtain knowledge of a metaphysical nature or to ascertain the truth value of philosophical propositions; and even if knowledge was possible, it was useless and disadvantageous for final salvation. They specialized in refutation without propagating any positive doctrine of their own. During the Islamic Golden Age, one of the most prominent and influential philosophers, theologians, jurists, logicians and mystics in Islamic epistemology was Al-Ghazali. During his life, he wrote over 70 books on science, Islamic reasoning and Sufism. Al-Ghazali's book The Incoherence of the Philosophers marked a defining moment in Islamic epistemology. He held the conviction that all events and interactions are not the result of material conjunctions but are the immediate and present will of God. After the ancient philosophical era but before the modern philosophical era, a number of (non-Islamic) medieval philosophers also engaged with epistemological questions at length. Most notable among the medievals for their contributions to epistemology were Thomas Aquinas, John Duns Scotus, and William of Ockham. According to historian of philosophy Jan Woleński, the development of philosophy divides, with some exceptions, into a pre-Cartesian, ontologically oriented period and a post-Cartesian, epistemologically oriented one. There are a number of different methods that contemporary scholars use when trying to understand the relationship between past epistemology and contemporary epistemology. One of the most contentious questions is this: "Should we assume that the problems of epistemology are perennial, and that trying to reconstruct and evaluate Plato's or Hume's or Kant's arguments is meaningful for current debates, too?"
Similarly, there is also a question of whether contemporary philosophers should aim to rationally reconstruct and evaluate historical views in epistemology, or to merely describe them.
[ { "paragraph_id": 0, "text": "Epistemology (/ɪˌpɪstəˈmɒlədʒi/ ; from Ancient Greek ἐπιστήμη (epistḗmē) 'knowledge', and -logy) is the branch of philosophy concerned with knowledge. Epistemologists study the nature, origin, and scope of knowledge, epistemic justification, the rationality of belief, and various related issues. Debates in (contemporary) epistemology are generally clustered around four core areas:", "title": "" }, { "paragraph_id": 1, "text": "In these debates and others, epistemology aims to answer questions such as \"What do people know?\", \"What does it mean to say that people know something?\", \"What makes justified beliefs justified?\", and \"How do people know that they know?\" Specialties in epistemology ask questions such as \"How can people create formal models about issues related to knowledge?\" (in formal epistemology), \"What are the historical conditions of changes in different kinds of knowledge?\" (in historical epistemology), \"What are the methods, aims, and subject matter of epistemological inquiry?\" (in metaepistemology), and \"How do people know together?\" (in social epistemology).", "title": "" }, { "paragraph_id": 2, "text": "The etymology of the word epistemology is derived from the ancient Greek epistēmē, meaning \"knowledge, understanding, skill, scientific knowledge\", and the English suffix -ology, meaning \"the science or discipline of (what is indicated by the first element)\". The word \"epistemology\" first appeared in 1847, in a review in New York's Eclectic Magazine :", "title": "Etymology" }, { "paragraph_id": 3, "text": "The title of one of the principal works of Fichte is 'Wissenschaftslehre,' which, after the analogy of technology ... we render epistemology.", "title": "Etymology" }, { "paragraph_id": 4, "text": "The word was first used to present a philosophy in English by Scottish philosopher James Frederick Ferrier in 1854. It was the title of the first section of his Institutes of Metaphysics:", "title": "Etymology" }, { "paragraph_id": 5, "text": "This section of the science is properly termed the Epistemology—the doctrine or theory of knowing, just as ontology is the science of being... It answers the general question, 'What is knowing and the known?'—or more shortly, 'What is knowledge?'", "title": "Etymology" }, { "paragraph_id": 6, "text": "Introductory classes to epistemology often start their analysis of knowledge by pointing out three different senses of \"knowing\" something: \"knowing that\" (knowing the truth of propositions), \"knowing how\" (understanding how to perform certain actions), and \"knowing by acquaintance\" (directly perceiving an object, being familiar with it, or otherwise coming into contact with it). Epistemology is primarily concerned with the first of these forms of knowledge, propositional knowledge. All three senses of \"knowing\" can be seen in our ordinary use of the word. In mathematics, you can know that 2 + 2 = 4, but there is also knowing how to add two numbers, and knowing a person (e.g., knowing other persons, or knowing oneself), place (e.g., one's hometown), thing (e.g., cars), or activity (e.g., addition). While these distinctions are not explicit in English, they are explicitly made in other languages, including French, Italian, Portuguese, Spanish, Romanian, German and Dutch (although some languages closely related to English have been said to retain these verbs, such as Scots).. 
In French, Portuguese, Spanish, Romanian, German, and Dutch 'to know (a person)' is translated using connaître, conhecer, conocer, a cunoaște and kennen (both German and Dutch) respectively, whereas 'to know (how to do something)' is translated using savoir, saber (both Portuguese and Spanish), a şti, wissen, and weten. Modern Greek has the verbs γνωρίζω (gnorízo) and ξέρω (kséro). Italian has the verbs conoscere and sapere and the nouns for 'knowledge' are conoscenza and sapienza. German has the verbs wissen and kennen; the former implies knowing a fact, the latter knowing in the sense of being acquainted with and having a working knowledge of; there is also a noun derived from kennen, namely Erkennen, which has been said to imply knowledge in the form of recognition or acknowledgment. The verb itself implies a process: you have to go from one state to another, from a state of \"not-erkennen\" to a state of true erkennen. This verb seems the most appropriate in terms of describing the \"episteme\" in one of the modern European languages, hence the German name \"Erkenntnistheorie\". The theoretical interpretation and significance of these linguistic issues remains controversial.", "title": "Central concepts" }, { "paragraph_id": 7, "text": "In his paper On Denoting and his later book Problems of Philosophy, Bertrand Russell brought a great deal of attention to the distinction between \"knowledge by description\" and \"knowledge by acquaintance\". Gilbert Ryle is similarly credited with bringing more attention to the distinction between knowing how and knowing that in The Concept of Mind. In Personal Knowledge, Michael Polanyi argues for the epistemological relevance of knowledge how and knowledge that; using the example of the act of balance involved in riding a bicycle, he suggests that the theoretical knowledge of the physics involved in maintaining a state of balance cannot substitute for the practical knowledge of how to ride, and that it is important to understand how both are established and grounded. This position is essentially Ryle's, who argued that a failure to acknowledge the distinction between \"knowledge that\" and \"knowledge how\" leads to infinite regress.", "title": "Central concepts" }, { "paragraph_id": 8, "text": "One of the most important distinctions in epistemology is between what can be known a priori (independently of experience) and what can be known a posteriori (through experience). The terms originate from the Analytic methods of Aristotle's Organon, and may be roughly defined as follows:", "title": "Central concepts" }, { "paragraph_id": 9, "text": "Views that emphasize the importance of a priori knowledge are generally classified as rationalist. Views that emphasize the importance of a posteriori knowledge are generally classified as empiricist.", "title": "Central concepts" }, { "paragraph_id": 10, "text": "One of the core concepts in epistemology is belief. A belief is an attitude that a person holds regarding anything that they take to be true. For instance, to believe that snow is white is comparable to accepting the truth of the proposition \"snow is white\". Beliefs can be occurrent (e.g. a person actively thinking \"snow is white\"), or they can be dispositional (e.g. a person who if asked about the color of snow would assert \"snow is white\"). While there is not universal agreement about the nature of belief, most contemporary philosophers hold the view that a disposition to express belief B qualifies as holding the belief B. 
There are various different ways that contemporary philosophers have tried to describe beliefs, including as representations of ways that the world could be (Jerry Fodor), as dispositions to act as if certain things are true (Roderick Chisholm), as interpretive schemes for making sense of someone's actions (Daniel Dennett and Donald Davidson), or as mental states that fill a particular function (Hilary Putnam). Some have also attempted to offer significant revisions to our notion of belief, including eliminativists about belief who argue that there is no phenomenon in the natural world which corresponds to our folk psychological concept of belief (Paul Churchland) and formal epistemologists who aim to replace our bivalent notion of belief (\"either I have a belief or I don't have a belief\") with the more permissive, probabilistic notion of credence (\"there is an entire spectrum of degrees of belief, not a simple dichotomy between belief and non-belief\").", "title": "Central concepts" }, { "paragraph_id": 11, "text": "While belief plays a significant role in epistemological debates surrounding knowledge and justification, it also has many other philosophical debates in its own right. Notable debates include: \"What is the rational way to revise one's beliefs when presented with various sorts of evidence?\"; \"Is the content of our beliefs entirely determined by our mental states, or do the relevant facts have any bearing on our beliefs (e.g. if I believe that I'm holding a glass of water, is the non-mental fact that water is H2O part of the content of that belief)?\"; \"How fine-grained or coarse-grained are our beliefs?\"; and \"Must it be possible for a belief to be expressible in language, or are there non-linguistic beliefs?\"", "title": "Central concepts" }, { "paragraph_id": 12, "text": "Truth is the property or state of being in accordance with facts or reality. On most views, truth is the correspondence of language or thought to a mind-independent world. This is called the correspondence theory of truth. Among philosophers who think that it is possible to analyze the conditions necessary for knowledge, virtually all of them accept that truth is such a condition. There is much less agreement about the extent to which a knower must know why something is true in order to know. On such views, something being known implies that it is true. However, this should not be confused for the more contentious view that one must know that one knows in order to know (the KK principle).", "title": "Central concepts" }, { "paragraph_id": 13, "text": "Epistemologists disagree about whether belief is the only truth-bearer. Other common suggestions for things that can bear the property of being true include propositions, sentences, thoughts, utterances, and judgments. Plato, in his Gorgias, argues that belief is the most commonly invoked truth-bearer.", "title": "Central concepts" }, { "paragraph_id": 14, "text": "Many of the debates regarding truth are at the crossroads of epistemology and logic. Some contemporary debates regarding truth include: How do we define truth? Is it even possible to give an informative definition of truth? What things are truth-bearers and are therefore capable of being true or false? Are truth and falsity bivalent, or are there other truth values? What are the criteria of truth that allow us to identify it and to distinguish it from falsity? What role does truth play in constituting knowledge? 
And is truth absolute, or is it merely relative to one's perspective?", "title": "Central concepts" }, { "paragraph_id": 15, "text": "As the term \"justification\" is used in epistemology, a belief is justified if one has good reason for holding it. Loosely speaking, justification is the reason that someone holds a rationally admissible belief, on the assumption that it is a good reason for holding it. Sources of justification might include perceptual experience (the evidence of the senses), reason, and authoritative testimony, among others. Importantly however, a belief being justified does not guarantee that the belief is true, since a person could be justified in forming beliefs based on very convincing evidence that was nonetheless deceiving.", "title": "Central concepts" }, { "paragraph_id": 16, "text": "A central debate about the nature of justification is a debate between epistemological externalists on the one hand and epistemological internalists on the other. While epistemic externalism first arose in attempts to overcome the Gettier problem, it has flourished in the time since as an alternative way of conceiving of epistemic justification. The initial development of epistemic externalism is often attributed to Alvin Goldman, although numerous other philosophers have worked on the topic in the time since.", "title": "Central concepts" }, { "paragraph_id": 17, "text": "Externalists hold that factors deemed \"external\", meaning outside of the psychological states of those who gain knowledge, can be conditions of justification. For example, an externalist response to the Gettier problem is to say that for a justified true belief to count as knowledge, there must be a link or dependency between the belief and the state of the external world. Usually, this is understood to be a causal link. Such causation, to the extent that it is \"outside\" the mind, would count as an external, knowledge-yielding condition. Internalists, on the other hand, assert that all knowledge-yielding conditions are within the psychological states of those who gain knowledge.", "title": "Central concepts" }, { "paragraph_id": 18, "text": "Though unfamiliar with the internalist/externalist debate himself, many point to René Descartes as an early example of the internalist path to justification. He wrote that because the only method by which we perceive the external world is through our senses, and that, because the senses are not infallible, we should not consider our concept of knowledge infallible. The only way to find anything that could be described as \"indubitably true\", he advocates, would be to see things \"clearly and distinctly\". He argued that if there is an omnipotent, good being who made the world, then it is reasonable to believe that people are made with the ability to know. However, this does not mean that man's ability to know is perfect. God gave man the ability to know but not with omniscience. Descartes said that man must use his capacities for knowledge correctly and carefully through methodological doubt.", "title": "Central concepts" }, { "paragraph_id": 19, "text": "The dictum \"Cogito ergo sum\" (I think, therefore I am) is also commonly associated with Descartes's theory. In his own methodological doubt—doubting everything he previously knew so he could start from a blank slate—the first thing that he could not logically bring himself to doubt was his own existence: \"I do not exist\" would be a contradiction in terms. 
The act of saying that one does not exist assumes that someone must be making the statement in the first place. Descartes could doubt his senses, his body, and the world around him—but he could not deny his own existence, because he was able to doubt and must exist to manifest that doubt. Even if some \"evil genius\" were deceiving him, he would have to exist to be deceived. This one sure point provided him with what he called his Archimedean point, in order to further develop his foundation for knowledge. Simply put, Descartes's epistemological justification depended on his indubitable belief in his own existence and his clear and distinct knowledge of God.", "title": "Central concepts" }, { "paragraph_id": 20, "text": "A central issue in epistemology is the question of what the nature of knowledge is or how to define it. Sometimes the expressions \"theory of knowledge\" and \"analysis of knowledge\" are used specifically for this form of inquiry. The term \"knowledge\" has various meanings in natural language. It can refer to an awareness of facts, as in knowing that Mars is a planet, to a possession of skills, as in knowing how to swim, or to an experiential acquaintance, as in knowing Daniel Craig personally. Factual knowledge, also referred to as propositional knowledge or descriptive knowledge, plays a special role in epistemology. On the linguistic level, it is distinguished from the other forms of knowledge since it can be expressed through a that-clause, i.e. using a formulation like \"They know that...\" followed by the known proposition.", "title": "Defining knowledge" }, { "paragraph_id": 21, "text": "Some features of factual knowledge are widely accepted: it is a form of cognitive success that establishes epistemic contact with reality. However, there are still various disagreements about its exact nature even though it has been studied intensely. Different factors are responsible for these disagreements. Some theorists try to furnish a practically useful definition by describing its most noteworthy and easily identifiable features. Others engage in an analysis of knowledge, which aims to provide a theoretically precise definition that identifies the set of essential features characteristic for all instances of knowledge and only for them. Differences in the methodology may also cause disagreements. In this regard, some epistemologists use abstract and general intuitions in order to arrive at their definitions. A different approach is to start from concrete individual cases of knowledge to determine what all of them have in common. Yet another method is to focus on linguistic evidence by studying how the term \"knowledge\" is commonly used. Different standards of knowledge are further sources of disagreement. A few theorists set these standards very high by demanding that absolute certainty or infallibility is necessary. On such a view, knowledge is a very rare thing. Theorists more in tune with ordinary language usually demand lower standards and see knowledge as something commonly found in everyday life.", "title": "Defining knowledge" }, { "paragraph_id": 22, "text": "The historically most influential definition, discussed since ancient Greek philosophy, characterizes knowledge in relation to three essential features: as (1) a belief that is (2) true and (3) justified. There is still wide acceptance that the first two features are correct, i.e. that knowledge is a mental state that affirms a true proposition. 
However, there is a lot of dispute about the third feature: justification. This feature is usually included to distinguish knowledge from true beliefs that rest on superstition, lucky guesses, or faulty reasoning. This expresses the idea that knowledge is not the same as being right about something. Traditionally, justification is understood as the possession of evidence: a belief is justified if the believer has good evidence supporting it. Such evidence could be a perceptual experience, a memory, or a second belief.", "title": "Defining knowledge" }, { "paragraph_id": 23, "text": "The justified-true-belief account of knowledge came under severe criticism in the second half of the 20th century, when Edmund Gettier proposed various counterexamples. In a famous so-called Gettier-case, a person is driving on a country road. There are many barn façades along this road and only one real barn. But it is not possible to tell the difference between them from the road. The person then stops by a fortuitous coincidence in front of the only real barn and forms the belief that it is a barn. The idea behind this thought experiment is that this is not knowledge even though the belief is both justified and true. The reason is that it is just a lucky accident since the person cannot tell the difference: they would have formed exactly the same justified belief if they had stopped at another site, in which case the belief would have been false.", "title": "Defining knowledge" }, { "paragraph_id": 24, "text": "Various additional examples were proposed along similar lines. Most of them involve a justified true belief that apparently fails to amount to knowledge because the belief's justification is in some sense not relevant to its truth. These counterexamples have provoked very diverse responses. Some theorists think that one only needs to modify one's conception of justification to avoid them. But the more common approach is to search for an additional criterion. On this view, all cases of knowledge involve a justified true belief but some justified true beliefs do not amount to knowledge since they lack this additional feature. There are diverse suggestions for this fourth criterion. Some epistemologists require that no false belief is involved in the justification or that no defeater of the belief is present. A different approach is to require that the belief tracks truth, i.e. that the person would not have the belief if it was false. Some even require that the justification has to be infallible, i.e. that it necessitates the belief's truth.", "title": "Defining knowledge" }, { "paragraph_id": 25, "text": "A quite different approach is to affirm that the justified-true-belief account of knowledge is deeply flawed and to seek a complete reconceptualization of knowledge. These reconceptualizations often do not require justification at all. One such approach is to require that the true belief was produced by a reliable process. Naturalized epistemologists often hold that the believed fact has to cause the belief. Virtue theorists are also interested in how the belief is produced. For them, the belief must be a manifestation of a cognitive virtue.", "title": "Defining knowledge" }, { "paragraph_id": 26, "text": "The primary value problem is to determine why knowledge should be more valuable than simply true belief. In Plato's Meno, Socrates points out to Meno that a man who knew the way to Larissa could lead others there correctly. 
But so, too, could a man who had true beliefs about how to get there, even if he had not gone there or had any knowledge of Larissa. Socrates says that it seems that both knowledge and true opinion can guide action. Meno then wonders why knowledge is valued more than true belief and why knowledge and true belief are different. Socrates responds that knowledge, unlike belief, must be ‘tied down’ to the truth, like the mythical tethered statues of Daedalus.", "title": "The value problem" }, { "paragraph_id": 27, "text": "More generally, the problem is to identify what (if anything) makes knowledge more valuable than a mere minimal conjunction of its components such as mere true belief or justified true belief. Other components considered besides belief, truth and justification are safety, sensitivity, statistical likelihood, and any anti-Gettier condition. This is done within analyses that conceive of knowledge as divided into components. Knowledge-first epistemological theories, which posit knowledge as fundamental, are notable exceptions to these kind of analyses. The value problem re-emerged in the philosophical literature on epistemology in the twenty-first century following the rise of virtue epistemology in the 1980s, partly because of the obvious link to the concept of value in ethics.", "title": "The value problem" }, { "paragraph_id": 28, "text": "In contemporary philosophy, epistemologists including Ernest Sosa, John Greco, Jonathan Kvanvig, Linda Zagzebski, and Duncan Pritchard have defended virtue epistemology as a solution to the value problem. They argue that epistemology should also evaluate the \"properties\" of people as epistemic agents (i.e. intellectual virtues), rather than merely the properties of propositions and propositional mental attitudes.", "title": "The value problem" }, { "paragraph_id": 29, "text": "The value problem has been presented as an argument against epistemic reliabilism by Linda Zagzebski, Wayne Riggs, and Richard Swinburne, among others. Zagzebski analogizes the value of knowledge to the value of espresso produced by an espresso maker: \"The liquid in this cup is not improved by the fact that it comes from a reliable espresso maker. If the espresso tastes good, it makes no difference if it comes from an unreliable machine.\" For Zagzebski, the value of knowledge deflates to the value of mere true belief. She assumes that reliability in itself has no value or disvalue, but Goldman and Olsson disagree. They point out that Zagzebski's conclusion rests on the assumption of veritism: all that matters is the acquisition of true belief. To the contrary, they argue that a reliable process for acquiring a true belief adds value to the mere true belief by making it more likely that future beliefs of a similar kind will be true. By analogy, having a reliable espresso maker that produced a good cup of espresso would be more valuable than having an unreliable one that luckily produced a good cup because the reliable one would more likely produce good future cups compared to the unreliable one.", "title": "The value problem" }, { "paragraph_id": 30, "text": "The value problem is important to assessing the adequacy of theories of knowledge that conceive of knowledge as consisting of true belief and other components. According to Kvanvig, an adequate account of knowledge should resist counterexamples and allow an explanation of the value of knowledge over mere true belief. 
Should a theory of knowledge fail to do so, it would prove inadequate.", "title": "The value problem" }, { "paragraph_id": 31, "text": "One of the more influential responses to the problem is that knowledge is not particularly valuable and is not what ought to be the main focus of epistemology. Instead, epistemologists ought to focus on other mental states, such as understanding. Advocates of virtue epistemology have argued that the value of knowledge comes from an internal relationship between the knower and the mental state of believing.", "title": "The value problem" }, { "paragraph_id": 32, "text": "There are many proposed sources of knowledge and justified belief which we take to be actual sources of knowledge in our everyday lives. Some of the most commonly discussed include perception, reason, memory, and testimony.", "title": "Acquiring knowledge" }, { "paragraph_id": 33, "text": "As mentioned above, epistemologists draw a distinction between what can be known a priori (independently of experience) and what can only be known a posteriori (through experience). Much of what we call a priori knowledge is thought to be attained through reason alone, as featured prominently in rationalism. This might also include a non-rational faculty of intuition, as defended by proponents of innatism. In contrast, a posteriori knowledge is derived entirely through experience or as a result of experience, as emphasized in empiricism. This also includes cases where knowledge can be traced back to an earlier experience, as in memory or testimony.", "title": "Acquiring knowledge" }, { "paragraph_id": 34, "text": "A way to look at the difference between the two is through an example. Bruce Russell gives two propositions in which the reader decides which one he believes more. Option A: All crows are birds. Option B: All crows are black. If you believe option A, then you are a priori justified in believing it because you do not have to see a crow to know it is a bird. If you believe in option B, then you are posteriori justified to believe it because you have seen many crows therefore knowing they are black. He goes on to say that it does not matter if the statement is true or not, only that if you believe in one or the other that matters.", "title": "Acquiring knowledge" }, { "paragraph_id": 35, "text": "The idea of a priori knowledge is that it is based on intuition or rational insights. Laurence BonJour says in his article \"The Structure of Empirical Knowledge\", that a \"rational insight is an immediate, non-inferential grasp, apprehension or 'seeing' that some proposition is necessarily true.\" (3) Going back to the crow example, by Laurence BonJour's definition the reason you would believe in option A is because you have an immediate knowledge that a crow is a bird, without ever experiencing one.", "title": "Acquiring knowledge" }, { "paragraph_id": 36, "text": "Evolutionary psychology takes a novel approach to the problem. It says that there is an innate predisposition for certain types of learning. \"Only small parts of the brain resemble a tabula rasa; this is true even for human beings. The remainder is more like an exposed negative waiting to be dipped into a developer fluid\".", "title": "Acquiring knowledge" }, { "paragraph_id": 37, "text": "Immanuel Kant, in his Critique of Pure Reason, drew a distinction between \"analytic\" and \"synthetic\" propositions. He contended that some propositions are such that we can know they are true just by understanding their meaning. 
For example, consider, \"My father's brother is my uncle.\" We can know it is true solely by virtue of our understanding of what its terms mean. Philosophers call such propositions \"analytic\". Synthetic propositions, on the other hand, have distinct subjects and predicates. An example would be, \"My father's brother has black hair.\" Kant stated that all mathematical and scientific statements are synthetic a priori propositions because they are necessarily true, yet our knowledge of the attributes of their mathematical or physical subjects can only be obtained by logical inference.", "title": "Acquiring knowledge" }, { "paragraph_id": 38, "text": "While this distinction is first and foremost about meaning and is therefore most relevant to the philosophy of language, the distinction has significant epistemological consequences, seen most prominently in the works of the logical positivists. In particular, if the set of propositions which can only be known a posteriori is coextensive with the set of propositions which are synthetically true, and if the set of propositions which can be known a priori is coextensive with the set of propositions which are analytically true (or in other words, which are true by definition), then there can only be two kinds of successful inquiry: Logico-mathematical inquiry, which investigates what is true by definition, and empirical inquiry, which investigates what is true in the world. Most notably, this would exclude the possibility that branches of philosophy like metaphysics could ever provide informative accounts of what actually exists.", "title": "Acquiring knowledge" }, { "paragraph_id": 39, "text": "The American philosopher W. V. O. Quine, in his paper \"Two Dogmas of Empiricism\", famously challenged the analytic-synthetic distinction, arguing that the boundary between the two is too blurry to provide a clear division between propositions that are true by definition and propositions that are not. While some contemporary philosophers take themselves to have offered more sustainable accounts of the distinction that are not vulnerable to Quine's objections, there is no consensus about whether or not these succeed.", "title": "Acquiring knowledge" }, { "paragraph_id": 40, "text": "Science is often considered to be a refined, formalized, systematic, institutionalized form of the pursuit and acquisition of empirical knowledge. As such, the philosophy of science may be viewed as an application of the principles of epistemology and as a foundation for epistemological inquiry.", "title": "Acquiring knowledge" }, { "paragraph_id": 41, "text": "The regress problem (also known as Agrippa's Trilemma) is the problem of providing a complete logical foundation for human knowledge. The traditional way of supporting a rational argument is to appeal to other rational arguments, typically using chains of reason and rules of logic. A classic example that goes back to Aristotle is deducing that Socrates is mortal. We have a logical rule that says All humans are mortal and an assertion that Socrates is human, and we deduce that Socrates is mortal. In this example, how do we know that Socrates is human? Presumably, we apply other rules, such as: All born from human females are human. This leaves open the question of how we know that all born from human females are human. This is the regress problem: how can we eventually terminate a logical argument with some statements that do not require further justification but can still be considered rational and justified?
As John Pollock stated:", "title": "The regress problem" }, { "paragraph_id": 42, "text": "... to justify a belief one must appeal to a further justified belief. This means that one of two things can be the case. Either there are some beliefs that we can be justified for holding, without being able to justify them on the basis of any other belief, or else for each justified belief there is an infinite regress of (potential) justification [the nebula theory]. On this theory there is no rock bottom of justification. Justification just meanders in and out through our network of beliefs, stopping nowhere.", "title": "The regress problem" }, { "paragraph_id": 43, "text": "The apparent impossibility of completing an infinite chain of reasoning is thought by some to support skepticism. It is also the impetus for Descartes's famous dictum: I think, therefore I am. Descartes was looking for some logical statement that could be true without appeal to other statements.", "title": "The regress problem" }, { "paragraph_id": 44, "text": "Many epistemologists studying justification have attempted to argue for various types of chains of reasoning that can escape the regress problem.", "title": "The regress problem" }, { "paragraph_id": 45, "text": "Foundationalists respond to the regress problem by asserting that certain \"foundations\" or \"basic beliefs\" support other beliefs but do not themselves require justification from other beliefs. These beliefs might be justified because they are self-evident, infallible, or derive from reliable cognitive mechanisms. Perception, memory, and a priori intuition are often considered possible examples of basic beliefs.", "title": "The regress problem" }, { "paragraph_id": 46, "text": "The chief criticism of foundationalism is that if a belief is not supported by other beliefs, accepting it may be arbitrary or unjustified.", "title": "The regress problem" }, { "paragraph_id": 47, "text": "Another response to the regress problem is coherentism, which is the rejection of the assumption that the regress proceeds according to a pattern of linear justification. To avoid the charge of circularity, coherentists hold that an individual belief is justified circularly by the way it fits together (coheres) with the rest of the belief system of which it is a part. This theory has the advantage of avoiding the infinite regress without claiming special, possibly arbitrary status for some particular class of beliefs. Yet, since a system can be coherent while also being wrong, coherentists face the difficulty of ensuring that the whole system corresponds to reality. Additionally, most logicians agree that any argument that is circular is, at best, only trivially valid. That is, to be illuminating, arguments must operate with information from multiple premises, not simply conclude by reiterating a premise.", "title": "The regress problem" }, { "paragraph_id": 48, "text": "Nigel Warburton writes in Thinking from A to Z that \"[c]ircular arguments are not invalid; in other words, from a logical point of view there is nothing intrinsically wrong with them. However, they are, when viciously circular, spectacularly uninformative.\"", "title": "The regress problem" }, { "paragraph_id": 49, "text": "An alternative resolution to the regress problem is known as \"infinitism\". 
Infinitists take the infinite series to be merely potential, in the sense that an individual may have indefinitely many reasons available to them, without having consciously thought through all of these reasons when the need arises. This position is motivated in part by the desire to avoid what is seen as the arbitrariness and circularity of its chief competitors, foundationalism and coherentism. The most prominent defense of infinitism has been given by Peter Klein.", "title": "The regress problem" }, { "paragraph_id": 50, "text": "An intermediate position, known as \"foundherentism\", is advanced by Susan Haack. Foundherentism is meant to unify foundationalism and coherentism. Haack explains the view by using a crossword puzzle as an analogy. Whereas, for example, infinitists regard the regress of reasons as taking the form of a single line that continues indefinitely, Haack has argued that chains of properly justified beliefs look more like a crossword puzzle, with various different lines mutually supporting each other. Thus, Haack's view leaves room for both chains of beliefs that are \"vertical\" (terminating in foundational beliefs) and chains that are \"horizontal\" (deriving their justification from coherence with beliefs that are also members of foundationalist chains of belief).", "title": "The regress problem" }, { "paragraph_id": 51, "text": "Empiricism is a view in the theory of knowledge which focuses on the role of experience, especially experience based on perceptual observations by the senses, in the generation of knowledge. Certain forms exempt disciplines such as mathematics and logic from these requirements.", "title": "Schools of thought" }, { "paragraph_id": 52, "text": "There are many variants of empiricism, including British empiricism, logical empiricism, phenomenalism, and some versions of common sense philosophy. Most forms of empiricism give epistemologically privileged status to sensory impressions or sense data, although this plays out very differently in different cases. Some of the most famous historical empiricists include John Locke, David Hume, George Berkeley, Francis Bacon, John Stuart Mill, Rudolf Carnap, and Bertrand Russell.", "title": "Schools of thought" }, { "paragraph_id": 53, "text": "Rationalism is the epistemological view that reason is the chief source of knowledge and the main determinant of what constitutes knowledge. More broadly, it can also refer to any view which appeals to reason as a source of knowledge or justification. Rationalism is one of the two classical views in epistemology, the other being empiricism. Rationalists claim that the mind, through the use of reason, can directly grasp certain truths in various domains, including logic, mathematics, ethics, and metaphysics. Rationalist views can range from modest views in mathematics and logic (such as that of Gottlob Frege) to ambitious metaphysical systems (such as that of Baruch Spinoza).", "title": "Schools of thought" }, { "paragraph_id": 54, "text": "Some of the most famous rationalists include Plato, René Descartes, Baruch Spinoza, and Gottfried Leibniz.", "title": "Schools of thought" }, { "paragraph_id": 55, "text": "Skepticism is a position that questions the possibility of human knowledge, either in particular domains or on a general level. Skepticism does not refer to any one specific school of philosophy, but is rather a thread that runs through many epistemological debates. 
Ancient Greek skepticism began during the Hellenistic period in philosophy, which featured both Pyrrhonism (notably defended by Pyrrho, Sextus Empiricus, and Aenesidemus) and Academic skepticism (notably defended by Arcesilaus and Carneades). Among ancient Indian philosophers, skepticism was notably defended by the Ajñana school and in the Buddhist Madhyamika tradition. In modern philosophy, René Descartes' famous inquiry into mind and body began as an exercise in skepticism, in which he started by trying to doubt all purported cases of knowledge in order to search for something that was known with absolute certainty.", "title": "Schools of thought" }, { "paragraph_id": 56, "text": "Epistemic skepticism questions whether knowledge is possible at all. Generally speaking, skeptics argue that knowledge requires certainty, and that most or all of our beliefs are fallible (meaning that our grounds for holding them always, or almost always, fall short of certainty), which would together entail that knowledge is always or almost always impossible for us. Whether knowledge is characterized as strong or weak depends on a person's viewpoint and on how they characterize knowledge. Much of modern epistemology is derived from attempts to better understand and address philosophical skepticism.", "title": "Schools of thought" }, { "paragraph_id": 57, "text": "One of the oldest forms of epistemic skepticism can be found in Agrippa's trilemma (named after the Pyrrhonist philosopher Agrippa the Skeptic), which demonstrates that certainty cannot be achieved with regard to beliefs. Pyrrhonism dates back to Pyrrho of Elis from the 4th century BCE, although most of what we know about Pyrrhonism today is from the surviving works of Sextus Empiricus. Pyrrhonists claim that for any argument for a non-evident proposition, an equally convincing argument for a contradictory proposition can be produced. Pyrrhonists do not dogmatically deny the possibility of knowledge, but instead point out that beliefs about non-evident matters cannot be substantiated.", "title": "Schools of thought" }, { "paragraph_id": 58, "text": "The Cartesian evil demon problem, first raised by René Descartes, supposes that our sensory impressions may be controlled by some external power rather than the result of ordinary veridical perception. In such a scenario, nothing we sense would actually exist, but would instead be mere illusion. As a result, we would never be able to know anything about the world, since we would be systematically deceived about everything. The conclusion often drawn from evil demon skepticism is that even if we are not completely deceived, all of the information provided by our senses is still compatible with skeptical scenarios in which we are completely deceived, and that we must therefore either be able to exclude the possibility of deception or else must deny the possibility of infallible knowledge (that is, knowledge which is completely certain) beyond our immediate sensory impressions.
While the view that no beliefs are beyond doubt other than our immediate sensory impressions is often ascribed to Descartes, he in fact thought that we can exclude the possibility that we are systematically deceived, although his reasons for thinking this are based on a highly contentious ontological argument for the existence of a benevolent God who would not allow such deception to occur.", "title": "Schools of thought" }, { "paragraph_id": 59, "text": "Epistemological skepticism can be classified as either \"mitigated\" or \"unmitigated\" skepticism. Mitigated skepticism rejects \"strong\" or \"strict\" knowledge claims but approves of weaker ones, which can be considered \"virtual knowledge\", though only with regard to justified beliefs. Unmitigated skepticism rejects claims of both virtual and strong knowledge. Whether knowledge is characterized as strong, weak, virtual, or genuine depends on a person's viewpoint as well as on their characterization of knowledge. Some of the most notable attempts to respond to unmitigated skepticism include direct realism, disjunctivism, common sense philosophy, pragmatism, fideism, and fictionalism.", "title": "Schools of thought" }, { "paragraph_id": 60, "text": "Pragmatism is a fallibilist epistemology that emphasizes the role of action in knowing. Different interpretations of pragmatism variously emphasize: truth as the final outcome of ideal scientific inquiry and experimentation, truth as closely related to usefulness, experience as transacting with (instead of representing) nature, and human practices as the foundation of language. Pragmatism's origins are often attributed to Charles Sanders Peirce, William James, and John Dewey. In 1878, Peirce formulated the maxim: \"Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.\"", "title": "Schools of thought" }, { "paragraph_id": 61, "text": "William James suggested that through a pragmatist epistemology, theories \"become instruments, not answers to enigmas in which we can rest\". In James's pragmatic method, which he adapted from Peirce, metaphysical disputes can be settled by tracing the practical consequences of the different sides of the argument. If this process does not resolve the dispute, then \"the dispute is idle\".", "title": "Schools of thought" }, { "paragraph_id": 62, "text": "Contemporary versions of pragmatism have been developed by thinkers such as Richard Rorty and Hilary Putnam. Rorty proposed that values were historically contingent and dependent upon their utility within a given historical period. Contemporary philosophers working in pragmatism are called neopragmatists, and also include Nicholas Rescher, Robert Brandom, Susan Haack, and Cornel West.", "title": "Schools of thought" }, { "paragraph_id": 63, "text": "In certain respects an intellectual descendant of pragmatism, naturalized epistemology considers the evolutionary role of knowledge for agents living and evolving in the world. It de-emphasizes the questions around justification and truth, and instead asks, empirically, how reliable beliefs are formed and the role that evolution played in the development of such processes.
It suggests a more empirical approach to the subject as a whole, leaving behind philosophical definitions and consistency arguments, and instead using psychological methods to study and understand how \"knowledge\" is actually formed and used in the natural world. As such, it does not attempt to answer the analytic questions of traditional epistemology, but rather to replace them with new empirical ones.", "title": "Schools of thought" }, { "paragraph_id": 64, "text": "Naturalized epistemology was first proposed in \"Epistemology Naturalized\", a seminal paper by W.V.O. Quine. A less radical view has been defended by Hilary Kornblith in Knowledge and its Place in Nature, in which he seeks to turn epistemology towards empirical investigation without completely abandoning traditional epistemic concepts.", "title": "Schools of thought" }, { "paragraph_id": 65, "text": "Epistemic relativism is the view that what is true, rational, or justified for one person need not be true, rational, or justified for another person. Epistemic relativists therefore assert that while there are relative facts about truth, rationality, justification, and so on, there is no perspective-independent fact of the matter. Note that this is distinct from epistemic contextualism, which holds that the meanings of epistemic terms vary across contexts (e.g. \"I know\" might mean something different in everyday contexts and skeptical contexts). In contrast, epistemic relativism holds that the relevant facts vary, not just linguistic meaning. Relativism about truth may also be a form of ontological relativism, insofar as relativists about truth hold that facts about what exists vary based on perspective.", "title": "Schools of thought" }, { "paragraph_id": 66, "text": "Constructivism is a view in philosophy according to which all \"knowledge is a compilation of human-made constructions\", \"not the neutral discovery of an objective truth\". Whereas objectivism is concerned with the \"object of our knowledge\", constructivism emphasizes \"how we construct knowledge\". Constructivism proposes new definitions for knowledge and truth, which emphasize intersubjectivity rather than objectivity, and viability rather than truth. The constructivist point of view is in many ways comparable to certain forms of pragmatism.", "title": "Schools of thought" }, { "paragraph_id": 67, "text": "Bayesian epistemology is a formal approach to various topics in epistemology that has its roots in Thomas Bayes' work in the field of probability theory. One advantage of its formal method in contrast to traditional epistemology is that its concepts and theorems can be defined with a high degree of precision. It is based on the idea that beliefs can be interpreted as subjective probabilities. As such, they are subject to the laws of probability theory, which act as the norms of rationality. These norms can be divided into static constraints, governing the rationality of beliefs at any moment, and dynamic constraints, governing how rational agents should change their beliefs upon receiving new evidence. The most characteristic Bayesian expression of these principles is found in the form of Dutch books, which illustrate irrationality in agents through a series of bets that lead to a loss for the agent no matter which of the probabilistic events occurs.
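To make the Dutch book idea concrete, here is a minimal sketch (the outcome names and numbers are illustrative assumptions, not drawn from any source): an agent whose credences violate the probability axioms can be sold a set of bets, each of which it regards as fair, that together guarantee a loss.

```python
# Minimal Dutch book sketch; the outcome names and numbers are illustrative.
# The agent's credences in two mutually exclusive, exhaustive outcomes
# sum to 1.2, violating the probability axioms.
credences = {"rain": 0.6, "no rain": 0.6}

# A bookie sells one ticket per outcome, each paying 1 unit if that
# outcome occurs, priced at the agent's own credence (a "fair" price).
total_cost = sum(credences.values())  # the agent pays 1.2 units up front

for outcome in credences:
    payout = 1.0  # exactly one of the two tickets pays off
    print(f"If {outcome!r}: net result = {payout - total_cost:+.2f}")
# Both branches print -0.20: a sure loss however events turn out.
```

The converse Dutch book theorem points the other way: credences that do obey the probability axioms admit no such guaranteed-loss set of bets.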
Bayesians have applied these fundamental principles to various epistemological topics, but Bayesianism does not cover all topics of traditional epistemology.", "title": "Schools of thought" }, { "paragraph_id": 68, "text": "Feminist epistemology is a subfield of epistemology which applies feminist theory to epistemological questions. It began to emerge as a distinct subfield in the 20th century. Prominent feminist epistemologists include Miranda Fricker (who developed the concept of epistemic injustice), Donna Haraway (who first proposed the concept of situated knowledge), Sandra Harding, and Elizabeth Anderson. Harding proposes that feminist epistemology can be broken into three distinct categories: feminist empiricism, standpoint epistemology, and postmodern epistemology.", "title": "Schools of thought" }, { "paragraph_id": 69, "text": "Feminist epistemology has also played a significant role in the development of many debates in social epistemology.", "title": "Schools of thought" }, { "paragraph_id": 70, "text": "Epistemicide is a term used in decolonisation studies that describes the killing of knowledge systems under systemic oppression such as colonisation and slavery. The term was coined by Boaventura de Sousa Santos, who argued that such physical violence contributed to the centering of Western knowledge in the current world. The term challenges prevailing assumptions about what is seen as knowledge in academia today.", "title": "Schools of thought" }, { "paragraph_id": 71, "text": "Indian schools of philosophy, such as the Hindu Nyaya and Carvaka schools, and the Jain and Buddhist philosophical schools, developed an epistemological tradition independently of the Western philosophical tradition called \"pramana\". Pramana can be translated as \"instrument of knowledge\" and refers to various means or sources of knowledge that Indian philosophers held to be reliable. Each school of Indian philosophy had their own theories about which pramanas were valid means to knowledge and which were unreliable (and why). A Vedic text, Taittirīya Āraṇyaka (c. 9th–6th centuries BCE), lists \"four means of attaining correct knowledge\": smṛti (\"tradition\" or \"scripture\"), pratyakṣa (\"perception\"), aitihya (\"communication by one who is expert\", or \"tradition\"), and anumāna (\"reasoning\" or \"inference\").", "title": "Schools of thought" }, { "paragraph_id": 72, "text": "In the Indian traditions, the most widely discussed pramanas are: Pratyakṣa (perception), Anumāṇa (inference), Upamāṇa (comparison and analogy), Arthāpatti (postulation, derivation from circumstances), Anupalabdi (non-perception, negative/cognitive proof) and Śabda (word, testimony of past or present reliable experts). While the Nyaya school (beginning with the Nyāya Sūtras of Gotama, between the 6th century BCE and the 2nd century CE) was a proponent of realism and supported four pramanas (perception, inference, comparison/analogy and testimony), the Buddhist epistemologists (Dignaga and Dharmakirti) generally accepted only perception and inference. The Carvaka school of materialists accepted only the pramana of perception, and hence was among the first empiricist schools in the Indian traditions. Another school, the Ajñana, included notable proponents of philosophical skepticism.", "title": "Schools of thought" }, { "paragraph_id": 73, "text": "The theory of knowledge of the Buddha in the early Buddhist texts has been interpreted as a form of pragmatism as well as a form of correspondence theory.
Likewise, the Buddhist philosopher Dharmakirti has been interpreted as holding either a form of pragmatism or a correspondence theory, on account of his view that what is true is what has effective power (arthakriya). The Buddhist Madhyamika school's theory of emptiness (shunyata) meanwhile has been interpreted as a form of philosophical skepticism.", "title": "Schools of thought" }, { "paragraph_id": 74, "text": "The main contribution to epistemology by the Jains has been their theory of \"many-sidedness\" or \"multi-perspectivism\" (Anekantavada), which says that since the world is multifaceted, any single viewpoint is limited (naya – a partial standpoint). This has been interpreted as a kind of pluralism or perspectivism. According to Jain epistemology, none of the pramanas gives absolute or perfect knowledge since they are each limited points of view.", "title": "Schools of thought" }, { "paragraph_id": 75, "text": "Formal epistemology uses formal tools and methods from decision theory, logic, probability theory and computability theory to model and reason about issues of epistemological interest. Work in this area spans several academic fields, including philosophy, computer science, economics, and statistics. The focus of formal epistemology has tended to differ somewhat from that of traditional epistemology, with topics like uncertainty, induction, and belief revision garnering more attention than the analysis of knowledge, skepticism, and issues with justification.", "title": "Domains of inquiry" }, { "paragraph_id": 76, "text": "Historical epistemology is the study of the historical conditions of, and changes in, different kinds of knowledge. There are many versions of or approaches to historical epistemology, which is distinct from the history of epistemology. Twentieth-century French historical epistemologists like Abel Rey, Gaston Bachelard, Jean Cavaillès, and Georges Canguilhem focused specifically on changes in scientific discourse.", "title": "Domains of inquiry" }, { "paragraph_id": 77, "text": "Metaepistemology is the metaphilosophical study of the methods, aims, and subject matter of epistemology. In general, metaepistemology aims to better understand our first-order epistemological inquiry. Some goals of metaepistemology are identifying inaccurate assumptions made in epistemological debates and determining whether the questions asked in mainline epistemology are the right epistemological questions to be asking.", "title": "Domains of inquiry" }, { "paragraph_id": 78, "text": "Social epistemology deals with questions about knowledge in contexts where our knowledge attributions cannot be explained by simply examining individuals in isolation from one another, meaning that the scope of our knowledge attributions must be widened to include broader social contexts. It also explores the ways in which interpersonal beliefs can be justified in social contexts.
The most common topics discussed in contemporary social epistemology are testimony, which deals with the conditions under which a belief \"x is true\" which resulted from being told \"x is true\" constitutes knowledge; peer disagreement, which deals with when and how one should revise one's beliefs in light of other people holding beliefs that contradict one's own; and group epistemology, which deals with what it means to attribute knowledge to groups rather than individuals, and when group knowledge attributions are appropriate.", "title": "Domains of inquiry" }, { "paragraph_id": 79, "text": "Contemporary philosophers consider that epistemology is a major subfield of philosophy, along with ethics, logic, and metaphysics, which are more ancient subdivisions of philosophy. But in the early and mid-20th century, epistemology was not yet seen as an independent field in its own right. Quine viewed epistemology as a chapter of psychology. Russell viewed it as a mix of psychology and logic. William Alston presents a similar contemporary perspective, though in a more historically oriented manner: for him, epistemology has historically always been a part of cognitive psychology.", "title": "Historical context" }, { "paragraph_id": 80, "text": "The claim that psychology is a background for epistemology is often called its naturalization. The epistemologies of Russell and Quine in the 20th century were naturalized in that way. More recently, Laurence BonJour has rejected the need for that kind of psychologism in contemporary epistemology. His argument is that, nowadays, the required parts of psychology, which he refers to as minimal, conceptual, and meliorative psychologism, are self-evident within contemporary (traditional) epistemology, and that this \"involves at most a quite minor departure from traditional, nonnaturalized epistemology\" or \"poses no real threat to traditional epistemology\". On this view, epistemology has integrated all the required psychological aspects, which are considered uncontroversial, and can be severed from psychologism.", "title": "Historical context" }, { "paragraph_id": 81, "text": "For Luciano Floridi, \"at the turn of the [20th] century there had been a resurgence of interest in epistemology through an anti-metaphysical, naturalist, reaction against the nineteenth-century development of Neo-Kantian and Neo-Hegelian idealism.\" In that perspective, contemporary epistemology, which on BonJour's view does not need to be \"naturalized\" anymore, emerged after a naturalization that rejected metaphysical perspectives associated with Kant and Hegel.", "title": "Historical context" }, { "paragraph_id": 82, "text": "Historians of philosophy traditionally divide the modern period into a dispute between empiricists (including Francis Bacon, John Locke, David Hume, and George Berkeley) and rationalists (including René Descartes, Baruch Spinoza, and Gottfried Leibniz). The debate between them has often been framed using the question of whether knowledge comes primarily from sensory experience (empiricism), or whether a significant portion of our knowledge is derived entirely from our faculty of reason (rationalism).
According to some scholars, this dispute was resolved in the late 18th century by Immanuel Kant, whose transcendental idealism famously made room for the view that \"though all our knowledge begins with experience, it by no means follows that all [knowledge] arises out of experience\".", "title": "Historical context" }, { "paragraph_id": 83, "text": "In Meno, the definition of knowledge as justified true belief appears for the first time. In other words, to count as knowledge, a belief must be accompanied by an explanation of why it is correct, beyond just happening to be right. A number of important epistemological concerns also appeared in the works of Aristotle.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 84, "text": "During the subsequent Hellenistic period, philosophical schools began to appear which had a greater focus on epistemological questions, often in the form of philosophical skepticism. For instance, the Hellenistic Sceptics, especially Sextus Empiricus of the Pyrrhonian school, rejected justification on the basis of Agrippa's trilemma and so, in the view of Irwin (2010), rejected the possibility of knowledge as well. The Pyrrhonian school of Pyrrho and Sextus Empiricus held that eudaimonia (flourishing, happiness, or \"the good life\") could be attained through the application of epoché (suspension of judgment) regarding all non-evident matters. Pyrrhonism was particularly concerned with undermining the epistemological dogmas of Stoicism and Epicureanism. The other major school of Hellenistic skepticism was Academic skepticism, most notably defended by Carneades and Arcesilaus, which predominated in the Platonic Academy for almost two centuries.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 85, "text": "In ancient India the Ajñana school of ancient Indian philosophy promoted skepticism. Ajñana was a Śramaṇa movement and a major rival of early Buddhism, Jainism and the Ājīvika school. They held that it was impossible to obtain knowledge of metaphysical nature or to ascertain the truth value of philosophical propositions; and even if knowledge was possible, it was useless and disadvantageous for final salvation. They specialized in refutation without propagating any positive doctrine of their own.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 86, "text": "During the Islamic Golden Age, one of the most prominent and influential philosophers, theologians, jurists, logicians and mystics in Islamic epistemology was Al-Ghazali. During his life, he wrote over 70 books on science, Islamic reasoning and Sufism. Al-Ghazali published his book The Incoherence of the Philosophers, regarded as a defining moment in Islamic epistemology. He held the conviction that all events and interactions are not the result of material conjunctions but the immediate and present will of God.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 87, "text": "After the ancient philosophical era but before the modern philosophical era, a number of (non-Islamic) medieval philosophers also engaged with epistemological questions at length.
Most notable among the Medievals for their contributions to epistemology were Thomas Aquinas, John Duns Scotus, and William of Ockham.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 88, "text": "According to historian of philosophy Jan Woleński, the development of philosophy divides, with some exceptions, into the pre-Cartesian ontologically oriented and the post-Cartesian epistemologically oriented.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 89, "text": "There are a number of different methods that contemporary scholars use when trying to understand the relationship between past epistemology and contemporary epistemology. One of the most contentious questions is this: \"Should we assume that the problems of epistemology are perennial, and that trying to reconstruct and evaluate Plato's or Hume's or Kant's arguments is meaningful for current debates, too?\" Similarly, there is also a question of whether contemporary philosophers should aim to rationally reconstruct and evaluate historical views in epistemology, or to merely describe them.", "title": "Epistemological concepts in past philosophies" }, { "paragraph_id": 90, "text": "", "title": "Notes and references" }, { "paragraph_id": 91, "text": "Stanford Encyclopedia of Philosophy articles", "title": "External links" }, { "paragraph_id": 92, "text": "Internet Encyclopedia of Philosophy articles", "title": "External links" }, { "paragraph_id": 93, "text": "Encyclopædia Britannica", "title": "External links" }, { "paragraph_id": 94, "text": "Other links", "title": "External links" } ]
Epistemology is the branch of philosophy concerned with knowledge. Epistemologists study the nature, origin, and scope of knowledge, epistemic justification, the rationality of belief, and various related issues. Debates in (contemporary) epistemology are generally clustered around four core areas:

- The philosophical analysis of the nature of knowledge and the conditions required for a belief to constitute knowledge, such as truth and justification
- Potential sources of knowledge and justified belief, such as perception, reason, memory, and testimony
- The structure of a body of knowledge or justified belief, including whether all justified beliefs must be derived from justified foundational beliefs or whether justification requires only a coherent set of beliefs
- Philosophical skepticism, which questions the possibility of knowledge, and related problems, such as whether skepticism poses a threat to our ordinary knowledge claims and whether it is possible to refute skeptical arguments

In these debates and others, epistemology aims to answer questions such as "What do people know?", "What does it mean to say that people know something?", "What makes justified beliefs justified?", and "How do people know that they know?" Specialties in epistemology ask questions such as "How can people create formal models about issues related to knowledge?", "What are the historical conditions of changes in different kinds of knowledge?", "What are the methods, aims, and subject matter of epistemological inquiry?", and "How do people know together?".
2001-06-05T01:48:41Z
2023-12-03T12:27:04Z
[ "Template:Webarchive", "Template:Refend", "Template:Citation needed", "Template:Anchor", "Template:Navboxes", "Template:Authority control", "Template:IPAc-en", "Template:Etymology", "Template:Library resources box", "Template:Epistemology sidebar", "Template:ISBN", "Template:Cbignore", "Template:Sister project links", "Template:SEP", "Template:IEP", "Template:Citation", "Template:More citations needed", "Template:Em", "Template:Lang", "Template:Clarify", "Template:Page needed", "Template:Annotated link", "Template:Cite journal", "Template:Short description", "Template:Harvtxt", "Template:Cite book", "Template:Cite IEP", "Template:PhilPapers", "Template:InPho", "Template:Circa", "Template:Distinguish", "Template:Refn", "Template:Rp", "Template:Portal", "Template:Outline", "Template:Reflist", "Template:Refbegin", "Template:Hatnote group", "Template:Epistemology", "Template:Philosophy sidebar", "Template:Blockquote", "Template:Main", "Template:Dead link", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/Epistemology
9,248
Esperanto
Esperanto (/ˌɛspəˈrɑːntoʊ/, /-æntoʊ/) is the world's most widely spoken constructed international auxiliary language. Created by the Warsaw-based ophthalmologist L. L. Zamenhof in 1887, it is intended to be a universal second language for international communication, or "the international language" (la Lingvo Internacia). Zamenhof first described the language in Dr. Esperanto's International Language (Esperanto: Unua Libro), which he published under the pseudonym Doktoro Esperanto. Early adopters of the language liked the name Esperanto and soon used it to describe his language. The word esperanto translates into English as "one who hopes". Within the range of constructed languages, Esperanto occupies a middle ground between "naturalistic" (imitating existing natural languages) and a priori (where features are not based on existing languages). Esperanto's vocabulary, syntax and semantics derive predominantly from languages of the Indo-European group. The vocabulary derives primarily from Romance languages, with substantial contributions from Germanic languages. One of the language's most notable features is its extensive system of derivation, where prefixes and suffixes may be freely combined with roots to generate words, making it possible to communicate effectively with a smaller set of words. Esperanto is the most successful constructed international auxiliary language, and the only such language with a sizeable population of native speakers, of which there are perhaps several thousand. Usage estimates are difficult, but two estimates put the number of people who know how to speak Esperanto at around 100,000. Concentration of speakers is highest in Europe, East Asia, and South America. Although no country has adopted Esperanto officially, Esperantujo ("Esperanto-land") is used as a name for the collection of places where it is spoken. The language has also gained a noticeable presence on the internet in recent years, as it became increasingly accessible on platforms such as Duolingo, Wikipedia, Amikumu and Google Translate. Esperanto speakers are often called "Esperantists" (Esperantistoj). Esperanto has not been a secondary official language of any recognized country, but it entered the education systems of several countries, such as Hungary and China. There were plans at the beginning of the 20th century to establish Neutral Moresnet, in central-western Europe, as the world's first Esperanto state; any such plans came to an end when the Treaty of Versailles awarded the disputed territory to Belgium, effective January 10, 1920. In addition, the self-proclaimed artificial island micronation of Rose Island, near Italy in the Adriatic Sea, used Esperanto as its official language in 1968, and another micronation, the extant Republic of Molossia, near Dayton, Nevada, uses Esperanto as an official language alongside English. The Chinese government has used Esperanto since 2001 for daily news on china.org.cn. China also uses Esperanto in China Radio International, and for the internet magazine El Popola Ĉinio. The Vatican Radio has an Esperanto version of its podcasts and its website. The United States Army published military phrase books in Esperanto, which were used from the 1950s until the 1970s in war games by mock enemy forces. A field reference manual, FM 30-101-1 (February 1962), contained the grammar, an English-Esperanto-English dictionary, and common phrases. In the 1970s Esperanto was used as the basis for Defense Language Aptitude Tests.
Esperanto is the working language of several non-profit international organizations such as the Sennacieca Asocio Tutmonda, a left-wing cultural association which had 724 members in over 85 countries in 2006. There is also Education@Internet, which has developed from an Esperanto organization; most others are specifically Esperanto organizations. The largest of these, the Universal Esperanto Association, has an official consultative relationship with the United Nations and UNESCO, which recognized Esperanto as a medium for international understanding in 1954. The Universal Esperanto Association collaborated in 2017 with UNESCO to deliver an Esperanto translation of its magazine UNESCO Courier (Unesko Kuriero en Esperanto). The World Health Organization offers an Esperanto version of the coronavirus pandemic (COVID-19, Esperanto: KOVIM-19) occupational safety and health education course. Esperanto was also the first language of teaching and administration of the International Academy of Sciences San Marino. The League of Nations made attempts to promote teaching Esperanto in member countries, but the resolutions were defeated mainly by French delegates, who did not feel there was a need for it. In the summer of 1924, the American Radio Relay League adopted Esperanto as its official international auxiliary language, and hoped that the language would be used by radio amateurs in international communications, but its actual use for radio communications was negligible. All the personal documents sold by the World Service Authority, including the World Passport, are written in Esperanto, together with English, French, Spanish, Russian, Arabic, and Chinese (the official languages of the United Nations). Esperanto was created in the late 1870s and early 1880s by L. L. Zamenhof, a Polish-Jewish ophthalmologist from Białystok, then part of the Russian Empire, but now part of Poland. In the 1870s, just a few years before Zamenhof created Esperanto, Polish was banned in public places in Białystok. According to Zamenhof, he created the language to reduce the "time and labor we spend in learning foreign tongues", and to foster harmony between people from different countries: "Were there but an international language, all translations would be made into it alone ... and all nations would be united in a common brotherhood." His feelings and the situation in Białystok may be gleaned from an extract from his letter to Nikolai Borovko: The place where I was born and spent my childhood gave direction to all my future struggles. In Białystok the inhabitants were divided into four distinct elements: Russians, Poles, Germans, and Jews; each of these spoke their own language and looked on all the others as enemies. In such a town a sensitive nature feels more acutely than elsewhere the misery caused by language division and sees at every step that the diversity of languages is the first, or at least the most influential, basis for the separation of the human family into groups of enemies. I was brought up as an idealist; I was taught that all people were brothers, while outside in the street at every step I felt that there were no people, only Russians, Poles, Germans, Jews, and so on. This was always a great torment to my infant mind, although many people may smile at such an 'anguish for the world' in a child. Since at that time I thought that 'grown-ups' were omnipotent, I often said to myself that when I grew up I would certainly destroy this evil. 
It was invented in 1887 and designed so that anyone could learn it in a few short months. Dr. Zamenhof lived on Dzika Street, No. 9, which was just around the corner from the street on which we lived. Brother Afrum was so impressed with that idea that he learned Esperanto in a very short time at home from a little book. He then bought many dozens of them and gave them out to relatives, friends, just anyone he could, to support that magnificent idea for he felt that this would be a common bond to promote relationships with fellow men in the world. A group of people had organized and sent letters to the government asking to change the name of the street where Dr. Zamenhof lived for many years when he invented Esperanto, from Dzika to Zamenhofa. They were told that a petition with a large number of signatures would be needed. That took time so they organized demonstrations carrying large posters encouraging people to learn the universal language and to sign the petitions... About the same time, in the middle of the block marched a huge demonstration of people holding posters reading "Learn Esperanto", "Support the Universal language", "Esperanto the language of hope and expectation", "Esperanto the bond for international communication" and so on, and many "Sign the petitions". I will never forget that rich-poor, sad-glad parade and among all these people stood two fiery red tramway cars waiting on their opposite lanes and also a few dorożkas with their horses squeezed in between. Such a sight it was. Later a few blocks were changed from Dzika Street to Dr. Zamenhofa Street and a nice monument was erected there with his name and his invention inscribed on it, to honor his memory. Zamenhof's goal was to create an easy and flexible language that would serve as a universal second language, to foster world peace and international understanding, and to build a "community of speakers". His original title for the language was simply "the international language" (la lingvo internacia), but early speakers grew fond of the name Esperanto, and began to use it as the name for the language just two years after its creation. The name quickly gained prominence, and has been used as an official name ever since. In 1905, Zamenhof published the Fundamento de Esperanto as a definitive guide to the language. Later that year, French Esperantists organized with his participation the first World Esperanto Congress, an ongoing annual conference, in Boulogne-sur-Mer, France. Zamenhof also proposed to the first congress that an independent body of linguistic scholars should steward the future evolution of Esperanto, foreshadowing the founding of the Akademio de Esperanto (in part modeled after the Académie Française), which was established soon thereafter. Since then, world congresses have been held in different countries every year, except during the two World Wars, and the 2020 COVID-19 pandemic (when it was moved to an online-only event). Since the Second World War, they have been attended by an average of more than 2,000 people, and up to 6,000 people at the most. Zamenhof wrote that he wanted mankind to "learn and use ... en masse ... the proposed language as a living one". The goal for Esperanto to become a global auxiliary language was not Zamenhof's only goal; he also wanted to "enable the learner to make direct use of his knowledge with persons of any nationality, whether the language be universally accepted or not; in other words, the language is to be directly a means of international communication." 
After some ten years of development, which Zamenhof spent translating literature into Esperanto, as well as writing original prose and verse, the first book of Esperanto grammar was published in Warsaw on July 26, 1887. The number of speakers grew rapidly over the next few decades; at first, primarily in the Russian Empire and Central Europe, then in other parts of Europe, the Americas, China, and Japan. In the early years before the world congresses, speakers of Esperanto kept in contact primarily through correspondence and periodicals. Zamenhof's name for the language was simply Internacia Lingvo ("International Language"). December 15, Zamenhof's birthday, is now regarded as Zamenhof Day or Esperanto Book Day. The autonomous territory of Neutral Moresnet, between what is today Belgium and Germany, had a sizable proportion of Esperanto-speaking citizens among its small, diverse population. There was a proposal to make Esperanto its official language. However, neither Belgium nor Germany had surrendered their claims to the region, with the latter having adopted a more aggressive stance towards pursuing its claim around the turn of the century, even being accused of sabotage and administrative obstruction to force the issue. The outbreak of World War I would bring about the end of neutrality, with Moresnet initially left as "an oasis in a desert of destruction" following the German invasion of Belgium. The territory was formally annexed by Prussia in 1915, though without international recognition. After the war, a great opportunity for Esperanto seemingly presented itself, when the Iranian delegation to the League of Nations proposed that the language be adopted for use in international relations following a report by a Japanese delegate to the League named Nitobe Inazō, in the context of the 13th World Congress of Esperanto, held in Prague. Ten delegates accepted the proposal with only one voice against, the French delegate, Gabriel Hanotaux. Hanotaux opposed all recognition of Esperanto at the League, from the first resolution on December 18, 1920, and subsequently through all efforts during the next three years. Hanotaux did not approve of how the French language was losing its position as the international language and saw Esperanto as a threat, effectively wielding his veto power to block the decision. However, two years later, the League recommended that its member states include Esperanto in their educational curricula. The French government retaliated by banning all instruction in Esperanto in France's schools and universities. The French Ministry of Public Instruction said that "French and English would perish and the literary standard of the world would be debased". Nonetheless, many people see the 1920s as the heyday of the Esperanto movement. During this time, Anarchism as a political movement was very supportive of both anationalism and the Esperanto language. Fran Novljan was one of the chief promoters of Esperanto in the former Kingdom of Yugoslavia. He was among the founders of the Croatian Prosvjetni savez (Educational Alliance), of which he was the first secretary, and organized Esperanto institutions in Zagreb. Novljan collaborated with Esperanto newspapers and magazines, and was the author of the Esperanto textbook Internacia lingvo esperanto i Esperanto en tridek lecionoj. 
In 1920s Korea, socialist thinkers pushed for the use of Esperanto through a series of columns in The Dong-a Ilbo, both as resistance to Japanese occupation and as a counter to the growing nationalist movement for Korean language standardization. This lasted until the Mukden Incident in 1931, when changing colonial policy led to an outright ban on Esperanto education in Korea. Esperanto attracted the suspicion of many states. Repression was especially pronounced in Nazi Germany, Francoist Spain up until the 1950s, and the Soviet Union under Stalin, from 1937 to 1956. In Nazi Germany, there was a motivation to ban Esperanto because Zamenhof was Jewish, and due to the internationalist nature of Esperanto, which was perceived as "Bolshevist". In his work, Mein Kampf, Adolf Hitler specifically mentioned Esperanto as an example of a language that could be used by an international Jewish conspiracy once they achieved world domination. Esperantists were killed during the Holocaust, with Zamenhof's family in particular singled out to be killed. The efforts of a minority of German Esperantists to expel their Jewish colleagues and overtly align themselves with the Reich were futile, and Esperanto was legally forbidden in 1935. Esperantists in German concentration camps did, however, teach Esperanto to fellow prisoners, telling guards they were teaching Italian, the language of one of Germany's Axis allies. In Imperial Japan, the left wing of the Japanese Esperanto movement was forbidden, but its leaders were careful enough not to give the impression to the government that the Esperantists were socialist revolutionaries, which proved a successful strategy. After the October Revolution of 1917, Esperanto was given a measure of government support by the new communist states in the former Russian Empire and later by the Soviet Union government, with the Soviet Esperantist Union being established as an organization that, temporarily, was officially recognized. In his biography of Joseph Stalin, Leon Trotsky mentions that Stalin had studied Esperanto. However, in 1937, at the height of the Great Purge, Stalin completely reversed the Soviet government's policies on Esperanto; many Esperanto speakers were executed, exiled or held in captivity in the Gulag labour camps. Quite often the accusation was: "You are an active member of an international spy organization which hides itself under the name of 'Association of Soviet Esperantists' on the territory of the Soviet Union." Until the end of the Stalin era, it was dangerous to use Esperanto in the Soviet Union, even though it was never officially forbidden to speak Esperanto. Fascist Italy allowed the use of Esperanto, finding its phonology similar to that of Italian and publishing some tourist material in the language. During and after the Spanish Civil War, Francoist Spain suppressed anarchists, socialists and Catalan nationalists for many years, among whom the use of Esperanto was extensive, but in the 1950s the Esperanto movement was again tolerated. In 1954, the United Nations — through UNESCO — granted official support to Esperanto as an international auxiliary language in the Montevideo Resolution. However, Esperanto is still not one of the official languages of the UN. The development of Esperanto has continued unabated into the 21st century.
The advent of the Internet has had a significant impact on the language, as learning it has become increasingly accessible on platforms such as Duolingo, and as speakers have increasingly networked on platforms such as Amikumu. With up to two million speakers, it is the most widely spoken constructed language in the world. Although no country has adopted Esperanto officially, Esperantujo ("Esperanto-land") is the name given to the collection of places where it is spoken. While many of its advocates continue to hope for the day that Esperanto becomes officially recognized as the international auxiliary language, some (including raŭmistoj) have stopped focusing on this goal and instead view the Esperanto community as a stateless diasporic linguistic group based on freedom of association. On May 28, 2015, the language learning platform Duolingo launched a free Esperanto course for English speakers. On March 25, 2016, when the first Duolingo Esperanto course completed its beta-testing phase, that course had 350,000 people registered to learn Esperanto through the medium of English. By July 2018, the number of learners had risen to 1.36 million. On July 20, 2018, Duolingo changed from recording users cumulatively to reporting only the number of "active learners" (i.e., those who are studying at the time and have not yet completed the course), which as of October 2022 stands at 299,000 learners. On October 26, 2016, a second Duolingo Esperanto course, for which the language of instruction is Spanish, appeared on the same platform, and as of April 2021 it had a further 176,000 students. A third Esperanto course, taught in Brazilian Portuguese, began its beta-testing phase on May 14, 2018; 220,000 people were using this course as of April 2021, and 155,000 in May 2022. A fourth Esperanto course, taught in French, began its beta-testing phase in July 2020, and had 72,500 students as of March 2021 and 101,000 in May 2022. As of October 2018, Lernu!, another online learning platform for Esperanto, had 320,000 registered users and nearly 75,000 monthly visits; 50,000 of its users possessed at least a basic understanding of Esperanto. On February 22, 2012, Google Translate added Esperanto as its 64th language. On July 25, 2016, Yandex Translate added Esperanto as a language. With about 347,000 articles, Esperanto Wikipedia (Vikipedio) is the 36th-largest Wikipedia as measured by article count, and the largest Wikipedia in a constructed language. About 150,000 users consult the Vikipedio regularly, as attested by Wikipedia's automatically aggregated log-in data, which showed that in October 2019 the website had 117,366 unique individual visitors per month, plus 33,572 who viewed the site on a mobile device instead. Esperanto's phonology, grammar, vocabulary, and semantics are based on the Indo-European languages spoken in Europe. Some evidence suggests that Zamenhof studied German, English, Spanish, Lithuanian, Italian, and French, and knew 13 different languages in all, which influenced Esperanto's linguistic properties. Esperanto has been described as "a language lexically predominantly Romanic, morphologically intensively agglutinative, and to a certain degree isolating in character". Typologically, Esperanto has prepositions and a pragmatic word order that by default is subject–verb–object (SVO). Adjectives can be freely placed before or after the nouns they modify, though placing them before the noun is more common.
New words are formed through extensive use of affixes and compounds.

Esperanto typically has 22 to 24 consonants (depending on the phonemic analysis and individual speaker), five vowels, and two semivowels that combine with the vowels to form six diphthongs. (The consonant /j/ and semivowel /i̯/ are both written ⟨j⟩, and the uncommon consonant /dz/ is written with the digraph ⟨dz⟩, which is the only consonant that does not have its own letter.) Tone is not used to distinguish meanings of words. Stress is always on the second-to-last vowel in proper Esperanto words, unless a final vowel o is elided, a phenomenon mostly occurring in poetry. For example, familio "family" is [fa.mi.ˈli.o], with the stress on the second i, but when the word is used without the final o (famili'), the stress remains on the second i: [fa.mi.ˈli].

The 23 consonants are /b/, /t͡s/, /t͡ʃ/, /d/, /d͡z/, /f/, /ɡ/, /d͡ʒ/, /h/, /x/, /j/, /ʒ/, /k/, /l/, /m/, /n/, /p/, /r/, /s/, /ʃ/, /t/, /v/ and /z/. There is some degree of allophony; for example, /n/ is often pronounced [ŋ] before velar consonants. A large number of consonant clusters can occur, up to three in initial position (as in stranga, "strange") and five in medial position (as in ekssklavo, "former slave"). Final clusters are uncommon except in unassimilated names, poetic elision of final o, and a very few basic words such as cent "hundred" and post "after".

Esperanto has the five vowels found in such languages as Spanish, Modern Hebrew, and Modern Greek. Since there are only five vowels, a good deal of variation in pronunciation is tolerated. For instance, e commonly ranges from [e] (French é) to [ɛ] (French è). These details often depend on the speaker's native language. A glottal stop may occur between adjacent vowels in some people's speech, especially when the two vowels are the same, as in heroo "hero" ([he.ˈro.o] or [he.ˈro.ʔo]) and praavo "great-grandfather" ([pra.ˈa.vo] or [pra.ˈʔa.vo]).

The Esperanto alphabet is based on the Latin script, using a one-sound-one-letter principle, with the exception of [d͡z]. It includes six letters with diacritics: five with circumflexes (⟨ĉ⟩, ⟨ĝ⟩, ⟨ĥ⟩, ⟨ĵ⟩, and ⟨ŝ⟩), and one with a breve (⟨ŭ⟩). The alphabet does not include the letters ⟨q⟩, ⟨w⟩, ⟨x⟩, or ⟨y⟩, which are only used in the writing of proper names and unassimilated borrowings. The 28-letter alphabet is: a, b, c, ĉ, d, e, f, g, ĝ, h, ĥ, i, j, ĵ, k, l, m, n, o, p, r, s, ŝ, t, u, ŭ, v, z.

All letters lacking diacritics are pronounced approximately as their respective IPA symbols, with the exception of ⟨c⟩. The letters ⟨j⟩ and ⟨c⟩ are used in a way that is familiar to speakers of many Central and Eastern European languages, but may be unfamiliar to English speakers. ⟨j⟩ has the sound of English ⟨y⟩, as in yellow and boy (Esperanto jes has the same pronunciation as its English cognate yes), and ⟨c⟩ has a "ts" sound, as in hits or the ⟨zz⟩ in pizza. In addition, the ⟨g⟩ in Esperanto is always 'hard', as in gift. Esperanto uses a five-vowel system, essentially identical to the vowels of Spanish and Modern Greek. The accented letters are pronounced as follows: ⟨ĉ⟩ is [t͡ʃ], like English ⟨ch⟩ in church; ⟨ĝ⟩ is [d͡ʒ], like English ⟨g⟩ in gem; ⟨ĥ⟩ is [x], like German ⟨ch⟩ in Bach; ⟨ĵ⟩ is [ʒ], like the ⟨s⟩ in English pleasure; ⟨ŝ⟩ is [ʃ], like English ⟨sh⟩; and ⟨ŭ⟩ is [u̯], like English ⟨w⟩. According to one of Zamenhof's entries in the Lingvaj respondoj, the letter ⟨n⟩ ought to be pronounced as [n] in all cases, but a rendering as [ŋ] is admissible before ⟨g⟩, ⟨k⟩, and ⟨ĥ⟩.

Even with the widespread adoption of Unicode, the letters with diacritics (found in the "Latin Extended-A" section of the Unicode Standard) can cause problems with printing and computing, because they are not found on most physical keyboards and are left out of certain fonts. There are two principal workarounds to this problem, which substitute digraphs for the accented letters.
Zamenhof, the inventor of Esperanto, created an "h-convention", which replaces ⟨ĉ⟩, ⟨ĝ⟩, ⟨ĥ⟩, ⟨ĵ⟩, ⟨ŝ⟩, and ⟨ŭ⟩ with ⟨ch⟩, ⟨gh⟩, ⟨hh⟩, ⟨jh⟩, ⟨sh⟩, and ⟨u⟩, respectively. If used in a database, a program in principle could not determine whether to render, for example, ⟨ch⟩ as /c/ followed by /h/ or as /ĉ/, and would fail to render, for example, the word senchava unambiguously unless its component parts were intentionally separated, as in senc·hava. A more recent "x-convention" has gained prominence with the advent of computing, using the otherwise absent ⟨x⟩ to produce the digraphs ⟨cx⟩, ⟨gx⟩, ⟨hx⟩, ⟨jx⟩, ⟨sx⟩, and ⟨ux⟩; this has the incidental advantage of alphabetizing correctly in most cases, since the only letter after ⟨x⟩ is ⟨z⟩.

There are computer keyboard layouts that support the Esperanto alphabet, and some systems use software that automatically replaces x- or h-convention digraphs with the corresponding diacritic letters (for example, Amiketo for Microsoft Windows, Mac OS X, and Linux; Esperanta Klavaro for Windows Phone; and Gboard and AnySoftKeyboard for Android). On Linux, the GNOME, Cinnamon, and KDE desktop environments support the entry of characters with Esperanto diacritics.

Criticism has been leveled against the letters with circumflex diacritics, which some find odd or cumbersome, along with their being invented specifically for Esperanto rather than borrowed from existing languages. Additionally, some of them are arguably unnecessary, such as ĥ instead of x and ŭ instead of w. However, Zamenhof did not choose these letters arbitrarily: in fact, they were inspired by Czech letters with the caron diacritic, but replaced the caron with a circumflex for the ease of those who had access to a French typewriter (with a circumflex dead key). The Czech letter ž was replaced with ĵ to match the French letter j with the same sound. The letter ŭ, on the other hand, comes from the u-breve used in Latin prosody, and is also speculated to be inspired by the Belarusian Cyrillic letter ў; French typewriters can render it approximately as the French letter ù.

Esperanto words are mostly derived by stringing together roots, grammatical endings, and at times prefixes and suffixes. This process is regular, so that people can create new words as they speak and be understood. Compound words are formed with a modifier-first, head-final order, as in English (compare "birdsong" and "songbird", and likewise birdokanto and kantobirdo). Speakers may optionally insert an o between the words in a compound noun if placing them together directly without the o would make the resulting word hard to say or understand.

The different parts of speech are marked by their own suffixes: all common nouns are marked with the suffix -o, all adjectives with -a, all derived adverbs with -e, and all verbs except the jussive (or imperative) and infinitive end in -s, specifically in one of six tense and mood suffixes, such as the present tense -as; the jussive mood, which is tenseless, ends in -u. Nouns and adjectives have two cases: nominative for grammatical subjects and in general, and accusative for direct objects and (after a preposition) to indicate direction of movement.

Singular nouns used as grammatical subjects end in -o, plural subject nouns in -oj (pronounced [oi̯] like English "oy").
Singular direct object forms end in -on, and plural direct objects with the combination -ojn ([oi̯n]; rhymes with "coin"): -o indicates that the word is a noun, -j indicates the plural, and -n indicates the accusative (direct object) case. Adjectives agree with their nouns; their endings are singular subject -a ([a]; rhymes with "ha!"), plural subject -aj ([ai̯], pronounced "eye"), singular object -an, and plural object -ajn ([ai̯n]; rhymes with "fine"). The suffix -n, besides indicating the direct object, is used to indicate movement and a few other things as well.

The six verb inflections consist of three tenses and three moods. They are present tense -as, future tense -os, past tense -is, infinitive mood -i, conditional mood -us and jussive mood -u (used for wishes and commands). Verbs are not marked for person or number. Thus, kanti means "to sing", mi kantas means "I sing", vi kantas means "you sing", and ili kantas means "they sing".

Word order is comparatively free. Adjectives may precede or follow nouns; subjects, verbs and objects may occur in any order. However, the article la "the", demonstratives such as tiu "that" and prepositions (such as ĉe "at") must come before their related nouns. Similarly, the negative ne "not" and conjunctions such as kaj "and" and ke "that" must precede the phrase or clause that they introduce. In copular (A = B) clauses, word order is just as important as in English: "people are animals" is distinguished from "animals are people".

The core vocabulary of Esperanto was defined by Lingvo internacia, published by Zamenhof in 1887. This book listed 917 roots; these could be expanded into tens of thousands of words using prefixes, suffixes, and compounding. In 1894, Zamenhof published the first Esperanto dictionary, Universala Vortaro, which had a larger set of roots. The rules of the language allowed speakers to borrow new roots as needed; it was recommended, however, that speakers use the most international forms and then derive related meanings from these.

Since then, many words have been borrowed, primarily (but not solely) from the European languages. Not all proposed borrowings become widespread, but many do, especially technical and scientific terms. Terms for everyday use, on the other hand, are more likely to be derived from existing roots; komputilo "computer", for instance, is formed from the verb komputi "compute" and the suffix -ilo "tool". Words are also calqued; that is, words acquire new meanings based on usage in other languages. For example, the word muso "mouse" has acquired the meaning of a computer mouse from its usage in many languages (English mouse, French souris, Dutch muis, Spanish ratón, etc.). Esperanto speakers often debate about whether a particular borrowing is justified or whether the meaning can be expressed by deriving from or extending the meaning of existing words.

Some compounds and formed words in Esperanto are not entirely straightforward; for example, eldoni, literally "give out", means "publish", paralleling the usage of certain European languages (such as German herausgeben, Dutch uitgeven, Russian издать izdat'). In addition, the suffix -um- has no defined meaning; words using the suffix must be learned separately (such as dekstren "to the right" and dekstrumen "clockwise").

There are not many idiomatic or slang words in Esperanto, as these forms of speech tend to make international communication difficult, working against Esperanto's main goal. The language does, however, contain several calques of Polish expressions.
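Because the endings just described are fully regular, they can be captured in a few lines of code. The following Python sketch is only an illustration of the scheme, not any standard Esperanto software; the helper names and the root list are invented for the example, while the endings themselves are those given above.

```python
# A minimal sketch of Esperanto's regular morphology, assuming the rules
# described above: -o noun, -a adjective, -j plural, -n accusative, and the
# six verb endings -as/-is/-os/-us/-u/-i (verbs never mark person or number).

VERB_ENDINGS = {
    "present": "as", "past": "is", "future": "os",
    "conditional": "us", "jussive": "u", "infinitive": "i",
}

def noun(root: str, plural: bool = False, accusative: bool = False) -> str:
    """Build a noun: root + -o, plus optional -j (plural) and -n (accusative)."""
    return root + "o" + ("j" if plural else "") + ("n" if accusative else "")

def adjective(root: str, plural: bool = False, accusative: bool = False) -> str:
    """Adjectives take -a and agree with their noun in number and case."""
    return root + "a" + ("j" if plural else "") + ("n" if accusative else "")

def verb(root: str, form: str) -> str:
    """Conjugate a verb; the ending alone carries tense or mood."""
    return root + VERB_ENDINGS[form]

assert noun("kant") == "kanto"                       # "song"
assert noun("bird", plural=True, accusative=True) == "birdojn"
assert adjective("blank", plural=True) == "blankaj"  # "white" (plural)
assert verb("kant", "present") == "kantas"           # mi/vi/ili kantas
assert verb("kant", "infinitive") == "kanti"         # "to sing"
assert noun("komput" + "il") == "komputilo"          # komputi + -ilo = "computer"
```

The point of the sketch is that every form is produced by concatenation alone; there are no irregular stems or agreement paradigms to look up.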
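The digraph workarounds described in the orthography discussion above are equally mechanical. Below is a minimal sketch, under the same caveat, of how input software of the kind mentioned earlier might map x-convention digraphs onto the accented letters; the function name is invented for illustration, and the mapping is the one listed above.

```python
# Sketch of an x-convention decoder: cx->ĉ, gx->ĝ, hx->ĥ, jx->ĵ, sx->ŝ, ux->ŭ.
# Because ⟨x⟩ is otherwise absent from Esperanto, the substitution is
# unambiguous, unlike the h-convention (cf. senchava vs. senc·hava).

X_PAIRS = {
    "cx": "ĉ", "gx": "ĝ", "hx": "ĥ", "jx": "ĵ", "sx": "ŝ", "ux": "ŭ",
    "Cx": "Ĉ", "Gx": "Ĝ", "Hx": "Ĥ", "Jx": "Ĵ", "Sx": "Ŝ", "Ux": "Ŭ",
}

def decode_x_convention(text: str) -> str:
    """Replace every x-convention digraph with its diacritic letter."""
    for digraph, letter in X_PAIRS.items():
        text = text.replace(digraph, letter)
    return text

# A test phrase containing all six accented letters:
print(decode_x_convention("ehxosxangxo cxiujxauxde"))  # -> "eĥoŝanĝo ĉiuĵaŭde"
```

A real input tool would also need to handle edge cases such as unassimilated names containing a genuine ⟨x⟩, which is why such tools usually offer an escape mechanism; the sketch ignores that complication.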
On the vocabulary side, new roots are in some cases taken directly from European languages, rather than derived from existing Esperanto roots, in the endeavor to keep the language international.

The following short extract, from Article I of the Universal Declaration of Human Rights, gives an idea of the character of Esperanto:

Ĉiuj homoj estas denaske liberaj kaj egalaj laŭ digno kaj rajtoj. Ili posedas racion kaj konsciencon, kaj devus konduti unu al alia en spirito de frateco.

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

Esperanto speakers learn the language through self-directed study, online tutorials, and correspondence courses taught by volunteers. More recently, free teaching websites like lernu! and Duolingo have become available.

Esperanto instruction is rarely available at schools; where it is taught, examples include four primary schools in a pilot project under the supervision of the University of Manchester and, by one count, a few universities. However, outside China and Hungary, these mostly involve informal arrangements rather than dedicated departments or state sponsorship. Eötvös Loránd University in Budapest had a department of Interlinguistics and Esperanto from 1966 to 2004, after which time instruction moved to vocational colleges; there are state examinations for Esperanto instructors. Additionally, Adam Mickiewicz University in Poland offers a diploma in Interlinguistics.

The Senate of Brazil passed a bill in 2009 that would make Esperanto an optional part of the curriculum in public schools, although mandatory if there is demand for it. As of 2015, the bill was still under consideration by the Chamber of Deputies.

In the United States, Esperanto is notably offered as a weekly evening course at Stanford University's Bechtel International Center. Conversational Esperanto, The International Language, is a free drop-in class that is open to Stanford students and the general public on campus during the academic year. With administrative permission, Stanford students can take the class for two credits a quarter through the Linguistics Department. "Even four lessons are enough to get more than just the basics," the Esperanto at Stanford website reads. Esperanto-USA suggests that Esperanto can be learned in, at most, one quarter of the time required for other languages.

The Zagreb method is an Esperanto teaching method developed in Zagreb, Yugoslavia (now the capital of Croatia), in the late 1970s and early 1980s as a response to the unsatisfactory learning outcomes of traditional natural-language teaching techniques when used for Esperanto. Its goal was to streamline the material in order to equip learners with practical knowledge that could be put to use in as short a time frame as possible. It is now implemented and available on some of the community's well-known learning websites.

From 2006 to 2011, four primary schools in Britain, with 230 pupils, followed a course in "propaedeutic Esperanto" (that is, instruction in Esperanto to raise language awareness and to accelerate subsequent learning of foreign languages) under the supervision of the University of Manchester. As they put it: "Many schools used to teach children the recorder, not to produce a nation of recorder players, but as a preparation for learning other instruments.
[We teach] Esperanto, not to produce a nation of Esperanto-speakers, but as a preparation for learning other languages."

The results showed that the pupils achieved enhanced metalinguistic awareness, though the study did not indicate whether a course in a language other than Esperanto would have led to similar results. Similar studies have been conducted in New Zealand, the United States, and Germany. The results of these studies were favorable, and demonstrated that studying Esperanto before another foreign language expedites the acquisition of the other, natural language. In one study in England, a group of European secondary school students studied Esperanto for one year, then French for three years, and ended up with a better command of French than a control group that had studied French for the full four-year period.

Esperanto is by far the most widely spoken constructed language in the world. Speakers are most numerous in Europe and East Asia, especially in urban areas, where they often form Esperanto clubs. Esperanto is particularly prevalent in the northern and central countries of Europe; in China, Korea, Japan, and Iran within Asia; in Brazil and the United States in the Americas; and in Togo in Africa.

Countering a common criticism of Esperanto, the statistician Svend Nielsen has found no significant correlation between the number of Esperanto speakers in a country and the similarity of that country's native language to Esperanto. He concludes that Esperanto tends to be more popular in rich countries with widespread Internet access and a tendency to contribute more to science and culture. Linguistic diversity within a country was found to have no, or perhaps a slightly negative, correlation with Esperanto popularity.

An estimate of the number of Esperanto speakers was made by Sidney S. Culbert, a retired psychology professor at the University of Washington and a longtime Esperantist, who tracked down and tested Esperanto speakers in sample areas in dozens of countries over a period of twenty years. Culbert concluded that between one and two million people speak Esperanto at Foreign Service Level 3, "professionally proficient" (able to communicate moderately complex ideas without hesitation, and to follow speeches, radio broadcasts, etc.). Culbert's estimate was not made for Esperanto alone, but formed part of his listing of estimates for all languages with more than one million speakers, published annually in the World Almanac and Book of Facts. Culbert's most detailed account of his methodology is found in a 1989 letter to David Wolff. Since Culbert never published detailed intermediate results for particular countries and regions, it is difficult to independently gauge the accuracy of his results.

In the Almanac, his estimates for numbers of language speakers were rounded to the nearest million, thus the number of Esperanto speakers is shown as two million. This latter figure appears in Ethnologue. Assuming that it is accurate, it would mean that about 0.03% of the world's population speaks the language. Although this falls short of Zamenhof's goal of a universal language, it still represents a level of popularity unmatched by any other constructed language.

Marcus Sikosek (now Ziko van Dijk) has challenged this figure of 1.6 million as exaggerated. He estimated that even if Esperanto speakers were evenly distributed, assuming one million Esperanto speakers worldwide would lead one to expect about 180 in the city of Cologne.
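The expected figure follows from simple proportion. Assuming, for illustration, a world population of roughly six billion at the time and about 1.08 million residents of Cologne (both figures are assumptions for this back-of-the-envelope check, not taken from Sikosek's argument):

$$1\,000\,000 \times \frac{1\,080\,000}{6\,000\,000\,000} \approx 180$$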
Van Dijk finds only 30 fluent speakers in that city, and similarly smaller-than-expected figures in several other places thought to have a larger-than-average concentration of Esperanto speakers. He also notes that there are a total of about 20,000 members of the various Esperanto organizations (other estimates are higher). Though there are undoubtedly many Esperanto speakers who are not members of any Esperanto organization, he thinks it unlikely that there are fifty times more speakers than organization members.

Finnish linguist Jouko Lindstedt, an expert on native-born Esperanto speakers, presented the following scheme to show the overall proportions of language capabilities within the Esperanto community: about 1,000 people have Esperanto as a native language, 10,000 speak it fluently, 100,000 can use it actively, one million understand a large amount passively, and ten million have studied it to some extent at some time.

In 2017, doctoral student Svend Nielsen estimated around 63,000 Esperanto speakers worldwide, taking into account association memberships, user-generated data from Esperanto websites and census statistics. This number, however, was disputed by statistician Sten Johansson, who questioned the reliability of the source data and highlighted a wide margin of error; Nielsen agrees on the latter point. Both have stated, however, that this new number is likely more realistic than some earlier projections.

In the absence of Culbert's detailed sampling data, or any other census data, it is impossible to state the number of speakers with certainty. According to the website of the Universal Esperanto Association, the numbers of textbooks sold and the membership of local societies put "the number of people with some knowledge of the language in the hundreds of thousands and possibly millions".

Native Esperanto speakers (Esperanto: denaskuloj, literally "people from birth") have learned the language from birth from Esperanto-speaking parents. This usually happens when Esperanto is the chief or only common language in an international family, but sometimes occurs in a family of Esperanto speakers who often use the language. As of 1996, according to Corsetti, there were approximately 350 attested cases of families with native Esperanto speakers (which means there were around 700 Esperanto-speaking natives in these families, not accounting for older native speakers). The 2022 edition of Ethnologue gives 1,000 L1 users, citing Corsetti et al. 2004. However, native speakers do not occupy an authoritative position in the Esperanto community, as they would in other language communities. This presents a challenge to linguists, whose usual source of grammaticality and meanings is native speakers.

Esperantists can access an international culture, including a large body of original as well as translated literature. There are more than 25,000 Esperanto books, both originals and translations, as well as several regularly distributed Esperanto magazines. In 2013 a museum about Esperanto opened in China. Esperantists use the language for free accommodation with Esperantists in 92 countries through the Pasporta Servo, or to develop pen pals through the Esperanto Koresponda Servo.

Every year, Esperantists meet for the World Congress of Esperanto (Universala Kongreso de Esperanto).

Historically, much music has been written in the language; some of it, such as that of the group Kaj Tiel Plu, draws on various folk traditions. There is also a variety of classical and semi-classical choral music, both original and translated, as well as large-ensemble music that includes voices singing Esperanto texts.
Lou Harrison, who incorporated styles and instruments from many world cultures in his music, used Esperanto titles and/or texts in several of his works, most notably La Koro-Sutro (1973). David Gaines used Esperanto poems as well as an excerpt from a speech by Zamenhof for his Symphony No. One (Esperanto) for mezzo-soprano and orchestra (1994–98). He wrote original Esperanto text for his Povas plori mi ne plu (I Can Cry No Longer) for unaccompanied SATB choir (1994).

There are also shared traditions, such as Zamenhof Day, celebrated on December 15. Esperantists speak primarily in Esperanto at special conventions, such as the World Esperanto Congress.

Proponents of Esperanto, such as Humphrey Tonkin, a professor at the University of Hartford, argue that Esperanto is "culturally neutral by design, as it was intended to be a facilitator between cultures, not to be the carrier of any one national culture". The late Scottish Esperanto author William Auld wrote extensively on the subject, arguing that Esperanto is "the expression of a common human culture, unencumbered by national frontiers. Thus it is considered a culture on its own." Critics have argued that the language is Eurocentric, as it draws much of its vocabulary from European languages.

Several Esperanto associations also advance Esperanto education and aim to preserve its culture and heritage. Poland added Esperanto to its list of intangible cultural heritage in 2014.

In the futuristic novel Lord of the World by Robert Hugh Benson, Esperanto is presented as the predominant language of the world, much as Latin is the language of the Church. A reference to Esperanto appears in the science-fiction story War with the Newts by Karel Čapek, published in 1936. As part of a passage on what language the salamander-looking creatures with human cognitive ability should learn, it is noted that "...in the Reform schools, Esperanto was taught as the medium of communication" (p. 206).

Esperanto has been used in many films and novels. Typically, this is done either to add the exotic flavour of a foreign language without representing any particular ethnicity, or to avoid going to the trouble of inventing a new language. The Charlie Chaplin film The Great Dictator (1940) showed Jewish ghetto shop signs in Esperanto. Two full-length feature films have been produced with dialogue entirely in Esperanto: Angoroj, in 1964, and Incubus, a 1965 B-movie horror film that is also notable for starring William Shatner shortly before he began working on Star Trek. In Captain Fantastic (2016) there is dialogue in Esperanto. The 1994 film Street Fighter contains Esperanto dialogue spoken by the character Sagat. Finally, Mexican film director Alfonso Cuarón has publicly shown his fascination with Esperanto, going as far as naming his film production company Esperanto Filmoj ("Esperanto Films").

In 1921 the French Academy of Sciences recommended using Esperanto for international scientific communication. A few scientists and mathematicians, such as Maurice Fréchet (mathematics), John C. Wells (linguistics), Helmar Frank (pedagogy and cybernetics), and Nobel laureate Reinhard Selten (economics), have published part of their work in Esperanto. Frank and Selten were among the founders of the International Academy of Sciences in San Marino, sometimes called the "Esperanto University", where Esperanto is the primary language of teaching and administration.

A message in Esperanto was recorded and included in Voyager 1's Golden Record.
Esperanto business groups have been active for many years. Research conducted in the 1920s by the French Chamber of Commerce, and reported in The New York Times, suggested that Esperanto seemed to be the best business language. The privacy-oriented cryptocurrency Monero takes its name from the Esperanto word for coin.

Zamenhof had three goals, as he wrote as early as 1887: to create an easy language, to create a language ready to use "whether the language be universally accepted or not", and to find some means to get many people to learn the language. So Zamenhof's intention was not only to create an easy-to-learn language to foster peace and international understanding as a general language, but also to create a language for immediate use by a (small) language community. Esperanto was to serve as an international auxiliary language, that is, as a universal second language, not to replace ethnic languages. This goal was widely shared among Esperanto speakers at the beginning of the movement. Later, Esperanto speakers began to see the language and the culture that had grown up around it as ends in themselves, even if Esperanto is never adopted by the United Nations or other international organizations.

Esperanto speakers who want to see Esperanto adopted officially or on a large scale worldwide are commonly called finvenkistoj, from fina venko, meaning "final victory". There are two kinds of finvenkismo: desubismo aims to spread Esperanto among ordinary people (desube, "from below") so as to form a steadily growing community of Esperanto speakers, while desuprismo aims to act from above (desupre), beginning with politicians. Zamenhof considered the first way more plausible, as "for such affairs as ours, governments come with their approval and help usually only when everything is completely ready."

Those who focus on the intrinsic value of the language are commonly called raŭmistoj, from Rauma, Finland, where a declaration on the short-term improbability of the fina venko and the value of Esperanto culture was made at the International Youth Congress in 1980. However, the "Manifesto de Raŭmo" clearly mentions the intention to spread the language further: "We want to spread Esperanto to put into effect its positive values more and more, step by step".

In 1996 the Prague Manifesto was adopted at the annual congress of the Universal Esperanto Association (UEA); it was signed by individual participants and later by other Esperanto speakers. More recently, language-learning apps like Duolingo and Amikumu have helped to increase the number of fluent Esperanto speakers and to help them find others in their area with whom to speak the language.

The earliest flag, and the one most commonly used today, features a green five-pointed star against a white canton, upon a field of green. It was proposed to Zamenhof by Richard Geoghegan, author of the first Esperanto textbook for English speakers, in 1887. The flag was approved in 1905 by delegates to the first conference of Esperantists at Boulogne-sur-Mer.

The green star on white (la verda stelo) is also used by itself as a round (buttonhole, etc.) emblem by many Esperantists, among other reasons to enhance their visibility outside the Esperanto world. A version with an E superimposed over the green star is sometimes seen. Other variants include that for Christian Esperantists, with a white Christian cross superimposed upon the green star, and that for Leftists, with the color of the field changed from green to red.
In 1987, a second flag design was chosen in a contest organized by the UEA celebrating the first centennial of the language. It featured a white background with two stylised curved "E"s facing each other. Dubbed the jubilea simbolo (jubilee symbol), it attracted criticism from some Esperantists, who dubbed it the melono (melon) for its elliptical shape. It is still in use, though to a lesser degree than the traditional symbol, known as the verda stelo (green star).

Esperanto has figured in a number of political proposals. The most popular of these is Europe–Democracy–Esperanto, which aims to establish Esperanto as the official language of the European Union. Grin's Report, published in 2005 by François Grin, found that the use of English as the lingua franca within the European Union costs billions annually and significantly benefits English-speaking countries financially. The report considered a scenario in which Esperanto would be the lingua franca, and found that it would have many advantages, particularly economic, but also ideological.

Left-wing currents exist in the wider Esperanto world, mostly organized through the Sennacieca Asocio Tutmonda, founded by French theorist Eugène Lanti. Other notable Esperanto socialists include Nikolai Nekrasov and Vladimir Varankin, both of whom were put to death in October 1938 during the Stalinist repressions. Nekrasov was accused of being "an organizer and leader of a fascist, espionage, terrorist organization of Esperantists."

The Oomoto religion encourages the use of Esperanto among its followers and includes Zamenhof as one of its deified spirits.

The Baháʼí Faith encourages the use of an auxiliary international language. ʻAbdu'l-Bahá praised the ideal of Esperanto, and there was an affinity between Esperantists and Baháʼís during the late 19th century and early 20th century. On February 12, 1913, ʻAbdu'l-Bahá gave a talk to the Paris Esperanto Society, stating:

Now, praise be to God that Dr. Zamenhof has invented the Esperanto language. It has all the potential qualities of becoming the international means of communication. All of us must be grateful and thankful to him for this noble effort; for in this way he has served his fellowmen well. With untiring effort and self-sacrifice on the part of its devotees Esperanto will become universal. Therefore every one of us must study this language and spread it as far as possible so that day by day it may receive a broader recognition, be accepted by all nations and governments of the world, and become a part of the curriculum in all the public schools. I hope that Esperanto will be adopted as the language of all the future international conferences and congresses, so that all people need acquire only two languages—one their own tongue and the other the international language. Then perfect union will be established between all the people of the world. Consider how difficult it is today to communicate with various nations. If one studies fifty languages one may yet travel through a country and not know the language. Therefore I hope that you will make the utmost effort, so that this language of Esperanto may be widely spread.

Lidia Zamenhof, daughter of L. L. Zamenhof, became a Baháʼí around 1925. James Ferdinand Morton Jr., an early member of the Baháʼí Faith in Greater Boston, was vice-president of the Esperanto League for North America.
Ehsan Yarshater, the founding editor of Encyclopædia Iranica, notes how as a child in Iran he learned Esperanto, and that when his mother was visiting Haifa on a Baháʼí pilgrimage he wrote her a letter in Persian as well as in Esperanto. At the request of ʻAbdu'l-Bahá, Agnes Baldwin Alexander became an early advocate of Esperanto and used it to spread the Baháʼí teachings at meetings and conferences in Japan. Today there exists an active sub-community of Baháʼí Esperantists, and various volumes of Baháʼí literature have been translated into Esperanto. In 1973, the Baháʼí Esperanto-League for active Baháʼí supporters of Esperanto was founded.

In 1908, the spiritist Camilo Chaigneau wrote an article named "Spiritism and Esperanto" in the periodical La Vie d'Outre-Tombe, recommending the use of Esperanto in a "central magazine" for all spiritists and Esperantists. Esperanto then became actively promoted by spiritists, at least in Brazil, initially by Ismael Gomes Braga and František Lorenz; the latter is known in Brazil as Francisco Valdomiro Lorenz, and was a pioneer of both the spiritist and Esperantist movements in that country. The Brazilian Spiritist Federation publishes Esperanto coursebooks and translations of Spiritism's basic books, and encourages Spiritists to become Esperantists.

William T. Stead, a famous British spiritualist and occultist, co-founded the first Esperanto club in the United Kingdom. The Teozofia Esperanta Ligo (Theosophical Esperantist League) was formed in 1911, and the organization's journal, Espero Teozofia, was published from 1913 to 1928.

The first translation of the Bible into Esperanto was a translation of the Tanakh (or Old Testament) done by L. L. Zamenhof. The translation was reviewed and compared with other languages' translations by a group of British clergy and scholars before its publication by the British and Foreign Bible Society in 1910. In 1926 this was published along with a New Testament translation, in an edition commonly called the "Londona Biblio". In the 1960s, the Internacia Asocio de Bibliistoj kaj Orientalistoj tried to organize a new, ecumenical Esperanto Bible version. Since then, the Dutch Remonstrant pastor Gerrit Berveling has translated the Deuterocanonical or apocryphal books, in addition to new translations of the Gospels, some of the New Testament epistles, and some books of the Tanakh. These have been published in various separate booklets, or serialized in Dia Regno, but the Deuterocanonical books have appeared in recent editions of the Londona Biblio. There are also various other Christian Esperanto organizations and publications.

Ayatollah Khomeini of Iran called on Muslims to learn Esperanto and praised its use as a medium for better understanding among peoples of different religious backgrounds. After he suggested that Esperanto replace English as an international lingua franca, it began to be used in the seminaries of Qom. An Esperanto translation of the Qur'an was published by the state shortly thereafter.

Though Esperanto itself has changed little since the publication of the Fundamento de Esperanto (Foundation of Esperanto), a number of reform projects have been proposed over the years, starting with Zamenhof's proposals in 1894 and Ido in 1907. Several later constructed languages, such as Universal, Saussure, Romániço, Internasia, Esperanto sen Fleksio, and Mundolingvo, were all based on Esperanto. In modern times, conscious attempts have been made to eliminate perceived sexism in the language, one example being Riism.
Many words with ĥ now have alternative spellings with k and occasionally h, so that arĥitekto may also be spelled arkitekto; see Esperanto phonology for further details of ĥ replacement. Reforms aimed at altering country names have also resulted in a number of different options, either due to disputes over suffixes or Eurocentrism in naming various countries.

J. R. R. Tolkien wrote in support of the language in a 1932 British Esperantist article, but criticised those who sought to adapt or "tinker" with the language, which, in his opinion, harmed unanimity and the goal of achieving wide acceptance.

There have been numerous objections to Esperanto over the years. For example, there has been criticism that Esperanto is not neutral enough, but also that it should convey a specific culture, which would make it less neutral; that Esperanto does not draw on a wide enough selection of the world's languages, but also that it should be more narrowly European.

Esperantists often argue for Esperanto as a culturally neutral means of communication. However, it is often accused of being Eurocentric. This is most often noted in regard to the vocabulary, which draws about three-quarters of its roots from Romance languages, with the remainder coming primarily from Greek, English and German. Supporters have argued that the agglutinative grammar and verb regularity of Esperanto have more in common with Asian languages than with European ones. A 2010 linguistic typological study concluded that "Esperanto is indeed somewhat European in character, but considerably less so than the European languages themselves."

Esperanto is sometimes accused of being inherently sexist, because the default form of some nouns is used for descriptions of men while a derived form is used for women. This is said to retain traces of the male-dominated society of late 19th-century Europe, of which Esperanto is a product. These nouns are primarily titles, such as baron/baroness, and kinship terms, such as sinjoro "Mr, sir" vs. sinjorino "Ms, lady" and patro "father" vs. patrino "mother". Before the movement toward equal rights for women, this also applied to professional roles assumed to be predominantly male, such as doktoro, a PhD doctor (male or unspecified), versus doktorino, a female PhD. This was analogous to the situation with the English suffix -ess, as in the words waiter/waitress, etc.

On the other hand, the pronoun ĝi ("it") may be used generically to mean he/she/they; the pronoun li ("he") is always masculine and ŝi ("she") is always feminine, despite some authors' arguments. A gender-neutral singular pronoun, ri, has gradually become more widely used in recent years, although it is not yet universal. The plural pronoun ili ("they") is always neutral, as are nouns with the prefix ge-, such as gesinjoroj (equivalent to sinjoro kaj sinjorino, "Mr. and Ms.") and gepatroj "parents" (equivalent to patro kaj patrino, "father and mother").

Speakers of languages without grammatical case or adjectival agreement frequently criticise these aspects of Esperanto. In addition, in the past some people found the Classical Greek forms of the plural (nouns in -oj, adjectives in -aj) to be awkward, proposing instead that Italian -i be used for nouns and that no plural be used for adjectives. These suggestions were adopted by the Ido reform. A reply to that criticism is that the presence of an accusative case allows much freedom in word order, e.g.
for emphasis ("Johano batis Petron", John hit Peter; "Petron batis Johano", it is Peter whom John hit), that its absence in the "predicate of the object" avoids ambiguity ("Mi vidis la blankan domon", I saw the white house; "Mi vidis la domon blanka", the house seemed white to me) and that adjective agreement allows, among others, the use of hyperbaton in poetry (as in Latin, cf. Virgil's Eclogue 1:1 Tityre, tu patulæ recubans sub tegmine fagi… where "patulæ" (spread out) is epithet to "fagi" (beech) and their agreement in the genitive feminine binds them notwithstanding their distance in the verse). The Esperanto alphabet uses two diacritics: the circumflex and the breve. The alphabet was designed with a French typewriter in mind, and although modern computers support Unicode, entering the letters with diacritic marks can be more or less problematic with certain operating systems or hardware. One of the first reform proposals (for Esperanto 1894) sought to do away with these marks and the language Ido went back to the basic Latin alphabet. One common criticism is that Esperanto has failed to live up to the hopes of its creator, who dreamed of it becoming a universal second language. Because people were reluctant to learn a new language which hardly anyone spoke, Zamenhof asked people to sign a promise to start learning Esperanto once ten million people made the same promise. He "was disappointed to receive only a thousand responses." However, Zamenhof had the goal to "enable the learner to make direct use of his knowledge with persons of any nationality, whether the language be universally accepted or not", as he wrote in 1887. The language is currently spoken by people living in more than 100 countries; there are about 2,000 native Esperanto speakers and probably up to 100,000 people who use the language regularly. In this regard, Zamenhof was well aware that it might take much time for Esperanto to achieve his desired goals. In his speech at the 1907 World Esperanto Congress in Cambridge he said, "we hope that earlier or later, maybe after many centuries, on a neutral language foundation, understanding one another, the nations will build ... a big family circle." The poet Wisława Szymborska expressed doubt that Esperanto could "produce works of lasting value," saying it is "an artificial language without variety or dialects" and that "no one thinks in Esperanto." Esperantists have replied that "lasting value" is a statement of opinion, that Esperanto grew "naturally" by the actions of its speakers on Zamenhof's intentionally elementary Fundamento, and that people do think in Esperanto. There are some geographical and astronomical features named after Esperanto, or after its creator L. L. Zamenhof. These include Esperanto Island in Antarctica, and the asteroids 1421 Esperanto and 1462 Zamenhof discovered by Finnish astronomer and Esperantist Yrjö Väisälä. (...) ni esperas, ke pli aŭ malpli frue, eble post multaj jarcentoj, Sur neŭtrala lingva fundamento, Komprenante unu la alian, La popoloj faros en konsento Unu grandan rondon familian.
[ { "paragraph_id": 0, "text": "Esperanto (/ˌɛspəˈrɑːntoʊ/, /-æntoʊ/) is the world's most widely spoken constructed international auxiliary language. Created by the Warsaw-based ophthalmologist L. L. Zamenhof in 1887, it is intended to be a universal second language for international communication, or \"the international language\" (la Lingvo Internacia). Zamenhof first described the language in Dr. Esperanto's International Language (Esperanto: Unua Libro), which he published under the pseudonym Doktoro Esperanto. Early adopters of the language liked the name Esperanto and soon used it to describe his language. The word esperanto translates into English as \"one who hopes\".", "title": "" }, { "paragraph_id": 1, "text": "Within the range of constructed languages, Esperanto occupies a middle ground between \"naturalistic\" (imitating existing natural languages) and a priori (where features are not based on existing languages). Esperanto's vocabulary, syntax and semantics derive predominantly from languages of the Indo-European group. The vocabulary derives primarily from Romance languages, with substantial contributions from Germanic languages. One of the language's most notable features is its extensive system of derivation, where prefixes and suffixes may be freely combined with roots to generate words, making it possible to communicate effectively with a smaller set of words.", "title": "" }, { "paragraph_id": 2, "text": "Esperanto is the most successful constructed international auxiliary language, and the only such language with a sizeable population of native speakers, of which there are perhaps several thousand. Usage estimates are difficult, but two estimates put the number of people who know how to speak Esperanto at around 100,000. Concentration of speakers is highest in Europe, East Asia, and South America. Although no country has adopted Esperanto officially, Esperantujo (\"Esperanto-land\") is used as a name for the collection of places where it is spoken. The language has also gained a noticeable presence on the internet in recent years, as it became increasingly accessible on platforms such as Duolingo, Wikipedia, Amikumu and Google Translate. Esperanto speakers are often called \"Esperantists\" (Esperantistoj).", "title": "" }, { "paragraph_id": 3, "text": "Esperanto has not been a secondary official language of any recognized country, but it entered the education systems of several countries, such as Hungary and China.", "title": "Official use" }, { "paragraph_id": 4, "text": "There were plans at the beginning of the 20th century to establish Neutral Moresnet, in central-western Europe, as the world's first Esperanto state; any such plans came to an end when the Treaty of Versailles awarded the disputed territory to Belgium, effective January 10, 1920. In addition, the self-proclaimed artificial island micronation of Rose Island, near Italy in the Adriatic Sea, used Esperanto as its official language in 1968, and another micronation, the extant Republic of Molossia, near Dayton, Nevada, uses Esperanto as an official language alongside English.", "title": "Official use" }, { "paragraph_id": 5, "text": "The Chinese government has used Esperanto since 2001 for daily news on china.org.cn. 
China also uses Esperanto in China Radio International, and for the internet magazine El Popola Ĉinio.", "title": "Official use" }, { "paragraph_id": 6, "text": "The Vatican Radio has an Esperanto version of its podcasts and its website.", "title": "Official use" }, { "paragraph_id": 7, "text": "The United States Army has published military phrase books in Esperanto, to be used from the 1950s until the 1970s in war games by mock enemy forces. A field reference manual, FM 30-101-1 Feb. 1962, contained the grammar, English-Esperanto-English dictionary, and common phrases. In the 1970s Esperanto was used as the basis for Defense Language Aptitude Tests.", "title": "Official use" }, { "paragraph_id": 8, "text": "Esperanto is the working language of several non-profit international organizations such as the Sennacieca Asocio Tutmonda, a left-wing cultural association which had 724 members in over 85 countries in 2006. There is also Education@Internet, which has developed from an Esperanto organization; most others are specifically Esperanto organizations. The largest of these, the Universal Esperanto Association, has an official consultative relationship with the United Nations and UNESCO, which recognized Esperanto as a medium for international understanding in 1954. The Universal Esperanto Association collaborated in 2017 with UNESCO to deliver an Esperanto translation of its magazine UNESCO Courier (Unesko Kuriero en Esperanto). The World Health Organization offers an Esperanto version of the coronavirus pandemic (COVID-19, Esperanto: KOVIM-19) occupational safety and health education course.", "title": "Official use" }, { "paragraph_id": 9, "text": "Esperanto was also the first language of teaching and administration of the International Academy of Sciences San Marino.", "title": "Official use" }, { "paragraph_id": 10, "text": "The League of Nations made attempts to promote teaching Esperanto in member countries, but the resolutions were defeated mainly by French delegates, who did not feel there was a need for it.", "title": "Official use" }, { "paragraph_id": 11, "text": "In the summer of 1924, the American Radio Relay League adopted Esperanto as its official international auxiliary language, and hoped that the language would be used by radio amateurs in international communications, but its actual use for radio communications was negligible.", "title": "Official use" }, { "paragraph_id": 12, "text": "All the personal documents sold by the World Service Authority, including the World Passport, are written in Esperanto, together with English, French, Spanish, Russian, Arabic, and Chinese (the official languages of the United Nations).", "title": "Official use" }, { "paragraph_id": 13, "text": "Esperanto was created in the late 1870s and early 1880s by L. L. Zamenhof, a Polish-Jewish ophthalmologist from Białystok, then part of the Russian Empire, but now part of Poland. In the 1870s, just a few years before Zamenhof created Esperanto, Polish was banned in public places in Białystok.", "title": "History" }, { "paragraph_id": 14, "text": "According to Zamenhof, he created the language to reduce the \"time and labor we spend in learning foreign tongues\", and to foster harmony between people from different countries: \"Were there but an international language, all translations would be made into it alone ... 
and all nations would be united in a common brotherhood.\" His feelings and the situation in Białystok may be gleaned from an extract from his letter to Nikolai Borovko:", "title": "History" }, { "paragraph_id": 15, "text": "The place where I was born and spent my childhood gave direction to all my future struggles. In Białystok the inhabitants were divided into four distinct elements: Russians, Poles, Germans, and Jews; each of these spoke their own language and looked on all the others as enemies. In such a town a sensitive nature feels more acutely than elsewhere the misery caused by language division and sees at every step that the diversity of languages is the first, or at least the most influential, basis for the separation of the human family into groups of enemies. I was brought up as an idealist; I was taught that all people were brothers, while outside in the street at every step I felt that there were no people, only Russians, Poles, Germans, Jews, and so on. This was always a great torment to my infant mind, although many people may smile at such an 'anguish for the world' in a child. Since at that time I thought that 'grown-ups' were omnipotent, I often said to myself that when I grew up I would certainly destroy this evil.", "title": "History" }, { "paragraph_id": 16, "text": "It was invented in 1887 and designed so that anyone could learn it in a few short months. Dr. Zamenhof lived on Dzika Street, No. 9, which was just around the corner from the street on which we lived. Brother Afrum was so impressed with that idea that he learned Esperanto in a very short time at home from a little book. He then bought many dozens of them and gave them out to relatives, friends, just anyone he could, to support that magnificent idea for he felt that this would be a common bond to promote relationships with fellow men in the world. A group of people had organized and sent letters to the government asking to change the name of the street where Dr. Zamenhof lived for many years when he invented Esperanto, from Dzika to Zamenhofa. They were told that a petition with a large number of signatures would be needed. That took time so they organized demonstrations carrying large posters encouraging people to learn the universal language and to sign the petitions... About the same time, in the middle of the block marched a huge demonstration of people holding posters reading \"Learn Esperanto\", \"Support the Universal language\", \"Esperanto the language of hope and expectation\", \"Esperanto the bond for international communication\" and so on, and many \"Sign the petitions\". I will never forget that rich-poor, sad-glad parade and among all these people stood two fiery red tramway cars waiting on their opposite lanes and also a few dorożkas with their horses squeezed in between. Such a sight it was. Later a few blocks were changed from Dzika Street to Dr. 
Zamenhofa Street and a nice monument was erected there with his name and his invention inscribed on it, to honor his memory.", "title": "History" }, { "paragraph_id": 17, "text": "Zamenhof's goal was to create an easy and flexible language that would serve as a universal second language, to foster world peace and international understanding, and to build a \"community of speakers\".", "title": "History" }, { "paragraph_id": 18, "text": "His original title for the language was simply \"the international language\" (la lingvo internacia), but early speakers grew fond of the name Esperanto, and began to use it as the name for the language just two years after its creation. The name quickly gained prominence, and has been used as an official name ever since.", "title": "History" }, { "paragraph_id": 19, "text": "In 1905, Zamenhof published the Fundamento de Esperanto as a definitive guide to the language. Later that year, French Esperantists organized with his participation the first World Esperanto Congress, an ongoing annual conference, in Boulogne-sur-Mer, France. Zamenhof also proposed to the first congress that an independent body of linguistic scholars should steward the future evolution of Esperanto, foreshadowing the founding of the Akademio de Esperanto (in part modeled after the Académie Française), which was established soon thereafter. Since then, world congresses have been held in different countries every year, except during the two World Wars, and the 2020 COVID-19 pandemic (when it was moved to an online-only event). Since the Second World War, they have been attended by an average of more than 2,000 people, and up to 6,000 people at the most.", "title": "History" }, { "paragraph_id": 20, "text": "Zamenhof wrote that he wanted mankind to \"learn and use ... en masse ... the proposed language as a living one\". The goal for Esperanto to become a global auxiliary language was not Zamenhof's only goal; he also wanted to \"enable the learner to make direct use of his knowledge with persons of any nationality, whether the language be universally accepted or not; in other words, the language is to be directly a means of international communication.\"", "title": "History" }, { "paragraph_id": 21, "text": "After some ten years of development, which Zamenhof spent translating literature into Esperanto, as well as writing original prose and verse, the first book of Esperanto grammar was published in Warsaw on July 26, 1887. The number of speakers grew rapidly over the next few decades; at first, primarily in the Russian Empire and Central Europe, then in other parts of Europe, the Americas, China, and Japan. In the early years before the world congresses, speakers of Esperanto kept in contact primarily through correspondence and periodicals.", "title": "History" }, { "paragraph_id": 22, "text": "Zamenhof's name for the language was simply Internacia Lingvo (\"International Language\"). December 15, Zamenhof's birthday, is now regarded as Zamenhof Day or Esperanto Book Day.", "title": "History" }, { "paragraph_id": 23, "text": "The autonomous territory of Neutral Moresnet, between what is today Belgium and Germany, had a sizable proportion of Esperanto-speaking citizens among its small, diverse population. 
There was a proposal to make Esperanto its official language.", "title": "History" }, { "paragraph_id": 24, "text": "However, neither Belgium nor Germany had surrendered their claims to the region, with the latter having adopted a more aggressive stance towards pursuing its claim around the turn of the century, even being accused of sabotage and administrative obstruction to force the issue. The outbreak of World War I would bring about the end of neutrality, with Moresnet initially left as \"an oasis in a desert of destruction\" following the German invasion of Belgium. The territory was formally annexed by Prussia in 1915, though without international recognition.", "title": "History" }, { "paragraph_id": 25, "text": "After the war, a great opportunity for Esperanto seemingly presented itself, when the Iranian delegation to the League of Nations proposed that the language be adopted for use in international relations following a report by a Japanese delegate to the League named Nitobe Inazō, in the context of the 13th World Congress of Esperanto, held in Prague. Ten delegates accepted the proposal with only one voice against, the French delegate, Gabriel Hanotaux. Hanotaux opposed all recognition of Esperanto at the League, from the first resolution on December 18, 1920, and subsequently through all efforts during the next three years. Hanotaux did not approve of how the French language was losing its position as the international language and saw Esperanto as a threat, effectively wielding his veto power to block the decision. However, two years later, the League recommended that its member states include Esperanto in their educational curricula. The French government retaliated by banning all instruction in Esperanto in France's schools and universities. The French Ministry of Public Instruction said that \"French and English would perish and the literary standard of the world would be debased\". Nonetheless, many people see the 1920s as the heyday of the Esperanto movement. During this time, Anarchism as a political movement was very supportive of both anationalism and the Esperanto language.", "title": "History" }, { "paragraph_id": 26, "text": "Fran Novljan was one of the chief promoters of Esperanto in the former Kingdom of Yugoslavia. He was among the founders of the Croatian Prosvjetni savez (Educational Alliance), of which he was the first secretary, and organized Esperanto institutions in Zagreb. Novljan collaborated with Esperanto newspapers and magazines, and was the author of the Esperanto textbook Internacia lingvo esperanto i Esperanto en tridek lecionoj.", "title": "History" }, { "paragraph_id": 27, "text": "In 1920s Korea, socialist thinkers pushed for the use of Esperanto through a series of columns in The Dong-a Ilbo as resistance to both Japanese occupation as well as a counter to the growing nationalist movement for Korean language standardization. This lasted until the Mukden Incident in 1931, when changing colonial policy led to an outright ban on Esperanto education in Korea.", "title": "History" }, { "paragraph_id": 28, "text": "Esperanto attracted the suspicion of many states. Repression was especially pronounced in Nazi Germany, Francoist Spain up until the 1950s, and the Soviet Union under Stalin, from 1937 to 1956.", "title": "History" }, { "paragraph_id": 29, "text": "In Nazi Germany, there was a motivation to ban Esperanto because Zamenhof was Jewish, and due to the internationalist nature of Esperanto, which was perceived as \"Bolshevist\". 
In Mein Kampf, Adolf Hitler specifically mentioned Esperanto as an example of a language that could be used by an international Jewish conspiracy once they achieved world domination. Esperantists were killed during the Holocaust, with Zamenhof's family in particular singled out for death. The efforts of a minority of German Esperantists to expel their Jewish colleagues and overtly align themselves with the Reich were futile, and Esperanto was legally forbidden in 1935. Esperantists in German concentration camps did, however, teach Esperanto to fellow prisoners, telling guards they were teaching Italian, the language of one of Germany's Axis allies.", "title": "History" }, { "paragraph_id": 30, "text": "In Imperial Japan, the left wing of the Japanese Esperanto movement was forbidden, but its leaders were careful enough not to give the impression to the government that the Esperantists were socialist revolutionaries, which proved a successful strategy.", "title": "History" }, { "paragraph_id": 31, "text": "After the October Revolution of 1917, Esperanto was given a measure of government support by the new communist states in the former Russian Empire and later by the Soviet Union government, with the Soviet Esperantist Union being established as an organization that, temporarily, was officially recognized. In his biography of Joseph Stalin, Leon Trotsky mentions that Stalin had studied Esperanto. However, in 1937, at the height of the Great Purge, Stalin completely reversed the Soviet government's policies on Esperanto; many Esperanto speakers were executed, exiled or held in captivity in the Gulag labour camps. Quite often the accusation was: \"You are an active member of an international spy organization which hides itself under the name of 'Association of Soviet Esperantists' on the territory of the Soviet Union.\" Until the end of the Stalin era, it was dangerous to use Esperanto in the Soviet Union, even though it was never officially forbidden to speak Esperanto.", "title": "History" }, { "paragraph_id": 32, "text": "Fascist Italy allowed the use of Esperanto, finding its phonology similar to that of Italian and publishing some tourist material in the language.", "title": "History" }, { "paragraph_id": 33, "text": "During and after the Spanish Civil War, Francoist Spain suppressed anarchists, socialists and Catalan nationalists, among whom the use of Esperanto was extensive, for many years, but in the 1950s the Esperanto movement was again tolerated.", "title": "History" }, { "paragraph_id": 34, "text": "In 1954, the United Nations — through UNESCO — granted official support to Esperanto as an international auxiliary language in the Montevideo Resolution. However, Esperanto is still not one of the official languages of the UN.", "title": "History" }, { "paragraph_id": 35, "text": "The development of Esperanto has continued unabated into the 21st century. The advent of the Internet has had a significant impact on the language, as learning it has become increasingly accessible on platforms such as Duolingo, and as speakers have increasingly networked on platforms such as Amikumu. With up to two million speakers, it is the most widely spoken constructed language in the world.
Although no country has adopted Esperanto officially, Esperantujo (\"Esperanto-land\") is the name given to the collection of places where it is spoken.", "title": "History" }, { "paragraph_id": 36, "text": "While many of its advocates continue to hope for the day that Esperanto becomes officially recognized as the international auxiliary language, some (including raŭmistoj) have stopped focusing on this goal and instead view the Esperanto community as a stateless diasporic linguistic group based on freedom of association.", "title": "History" }, { "paragraph_id": 37, "text": "On May 28, 2015, the language learning platform Duolingo launched a free Esperanto course for English speakers. On March 25, 2016, when the first Duolingo Esperanto course completed its beta-testing phase, the course had 350,000 people registered to learn Esperanto through the medium of English. By July 2018, the number of learners had risen to 1.36 million. On July 20, 2018, Duolingo changed from recording users cumulatively to reporting only the number of \"active learners\" (i.e., those who are studying at the time and have not yet completed the course), which as of October 2022 stands at 299,000 learners.", "title": "Internet" }, { "paragraph_id": 38, "text": "On October 26, 2016, a second Duolingo Esperanto course, for which the language of instruction is Spanish, appeared on the same platform; as of April 2021 it has a further 176,000 students. A third Esperanto course, taught in Brazilian Portuguese, began its beta-testing phase on May 14, 2018; 220,000 people were using this course as of April 2021 and 155,000 as of May 2022. A fourth Esperanto course, taught in French, began its beta-testing phase in July 2020; it had 72,500 students as of March 2021 and 101,000 as of May 2022.", "title": "Internet" }, { "paragraph_id": 39, "text": "As of October 2018, Lernu!, another online learning platform for Esperanto, has 320,000 registered users and nearly 75,000 monthly visits; 50,000 users possess at least a basic understanding of Esperanto.", "title": "Internet" }, { "paragraph_id": 40, "text": "On February 22, 2012, Google Translate added Esperanto as its 64th language. On July 25, 2016, Yandex Translate added Esperanto as a language.", "title": "Internet" }, { "paragraph_id": 41, "text": "With about 347,000 articles, Esperanto Wikipedia (Vikipedio) is the 36th-largest Wikipedia, as measured by the number of articles, and is the largest Wikipedia in a constructed language. About 150,000 users consult the Vikipedio regularly, as attested by Wikipedia's automatically aggregated log-in data, which showed that in October 2019 the website had 117,366 unique individual visitors per month, plus 33,572 who viewed the site on a mobile device instead.", "title": "Internet" }, { "paragraph_id": 42, "text": "Esperanto's phonology, grammar, vocabulary, and semantics are based on the Indo-European languages spoken in Europe. Some evidence has shown that Zamenhof studied German, English, Spanish, Lithuanian, Italian and French, and knew 13 different languages, which had an influence on Esperanto's linguistic properties.", "title": "Linguistic properties" }, { "paragraph_id": 43, "text": "Esperanto has been described as \"a language lexically predominantly Romanic, morphologically intensively agglutinative, and to a certain degree isolating in character\". Typologically, Esperanto has prepositions and a pragmatic word order that by default is subject–verb–object (SVO).
Adjectives can be freely placed before or after the nouns they modify, though placing them before the noun is more common. New words are formed through extensive use of affixes and compounds.", "title": "Linguistic properties" }, { "paragraph_id": 44, "text": "Esperanto typically has 22 to 24 consonants (depending on the phonemic analysis and individual speaker), five vowels, and two semivowels that combine with the vowels to form six diphthongs. (The consonant /j/ and semivowel /i̯/ are both written ⟨j⟩, and the uncommon consonant /dz/ is written with the digraph ⟨dz⟩, which is the only consonant that does not have its own letter.) Tone is not used to distinguish meanings of words. Stress is always on the second-to-last vowel in proper Esperanto words, unless a final vowel o is elided, a phenomenon mostly occurring in poetry. For example, familio \"family\" is [fa.mi.ˈli.o], with the stress on the second i, but when the word is used without the final o (famili’), the stress remains on the second i : [fa.mi.ˈli].", "title": "Linguistic properties" }, { "paragraph_id": 45, "text": "The 23 consonants are:", "title": "Linguistic properties" }, { "paragraph_id": 46, "text": "There is some degree of allophony:", "title": "Linguistic properties" }, { "paragraph_id": 47, "text": "A large number of consonant clusters can occur, up to three in initial position (as in stranga, \"strange\") and five in medial position (as in ekssklavo, \"former slave\"). Final clusters are uncommon except in unassimilated names, poetic elision of final o, and a very few basic words such as cent \"hundred\" and post \"after\".", "title": "Linguistic properties" }, { "paragraph_id": 48, "text": "Esperanto has the five vowels found in such languages as Spanish, Modern Hebrew, and Modern Greek.", "title": "Linguistic properties" }, { "paragraph_id": 49, "text": "Since there are only five vowels, a good deal of variation in pronunciation is tolerated. For instance, e commonly ranges from [e] (French é) to [ɛ] (French è). These details often depend on the speaker's native language. A glottal stop may occur between adjacent vowels in some people's speech, especially when the two vowels are the same, as in heroo \"hero\" ([he.ˈro.o] or [he.ˈro.ʔo]) and praavo \"great-grandfather\" ([pra.ˈa.vo] or [pra.ˈʔa.vo]).", "title": "Linguistic properties" }, { "paragraph_id": 50, "text": "The Esperanto alphabet is based on the Latin script, using a one-sound-one-letter principle, with the exception of [d͡z]. It includes six letters with diacritics: five with circumflexes (⟨ĉ⟩, ⟨ĝ⟩, ⟨ĥ⟩, ⟨ĵ⟩, and ⟨ŝ⟩), and one with a breve (⟨ŭ⟩). The alphabet does not include the letters ⟨q⟩, ⟨w⟩, ⟨x⟩, or ⟨y⟩, which are only used in the writing of proper names and unassimilated borrowings.", "title": "Linguistic properties" }, { "paragraph_id": 51, "text": "The 28-letter alphabet is:", "title": "Linguistic properties" }, { "paragraph_id": 52, "text": "All letters lacking diacritics are pronounced approximately as their respective IPA symbols, with the exception of ⟨c⟩.", "title": "Linguistic properties" }, { "paragraph_id": 53, "text": "The letters ⟨j⟩ and ⟨c⟩ are used in a way that is familiar to speakers of many Central and Eastern European languages, but may be unfamiliar to English speakers. ⟨j⟩ has the sound of English ⟨y⟩, as in yellow and boy (Esperanto jes has the same pronunciation as its English cognate yes), and ⟨c⟩ has a \"ts\" sound, as in hits or the ⟨zz⟩ in pizza. In addition, the ⟨g⟩ in Esperanto is always 'hard', as in gift. 
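The fixed stress rule described above is mechanical enough to sketch in a few lines of Python. The function below is an illustrative toy, not part of any Esperanto tooling: it counts only the five vowel letters, which also handles diphthongs correctly (the semivowel ŭ is not a syllable nucleus), though the poetic elision case (famili') is deliberately left out.

```python
# A toy illustration of the fixed stress rule: stress falls on the
# second-to-last vowel (a, e, i, o, u). The semivowel u-breve is not a
# syllable nucleus, so it is correctly skipped by counting only the five
# vowel letters. The poetic elision case (famili') is not handled.
VOWELS = set("aeiou")

def stressed_vowel_index(word: str) -> int:
    """Return the index into `word` of the vowel that carries stress."""
    positions = [i for i, ch in enumerate(word.lower()) if ch in VOWELS]
    if not positions:
        return -1                # no vowel at all (not a normal word)
    if len(positions) == 1:
        return positions[0]      # one syllable: stress the only vowel
    return positions[-2]         # otherwise: the penultimate vowel

for w in ["familio", "Esperanto", "ankaŭ"]:
    i = stressed_vowel_index(w)
    print(f"{w}: stress on '{w[i]}' at index {i}")
# familio: the second i, Esperanto: the a, ankaŭ: the first a
```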
Esperanto makes use of the five-vowel system, essentially identical to the vowels of Spanish and Modern Greek.", "title": "Linguistic properties" }, { "paragraph_id": 54, "text": "The accented letters are:", "title": "Linguistic properties" }, { "paragraph_id": 55, "text": "According to one of Zamenhof's entries in the Lingvaj respondoj, the letter ⟨n⟩ ought to be pronounced as [n] in all cases, but a rendering as [ŋ] is admissible before ⟨g⟩, ⟨k⟩, and ⟨ĥ⟩.", "title": "Linguistic properties" }, { "paragraph_id": 56, "text": "Even with the widespread adoption of Unicode, the letters with diacritics (found in the \"Latin-Extended A\" section of the Unicode Standard) can cause problems with printing and computing, because they are not found on most physical keyboards and are left out of certain fonts.", "title": "Linguistic properties" }, { "paragraph_id": 57, "text": "There are two principal workarounds to this problem, which substitute digraphs for the accented letters. Zamenhof, the inventor of Esperanto, created an \"h-convention\", which replaces ⟨ĉ⟩, ⟨ĝ⟩, ⟨ĥ⟩, ⟨ĵ⟩, ⟨ŝ⟩, and ⟨ŭ⟩ with ⟨ch⟩, ⟨gh⟩, ⟨hh⟩, ⟨jh⟩, ⟨sh⟩, and ⟨u⟩, respectively. If used in a database, a program in principle could not determine whether to render, for example, ⟨ch⟩ as /c/ followed by /h/ or as /ĉ/, and would fail to render, for example, the word senchava unambiguously unless its component parts were intentionally separated, as in senc·hava. A more recent x-convention has also gained prominence with the advent of computing, utilizing an otherwise absent ⟨x⟩ to produce the digraphs ⟨cx⟩, ⟨gx⟩, ⟨hx⟩, ⟨jx⟩, ⟨sx⟩, and ⟨ux⟩; this has the incidental advantage of alphabetizing correctly in most cases, since the only letter after ⟨x⟩ is ⟨z⟩.", "title": "Linguistic properties" }, { "paragraph_id": 58, "text": "There are computer keyboard layouts that support the Esperanto alphabet, and some systems use software that automatically replaces x- or h-convention digraphs with the corresponding diacritic letters (for example, Amiketo for Microsoft Windows, Mac OS X, and Linux, Esperanta Klavaro for Windows Phone, and Gboard and AnySoftKeyboard for Android).", "title": "Linguistic properties" }, { "paragraph_id": 59, "text": "On Linux, the GNOME, Cinnamon, and KDE desktop environments support the entry of characters with Esperanto diacritics.", "title": "Linguistic properties" }, { "paragraph_id": 60, "text": "Criticisms are levied against the letters with circumflex diacritics, which some find odd or cumbersome, along with their being invented specifically for Esperanto rather than borrowed from existing languages. Additionally, some of them are arguably unnecessary — for example, the use of ĥ instead of x and ŭ instead of w. However, Zamenhof did not choose these letters arbitrarily: In fact, they were inspired by Czech letters with the caron diacritic but replaced the caron with a circumflex for the ease of those who had access to a French typewriter (with a circumflex dead-key). The Czech letter ž was replaced with ĵ to match the French letter j with the same sound. The letter ŭ on the other hand comes from the u-breve used in Latin prosody, and is also speculated to be inspired by the Belarusian Cyrillic letter ў; French typewriters can render it approximately as the French letter ù.", "title": "Linguistic properties" }, { "paragraph_id": 61, "text": "Esperanto words are mostly derived by stringing together roots, grammatical endings, and at times prefixes and suffixes. 
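Because the h- and x-conventions described above are plain character substitutions, they are easy to automate. The snippet below is a minimal sketch of the x-convention (the function names are illustrative, not from any standard library); it also shows why the h-convention cannot be decoded blindly.

```python
# Minimal sketch of the x-convention: each accented letter pairs with a
# digraph built on x, a letter otherwise absent from Esperanto, so the
# substitution is unambiguous in both directions.
X_PAIRS = [("cx", "ĉ"), ("gx", "ĝ"), ("hx", "ĥ"),
           ("jx", "ĵ"), ("sx", "ŝ"), ("ux", "ŭ")]

def from_x_convention(text: str) -> str:
    """Decode x-convention digraphs into accented letters."""
    for digraph, letter in X_PAIRS:
        text = text.replace(digraph, letter)
        text = text.replace(digraph.capitalize(), letter.upper())
    return text

def to_x_convention(text: str) -> str:
    """Encode accented letters as x-convention digraphs."""
    for digraph, letter in X_PAIRS:
        text = text.replace(letter, digraph)
        text = text.replace(letter.upper(), digraph.capitalize())
    return text

print(from_x_convention("ehxosxangxo cxiujxauxde"))  # eĥoŝanĝo ĉiuĵaŭde
# The h-convention is not reversible in the same way: in "senchava" the
# ch is a genuine c + h sequence, which a naive decoder would wrongly
# collapse into "senĉava".
```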
This process is regular so that people can create new words as they speak and be understood. Compound words are formed with a modifier-first, head-final order, as in English (compare \"birdsong\" and \"songbird,\" and likewise, birdokanto and kantobirdo). Speakers may optionally insert an o between the words in a compound noun if placing them together directly without the o would make the resulting word hard to say or understand.", "title": "Linguistic properties" }, { "paragraph_id": 62, "text": "The different parts of speech are marked by their own suffixes: all common nouns are marked with the suffix -o, all adjectives with -a, all derived adverbs with -e, and all verbs except the jussive (or imperative) and infinitive end in -s, specifically in one of six tense and mood suffixes, such as the present tense -as; the jussive mood, which is tenseless, ends in -u. Nouns and adjectives have two cases: nominative for grammatical subjects and in general, and accusative for direct objects and (after a preposition) to indicate direction of movement.", "title": "Linguistic properties" }, { "paragraph_id": 63, "text": "Singular nouns used as grammatical subjects end in -o, plural subject nouns in -oj (pronounced [oi̯] like English \"oy\"). Singular direct object forms end in -on, and plural direct objects with the combination -ojn ([oi̯n]; rhymes with \"coin\"): -o indicates that the word is a noun, -j indicates the plural, and -n indicates the accusative (direct object) case. Adjectives agree with their nouns; their endings are singular subject -a ([a]; rhymes with \"ha!\"), plural subject -aj ([ai̯], pronounced \"eye\"), singular object -an, and plural object -ajn ([ai̯n]; rhymes with \"fine\").", "title": "Linguistic properties" }, { "paragraph_id": 64, "text": "The suffix -n, besides indicating the direct object, is used to indicate movement and a few other things as well.", "title": "Linguistic properties" }, { "paragraph_id": 65, "text": "The six verb inflections consist of three tenses and three moods. They are present tense -as, future tense -os, past tense -is, infinitive mood -i, conditional mood -us and jussive mood -u (used for wishes and commands). Verbs are not marked for person or number. Thus, kanti means \"to sing\", mi kantas means \"I sing\", vi kantas means \"you sing\", and ili kantas means \"they sing\".", "title": "Linguistic properties" }, { "paragraph_id": 66, "text": "Word order is comparatively free. Adjectives may precede or follow nouns; subjects, verbs and objects may occur in any order. However, the article la \"the\", demonstratives such as tiu \"that\" and prepositions (such as ĉe \"at\") must come before their related nouns. Similarly, the negative ne \"not\" and conjunctions such as kaj \"and\" and ke \"that\" must precede the phrase or clause that they introduce. In copular (A = B) clauses, word order is just as important as in English: \"people are animals\" is distinguished from \"animals are people\".", "title": "Linguistic properties" }, { "paragraph_id": 67, "text": "The core vocabulary of Esperanto was defined by Lingvo internacia, published by Zamenhof in 1887. This book listed 917 roots; these could be expanded into tens of thousands of words using prefixes, suffixes, and compounding. In 1894, Zamenhof published the first Esperanto dictionary, Universala Vortaro, which had a larger set of roots. 
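The regular endings just described make it possible to generate a word's inflectional forms mechanically. The sketch below, with two illustrative roots (hund- "dog", kant- "sing"), builds the four noun forms and the six verb forms; it is a demonstration of the regularity, not a full morphological engine (adjectives, correlatives, and affix derivation are ignored).

```python
# Sketch of Esperanto's regular inflection (illustrative, not a real library).
# Nouns: root + -o, then optional -j (plural) and -n (accusative), in that order.
# Verbs: root + one of six tense/mood endings; no person or number marking.

def noun_forms(root: str) -> dict[str, str]:
    return {
        "singular subject": root + "o",    # hundo   "dog"
        "plural subject":   root + "oj",   # hundoj  "dogs"
        "singular object":  root + "on",   # hundon
        "plural object":    root + "ojn",  # hundojn
    }

VERB_ENDINGS = {"present": "as", "past": "is", "future": "os",
                "conditional": "us", "jussive": "u", "infinitive": "i"}

def verb_forms(root: str) -> dict[str, str]:
    return {name: root + ending for name, ending in VERB_ENDINGS.items()}

print(noun_forms("hund"))  # hund- "dog"
print(verb_forms("kant"))  # kant- "sing": kantas, kantis, kantos, kantus, kantu, kanti
```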
The rules of the language allowed speakers to borrow new roots as needed; it was recommended, however, that speakers use the most international forms and then derive related meanings from these.", "title": "Linguistic properties" }, { "paragraph_id": 68, "text": "Since then, many words have been borrowed, primarily (but not solely) from the European languages. Not all proposed borrowings become widespread, but many do, especially technical and scientific terms. Terms for everyday use, on the other hand, are more likely to be derived from existing roots; komputilo \"computer\", for instance, is formed from the verb komputi \"compute\" and the suffix -ilo \"tool\". Words are also calqued; that is, words acquire new meanings based on usage in other languages. For example, the word muso \"mouse\" has acquired the meaning of a computer mouse from its usage in many languages (English mouse, French souris, Dutch muis, Spanish ratón, etc.). Esperanto speakers often debate about whether a particular borrowing is justified or whether the meaning can be expressed by deriving from or extending the meaning of existing words.", "title": "Linguistic properties" }, { "paragraph_id": 69, "text": "Some compounds and formed words in Esperanto are not entirely straightforward; for example, eldoni, literally \"give out\", means \"publish\", paralleling the usage of certain European languages (such as German herausgeben, Dutch uitgeven, Russian издать izdat'). In addition, the suffix -um- has no defined meaning; words using the suffix must be learned separately (such as dekstren \"to the right\" and dekstrumen \"clockwise\").", "title": "Linguistic properties" }, { "paragraph_id": 70, "text": "There are not many idiomatic or slang words in Esperanto, as these forms of speech tend to make international communication difficult—working against Esperanto's main goal. The language contains several calques of Polish expressions.", "title": "Linguistic properties" }, { "paragraph_id": 71, "text": "In keeping with the endeavor to create an international language, new roots tend to be taken from European languages rather than derived from existing Esperanto roots.", "title": "Linguistic properties" }, { "paragraph_id": 72, "text": "Ĉiuj homoj estas denaske liberaj kaj egalaj laŭ digno kaj rajtoj. Ili posedas racion kaj konsciencon, kaj devus konduti unu al alia en spirito de frateco.", "title": "Linguistic properties" }, { "paragraph_id": 73, "text": "All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.", "title": "Linguistic properties" }, { "paragraph_id": 74, "text": "The Universal Declaration of Human Rights, Article I", "title": "Linguistic properties" }, { "paragraph_id": 75, "text": "The following short extract gives an idea of the character of Esperanto.", "title": "Linguistic properties" }, { "paragraph_id": 76, "text": "Listed below are some useful Esperanto words and phrases along with IPA transcriptions:", "title": "Linguistic properties" }, { "paragraph_id": 77, "text": "Esperanto speakers learn the language through self-directed study, online tutorials, and correspondence courses taught by volunteers. More recently, free teaching websites like lernu!
and Duolingo have become available.", "title": "Education" }, { "paragraph_id": 78, "text": "Esperanto instruction is rarely available at schools; exceptions include four primary schools in a pilot project under the supervision of the University of Manchester and, by one count, a few universities. However, outside China and Hungary, these mostly involve informal arrangements rather than dedicated departments or state sponsorship. Eötvös Loránd University in Budapest had a department of Interlinguistics and Esperanto from 1966 to 2004, after which time instruction moved to vocational colleges; there are state examinations for Esperanto instructors. Additionally, Adam Mickiewicz University in Poland offers a diploma in Interlinguistics. The Senate of Brazil passed a bill in 2009 that would make Esperanto an optional part of the curriculum in public schools, although mandatory if there is demand for it. As of 2015, the bill is still under consideration by the Chamber of Deputies.", "title": "Education" }, { "paragraph_id": 79, "text": "In the United States, Esperanto is notably offered as a weekly evening course at Stanford University's Bechtel International Center. Conversational Esperanto, The International Language, is a free drop-in class that is open to Stanford students and the general public on campus during the academic year. With administrative permission, Stanford students can take the class for two credits a quarter through the Linguistics Department. \"Even four lessons are enough to get more than just the basics,\" the Esperanto at Stanford website reads.", "title": "Education" }, { "paragraph_id": 80, "text": "Esperanto-USA suggests that Esperanto can be learned in, at most, one quarter of the amount of time required for other languages.", "title": "Education" }, { "paragraph_id": 81, "text": "The Zagreb method is an Esperanto teaching method that was developed in Zagreb, Yugoslavia (now the capital of Croatia), in the late 1970s to early 1980s as a response to the unsatisfactory learning outcomes of traditional natural-language teaching techniques when used for Esperanto. Its goal was to streamline the material in order to equip learners with practical knowledge that could be put to use in as short a time frame as possible. It is now implemented and available on some of the well-known learning websites in the community.", "title": "Education" }, { "paragraph_id": 82, "text": "From 2006 to 2011, four primary schools in Britain, with 230 pupils, followed a course in \"propaedeutic Esperanto\"—that is, instruction in Esperanto to raise language awareness and to accelerate subsequent learning of foreign languages—under the supervision of the University of Manchester. As they put it,", "title": "Education" }, { "paragraph_id": 83, "text": "Many schools used to teach children the recorder, not to produce a nation of recorder players, but as a preparation for learning other instruments. [We teach] Esperanto, not to produce a nation of Esperanto-speakers, but as a preparation for learning other languages.", "title": "Education" }, { "paragraph_id": 84, "text": "The results showed that the pupils achieved enhanced metalinguistic awareness, though the study did not indicate whether a course in a language other than Esperanto would have led to similar results. Similar studies have been conducted in New Zealand, the United States, and Germany.
The results of these studies were favorable and demonstrated that studying Esperanto before another foreign language expedites the acquisition of the other, natural language. In one study in England, a group of European secondary school students studied Esperanto for one year, then French for three years, and ended up with a better command of French than a control group that had studied French for the whole four-year period.", "title": "Education" }, { "paragraph_id": 85, "text": "Esperanto is by far the most widely spoken constructed language in the world. Speakers are most numerous in Europe and East Asia, especially in urban areas, where they often form Esperanto clubs. Esperanto is particularly prevalent in the northern and central countries of Europe; in China, Korea, Japan, and Iran within Asia; in Brazil and the United States in the Americas; and in Togo in Africa.", "title": "Community" }, { "paragraph_id": 86, "text": "Countering a common criticism against Esperanto, the statistician Svend Nielsen has found no significant correlation between the number of Esperanto speakers and the similarity of a given national native language to Esperanto. He concludes that Esperanto tends to be more popular in rich countries with widespread Internet access and a tendency to contribute more to science and culture. Linguistic diversity within a country was found to have no, or perhaps a slightly reductive, correlation with Esperanto popularity.", "title": "Community" }, { "paragraph_id": 87, "text": "An estimate of the number of Esperanto speakers was made by Sidney S. Culbert, a retired psychology professor at the University of Washington and a longtime Esperantist, who tracked down and tested Esperanto speakers in sample areas in dozens of countries over a period of twenty years. Culbert concluded that between one and two million people speak Esperanto at Foreign Service Level 3, \"professionally proficient\" (able to communicate moderately complex ideas without hesitation, and to follow speeches, radio broadcasts, etc.). Culbert's estimate was not made for Esperanto alone, but formed part of his listing of estimates for all languages of more than one million speakers, published annually in the World Almanac and Book of Facts. Culbert's most detailed account of his methodology is found in a 1989 letter to David Wolff. Since Culbert never published detailed intermediate results for particular countries and regions, it is difficult to independently gauge the accuracy of his results.", "title": "Community" }, { "paragraph_id": 88, "text": "In the Almanac, his estimates for numbers of language speakers were rounded to the nearest million, thus the number of Esperanto speakers is shown as two million. This latter figure appears in Ethnologue. Assuming that this figure is accurate, that means that about 0.03% of the world's population speaks the language. Although it does not meet Zamenhof's goal of a universal language, it still represents a level of popularity unmatched by any other constructed language.", "title": "Community" }, { "paragraph_id": 89, "text": "Marcus Sikosek (now Ziko van Dijk) has challenged this figure of 1.6 million as exaggerated. He estimated that even if Esperanto speakers were evenly distributed, assuming one million Esperanto speakers worldwide would lead one to expect about 180 in the city of Cologne.
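The arithmetic behind that expectation is a simple proportion. The inputs below are rough assumptions added for illustration (a world population of about six billion at the time and roughly a million inhabitants of Cologne); they are not figures taken from Sikosek's own text.

```python
# Back-of-the-envelope check of the "about 180 speakers in Cologne" figure.
# Assumed inputs (illustrative only):
world_population = 6.0e9       # rough world population around 2000
esperanto_speakers = 1.0e6     # the hypothesis being tested
cologne_population = 1.07e6    # approximate population of Cologne

expected = esperanto_speakers / world_population * cologne_population
print(round(expected))  # ~178, on the order of the quoted 180
```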
Van Dijk finds only 30 fluent speakers in that city, and similarly smaller-than-expected figures in several other places thought to have a larger-than-average concentration of Esperanto speakers. He also notes that there are a total of about 20,000 members of the various Esperanto organizations (other estimates are higher). Though there are undoubtedly many Esperanto speakers who are not members of any Esperanto organization, he thinks it unlikely that there are fifty times more speakers than organization members.", "title": "Community" }, { "paragraph_id": 90, "text": "Finnish linguist Jouko Lindstedt, an expert on native-born Esperanto speakers, presented the following scheme to show the overall proportions of language capabilities within the Esperanto community:", "title": "Community" }, { "paragraph_id": 91, "text": "In 2017, doctoral student Svend Nielsen estimated around 63,000 Esperanto speakers worldwide, taking into account association memberships, user-generated data from Esperanto websites and census statistics. This number, however, was disputed by statistician Sten Johansson, who questioned the reliability of the source data and highlighted a wide margin of error, a point with which Nielsen agrees. Both have stated, however, that this new number is likely more realistic than some earlier projections.", "title": "Community" }, { "paragraph_id": 92, "text": "In the absence of Culbert's detailed sampling data, or any other census data, it is impossible to state the number of speakers with certainty. According to the website of the Universal Esperanto Association:", "title": "Community" }, { "paragraph_id": 93, "text": "Numbers of textbooks sold and membership of local societies put \"the number of people with some knowledge of the language in the hundreds of thousands and possibly millions\".", "title": "Community" }, { "paragraph_id": 94, "text": "Native Esperanto speakers (denaskuloj, lit. 'persons from birth') have learned the language from birth from Esperanto-speaking parents. This usually happens when Esperanto is the chief or only common language in an international family, but sometimes occurs in a family of Esperanto speakers who often use the language. As of 1996, according to Corsetti, there were approximately 350 attested cases of families with native Esperanto speakers (which means there were around 700 Esperanto-speaking natives in these families, not accounting for older native speakers). The 2022 edition of Ethnologue gives 1,000 L1 users, citing Corsetti et al. 2004.", "title": "Community" }, { "paragraph_id": 95, "text": "However, native speakers do not occupy an authoritative position in the Esperanto community, as they would in other language communities. This presents a challenge to linguists, whose usual source of grammaticality and meaning is native speakers.", "title": "Community" }, { "paragraph_id": 96, "text": "Esperantists can access an international culture, including a large body of original as well as translated literature. There are more than 25,000 Esperanto books, both originals and translations, as well as several regularly distributed Esperanto magazines. In 2013 a museum about Esperanto opened in China.
Esperantists use the language to arrange free accommodation with Esperantists in 92 countries through the Pasporta Servo, or to find pen pals through the Esperanto Koresponda Servo.", "title": "Community" }, { "paragraph_id": 97, "text": "Every year, Esperantists meet for the World Congress of Esperanto (Universala Kongreso de Esperanto).", "title": "Community" }, { "paragraph_id": 98, "text": "Historically, much music has been written in the language; some of it, such as that of the group Kaj Tiel Plu, is rooted in various folk traditions. There is also a variety of classical and semi-classical choral music, both original and translated, as well as large ensemble music that includes voices singing Esperanto texts. Lou Harrison, who incorporated styles and instruments from many world cultures in his music, used Esperanto titles and/or texts in several of his works, most notably La Koro-Sutro (1973). David Gaines used Esperanto poems as well as an excerpt from a speech by Zamenhof for his Symphony No. One (Esperanto) for mezzo-soprano and orchestra (1994–98). He wrote original Esperanto text for his Povas plori mi ne plu (I Can Cry No Longer) for unaccompanied SATB choir (1994).", "title": "Community" }, { "paragraph_id": 99, "text": "There are also shared traditions, such as Zamenhof Day, celebrated on December 15. Esperantists speak primarily in Esperanto at special conventions, such as the World Esperanto Congress.", "title": "Community" }, { "paragraph_id": 100, "text": "Proponents of Esperanto, such as Humphrey Tonkin, a professor at the University of Hartford, argue that Esperanto is \"culturally neutral by design, as it was intended to be a facilitator between cultures, not to be the carrier of any one national culture\". The late Scottish Esperanto author William Auld wrote extensively on the subject, arguing that Esperanto is \"the expression of a common human culture, unencumbered by national frontiers. Thus it is considered a culture on its own.\" Critics have argued that the language is Eurocentric, as it draws much of its vocabulary from European languages.", "title": "Community" }, { "paragraph_id": 101, "text": "Several Esperanto associations also advance Esperanto education, and aim to preserve its culture and heritage. Poland added Esperanto to its list of intangible cultural heritage in 2014.", "title": "Esperanto heritage" }, { "paragraph_id": 102, "text": "In the futuristic novel Lord of the World by Robert Hugh Benson, Esperanto is presented as the predominant language of the world, much as Latin is the language of the Church. A reference to Esperanto appears in the science-fiction story War with the Newts by Karel Čapek, published in 1936. As part of a passage on what language the salamander-looking creatures with human cognitive ability should learn, it is noted that \"...in the Reform schools, Esperanto was taught as the medium of communication.\" (p. 206).", "title": "Esperanto heritage" }, { "paragraph_id": 103, "text": "Esperanto has been used in many films and novels. Typically, this is done either to add the exotic flavour of a foreign language without representing any particular ethnicity, or to avoid going to the trouble of inventing a new language. The Charlie Chaplin film The Great Dictator (1940) showed Jewish ghetto shop signs in Esperanto. Two full-length feature films have been produced with dialogue entirely in Esperanto: Angoroj, in 1964, and Incubus, a 1965 B-movie horror film which is also notable for starring William Shatner shortly before he began working on Star Trek.
In Captain Fantastic (2016) there is dialogue in Esperanto. The 1994 film Street Fighter contains Esperanto dialogue spoken by the character Sagat. Finally, Mexican film director Alfonso Cuarón has publicly shown his fascination with Esperanto, going as far as naming his film production company Esperanto Filmoj (\"Esperanto Films\").", "title": "Esperanto heritage" }, { "paragraph_id": 104, "text": "In 1921 the French Academy of Sciences recommended using Esperanto for international scientific communication. A few scientists and mathematicians, such as Maurice Fréchet (mathematics), John C. Wells (linguistics), Helmar Frank (pedagogy and cybernetics), and Nobel laureate Reinhard Selten (economics), have published part of their work in Esperanto. Frank and Selten were among the founders of the International Academy of Sciences in San Marino, sometimes called the \"Esperanto University\", where Esperanto is the primary language of teaching and administration.", "title": "Esperanto heritage" }, { "paragraph_id": 105, "text": "A message in Esperanto was recorded and included in Voyager 1's Golden Record.", "title": "Esperanto heritage" }, { "paragraph_id": 106, "text": "Esperanto business groups have been active for many years. Research conducted in the 1920s by the French Chamber of Commerce and reported in The New York Times suggested that Esperanto seemed to be the best business language.", "title": "Esperanto heritage" }, { "paragraph_id": 107, "text": "The privacy-oriented cryptocurrency, Monero, takes its name from the Esperanto word for coin.", "title": "Esperanto heritage" }, { "paragraph_id": 108, "text": "Zamenhof had three goals, as he had already written in 1887: to create an easy language, to create a language ready to use \"whether the language be universally accepted or not\", and to find some means to get many people to learn the language. So Zamenhof's intention was not only to create an easy-to-learn language to foster peace and international understanding as a general language, but also to create a language for immediate use by a (small) language community. Esperanto was to serve as an international auxiliary language, that is, as a universal second language, not to replace ethnic languages. This goal was shared by Zamenhof and Esperanto speakers at the beginning of the movement. Later, Esperanto speakers began to see the language and the culture that had grown up around it as ends in themselves, even if Esperanto is never adopted by the United Nations or other international organizations.", "title": "Esperanto heritage" }, { "paragraph_id": 109, "text": "Esperanto speakers who want to see Esperanto adopted officially or on a large scale worldwide are commonly called finvenkistoj, from fina venko, meaning \"final victory\". There are two kinds of finvenkismo: desubismo aims to spread Esperanto among ordinary people (desube, from below) to form a steadily growing community of Esperanto speakers, while desuprismo aims to act from above (desupre), beginning with politicians. Zamenhof considered the first way more plausible, as \"for such affairs as ours, governments come with their approval and help usually only when everything is completely ready.\"", "title": "Esperanto heritage" }, { "paragraph_id": 110, "text": "Those who focus on the intrinsic value of the language are commonly called raŭmistoj, from Rauma, Finland, where a declaration on the short-term improbability of the fina venko and the value of Esperanto culture was made at the International Youth Congress in 1980.
However the \"Manifesto de Raŭmo\" clearly mentions the intention to further spread the language: \"We want to spread Esperanto to put into effect its positive values more and more, step by step\".", "title": "Esperanto heritage" }, { "paragraph_id": 111, "text": "In 1996 the Prague Manifesto was adopted at the annual congress of the Universal Esperanto Association (UEA); it was subscribed by individual participants and later by other Esperanto speakers. More recently, language-learning apps like Duolingo and Amikumu have helped to increase the amount of fluent speakers of Esperanto, and find others in their area to speak the language with.", "title": "Esperanto heritage" }, { "paragraph_id": 112, "text": "The earliest flag, and the one most commonly used today, features a green five-pointed star against a white canton, upon a field of green. It was proposed to Zamenhof by Richard Geoghegan, author of the first Esperanto textbook for English speakers, in 1887. The flag was approved in 1905 by delegates to the first conference of Esperantists at Boulogne-sur-Mer.", "title": "Esperanto heritage" }, { "paragraph_id": 113, "text": "The green star on white (la verda stelo) is also used by itself as a round (buttonhole, etc.) emblem by many esperantists, among other reasons to enhance their visibility outside the Esperanto world.", "title": "Esperanto heritage" }, { "paragraph_id": 114, "text": "A version with an E superimposed over the green star is sometimes seen. Other variants include that for Christian Esperantists, with a white Christian cross superimposed upon the green star, and that for Leftists, with the color of the field changed from green to red.", "title": "Esperanto heritage" }, { "paragraph_id": 115, "text": "In 1987, a second flag design was chosen in a contest organized by the UEA celebrating the first centennial of the language. It featured a white background with two stylised curved \"E\"s facing each other. Dubbed the jubilea simbolo (jubilee symbol), it attracted criticism from some Esperantists, who dubbed it the melono (melon) for its elliptical shape. It is still in use, though to a lesser degree than the traditional symbol, known as the verda stelo (green star).", "title": "Esperanto heritage" }, { "paragraph_id": 116, "text": "Esperanto has been placed in many proposed political situations. The most popular of these is the Europe–Democracy–Esperanto, which aims to establish Esperanto as the official language of the European Union. Grin's Report, published in 2005 by François Grin, found that the use of English as the lingua franca within the European Union costs billions annually and significantly benefits English-speaking countries financially. The report considered a scenario where Esperanto would be the lingua franca, and found that it would have many advantages, particularly economically speaking, as well as ideologically.", "title": "Esperanto heritage" }, { "paragraph_id": 117, "text": "Left-wing currents exist in the wider Esperanto world, mostly organized through the Sennacieca Asocio Tutmonda founded by French theorist Eugène Lanti. Other notable Esperanto socialists include Nikolai Nekrasov and Vladimir Varankin, both of whom were put to death in October 1938 during the Stalinist repressions. 
Nekrasov was accused of being \"an organizer and leader of a fascist, espionage, terrorist organization of Esperantists.\"", "title": "Esperanto heritage" }, { "paragraph_id": 118, "text": "The Oomoto religion encourages the use of Esperanto among its followers and includes Zamenhof as one of its deified spirits.", "title": "Esperanto heritage" }, { "paragraph_id": 119, "text": "The Baháʼí Faith encourages the use of an auxiliary international language. `Abdu'l-Bahá praised the ideal of Esperanto, and there was an affinity between Esperantists and Baháʼís during the late 19th century and early 20th century.", "title": "Esperanto heritage" }, { "paragraph_id": 120, "text": "On February 12, 1913, `Abdu'l-Bahá gave a talk to the Paris Esperanto Society, stating:", "title": "Esperanto heritage" }, { "paragraph_id": 121, "text": "Now, praise be to God that Dr. Zamenhof has invented the Esperanto language. It has all the potential qualities of becoming the international means of communication. All of us must be grateful and thankful to him for this noble effort; for in this way he has served his fellowmen well. With untiring effort and self-sacrifice on the part of its devotees Esperanto will become universal. Therefore every one of us must study this language and spread it as far as possible so that day by day it may receive a broader recognition, be accepted by all nations and governments of the world, and become a part of the curriculum in all the public schools. I hope that Esperanto will be adopted as the language of all the future international conferences and congresses, so that all people need acquire only two languages—one their own tongue and the other the international language. Then perfect union will be established between all the people of the world. Consider how difficult it is today to communicate with various nations. If one studies fifty languages one may yet travel through a country and not know the language. Therefore I hope that you will make the utmost effort, so that this language of Esperanto may be widely spread.", "title": "Esperanto heritage" }, { "paragraph_id": 122, "text": "Lidia Zamenhof, daughter of L. L. Zamenhof, became a Baháʼí around 1925. James Ferdinand Morton Jr., an early member of the Baháʼí Faith in Greater Boston, was vice-president of the Esperanto League for North America. Ehsan Yarshater, the founding editor of Encyclopædia Iranica, notes how as a child in Iran he learned Esperanto and that when his mother was visiting Haifa on a Baháʼí pilgrimage he wrote her a letter in Persian as well as Esperanto. At the request of 'Abdu’l-Baha, Agnes Baldwin Alexander became an early advocate of Esperanto and used it to spread the Baháʼí teachings at meetings and conferences in Japan.", "title": "Esperanto heritage" }, { "paragraph_id": 123, "text": "Today there exists an active sub-community of Baháʼí Esperantists and various volumes of Baháʼí literature have been translated into Esperanto. In 1973, the Baháʼí Esperanto-League for active Baháʼí supporters of Esperanto was founded.", "title": "Esperanto heritage" }, { "paragraph_id": 124, "text": "In 1908, spiritist Camilo Chaigneau wrote an article named \"Spiritism and Esperanto\" in the periodic La Vie d'Outre-Tombe recommending the use of Esperanto in a \"central magazine\" for all spiritists and esperantists. 
Esperanto then became actively promoted by spiritists, at least in Brazil, initially by Ismael Gomes Braga and František Lorenz; the latter is known in Brazil as Francisco Valdomiro Lorenz, and was a pioneer of both spiritist and Esperantist movements in this country. The Brazilian Spiritist Federation publishes Esperanto coursebooks, translations of Spiritism's basic books, and encourages Spiritists to become Esperantists.", "title": "Esperanto heritage" }, { "paragraph_id": 125, "text": "William T. Stead, a famous spiritualist and occultist in the United Kingdom, co-founded the first Esperanto club in the U.K.", "title": "Esperanto heritage" }, { "paragraph_id": 126, "text": "The Teozofia Esperanta Ligo (Theosophical Esperantist League) was formed in 1911, and the organization's journal, Espero Teozofia, was published from 1913 to 1928.", "title": "Esperanto heritage" }, { "paragraph_id": 127, "text": "The first translation of the Bible into Esperanto was a translation of the Tanakh (or Old Testament) done by L. L. Zamenhof. The translation was reviewed and compared with other languages' translations by a group of British clergy and scholars before its publication at the British and Foreign Bible Society in 1910. In 1926 this was published along with a New Testament translation, in an edition commonly called the \"Londona Biblio\". In the 1960s, the Internacia Asocio de Bibliistoj kaj Orientalistoj tried to organize a new, ecumenical Esperanto Bible version. Since then, the Dutch Remonstrant pastor Gerrit Berveling has translated the Deuterocanonical or apocryphal books, in addition to new translations of the Gospels, some of the New Testament epistles, and some books of the Tanakh. These have been published in various separate booklets, or serialized in Dia Regno, but the Deuterocanonical books have appeared in recent editions of the Londona Biblio.", "title": "Esperanto heritage" }, { "paragraph_id": 128, "text": "Christian Esperanto organizations and publications include:", "title": "Esperanto heritage" }, { "paragraph_id": 129, "text": "Ayatollah Khomeini of Iran called on Muslims to learn Esperanto and praised its use as a medium for better understanding among peoples of different religious backgrounds. After he suggested that Esperanto replace English as an international lingua franca, it began to be used in the seminaries of Qom. An Esperanto translation of the Qur'an was published by the state shortly thereafter.", "title": "Esperanto heritage" }, { "paragraph_id": 130, "text": "Though Esperanto itself has changed little since the publication of Fundamento de Esperanto (Foundation of Esperanto), a number of reform projects have been proposed over the years, starting with Zamenhof's proposals in 1894 and Ido in 1907. Several later constructed languages, such as Universal, Saussure, Romániço, Internasia, Esperanto sen Fleksio, and Mundolingvo, were all based on Esperanto.", "title": "Modifications" }, { "paragraph_id": 131, "text": "In modern times, conscious attempts have been made to eliminate perceived sexism in the language, such as Riism. Many words with ĥ now have alternative spellings with k and occasionally h, so that arĥitekto may also be spelled arkitekto; see Esperanto phonology for further details of ĥ replacement. Reforms aimed at altering country names have also resulted in a number of different options, either due to disputes over suffixes or Eurocentrism in naming various countries.", "title": "Modifications" }, { "paragraph_id": 132, "text": "J. R. R. 
Tolkien wrote in support of the language in a 1932 British Esperantist article, but criticised those who sought to adapt or \"tinker\" with the language, which, in his opinion, harmed unanimity and the goal of achieving wide acceptance.", "title": "Modifications" }, { "paragraph_id": 133, "text": "There have been numerous objections to Esperanto over the years. For example, there has been criticism that Esperanto is not neutral enough, but also that it should convey a specific culture, which would make it less neutral; that Esperanto does not draw on a wide enough selection of the world's languages, but also that it should be more narrowly European.", "title": "Criticism" }, { "paragraph_id": 134, "text": "Esperantists often argue for Esperanto as a culturally neutral means of communication. However, it is often accused of being Eurocentric. This is most often noted in regard to the vocabulary, which draws about three-quarters from Romance languages and the remainder primarily from Greek, English and German. Supporters have argued that the agglutinative grammar and verb regularity of Esperanto have more in common with Asian languages than with European ones. A 2010 linguistic typological study concluded that \"Esperanto is indeed somewhat European in character, but considerably less so than the European languages themselves.\"", "title": "Criticism" }, { "paragraph_id": 135, "text": "Esperanto is sometimes accused of being inherently sexist, because the default form of some nouns is used for descriptions of men while a derived form is used for women. This is said to retain traces of the male-dominated society of late 19th-century Europe of which Esperanto is a product. These nouns are primarily titles, such as baron/baroness, and kinship terms, such as sinjoro \"Mr, sir\" vs. sinjorino \"Ms, lady\" and patro \"father\" vs. patrino \"mother\". Before the movement toward equal rights for women, this also applied to professional roles assumed to be predominantly male, such as doktoro, a PhD doctor (male or unspecified), versus doktorino, a female PhD. This was analogous to the situation with the English suffix -ess, as in the words waiter/waitress, etc.", "title": "Criticism" }, { "paragraph_id": 136, "text": "On the other hand, the pronoun ĝi (\"it\") may be used generically to mean he/she/they; the pronoun li (\"he\") is always masculine and ŝi (\"she\") always feminine, despite some authors' arguments. A gender-neutral singular pronoun ri has gradually become more widely used in recent years, although it is not currently universal. The plural pronoun ili (\"they\") is always neutral, as are nouns with the prefix ge-, such as gesinjoroj (equivalent to sinjoro kaj sinjorino \"Mr. and Ms.\") and gepatroj \"parents\" (equivalent to patro kaj patrino \"father and mother\").", "title": "Criticism" }, { "paragraph_id": 137, "text": "Speakers of languages without grammatical case or adjectival agreement frequently criticise these aspects of Esperanto. In addition, in the past some people found the Classical Greek forms of the plural (nouns in -oj, adjectives in -aj) to be awkward, proposing instead that Italian -i be used for nouns, and that no plural be used for adjectives. These suggestions were adopted by the Ido reform. A reply to that criticism is that the presence of an accusative case allows much freedom in word order, e.g.
for emphasis (\"Johano batis Petron\", John hit Peter; \"Petron batis Johano\", it is Peter whom John hit), that its absence in the \"predicate of the object\" avoids ambiguity (\"Mi vidis la blankan domon\", I saw the white house; \"Mi vidis la domon blanka\", the house seemed white to me) and that adjective agreement allows, among others, the use of hyperbaton in poetry (as in Latin, cf. Virgil's Eclogue 1:1 Tityre, tu patulæ recubans sub tegmine fagi… where \"patulæ\" (spread out) is epithet to \"fagi\" (beech) and their agreement in the genitive feminine binds them notwithstanding their distance in the verse).", "title": "Criticism" }, { "paragraph_id": 138, "text": "The Esperanto alphabet uses two diacritics: the circumflex and the breve. The alphabet was designed with a French typewriter in mind, and although modern computers support Unicode, entering the letters with diacritic marks can be more or less problematic with certain operating systems or hardware. One of the first reform proposals (for Esperanto 1894) sought to do away with these marks and the language Ido went back to the basic Latin alphabet.", "title": "Criticism" }, { "paragraph_id": 139, "text": "One common criticism is that Esperanto has failed to live up to the hopes of its creator, who dreamed of it becoming a universal second language. Because people were reluctant to learn a new language which hardly anyone spoke, Zamenhof asked people to sign a promise to start learning Esperanto once ten million people made the same promise. He \"was disappointed to receive only a thousand responses.\"", "title": "Criticism" }, { "paragraph_id": 140, "text": "However, Zamenhof had the goal to \"enable the learner to make direct use of his knowledge with persons of any nationality, whether the language be universally accepted or not\", as he wrote in 1887. The language is currently spoken by people living in more than 100 countries; there are about 2,000 native Esperanto speakers and probably up to 100,000 people who use the language regularly.", "title": "Criticism" }, { "paragraph_id": 141, "text": "In this regard, Zamenhof was well aware that it might take much time for Esperanto to achieve his desired goals. In his speech at the 1907 World Esperanto Congress in Cambridge he said, \"we hope that earlier or later, maybe after many centuries, on a neutral language foundation, understanding one another, the nations will build ... a big family circle.\"", "title": "Criticism" }, { "paragraph_id": 142, "text": "The poet Wisława Szymborska expressed doubt that Esperanto could \"produce works of lasting value,\" saying it is \"an artificial language without variety or dialects\" and that \"no one thinks in Esperanto.\" Esperantists have replied that \"lasting value\" is a statement of opinion, that Esperanto grew \"naturally\" by the actions of its speakers on Zamenhof's intentionally elementary Fundamento, and that people do think in Esperanto.", "title": "Criticism" }, { "paragraph_id": 143, "text": "There are some geographical and astronomical features named after Esperanto, or after its creator L. L. Zamenhof. These include Esperanto Island in Antarctica, and the asteroids 1421 Esperanto and 1462 Zamenhof discovered by Finnish astronomer and Esperantist Yrjö Väisälä.", "title": "Eponymous entities" }, { "paragraph_id": 144, "text": "(...) 
ni esperas, ke pli aŭ malpli frue, eble post multaj jarcentoj, / Sur neŭtrala lingva fundamento, / Komprenante unu la alian, / La popoloj faros en konsento / Unu grandan rondon familian.", "title": "References" } ]
Esperanto is the world's most widely spoken constructed international auxiliary language. Created by the Warsaw-based ophthalmologist L. L. Zamenhof in 1887, it is intended to be a universal second language for international communication, or "the international language". Zamenhof first described the language in Dr. Esperanto's International Language, which he published under the pseudonym Doktoro Esperanto. Early adopters of the language liked the name Esperanto and soon used it to describe his language. The word esperanto translates into English as "one who hopes". Within the range of constructed languages, Esperanto occupies a middle ground between "naturalistic" and a priori. Esperanto's vocabulary, syntax and semantics derive predominantly from languages of the Indo-European group. The vocabulary derives primarily from Romance languages, with substantial contributions from Germanic languages. One of the language's most notable features is its extensive system of derivation, where prefixes and suffixes may be freely combined with roots to generate words, making it possible to communicate effectively with a smaller set of words. Esperanto is the most successful constructed international auxiliary language, and the only such language with a sizeable population of native speakers, of which there are perhaps several thousand. Usage estimates are difficult, but two estimates put the number of people who know how to speak Esperanto at around 100,000. Concentration of speakers is highest in Europe, East Asia, and South America. Although no country has adopted Esperanto officially, Esperantujo ("Esperanto-land") is used as a name for the collection of places where it is spoken. The language has also gained a noticeable presence on the internet in recent years, as it became increasingly accessible on platforms such as Duolingo, Wikipedia, Amikumu and Google Translate. Esperanto speakers are often called "Esperantists" (Esperantistoj).
Engineering
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. Modern engineering comprises many subfields which include designing and improving infrastructure, machinery, vehicles, electronics, materials, and energy systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering. The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise". The American Engineers' Council for Professional Development (ECPD, the predecessor of ABET) has defined "engineering" as: The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property. Engineering has existed since ancient times, when humans devised inventions such as the wedge, the lever, the wheel, and the pulley. The term engineering is derived from the word engineer, which itself dates back to the 14th century, when an engine'er (literally, one who builds or operates a siege engine) referred to "a constructor of military engines". In this context, now obsolete, an "engine" referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult). Notable examples of the obsolete usage which have survived to the present day are military engineering corps, e.g., the U.S. Army Corps of Engineers. The word "engine" itself is of even older origin, ultimately deriving from the Latin ingenium (c. 1250), meaning "innate quality, especially mental power, hence a clever invention." Later, as the design of civilian structures, such as bridges and buildings, matured as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the discipline of military engineering. The pyramids in ancient Egypt, ziggurats of Mesopotamia, the Acropolis and Parthenon in Greece, the Roman aqueducts, Via Appia and Colosseum, Teotihuacán, and the Brihadeeswarar Temple of Thanjavur, among many others, stand as a testament to the ingenuity and skill of ancient civil and military engineers. Other monuments, no longer standing, such as the Hanging Gardens of Babylon and the Pharos of Alexandria, were important engineering achievements of their time and were considered among the Seven Wonders of the Ancient World. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia c.
3000 BC, and then in ancient Egyptian technology c. 2000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, and to ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza. The earliest civil engineer known by name is Imhotep. As one of the officials of the Pharaoh, Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser (the Step Pyramid) at Saqqara in Egypt around 2630–2611 BC. The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. Kush developed the Sakia during the 4th century BC, which relied on animal power instead of human energy. Hafirs were developed as a type of reservoir in Kush to store and contain water as well as boost irrigation. Sappers were employed to build causeways during military campaigns. Kushite ancestors built speos during the Bronze Age between 3700 and 3250 BC. Bloomeries and blast furnaces were also created in Kush by the 7th century BC. Ancient Greece developed machines in both civilian and military domains. The Antikythera mechanism, the earliest known mechanical analog computer, and the mechanical inventions of Archimedes are examples of Greek mechanical engineering. Some of Archimedes' inventions, as well as the Antikythera mechanism, required sophisticated knowledge of differential gearing or epicyclic gearing, two key principles in machine theory that helped design the gear trains of the Industrial Revolution and are widely used in fields such as robotics and automotive engineering. Ancient Chinese, Greek, Roman and Hunnic armies employed military machines and inventions such as artillery, which was developed by the Greeks around the 4th century BC, the trireme, the ballista and the catapult. In the Middle Ages, the trebuchet was developed. The earliest practical wind-powered machines, the windmill and wind pump, first appeared in the Muslim world during the Islamic Golden Age, in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. The earliest practical steam-powered machine was a steam jack driven by a steam turbine, described in 1551 by Taqi al-Din Muhammad ibn Ma'ruf in Ottoman Egypt. The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century, both of which were fundamental to the growth of the cotton industry. The spinning wheel was also a precursor to the spinning jenny, which was a key development during the early Industrial Revolution in the 18th century. The earliest programmable machines were developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, which could be made to play different rhythms and different drum patterns.
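The peg-drum at the heart of Al-Jazari's design maps naturally onto a modern data structure: a grid of on/off slots per instrument, replayed in a loop, where rearranging the pegs reprograms the rhythm. The short Python sketch below is a modern illustration of that concept only, not a reconstruction of the historical device; the instrument names and pattern are invented for the example.

# A peg-drum rhythm as a boolean grid: each row is an instrument,
# each column a time step, and True marks a peg (a strike).
# Moving the pegs -- editing the lists -- reprograms the rhythm.
# The pattern itself is an arbitrary illustration.

PATTERN = {
    "large drum": [True, False, False, False, True, False, False, False],
    "small drum": [False, False, True, False, False, False, True, True],
}

def play(pattern):
    """Step through the grid once, printing which instruments strike."""
    steps = len(next(iter(pattern.values())))
    for step in range(steps):
        hits = [name for name, row in pattern.items() if row[step]]
        print(f"step {step}: {' + '.join(hits) if hits else 'rest'}")

play(PATTERN)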
Before the development of modern engineering, mathematics was used by artisans and craftsmen, such as millwrights, clockmakers, instrument makers and surveyors. Aside from these professions, universities were not believed to have had much practical significance to technology. A standard reference for the state of the mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining, and chemistry. De re metallica remained the standard chemistry reference for the next 180 years. The science of classical mechanics, sometimes called Newtonian mechanics, formed the scientific basis of much of modern engineering. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering. Canal building was an important engineering work during the early phases of the Industrial Revolution. John Smeaton was the first self-proclaimed civil engineer and is often regarded as the "father" of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbors, and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Using a model water wheel, Smeaton conducted experiments for seven years, determining ways to increase efficiency. Smeaton introduced iron axles and gears to water wheels, and also made mechanical improvements to the Newcomen steam engine. Smeaton designed the third Eddystone Lighthouse (1755–59), where he pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. He is important in the history, rediscovery, and development of modern cement because he identified the compositional requirements needed to obtain "hydraulicity" in lime, work which led ultimately to the invention of Portland cement. Applied science led to the development of the steam engine. The sequence of events began with the invention of the barometer and the measurement of atmospheric pressure by Evangelista Torricelli in 1643, the demonstration of the force of atmospheric pressure by Otto von Guericke using the Magdeburg hemispheres in 1656, and the laboratory experiments of Denis Papin, who built experimental model steam engines, demonstrated the use of a piston, and published his work in 1707. Edward Somerset, 2nd Marquess of Worcester, published a book of 100 inventions containing a method for raising waters similar to a coffee percolator. Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordnance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called "The Miner's Friend", which employed both vacuum and pressure. Iron merchant Thomas Newcomen, who built the first commercial piston steam engine in 1712, was not known to have any scientific training. The application of steam-powered cast iron blowing cylinders for providing pressurized air for blast furnaces led to a large increase in iron production in the late 18th century. The higher furnace temperatures made possible with steam-powered blast allowed for the use of more lime in blast furnaces, which enabled the transition from charcoal to coke.
These innovations lowered the cost of iron, making horse railways and iron bridges practical. The puddling process, patented by Henry Cort in 1784, produced large-scale quantities of wrought iron. Hot blast, patented by James Beaumont Neilson in 1828, greatly lowered the amount of fuel needed to smelt iron. With the development of the high-pressure steam engine, the power-to-weight ratio of steam engines made practical steamboats and locomotives possible. New steel-making processes, such as the Bessemer process and the open hearth furnace, ushered in an era of heavy engineering in the late 19th century. One of the most famous engineers of the mid-19th century was Isambard Kingdom Brunel, who built railroads, dockyards and steamships. The Industrial Revolution created a demand for machinery with metal parts, which led to the development of several machine tools. Boring cast iron cylinders with precision was not possible until John Wilkinson invented his boring machine, which is considered the first machine tool. Other machine tools included the screw-cutting lathe, milling machine, turret lathe and the metal planer. Precision machining techniques were developed in the first half of the 19th century. These included the use of jigs to guide the machining tool over the work and fixtures to hold the work in the proper position. Machine tools and machining techniques capable of producing interchangeable parts led to large-scale factory production by the late 19th century. The United States Census of 1850 listed the occupation of "engineer" for the first time, with a count of 2,000. There were fewer than 50 engineering graduates in the U.S. before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, with that number increasing to 43 per year in 1875. In 1890, there were 6,000 engineers in civil, mining, mechanical and electrical fields. There was no chair of applied mechanism and applied mechanics at Cambridge until 1875, and no chair of engineering at Oxford until 1907. Germany established technical universities earlier. The foundations of electrical engineering in the 1800s included the experiments of Alessandro Volta, Michael Faraday, Georg Ohm and others, and the invention of the electric telegraph in 1816 and the electric motor in 1872. The theoretical work of James Clerk Maxwell (see: Maxwell's equations) and Heinrich Hertz in the late 19th century gave rise to the field of electronics. The later inventions of the vacuum tube and the transistor further accelerated the development of electronics to such an extent that electrical and electronics engineers currently outnumber their colleagues of any other engineering specialty. Chemical engineering developed in the late nineteenth century. Industrial-scale manufacturing demanded new materials and new processes, and by 1880 the need for large-scale production of chemicals was such that a new industry was created, dedicated to the development and large-scale manufacturing of chemicals in new industrial plants. The role of the chemical engineer was the design of these chemical plants and processes. Aeronautical engineering deals with aircraft design and the aircraft design process, while aerospace engineering is a more modern term that expands the reach of the discipline by including spacecraft design. Its origins can be traced back to the aviation pioneers around the start of the 20th century, although the work of Sir George Cayley has recently been dated as being from the last decade of the 18th century.
Early knowledge of aeronautical engineering was largely empirical, with some concepts and skills imported from other branches of engineering. The first PhD in engineering (technically, applied science and engineering) awarded in the United States went to Josiah Willard Gibbs at Yale University in 1863; it was also the second PhD awarded in science in the U.S. Only a decade after the successful flights by the Wright brothers, there was extensive development of aeronautical engineering through the development of military aircraft that were used in World War I. Meanwhile, research to provide fundamental background science continued by combining theoretical physics with experiments. Engineering is a broad discipline that is often broken down into several sub-disciplines. Although an engineer will usually be trained in a specific discipline, he or she may become multi-disciplined through experience. Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering. Chemical engineering is the application of physics, chemistry, biology, and engineering principles in order to carry out chemical processes on a commercial scale, such as the manufacture of commodity chemicals, specialty chemicals, petroleum refining, microfabrication, fermentation, and biomolecule production. Civil engineering is the design and construction of public and private works, such as infrastructure (airports, roads, railways, water supply and treatment, etc.), bridges, tunnels, dams, and buildings. Civil engineering is traditionally broken into a number of sub-disciplines, including structural engineering, environmental engineering, and surveying. It is traditionally considered to be separate from military engineering. Electrical engineering is the design, study, and manufacture of various electrical and electronic systems, such as broadcast engineering, electrical circuits, generators, motors, electromagnetic/electromechanical devices, electronic devices, electronic circuits, optical fibers, optoelectronic devices, computer systems, telecommunications, instrumentation, control systems, and electronics. Mechanical engineering is the design and manufacture of physical or mechanical systems, such as power and energy systems, aerospace/aircraft products, weapon systems, transportation products, engines, compressors, powertrains, kinematic chains, vacuum technology, vibration isolation equipment, manufacturing, robotics, turbines, audio equipment, and mechatronics. Bioengineering is the engineering of biological systems for a useful purpose. Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs. Interdisciplinary engineering draws from more than one of the principal branches of the practice. Historically, naval engineering and mining engineering were major branches. Other engineering fields are manufacturing engineering, acoustical engineering, corrosion engineering, instrumentation and control, aerospace, automotive, computer, electronic, information engineering, petroleum, environmental, systems, audio, software, architectural, agricultural, biosystems, biomedical, geological, textile, industrial, materials, and nuclear engineering. These and other branches of engineering are represented in the 36 licensed member institutions of the UK Engineering Council.
New specialties sometimes combine with the traditional fields and form new branches – for example, Earth systems engineering and management involves a wide range of subject areas including engineering studies, environmental science, engineering ethics and philosophy of engineering. Aerospace engineering covers the design, development, manufacture and operational behaviour of aircraft, satellites and rockets. Marine engineering covers the design, development, manufacture and operational behaviour of watercraft and stationary structures like oil platforms and ports. Computer engineering (CE) is a branch of engineering that integrates several fields of computer science and electronic engineering required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware–software integration, rather than only software engineering or electronic engineering. Geological engineering is associated with anything constructed on or within the Earth. This discipline applies geological sciences and engineering principles to direct or support the work of other disciplines such as civil engineering, environmental engineering, and mining engineering. Geological engineers are involved with impact studies for facilities and operations that affect surface and subsurface environments, such as rock excavations (e.g. tunnels), building foundation consolidation, slope and fill stabilization, landslide risk assessment, groundwater monitoring, groundwater remediation, mining excavations, and natural resource exploration. One who practices engineering is called an engineer, and those licensed to do so may have more formal designations such as Professional Engineer, Chartered Engineer, Incorporated Engineer, Ingenieur, European Engineer, or Designated Engineering Representative. In the engineering design process, engineers apply mathematics and sciences such as physics to find novel solutions to problems or to improve existing solutions. Engineers need proficient knowledge of relevant sciences for their design projects. As a result, many engineers continue to learn new material throughout their careers. If multiple solutions exist, engineers weigh each design choice on its merits and choose the solution that best matches the requirements. The task of the engineer is to identify, understand, and interpret the constraints on a design in order to yield a successful result. It is generally insufficient to build a technically successful product; rather, it must also meet further requirements. Constraints may include available resources; physical, imaginative or technical limitations; flexibility for future modifications and additions; and other factors, such as requirements for cost, safety, marketability, productivity, and serviceability. By understanding the constraints, engineers derive specifications for the limits within which a viable object or system may be produced and operated. Engineers use their knowledge of science, mathematics, logic, economics, and appropriate experience or tacit knowledge to find suitable solutions to a particular problem. Creating an appropriate mathematical model of a problem often allows them to analyze it (sometimes definitively) and to test potential solutions. More than one solution to a design problem usually exists, so the different design choices have to be evaluated on their merits before the one judged most suitable is chosen; a simple weighting scheme of the kind sketched below is one common way to make that comparison explicit.
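Here is a minimal sketch of such a weighted decision matrix in Python. The criteria, weights, and candidate designs are invented purely for illustration and do not come from any source cited here.

# Weighted decision matrix: score each candidate design against each
# criterion (higher is better), weight the scores, and rank the totals.
# All numbers are illustrative assumptions.

CRITERIA = {"cost": 0.40, "safety": 0.35, "serviceability": 0.25}

CANDIDATES = {
    "design_a": {"cost": 7, "safety": 9, "serviceability": 6},
    "design_b": {"cost": 9, "safety": 6, "serviceability": 8},
}

def total_score(scores):
    """Weighted sum of one candidate's per-criterion scores."""
    return sum(weight * scores[name] for name, weight in CRITERIA.items())

for name in sorted(CANDIDATES, key=lambda c: total_score(CANDIDATES[c]), reverse=True):
    print(f"{name}: {total_score(CANDIDATES[name]):.2f}")
# Prints design_b (7.70) ahead of design_a (7.45) under these weights.

Changing the weights can reverse the ranking, which is the point of the exercise: the matrix makes the trade-off between requirements explicit rather than leaving it implicit.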
Genrich Altshuller, after gathering statistics on a large number of patents, suggested that compromises are at the heart of "low-level" engineering designs, while at a higher level the best design is one which eliminates the core contradiction causing the problem. Engineers typically attempt to predict how well their designs will perform to their specifications prior to full-scale production. They use, among other things: prototypes, scale models, simulations, destructive tests, nondestructive tests, and stress tests. Testing ensures that products will perform as expected, but only insofar as the testing has been representative of use in service. For products, such as aircraft, that are used differently by different users, failures and unexpected shortcomings (and necessary design changes) can be expected throughout the operational life of the product. Engineers take on the responsibility of producing designs that will perform as well as expected and, except for those employed in specific areas of the arms industry, will not harm people. Engineers typically include a factor of safety in their designs to reduce the risk of unexpected failure. The study of failed products is known as forensic engineering. It attempts to identify the cause of failure to allow a redesign of the product and so prevent a recurrence. Careful analysis is needed to establish the cause of failure of a product. The consequences of a failure may vary in severity from the minor cost of a machine breakdown to large loss of life in the case of accidents involving aircraft and large stationary structures like buildings and dams. As with all modern scientific and technological endeavors, computers and software play an increasingly important role. As well as the typical business application software, there are a number of computer-aided applications (computer-aided technologies) specifically for engineering. Computers can be used to generate models of fundamental physical processes, which can be solved using numerical methods. One of the most widely used design tools in the profession is computer-aided design (CAD) software. It enables engineers to create 3D models, 2D drawings, and schematics of their designs. CAD together with digital mockup (DMU) and CAE software such as finite element method analysis or the analytic element method allows engineers to create models of designs that can be analyzed without having to make expensive and time-consuming physical prototypes. These allow products and components to be checked for flaws, assessed for fit and assembly, studied for ergonomics, and analyzed for static and dynamic characteristics of systems such as stresses, temperatures, electromagnetic emissions, electrical currents and voltages, digital logic levels, fluid flows, and kinematics. Access to and distribution of all this information is generally organized with the use of product data management software. There are also many tools to support specific engineering tasks, such as computer-aided manufacturing (CAM) software to generate CNC machining instructions; manufacturing process management software for production engineering; EDA for printed circuit board (PCB) and circuit schematics for electronic engineers; MRO applications for maintenance management; and architecture, engineering and construction (AEC) software for civil engineering. In recent years the use of computer software to aid the development of goods has collectively come to be known as product lifecycle management (PLM).
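To make the factor-of-safety idea from this section concrete, the sketch below checks a circular rod in axial tension against its material's yield strength. The load, rod diameter, and yield strength are assumed values chosen for illustration, not figures from this article.

import math

# Factor of safety for a circular rod in axial tension:
# working stress = force / cross-sectional area,
# factor of safety = material yield strength / working stress.
# All inputs below are illustrative assumptions.

def axial_stress(force_n, diameter_m):
    """Working stress (Pa) in a circular rod under axial load."""
    area = math.pi * diameter_m ** 2 / 4
    return force_n / area

def factor_of_safety(yield_strength_pa, working_stress_pa):
    """Classic definition: strength divided by applied stress."""
    return yield_strength_pa / working_stress_pa

stress = axial_stress(force_n=10_000.0, diameter_m=0.02)   # 10 kN on a 20 mm rod
fos = factor_of_safety(yield_strength_pa=250e6, working_stress_pa=stress)
print(f"working stress = {stress / 1e6:.1f} MPa, factor of safety = {fos:.1f}")
# working stress = 31.8 MPa, factor of safety = 7.9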
The engineering profession engages in a wide range of activities, from large collaborations at the societal level to smaller individual projects. Almost all engineering projects are accountable to some sort of funding source: a company, a set of investors, or a government. The few types of engineering that are minimally constrained by such issues are pro bono engineering and open-design engineering. By its very nature engineering has interconnections with society, culture and human behavior. Every product or construction used by modern society is influenced by engineering. The results of engineering activity influence changes to the environment, society and economies, and its application brings with it a responsibility for public safety. Engineering projects can be subject to controversy. Examples from different engineering disciplines include the development of nuclear weapons, the Three Gorges Dam, the design and use of sport utility vehicles and the extraction of oil. In response, some Western engineering companies have enacted serious corporate and social responsibility policies. Engineering is a key driver of innovation and human development. Sub-Saharan Africa, in particular, has a very small engineering capacity, which results in many African nations being unable to develop crucial infrastructure without outside aid. The attainment of many of the Millennium Development Goals requires the achievement of sufficient engineering capacity to develop infrastructure and sustainable technological development. All overseas development and relief NGOs make considerable use of engineers to apply solutions in disaster and development scenarios. A number of charitable organizations aim to use engineering directly for the good of mankind. Engineering companies in many established economies are facing significant challenges with regard to the number of professional engineers being trained compared with the number retiring. This problem is very prominent in the UK, where engineering has a poor image and low status. There are many negative economic and political issues that this can cause, as well as ethical issues. It is widely agreed that the engineering profession faces an "image crisis", rather than being fundamentally an unattractive career. Much work is needed to avoid huge problems in the UK and other Western economies. Still, the UK, together with the United States, remains home to more engineering companies than other European countries. Many engineering societies have established codes of practice and codes of ethics to guide members and inform the public at large. The National Society of Professional Engineers code of ethics states: Engineering is an important and learned profession. As members of this profession, engineers are expected to exhibit the highest standards of honesty and integrity. Engineering has a direct and vital impact on the quality of life for all people. Accordingly, the services provided by engineers require honesty, impartiality, fairness, and equity, and must be dedicated to the protection of the public health, safety, and welfare. Engineers must perform under a standard of professional behavior that requires adherence to the highest principles of ethical conduct. In Canada, many engineers wear the Iron Ring as a symbol and reminder of the obligations and ethics associated with their profession. Scientists study the world as it is; engineers create the world that has never been.
There exists an overlap between the sciences and engineering practice; in engineering, one applies science. Both areas of endeavor rely on accurate observation of materials and phenomena. Both use mathematics and classification criteria to analyze and communicate observations. Scientists may also have to complete engineering tasks, such as designing experimental apparatus or building prototypes. Conversely, in the process of developing technology, engineers sometimes find themselves exploring new phenomena, thus becoming, for the moment, scientists or, more precisely, "engineering scientists". In the book What Engineers Know and How They Know It, Walter Vincenti asserts that engineering research has a character different from that of scientific research. First, it often deals with areas in which the basic physics or chemistry are well understood, but the problems themselves are too complex to solve in an exact manner. There is a "real and important" difference between engineering and physics, similar to the difference between any science and the technology associated with it. Physics is an exploratory science that seeks knowledge of principles, while engineering uses knowledge for practical applications of principles. The former expresses an understanding as a mathematical principle, while the latter measures the variables involved and creates technology. For technology, physics is auxiliary; in a way, technology can be considered applied physics. Though physics and engineering are interrelated, it does not mean that a physicist is trained to do an engineer's job; a physicist would typically require additional and relevant training. Physicists and engineers engage in different lines of work. However, PhD physicists who specialize in engineering physics or applied physics may hold titles such as technology officer, R&D engineer, or systems engineer. An example of such complex problems is the use of numerical approximations to the Navier–Stokes equations to describe aerodynamic flow over an aircraft, or the use of the finite element method to calculate the stresses in complex components. Second, engineering research employs many semi-empirical methods that are foreign to pure scientific research, one example being the method of parameter variation. As stated by Fung et al. in the revision to the classic engineering text Foundations of Solid Mechanics: Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress innovation and invention. To embody an invention the engineer must put his idea in concrete terms, and design something that people can use. That something can be a complex system, device, a gadget, a material, a method, a computing program, an innovative experiment, a new solution to a problem, or an improvement on what already exists. Since a design has to be realistic and functional, it must have its geometry, dimensions, and characteristics data defined. In the past, engineers working on new designs found that they did not have all the required information to make design decisions. Most often, they were limited by insufficient scientific knowledge. Thus they studied mathematics, physics, chemistry, biology and mechanics. Often they had to add to the sciences relevant to their profession. Thus engineering sciences were born.
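For reference, the incompressible Navier–Stokes equations mentioned above can be written in a standard form, where u is the velocity field, p the pressure, ρ the density, μ the dynamic viscosity, and f a body force per unit volume; their analytical intractability for realistic geometries is precisely why aerodynamicists resort to numerical approximation:

% Incompressible Navier–Stokes: momentum balance and mass conservation.
\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = -\nabla p + \mu\,\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0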
Although engineering solutions make use of scientific principles, engineers must also take into account safety, efficiency, economy, reliability, and constructability or ease of fabrication, as well as environmental, ethical and legal considerations such as patent infringement or liability in the case of failure of the solution. The study of the human body, albeit from different directions and for different purposes, is an important common link between medicine and some engineering disciplines. Medicine aims to sustain, repair, enhance and even replace functions of the human body, if necessary, through the use of technology. Modern medicine can replace several of the body's functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as brain implants and pacemakers. The fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems. Conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. This has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. There are also substantial interdisciplinary interactions between engineering and medicine. Both fields provide solutions to real-world problems. This often requires moving forward before phenomena are completely understood in a more rigorous scientific sense; therefore, experimentation and empirical knowledge are an integral part of both. Medicine, in part, studies the function of the human body. The human body, as a biological machine, has many functions that can be modeled using engineering methods. The heart, for example, functions much like a pump; the skeleton is like a linked structure with levers; the brain produces electrical signals, etc. These similarities, as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering, which uses concepts developed in both disciplines. Newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems. There are connections between engineering and art, for example, architecture, landscape architecture and industrial design (even to the extent that these disciplines may sometimes be included in a university's Faculty of Engineering). The Art Institute of Chicago, for instance, held an exhibition about the art of NASA's aerospace design. Robert Maillart's bridge design is perceived by some to have been deliberately artistic. At the University of South Florida, an engineering professor, through a grant with the National Science Foundation, has developed a course that connects art and engineering. Among famous historical figures, Leonardo da Vinci is a well-known Renaissance artist and engineer, and a prime example of the nexus between art and engineering. Business engineering deals with the relationship between professional engineering, IT systems, business administration and change management. Engineering management or "management engineering" is a specialized field of management concerned with engineering practice or the engineering industry sector.
The demand for management-focused engineers (or, from the opposite perspective, managers with an understanding of engineering) has resulted in the development of specialized engineering management degrees that develop the knowledge and skills needed for these roles. During an engineering management course, students will develop industrial engineering skills and expertise, alongside knowledge of business administration, management techniques, and strategic thinking. Engineers specializing in change management must have in-depth knowledge of the application of industrial and organizational psychology principles and methods. Professional engineers often train as certified management consultants in the very specialized field of management consulting applied to engineering practice or the engineering sector. This work often deals with large-scale complex business transformation or business process management initiatives in aerospace and defence, automotive, oil and gas, machinery, pharmaceutical, food and beverage, electrical & electronics, power distribution & generation, utilities and transportation systems. This combination of technical engineering practice, management consulting practice, industry sector knowledge, and change management expertise enables professional engineers who are also qualified as management consultants to lead major business transformation initiatives. These initiatives are typically sponsored by C-level executives. In political science, the term engineering has been borrowed for the study of the subjects of social engineering and political engineering, which deal with forming political and social structures using engineering methodology coupled with political science principles. Marketing engineering and financial engineering have similarly borrowed the term.
[ { "paragraph_id": 0, "text": "Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. Modern engineering comprises many subfields which include designing and improving infrastructure, machinery, vehicles, electronics, materials, and energy systems.", "title": "" }, { "paragraph_id": 1, "text": "The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering.", "title": "" }, { "paragraph_id": 2, "text": "The term engineering is derived from the Latin ingenium, meaning \"cleverness\" and ingeniare, meaning \"to contrive, devise\".", "title": "" }, { "paragraph_id": 3, "text": "The American Engineers' Council for Professional Development (ECPD, the predecessor of ABET) has defined \"engineering\" as:", "title": "Definition" }, { "paragraph_id": 4, "text": "The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.", "title": "Definition" }, { "paragraph_id": 5, "text": "Engineering has existed since ancient times, when humans devised inventions such as the wedge, lever, wheel and pulley, etc.", "title": "History" }, { "paragraph_id": 6, "text": "The term engineering is derived from the word engineer, which itself dates back to the 14th century when an engine'er (literally, one who builds or operates a siege engine) referred to \"a constructor of military engines\". In this context, now obsolete, an \"engine\" referred to a military machine, i.e., a mechanical contraption used in war (for example, a catapult). Notable examples of the obsolete usage which have survived to the present day are military engineering corps, e.g., the U.S. Army Corps of Engineers.", "title": "History" }, { "paragraph_id": 7, "text": "The word \"engine\" itself is of even older origin, ultimately deriving from the Latin ingenium (c. 1250), meaning \"innate quality, especially mental power, hence a clever invention.\"", "title": "History" }, { "paragraph_id": 8, "text": "Later, as the design of civilian structures, such as bridges and buildings, matured as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the discipline of military engineering.", "title": "History" }, { "paragraph_id": 9, "text": "The pyramids in ancient Egypt, ziggurats of Mesopotamia, the Acropolis and Parthenon in Greece, the Roman aqueducts, Via Appia and Colosseum, Teotihuacán, and the Brihadeeswarar Temple of Thanjavur, among many others, stand as a testament to the ingenuity and skill of ancient civil and military engineers. 
Other monuments, no longer standing, such as the Hanging Gardens of Babylon and the Pharos of Alexandria, were important engineering achievements of their time and were considered among the Seven Wonders of the Ancient World.", "title": "History" }, { "paragraph_id": 10, "text": "The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) were known since prehistoric times. The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia c. 3000 BC, and then in ancient Egyptian technology c. 2000 BC. The earliest evidence of pulleys date back to Mesopotamia in the early 2nd millennium BC, and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). The screw, the last of the simple machines to be invented, first appeared in Mesopotamia during the Neo-Assyrian period (911–609) BC. The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza.", "title": "History" }, { "paragraph_id": 11, "text": "The earliest civil engineer known by name is Imhotep. As one of the officials of the Pharaoh, Djosèr, he probably designed and supervised the construction of the Pyramid of Djoser (the Step Pyramid) at Saqqara in Egypt around 2630–2611 BC. The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC.", "title": "History" }, { "paragraph_id": 12, "text": "Kush developed the Sakia during the 4th century BC, which relied on animal power instead of human energy.Hafirs were developed as a type of reservoir in Kush to store and contain water as well as boost irrigation. Sappers were employed to build causeways during military campaigns. Kushite ancestors built speos during the Bronze Age between 3700 and 3250 BC.Bloomeries and blast furnaces were also created during the 7th centuries BC in Kush.", "title": "History" }, { "paragraph_id": 13, "text": "Ancient Greece developed machines in both civilian and military domains. The Antikythera mechanism, an early known mechanical analog computer, and the mechanical inventions of Archimedes, are examples of Greek mechanical engineering. Some of Archimedes' inventions, as well as the Antikythera mechanism, required sophisticated knowledge of differential gearing or epicyclic gearing, two key principles in machine theory that helped design the gear trains of the Industrial Revolution, and are widely used in fields such as robotics and automotive engineering.", "title": "History" }, { "paragraph_id": 14, "text": "Ancient Chinese, Greek, Roman and Hunnic armies employed military machines and inventions such as artillery which was developed by the Greeks around the 4th century BC, the trireme, the ballista and the catapult. In the Middle Ages, the trebuchet was developed.", "title": "History" }, { "paragraph_id": 15, "text": "The earliest practical wind-powered machines, the windmill and wind pump, first appeared in the Muslim world during the Islamic Golden Age, in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. 
The earliest practical steam-powered machine was a steam jack driven by a steam turbine, described in 1551 by Taqi al-Din Muhammad ibn Ma'ruf in Ottoman Egypt.", "title": "History" }, { "paragraph_id": 16, "text": "The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century, both of which were fundamental to the growth of the cotton industry. The spinning wheel was also a precursor to the spinning jenny, which was a key development during the early Industrial Revolution in the 18th century.", "title": "History" }, { "paragraph_id": 17, "text": "The earliest programmable machines were developed in the Muslim world. A music sequencer, a programmable musical instrument, was the earliest type of programmable machine. The first music sequencer was an automated flute player invented by the Banu Musa brothers, described in their Book of Ingenious Devices, in the 9th century. In 1206, Al-Jazari invented programmable automata/robots. He described four automaton musicians, including drummers operated by a programmable drum machine, where they could be made to play different rhythms and different drum patterns.", "title": "History" }, { "paragraph_id": 18, "text": "Before the development of modern engineering, mathematics was used by artisans and craftsmen, such as millwrights, clockmakers, instrument makers and surveyors. Aside from these professions, universities were not believed to have had much practical significance to technology.", "title": "History" }, { "paragraph_id": 19, "text": "A standard reference for the state of mechanical arts during the Renaissance is given in the mining engineering treatise De re metallica (1556), which also contains sections on geology, mining, and chemistry. De re metallica was the standard chemistry reference for the next 180 years.", "title": "History" }, { "paragraph_id": 20, "text": "The science of classical mechanics, sometimes called Newtonian mechanics, formed the scientific basis of much of modern engineering. With the rise of engineering as a profession in the 18th century, the term became more narrowly applied to fields in which mathematics and science were applied to these ends. Similarly, in addition to military and civil engineering, the fields then known as the mechanic arts became incorporated into engineering.", "title": "History" }, { "paragraph_id": 21, "text": "Canal building was an important engineering work during the early phases of the Industrial Revolution.", "title": "History" }, { "paragraph_id": 22, "text": "John Smeaton was the first self-proclaimed civil engineer and is often regarded as the \"father\" of civil engineering. He was an English civil engineer responsible for the design of bridges, canals, harbors, and lighthouses. He was also a capable mechanical engineer and an eminent physicist. Using a model water wheel, Smeaton conducted experiments for seven years, determining ways to increase efficiency. Smeaton introduced iron axles and gears to water wheels. Smeaton also made mechanical improvements to the Newcomen steam engine. Smeaton designed the third Eddystone Lighthouse (1755–59) where he pioneered the use of 'hydraulic lime' (a form of mortar which will set under water) and developed a technique involving dovetailed blocks of granite in the building of the lighthouse. 
He is important in the history, rediscovery of, and development of modern cement, because he identified the compositional requirements needed to obtain \"hydraulicity\" in lime; work which led ultimately to the invention of Portland cement.", "title": "History" }, { "paragraph_id": 23, "text": "Applied science led to the development of the steam engine. The sequence of events began with the invention of the barometer and the measurement of atmospheric pressure by Evangelista Torricelli in 1643, demonstration of the force of atmospheric pressure by Otto von Guericke using the Magdeburg hemispheres in 1656, laboratory experiments by Denis Papin, who built experimental model steam engines and demonstrated the use of a piston, which he published in 1707. Edward Somerset, 2nd Marquess of Worcester published a book of 100 inventions containing a method for raising waters similar to a coffee percolator. Samuel Morland, a mathematician and inventor who worked on pumps, left notes at the Vauxhall Ordinance Office on a steam pump design that Thomas Savery read. In 1698 Savery built a steam pump called \"The Miner's Friend\". It employed both vacuum and pressure. Iron merchant Thomas Newcomen, who built the first commercial piston steam engine in 1712, was not known to have any scientific training.", "title": "History" }, { "paragraph_id": 24, "text": "The application of steam-powered cast iron blowing cylinders for providing pressurized air for blast furnaces lead to a large increase in iron production in the late 18th century. The higher furnace temperatures made possible with steam-powered blast allowed for the use of more lime in blast furnaces, which enabled the transition from charcoal to coke. These innovations lowered the cost of iron, making horse railways and iron bridges practical. The puddling process, patented by Henry Cort in 1784 produced large scale quantities of wrought iron. Hot blast, patented by James Beaumont Neilson in 1828, greatly lowered the amount of fuel needed to smelt iron. With the development of the high pressure steam engine, the power to weight ratio of steam engines made practical steamboats and locomotives possible. New steel making processes, such as the Bessemer process and the open hearth furnace, ushered in an area of heavy engineering in the late 19th century.", "title": "History" }, { "paragraph_id": 25, "text": "One of the most famous engineers of the mid-19th century was Isambard Kingdom Brunel, who built railroads, dockyards and steamships.", "title": "History" }, { "paragraph_id": 26, "text": "The Industrial Revolution created a demand for machinery with metal parts, which led to the development of several machine tools. Boring cast iron cylinders with precision was not possible until John Wilkinson invented his boring machine, which is considered the first machine tool. Other machine tools included the screw cutting lathe, milling machine, turret lathe and the metal planer. Precision machining techniques were developed in the first half of the 19th century. These included the use of gigs to guide the machining tool over the work and fixtures to hold the work in the proper position. Machine tools and machining techniques capable of producing interchangeable parts lead to large scale factory production by the late 19th century.", "title": "History" }, { "paragraph_id": 27, "text": "The United States Census of 1850 listed the occupation of \"engineer\" for the first time with a count of 2,000. There were fewer than 50 engineering graduates in the U.S. 
before 1865. In 1870 there were a dozen U.S. mechanical engineering graduates, with that number increasing to 43 per year in 1875. In 1890, there were 6,000 engineers in civil, mining, mechanical and electrical.", "title": "History" }, { "paragraph_id": 28, "text": "There was no chair of applied mechanism and applied mechanics at Cambridge until 1875, and no chair of engineering at Oxford until 1907. Germany established technical universities earlier.", "title": "History" }, { "paragraph_id": 29, "text": "The foundations of electrical engineering in the 1800s included the experiments of Alessandro Volta, Michael Faraday, Georg Ohm and others and the invention of the electric telegraph in 1816 and the electric motor in 1872. The theoretical work of James Maxwell (see: Maxwell's equations) and Heinrich Hertz in the late 19th century gave rise to the field of electronics. The later inventions of the vacuum tube and the transistor further accelerated the development of electronics to such an extent that electrical and electronics engineers currently outnumber their colleagues of any other engineering specialty. Chemical engineering developed in the late nineteenth century. Industrial scale manufacturing demanded new materials and new processes and by 1880 the need for large scale production of chemicals was such that a new industry was created, dedicated to the development and large scale manufacturing of chemicals in new industrial plants. The role of the chemical engineer was the design of these chemical plants and processes.", "title": "History" }, { "paragraph_id": 30, "text": "Aeronautical engineering deals with aircraft design process design while aerospace engineering is a more modern term that expands the reach of the discipline by including spacecraft design. Its origins can be traced back to the aviation pioneers around the start of the 20th century although the work of Sir George Cayley has recently been dated as being from the last decade of the 18th century. Early knowledge of aeronautical engineering was largely empirical with some concepts and skills imported from other branches of engineering.", "title": "History" }, { "paragraph_id": 31, "text": "The first PhD in engineering (technically, applied science and engineering) awarded in the United States went to Josiah Willard Gibbs at Yale University in 1863; it was also the second PhD awarded in science in the U.S.", "title": "History" }, { "paragraph_id": 32, "text": "Only a decade after the successful flights by the Wright brothers, there was extensive development of aeronautical engineering through development of military aircraft that were used in World War I. Meanwhile, research to provide fundamental background science continued by combining theoretical physics with experiments.", "title": "History" }, { "paragraph_id": 33, "text": "Engineering is a broad discipline that is often broken down into several sub-disciplines. Although an engineer will usually be trained in a specific discipline, he or she may become multi-disciplined through experience. 
Engineering is often characterized as having four main branches: chemical engineering, civil engineering, electrical engineering, and mechanical engineering.", "title": "Main branches of engineering" }, { "paragraph_id": 34, "text": "Chemical engineering is the application of physics, chemistry, biology, and engineering principles in order to carry out chemical processes on a commercial scale, such as the manufacture of commodity chemicals, specialty chemicals, petroleum refining, microfabrication, fermentation, and biomolecule production.", "title": "Main branches of engineering" }, { "paragraph_id": 35, "text": "Civil engineering is the design and construction of public and private works, such as infrastructure (airports, roads, railways, water supply, and treatment etc.), bridges, tunnels, dams, and buildings. Civil engineering is traditionally broken into a number of sub-disciplines, including structural engineering, environmental engineering, and surveying. It is traditionally considered to be separate from military engineering.", "title": "Main branches of engineering" }, { "paragraph_id": 36, "text": "Electrical engineering is the design, study, and manufacture of various electrical and electronic systems, such as broadcast engineering, electrical circuits, generators, motors, electromagnetic/electromechanical devices, electronic devices, electronic circuits, optical fibers, optoelectronic devices, computer systems, telecommunications, instrumentation, control systems, and electronics.", "title": "Main branches of engineering" }, { "paragraph_id": 37, "text": "Mechanical engineering is the design and manufacture of physical or mechanical systems, such as power and energy systems, aerospace/aircraft products, weapon systems, transportation products, engines, compressors, powertrains, kinematic chains, vacuum technology, vibration isolation equipment, manufacturing, robotics, turbines, audio equipments, and mechatronics.", "title": "Main branches of engineering" }, { "paragraph_id": 38, "text": "Bioengineering is the engineering of biological systems for a useful purpose. Examples of bioengineering research include bacteria engineered to produce chemicals, new medical imaging technology, portable and rapid disease diagnostic devices, prosthetics, biopharmaceuticals, and tissue-engineered organs.", "title": "Main branches of engineering" }, { "paragraph_id": 39, "text": "Interdisciplinary engineering draws from more than one of the principle branches of the practice. Historically, naval engineering and mining engineering were major branches. Other engineering fields are manufacturing engineering, acoustical engineering, corrosion engineering, instrumentation and control, aerospace, automotive, computer, electronic, information engineering, petroleum, environmental, systems, audio, software, architectural, agricultural, biosystems, biomedical, geological, textile, industrial, materials, and nuclear engineering. 
These and other branches of engineering are represented in the 36 licensed member institutions of the UK Engineering Council.", "title": "Interdisciplinary engineering" }, { "paragraph_id": 40, "text": "New specialties sometimes combine with the traditional fields and form new branches – for example, Earth systems engineering and management involves a wide range of subject areas including engineering studies, environmental science, engineering ethics and philosophy of engineering.", "title": "Interdisciplinary engineering" }, { "paragraph_id": 41, "text": "Aerospace engineering covers the design, development, manufacture and operational behaviour of aircraft, satellites and rockets.", "title": "Other branches of engineering" }, { "paragraph_id": 42, "text": "Marine engineering covers the design, development, manufacture and operational behaviour of watercraft and stationary structures like oil platforms and ports.", "title": "Other branches of engineering" }, { "paragraph_id": 43, "text": "Computer engineering (CE) is a branch of engineering that integrates several fields of computer science and electronic engineering required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering.", "title": "Other branches of engineering" }, { "paragraph_id": 44, "text": "Geological engineering is associated with anything constructed on or within the Earth. This discipline applies geological sciences and engineering principles to direct or support the work of other disciplines such as civil engineering, environmental engineering, and mining engineering. Geological engineers are involved with impact studies for facilities and operations that affect surface and subsurface environments, such as rock excavations (e.g. tunnels), building foundation consolidation, slope and fill stabilization, landslide risk assessment, groundwater monitoring, groundwater remediation, mining excavations, and natural resource exploration.", "title": "Other branches of engineering" }, { "paragraph_id": 45, "text": "One who practices engineering is called an engineer, and those licensed to do so may have more formal designations such as Professional Engineer, Chartered Engineer, Incorporated Engineer, Ingenieur, European Engineer, or Designated Engineering Representative.", "title": "Practice" }, { "paragraph_id": 46, "text": "In the engineering design process, engineers apply mathematics and sciences such as physics to find novel solutions to problems or to improve existing solutions. Engineers need proficient knowledge of relevant sciences for their design projects. As a result, many engineers continue to learn new material throughout their careers.", "title": "Methodology" }, { "paragraph_id": 47, "text": "If multiple solutions exist, engineers weigh each design choice based on their merit and choose the solution that best matches the requirements. The task of the engineer is to identify, understand, and interpret the constraints on a design in order to yield a successful result. 
It is generally insufficient to build a technically successful product; rather, it must also meet further requirements.", "title": "Methodology" }, { "paragraph_id": 48, "text": "Constraints may include available resources; physical, imaginative, or technical limitations; flexibility for future modifications and additions; and other factors, such as requirements for cost, safety, marketability, productivity, and serviceability. By understanding the constraints, engineers derive specifications for the limits within which a viable object or system may be produced and operated.", "title": "Methodology" }, { "paragraph_id": 49, "text": "Engineers use their knowledge of science, mathematics, logic, economics, and appropriate experience or tacit knowledge to find suitable solutions to a particular problem. Creating an appropriate mathematical model of a problem often allows them to analyze it (sometimes definitively), and to test potential solutions.", "title": "Methodology" }, { "paragraph_id": 50, "text": "More than one solution to a design problem usually exists, so the different design choices have to be evaluated on their merits before the one judged most suitable is chosen. Genrich Altshuller, after gathering statistics on a large number of patents, suggested that compromises are at the heart of \"low-level\" engineering designs, while at a higher level the best design is one which eliminates the core contradiction causing the problem.", "title": "Methodology" }, { "paragraph_id": 51, "text": "Engineers typically attempt to predict how well their designs will perform to their specifications prior to full-scale production. They use, among other things: prototypes, scale models, simulations, destructive tests, nondestructive tests, and stress tests. Testing ensures that products will perform as expected, but only insofar as the testing has been representative of use in service. For products, such as aircraft, that are used differently by different users, failures and unexpected shortcomings (and necessary design changes) can be expected throughout the operational life of the product.", "title": "Methodology" }, { "paragraph_id": 52, "text": "Engineers take on the responsibility of producing designs that will perform as well as expected and, except for those employed in specific areas of the arms industry, will not harm people. Engineers typically include a factor of safety in their designs to reduce the risk of unexpected failure.", "title": "Methodology" }, { "paragraph_id": 53, "text": "The study of failed products is known as forensic engineering. It attempts to identify the cause of failure to allow a redesign of the product and so prevent a recurrence. Careful analysis is needed to establish the cause of failure of a product. The consequences of a failure may vary in severity from the minor cost of a machine breakdown to large loss of life in the case of accidents involving aircraft and large stationary structures like buildings and dams.", "title": "Methodology" }, { "paragraph_id": 54, "text": "As with all modern scientific and technological endeavors, computers and software play an increasingly important role. In addition to typical business application software, there are a number of computer-aided applications (computer-aided technologies) specifically for engineering. 
Computers can be used to generate models of fundamental physical processes, which can be solved using numerical methods.", "title": "Methodology" }, { "paragraph_id": 55, "text": "One of the most widely used design tools in the profession is computer-aided design (CAD) software. It enables engineers to create 3D models, 2D drawings, and schematics of their designs. CAD together with digital mockup (DMU) and CAE software such as finite element method analysis or analytic element method allows engineers to create models of designs that can be analyzed without having to make expensive and time-consuming physical prototypes.", "title": "Methodology" }, { "paragraph_id": 56, "text": "These allow products and components to be checked for flaws, to assess fit and assembly, to study ergonomics, and to analyze static and dynamic characteristics of systems such as stresses, temperatures, electromagnetic emissions, electrical currents and voltages, digital logic levels, fluid flows, and kinematics. Access and distribution of all this information is generally organized with the use of product data management software.", "title": "Methodology" }, { "paragraph_id": 57, "text": "There are also many tools to support specific engineering tasks such as computer-aided manufacturing (CAM) software to generate CNC machining instructions; manufacturing process management software for production engineering; EDA for printed circuit board (PCB) and circuit schematics for electronic engineers; MRO applications for maintenance management; and Architecture, engineering and construction (AEC) software for civil engineering.", "title": "Methodology" }, { "paragraph_id": 58, "text": "In recent years the use of computer software to aid the development of goods has collectively come to be known as product lifecycle management (PLM).", "title": "Methodology" }, { "paragraph_id": 59, "text": "The engineering profession engages in a wide range of activities, from large collaborations at the societal level to smaller individual projects. Almost all engineering projects are accountable to some sort of financing source: a company, a set of investors, or a government. The few types of engineering that are minimally constrained by such issues are pro bono engineering and open-design engineering.", "title": "Social context" }, { "paragraph_id": 60, "text": "By its very nature, engineering has interconnections with society, culture, and human behavior. Every product or construction used by modern society is influenced by engineering. The results of engineering activity influence changes to the environment, society, and economies, and its application brings with it a responsibility for public safety.", "title": "Social context" }, { "paragraph_id": 61, "text": "Engineering projects can be subject to controversy. Examples from different engineering disciplines include the development of nuclear weapons, the Three Gorges Dam, the design and use of sport utility vehicles, and the extraction of oil. In response, some Western engineering companies have enacted serious corporate and social responsibility policies.", "title": "Social context" }, { "paragraph_id": 62, "text": "Engineering is a key driver of innovation and human development. Sub-Saharan Africa, in particular, has a very small engineering capacity, which results in many African nations being unable to develop crucial infrastructure without outside aid. 
The attainment of many of the Millennium Development Goals requires the achievement of sufficient engineering capacity to develop infrastructure and sustainable technological development.", "title": "Social context" }, { "paragraph_id": 63, "text": "All overseas development and relief NGOs make considerable use of engineers to apply solutions in disaster and development scenarios. A number of charitable organizations aim to use engineering directly for the good of mankind:", "title": "Social context" }, { "paragraph_id": 64, "text": "Engineering companies in many established economies are facing significant challenges with regard to the number of professional engineers being trained, compared with the number retiring. This problem is very prominent in the UK, where engineering has a poor image and low status. There are many negative economic and political issues that this can cause, as well as ethical issues. It is widely agreed that the engineering profession faces an \"image crisis\", rather than it being fundamentally an unattractive career. Much work is needed to avoid huge problems in the UK and other Western economies. Still, the UK, together with the United States, is home to more engineering companies than other European countries.", "title": "Social context" }, { "paragraph_id": 65, "text": "Many engineering societies have established codes of practice and codes of ethics to guide members and inform the public at large. The National Society of Professional Engineers code of ethics states:", "title": "Social context" }, { "paragraph_id": 66, "text": "Engineering is an important and learned profession. As members of this profession, engineers are expected to exhibit the highest standards of honesty and integrity. Engineering has a direct and vital impact on the quality of life for all people. Accordingly, the services provided by engineers require honesty, impartiality, fairness, and equity, and must be dedicated to the protection of the public health, safety, and welfare. Engineers must perform under a standard of professional behavior that requires adherence to the highest principles of ethical conduct.", "title": "Social context" }, { "paragraph_id": 67, "text": "In Canada, many engineers wear the Iron Ring as a symbol and reminder of the obligations and ethics associated with their profession.", "title": "Social context" }, { "paragraph_id": 68, "text": "Scientists study the world as it is; engineers create the world that has never been.", "title": "Relationships with other disciplines" }, { "paragraph_id": 69, "text": "There exists an overlap between the sciences and engineering practice; in engineering, one applies science. Both areas of endeavor rely on accurate observation of materials and phenomena. Both use mathematics and classification criteria to analyze and communicate observations.", "title": "Relationships with other disciplines" }, { "paragraph_id": 70, "text": "Scientists may also have to complete engineering tasks, such as designing experimental apparatus or building prototypes. Conversely, in the process of developing technology, engineers sometimes find themselves exploring new phenomena, thus becoming, for the moment, scientists or, more precisely, \"engineering scientists\".", "title": "Relationships with other disciplines" }, { "paragraph_id": 71, "text": "In the book What Engineers Know and How They Know It, Walter Vincenti asserts that engineering research has a character different from that of scientific research. 
First, it often deals with areas in which the basic physics or chemistry are well understood, but the problems themselves are too complex to solve in an exact manner.", "title": "Relationships with other disciplines" }, { "paragraph_id": 72, "text": "There is a \"real and important\" difference between engineering and physics, similar to the difference between any science and its associated technology. Physics is an exploratory science that seeks knowledge of principles, while engineering uses knowledge for practical applications of principles. The former translates an understanding into mathematical principles, while the latter measures the variables involved and creates technology. For technology, physics is an auxiliary discipline; in a way, technology can be considered applied physics. Though physics and engineering are interrelated, it does not mean that a physicist is trained to do an engineer's job. A physicist would typically require additional and relevant training. Physicists and engineers engage in different lines of work. However, PhD physicists who specialize in sectors of engineering physics and applied physics may hold titles such as Technology Officer, R&D Engineer, or Systems Engineer.", "title": "Relationships with other disciplines" }, { "paragraph_id": 73, "text": "An example of this is the use of numerical approximations to the Navier–Stokes equations to describe aerodynamic flow over an aircraft, or the use of the finite element method to calculate the stresses in complex components. Second, engineering research employs many semi-empirical methods that are foreign to pure scientific research, one example being the method of parameter variation.", "title": "Relationships with other disciplines" }, { "paragraph_id": 74, "text": "As stated by Fung et al. in the revision to the classic engineering text Foundations of Solid Mechanics:", "title": "Relationships with other disciplines" }, { "paragraph_id": 75, "text": "Engineering is quite different from science. Scientists try to understand nature. Engineers try to make things that do not exist in nature. Engineers stress innovation and invention. To embody an invention the engineer must put his idea in concrete terms, and design something that people can use. That something can be a complex system, device, a gadget, a material, a method, a computing program, an innovative experiment, a new solution to a problem, or an improvement on what already exists. Since a design has to be realistic and functional, it must have its geometry, dimensions, and characteristics data defined. In the past engineers working on new designs found that they did not have all the required information to make design decisions. Most often, they were limited by insufficient scientific knowledge. Thus they studied mathematics, physics, chemistry, biology and mechanics. Often they had to add to the sciences relevant to their profession. 
Thus engineering sciences were born.", "title": "Relationships with other disciplines" }, { "paragraph_id": 76, "text": "Although engineering solutions make use of scientific principles, engineers must also take into account safety, efficiency, economy, reliability, and constructability or ease of fabrication, as well as environmental, ethical, and legal considerations, such as patent infringement or liability in the case of failure of the solution.", "title": "Relationships with other disciplines" }, { "paragraph_id": 77, "text": "The study of the human body, albeit from different directions and for different purposes, is an important common link between medicine and some engineering disciplines. Medicine aims to sustain, repair, enhance and even replace functions of the human body, if necessary, through the use of technology.", "title": "Relationships with other disciplines" }, { "paragraph_id": 78, "text": "Modern medicine can replace several of the body's functions through the use of artificial organs and can significantly alter the function of the human body through artificial devices such as brain implants and pacemakers. The fields of bionics and medical bionics are dedicated to the study of synthetic implants pertaining to natural systems.", "title": "Relationships with other disciplines" }, { "paragraph_id": 79, "text": "Conversely, some engineering disciplines view the human body as a biological machine worth studying and are dedicated to emulating many of its functions by replacing biology with technology. This has led to fields such as artificial intelligence, neural networks, fuzzy logic, and robotics. There are also substantial interdisciplinary interactions between engineering and medicine.", "title": "Relationships with other disciplines" }, { "paragraph_id": 80, "text": "Both fields provide solutions to real world problems. This often requires moving forward before phenomena are completely understood in a more rigorous scientific sense; therefore, experimentation and empirical knowledge are an integral part of both.", "title": "Relationships with other disciplines" }, { "paragraph_id": 81, "text": "Medicine, in part, studies the function of the human body. The human body, as a biological machine, has many functions that can be modeled using engineering methods.", "title": "Relationships with other disciplines" }, { "paragraph_id": 82, "text": "The heart, for example, functions much like a pump, the skeleton is like a linked structure with levers, the brain produces electrical signals, etc. 
These similarities, as well as the increasing importance and application of engineering principles in medicine, led to the development of the field of biomedical engineering, which uses concepts developed in both disciplines.", "title": "Relationships with other disciplines" }, { "paragraph_id": 83, "text": "Newly emerging branches of science, such as systems biology, are adapting analytical tools traditionally used for engineering, such as systems modeling and computational analysis, to the description of biological systems.", "title": "Relationships with other disciplines" }, { "paragraph_id": 84, "text": "There are connections between engineering and art, for example, in architecture, landscape architecture, and industrial design (even to the extent that these disciplines may sometimes be included in a university's Faculty of Engineering).", "title": "Relationships with other disciplines" }, { "paragraph_id": 85, "text": "The Art Institute of Chicago, for instance, held an exhibition about the art of NASA's aerospace design. Robert Maillart's bridge design is perceived by some to have been deliberately artistic. At the University of South Florida, an engineering professor, through a grant with the National Science Foundation, has developed a course that connects art and engineering.", "title": "Relationships with other disciplines" }, { "paragraph_id": 86, "text": "Among famous historical figures, Leonardo da Vinci is a well-known Renaissance artist and engineer, and a prime example of the nexus between art and engineering.", "title": "Relationships with other disciplines" }, { "paragraph_id": 87, "text": "Business engineering deals with the relationship between professional engineering, IT systems, business administration, and change management. Engineering management or \"management engineering\" is a specialized field of management concerned with engineering practice or the engineering industry sector. The demand for management-focused engineers (or, from the opposite perspective, managers with an understanding of engineering) has resulted in the development of specialized engineering management degrees that develop the knowledge and skills needed for these roles. During an engineering management course, students will develop industrial engineering skills, knowledge, and expertise, alongside knowledge of business administration, management techniques, and strategic thinking. Engineers specializing in change management must have in-depth knowledge of the application of industrial and organizational psychology principles and methods. Professional engineers often train as certified management consultants in the very specialized field of management consulting applied to engineering practice or the engineering sector. This work often deals with large-scale, complex business transformation or business process management initiatives in aerospace and defence, automotive, oil and gas, machinery, pharmaceutical, food and beverage, electrical & electronics, power distribution & generation, utilities, and transportation systems. This combination of technical engineering practice, management consulting practice, industry sector knowledge, and change management expertise enables professional engineers who are also qualified as management consultants to lead major business transformation initiatives. 
These initiatives are typically sponsored by C-level executives.", "title": "Relationships with other disciplines" }, { "paragraph_id": 88, "text": "In political science, the term engineering has been borrowed for the study of the subjects of social engineering and political engineering, which deal with forming political and social structures using engineering methodology coupled with political science principles. Marketing engineering and Financial engineering have similarly borrowed the term.", "title": "Relationships with other disciplines" } ]
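The Methodology paragraphs above mention two quantitative ideas in passing: the factor of safety that engineers build into designs, and numerical models of physical processes. A minimal Python sketch of both might look as follows; nothing here comes from the source article, and every function name and numerical value is invented for illustration:

```python
# A hypothetical sketch of two ideas from the Methodology paragraphs:
# (1) the factor of safety engineers include in designs, and
# (2) a model of a physical process solved with a numerical method.
# All names and numbers below are invented for illustration.

def factor_of_safety(failure_load: float, design_load: float) -> float:
    """Ratio of the load at which a part fails to the load it is designed
    to carry; values above 1 leave a margin against unexpected failure."""
    return failure_load / design_load

def heat_step(u, alpha, dx, dt):
    """Advance a 1D temperature profile u by one time step with an explicit
    finite-difference scheme for u_t = alpha * u_xx (ends held fixed)."""
    r = alpha * dt / dx ** 2  # the scheme is stable only for r <= 0.5
    interior = [
        u[i] + r * (u[i - 1] - 2 * u[i] + u[i + 1])
        for i in range(1, len(u) - 1)
    ]
    return [u[0]] + interior + [u[-1]]

if __name__ == "__main__":
    # A part that fails at 12 kN but is designed to carry only 8 kN.
    print(factor_of_safety(failure_load=12_000.0, design_load=8_000.0))  # 1.5

    # A bar that starts hot in the middle and cools toward fixed cold ends.
    u = [0.0] * 5 + [100.0] + [0.0] * 5
    for _ in range(100):
        u = heat_step(u, alpha=1.0, dx=0.1, dt=0.004)
    print([round(t, 1) for t in u])
```

The explicit scheme in heat_step is stable only while alpha * dt / dx ** 2 stays at or below 0.5, itself a small example of the kind of constraint an engineer must verify before trusting a simulation.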
Engineering is the practice of using natural science, mathematics, and the engineering design process to solve technical problems, increase efficiency and productivity, and improve systems. Modern engineering comprises many subfields, which include designing and improving infrastructure, machinery, vehicles, electronics, materials, and energy systems. The discipline of engineering encompasses a broad range of more specialized fields of engineering, each with a more specific emphasis on particular areas of applied mathematics, applied science, and types of application. See glossary of engineering. The term engineering is derived from the Latin ingenium, meaning "cleverness", and ingeniare, meaning "to contrive, devise".
2001-11-07T14:50:41Z
2023-12-22T14:15:51Z
[ "Template:Short description", "Template:Other uses", "Template:Circa", "Template:Reflist", "Template:Cite web", "Template:Cite book", "Template:OCLC", "Template:More citations needed section", "Template:Authority control", "Template:For outline", "Template:Original research section", "Template:Webarchive", "Template:Hounshell1984", "Template:Wikiversity inline", "Template:Engineering fields", "Template:TopicTOC-Engineering", "Template:Cite OED", "Template:Cite encyclopedia", "Template:Refend", "Template:Wikisource-inline", "Template:Philosophy of science", "Template:Use American English", "Template:Main", "Template:Div col end", "Template:Cite news", "Template:Wiktionary-inline", "Template:Wikiquote-inline", "Template:Pp-semi-indef", "Template:Use mdy dates", "Template:Blockquote", "Template:Industries", "Template:Rp", "Template:Unreferenced section", "Template:Div col", "Template:Refbegin", "Template:Convert", "Template:Citation needed", "Template:Stack", "Template:Cite journal", "Template:ISBN", "Template:Glossaries of science and engineering" ]
https://en.wikipedia.org/wiki/Engineering
9,252
Education
Education is the transmission of knowledge, skills, and character traits and comes in many forms. Formal education happens in a complex institutional framework, like public schools. Non-formal education is also structured but takes place outside the formal schooling system while informal education is unstructured learning through daily experiences. Formal and non-formal education are divided into levels that include early childhood education, primary education, secondary education, and tertiary education. Other classifications focus on the teaching method, like teacher-centered and student-centered education, and on the subject, like science education, language education, and physical education. The term "education" can also refer to the mental states and qualities of educated people and the academic field studying educational phenomena. The precise definition of education is disputed and there are disagreements about what the aims of education are and to what extent education is different from indoctrination by fostering critical thinking. These disagreements affect how to identify, measure, and improve forms of education. Fundamentally, education socializes children into society by teaching cultural values and norms. It equips them with the skills needed to become productive members of society. This way, it stimulates economic growth and raises awareness of local and global problems. Organized institutions affect many aspects of education. For example, governments set education policies to determine when school classes happen, what is taught, and who can or must attend. International organizations, like UNESCO, have been influential in promoting primary education for all children. Many factors influence whether education is successful. Psychological factors include motivation, intelligence, and personality. Social factors, like socioeconomic status, ethnicity, and gender, are often linked to discrimination. Further factors include access to educational technology, teacher quality, and parent involvement. The main field investigating education is called education studies. It examines what education is, what aims and effects it has, and how to improve it. Education studies has many subfields, like philosophy, psychology, sociology, and economics of education. It also discusses comparative education, pedagogy, and the history of education. In prehistory, education happened informally through oral communication and imitation. With the rise of ancient civilizations, writing was invented, and the amount of knowledge grew. This caused a shift from informal to formal education. Initially, formal education was mainly available to elites and religious groups. The invention of the printing press in the 15th century made books more widely available. This increased general literacy. Beginning in the 18th and 19th centuries, public education became more important. This development led to the worldwide process of making primary education available to all, free of charge, and compulsory up to a certain age. The definition of education has been explored by theorists from various fields. Many agree that education is a purposeful activity aimed at achieving goals like the transmission of knowledge, skills, and character traits. There is extensive debate regarding its exact nature beyond these general features. One approach views education as a process that occurs during events such as schooling, teaching, and learning. 
Another outlook understands education not as a process but as the mental states and dispositions of educated persons that result from this process. Additionally, the term may also refer to the academic field that studies the methods, processes, and social institutions involved in teaching and learning. Having a clear idea of what the term means matters when trying to identify educational phenomena, measure educational success, and improve educational practices. The term "education" is derived from the Latin words educare, meaning "to bring up", and educere, meaning "to bring forth". Some theorists provide precise definitions by identifying specific features that belong to all forms of education and only to them. Education theorist R. S. Peters, for instance, outlines three essential features of education, which include that knowledge and understanding are imparted to the student and that this process is beneficial and done in a morally appropriate manner. Such precise definitions often succeed at characterizing the most typical forms of education. However, they often face criticism because less common types of education occasionally fall outside their parameters. The difficulty of dealing with counterexamples not covered by precise definitions can be avoided by offering less exact definitions based on family resemblance instead. This means that all the forms of education are similar to each other but they need not share a set of essential features that all of them have in common. Some education theorists, such as Keira Sewell and Stephen Newman, hold that the term "education" is context-dependent. This implies that its meaning varies depending on the situation in which it is used. There is disagreement in the academic literature on whether education is an evaluative concept. Thick definitions characterize education as an evaluative concept. They state that it is part of the nature of education to be beneficial to the student or to lead to some kind of improvement. Different thick definitions express differing views about what kind of improvement is involved. They contrast with thin definitions, which provide a value-neutral explanation of education. A closely related distinction is between descriptive and prescriptive conceptions of education. Descriptive conceptions refer to how the term is commonly used in ordinary language. Prescriptive conceptions define what good education is or how education should be practiced. Many thick and prescriptive conceptions hold that education is an activity that tries to achieve certain aims. Some concentrate on epistemic aims, like knowledge and understanding. Others give more emphasis to the development of skills, like rationality and critical thinking, and character traits, like kindness and honesty. One approach is to focus on a single overarching purpose of education and see the more specific aims as means to this end. According to one suggestion, socialization is the aim of education. It is realized by transmitting accumulated knowledge from one generation to the next. This process helps the student to function in society as a citizen. More person-centered definitions focus on the well-being of the student instead. According to them, education is a process that helps students lead a good life or the life they wish to lead. Various scholars stress the role of critical thinking to distinguish education from indoctrination. 
They state that mere indoctrination is only interested in instilling beliefs in the student, independent of whether the beliefs are rational; whereas education also fosters the rational ability to critically reflect on and question those beliefs. However, it is not universally accepted that these two phenomena can be clearly distinguished. One reason for this view is that some forms of indoctrination may be necessary in the early stages of education while the child's mind is not yet sufficiently developed. This applies to cases in which young children need to learn something without being able to understand the underlying reasons, like certain safety rules and hygiene practices. Education can be characterized from the teacher's or the student's perspective. Teacher-centered definitions focus on the perspective and role of the teacher in the transmission of knowledge and skills in a morally appropriate way. Student-centered definitions analyze education from the student's involvement in the learning process and hold that this process transforms and enriches their subsequent experiences. Definitions taking both perspectives into account are also possible. This can take the form of describing the process as the shared experience of a common world. In the shared experience, different aspects of the world are discovered and problems are posed and solved. There are many classifications of education. One of them depends on the institutional framework and distinguishes between formal, non-formal, and informal education. Another classification includes different levels of education based on factors like the student's age and the complexity of the content. Further categories focus on the topic, the teaching method, the medium used, and the funding. The most common division is between formal, non-formal, and informal education. Formal education happens in a complex institutional framework. Such frameworks have a chronological and hierarchical order: the modern schooling system has classes based on the student's age and progress, extending from primary school to university. Formal education is usually controlled and guided by the government. It tends to be compulsory up to a certain age. Non-formal and informal education take place outside the formal schooling system. Non-formal education is a middle ground. Like formal education, it is organized, systematic, and carried out with a clear purpose, like tutoring, fitness classes, and the scouting movement. Informal education happens in an unsystematic way through daily experiences and exposure to the environment. Unlike formal and non-formal education, there is usually no designated authority figure responsible for teaching. Informal education takes place in many different settings and situations throughout one's life, usually in a spontaneous way. This is how children learn their first language from their parents and how people learn to prepare a dish by cooking together. Some theorists distinguish the three types based on the location of learning: formal education takes place in school, non-formal education happens in places that are not regularly visited, like museums, and informal education occurs in places of everyday routines. There are also differences in the source of motivation. Formal education tends to be driven by extrinsic motivation for external rewards. Non-formal and informal education are closely linked to intrinsic motivation because the learning itself is enjoyed. 
The distinction between the three types is normally clear, but some forms of education do not easily fall into one category. Formal education plays a central role in modern civilization, though in primitive cultures most of the education happened on the informal level. This usually meant that there was no distinction between activities focused on education and other activities. Instead, the whole environment acted as a form of school and most adults acted as teachers. Informal education is often not efficient enough to teach large quantities of knowledge. To do so, a formal setting and well-trained teachers are usually required. This was one of the reasons why, in the course of history, formal education became more and more important. In this process, the experience of education and the discussed topics became more abstract and removed from daily life, while more emphasis was put on grasping general patterns and concepts instead of observing and imitating particular forms of behavior. Types of education are often divided into levels or stages. The most influential framework is the International Standard Classification of Education, maintained by the United Nations Educational, Scientific and Cultural Organization (UNESCO). It covers both formal and non-formal education and distinguishes levels based on the student's age, the duration of learning, and the complexity of the discussed content. Further criteria include entry requirements, teacher qualifications, and the intended outcome of successful completion. The levels are grouped into early childhood education (level 0), primary education (level 1), secondary education (levels 2–3), post-secondary non-tertiary education (level 4), and tertiary education (levels 5–8). Early childhood education, also known as preschool education or nursery education, is the stage of education that begins with birth and lasts until the start of primary school. It follows the holistic aim of fostering early child development at the physical, mental, and social levels. It plays a key role in socialization and personality development and includes various basic skills in the areas of communication, learning, and problem-solving. This way, it aims to prepare children for their entry into primary education. Preschool education is usually optional, but in some countries, such as Brazil, it is mandatory starting from the age of four. Primary (or elementary) education usually starts between the ages of five and seven and lasts for four to seven years. It does not have any further entry requirements and its main goal is to teach the basic skills in the fields of reading, writing, and mathematics. It also covers the core knowledge in other fields, like history, geography, the sciences, music, and art. A further aim is to foster personal development. Today, primary education is compulsory in almost all countries and over 90% of all primary-school-age children worldwide attend primary school. Secondary education is the stage of education following primary education and usually covers the ages of 12 to 18 years. It is commonly divided into lower secondary education (middle school or junior high school) and upper secondary education (high school, senior high school, or college, depending on the country). Lower secondary education normally has the completion of primary school as its entry requirement. It aims to extend and deepen the learning outcomes and is more focused on subject-specific curricula, with teachers specialized in only one or a few specific subjects. 
One of its aims is to familiarize students with the basic theoretical concepts in the different subjects. This helps create a solid basis for lifelong learning. In some cases, it also includes basic forms of vocational training. Lower secondary education is compulsory in many countries in Central and East Asia, Europe, and America. In some countries, it is the last stage of compulsory education. Mandatory lower secondary education is not as prevalent in Arab states, sub-Saharan Africa, and South and West Asia. Upper secondary education starts roughly at the age of 15 and aims to provide students with the skills and knowledge needed for employment or tertiary education. Its entry requirement is usually the completion of lower secondary education. Its subjects are more varied and complex and students can often choose between a few subjects. Its successful completion is commonly tied to a formal qualification in the form of a high school diploma. Some types of education after secondary education do not belong to tertiary education and are categorized as post-secondary non-tertiary education. They are similar in complexity to secondary education but tend to focus more on vocational training to prepare students for the job market. In some countries, tertiary education is used as a synonym of higher education, while in others, tertiary education is the wider term. Tertiary education expands upon the foundations of secondary education but has a narrower and more in-depth focus on a specific field or subject. Its completion leads to an academic degree. It can be divided into four levels: short-cycle tertiary, bachelor's, master's, and doctoral level education. These levels often form a hierarchical structure, with later levels depending on the completion of previous levels. Short-cycle tertiary education focuses on practical matters. It includes advanced vocational and professional training to prepare students for the job market in specialized professions. Bachelor's level education, also referred to as undergraduate education, tends to be longer than short-cycle tertiary education. It is usually offered by universities and results in an intermediary academic certification in the form of a bachelor's degree. Master's level education is more specialized than undergraduate education. Many programs require independent research in the form of a master's thesis as a requirement for successful completion. Doctoral level education leads to an advanced research qualification, normally in the form of a doctor's degree, such as a Doctor of Philosophy (PhD). It usually requires the submission of a substantial academic work, such as a dissertation. More advanced levels include post-doctoral studies and habilitation. Many other types of education are discussed in the academic literature, like the distinction between traditional and alternative education. Traditional education concerns long-established and mainstream schooling practices. It uses teacher-centered education and takes place in a well-regulated school environment. Regulations cover many aspects of education, such as the curriculum and the timeframe when classes start and end. Alternative education is an umbrella term for forms of schooling that differ from the mainstream traditional approach. They may use a different learning environment, teach different subjects, or promote a different teacher-student relationship. Alternative schooling is characterized by voluntary participation, relatively small class and school sizes, and personalized instruction. 
This often results in a more welcoming and emotionally safe atmosphere. Alternative education encompasses many types like charter schools and special programs for problematic or gifted children. It also includes homeschooling and unschooling. There are many alternative schooling traditions, like Montessori schools, Waldorf schools, Round Square schools, Escuela Nueva schools, free schools, and democratic schools. Alternative education also includes indigenous education, which focuses on the transmission of knowledge and skills from an indigenous heritage and employs methods like narration and storytelling. Further types of alternative schools include gurukul schools in India, madrasa schools in the Middle East, and yeshivas in Jewish tradition. Other distinctions between types of education are based on who receives education. Categories by the age of the learner are childhood education, adolescent education, adult education, and elderly education. Special education is education that is specifically adapted to meet the unique needs of students with disabilities. It covers various forms of impairments on the intellectual, social, communicative, and physical levels. It aims to overcome the challenges posed by these impairments. This way, it provides the affected students with access to an appropriate educational structure. When understood in the broadest sense, special education also includes education for very gifted children who need adjusted curricula to reach their fullest potential. Some classifications focus on the teaching method. In teacher-centered education, the teacher takes center stage in providing students with information. It contrasts with student-centered education, in which students take on a more active and responsible role in shaping classroom activities. For conscious education, learning and teaching happen with a clear purpose in mind. Unconscious education occurs on its own without being consciously planned or guided. This may happen in part through the personality of teachers and adults, which can have indirect effects on the development of the student's personality. Evidence-based education uses well-designed scientific studies to determine which methods of education work best. Its goal is to maximize the effectiveness of educational practices and policies. This is achieved by ensuring that they are informed by the best available empirical evidence. It includes evidence-based teaching, evidence-based learning, and school effectiveness research. Autodidacticism is self-education and happens without the guidance of teachers and institutions. It mainly occurs in adult education and is characterized by the freedom to choose what and when to study, which is why it can be a more fulfilling learning experience. The lack of structure and guidance can result in aimless learning and the absence of external feedback may lead autodidacts to develop false ideas and inaccurately assess their learning progress. Autodidacticism is closely related to lifelong education, which is an ongoing learning process throughout a person's entire life. Forms of education can also be categorized by the subject and the medium used. Types based on the subject include science education, language education, art education, religious education, and physical education. Special mediums, such as radio or websites, are used in distance education. Examples include e-learning (use of computers), m-learning (use of mobile devices), and online education. 
They often take the form of open education, in which the courses and materials are made available with a minimum of barriers. They contrast with regular classroom or onsite education. Some forms of online education are not open education, such as full online degree programs offered by some universities. A further distinction is based on the type of funding. State education, also referred to as public education, is funded and controlled by the government and available to the general public. It normally does not require tuition fees and is thus a form of free education. Private education, by contrast, is funded and managed by private institutions. Private schools often have a more selective admission process and offer paid education by charging tuition fees. A more detailed classification focuses on the social institution responsible for education, like family, school, civil society, state, and church. Compulsory education is education that people are legally required to receive. It mainly concerns children, who are required to attend school up to a certain age. It contrasts with voluntary education, which people pursue by personal choice without a legal requirement. Education plays various roles in society, including in social, economic, and personal fields. On a social level, education makes it possible to establish and sustain a stable society. It helps people acquire the basic skills needed to interact with their environment and fulfill their needs and desires. In modern society, this involves a wide range of skills like being able to speak, read, write, solve arithmetic problems, and handle information and communications technology. Another key part of socialization is to learn the dominant social and cultural norms and what kinds of behavior are considered appropriate in different contexts. Education enables the social cohesion, stability, and peace needed for people to productively engage in daily business. Socialization happens throughout life but is of special relevance to early childhood education. Education plays a key role in democracies by increasing civic participation in the form of voting and organizing, and through its tendency to promote equal opportunity for all. On an economic level, people become productive members of society through education by acquiring the technical and analytical skills needed to pursue their profession, produce goods, and provide services to others. In early societies, there was little specialization and each child would generally learn most of the skills that the community required to function. Modern societies are increasingly complex and many professions are only mastered by relatively few people who receive specialized training in addition to general education. Some of the skills and tendencies learned to function in society may conflict with each other, and their value depends on the context of their use. For example, cultivating the tendency to be inquisitive and question established teachings promotes critical thinking and innovation, but in some cases obedience to an authority is required to ensure social stability. By helping people become productive members of society, education stimulates economic growth and reduces poverty. It helps workers become more skilled and thereby increases the quality of the produced goods and services, which in turn leads to prosperity and increased competitiveness. Public education is often understood as a long-term investment to benefit society as a whole. 
The rate of return is especially high for investments in primary education. Besides increasing economic prosperity, it can also lead to technological and scientific advances as well as decrease unemployment while promoting social equity. Education can prepare a country to adapt to changes and successfully face new challenges. It can help raise awareness and contribute to the solution of contemporary global problems, such as climate change, sustainability, and the widening inequalities between the rich and the poor. By making students aware of how their lives and actions affect others, it may inspire some to work toward realizing a more sustainable and fair world. This way, education serves not just the purpose of maintaining the societal status quo, but can also be an instrument of social development. This also applies to changing circumstances in the economic sector. For example, technological advances, particularly increased automation, are accompanied by new demands on the workforce, which education can help address. Changing circumstances may render currently taught skills and knowledge redundant while shifting the importance to other areas. Education can be used to prepare people for such changes by adjusting the curriculum, introducing subjects like digital literacy, promoting skills in handling new technologies, and including new forms of education such as massive open online courses. On a more individual level, education promotes personal development. This can include factors such as learning new skills, developing talents, fostering creativity, and increasing self-knowledge as well as improving problem-solving and decision-making abilities. Education also has positive effects on health and well-being. The social importance of education is recognized by the annual International Day of Education on January 24. The United Nations declared the year 1970 the International Education Year. Organized institutions play a key role in various aspects of education. Institutions like schools, universities, teacher training institutions, and ministries of education make up the education sector. They interact both with each other and with other stakeholders, such as parents, local communities, and religious groups. Further stakeholders are non-governmental organizations, professionals in healthcare, law enforcement, media platforms, and political leaders. Many people are directly involved in the education sector. They include students, teachers, and school principals as well as school nurses and curriculum developers. Various aspects of formal education are regulated by the policies of governmental institutions. They determine at what age children need to attend school and at what times classes are held, as well as issues pertaining to the school environment, like infrastructure. Regulations also cover the exact qualifications and requirements that teachers need to fulfill. An important aspect of education policy concerns the curriculum used for teaching at schools, colleges, and universities. A curriculum is a plan of instruction or a program of learning that guides students to achieve their educational goals. The topics are usually selected based on their importance and depend on the type of school. The goals of public school curricula are usually to offer a comprehensive and well-rounded education, while vocational training focuses more on specific practical skills within a field. 
The curricula also cover various aspects besides the topic to be discussed, such as the teaching method, the objectives to be reached, and the standards for assessing progress. By determining the curricula, governmental institutions have a strong impact on what knowledge and skills are transmitted to the students. Examples of governmental institutions include the Ministry of Education in India, the Department of Basic Education in South Africa, and the Secretariat of Public Education in Mexico. International organizations also play a key role in education. For instance, UNESCO is an intergovernmental organization that promotes education in many ways. One of its activities is to advocate for education policies, like the Convention on the Rights of the Child, a treaty which states that education is a human right of all children and young people. Another was the Education for All initiative, which aimed to offer basic education to all children, adolescents, and adults by the year 2015 and was later succeeded by goal 4 of the Sustainable Development Goals. Related policies include the Convention against Discrimination in Education and the Futures of Education initiative. Some influential organizations are not intergovernmental, but non-governmental. For example, the International Association of Universities promotes collaboration and the exchange of knowledge between colleges and universities around the world, while the International Baccalaureate offers international diploma programs. Institutions like the Erasmus Programme facilitate student exchanges between countries, while initiatives such as the Fulbright Program provide a similar service for teachers. Several factors influence educational achievement. They include psychological factors, which concern the student as an individual, and sociological factors, which pertain to the student's social environment. Further factors are access to educational technology, teacher quality, and parent involvement. Many of these factors overlap and influence each other. On a psychological level, relevant factors include motivation, intelligence, and personality. Motivation is the internal force propelling people to engage in learning. Motivated students are more likely to interact with the content to be learned by participating in classroom activities like discussions, which often results in a deeper understanding of the subject. Motivation can also help students overcome difficulties and setbacks. An important distinction is between intrinsic and extrinsic motivation. Intrinsically motivated students are driven by an interest in the subject and the learning experience itself. Extrinsically motivated students seek external rewards like good grades and recognition from peers. Intrinsic motivation tends to be more beneficial by leading to increased creativity and engagement as well as long-term commitment. Educational psychologists try to discover how to increase motivation. This can be achieved, for instance, by encouraging some competition among students while ensuring a balance of positive and negative feedback in the form of praise and criticism. Intelligence is another important factor in how people respond to education. It is a mental quality linked to the ability to learn from experience, to understand, and to employ knowledge and skills to solve problems. Those who have higher scores in intelligence metrics tend to perform better at school and go on to higher levels of education. 
Intelligence is often primarily associated with the intelligence quotient (IQ), a standardized numerical metric for assessing intelligence. However, it has been argued that there are different types of intelligence pertaining to distinct areas. According to psychologist Howard Gardner, they can be distinguished into areas like mathematics, logic, spatial cognition, language, and music. Further types affect how a person interacts with other people and with themselves. These forms are largely independent of each other, meaning that someone may excel at one type while scoring low on another. A closely related factor concerns learning styles, which are preferred forms of acquiring knowledge and skills. According to proponents of learning style theory, students with an auditory learning style find it easy to follow spoken lectures and discussions, while visual learners benefit if information is presented visually in diagrams and videos. For efficient learning, it may be beneficial to include a wide variety of learning modalities. The learner's personality may also affect educational achievement. For example, the features of conscientiousness and openness to experience from the Big Five personality traits are linked to academic success. Further mental factors include self-efficacy, self-esteem, and metacognitive abilities. Sociological factors focus not on psychological attributes of learners but on their environment and position in society. They include socioeconomic status, ethnicity, and cultural background, as well as gender. They are of interest to researchers since they are associated with inequality and discrimination. For this reason, they play a key role in policy-making in attempts to mitigate their effects. Socioeconomic status depends on income but includes other factors, such as financial security, social status, and social class, as well as quality of life attributes. Low socioeconomic status affects educational success in various ways. It is linked to slower cognitive development in language and memory and to higher dropout rates. Poor families may not have enough money to meet the basic nutritional needs of their children, hindering their development. They may also lack the means to invest in educational resources like stimulating toys, books, and computers. Additionally, they may be unable to afford tuition at prestigious schools and are more likely to attend schools in poorer areas. Such schools tend to offer lower standards of teaching because of teacher shortages or because they lack educational materials and facilities, like libraries. Poor parents may also be unable to afford private lessons if their children fall behind. In some cases, students from an economically disadvantaged background are forced to drop out of school to provide income for their families. They also have less access to information on higher education and may face additional difficulties in securing and repaying student loans. Low socioeconomic status also has many indirect negative effects because it is linked to poorer physical and mental health. Due to these factors, social inequalities on the level of the parents are often reproduced in the children. Ethnic background is linked to cultural differences and language barriers, which make it more difficult for students to adapt to the school environment and follow classes. Additional factors are explicit and implicit biases and discrimination toward ethnic minorities. This may affect the students' self-esteem and motivation as well as their access to educational opportunities. 
For example, teachers may hold stereotypical views even if they are not overtly racist, which can lead them to grade comparable performances differently based on the child's ethnicity. Historically, gender has been a central factor in education since the roles of men and women were defined differently in many societies. Education tended to strongly favor men, who were expected to provide for the family. Women, by contrast, were expected to manage the household and rear children, which closed off most educational opportunities to them. While these inequalities have lessened in most modern societies, there are still gender differences in education. Among other things, this concerns biases and stereotypes linked to the role of gender in education. They affect subjects like science, technology, engineering, and mathematics, which are often presented as male fields, discouraging female students from pursuing them. In various cases, discrimination based on gender and social factors happens openly as part of official educational policy, such as the severe restrictions on female education instituted by the Taliban in Afghanistan and the school segregation of migrants and locals in urban China under the hukou system. One aspect of many social factors is given by the expectations associated with stereotypes. They work both on an external level, based on how other people react to a person belonging to a certain group, and on an internal level, based on how the person internalizes them and acts accordingly. In this sense, the expectations may turn into self-fulfilling prophecies by causing the educational outcomes they anticipate. This can happen both for positive and negative stereotypes. Technology plays another significant role in educational success. Educational technology is commonly associated with the use of modern digital devices, like computers. But understood in the broadest sense, it involves a wide range of resources and tools for learning, including basic aids that do not involve the use of machines, like regular books and worksheets. Educational technology can benefit learning in various ways. In the form of media, it often takes the role of the primary supplier of information in the classroom. This means that the teacher can focus their time and energy on other tasks, like planning the lesson and guiding students as well as assessing educational performance. Educational technology can also make information easier to understand by presenting it using graphics and videos rather than through mere text. In this regard, interactive elements may be used to make the learning experience more engaging, for example in the form of educational games. Technology can be employed to make educational materials accessible to many people, like when using online resources. It additionally facilitates collaboration between students and communication with teachers. Lack of educational technology affects developing countries in particular. Many efforts are made to address it through organizations such as the One Laptop per Child initiative, the African Library Project, and Pratham. A closely related issue concerns the effects of school infrastructure. It includes physical aspects of the school, like its location and size as well as the available school facilities and equipment. A healthy and safe environment, well-maintained classrooms, and suitable classroom furniture as well as the availability of a library and a canteen tend to contribute to educational success. 
The quality of the teacher also has an important impact on student achievement. Skilled teachers know how to motivate and inspire students and are able to adjust their instruction to the students' abilities and needs. Important in this regard are the teacher's own education and training as well as their past teaching experience. A meta-analysis by Engin Karadağ et al. concludes that, compared to other influences, factors related to the school and the teacher have the biggest impact on educational success.

Parent involvement is an additional factor that can boost student achievement. Children tend to be more motivated and invested when they are aware that their parents care about their educational efforts. This tends to lead to increased self-esteem, better attendance rates, and more constructive behavior at school. Parent involvement also includes communication with teachers and other school staff to make all parties aware of current issues and how they may be resolved. Further relevant factors sometimes discussed in the academic literature include historical, political, demographic, religious, and legal aspects.

The main discipline investigating education is called education studies, also referred to as education sciences. It tries to determine how people transmit and acquire knowledge by studying the methods and forms of education. It is interested in education's aims, effects, and value as well as the cultural, societal, governmental, and historical contexts that shape it. Education theorists integrate insights from many other fields of inquiry, including philosophy, psychology, sociology, economics, history, politics, and international relations. Because of these influences, some theorists claim that education studies is not an independent academic discipline like physics or history since its method and subject matter are not as clearly defined. Education studies differs from regular training programs, such as teacher training, since its focus on academic analysis and critical reflection goes beyond the skills needed to be a good teacher. It is not restricted to the topic of formal education but examines all forms and aspects of education.

Various research methods are used to study educational phenomena. They can roughly be divided into quantitative, qualitative, and mixed-methods approaches. Quantitative research emulates the methods of the natural sciences by using precise numerical measurements to gather data from many observations and employing statistical tools to analyze it. It aims to arrive at an objective and impersonal understanding. Qualitative research usually has a much smaller sample size and tries to gain an in-depth insight into more subjective and personal factors, like how different actors experience the process of education. Mixed-methods research aims to combine data gathered from both approaches to arrive at a balanced and comprehensive understanding. Data can be collected in various ways, such as direct observation or test scores as well as interviews and questionnaires. Some research projects study basic factors affecting all forms of education while others concentrate on one specific application. Some investigations look for solutions to concrete problems while others examine the effectiveness of educational projects and policies. Education studies encompasses various subfields like philosophy of education, pedagogy, psychology of education, sociology of education, economics of education, comparative education, and history of education.
The philosophy of education is the branch of applied philosophy that examines many of the basic assumptions underlying the theory and practice of education. It studies education both as a process and as a discipline while trying to provide exact definitions of its nature and how it differs from other phenomena. It further examines the purpose of education, its different types, and how to conceptualize teachers, students, and their relation. It includes educational ethics, which investigates the moral implications of education; for example, what ethical principles direct it and how teachers should apply them to specific cases. The philosophy of education has a long history and was already discussed in ancient Greek philosophy.

The term "pedagogy" is sometimes used as a synonym for education studies, but when understood in a more restricted sense, it refers to the subfield concerned with teaching methods. It studies how the aims of education, like the transmission of knowledge or the fostering of skills and character traits, can be realized. It is interested in the methods and practices used for teaching in regular schools. Some definitions restrict it to this domain, but in a wider sense, it covers all types of education, including forms of teaching outside schools. In this general sense, it explores how teachers can bring about experiences in learners to advance their understanding of the studied topic and how the learning itself takes place.

The psychology of education studies how education happens on the mental level, specifically how new knowledge and skills are acquired and how personal growth takes place. It examines the factors responsible for successful education and how these factors may differ between individuals. Important factors include intelligence, motivation, and personality. A central topic in this field is the interplay between nature and nurture and how it affects educational success. Influential psychological theories of education are behaviorism, cognitivism, and constructivism. Closely related fields are the neurology of education and educational neuroscience, which are interested in the neuropsychological processes and changes brought about through learning.

The sociology of education is concerned with how social factors influence education and how education leads to socialization. Often-discussed factors are socioeconomic status, ethnicity, and gender. The sociology of education studies how these factors, together with the dominant ideology in society, affect what kind of education is available to a person and how successful they are. Closely related questions include how education affects different groups in society and how educational experiences can form someone's personal identity. The sociology of education is specifically interested in aspects that result in inequalities, and its insights are relevant to education policy in trying to identify and mitigate factors that cause inequality. Two influential schools of thought are consensus theory and conflict theory. Consensus theorists hold that education benefits society as a whole by preparing people for their roles. Conflict theorists have a more negative outlook on the resulting inequalities and see education as a force used by the ruling class to promote its own agenda.

The economics of education is the field of inquiry studying how education is produced, distributed, and consumed.
It tries to determine how resources should be used to improve education, for example, by examining to what extent the quality of teachers improves when their salaries are raised. Other questions are how smaller class sizes affect educational success and how to invest in new educational technologies. This way, the economics of education helps policy-makers decide how to distribute limited resources most efficiently to benefit society as a whole. It also tries to understand the long-term role education plays in the economy of a country by providing a highly skilled labor force and increasing its competitiveness. A closely related issue concerns the economic advantages and disadvantages of different systems of education.

Comparative education is the discipline that examines and contrasts systems of education. Comparisons can happen from a general perspective or focus on specific factors, like social, political, or economic aspects. Comparative education is often applied to different countries to assess the similarities and differences of their educational institutions and practices as well as to evaluate the consequences of the distinct approaches. It can be used to learn from other countries which education policies work and how one's own system of education may be improved. This practice, known as policy borrowing, comes with many difficulties since the success of policies can depend to a large degree on the social and cultural context of students and teachers. A closely related and controversial topic concerns the question of whether the educational systems of developed countries are superior and should be exported to less developed ones. Other key topics are the internationalization of education and the role of education in transitioning from an authoritarian regime to a democracy.

The history of education examines the evolution of educational practices, systems, and institutions. It discusses various key processes, their possible causes and effects, and their relations to each other.

A central topic in education studies concerns the question of how people should be educated and what goals should guide this process. Many aims of education have been suggested. On a basic level, education is about the acquisition of knowledge and skills, but it may also include personal development and the fostering of character traits. Common suggestions encompass features like curiosity, creativity, rationality, and critical thinking as well as the tendency to think, feel, and act morally. Some scholars focus on liberal values linked to freedom, autonomy, and open-mindedness. Others prioritize qualities like obedience to authority, ideological purity, piety, and religious faith. An important discussion in this regard is about the role of critical thinking and the extent to which indoctrination forms part of education. On a social level, it is often stressed that education should socialize people. A controversial issue concerns who primarily benefits from education: the educated person, society as a whole, or dominant groups within society.

Educational ideologies are systems of basic philosophical assumptions and principles that can be used to interpret, understand, and evaluate existing educational practices and policies. They cover various additional issues besides the aims of education, like what topics are learned and how the learning activity is structured.
Other themes include the role of the teacher, how educational progress should be assessed, and how institutional frameworks and policies should be structured. There are many ideologies and they often overlap in various ways. Teacher-centered ideologies place the main emphasis on the teacher's role in transmitting knowledge to students, while student-centered ideologies give students a more active role in the process. Process-based ideologies focus on what the processes of teaching and learning should be like. They contrast with product-based ideologies, which discuss education from the perspective of the result to be achieved. Another classification contrasts progressivism with more traditional and conservative ideologies. Further categories are humanism, romanticism, essentialism, encyclopaedism, and pragmatism. There are also distinct authoritarian and democratic ideologies.

Learning theories try to explain how learning happens. Influential theories are behaviorism, cognitivism, and constructivism. Behaviorism understands learning as a change in behavior in response to environmental stimuli. This happens by presenting the learner with a stimulus, associating this stimulus with the desired response, and solidifying the stimulus-response pair. Cognitivism sees learning as a change in cognitive structures and focuses on the mental processes involved in storing, retrieving, and processing information. Constructivism holds that learning is based on the personal experience of each individual and puts more emphasis on social interactions and how they are interpreted by the learner. These theories have important implications for how to teach. For example, behaviorists tend to focus on drills, cognitivists may advocate the use of mnemonics, and constructivists tend to employ collaborative learning strategies. Various theories suggest that learning is more efficient when it is based on personal experience. An additional factor is to aim at a deeper understanding by connecting new knowledge to pre-existing knowledge rather than merely memorizing a list of unrelated facts.

An influential developmental theory of learning was proposed by psychologist Jean Piaget, who outlined four stages of learning through which children pass on their way to adulthood: the sensorimotor, the pre-operational, the concrete operational, and the formal operational stage. They correspond to different levels of abstraction, with early stages focusing more on simple sensory and motor activities while later stages include more complex internal representations and information processing in the form of logical reasoning.

The teaching method concerns the way the content is presented by the teacher, for example, whether group work is used instead of a focus on individual learning. There are many teaching methods available, and which one is most efficient in a given case depends on various factors, like the subject matter as well as the learner's age and competence level. This is reflected in the fact that modern school systems organize students by age, competence, specialization, and native language into different classes to ensure a productive learning process. Different subjects frequently use very different approaches: language education often focuses on verbal learning, while mathematical education is about abstract and symbolic thinking together with deductive reasoning.
One central requirement for teaching methodologies is to ensure that the learner remains motivated, whether through interest and curiosity or through external rewards. Further aspects of teaching methods include the instructional media used, such as books, worksheets, and audio-visual recordings, and having some form of test or assessment to evaluate the learning progress. An important pedagogical aspect in many forms of modern education is that each lesson is part of a larger educational enterprise governed by a syllabus, which often covers several months or years.

According to Herbartianism, teaching is divided into phases. The initial phase consists of preparing the student's mind for new information. Next, new ideas are first presented to the learner and then associated with ideas the learner is already familiar with. In later phases, the understanding shifts to a more general level behind the specific instances, and the ideas are then put into concrete practice.

The history of education studies the processes, methods, and institutions involved in teaching and learning. It tries to explain how they have interacted with each other and shaped educational practice up to the present day.

Education in prehistory took place as a form of enculturation and focused on practical knowledge and skills relevant to everyday concerns, for example, in relation to food, clothing, shelter, and protection. There were no formal schools or specialized teachers; most adults in the community performed that role, and learning happened informally during everyday activities, for example, when children observed and imitated their elders. In these oral societies, storytelling played a key role in transferring cultural and religious ideas from one generation to the next. With the emergence of agriculture around 9000 BCE, education slowly shifted toward greater specialization as people formed larger groups and more complex artisanal and technical skills were needed.

In the 4th millennium BCE and the following millennia, a major shift in educational practices took place with the invention of writing in regions such as Mesopotamia, ancient Egypt, the Indus Valley, and ancient China. This development had a significant influence on the history of education as a whole. Writing made it possible to store, preserve, and communicate information, which facilitated various subsequent developments, for example, the creation of educational tools, like textbooks, and the formation of institutions, like schools.

Another key aspect of ancient education was the establishment of formal education. This became necessary since the amount of knowledge grew as civilizations evolved and informal education proved insufficient to transmit all requisite knowledge between generations. Teachers acted as specialists to impart knowledge, and education became more abstract and further removed from daily life. Formal education was still quite rare in ancient societies and was restricted to the intellectual elites. It covered fields like reading and writing, record keeping, leadership, civic and political life, religion, and technical skills associated with specific professions. Formal education introduced a new way of teaching that put more emphasis on discipline and drills than the earlier informal modes of education.
Two often-discussed achievements of ancient education are the establishment of Plato's Academy in Ancient Greece, which is sometimes considered the first institute of higher learning, and the creation of the Great Library of Alexandria in Ancient Egypt, one of the most prestigious libraries of the ancient world.

Many aspects of education in the medieval period were shaped by religious traditions. In Europe, the Catholic Church wielded significant influence over formal education. In the Arab world, the newly founded religion of Islam spread rapidly and led to various educational developments during the Islamic Golden Age, for example, by integrating classical and religious knowledge and by establishing madrasa schools. In Jewish communities, yeshivas were established as institutions dedicated to the study of religious texts and Jewish law. In China, an expansive state education and examination system influenced by Confucian teachings was established. New complex societies also evolved in other regions, such as Africa, the Americas, Northern Europe, and Japan; some incorporated preexisting educational practices while others developed new traditions.

This period additionally saw the establishment of various institutes of higher education and research. The first universities in Europe were the University of Bologna, the University of Paris, and Oxford University. Other influential centers of higher learning were the Al-Qarawiyyin University in Morocco, the Al-Azhar University in Egypt, and the House of Wisdom in Iraq. Another key development was the creation of guilds, associations of skilled craftsmen and merchants who controlled the practice of their trades. Guilds were responsible for vocational education, and new members had to pass through several stages on their way to becoming masters.

Starting in the early modern period, education in Europe during the Renaissance slowly began to shift from a religious approach toward a more secular one. This development was tied to an increased appreciation of the importance of education and a broadened range of topics, including a revived interest in ancient literary texts and educational programs. The turn toward secularization was accelerated during the Age of Enlightenment, beginning in the 17th century, which emphasized the role of reason and the empirical sciences. European colonization affected education in the Americas through Christian missionary initiatives. In China, the state educational system was further expanded and focused more on the teachings of neo-Confucianism. In the Islamic world, the reach of formal education increased while remaining under the influence of religion.

A key development in the early modern period was the invention and popularization of the printing press in the middle of the 15th century, which had a profound impact on general education. It significantly reduced the cost of producing books, which previously had been copied by hand, and thereby widened the dissemination of written documents, including new forms like newspapers and pamphlets. The increased availability of written media had a major influence on the general literacy of the population.

These changes paved the way for the rise of public education in the 18th and 19th centuries. This period saw the establishment of publicly funded schools with the aim of providing education for all, in contrast to earlier periods, when formal education was primarily provided by private schools, religious institutions, and individual tutors.
The Aztec civilization was an exception in this regard, since formal education was mandatory for the youth regardless of social class as early as the 14th century. Closely related changes were to make education compulsory and free of charge for all children up to a certain age. Initiatives to promote public education and universal access to education made significant progress in the 20th and 21st centuries and were promoted by intergovernmental organizations like the UN. Examples include the Universal Declaration of Human Rights, the Convention on the Rights of the Child, the Education for All initiative, the Millennium Development Goals, and the Sustainable Development Goals. These efforts resulted in a steady rise of all forms of education but affected primary education in particular. In 1970, 28% of all primary-school-age children worldwide did not attend school; by 2015, this number had dropped to 9%.

The establishment of public education was accompanied by the introduction of standardized curricula for public schools as well as standardized tests to assess students' progress. Contemporary examples include the Test of English as a Foreign Language, a globally used test of the English proficiency of non-native speakers, and the Programme for International Student Assessment, which evaluates education systems worldwide based on how 15-year-old students perform in reading, mathematics, and science. Similar changes also affected teachers through the establishment of institutions and norms to guide and oversee teacher training, like certification requirements for teaching at public schools.

A further influence on contemporary education was the emergence of new educational technologies. The widespread availability of computers and the internet dramatically increased access to educational resources and made new types of education possible, such as online education. This was of particular relevance during the COVID-19 pandemic, when schools worldwide closed for extended periods and many offered remote learning through video conferencing or pre-recorded video lessons to continue instruction. A further contemporary factor is the increased globalization and internationalization of education.
[ { "paragraph_id": 0, "text": "Education is the transmission of knowledge, skills, and character traits and comes in many forms. Formal education happens in a complex institutional framework, like public schools. Non-formal education is also structured but takes place outside the formal schooling system while informal education is unstructured learning through daily experiences. Formal and non-formal education are divided into levels that include early childhood education, primary education, secondary education, and tertiary education. Other classifications focus on the teaching method, like teacher-centered and student-centered education, and on the subject, like science education, language education, and physical education. The term \"education\" can also refer to the mental states and qualities of educated people and the academic field studying educational phenomena.", "title": "" }, { "paragraph_id": 1, "text": "The precise definition of education is disputed and there are disagreements about what the aims of education are and to what extent education is different from indoctrination by fostering critical thinking. These disagreements affect how to identify, measure, and improve forms of education. Fundamentally, education socializes children into society by teaching cultural values and norms. It equips them with the skills needed to become productive members of society. This way, it stimulates economic growth and raises awareness of local and global problems. Organized institutions affect many aspects of education. For example, governments set education policies to determine when school classes happen, what is taught, and who can or must attend. International organizations, like UNESCO, have been influential in promoting primary education for all children.", "title": "" }, { "paragraph_id": 2, "text": "Many factors influence whether education is successful. Psychological factors include motivation, intelligence, and personality. Social factors, like socioeconomic status, ethnicity, and gender, are often linked to discrimination. Further factors include access to educational technology, teacher quality, and parent involvement.", "title": "" }, { "paragraph_id": 3, "text": "The main field investigating education is called education studies. It examines what education is, what aims and effects it has, and how to improve it. Education studies has many subfields, like philosophy, psychology, sociology, and economics of education. It also discusses comparative education, pedagogy, and the history of education. In prehistory, education happened informally through oral communication and imitation. With the rise of ancient civilizations, writing was invented, and the amount of knowledge grew. This caused a shift from informal to formal education. Initially, formal education was mainly available to elites and religious groups. The invention of the printing press in the 15th century made books more widely available. This increased general literacy. Beginning in the 18th and 19th centuries, public education became more important. This development led to the worldwide process of making primary education available to all, free of charge, and compulsory up to a certain age.", "title": "" }, { "paragraph_id": 4, "text": "The definition of education has been explored by theorists from various fields. Many agree that education is a purposeful activity aimed at achieving goals like the transmission of knowledge, skills, and character traits. 
There is extensive debate regarding its exact nature beyond these general features. One approach views education as a process that occurs during events such as schooling, teaching, and learning. Another outlook understands education not as a process but as the mental states and dispositions of educated persons that result from this process. Additionally, the term may also refer to the academic field that studies the methods, processes, and social institutions involved in teaching and learning. Having a clear idea of what the term means matters when trying to identify educational phenomena, measure educational success, and improve educational practices. The term \"education\" is derived from the Latin words educare, meaning \"to bring up\" and educere, meaning \"to bring forth\".", "title": "Definitions" }, { "paragraph_id": 5, "text": "Some theorists provide precise definitions by identifying the specific features that are exclusive to all forms of education. Education theorist R. S. Peters, for instance, outlines three essential features of education, which include that knowledge and understanding are imparted to the student and that this process is beneficial and done in a morally appropriate manner. Such precise definitions often succeed at characterizing the most typical forms of education. However, they often face criticism because less common types of education occasionally fall outside their parameters. The difficulty of dealing with counterexamples not covered by precise definitions can be avoided by offering less exact definitions based on family resemblance instead. This means that all the forms of education are similar to each other but they need not share a set of essential features that all of them have in common. Some education theorists, such as Keira Sewell and Stephen Newman, hold that the term \"education\" is context-dependent. This implies that its meaning varies depending on the situation in which it is used.", "title": "Definitions" }, { "paragraph_id": 6, "text": "There is disagreement in the academic literature on whether education is an evaluative concept. Thick definitions characterize education as an evaluative concept. They state it is part of the nature of education to be beneficial to the student or lead to some kind of improvement. Different thick definitions express differing views about what kind of improvement is involved. They contrast with thin definitions, which provide a value-neutral explanation of education. A closely related distinction is between descriptive and prescriptive conceptions of education. Descriptive conceptions refer to how the term is commonly used in ordinary language. Prescriptive conceptions define what good education is or how education should be practiced. Many thick and prescriptive conceptions hold that education is an activity that tries to achieve certain aims. Some concentrate on epistemic aims, like knowledge and understanding. Others give more emphasis to the development of skills, like rationality and critical thinking, and character traits, like kindness and honesty.", "title": "Definitions" }, { "paragraph_id": 7, "text": "One approach is to focus on a single overarching purpose of education and see the more specific aims as means to this end. According to one suggestion, socialization is the aim of education. It is realized by transmitting accumulated knowledge from one generation to the next. This process helps the student to function in society as a citizen. 
More person-centered definitions focus on the well-being of the student instead. According to them, education is a process that helps students lead a good life or the life they wish to lead. Various scholars stress the role of critical thinking to distinguish education from indoctrination. They state that mere indoctrination is only interested in instilling beliefs in the student, independent of whether the beliefs are rational; whereas education also fosters the rational ability to critically reflect on and question those beliefs. However, it is not universally accepted that these two phenomena can be clearly distinguished. One reason for this view is that some forms of indoctrination may be necessary in the early stages of education while the child's mind is not yet sufficiently developed. This applies to cases in which young children need to learn something without being able to understand the underlying reasons, like certain safety rules and hygiene practices.", "title": "Definitions" }, { "paragraph_id": 8, "text": "Education can be characterized from the teacher's or the student's perspective. Teacher-centered definitions focus on the perspective and role of the teacher in the transmission of knowledge and skills in a morally appropriate way. Student-centered definitions analyze education from the student's involvement in the learning process and hold that this process transforms and enriches their subsequent experiences. Definitions taking both perspectives into account are also possible. This can take the form of describing the process as the shared experience of a common world. In the shared experience, different aspects of the world are discovered and problems are posed and solved.", "title": "Definitions" }, { "paragraph_id": 9, "text": "There are many classifications of education. One of them depends on the institutional framework and distinguishes between formal, non-formal, and informal education. Another classification includes different levels of education based on factors like the student's age and the complexity of the content. Further categories focus on the topic, the teaching method, the medium used, and the funding.", "title": "Types" }, { "paragraph_id": 10, "text": "The most common division is between formal, non-formal, and informal education. Formal education happens in a complex institutional framework. Such frameworks have a chronological and hierarchical order: the modern schooling system has classes based on the student's age and progress, extending from primary school to university. Formal education is usually controlled and guided by the government. It tends to be compulsory up to a certain age.", "title": "Types" }, { "paragraph_id": 11, "text": "Non-formal and informal education take place outside the formal schooling system. Non-formal education is a middle ground. Like formal education, it is organized, systematic, and carried out with a clear purpose, like tutoring, fitness classes, and the scouting movement. Informal education happens in an unsystematic way through daily experiences and exposure to the environment. Unlike formal and non-formal education, there is usually no designated authority figure responsible for teaching. Informal education takes place in many different settings and situations throughout one's life, usually in a spontaneous way. 
This is how children learn their first language from their parents and how people learn to prepare a dish by cooking together.", "title": "Types" }, { "paragraph_id": 12, "text": "Some theorists distinguish the three types based on the location of learning: formal education takes place in school, non-formal education happens in places that are not regularly visited, like museums, and informal education occurs in places of everyday routines. There are also differences in the source of motivation. Formal education tends to be driven by extrinsic motivation for external rewards. Non-formal and informal education are closely linked to intrinsic motivation because the learning itself is enjoyed. The distinction between the three types is normally clear but some forms of education do not easily fall into one category.", "title": "Types" }, { "paragraph_id": 13, "text": "Formal education plays a central role in modern civilization, though in primitive cultures, most of the education happened on the informal level. This usually meant that there was no distinction between activities focused on education and other activities. Instead, the whole environment acted as a form of school and most adults acted as teachers. Informal education is often not efficient enough to teach large quantities of knowledge. To do so, a formal setting and well-trained teachers are usually required. This was one of the reasons why in the course of history, formal education became more and more important. In this process, the experience of education and the discussed topics became more abstract and removed from daily life while more emphasis was put on grasping general patterns and concepts instead of observing and imitating particular forms of behavior.", "title": "Types" }, { "paragraph_id": 14, "text": "Types of education are often divided into levels or stages. The most influential framework is the International Standard Classification of Education, maintained by the United Nations Educational, Scientific and Cultural Organization (UNESCO). It covers both formal and non-formal education and distinguishes levels based on the student's age, the duration of learning, and the complexity of the discussed content. Further criteria include entry requirements, teacher qualifications, and the intended outcome of successful completion. The levels are grouped into early childhood education (level 0), primary education (level 1), secondary education (levels 2–3), post-secondary non-tertiary education (level 4), and tertiary education (levels 5–8).", "title": "Types" }, { "paragraph_id": 15, "text": "Early childhood education, also known as preschool education or nursery education, is the stage of education that begins with birth and lasts until the start of primary school. It follows the holistic aim of fostering early child development at the physical, mental, and social levels. It plays a key role in socialization and personality development and includes various basic skills in the areas of communication, learning, and problem-solving. This way, it aims to prepare children for their entry into primary education. Preschool education is usually optional but in some countries, such as Brazil, it is mandatory starting from the age of four.", "title": "Types" }, { "paragraph_id": 16, "text": "Primary (or elementary) education usually starts within the ages of five to seven and lasts for four to seven years. 
It does not have any further entry requirements and its main goal is to teach the basic skills in the fields of reading, writing, and mathematics. It also covers the core knowledge in other fields, like history, geography, the sciences, music, and art. A further aim is to foster personal development. Today, primary education is compulsory in almost all countries and over 90% of all primary-school-age children worldwide attend primary school.", "title": "Types" }, { "paragraph_id": 17, "text": "Secondary education is the stage of education following primary education and usually covers the ages of 12 to 18 years. It is commonly divided into lower secondary education (middle school or junior high school) and upper secondary education (high school, senior high school, or college depending on the country). Lower secondary education normally has the completion of primary school as its entry requirement. It aims to extend and deepen the learning outcomes and is more focused on subject-specific curricula and teachers are specialized in only one or a few specific subjects. One of its aims is to familiarize students with the basic theoretical concepts in the different subjects. This helps create a solid basis for lifelong learning. In some cases, it also includes basic forms of vocational training. Lower secondary education is compulsory in many countries in Central and East Asia, Europe, and America. In some countries, it is the last stage of compulsory education. Mandatory lower secondary education is not as prevalent in Arab states, sub-Saharan Africa, and South and West Asia.", "title": "Types" }, { "paragraph_id": 18, "text": "Upper secondary education starts roughly at the age of 15 and aims to provide students with the skills and knowledge needed for employment or tertiary education. Its requirement is usually the completion of lower secondary education. Its subjects are more varied and complex and students can often choose between a few subjects. Its successful completion is commonly tied to a formal qualification in the form of a high school diploma. Some types of education after secondary education do not belong to tertiary education and are categorized as post-secondary non-tertiary education. They are similar in complexity to secondary education but tend to focus more on vocational training to prepare students for the job market.", "title": "Types" }, { "paragraph_id": 19, "text": "In some countries, tertiary education is used as a synonym of higher education, while in others, tertiary education is the wider term. Tertiary education expands upon the foundations of secondary education but has a more narrow and in-depth focus on a specific field or subject. Its completion leads to an academic degree. It can be divided into four levels: short-cycle tertiary, Bachelor's, Master's, and doctoral level education. These levels often form a hierarchical structure with later levels depending on the completion of previous levels. Short-cycle tertiary education focuses on practical matters. It includes advanced vocational and professional training to prepare students for the job market in specialized professions. Bachelor's level education, also referred to as undergraduate education, tends to be longer than short-cycle tertiary education. It is usually offered by universities and results in an intermediary academic certification in the form of a bachelor's degree. Master's level education is more specialized than undergraduate education. 
Many programs require independent research in the form of a master's thesis as a requirement for successful completion. Doctoral level education leads to an advanced research qualification, normally in the form of a doctor's degree, such as a Doctor of Philosophy (PhD). It usually requires the submission of a substantial academic work, such as a dissertation. More advanced levels include post-doctoral studies and habilitation.", "title": "Types" }, { "paragraph_id": 20, "text": "Many other types of education are discussed in the academic literature, like the distinction between traditional and alternative education. Traditional education concerns long-established and mainstream schooling practices. It uses teacher-centered education and takes place in a well-regulated school environment. Regulations cover many aspects of education, such as the curriculum and the timeframe when classes start and end.", "title": "Types" }, { "paragraph_id": 21, "text": "Alternative education is an umbrella term for forms of schooling that differ from the mainstream traditional approach. They may use a different learning environment, teach different subjects, or promote a different teacher-student relationship. Alternative schooling is characterized by voluntary participation, relatively small class and school sizes, and personalized instruction. This often results in a more welcoming and emotionally safe atmosphere. Alternative education encompasses many types like charter schools and special programs for problematic or gifted children. It also includes homeschooling and unschooling. There are many alternative schooling traditions, like Montessori schools, Waldorf schools, Round Square schools, Escuela Nueva schools, free schools, and democratic schools. Alternative education also includes indigenous education, which focuses on the transmission of knowledge and skills from an indigenous heritage and employs methods like narration and storytelling. Further types of alternative schools include gurukul schools in India, madrasa schools in the Middle East, and yeshivas in Jewish tradition.", "title": "Types" }, { "paragraph_id": 22, "text": "Other distinctions between types of education are based on who receives education. Categories by the age of the learner are childhood education, adolescent education, adult education, and elderly education. Special education is education that is specifically adapted to meet the unique needs of students with disabilities. It covers various forms of impairments on the intellectual, social, communicative, and physical levels. It aims to overcome the challenges posed by these impairments. This way, it provides the affected students with access to an appropriate educational structure. When understood in the broadest sense, special education also includes education for very gifted children who need adjusted curricula to reach their fullest potential.", "title": "Types" }, { "paragraph_id": 23, "text": "Some classifications focus on the teaching method. In teacher-centered education, the teacher takes center stage in providing students with information. It contrasts with student-centered education, in which students take on a more active and responsible role in shaping classroom activities. For conscious education, learning and teaching happen with a clear purpose in mind. Unconscious education occurs on its own without being consciously planned or guided. 
This may happen in part through the personality of teachers and adults, which can have indirect effects on the development of the student's personality. Evidence-based education uses well-designed scientific studies to determine which methods of education work best. Its goal is to maximize the effectiveness of educational practices and policies. This is achieved by ensuring that they are informed by the best available empirical evidence. It includes evidence-based teaching, evidence-based learning, and school effectiveness research.", "title": "Types" }, { "paragraph_id": 24, "text": "Autodidacticism is self-education and happens without the guidance of teachers and institutions. It mainly occurs in adult education and is characterized by the freedom to choose what and when to study, which is why it can be a more fulfilling learning experience. The lack of structure and guidance can result in aimless learning and the absence of external feedback may lead autodidacts to develop false ideas and inaccurately assess their learning progress. Autodidacticism is closely related to lifelong education, which is an ongoing learning process throughout a person's entire life.", "title": "Types" }, { "paragraph_id": 25, "text": "Forms of education can also be categorized by the subject and the medium used. Types based on the subject include science education, language education, art education, religious education, and physical education. Special mediums, such as radio or websites, are used in distance education. Examples include e-learning (use of computers), m-learning (use of mobile devices), and online education. They often take the form of open education, in which the courses and materials are made available with a minimal amount of barriers. They contrast with regular classroom or onsite education. Some forms of online education are not open education, such as full online degree programs offered by some universities.", "title": "Types" }, { "paragraph_id": 26, "text": "A further distinction is based on the type of funding. State education, also referred to as public education, is funded and controlled by the government and available to the general public. It normally does not require tuition fees and is thus a form of free education. Private education, by contrast, is funded and managed by private institutions. Private schools often have a more selective admission process and offer paid education by charging tuition fees. A more detailed classification focuses on the social institution responsible for education, like family, school, civil society, state, and church.", "title": "Types" }, { "paragraph_id": 27, "text": "Compulsory education is education that people are legally required to receive. It concerns mainly children who need to visit school up to a certain age. It contrasts with voluntary education, which people pursue by personal choice without a legal requirement.", "title": "Types" }, { "paragraph_id": 28, "text": "Education plays various roles in society, including in social, economic, and personal fields. On a social level, education makes it possible to establish and sustain a stable society. It helps people acquire the basic skills needed to interact with their environment and fulfill their needs and desires. In modern society, this involves a wide range of skills like being able to speak, read, write, solve arithmetic problems, and handle information and communications technology. 
Another key part of socialization is to learn the dominant social and cultural norms and what kinds of behavior are considered appropriate in different contexts. Education enables the social cohesion, stability, and peace needed for people to productively engage in daily business. Socialization happens throughout life but is of special relevance to early childhood education. Education plays a key role in democracies by increasing civic participation in the form of voting and organizing, and through its tendency to promote equal opportunity for all.", "title": "Role in society" }, { "paragraph_id": 29, "text": "On an economic level, people become productive members of society through education by acquiring the technical and analytical skills needed to pursue their profession, produce goods, and provide services to others. In early societies, there was little specialization and each child would generally learn most of the skills that the community required to function. Modern societies are increasingly complex and many professions are only mastered by relatively few people who receive specialized training in addition to general education. Some of the skills and tendencies learned to function in society may conflict with each other and their value depends on the context of their usage. For example, cultivating the tendency to be inquisitive and question established teachings promotes critical thinking and innovation but in some cases, obedience to an authority is required to ensure social stability.", "title": "Role in society" }, { "paragraph_id": 30, "text": "By helping people become productive members of society, education stimulates economic growth and reduces poverty. It helps workers become more skilled and thereby increases the quality of the produced goods and services, which in turn leads to prosperity and increased competitiveness. Public education is often understood as a long-term investment to benefit society as a whole. The rate of return is especially high for investments in primary education. Besides increasing economic prosperity, it can also lead to technological and scientific advances as well as decrease unemployment while promoting social equity.", "title": "Role in society" }, { "paragraph_id": 31, "text": "Education can prepare a country to adapt to changes and successfully face new challenges. It can help raise awareness and contribute to the solution of contemporary global problems, such as climate change, sustainability, and the widening inequalities between the rich and the poor. By making students aware of how their lives and actions affect others, it may inspire some to work toward realizing a more sustainable and fair world. This way, education serves not just the purpose of maintaining the societal status quo, but can also be an instrument of social development. That applies also to changing circumstances in the economic sector. For example, technological advances, particularly increased automation, are accompanied by new demands on the workforce, which education can help address. Changing circumstances may render currently taught skills and knowledge redundant while shifting the importance to other areas. 
Education can be used to prepare people for such changes by adjusting the curriculum, introducing subjects like digital literacy, promoting skills in handling new technologies, and including new forms of education such as massive open online courses.", "title": "Role in society" }, { "paragraph_id": 32, "text": "On a more individual level, education promotes personal development. This can include factors such as learning new skills, developing talents, fostering creativity, and increasing self-knowledge as well as improving problem-solving and decision-making abilities. Education also has positive effects on health and well-being. The social importance of education is recognized by the annual International Day of Education on January 24. The United Nations declared the year 1970 the International Education Year.", "title": "Role in society" }, { "paragraph_id": 33, "text": "Organized institutions play a key role in various aspects of education. Institutions like schools, universities, teacher training institutions, and ministries of education make up the education sector. They interact both with each other and with other stakeholders, such as parents, local communities, and religious groups. Further stakeholders are Non-governmental organizations, professionals in healthcare, law enforcement, media platforms, and political leaders. Many people are directly involved in the education sector. They include students, teachers, and school principals as well as school nurses and curriculum developers.", "title": "Role of institutions" }, { "paragraph_id": 34, "text": "Various aspects of formal education are regulated by the policies of governmental institutions. They determine at what age children need to attend school and at what times classes are held as well as issues pertaining to the school environment, like infrastructure. Regulations also cover the exact qualifications and requirements that teachers need to fulfill. An important aspect of education policy concerns the curriculum used for teaching at schools, colleges, and universities. A curriculum is a plan of instruction or a program of learning that guides students to achieve their educational goals. The topics are usually selected based on their importance and depend on the type of school. The goals of public school curricula are usually to offer a comprehensive and well-rounded education while vocational training focuses more on specific practical skills within a field. The curricula also cover various aspects besides the topic to be discussed, such as the teaching method, the objectives to be reached, and the standards for assessing progress. By determining the curricula, governmental institutions have a strong impact on what knowledge and skills are transmitted to the students. Examples of governmental institutions include the Ministry of Education in India, the Department of Basic Education in South Africa, and the Secretariat of Public Education in Mexico.", "title": "Role of institutions" }, { "paragraph_id": 35, "text": "International organizations also play a key role in education. For instance, UNESCO is an intergovernmental organization that promotes education in many ways. One of its activities is to advocate education policies, like the treaty Convention on the Rights of the Child, which states that education is a human right of all children and young people. Another was the Education for All initiative. 
It aimed to offer basic education to all children, adolescents, and adults by the year 2015 and was later replaced by the initiative Sustainable Development Goals as goal 4. Related policies include the Convention against Discrimination in Education and the Futures of Education initiative.", "title": "Role of institutions" }, { "paragraph_id": 36, "text": "Some influential organizations are not intergovernmental, but non-governmental. For example, the International Association of Universities promotes collaboration and the exchange of knowledge between colleges and universities around the world, while the International Baccalaureate offers international diploma programs. Institutions, like the Erasmus Programme, facilitate student exchanges between countries, while initiatives such as the Fulbright Program provide a similar service for teachers.", "title": "Role of institutions" }, { "paragraph_id": 37, "text": "Several factors influence educational achievement. They include psychological factors, which concern the student as an individual, and sociological factors, which pertain to the student's social environment. Further factors are access to educational technology, teacher quality, and parent involvement. Many of these factors overlap and influence each other.", "title": "Factors of educational success" }, { "paragraph_id": 38, "text": "On a psychological level, relevant factors include motivation, intelligence, and personality. Motivation is the internal force propelling people to engage in learning. Motivated students are more likely to interact with the content to be learned by participating in classroom activities like discussions, which often results in a deeper understanding of the subject. Motivation can also help students overcome difficulties and setbacks. An important distinction is between intrinsic and extrinsic motivation. Intrinsically motivated students are driven by an interest in the subject and the learning experience itself. Extrinsically motivated students seek external rewards like good grades and recognition from peers. Intrinsic motivation tends to be more beneficial by leading to increased creativity and engagement as well as long-term commitment. Educational psychologists try to discover how to increase motivation. This can be achieved, for instance, by encouraging some competition among students while ensuring a balance of positive and negative feedback in the form of praise and criticism.", "title": "Factors of educational success" }, { "paragraph_id": 39, "text": "Intelligence is another important factor in how people respond to education. It is a mental quality linked to the ability to learn from experience, to understand, and to employ knowledge and skills to solve problems. Those who have higher scores in intelligence metrics tend to perform better at school and go on to higher levels of education. Intelligence is often primarily associated with the so-called IQ, a standardized numerical metric for assessing intelligence. However, it has been argued that there different types of intelligences pertaining to distinct areas. According to psychologist Howard Gardner, they can be distinguished into areas like mathematics, logic, spatial cognition, language, and music. Further types affect how a person interacts with other people and with themselves. 
These forms are largely independent of each other, meaning that someone may excel at one type while scoring low on another.", "title": "Factors of educational success" }, { "paragraph_id": 40, "text": "A closely related factor concerns learning styles, which are preferred forms of acquiring knowledge and skills. According to proponents of learning style theory, students with an auditory learning style find it easy to follow spoken lectures and discussions while visual learners benefit if information is presented visually in diagrams and videos. For efficient learning, it may be beneficial to include a wide variety of learning modalities. The learner's personality may also affect educational achievement. For example, the features of conscientiousness and openness to experience from the Big Five personality traits are linked to academic success. Further mental factors include self-efficacy, self-esteem, and metacognitive abilities.", "title": "Factors of educational success" }, { "paragraph_id": 41, "text": "Sociological factors focus not on psychological attributes of learners but on their environment and position in society. They include socioeconomic status, ethnicity, and cultural background, as well as gender. They are of interest to researchers since they are associated with inequality and discrimination. For this reason, they play a key role in policy-making in attempts to mitigate their effects.", "title": "Factors of educational success" }, { "paragraph_id": 42, "text": "Socioeconomic status depends on income but includes other factors, such as financial security, social status, and social class, as well as quality of life attributes. Low socioeconomic status affects educational success in various ways. It is linked to slower cognitive development in language and memory as well as higher dropout rates. Poor families may not have enough money to meet the basic nutritional needs of their children, causing poor development. They may also lack the means to invest in educational resources like stimulating toys, books, and computers. Additionally, they may be unable to afford tuition at prestigious schools and are more likely to attend schools in poorer areas. Such schools tend to offer lower standards of teaching because of teacher shortages or because they lack educational materials and facilities, like libraries. Poor parents may also be unable to afford private lessons if their children fall behind. In some cases, students from an economically disadvantaged background are forced to drop out of school to provide income to their families. They also have less access to information on higher education and may face additional difficulties in securing and repaying student loans. Low socioeconomic status also has many indirect negative effects by being linked to lower physical and mental health. Due to these factors, social inequalities on the level of the parents are often reproduced in the children.", "title": "Factors of educational success" }, { "paragraph_id": 43, "text": "Ethnic background is linked to cultural differences and language barriers, which make it more difficult for students to adapt to the school environment and follow classes. Additional factors are explicit and implicit biases and discrimination toward ethnic minorities. This may affect the students' self-esteem and motivation as well as their access to educational opportunities.
For example, teachers may hold stereotypical views even if they are not overtly racist, which can lead them to grade comparable performances differently based on the child's ethnicity.", "title": "Factors of educational success" }, { "paragraph_id": 44, "text": "Historically, gender has been a central factor in education since the roles of men and women were defined differently in many societies. Education tended to strongly favor men, who were expected to provide for the family. Women, by contrast, were expected to manage the household and rear children, which barred them from most educational opportunities. While these inequalities have lessened in most modern societies, there are still gender differences in education. Among other things, this concerns biases and stereotypes linked to the role of gender in education. They affect subjects like science, technology, engineering, and mathematics, which are often presented as male fields. This discourages female students from pursuing them. In various cases, discrimination based on gender and social factors happens openly as part of official educational policy, such as the severe restrictions on female education instituted by the Taliban in Afghanistan and the school segregation of migrants and locals in urban China under the hukou system.", "title": "Factors of educational success" }, { "paragraph_id": 45, "text": "One aspect shared by many social factors is the expectations associated with stereotypes. They work both on an external level, based on how other people react to a person belonging to a certain group, and on an internal level, based on how the person internalizes them and acts accordingly. In this sense, the expectations may turn into self-fulfilling prophecies by causing the educational outcomes they anticipate. This can happen both for positive and negative stereotypes.", "title": "Factors of educational success" }, { "paragraph_id": 46, "text": "Technology plays another significant role in educational success. Educational technology is commonly associated with the use of modern digital devices, like computers. But understood in the broadest sense, it involves a wide range of resources and tools for learning, including basic aids that do not involve the use of machines, like regular books and worksheets.", "title": "Factors of educational success" }, { "paragraph_id": 47, "text": "Educational technology can benefit learning in various ways. In the form of media, it often takes the role of the primary supplier of information in the classroom. This means that the teacher can focus their time and energy on other tasks, like planning the lesson and guiding students as well as assessing educational performance. Educational technology can also make information easier to understand by presenting it using graphics and videos rather than through mere text. In this regard, interactive elements may be used to make the learning experience more engaging in the form of educational games. Technology can be employed to make educational materials accessible to many people, like when using online resources. It additionally facilitates collaboration between students and communication with teachers. Lack of educational technology affects developing countries in particular.
Many efforts are made to address it through organizations such as the One Laptop per Child initiative, the African Library Project, and Pratham.", "title": "Factors of educational success" }, { "paragraph_id": 48, "text": "A closely related issue concerns the effects of school infrastructure. It includes physical aspects of the school, like its location and size as well as the available school facilities and equipment. A healthy and safe environment, well-maintained classrooms, and suitable classroom furniture as well as the availability of a library and a canteen tend to contribute to educational success. The quality of the teacher also has an important impact on student achievement. Skilled teachers know how to motivate and inspire students and are able to adjust their instruction to the students' abilities and needs. Important in this regard are the teacher's own education and training as well as their past teaching experience. A meta-analysis by Engin Karadağ et al. concludes that, compared to other influences, factors related to the school and the teacher have the biggest impact on educational success.", "title": "Factors of educational success" }, { "paragraph_id": 49, "text": "An additional factor boosting student achievement is parent involvement. It can make children more motivated and invested if they are aware that their parents care about their educational efforts. This tends to lead to increased self-esteem, better attendance rates, and more constructive behavior at school. Parent involvement also includes communication with teachers and other school staff to make other parties aware of current issues and how they may be resolved. Further relevant factors sometimes discussed in the academic literature include historical, political, demographic, religious, and legal aspects.", "title": "Factors of educational success" }, { "paragraph_id": 50, "text": "The main discipline investigating education is called education studies, also referred to as education sciences. It tries to determine how people transmit and acquire knowledge by studying the methods and forms of education. It is interested in education's aims, effects, and value as well as the cultural, societal, governmental, and historical contexts that shape education. Education theorists integrate insights from many other fields of inquiry, including philosophy, psychology, sociology, economics, history, politics, and international relations. Because of these influences, some theorists claim that education studies is not an independent academic discipline like physics or history since its method and subject are not as clearly defined. Education studies differs from regular training programs, such as teacher training, since its focus on academic analysis and critical reflection goes beyond the skills needed to be a good teacher. It is not restricted to the topic of formal education but examines all forms and aspects of education.", "title": "Education studies" }, { "paragraph_id": 51, "text": "Various research methods are used to study educational phenomena. They can roughly be divided into quantitative, qualitative, and mixed-methods approaches. Quantitative research emulates the methods found in the natural sciences by using precise numerical measurements to gather data from many observations and employs statistical tools to analyze it. It aims to arrive at an objective and impersonal understanding.
Qualitative research usually has a much smaller sample size and tries to gain in-depth insight into more subjective and personal factors, like how different actors experience the process of education. Mixed-methods research aims to combine data gathered from both approaches to arrive at a balanced and comprehensive understanding. Data can be collected in various ways, like using direct observation or test scores as well as interviews and questionnaires. Some research projects study basic factors affecting all forms of education while others concentrate on one specific application. Some investigations look for solutions to concrete problems while others examine the effectiveness of educational projects and policies.", "title": "Education studies" }, { "paragraph_id": 52, "text": "Education studies encompasses various subfields like philosophy of education, pedagogy, psychology of education, sociology of education, economics of education, comparative education, and history of education. The philosophy of education is the branch of applied philosophy that examines many of the basic assumptions underlying the theory and practice of education. It studies education both as a process and as a discipline while trying to provide exact definitions of its nature and how it differs from other phenomena. It further examines the purpose of education, its different types, and how to conceptualize teachers, students, and their relation. It includes educational ethics, which investigates the moral implications of education; for example, what ethical principles direct it and how teachers should apply them to specific cases. The philosophy of education has a long history and was discussed in ancient Greek philosophy.", "title": "Education studies" }, { "paragraph_id": 53, "text": "The term \"pedagogy\" is sometimes used as a synonym for education studies, but when understood in a more restricted sense, it refers to the subfield interested in teaching methods. It studies how the aims of education, like the transmission of knowledge or fostering skills and character traits, can be realized. It is interested in the methods and practices used for teaching in regular schools. Some definitions restrict it to this domain, but in a wider sense, it covers all types of education, including forms of teaching outside schools. In this general sense, it explores how teachers can bring about experiences in learners to advance their understanding of the studied topic and how the learning itself takes place.", "title": "Education studies" }, { "paragraph_id": 54, "text": "The psychology of education studies how education happens on the mental level, specifically how new knowledge and skills are acquired as well as how personal growth takes place. It examines the factors responsible for successful education and how these factors may differ between individuals. Important factors include intelligence, motivation, and personality. A central topic in this field is the interplay between nature and nurture and how it affects educational success. Influential psychological theories of education are behaviorism, cognitivism, and constructivism. Closely related fields are the neurology of education and educational neuroscience, which are interested in the neuropsychological processes and changes brought about through learning.", "title": "Education studies" }, { "paragraph_id": 55, "text": "The sociology of education is concerned with how social factors influence education and how it leads to socialization.
Often-discussed factors are socioeconomic status, ethnicity, and gender. The sociology of education studies how these factors, together with the dominant ideology in society, affect what kind of education is available to a person and how successful they are. Closely related questions include how education affects different groups in society and how educational experiences can form someone's personal identity. The sociology of education is specifically interested in aspects that result in inequalities. Its insights are relevant to education policy for trying to identify and mitigate factors that cause inequality. Two influential schools of thought are consensus theory and conflict theory. Consensus theorists hold that education benefits society as a whole by preparing people for their roles. Conflict theorists have a more negative outlook on the resulting inequalities and see education as a force used by the ruling class to promote its own agenda.", "title": "Education studies" }, { "paragraph_id": 56, "text": "The economics of education is the field of inquiry studying how education is produced, distributed, and consumed. It tries to determine how resources should be used to improve education, for example, by examining to what extent the quality of teachers is increased by raising their salary. Other questions are how smaller class sizes affect educational success and how to invest in new educational technologies. In this way, the economics of education helps policy-makers decide how to distribute the limited resources most efficiently to benefit society as a whole. It also tries to understand what long-term role education plays in the economy of a country by providing a highly skilled labor force and increasing its competitiveness. A closely related issue concerns the economic advantages and disadvantages of different systems of education.", "title": "Education studies" }, { "paragraph_id": 57, "text": "Comparative education is the discipline that examines and contrasts systems of education. Comparisons can happen from a general perspective or focus on specific factors, like social, political, or economic aspects. Comparative education is often applied to different countries to assess the similarities and differences of their educational institutions and practices as well as to evaluate the consequences of the distinct approaches. It can be used to learn from other countries which education policies work and how one's own system of education may be improved. This practice is known as policy borrowing and comes with many difficulties since the success of policies can depend to a large degree on the social and cultural context of students and teachers. A closely related and controversial topic concerns the question of whether the educational systems of developed countries are superior and should be exported to less developed countries. Other key topics are the internationalization of education and the role of education in transitioning from an authoritarian regime to a democracy.", "title": "Education studies" }, { "paragraph_id": 58, "text": "The history of education examines the evolution of educational practices, systems, and institutions. It discusses various key processes, their possible causes and effects, and their relations to each other.", "title": "Education studies" }, { "paragraph_id": 59, "text": "A central topic in education studies concerns the question of how people should be educated and what goals should guide this process. Many aims of education have been suggested.
On a basic level, education is about the acquisition of knowledge and skills but may also include personal development and fostering of character traits. Common suggestions encompass features like curiosity, creativity, rationality, and critical thinking as well as the tendency to think, feel, and act morally. Some scholars focus on liberal values linked to freedom, autonomy, and open-mindedness. Others prioritize qualities like obedience to authority, ideological purity, piety, and religious faith. An important discussion in this regard is about the role of critical thinking and the extent to which indoctrination forms part of education. On a social level, it is often stressed that education should socialize people. A controversial issue concerns who primarily benefits from education: the educated person, society as a whole, or dominant groups within society.", "title": "Education studies" }, { "paragraph_id": 60, "text": "Educational ideologies are systems of basic philosophical assumptions and principles that can be used to interpret, understand, and evaluate existing educational practices and policies. They cover various additional issues besides the aims of education, like what topics are learned and how the learning activity is structured. Other themes include the role of the teacher, how educational progress should be assessed, and how institutional frameworks and policies should be structured. There are many ideologies and they often overlap in various ways. Teacher-centered ideologies place the main emphasis on the teacher's role in transmitting knowledge to students while student-centered ideologies give a more active role to the students in the process. Process-based ideologies focus on what the processes of teaching and learning should be like. They contrast with product-based ideologies, which discuss education from the perspective of the result to be achieved. Another classification contrasts progressivism with more traditional and conservative ideologies. Further categories are humanism, romanticism, essentialism, encyclopaedism, and pragmatism. There are also distinct authoritarian and democratic ideologies.", "title": "Education studies" }, { "paragraph_id": 61, "text": "Learning theories try to explain how learning happens. Influential theories are behaviorism, cognitivism, and constructivism. Behaviorism understands learning as a change in behavior in response to environmental stimuli. This happens by presenting the learner with a stimulus, associating this stimulus with the desired response, and solidifying this stimulus-response pair. Cognitivism sees learning as a change in cognitive structures and focuses on the mental processes involved in storing, retrieving, and processing information. Constructivism holds that learning is based on the personal experience of each individual and puts more emphasis on social interactions and how they are interpreted by the learner. These theories have important implications for how to teach. For example, behaviorists tend to focus on drills while cognitivists may advocate the use of mnemonics and constructivists tend to employ collaborative learning strategies.", "title": "Education studies" }, { "paragraph_id": 62, "text": "Various theories suggest that learning is more efficient when it is based on personal experience. An additional factor is to aim at a deeper understanding by connecting new to pre-existing knowledge rather than merely memorizing a list of unrelated facts.
An influential developmental theory of learning was proposed by psychologist Jean Piaget, who outlined four stages of learning through which children pass on their way to adulthood: the sensorimotor, the pre-operational, the concrete operational, and the formal operational stage. They correspond to different levels of abstraction, with early stages focusing more on simple sensory and motor activities, while later stages include more complex internal representations and information processing in the form of logical reasoning.", "title": "Education studies" }, { "paragraph_id": 63, "text": "The teaching method concerns the way the content is presented by the teacher, for example, whether group work is used instead of a focus on individual learning. There are many teaching methods available. Which one is most efficient in a given case depends on various factors, like the subject matter as well as the learner's age and competence level. This is reflected in the fact that modern school systems organize students by age, competence, specialization, and native language into different classes to ensure a productive learning process. Different subjects frequently use very different approaches. Language education often focuses on verbal learning while mathematical education is about abstract and symbolic thinking together with deductive reasoning. One central requirement for teaching methodologies is to ensure that the learner remains motivated because of interest and curiosity or through external rewards.", "title": "Education studies" }, { "paragraph_id": 64, "text": "Further aspects of teaching methods include the instructional media used, such as books, worksheets, and audio-visual recordings, and having some form of test or assessment to evaluate the learning progress. An important pedagogical aspect in many forms of modern education is that each lesson is part of a larger educational enterprise governed by a syllabus, which often covers several months or years. According to Herbartianism, teaching is divided into phases. The initial phase consists of preparing the student's mind for new information. Next, new ideas are first presented to the learner and then associated with ideas with which the learner is already familiar. In later phases, the understanding shifts to a more general level behind the specific instances and the ideas are then put into concrete practice.", "title": "Education studies" }, { "paragraph_id": 65, "text": "The history of education studies the processes, methods, and institutions involved in teaching and learning. It tries to explain how they have interacted with each other and shaped educational practice until the present day. Education in prehistory took place as a form of enculturation and focused on practical knowledge and skills relevant to everyday concerns, for example, in relation to food, clothing, shelter, and protection. There were no formal schools or specialized teachers; most adults in the community performed that role, and learning happened informally during everyday activities, for example, when children observed and imitated their elders. For these oral societies, storytelling played a key role in transferring cultural and religious ideas from one generation to the next.
With the emergence of agriculture around 9000 BCE, a slow shift towards more specialized education occurred as people formed larger groups and more complex artisanal and technical skills were needed.", "title": "History" }, { "paragraph_id": 66, "text": "In the 4th millennium BCE and the following millennia, a major shift in educational practices took place with the invention of writing in regions such as Mesopotamia, ancient Egypt, the Indus Valley, and ancient China. This development had a significant influence on the history of education as a whole. Through writing, it was possible to store, preserve, and communicate information. This facilitated various subsequent developments; for example, the creation of educational tools, like textbooks, and the formation of institutions, like schools.", "title": "History" }, { "paragraph_id": 67, "text": "Another key aspect of ancient education was the establishment of formal education. This became necessary since the amount of knowledge grew as civilizations evolved and informal education proved insufficient to transmit all requisite knowledge between generations. Teachers would act as specialists to impart knowledge, and education became more abstract and further removed from daily life. Formal education was still quite rare in ancient societies and was restricted to the intellectual elites. It covered fields like reading and writing, record keeping, leadership, civic and political life, religion, and technical skills associated with specific professions. Formal education introduced a new way of teaching that gave more emphasis to discipline and drills than the earlier informal modes of education. Two often-discussed achievements of ancient education are the establishment of Plato's Academy in Ancient Greece, which is sometimes considered the first institute of higher learning, and the creation of the Great Library of Alexandria in Ancient Egypt as one of the most prestigious libraries of the ancient world.", "title": "History" }, { "paragraph_id": 68, "text": "Many aspects of education in the medieval period were shaped by religious traditions. In Europe, the Catholic Church wielded a significant influence over formal education. In the Arab world, the newly founded religion of Islam spread rapidly and led to various educational developments during the Islamic Golden Age, for example, by integrating classical and religious knowledge and by establishing madrasa schools. In Jewish communities, yeshivas were established as institutions dedicated to the study of religious texts and Jewish law. In China, an expansive state educational and exam system influenced by Confucian teachings was established. New complex societies began to evolve in other regions, such as Africa, the Americas, Northern Europe, and Japan. Some incorporated preexisting educational practices while others developed new traditions. Additionally, this period saw the establishment of various institutes of higher education and research. The first universities in Europe were the University of Bologna, the University of Paris, and Oxford University. Other influential centers of higher learning were the Al-Qarawiyyin University in Morocco, the Al-Azhar University in Egypt, and the House of Wisdom in Iraq. Another key development was the creation of guilds, which were associations of skilled craftsmen and merchants who controlled the practice of their trades.
They were responsible for vocational education, and new members had to pass through different stages on their way to becoming masters.", "title": "History" }, { "paragraph_id": 69, "text": "Starting in the early modern period, education in Europe during the Renaissance slowly began to shift from a religious approach towards one which was more secular. This development was tied to an increased appreciation of the importance of education and a broadened range of topics, including a revived interest in ancient literary texts and educational programs. The turn toward secularization was accelerated during the Age of Enlightenment starting in the 17th century, which emphasized the role of reason and the empirical sciences. European colonization affected education in the Americas through Christian missionary initiatives. In China, the state educational system was further expanded and focused more on the teachings of neo-Confucianism. In the Islamic world, the reach of formal education increased and remained under the influence of religion. A key development in the early modern period was the invention and popularization of the printing press in the middle of the 15th century, which had a profound impact on general education. It significantly reduced the cost of producing books, which had previously been written by hand, and thereby increased the dissemination of written documents, including new forms like newspapers and pamphlets. The increased availability of written media had a major influence on the general literacy of the population.", "title": "History" }, { "paragraph_id": 70, "text": "These changes prepared the way for the rise of public education in the 18th and 19th centuries. This period saw the establishment of publicly funded schools with the aim of providing education for all. This contrasts with earlier periods when formal education was primarily provided by private schools, religious institutions, and individual tutors. The Aztec civilization was an exception in this regard since formal education was mandatory for the youth regardless of social class as early as the 14th century. Closely related changes were to make education compulsory and free of charge for all children up to a certain age. Initiatives to promote public education and universal access to education made significant progress in the 20th and the 21st centuries and were promoted by intergovernmental organizations like the UN. Examples include the Universal Declaration of Human Rights, the Convention on the Rights of the Child, the Education for All initiative, the Millennium Development Goals, and the Sustainable Development Goals. These efforts resulted in a steady rise of all forms of education but affected primary education in particular. In 1970, 28% of all primary-school-age children worldwide did not attend school; by 2015, this number had dropped to 9%.", "title": "History" }, { "paragraph_id": 71, "text": "The establishment of public education was accompanied by the introduction of standardized curricula for public schools as well as standardized tests to assess the student's progress. Contemporary examples include the Test of English as a Foreign Language, which is a globally used test to assess English language proficiency of non-native English speakers, and the Programme for International Student Assessment, which evaluates education systems worldwide based on how 15-year-old students perform in the fields of reading, mathematics, and science.
Similar changes also affected teachers by putting in place institutions and norms to guide and oversee teacher training, such as certification requirements for teaching at public schools.", "title": "History" }, { "paragraph_id": 72, "text": "A further influence on contemporary education was the emergence of new educational technologies. The widespread availability of computers and the internet dramatically increased access to educational resources and made new types of education possible, such as online education. This was of particular relevance during the COVID-19 pandemic, when schools around the world closed for extended periods and many offered remote learning through video conferencing or pre-recorded video lessons to continue instruction. Another contemporary factor is the increased globalization and internationalization of education.", "title": "History" } ]
Education is the transmission of knowledge, skills, and character traits and comes in many forms. Formal education happens in a complex institutional framework, like public schools. Non-formal education is also structured but takes place outside the formal schooling system, while informal education is unstructured learning through daily experiences. Formal and non-formal education are divided into levels that include early childhood education, primary education, secondary education, and tertiary education. Other classifications focus on the teaching method, like teacher-centered and student-centered education, and on the subject, like science education, language education, and physical education. The term "education" can also refer to the mental states and qualities of educated people and the academic field studying educational phenomena. The precise definition of education is disputed, and there are disagreements about what the aims of education are and to what extent education differs from indoctrination in its fostering of critical thinking. These disagreements affect how to identify, measure, and improve forms of education. Fundamentally, education socializes children into society by teaching cultural values and norms. It equips them with the skills needed to become productive members of society. This way, it stimulates economic growth and raises awareness of local and global problems. Organized institutions affect many aspects of education. For example, governments set education policies to determine when school classes happen, what is taught, and who can or must attend. International organizations, like UNESCO, have been influential in promoting primary education for all children. Many factors influence whether education is successful. Psychological factors include motivation, intelligence, and personality. Social factors, like socioeconomic status, ethnicity, and gender, are often linked to discrimination. Further factors include access to educational technology, teacher quality, and parent involvement. The main field investigating education is called education studies. It examines what education is, what aims and effects it has, and how to improve it. Education studies has many subfields, like philosophy, psychology, sociology, and economics of education. It also discusses comparative education, pedagogy, and the history of education. In prehistory, education happened informally through oral communication and imitation. With the rise of ancient civilizations, writing was invented, and the amount of knowledge grew. This caused a shift from informal to formal education. Initially, formal education was mainly available to elites and religious groups. The invention of the printing press in the 15th century made books more widely available. This increased general literacy. Beginning in the 18th and 19th centuries, public education became more important. This development led to the worldwide process of making primary education available to all, free of charge, and compulsory up to a certain age.
2001-11-07T14:43:15Z
2023-12-29T08:27:14Z
[ "Template:Pp-move", "Template:Good article", "Template:Cite journal", "Template:Redirect", "Template:Reflist", "Template:Library resources box", "Template:Lang", "Template:Main", "Template:Education", "Template:Use dmy dates", "Template:Social sciences", "Template:Multiple image", "Template:Div col", "Template:Efn", "Template:Cite web", "Template:Curlie", "Template:Pp", "Template:Div col end", "Template:Refbegin", "Template:Cite book", "Template:Cite news", "Template:Refend", "Template:Annotated link", "Template:Use American English", "Template:Notelist", "Template:Multiref", "Template:Harvnb", "Template:Subject bar", "Template:Authority control", "Template:Short description" ]
https://en.wikipedia.org/wiki/Education
9,253
Encyclopedia
An encyclopedia (American English) or encyclopædia (British English) is a reference work or compendium providing summaries of knowledge either general or special to a particular field or discipline. Encyclopedias are divided into articles or entries that are arranged alphabetically by article name or by thematic categories, or else are hyperlinked and searchable. Encyclopedia entries are longer and more detailed than those in most dictionaries. Generally speaking, encyclopedia articles focus on factual information concerning the subject named in the article's title; this is unlike dictionary entries, which focus on linguistic information about words, such as their etymology, meaning, pronunciation, use, and grammatical forms. Encyclopedias have existed for around 2,000 years and have evolved considerably during that time as regards language (written in a major international or a vernacular language), size (few or many volumes), intent (presentation of a global or a limited range of knowledge), cultural perspective (authoritative, ideological, didactic, utilitarian), authorship (qualifications, style), readership (education level, background, interests, capabilities), and the technologies available for their production and distribution (hand-written manuscripts, small or large print runs, Internet). As a valued source of reliable information compiled by experts, printed versions found a prominent place in libraries, schools and other educational institutions. The appearance of digital and open-source versions in the 21st century, such as Wikipedia, has vastly expanded the accessibility, authorship, readership, and variety of encyclopedia entries. Indeed, the purpose of an encyclopedia is to collect knowledge disseminated around the globe; to set forth its general system to the men with whom we live, and transmit it to those who will come after us, so that the work of preceding centuries will not become useless to the centuries to come; and so that our offspring, becoming better instructed, will at the same time become more virtuous and happy, and that we should not die without having rendered a service to the human race in the future years to come. Diderot The word encyclopedia (encyclo|pedia) comes from the Koine Greek ἐγκύκλιος παιδεία, transliterated enkyklios paideia, meaning 'general education' from enkyklios (ἐγκύκλιος), meaning 'circular, recurrent, required regularly, general' and paideia (παιδεία), meaning 'education, rearing of a child'; together, the phrase literally translates as 'complete instruction' or 'complete knowledge'. However, the two separate words were reduced to a single word due to a scribal error by copyists of a Latin manuscript edition of Quintilian in 1470. The copyists took this phrase to be a single Greek word, enkyklopaedia, with the same meaning, and this spurious Greek word became the Neo-Latin word encyclopaedia, which in turn came into English. Because of this compounded word, readers from the fifteenth century onward have often, and incorrectly, thought that the Roman authors Quintilian and Pliny described an ancient genre. The modern encyclopedia evolved from the dictionary in the 18th century; this lineage can be seen in the alphabetical order of print encyclopedias. Historically, both encyclopedias and dictionaries have been compiled by well-educated, well-informed content experts, but they are significantly different in structure. A dictionary is a linguistic work which primarily focuses on alphabetical listing of words and their definitions.
Synonymous words and those related by the subject matter are to be found scattered around the dictionary, giving no obvious place for in-depth treatment. Thus, a dictionary typically provides limited information, analysis or background for the word defined. While it may offer a definition, it may leave the reader lacking in understanding the meaning, significance or limitations of a term, and how the term relates to a broader field of knowledge. To address those needs, an encyclopedia article is typically not limited to simple definitions, and is not limited to defining an individual word, but provides a more extensive meaning for a subject or discipline. In addition to defining and listing synonymous terms for the topic, the article is able to treat the topic's more extensive meaning in more depth and convey the most relevant accumulated knowledge on that subject. An encyclopedia article also often includes many maps and illustrations, as well as bibliography and statistics. An encyclopedia is, theoretically, not written in order to convince, although one of its goals is indeed to convince its reader of its own veracity. Wikipedia co-founder Jimmy Wales has said that the goal of an encyclopedia should be to provide "the sum of all human knowledge, but sum meaning summary." In addition, sometimes books or reading lists are compiled from a compendium of articles (either wholly or partially taken) from a specific encyclopedia. There are four major elements that define an encyclopedia: its subject matter, its scope, its method of organization, and its method of production. Some works entitled "dictionaries" are actually similar to encyclopedias, especially those concerned with a particular field (such as the Dictionary of the Middle Ages, the Dictionary of American Naval Fighting Ships, and Black's Law Dictionary). The Macquarie Dictionary, Australia's national dictionary, became an encyclopedic dictionary after its first edition in recognition of the use of proper nouns in common communication, and the words derived from such proper nouns. There are some broad differences between encyclopedias and dictionaries. Most noticeably, encyclopedia articles are longer, fuller and more thorough than entries in most general-purpose dictionaries. There are differences in content as well. Generally speaking, dictionaries provide linguistic information about words themselves, while encyclopedias focus more on the things for which those words stand. Thus, while dictionary entries are inextricably fixed to the word described, encyclopedia articles can be given a different entry name. As such, dictionary entries are not fully translatable into other languages, but encyclopedia articles can be. In practice, however, the distinction is not concrete, as there is no clear-cut difference between factual, "encyclopedic" information and linguistic information such as appears in dictionaries. Thus encyclopedias may contain material that is also found in dictionaries, and vice versa. In particular, dictionary entries often contain factual information about the thing named by the word. The earliest encyclopedic work to have survived to modern times is the Naturalis Historia of Pliny the Elder, a Roman statesman living in the 1st century AD. He compiled a work of 37 chapters covering natural history, architecture, medicine, geography, geology, and all aspects of the world around him.
This work became very popular in Antiquity, was one of the first classical manuscripts to be printed in 1470, and has remained popular ever since as a source of information on the Roman world, and especially Roman art, Roman technology and Roman engineering. The Spanish scholar Isidore of Seville was the first Christian writer to try to compile a summa of universal knowledge, the Etymologiae (c. 600–625), also known by classicists as the Origines (abbreviated Orig.). This encyclopedia—the first such Christian epitome—formed a huge compilation of 448 chapters in 20 books based on hundreds of classical sources, including the Naturalis Historia. Of the Etymologiae in its time it was said quaecunque fere sciri debentur, "practically everything that it is necessary to know". Among the areas covered were: grammar, rhetoric, mathematics, geometry, music, astronomy, medicine, law, the Catholic Church and heretical sects, pagan philosophers, languages, cities, animals and birds, the physical world, geography, public buildings, roads, metals, rocks, agriculture, ships, clothes, food, and tools. Another Christian encyclopedia was the Institutiones divinarum et saecularium litterarum of Cassiodorus (543–560) dedicated to the Christian divinity and to the seven liberal arts. The Suda, a massive 10th-century Byzantine encyclopedia, had 30,000 entries, many drawing from ancient sources that have since been lost, and often derived from medieval Christian compilers. The text was arranged alphabetically with some slight deviations from common vowel order and place in the Greek alphabet. From India, the Siribhoovalaya (Kannada: ಸಿರಿಭೂವಲಯ), dated between 800 AD and the 15th century, is a work of Kannada literature written by Kumudendu Muni, a Jain monk. It is unique because rather than employing alphabets, it is composed entirely in Kannada numerals. Many philosophies which existed in the Jain classics are eloquently and skillfully interpreted in the work. The enormous encyclopedic work in China of the Four Great Books of Song, compiled by the 11th century during the early Song dynasty (960–1279), was a massive literary undertaking for the time. The last encyclopedia of the four, the Prime Tortoise of the Record Bureau, amounted to 9.4 million Chinese characters in 1,000 written volumes. There were many great encyclopedists throughout Chinese history, including the scientist and statesman Shen Kuo (1031–1095) with his Dream Pool Essays of 1088; the statesman, inventor, and agronomist Wang Zhen (active 1290–1333) with his Nong Shu of 1313; and Song Yingxing (1587–1666) with his Tiangong Kaiwu. Song Yingxing was termed the "Diderot of China" by British historian Joseph Needham. Before the advent of the printing press, encyclopedic works were all hand copied and thus rarely available, beyond wealthy patrons or monastic men of learning: they were expensive, and usually written for those extending knowledge rather than those using it. During the Renaissance, the creation of printing allowed a wider diffusion of encyclopedias and every scholar could have his or her own copy. The De expetendis et fugiendis rebus by Giorgio Valla was posthumously printed in 1501 by Aldo Manuzio in Venice. This work followed the traditional scheme of liberal arts. However, Valla added the translation of ancient Greek works on mathematics (firstly by Archimedes), newly discovered and translated. The Margarita Philosophica by Gregor Reisch, printed in 1503, was a complete encyclopedia explaining the seven liberal arts.
Financial, commercial, legal, and intellectual factors changed the size of encyclopedias. The middle classes had more time to read, and encyclopedias helped them learn more. Publishers wanted to increase their output, so in some countries, like Germany, they started selling books missing alphabetical sections in order to publish faster. Also, publishers could not afford all the resources by themselves, so multiple publishers would pool their resources to create better encyclopedias. Later, as rivalry grew, copyright law developed in response to the weak and underdeveloped laws of the time. John Harris is often credited with introducing the now-familiar alphabetic format in 1704 with his English Lexicon Technicum: Or, A Universal English Dictionary of Arts and Sciences: Explaining not only the Terms of Art, but the Arts Themselves – to give its full title. Organized alphabetically, its content does indeed contain explanation not merely of the terms used in the arts and sciences, but of the arts and sciences themselves. Sir Isaac Newton contributed his only published work on chemistry to the second volume of 1710. Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (English: Encyclopedia, or a Systematic Dictionary of the Sciences, Arts, and Crafts), better known as Encyclopédie, was a general encyclopedia published in France between 1751 and 1772, with later supplements, revised editions, and translations. It had many writers, known as the Encyclopédistes. It was edited by Denis Diderot and, until 1759, co-edited by Jean le Rond d'Alembert. The Encyclopédie is most famous for representing the thought of the Enlightenment. According to Denis Diderot in the article "Encyclopédie", the Encyclopédie's aim was "to change the way people think" and for people (bourgeoisie) to be able to inform themselves and to know things. He and the other contributors advocated for the secularization of learning away from the Jesuits. Diderot wanted to incorporate all of the world's knowledge into the Encyclopédie and hoped that the text could disseminate all this information to the public and future generations. Thus, it is an example of democratization of knowledge.
In 1933, the Britannica became the first encyclopaedia to adopt "continuous revision", in which the encyclopaedia is continually reprinted, with every article updated on a schedule. In the 21st century, the Britannica has suffered due to competition with the online crowdsourced encyclopaedia Wikipedia, although it had previously suffered from competition with the digital multimedia encyclopaedia Microsoft Encarta. In March 2012, it announced it would no longer publish printed editions and would focus instead on the online version. Britannica has been assessed to be politically closer to the centre of the US political spectrum than Wikipedia. The Brockhaus Enzyklopädie (German for Brockhaus Encyclopedia) is a German-language encyclopedia which until 2009 was published by the F. A. Brockhaus printing house. The first edition originated in the Conversations-Lexikon published by Renatus Gotthelf Löbel and Franke in Leipzig 1796–1808. Renamed Der Große Brockhaus in 1928 and Brockhaus Enzyklopädie from 1966, the current 21st edition, in thirty volumes, contains about 300,000 entries on about 24,000 pages, with about 40,000 maps, graphics and tables. It is the largest German-language printed encyclopedia in the 21st century. In the United States, the 1950s and 1960s saw the introduction of several large popular encyclopedias, often sold on installment plans. The best known of these were World Book and Funk and Wagnalls. As many as 90% were sold door to door. Jack Lynch says in his book You Could Look It Up that encyclopedia salespeople were so common that they became the butt of jokes. He describes their sales pitch saying, "They were selling not books but a lifestyle, a future, a promise of social mobility." A 1961 World Book ad said, "You are holding your family's future in your hands right now," while showing a feminine hand holding an order form. As of the 1990s, two of the most prominent encyclopedias published in the United States were Collier's Encyclopedia and Encyclopedia Americana. By the late 20th century, encyclopedias were being published on CD-ROMs for use with personal computers. This was the usual way computer users accessed encyclopedic knowledge during the 1980s and 1990s. Later, DVD discs replaced CD-ROMs, and by the mid-2000s, internet encyclopedias were dominant and replaced disc-based software encyclopedias. CD-ROM encyclopedias were usually a macOS or Microsoft Windows (3.0, 3.1 or 95/98) application on a CD-ROM disc. The user would execute the encyclopedia's software program to see a menu that allowed them to start browsing the encyclopedia's articles, and most encyclopedias also supported a way to search the contents of the encyclopedia. The article text was usually hyperlinked and also included photographs, audio clips (for example in articles about historical speeches or musical instruments), and video clips. In the CD-ROM age the video clips usually had a low resolution, often 160x120 or 320x240 pixels. Such encyclopedias which made use of photos, audio and video were also called multimedia encyclopedias. However, the rise of the online encyclopedia rendered CD-ROM encyclopedias obsolete. Microsoft's Encarta, launched in 1993, was a landmark example as it had no printed equivalent. Articles were supplemented with video and audio files as well as numerous high-quality images. After sixteen years, Microsoft discontinued the Encarta line of products in 2009. Other examples of CD-ROM encyclopedias are the Grolier Multimedia Encyclopedia and Britannica.
Digital encyclopedias enable "Encyclopedia Services" (such as Wikimedia Enterprise) to facilitate programmatic access to the content; a minimal sketch of such access appears at the end of this article text. The concept of a free encyclopedia began with the Interpedia proposal on Usenet in 1993, which outlined an Internet-based online encyclopedia to which anyone could submit content and that would be freely accessible. Early projects in this vein included Everything2 and Open Site. In 1999, Richard Stallman proposed the GNUPedia, an online encyclopedia which, similar to the GNU operating system, would be a "generic" resource. The concept was very similar to Interpedia, but more in line with Stallman's GNU philosophy. It was not until Nupedia and later Wikipedia that a stable free encyclopedia project could be established on the Internet. The English Wikipedia, which was started in 2001, became the world's largest encyclopedia in 2004 at the 300,000-article stage. By late 2005, Wikipedia had produced over two million articles in more than 80 languages with content licensed under the copyleft GNU Free Documentation License. As of August 2009, Wikipedia had over 3 million articles in English and well over 10 million combined articles in over 250 languages. Today, Wikipedia has 6,763,567 articles in English, over 60 million combined articles in over 300 languages, and over 250 million combined pages including project and discussion pages. Since 2002, other free encyclopedias have appeared, including Hudong (2005–) and Baidu Baike (2006–) in Chinese, and Google's Knol (2008–2012) in English. Some MediaWiki-based encyclopedias have appeared, usually under a license compatible with Wikipedia, including Enciclopedia Libre (2002–2021) in Spanish and Conservapedia (2006–), Scholarpedia (2006–), and Citizendium (2007–) in English, the last of which had become inactive by 2014.
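As a minimal sketch of the programmatic access mentioned above, the following Python example fetches an article summary through Wikipedia's public REST API. It is an illustrative assumption rather than Wikimedia Enterprise itself, whose commercial bulk endpoints are not shown here; the demo User-Agent string and the choice of Python's standard urllib module are likewise assumptions made only for this sketch.

import json
import urllib.parse
import urllib.request

def fetch_summary(title: str) -> dict:
    # Wikipedia's public REST endpoint returns a JSON summary of a single article.
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + urllib.parse.quote(title)
    # Wikimedia asks API clients to identify themselves with a User-Agent header.
    req = urllib.request.Request(url, headers={"User-Agent": "encyclopedia-demo/0.1"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    summary = fetch_summary("Encyclopedia")
    print(summary["title"])    # article title
    print(summary["extract"])  # plain-text lead summary

Bulk consumers such as search engines would instead rely on a service like Wikimedia Enterprise or on the public database dumps, since per-article requests like the one above are rate-limited and unsuited to mirroring an entire encyclopedia.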
[ { "paragraph_id": 0, "text": "An encyclopedia (American English) or encyclopædia (British English) is a reference work or compendium providing summaries of knowledge either general or special to a particular field or discipline. Encyclopedias are divided into articles or entries that are arranged alphabetically by article name or by thematic categories, or else are hyperlinked and searchable. Encyclopedia entries are longer and more detailed than those in most dictionaries. Generally speaking, encyclopedia articles focus on factual information concerning the subject named in the article's title; this is unlike dictionary entries, which focus on linguistic information about words, such as their etymology, meaning, pronunciation, use, and grammatical forms.", "title": "" }, { "paragraph_id": 1, "text": "Encyclopedias have existed for around 2,000 years and have evolved considerably during that time as regards language (written in a major international or a vernacular language), size (few or many volumes), intent (presentation of a global or a limited range of knowledge), cultural perspective (authoritative, ideological, didactic, utilitarian), authorship (qualifications, style), readership (education level, background, interests, capabilities), and the technologies available for their production and distribution (hand-written manuscripts, small or large print runs, Internet). As a valued source of reliable information compiled by experts, printed versions found a prominent place in libraries, schools and other educational institutions.", "title": "" }, { "paragraph_id": 2, "text": "The appearance of digital and open-source versions in the 21st century, such as Wikipedia, has vastly expanded the accessibility, authorship, readership, and variety of encyclopedia entries.", "title": "" }, { "paragraph_id": 3, "text": "Indeed, the purpose of an encyclopedia is to collect knowledge disseminated around the globe; to set forth its general system to the men with whom we live, and transmit it to those who will come after us, so that the work of preceding centuries will not become useless to the centuries to come; and so that our offspring, becoming better instructed, will at the same time become more virtuous and happy, and that we should not die without having rendered a service to the human race in the future years to come.", "title": "Etymology" }, { "paragraph_id": 4, "text": "Diderot", "title": "Etymology" }, { "paragraph_id": 5, "text": "The word encyclopedia (encyclo|pedia) comes from the Koine Greek ἐγκύκλιος παιδεία, transliterated enkyklios paideia, meaning 'general education' from enkyklios (ἐγκύκλιος), meaning 'circular, recurrent, required regularly, general' and paideia (παιδεία), meaning 'education, rearing of a child'; together, the phrase literally translates as 'complete instruction' or 'complete knowledge'. However, the two separate words were reduced to a single word due to a scribal error by copyists of a Latin manuscript edition of Quintillian in 1470. The copyists took this phrase to be a single Greek word, enkyklopaedia, with the same meaning, and this spurious Greek word became the Neo-Latin word encyclopaedia, which in turn came into English. 
Because of this compounded word, fifteenth-century readers and since have often, and incorrectly, thought that the Roman authors Quintillian and Pliny described an ancient genre.", "title": "Etymology" }, { "paragraph_id": 6, "text": "The modern encyclopedia evolved from the dictionary in the 18th century; this lineage can be seen in the alphabetical order of print encyclopedias. Historically, both encyclopedias and dictionaries have been compiled by well-educated, well-informed content experts, but they are significantly different in structure. A dictionary is a linguistic work which primarily focuses on alphabetical listing of words and their definitions. Synonymous words and those related by the subject matter are to be found scattered around the dictionary, giving no obvious place for in-depth treatment. Thus, a dictionary typically provides limited information, analysis or background for the word defined. While it may offer a definition, it may leave the reader lacking in understanding the meaning, significance or limitations of a term, and how the term relates to a broader field of knowledge.", "title": "Characteristics" }, { "paragraph_id": 7, "text": "To address those needs, an encyclopedia article is typically not limited to simple definitions, and is not limited to defining an individual word, but provides a more extensive meaning for a subject or discipline. In addition to defining and listing synonymous terms for the topic, the article is able to treat the topic's more extensive meaning in more depth and convey the most relevant accumulated knowledge on that subject. An encyclopedia article also often includes many maps and illustrations, as well as bibliography and statistics. An encyclopedia is, theoretically, not written in order to convince, although one of its goals is indeed to convince its reader of its own veracity.", "title": "Characteristics" }, { "paragraph_id": 8, "text": "Wikipedia co-founder Jimmy Wales has said that the goal of an encyclopedia should be to provide \"the sum of all human knowledge, but sum meaning summary.\"", "title": "Characteristics" }, { "paragraph_id": 9, "text": "In addition, sometimes books or reading lists are compiled from a compendium of articles (either wholly or partially taken) from a specific encyclopedia.", "title": "Characteristics" }, { "paragraph_id": 10, "text": "There are four major elements that define an encyclopedia: its subject matter, its scope, its method of organization, and its method of production:", "title": "Characteristics" }, { "paragraph_id": 11, "text": "Some works entitled \"dictionaries\" are actually similar to encyclopedias, especially those concerned with a particular field (such as the Dictionary of the Middle Ages, the Dictionary of American Naval Fighting Ships, and Black's Law Dictionary). The Macquarie Dictionary, Australia's national dictionary, became an encyclopedic dictionary after its first edition in recognition of the use of proper nouns in common communication, and the words derived from such proper nouns.", "title": "Characteristics" }, { "paragraph_id": 12, "text": "There are some broad differences between encyclopedias and dictionaries. Most noticeably, encyclopedia articles are longer, fuller and more thorough than entries in most general-purpose dictionaries. There are differences in content as well. Generally speaking, dictionaries provide linguistic information about words themselves, while encyclopedias focus more on the things for which those words stand. 
Thus, while dictionary entries are inextricably fixed to the word described, encyclopedia articles can be given a different entry name. As such, dictionary entries are not fully translatable into other languages, but encyclopedia articles can be.", "title": "Characteristics" }, { "paragraph_id": 13, "text": "In practice, however, the distinction is not concrete, as there is no clear-cut difference between factual, \"encyclopedic\" information and linguistic information such as appears in dictionaries. Thus encyclopedias may contain material that is also found in dictionaries, and vice versa. In particular, dictionary entries often contain factual information about the thing named by the word.", "title": "Characteristics" }, { "paragraph_id": 14, "text": "The earliest encyclopedic work to have survived to modern times is the Naturalis Historia of Pliny the Elder, a Roman statesman living in the 1st century AD. He compiled a work of 37 chapters covering natural history, architecture, medicine, geography, geology, and all aspects of the world around him. This work became very popular in Antiquity, was one of the first classical manuscripts to be printed in 1470, and has remained popular ever since as a source of information on the Roman world, and especially Roman art, Roman technology and Roman engineering.", "title": "Pre-modern encyclopedias" }, { "paragraph_id": 15, "text": "The Spanish scholar Isidore of Seville was the first Christian writer to try to compile a summa of universal knowledge, the Etymologiae (c. 600–625), also known by classicists as the Origines (abbreviated Orig.). This encyclopedia—the first such Christian epitome—formed a huge compilation of 448 chapters in 20 books based on hundreds of classical sources, including the Naturalis Historia. Of the Etymologiae in its time it was said quaecunque fere sciri debentur, \"practically everything that it is necessary to know\". Among the areas covered were: grammar, rhetoric, mathematics, geometry, music, astronomy, medicine, law, the Catholic Church and heretical sects, pagan philosophers, languages, cities, animals and birds, the physical world, geography, public buildings, roads, metals, rocks, agriculture, ships, clothes, food, and tools.", "title": "Pre-modern encyclopedias" }, { "paragraph_id": 16, "text": "Another Christian encyclopedia was the Institutiones divinarum et saecularium litterarum of Cassiodorus (543–560) dedicated to the Christian divinity and to the seven liberal arts. The encyclopedia of Suda, a massive 10th-century Byzantine encyclopedia, had 30,000 entries, many drawing from ancient sources that have since been lost, and often derived from medieval Christian compilers. The text was arranged alphabetically with some slight deviations from common vowel order and place in the Greek alphabet.", "title": "Pre-modern encyclopedias" }, { "paragraph_id": 17, "text": "From India, the Siribhoovalaya (Kannada: ಸಿರಿಭೂವಲಯ), dated between 800 A.D. to 15th century, is a work of Kannada literature written by Kumudendu Muni, a Jain monk. It is unique because rather than employing alphabets, it is composed entirely in Kannada numerals. Many philosophies which existed in the Jain classics are eloquently and skillfully interpreted in the work.", "title": "Pre-modern encyclopedias" }, { "paragraph_id": 18, "text": "The enormous encyclopedic work in China of the Four Great Books of Song, compiled by the 11th century during the early Song dynasty (960–1279), was a massive literary undertaking for the time. 
The last encyclopedia of the four, the Prime Tortoise of the Record Bureau, amounted to 9.4 million Chinese characters in 1,000 written volumes.", "title": "Pre-modern encyclopedias" }, { "paragraph_id": 19, "text": "There were many great encyclopedists throughout Chinese history, including the scientist and statesman Shen Kuo (1031–1095) with his Dream Pool Essays of 1088; the statesman, inventor, and agronomist Wang Zhen (active 1290–1333) with his Nong Shu of 1313; and Song Yingxing (1587–1666) with his Tiangong Kaiwu. Song Yingxing was termed the \"Diderot of China\" by British historian Joseph Needham.", "title": "Pre-modern encyclopedias" }, { "paragraph_id": 20, "text": "Before the advent of the printing press, encyclopedic works were all hand copied and thus rarely available, beyond wealthy patrons or monastic men of learning: they were expensive, and usually written for those extending knowledge rather than those using it. During the Renaissance, the creation of printing allowed a wider diffusion of encyclopedias and every scholar could have his or her own copy. The De expetendis et fugiendis rebus by Giorgio Valla was posthumously printed in 1501 by Aldo Manuzio in Venice. This work followed the traditional scheme of liberal arts. However, Valla added the translation of ancient Greek works on mathematics (firstly by Archimedes), newly discovered and translated. The Margarita Philosophica by Gregor Reisch, printed in 1503, was a complete encyclopedia explaining the seven liberal arts.", "title": "Printed encyclopedias" }, { "paragraph_id": 21, "text": "Financial, commercial, legal, and intellectual factors changed the size of encyclopedias. Middle classes had more time to read and encyclopedias helped them to learn more. Publishers wanted to increase their output so some countries like Germany started selling books missing alphabetical sections, to publish faster. Also, publishers could not afford all the resources by themselves, so multiple publishers would come together with their resources to create better encyclopedias. Later, rivalry grew, causing copyright to occur due to weak underdeveloped laws. John Harris is often credited with introducing the now-familiar alphabetic format in 1704 with his English Lexicon Technicum: Or, A Universal English Dictionary of Arts and Sciences: Explaining not only the Terms of Art, but the Arts Themselves – to give its full title. Organized alphabetically, its content does indeed contain explanation not merely of the terms used in the arts and sciences, but of the arts and sciences themselves. Sir Isaac Newton contributed his only published work on chemistry to the second volume of 1710.", "title": "Printed encyclopedias" }, { "paragraph_id": 22, "text": "Encyclopédie, ou dictionnaire raisonné des sciences, des arts et des métiers (English: Encyclopedia, or a Systematic Dictionary of the Sciences, Arts, and Crafts), better known as Encyclopédie, was a general encyclopedia published in France between 1751 and 1772, with later supplements, revised editions, and translations. It had many writers, known as the Encyclopédistes. It was edited by Denis Diderot and, until 1759, co-edited by Jean le Rond d'Alembert.", "title": "Printed encyclopedias" }, { "paragraph_id": 23, "text": "The Encyclopédie is most famous for representing the thought of the Enlightenment. 
According to Denis Diderot in the article \"Encyclopédie\", the Encyclopédies aim was \"to change the way people think\" and for people (bourgeoisie) to be able to inform themselves and to know things. He and the other contributors advocated for the secularization of learning away from the Jesuits. Diderot wanted to incorporate all of the world's knowledge into the Encyclopédie and hoped that the text could disseminate all this information to the public and future generations. Thus, it is an example of democratization of knowledge.", "title": "Printed encyclopedias" }, { "paragraph_id": 24, "text": "The Encyclopædia Britannica (Latin for \"British Encyclopædia\") is a general knowledge English-language encyclopaedia. It has been published by Encyclopædia Britannica, Inc. since 1768, although the company has changed ownership seven times. The encyclopaedia is maintained by about 100 full-time editors and more than 4,000 contributors. The 2010 version of the 15th edition, which spans 32 volumes and 32,640 pages, was the last printed edition. Since 2016, it has been published exclusively as an online encyclopaedia.", "title": "Printed encyclopedias" }, { "paragraph_id": 25, "text": "Printed for 244 years, the Britannica was the longest-running in-print encyclopaedia in the English language. It was first published between 1768 and 1771 in the Scottish capital of Edinburgh, as three volumes. The encyclopaedia grew in size: the second edition was 10 volumes, and by its fourth edition (1801–1810) it had expanded to 20 volumes. Its rising stature as a scholarly work helped recruit eminent contributors, and the 9th (1875–1889) and 11th editions (1911) are landmark encyclopaedias for scholarship and literary style. Starting with the 11th edition and following its acquisition by an American firm, the Britannica shortened and simplified articles to broaden its appeal to the North American market.", "title": "Printed encyclopedias" }, { "paragraph_id": 26, "text": "In 1933, the Britannica became the first encyclopaedia to adopt \"continuous revision\", in which the encyclopaedia is continually reprinted, with every article updated on a schedule. In the 21st century, the Britannica has suffered due to competition with the online crowdsourced encyclopaedia Wikipedia, although the Britannica was previously suffering from competition with the digital multimedia encyclopaedia Microsoft Encarta.", "title": "Printed encyclopedias" }, { "paragraph_id": 27, "text": "In March 2012, it announced it would no longer publish printed editions and would focus instead on the online version. Britannica has been assessed to be politically closer to the centre of the US political spectrum than Wikipedia.", "title": "Printed encyclopedias" }, { "paragraph_id": 28, "text": "The Brockhaus Enzyklopädie (German for Brockhaus Encyclopedia) is a German-language encyclopedia which until 2009 was published by the F. A. Brockhaus printing house.", "title": "Printed encyclopedias" }, { "paragraph_id": 29, "text": "The first edition originated in the Conversations-Lexikon published by Renatus Gotthelf Löbel and Franke in Leipzig 1796–1808. Renamed Der Große Brockhaus in 1928 and Brockhaus Enzyklopädie from 1966, the current 21st thirty-volume edition contains about 300,000 entries on about 24,000 pages, with about 40,000 maps, graphics and tables. 
It is the largest German-language printed encyclopedia in the 21st century.", "title": "Printed encyclopedias" }, { "paragraph_id": 30, "text": "In the United States, the 1950s and 1960s saw the introduction of several large popular encyclopedias, often sold on installment plans. The best known of these were World Book and Funk and Wagnalls. As many as 90% were sold door to door. Jack Lynch says in his book You Could Look It Up that encyclopedia salespeople were so common that they became the butt of jokes. He describes their sales pitch saying, \"They were selling not books but a lifestyle, a future, a promise of social mobility.\" A 1961 World Book ad said, \"You are holding your family's future in your hands right now,\" while showing a feminine hand holding an order form. As of the 1990s, two of the most prominent encyclopedias published in the United States were Collier's Encyclopedia and Encyclopedia Americana.", "title": "Printed encyclopedias" }, { "paragraph_id": 31, "text": "By the late 20th century, encyclopedias were being published on CD-ROMs for use with personal computers. This was the usual way computer users accessed encyclopedic knowledge from the 1980s and 1990s. Later, DVD discs replaced CD-ROMs, and by the mid-2000s, internet encyclopedias were dominant and replaced disc-based software encyclopedias.", "title": "Digital encyclopedias" }, { "paragraph_id": 32, "text": "CD-ROM encyclopedias were usually a macOS or Microsoft Windows (3.0, 3.1 or 95/98) application on a CD-ROM disc. The user would execute the encyclopedia's software program to see a menu that allowed them to start browsing the encyclopedia's articles, and most encyclopedias also supported a way to search the contents of the encyclopedia. The article text was usually hyperlinked and also included photographs, audio clips (for example in articles about historical speeches or musical instruments), and video clips. In the CD-ROM age the video clips had usually a low resolution, often 160x120 or 320x240 pixels. Such encyclopedias which made use of photos, audio and video were also called multimedia encyclopedias. However, because of the online encyclopedia, CD-ROM encyclopedias have been declared obsolete.", "title": "Digital encyclopedias" }, { "paragraph_id": 33, "text": "Microsoft's Encarta, launched in 1993, was a landmark example as it had no printed equivalent. Articles were supplemented with video and audio files as well as numerous high-quality images. After sixteen years, Microsoft discontinued the Encarta line of products in 2009. Other examples of CD-ROM encyclopedia are Grolier Multimedia Encyclopedia and Britannica.", "title": "Digital encyclopedias" }, { "paragraph_id": 34, "text": "Digital encyclopedias enable \"Encyclopedia Services\" (such as Wikimedia Enterprise) to facilitate programatic access to the content.", "title": "Digital encyclopedias" }, { "paragraph_id": 35, "text": "The concept of a free encyclopedia began with the Interpedia proposal on Usenet in 1993, which outlined an Internet-based online encyclopedia to which anyone could submit content and that would be freely accessible. Early projects in this vein included Everything2 and Open Site. In 1999, Richard Stallman proposed the GNUPedia, an online encyclopedia which, similar to the GNU operating system, would be a \"generic\" resource. 
The concept was very similar to Interpedia, but more in line with Stallman's GNU philosophy.", "title": "Digital encyclopedias" }, { "paragraph_id": 36, "text": "It was not until Nupedia and later Wikipedia that a stable free encyclopedia project was able to be established on the Internet.", "title": "Digital encyclopedias" }, { "paragraph_id": 37, "text": "The English Wikipedia, which was started in 2001, became the world's largest encyclopedia in 2004 at the 300,000 article stage. By late 2005, Wikipedia had produced over two million articles in more than 80 languages with content licensed under the copyleft GNU Free Documentation License. As of August 2009, Wikipedia had over 3 million articles in English and well over 10 million combined articles in over 250 languages. Today, Wikipedia has 6,763,567 articles in English, over 60 million combined articles in over 300 languages, and over 250 million combined pages including project and discussion pages.", "title": "Digital encyclopedias" }, { "paragraph_id": 38, "text": "Since 2002, other free encyclopedias appeared, including Hudong (2005–) and Baidu Baike (2006–) in Chinese, and Google's Knol (2008–2012) in English. Some MediaWiki-based encyclopedias have appeared, usually under a license compatible with Wikipedia, including Enciclopedia Libre (2002–2021) in Spanish and Conservapedia (2006–), Scholarpedia (2006–), and Citizendium (2007–) in English, the latter of which had become inactive by 2014.", "title": "Digital encyclopedias" } ]
An encyclopedia or encyclopædia is a reference work or compendium providing summaries of knowledge either general or special to a particular field or discipline. Encyclopedias are divided into articles or entries that are arranged alphabetically by article name or by thematic categories, or else are hyperlinked and searchable. Encyclopedia entries are longer and more detailed than those in most dictionaries. Generally speaking, encyclopedia articles focus on factual information concerning the subject named in the article's title; this is unlike dictionary entries, which focus on linguistic information about words, such as their etymology, meaning, pronunciation, use, and grammatical forms. Encyclopedias have existed for around 2,000 years and have evolved considerably during that time as regards language, size, intent, cultural perspective, authorship, readership, and the technologies available for their production and distribution. As a valued source of reliable information compiled by experts, printed versions found a prominent place in libraries, schools and other educational institutions. The appearance of digital and open-source versions in the 21st century, such as Wikipedia, has vastly expanded the accessibility, authorship, readership, and variety of encyclopedia entries.
2001-11-13T20:56:47Z
2023-12-30T01:31:59Z
[ "Template:Refend", "Template:Wikisource portal", "Template:Cite web", "Template:ISBN", "Template:Pp-move-indef", "Template:Quote box", "Template:Citation needed", "Template:Div col end", "Template:Webarchive", "Template:Cite journal", "Template:Lang", "Template:More citations needed section", "Template:As of", "Template:Portal", "Template:Authority control", "Template:Pp-semi-indef", "Template:Reflist", "Template:Cite encyclopedia", "Template:Wiktionary", "Template:Short description", "Template:Transl", "Template:Main", "Template:Circa", "Template:By whom", "Template:Div col", "Template:Cite book", "Template:Other uses", "Template:Use mdy dates", "Template:Use American English", "Template:Original research", "Template:Excerpt", "Template:Redirect", "Template:Refbegin", "Template:Commons category", "Template:Snd" ]
https://en.wikipedia.org/wiki/Encyclopedia
9,256
Enigma machine
The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages. The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plaintext is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress. The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to successfully decrypt a message. Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, but they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome. The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and were adopted by military and government services of several countries, most notably Nazi Germany before and during World War II. Several different Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depended on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need. Around December 1932, Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. Hans-Thilo Schmidt, a German who spied for France, obtained access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings.
The French passed the material to the Poles, and Rejewski used some of that material and the message traffic in September and October to solve for the unknown rotor wiring. Consequently, the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Bureau to read German Enigma messages starting from January 1933. Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when the Germans added two more rotors that year, ten times as many bomby would have been needed to read the traffic. On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered). In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques. Gordon Welchman, who became head of Hut 6 at Bletchley Park, has written: "Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked. During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort. Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed. From October 1944, the German Abwehr used the Schlüsselgerät 41.
Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press; and a series of lamps, one for each letter. These features make the Enigma a rotor cipher machine, a class of designs whose origins date to around 1915. The mechanical parts act by forming a varying electrical circuit: each key press closes an electrical pathway, a route for current to travel through the machine, and because that pathway changes as the rotors move, the machine scrambles messages. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on. In detail, current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp. The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The electrical pathway changes with each key depression, because depressing a key causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again; within each rotor, the possible paths are hard-wired from one side of the rotor to the other. The letter A encrypts differently with consecutive key presses, for example first to G and then to C, because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route; eventually other rotors step with a key press. The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm (3.9 in) in diameter made from ebonite or Bakelite with 26 brass, spring-loaded electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet, typically the 26 letters A–Z, as will be assumed for the rest of this description.
When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant. By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher. Each rotor can be set to one of 26 possible starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector. Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring. The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks. The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: at first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.
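To make the offsets concrete, here is a minimal Python sketch (ours, not period documentation) of a single rotor's substitution. The wiring string is the historically documented wiring of rotor I; the helper name and parameters are illustrative.

# One rotor viewed as a permutation with two offsets: the rotor's
# position and its ring setting (Ringstellung). Rotor I's wiring is
# the historically documented one; the helper itself is illustrative.
A = ord("A")
ROTOR_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"   # core wiring: A->E, B->K, C->M, ...

def rotor_forward(letter, position="A", ring="A", wiring=ROTOR_I):
    """Pass a signal right-to-left through one rotor."""
    shift = (ord(position) - ord(ring)) % 26       # net rotation of the core
    contact = (ord(letter) - A + shift) % 26       # contact struck on the core
    out = (ord(wiring[contact]) - A - shift) % 26  # back to the fixed frame
    return chr(out + A)

print(rotor_forward("A"))                          # E: a plain substitution
print(rotor_forward("A", position="B"))            # J: one step changes everything
print(rotor_forward("A", position="B", ring="B"))  # E: the ring offsets the core back

Note how advancing the ring by the same amount as the position restores the original substitution; this is exactly why the Ringstellung had to be part of the distributed key.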
To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently. The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor; the same relationship held between the middle and left-hand rotors. For a two-notch rotor, the rotor to its left would turn over twice for each rotation. The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring, which could be adjusted in relation to the core containing the interconnections. The points on the rings at which they caused the next wheel to move were as follows: rotor I stepped its neighbour as it moved from Q to R, rotor II from E to F, rotor III from V to W, rotor IV from J to K, and rotor V from Z to A; the two-notch naval rotors VI, VII and VIII did so at both Z to A and M to N. The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion. With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping); a short simulation of the stepping mechanism, given below, reproduces this figure. Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues. To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions. A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel), which implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches was different for each wheel, the stepping would be more unpredictable.
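The following sketch implements the pawl-and-ratchet rule described above for a three-rotor machine, with the documented notch letters of rotors I, II and III in the left, middle and right slots, and counts key presses until the rotor positions repeat. It is a simplified model of the mechanism, not period documentation, but it reproduces the 16,900-step period.

# Simplified stepping model: positions are 0-25 for (left, middle, right).
# Notch letters for rotors I, II and III are the documented Q, E and V.
NOTCHES = (ord("Q") - 65, ord("E") - 65, ord("V") - 65)  # left, middle, right

def step(pos):
    left, mid, right = pos
    if mid == NOTCHES[1]:        # middle pawl in its notch: double-step
        left = (left + 1) % 26
        mid = (mid + 1) % 26
    elif right == NOTCHES[2]:    # right rotor at its notch: middle advances
        mid = (mid + 1) % 26
    right = (right + 1) % 26     # the right-hand rotor steps on every key press
    return (left, mid, right)

start = pos = (0, 0, 0)
count = 0
while True:
    pos = step(pos)
    count += 1
    if pos == start:
        break
print(count)                     # 16900 == 26 * 25 * 26, not 26**3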
Like the Umkehrwalze-D reflector described below, the Lückenfüllerwalze would also have allowed the internal wiring to be reconfigured. The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification. With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers. In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels. In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C, was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nicknamed Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings. The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator (mounted on the machine's front panel, with patch cords stored in the lid). It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it. A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.
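Both the self-reciprocal property and the no-self-encipherment flaw follow from the algebra alone, so they can be checked with arbitrary permutations. In the sketch below (our illustration; the permutations are random stand-ins, not historical wirings), the machine at one instant is written as E = P∘C⁻¹∘U∘C∘P, with P a plugboard-style involution, C the rotor stack, and U a reflector, i.e., a pairing with no fixed point.

# Check: with a paired reflector U and involutory plugboard P, the whole
# machine E = P o C^-1 o U o C o P is an involution with no fixed point.
# All permutations here are random stand-ins, not historical wirings.
import random

random.seed(1)
N = 26

def random_permutation():
    p = list(range(N))
    random.shuffle(p)
    return p

def random_pairing():
    """Thirteen disjoint pairs: an involution without fixed points."""
    items = list(range(N))
    random.shuffle(items)
    inv = [0] * N
    for a, b in zip(items[0::2], items[1::2]):
        inv[a], inv[b] = b, a
    return inv

def inverse(p):
    q = [0] * N
    for i, v in enumerate(p):
        q[v] = i
    return q

C = random_permutation()   # rotor stack frozen at one instant
C_inv = inverse(C)
U = random_pairing()       # reflector
P = random_pairing()       # plugboard (a real one left unplugged letters
                           # fixed, which changes neither property)

def E(x):
    return P[C_inv[U[C[P[x]]]]]

assert all(E(E(x)) == x for x in range(N))   # self-reciprocal
assert all(E(x) != x for x in range(N))      # no letter maps to itself
print("involution with no fixed points")

The second assertion is the flaw the codebreakers used: a guessed plaintext word can be slid along the ciphertext and rejected wherever any letter would have to encipher to itself.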
Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters. Other features made various Enigma machines more secure or more convenient. Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext. Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it. In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch to one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs. The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let P denote the plugboard transformation, U denote that of the reflector (an involution, so U = U^{-1}), and L, M, R denote those of the left, middle and right rotors respectively. Then the encryption E can be expressed as

E = P R M L U L^{-1} M^{-1} R^{-1} P^{-1}

After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor R is rotated n positions, the transformation becomes

E = P (ρ^n R ρ^{-n}) M L U L^{-1} M^{-1} (ρ^n R^{-1} ρ^{-n}) P^{-1}

where ρ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as j and k rotations of M and L. The encryption transformation can then be described as

E = P (ρ^n R ρ^{-n}) (ρ^j M ρ^{-j}) (ρ^k L ρ^{-k}) U (ρ^k L^{-1} ρ^{-k}) (ρ^j M^{-1} ρ^{-j}) (ρ^n R^{-1} ρ^{-n}) P^{-1}

Combining three rotors from a set of five, each of the three rotors set to one of 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion or about 67 bits).
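The quoted figure can be reproduced directly from the stated assumptions (three of five rotors in order, 26 positions each, exactly ten plugboard leads); a short calculation:

# Reproducing the 158,962,555,217,826,360,000 settings quoted above.
from math import comb, factorial, log2

rotor_orders = 5 * 4 * 3        # ordered choice of three rotors from five
rotor_positions = 26 ** 3       # each rotor set to one of 26 positions

# Ten plugboard leads: pick the 20 plugged letters, then pair them up;
# 20! / (2^10 * 10!) counts the pairings.
plugboard = comb(26, 20) * factorial(20) // (2 ** 10 * factorial(10))

total = rotor_orders * rotor_positions * plugboard
print(f"{plugboard:,}")          # 150,738,274,937,250 (~150 trillion, as stated)
print(f"{total:,}")              # 158,962,555,217,826,360,000
print(round(log2(total), 1))     # about 67.1 bits

The plugboard term dominates, which is why the text above credits it with more cryptographic strength than an extra rotor.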
A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit, indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the ciphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right-hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The ciphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the ciphertext and, as long as all the settings of the deciphering machine were identical to those of the enciphering machine, for every key press the reverse substitution would occur and the plaintext message would emerge. In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk. An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine. For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigmas in the same way: rotor selection and order, ring positions, plugboard connections and starting rotor positions all had to be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily; a key list such as the German Luftwaffe Enigma key list number 649 gave one complete set of such settings for each day of the month. Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3×10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack. Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth) would enable an attack using a statistical procedure such as Friedman's Index of Coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure.
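In modern terms, the daily key is a small configuration record; a sketch follows (the field names and example values are ours, illustrative only, not from a real key sheet):

# A daily key fixed everything but the per-message rotor start position.
# Field names and example values are illustrative only.
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class DailyKey:
    wheel_order: Tuple[str, ...]             # Walzenlage, e.g. rotors IV, II, V
    ring_settings: str                       # Ringstellung, one letter per rotor
    plug_pairs: Tuple[Tuple[str, str], ...]  # Steckerverbindungen, up to 13 pairs

day_key = DailyKey(
    wheel_order=("IV", "II", "V"),
    ring_settings="GMY",
    plug_pairs=(("D", "N"), ("G", "R"), ("I", "S"), ("K", "C"), ("Q", "X")),
)
# Sender and receiver both configure from the same line of the key list;
# only the rotor starting position still varies per message, conveyed by
# the indicator procedure described next.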
Design weaknesses and operator sloppiness in these indicator procedures were two of the main flaws that made cracking Enigma possible. One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message setting, EIN in this example, and then type the plaintext of the message. At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message. This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between the first and fourth, second and fifth, and third and sixth characters. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique". During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double-encoded message settings. This procedure was used by the Wehrmacht and Luftwaffe only.
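Putting the pieces together, here is a compact, self-contained sketch of a three-rotor Army machine running the wartime procedure just described. The rotor and reflector wirings and notch letters are the documented ones for the Enigma I; the ground setting, message key, plug pairs and message text are invented for the demonstration, so the encoded indicator will not literally be UHL.

# Minimal Enigma I model (rotors I-III, reflector B) demonstrating the
# WZA/SXT-style indicator procedure. Wirings and notches are documented;
# plug pairs, keys and message text are invented for the demo.
A = ord("A")
ROTORS = {  # wiring, turnover notch (window letter)
    "I":   ("EKMFLGDQVZNTOWYHXUSPAIBRCJ", "Q"),
    "II":  ("AJDKSIRUXBLHWTMCQGZNPYFVOE", "E"),
    "III": ("BDFHJLCPRTXVZNYEIWGAKMUSQO", "V"),
}
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

class Enigma:
    def __init__(self, order=("I", "II", "III"), rings="AAA", plugs=()):
        self.wiring = [ROTORS[name][0] for name in order]
        self.notch = [ord(ROTORS[name][1]) - A for name in order]
        self.ring = [ord(c) - A for c in rings]
        self.pos = [0, 0, 0]
        self.plug = list(range(26))
        for a, b in plugs:
            self.plug[ord(a) - A], self.plug[ord(b) - A] = ord(b) - A, ord(a) - A

    def set_window(self, letters):           # e.g. "WZA"
        self.pos = [ord(c) - A for c in letters]

    def _step(self):
        if self.pos[1] == self.notch[1]:     # double-step of the middle rotor
            self.pos[0] = (self.pos[0] + 1) % 26
            self.pos[1] = (self.pos[1] + 1) % 26
        elif self.pos[2] == self.notch[2]:
            self.pos[1] = (self.pos[1] + 1) % 26
        self.pos[2] = (self.pos[2] + 1) % 26

    def _through(self, i, c, back=False):
        shift = self.pos[i] - self.ring[i]
        c = (c + shift) % 26
        w = self.wiring[i]
        c = w.index(chr(c + A)) if back else ord(w[c]) - A
        return (c - shift) % 26

    def press(self, letter):
        self._step()                         # rotors move before contact is made
        c = self.plug[ord(letter) - A]
        for i in (2, 1, 0):                  # right to left through the rotors
            c = self._through(i, c)
        c = ord(REFLECTOR_B[c]) - A          # reflector
        for i in (0, 1, 2):                  # and back out again
            c = self._through(i, c, back=True)
        return chr(self.plug[c] + A)

    def run(self, text):
        return "".join(self.press(ch) for ch in text)

def fresh():
    return Enigma(plugs=(("A", "M"), ("F", "I")))

sender = fresh()
sender.set_window("WZA")                     # random start position, sent in clear
indicator = sender.run("SXT")                # encoded message key
sender.set_window("SXT")
ciphertext = sender.run("ATTACKXATXDAWN")

receiver = fresh()
receiver.set_window("WZA")
key = receiver.run(indicator)                # reciprocity recovers SXT
receiver.set_window(key)
assert key == "SXT"
assert receiver.run(ciphertext) == "ATTACKXATXDAWN"
print(indicator, ciphertext)

Because every per-keypress substitution is an involution, the receiver decrypts simply by typing the ciphertext on an identically configured machine, exactly as in the operating description above.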
The Kriegsmarine procedures for sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key. The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as a full stop. Some punctuation marks were rendered differently in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ. The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA. The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters and counted the letters. The Kriegsmarine used four-character groups and counted those groups. Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key. The character substitutions by the Enigma machine as a whole can be expressed as a string of letters, with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as the 26-letter string of those substitutes listed in alphabetical order of their inputs, and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character within that string. Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message; alongside each representation one can note the letters that appear at the windows at that stage (the only state changes visible to the operator) and the underlying physical position of each rotor. The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the fourth step in enciphering the first sentence of the main body of the famous "Dönitz message" can be expanded to show each of these stages. The enciphering begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW, which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F. Note that the machine enciphering the Dönitz message passed the signal through four rotors in each direction, and that the reflector, too, permutes (garbles) letters.
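The Army message-preparation conventions described above translate directly into a preprocessing step. A small sketch follows; the replacement rules are those listed in the text, applied in an order we chose for illustration (a real operator worked by hand):

# Prepare plaintext the Army way: CH -> Q, punctuation spelled out or
# replaced, spaces as X, then five-letter transmission groups.
def prepare_army(text):
    text = text.upper()
    text = text.replace("CH", "Q")       # ACHT -> AQT, RICHTUNG -> RIQTUNG
    text = text.replace(",", "ZZ")       # Wehrmacht comma
    text = text.replace("?", "FRAGE")    # question mark spelled out
    text = text.replace(".", "X")        # X generally served as the full stop
    text = text.replace(" ", "X")        # spaces omitted or replaced by X
    text = "".join(c for c in text if "A" <= c <= "Z")
    return " ".join(text[i:i + 5] for i in range(0, len(text), 5))

print(prepare_army("Richtung acht?"))    # RIQTU NGXAQ TFRAG E

A complete sender would also enforce the 250-character limit noted above, splitting longer texts into parts with separate message keys.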
The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines. An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries. On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors. Chiffriermaschinen AG began advertising a rotor machine, the Enigma Handelsmaschine, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about 50 kilograms (110 lb). A second typewriter-equipped model followed, but there were a number of problems associated with the printer, and the construction was not stable until 1926. Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering. The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version. That machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models: before the next keystroke, the operator had to press a button to advance the right-hand rotor one step. Enigma model B was introduced late in 1924, and was of a similar construction. While bearing the Enigma name, both models A and B were quite unlike later versions: they differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine, since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost-effective; the machine was about one-eighth the price of its predecessor. Model C was the third model of the so-called "glowlamp Enigmas" (after A and B) and again lacked a typewriter. The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, the United Kingdom, Japan, Italy, Spain, the United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. The Enigma D also standardised on the German "QWERTZ" keyboard layout, a close relative of the American QWERTY layout. Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War.
British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.

There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.

The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use; it was very similar to the commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the last of these code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan.

The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems, including the Geheimschreiber.

The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926. The keyboard and lampboard contained 29 letters — A–Z, Ä, Ö and Ü — arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five, and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.

By 15 July 1928, the German Army (Reichswehr) had introduced its own exclusive version of the Enigma machine, the Enigma G. The Abwehr used the Enigma G (the Abwehr Enigma). This variant was a four-wheel unsteckered machine with multiple notches on the rotors. It was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma.

The Enigma G was modified into the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or "Services", Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II. The major difference between Enigma I (the German Army version from 1930) and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength. Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured 28 cm × 34 cm × 15 cm (11.0 in × 13.4 in × 5.9 in) and weighed around 12 kg (26 lb). In August 1935, the Air Force introduced the Wehrmacht Enigma for its communications.

By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed, and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel M, or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five. In December 1938, the Army issued two extra rotors, so that the three rotors were chosen from a set of five. Also in 1938, the Navy added two more rotors, and then another in 1939, to allow a choice of three rotors from a set of eight.
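The effect of these successive enlargements of the rotor set is easy to quantify: the number of distinct rotor orders is the number of ordered selections (permutations) of three rotors from the available set. The short calculation below is included for illustration.

```python
from math import perm

# Ways to choose and order the rotors in the three-rotor machines:
# ordered selections of 3 rotors from a set of n, i.e. n!/(n-3)!.
cases = [
    ("Army/Air Force, before Dec 1938", 3),
    ("Army/Air Force, from Dec 1938",   5),
    ("Navy M3, from 1939",              8),
]
for label, available in cases:
    print(f"{label}: {perm(available, 3)} rotor orders")

# Army/Air Force, before Dec 1938: 6 rotor orders
# Army/Air Force, from Dec 1938: 60 rotor orders
# Navy M3, from 1939: 336 rotor orders
```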
A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.

The effort to break the Enigma was not disclosed until the 1970s. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.

The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. Enigma machines are exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, the Artillery, Engineers and Signals Museum in Hämeenlinna, Finland, the Technical University of Denmark in Lyngby, Denmark, Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.

In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of U-505 during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device, on loan from Australia, is on display in the ANZUS Corridor of the Pentagon, on the second floor, A ring, between corridors 8 and 9. The United States Air Force Academy in Colorado Springs has a machine on display in its Computer Science Department, and there is also a machine at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. The Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors; the machine is fully restored, and CMoA holds the original paperwork for its purchase by the German Army on 7 March 1936. The National Museum of Computing in Bletchley, England, also holds surviving Enigma machines.

In Canada, a Swiss Army issue Enigma-K is on permanent display at the Naval Museum of Alberta, inside the Military Museums of Calgary, in Calgary, Alberta. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.
Occasionally, Enigma machines are sold at auction; prices in recent years have ranged from US$40,000 up to US$547,500, paid in 2017. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators, and paper-and-scissors analogues.

A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors. In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison, of which he served three.

In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War because, although the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher themselves. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. Two have been given to Britain's GCHQ.

The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.

On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea), believed to be from a scuttled U-boat. It will be restored by, and become the property of, the Archaeology Museum of Schleswig-Holstein.

An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023.

The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor machine, which the Germans believed to be unsolvable. Typex was originally derived from the Enigma patents, and even includes features from the patent descriptions that were omitted from the actual Enigma machine; the British paid no royalties for the use of the patents. In the United States, the cryptologist William Friedman designed the M-325, a machine logically similar to Enigma, starting in 1936. Machines like the SIGABA, NEMA and Typex are not considered Enigma derivatives, as their internal ciphering functions are not mathematically identical to the Enigma transform. A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark.
This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.
[ { "paragraph_id": 0, "text": "The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.", "title": "" }, { "paragraph_id": 1, "text": "The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plain text is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.", "title": "" }, { "paragraph_id": 2, "text": "The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to successfully decrypt a message.", "title": "" }, { "paragraph_id": 3, "text": "While Nazi Germany introduced a series of improvements to the Enigma over the years, and these hampered decryption efforts, they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.", "title": "" }, { "paragraph_id": 4, "text": "The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II.", "title": "History" }, { "paragraph_id": 5, "text": "Several different Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depend on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. 
Compact and easily portable, the Enigma machine filled that need.", "title": "History" }, { "paragraph_id": 6, "text": "Around December 1932 Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. France's spy Hans-Thilo Schmidt obtained access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to the Poles, and Rejewski used some of that material and the message traffic in September and October to solve for the unknown rotor wiring. Consequently the Polish mathematicians were able to build their own Enigma machines, dubbed \"Enigma doubles\". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which had been selected for its students' knowledge of the German language, since that area was held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.", "title": "History" }, { "paragraph_id": 7, "text": "Over time, the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when that year the Germans added two more rotors, ten times as many bomby would have been needed to read the traffic.", "title": "History" }, { "paragraph_id": 8, "text": "On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered).", "title": "History" }, { "paragraph_id": 9, "text": "In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. 
From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.", "title": "History" }, { "paragraph_id": 10, "text": "Gordon Welchman, who became head of Hut 6 at Bletchley Park, has written: \"Hut 6 Ultra would never have got off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use.\" The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked.", "title": "History" }, { "paragraph_id": 11, "text": "During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed \"Ultra\" by the British, was a substantial aid to the Allied war effort.", "title": "History" }, { "paragraph_id": 12, "text": "Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.", "title": "History" }, { "paragraph_id": 13, "text": "From October 1944, the German Abwehr used the Schlüsselgerät 41.", "title": "History" }, { "paragraph_id": 14, "text": "Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press, and a series of lamps, one for each letter. These design features are the reason that the Enigma machine was originally referred to as the rotor-based cipher machine during its intellectual inception in 1915.", "title": "Design" }, { "paragraph_id": 15, "text": "An electrical pathway is a route for current to travel. By manipulating this phenomenon the Enigma machine was able to scramble messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on.", "title": "Design" }, { "paragraph_id": 16, "text": "Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). 
Next, it passes through the (unused in this instance, so shown closed) plug \"A\" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug \"S\" (7) connected with a cable (8) to plug \"D\", and another bi-directional switch (9) to light the appropriate lamp.", "title": "Design" }, { "paragraph_id": 17, "text": "The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C. This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press.", "title": "Design" }, { "paragraph_id": 18, "text": "The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm (3.9 in) in diameter made from Ebonite or Bakelite with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet — typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant.", "title": "Design" }, { "paragraph_id": 19, "text": "By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.", "title": "Design" }, { "paragraph_id": 20, "text": "Each rotor can be set to one of 26 possible starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. 
In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung (\"ring setting\"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector.", "title": "Design" }, { "paragraph_id": 21, "text": "Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring.", "title": "Design" }, { "paragraph_id": 22, "text": "The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks.", "title": "Design" }, { "paragraph_id": 23, "text": "The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.", "title": "Design" }, { "paragraph_id": 24, "text": "To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.", "title": "Design" }, { "paragraph_id": 25, "text": "The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. 
For a two-notch rotor, the rotor to its left would turn over twice for each rotation.", "title": "Design" }, { "paragraph_id": 26, "text": "The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring which could be adjusted in relation to the core containing the interconnections. The points on the rings at which they caused the next wheel to move were as follows.", "title": "Design" }, { "paragraph_id": 27, "text": "The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion.", "title": "Design" }, { "paragraph_id": 28, "text": "With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.", "title": "Design" }, { "paragraph_id": 29, "text": "To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.", "title": "Design" }, { "paragraph_id": 30, "text": "A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel) that implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was a relative prime of 26 and the number of notches were different for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D it also allowed the internal wiring to be reconfigured.", "title": "Design" }, { "paragraph_id": 31, "text": "The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. 
It took inspired guesswork for Rejewski to penetrate the modification.", "title": "Design" }, { "paragraph_id": 32, "text": "With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.", "title": "Design" }, { "paragraph_id": 33, "text": "In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels.", "title": "Design" }, { "paragraph_id": 34, "text": "In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nick-named Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.", "title": "Design" }, { "paragraph_id": 35, "text": "The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator (visible on the front panel of Figure 1; some of the patch cords can be seen in the lid). It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.", "title": "Design" }, { "paragraph_id": 36, "text": "A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.", "title": "Design" }, { "paragraph_id": 37, "text": "Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. 
The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.", "title": "Design" }, { "paragraph_id": 38, "text": "Other features made various Enigma machines more secure or more convenient.", "title": "Design" }, { "paragraph_id": 39, "text": "Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.", "title": "Design" }, { "paragraph_id": 40, "text": "Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.", "title": "Design" }, { "paragraph_id": 41, "text": "In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs.", "title": "Design" }, { "paragraph_id": 42, "text": "The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let P denote the plugboard transformation, U denote that of the reflector ( U = U − 1 {\\displaystyle U=U^{-1}} ), and L, M, R denote those of the left, middle and right rotors respectively. Then the encryption E can be expressed as", "title": "Design" }, { "paragraph_id": 43, "text": "After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor R is rotated n positions, the transformation becomes", "title": "Design" }, { "paragraph_id": 44, "text": "where ρ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as j and k rotations of M and L. The encryption transformation can then be described as", "title": "Design" }, { "paragraph_id": 45, "text": "Combining three rotors from a set of five, each of the 3 rotor settings with 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion or about 67 bits).", "title": "Design" }, { "paragraph_id": 46, "text": "A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. 
For each letter pressed, one lamp lit indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge.", "title": "Operation" }, { "paragraph_id": 47, "text": "In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.", "title": "Operation" }, { "paragraph_id": 48, "text": "An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine:", "title": "Operation" }, { "paragraph_id": 49, "text": "For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows:", "title": "Operation" }, { "paragraph_id": 50, "text": "Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3×10 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack.", "title": "Operation" }, { "paragraph_id": 51, "text": "Most of the key was kept constant for a set time period, typically a day. 
A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth), would enable an attack using a statistical procedure such as Friedman's Index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weakness and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible.", "title": "Operation" }, { "paragraph_id": 52, "text": "One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, this producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message settings, EIN in this example, and then type the plaintext of the message.", "title": "Operation" }, { "paragraph_id": 53, "text": "At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message.", "title": "Operation" }, { "paragraph_id": 54, "text": "This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between first and fourth, second and fifth, and third and sixth character. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the \"faulty indicator technique\".", "title": "Operation" }, { "paragraph_id": 55, "text": "During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. 
The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double encoded message settings.", "title": "Operation" }, { "paragraph_id": 56, "text": "This procedure was used by Wehrmacht and Luftwaffe only. The Kriegsmarine procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key.", "title": "Operation" }, { "paragraph_id": 57, "text": "The Army Enigma machine used only the 26 alphabet characters. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X. The X was generally used as full-stop.", "title": "Operation" }, { "paragraph_id": 58, "text": "Some punctuation marks were different in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ.", "title": "Operation" }, { "paragraph_id": 59, "text": "The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in \"Acht\" (eight) or \"Richtung\" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.", "title": "Operation" }, { "paragraph_id": 60, "text": "The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters and counted the letters.", "title": "Operation" }, { "paragraph_id": 61, "text": "The Kriegsmarine used four-character groups and counted those groups.", "title": "Operation" }, { "paragraph_id": 62, "text": "Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT or MMMBOOT. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.", "title": "Operation" }, { "paragraph_id": 63, "text": "The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that enciphered A to L, B to U, C to S, ..., and Z to J could be represented compactly as", "title": "Operation" }, { "paragraph_id": 64, "text": "and the enciphering of a particular character by that configuration could be represented by highlighting the enciphered character as in", "title": "Operation" }, { "paragraph_id": 65, "text": "Since the operation of an Enigma machine enciphering a message is a series of such configurations, each associated with a single character being enciphered, a sequence of such representations can be used to represent the operation of the machine as it enciphers a message. 
For example, the process of enciphering the first sentence of the main body of the famous \"Dönitz message\" to", "title": "Operation" }, { "paragraph_id": 66, "text": "can be represented as", "title": "Operation" }, { "paragraph_id": 67, "text": "where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.", "title": "Operation" }, { "paragraph_id": 68, "text": "The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the enciphering of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the 4th step in the enciphering above can be expanded to show each of these stages using the same representation of mappings and highlighting for the enciphered character:", "title": "Operation" }, { "paragraph_id": 69, "text": "Here the enciphering begins trivially with the first \"mapping\" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F.", "title": "Operation" }, { "paragraph_id": 70, "text": "Note that this model has 4 rotors (lines 1 through 4) and that the reflector (line R) also permutes (garbles) letters.", "title": "Operation" }, { "paragraph_id": 71, "text": "The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.", "title": "Models" }, { "paragraph_id": 72, "text": "An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.", "title": "Models" }, { "paragraph_id": 73, "text": "On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.", "title": "Models" }, { "paragraph_id": 74, "text": "Chiffriermaschinen AG began advertising a rotor machine, Enigma Handelsmaschine, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about 50 kilograms (110 lb).", "title": "Models" }, { "paragraph_id": 75, "text": "This was also a model with a type writer. There were a number of problems associated with the printer and the construction was not stable until 1926. 
Both early versions of Enigma lacked the reflector and had to be switched between enciphering and deciphering.", "title": "Models" }, { "paragraph_id": 76, "text": "The reflector, suggested by Scherbius' colleague Willi Korn, was introduced with the glow lamp version.", "title": "Models" }, { "paragraph_id": 77, "text": "The machine was also known as the military Enigma. It had two rotors and a manually rotatable reflector. The typewriter was omitted and glow lamps were used for output. The operation was somewhat different from later models. Before the next key pressure, the operator had to press a button to advance the right rotor one step.", "title": "Models" }, { "paragraph_id": 78, "text": "Enigma model B was introduced late in 1924, and was of a similar construction. While bearing the Enigma name, both models A and B were quite unlike later versions: They differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost effective. Hence this machine was 1/8th the price of its predecessor.", "title": "Models" }, { "paragraph_id": 79, "text": "Model C was the third model of the so-called ″glowlamp Enigmas″ (after A and B) and it again lacked a typewriter.", "title": "Models" }, { "paragraph_id": 80, "text": "The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. Soon, the Enigma D would pioneer the use of a standard keyboard layout to be used in German computing. This \"QWERTZ\" layout is very similar to the American QWERTY keyboard format used in many languages.", "title": "Models" }, { "paragraph_id": 81, "text": "Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as \"Navy Cipher D\". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.", "title": "Models" }, { "paragraph_id": 82, "text": "There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.", "title": "Models" }, { "paragraph_id": 83, "text": "The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use, which was very similar to commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan.", "title": "Models" }, { "paragraph_id": 84, "text": "The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. 
Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including the Geheimschreiber.", "title": "Models" }, { "paragraph_id": 85, "text": "The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C (\"Radio cipher C\"), had been put into production by 1925 and was introduced into service in 1926.", "title": "Models" }, { "paragraph_id": 86, "text": "The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.", "title": "Models" }, { "paragraph_id": 87, "text": "By 15 July 1928, the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, the Enigma G.", "title": "Models" }, { "paragraph_id": 88, "text": "The Abwehr used the Enigma G (the Abwehr Enigma). This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the \"counter machine\" or the Zählwerk Enigma.", "title": "Models" }, { "paragraph_id": 89, "text": "Enigma machine G was modified to the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or \"Services\" Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II.", "title": "Models" }, { "paragraph_id": 90, "text": "The major difference between Enigma I (German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength.", "title": "Models" }, { "paragraph_id": 91, "text": "Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured 28 cm × 34 cm × 15 cm (11.0 in × 13.4 in × 5.9 in) and weighed around 12 kg (26 lb).", "title": "Models" }, { "paragraph_id": 92, "text": "In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.", "title": "Models" }, { "paragraph_id": 93, "text": "By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel ' or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.", "title": "Models" }, { "paragraph_id": 94, "text": "In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.", "title": "Models" }, { "paragraph_id": 95, "text": "A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). 
The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.", "title": "Models" }, { "paragraph_id": 96, "text": "The effort to break the Enigma was not disclosed until the 1970s. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.", "title": "Surviving machines" }, { "paragraph_id": 97, "text": "The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. Enigma machines are exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, The Artillery, Engineers and Signals Museum in Hämeenlinna, Finland, the Technical University of Denmark in Lyngby, Denmark, in Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940. In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.", "title": "Surviving machines" }, { "paragraph_id": 98, "text": "In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of U-505 during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors. The machine is fully restored and CMoA has the original paperwork for the purchase on 7 March 1936 by the German Army. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England.", "title": "Surviving machines" }, { "paragraph_id": 99, "text": "In Canada, a Swiss Army issue Enigma-K is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. 
A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.", "title": "Surviving machines" }, { "paragraph_id": 100, "text": "Occasionally, Enigma machines are sold at auction; prices in recent years have ranged from US$40,000 up to US$547,500, the latter fetched in 2017. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.", "title": "Surviving machines" }, { "paragraph_id": 101, "text": "A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as \"The Master\" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors.", "title": "Surviving machines" }, { "paragraph_id": 102, "text": "In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months.", "title": "Surviving machines" }, { "paragraph_id": 103, "text": "In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War, because, though the British cryptologist Alfred Dillwyn Knox in 1937 broke the cipher generated by Franco's Enigma machines, this was not disclosed to the Republicans, who failed to break the cipher. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. Two have been given to Britain's GCHQ.", "title": "Surviving machines" }, { "paragraph_id": 104, "text": "The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.", "title": "Surviving machines" }, { "paragraph_id": 105, "text": "On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea) which is believed to be from a scuttled U-boat. This Enigma machine will be restored by and be the property of the Archaeology Museum of Schleswig-Holstein.", "title": "Surviving machines" }, { "paragraph_id": 106, "text": "An M4 Enigma was salvaged in the 1980s from the German minesweeper R15, which was sunk off the Istrian coast in 1945. The machine was put on display in the Pivka Park of Military History in Slovenia on 13 April 2023.", "title": "Surviving machines" }, { "paragraph_id": 107, "text": "The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. Once the British discovered Enigma's principle of operation, they created the Typex rotor cipher, which the Germans believed to be unsolvable. 
Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents. In the United States, starting in 1936, cryptologist William Friedman designed the M-325 machine, which is logically similar.", "title": "Derivatives" }, { "paragraph_id": 108, "text": "Machines like the SIGABA, NEMA, Typex, and so forth, are not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform.", "title": "Derivatives" }, { "paragraph_id": 109, "text": "A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.", "title": "Derivatives" } ]
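The plugboard's contribution to cryptographic strength, noted under Models above, can be made concrete with a little combinatorics. The following Python sketch is an illustrative calculation under simplifying assumptions (three rotors chosen from five, the standard ten plugboard leads, ring settings ignored), not a figure taken from the article's sources; it reproduces the commonly quoted keyspace of roughly 1.6 × 10^20 for the Wehrmacht Enigma.

from math import factorial, perm

def plugboard_settings(pairs: int, letters: int = 26) -> int:
    # Ways to cable `pairs` plugboard leads among `letters` letters:
    # letters! / ((letters - 2*pairs)! * pairs! * 2**pairs)
    return factorial(letters) // (
        factorial(letters - 2 * pairs) * factorial(pairs) * 2 ** pairs
    )

rotor_orders = perm(5, 3)          # 60 ordered choices of 3 rotors from a set of 5
rotor_positions = 26 ** 3          # 17,576 starting positions
stecker = plugboard_settings(10)   # 150,738,274,937,250 plugboard settings

print(stecker)
print(rotor_orders * rotor_positions * stecker)  # about 1.59e20 keys

Almost all of that keyspace comes from the plugboard, which is one reason the unsteckered commercial and Swiss machines mentioned above were so much more tractable for codebreakers.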
The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages. The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of the 26 lights above the keyboard illuminated at each key press. If plain text is entered, the illuminated letters are the ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress. The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to successfully decrypt a message. Nazi Germany introduced a series of improvements to the Enigma over the years that hampered decryption efforts, but they did not prevent Poland from cracking the machine as early as December 1932 and reading messages prior to and into the war. Poland's sharing of their achievements enabled the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decrypting of Enigma, Lorenz, and other ciphers shortened the war substantially and may even have altered its outcome.
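The reciprocal scrambling path described in this abstract — keyboard to rotors to reflector and back out to the lamps — can be illustrated with a short simulation. The Python sketch below is a deliberately simplified model, not a faithful replica: it uses the well-documented wirings of rotors I–III and reflector B, omits the plugboard and ring settings, and steps only the right-hand rotor, ignoring the real machine's turnover and double-stepping behaviour.

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Historical wirings for rotors I, II, III and reflector B
WIRINGS = {
    "I":   "EKMFLGDQVZNTOWYHXUSPAIBRCJ",
    "II":  "AJDKSIRUXBLHWTMCQGZNPYFVOE",
    "III": "BDFHJLCPRTXVZNYEIWGAKMUSQO",
}
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

def through_rotor(c, wiring, offset, forward):
    # Shift into the rotor's rotated frame, map through the wiring, shift back out.
    shifted = (c + offset) % 26
    mapped = ALPHABET.index(wiring[shifted]) if forward else wiring.index(ALPHABET[shifted])
    return (mapped - offset) % 26

def encipher(text, start="AAA"):
    order = ("I", "II", "III")                 # left, middle, right rotor
    offsets = [ALPHABET.index(s) for s in start]
    out = []
    for ch in text.upper():
        if ch not in ALPHABET:
            continue
        offsets[2] = (offsets[2] + 1) % 26     # step the right rotor (simplified)
        c = ALPHABET.index(ch)
        for i in (2, 1, 0):                    # right to left, towards the reflector
            c = through_rotor(c, WIRINGS[order[i]], offsets[i], forward=True)
        c = ALPHABET.index(REFLECTOR_B[c])     # reflector: a fixed involution
        for i in (0, 1, 2):                    # back out, left to right
            c = through_rotor(c, WIRINGS[order[i]], offsets[i], forward=False)
        out.append(ALPHABET[c])
    return "".join(out)

ciphertext = encipher("ENIGMA")
assert encipher(ciphertext) == "ENIGMA"  # same settings decrypt: the machine is reciprocal

Because the reflector makes each per-keypress substitution an involution, running ciphertext through a machine at the same starting position restores the plaintext; the same design also means no letter can ever encipher to itself, a regularity Allied cryptanalysts exploited.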
2001-11-14T15:51:07Z
2023-12-14T19:59:43Z
[ "Template:Efn", "Template:Val", "Template:Cite patent", "Template:ISBN", "Template:Authority control", "Template:Short description", "Template:Use dmy dates", "Template:Citation needed", "Template:Cite web", "Template:TOC limit", "Template:Convert", "Template:Sfn", "Template:Clear", "Template:US patent", "Template:Refbegin", "Template:Cite journal", "Template:Commons category", "Template:Cite book", "Template:Cite news", "Template:Webarchive", "Template:Harvnb", "Template:About", "Template:EnigmaSeries", "Template:Main", "Template:Mvar", "Template:Cite thesis", "Template:Refend", "Template:Cryptography navbox", "Template:See also", "Template:Ship", "Template:Notelist", "Template:Reflist", "Template:Curlie" ]
https://en.wikipedia.org/wiki/Enigma_machine
9,257
Enzyme
Enzymes (/ˈɛnzaɪmz/) are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties. Enzymes are known to catalyze more than 5,000 biochemical reaction types. Other biocatalysts are catalytic RNA molecules, called ribozymes. An enzyme's specificity comes from its unique three-dimensional structure. Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties. Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew. By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified. French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called "ferments", which were thought to function only within living organisms. He wrote that "alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells." In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term enzyme, which comes from Ancient Greek ἔνζυμον (énzymon) 'leavened, in yeast', to describe this process. The word enzyme was used later to refer to nonliving substances such as pepsin, and the word ferment was used to refer to chemical activity produced by living organisms. Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. 
In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose "zymase". In 1907, he received the Nobel Prize in Chemistry for "his discovery of cell-free fermentation". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix -ase is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or with the type of reaction (e.g., DNA polymerase forms DNA polymers). The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins per se were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry. The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail. Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity. Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in -ase. Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes. The International Union of Biochemistry and Molecular Biology have developed a nomenclature for enzymes, the EC numbers (for "Enzyme Commission"). Each enzyme is described by "EC" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). That is, the first number broadly classifies the enzyme based on its mechanism while the other digits add more and more specificity. The top-level classification is: EC 1, oxidoreductases; EC 2, transferases; EC 3, hydrolases; EC 4, lyases; EC 5, isomerases; EC 6, ligases; and EC 7, translocases. These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1). Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. 
These families have been documented in dozens of different protein and protein family databases such as Pfam. Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called non-homologous isofunctional enzymes. Horizontal gene transfer may spread these genes to unrelated species, especially bacteria where they can replace endogenous genes of the same function, leading to non-homologous gene displacement. Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate. Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site. In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity. A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome, which is a complex of protein and catalytic RNA components. Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to which substrates they bind and which chemical reactions they catalyse. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific. Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have "proof-reading" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes. 
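The error rate quoted above for proofreading polymerases follows from simple probability: two sequential, independent checks multiply their individual error rates. The Python sketch below is a toy calculation with illustrative round numbers, not measured values for any particular enzyme.

# Hypothetical error rates for the two steps of a proofreading polymerase
initial_selection_error = 1e-4   # wrong substrate accepted by the synthesis step
proofreading_error = 1e-4        # wrong product passed by the checking step

combined_error = initial_selection_error * proofreading_error
print(combined_error)            # 1e-08: about 1 error in 100 million reactions

# Chance of copying a 3,000-base gene with no errors at the combined rate
print((1 - combined_error) ** 3000)   # roughly 0.99997

On these toy numbers, removing the second step would mean an error about every ten thousand reactions, which over gene-sized templates would be frequent rather than rare.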
Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function. To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as "the lock and key" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve. In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined. Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism. Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, the Gibbs free energy of activation): for example, by stabilising the transition state, by providing an alternative reaction pathway, or by destabilising the substrate's ground state. Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilise charge build-up on the transition states using an oxyanion hole, and complete hydrolysis using an oriented water substrate. Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory. Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Within the membrane, an enzyme can instead be sequestered into lipid rafts, away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane. Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. 
These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway. Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme. Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase). An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions. Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme (or haloenzyme). The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity. Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body de novo and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include the hydride ion (carried by NAD or NADP), the phosphate group (carried by ATP), the acetyl group (carried by coenzyme A), and methyl groups (carried by S-adenosylmethionine). Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH. Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and S-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day. As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. 
For example, carbonic anhydrase catalyzes its reaction in either direction depending on the concentration of its reactants: CO2 + H2O ⇌ H2CO3. The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡). Finally, the enzyme-product complex (EP) dissociates to release the products. Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to "drive" a thermodynamically unfavourable one, so that the combined energy of the products is lower than that of the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions. Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today. Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen, giving the characteristic saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme. Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic Km for a given substrate. Another useful constant is kcat, also called the turnover number, which is the number of substrate molecules handled by one active site per second. The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10^8 to 10^9 (M^-1 s^-1). 
At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called catalytically perfect or kinetically perfect. Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10^5 s^-1 M^-1 and 10 s^-1, respectively. Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects. Enzyme reaction rates can be decreased by various types of enzyme inhibitors. A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate. The structures of dihydrofolate and this drug are closely similar. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding site of the usual substrate and exert an allosteric effect to change the shape of the usual binding site. A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However, the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration. An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare. A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation. An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner. In many organisms, inhibitors may act as part of a feedback mechanism. If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is a sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism. 
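The kinetic quantities above, and the contrasting effects of competitive and non-competitive inhibitors on them, can be summarised in a few lines of code. This is a generic textbook sketch of the Michaelis–Menten rate law, v = Vmax·[S] / (Km + [S]), with the standard inhibitor corrections; all numerical values are arbitrary illustrations.

def rate(s, vmax, km, inhibitor=0.0, ki=float("inf"), competitive=True):
    # Michaelis-Menten rate with an optional reversible inhibitor.
    # Competitive inhibition raises the apparent Km (overcome by high [S]);
    # non-competitive inhibition lowers the apparent Vmax instead.
    factor = 1.0 + inhibitor / ki
    if competitive:
        return vmax * s / (km * factor + s)
    return (vmax / factor) * s / (km + s)

vmax, km = 100.0, 2.0                     # arbitrary units
for s in (0.5, 2.0, 20.0, 200.0):
    print(s, rate(s, vmax, km))           # approaches vmax as [S] saturates
print(rate(2.0, vmax, km, inhibitor=5.0, ki=1.0, competitive=True))   # ~14.3
print(rate(2.0, vmax, km, inhibitor=5.0, ki=1.0, competitive=False))  # ~8.3

At [S] = Km the uninhibited rate is exactly half of Vmax, matching the definition of the Michaelis constant given above, and raising [S] restores the rate under competitive but not non-competitive inhibition.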
Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration. As enzymes are made up of proteins, their actions are sensitive to changes in many physicochemical factors such as pH, temperature and substrate concentration; each enzyme has a characteristic pH optimum. Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase. An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber. Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme. Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation: for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme. Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would not progress through the same steps and could not be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions. There are five main ways that enzyme activity is controlled in the cell. Enzymes can be either activated or inhibited by other molecules. 
For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called committed step), thus regulating the amount of end product made by the pathways. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. A negative feedback mechanism can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cells. This helps with the effective allocation of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms. Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the small intestine, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it enters the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme. Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation. The opposite of enzyme induction is enzyme repression. Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane-bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments. In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. 
For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production. Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase. One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired. Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance. Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes. This causes a slow accumulation of mutations and results in the development of cancers. An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light. Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities alongside their sequences. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies, although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase) which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not; hence, this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in their substrate binding pockets. This is frequently seen in the main enzyme classes such as kinases. Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below). 
Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or in vitro evolution. These efforts have begun to be successful, and a few enzymes have now been designed "from scratch" to catalyze reactions that do not occur in nature.
[ { "paragraph_id": 0, "text": "Enzymes (/ˈɛnzaɪmz/) are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties.", "title": "" }, { "paragraph_id": 1, "text": "Enzymes are known to catalyze more than 5,000 biochemical reaction types. Other biocatalysts are catalytic RNA molecules, called ribozymes. An enzyme's specificity comes from its unique three-dimensional structure.", "title": "" }, { "paragraph_id": 2, "text": "Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties.", "title": "" }, { "paragraph_id": 3, "text": "Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.", "title": "" }, { "paragraph_id": 4, "text": "By the late 17th and early 18th centuries, the digestion of meat by stomach secretions and the conversion of starch to sugars by plant extracts and saliva were known but the mechanisms by which these occurred had not been identified.", "title": "Etymology and history" }, { "paragraph_id": 5, "text": "French chemist Anselme Payen was the first to discover an enzyme, diastase, in 1833. A few decades later, when studying the fermentation of sugar to alcohol by yeast, Louis Pasteur concluded that this fermentation was caused by a vital force contained within the yeast cells called \"ferments\", which were thought to function only within living organisms. He wrote that \"alcoholic fermentation is an act correlated with the life and organization of the yeast cells, not with the death or putrefaction of the cells.\"", "title": "Etymology and history" }, { "paragraph_id": 6, "text": "In 1877, German physiologist Wilhelm Kühne (1837–1900) first used the term enzyme, which comes from Ancient Greek ἔνζυμον (énzymon) 'leavened, in yeast', to describe this process. 
The word enzyme was used later to refer to nonliving substances such as pepsin, and the word ferment was used to refer to chemical activity produced by living organisms.", "title": "Etymology and history" }, { "paragraph_id": 7, "text": "Eduard Buchner submitted his first paper on the study of yeast extracts in 1897. In a series of experiments at the University of Berlin, he found that sugar was fermented by yeast extracts even when there were no living yeast cells in the mixture. He named the enzyme that brought about the fermentation of sucrose \"zymase\". In 1907, he received the Nobel Prize in Chemistry for \"his discovery of cell-free fermentation\". Following Buchner's example, enzymes are usually named according to the reaction they carry out: the suffix -ase is combined with the name of the substrate (e.g., lactase is the enzyme that cleaves lactose) or to the type of reaction (e.g., DNA polymerase forms DNA polymers).", "title": "Etymology and history" }, { "paragraph_id": 8, "text": "The biochemical identity of enzymes was still unknown in the early 1900s. Many scientists observed that enzymatic activity was associated with proteins, but others (such as Nobel laureate Richard Willstätter) argued that proteins were merely carriers for the true enzymes and that proteins per se were incapable of catalysis. In 1926, James B. Sumner showed that the enzyme urease was a pure protein and crystallized it; he did likewise for the enzyme catalase in 1937. The conclusion that pure proteins can be enzymes was definitively demonstrated by John Howard Northrop and Wendell Meredith Stanley, who worked on the digestive enzymes pepsin (1930), trypsin and chymotrypsin. These three scientists were awarded the 1946 Nobel Prize in Chemistry.", "title": "Etymology and history" }, { "paragraph_id": 9, "text": "The discovery that enzymes could be crystallized eventually allowed their structures to be solved by x-ray crystallography. This was first done for lysozyme, an enzyme found in tears, saliva and egg whites that digests the coating of some bacteria; the structure was solved by a group led by David Chilton Phillips and published in 1965. This high-resolution structure of lysozyme marked the beginning of the field of structural biology and the effort to understand how enzymes work at an atomic level of detail.", "title": "Etymology and history" }, { "paragraph_id": 10, "text": "Enzymes can be classified by two main criteria: either amino acid sequence similarity (and thus evolutionary relationship) or enzymatic activity.", "title": "Classification and nomenclature" }, { "paragraph_id": 11, "text": "Enzyme activity. An enzyme's name is often derived from its substrate or the chemical reaction it catalyzes, with the word ending in -ase. Examples are lactase, alcohol dehydrogenase and DNA polymerase. Different enzymes that catalyze the same chemical reaction are called isozymes.", "title": "Classification and nomenclature" }, { "paragraph_id": 12, "text": "The International Union of Biochemistry and Molecular Biology have developed a nomenclature for enzymes, the EC numbers (for \"Enzyme Commission\"). Each enzyme is described by \"EC\" followed by a sequence of four numbers which represent the hierarchy of enzymatic activity (from very general to very specific). 
That is, the first number broadly classifies the enzyme based on its mechanism while the other digits add more and more specificity.", "title": "Classification and nomenclature" }, { "paragraph_id": 13, "text": "The top-level classification is:", "title": "Classification and nomenclature" }, { "paragraph_id": 14, "text": "These sections are subdivided by other features such as the substrate, products, and chemical mechanism. An enzyme is fully specified by four numerical designations. For example, hexokinase (EC 2.7.1.1) is a transferase (EC 2) that adds a phosphate group (EC 2.7) to a hexose sugar, a molecule containing an alcohol group (EC 2.7.1).", "title": "Classification and nomenclature" }, { "paragraph_id": 15, "text": "Sequence similarity. EC categories do not reflect sequence similarity. For instance, two ligases of the same EC number that catalyze exactly the same reaction can have completely different sequences. Independent of their function, enzymes, like any other proteins, have been classified by their sequence similarity into numerous families. These families have been documented in dozens of different protein and protein family databases such as Pfam.", "title": "Classification and nomenclature" }, { "paragraph_id": 16, "text": "Non-homologous isofunctional enzymes. Unrelated enzymes that have the same enzymatic activity have been called non-homologous isofunctional enzymes. Horizontal gene transfer may spread these genes to unrelated species, especially bacteria where they can replace endogenous genes of the same function, leading to non-homologous gene displacement.", "title": "Classification and nomenclature" }, { "paragraph_id": 17, "text": "Enzymes are generally globular proteins, acting alone or in larger complexes. The sequence of the amino acids specifies the structure which in turn determines the catalytic activity of the enzyme. Although structure determines function, a novel enzymatic activity cannot yet be predicted from structure alone. Enzyme structures unfold (denature) when heated or exposed to chemical denaturants and this disruption to the structure typically causes a loss of activity. Enzyme denaturation is normally linked to temperatures above a species' normal level; as a result, enzymes from bacteria living in volcanic environments such as hot springs are prized by industrial users for their ability to function at high temperatures, allowing enzyme-catalysed reactions to be operated at a very high rate.", "title": "Structure" }, { "paragraph_id": 18, "text": "Enzymes are usually much larger than their substrates. Sizes range from just 62 amino acid residues, for the monomer of 4-oxalocrotonate tautomerase, to over 2,500 residues in the animal fatty acid synthase. Only a small portion of their structure (around 2–4 amino acids) is directly involved in catalysis: the catalytic site. This catalytic site is located next to one or more binding sites where residues orient the substrates. The catalytic site and binding site together compose the enzyme's active site. The remaining majority of the enzyme structure serves to maintain the precise orientation and dynamics of the active site.", "title": "Structure" }, { "paragraph_id": 19, "text": "In some enzymes, no amino acids are directly involved in catalysis; instead, the enzyme contains sites to bind and orient catalytic cofactors. 
Enzyme structures may also contain allosteric sites where the binding of a small molecule causes a conformational change that increases or decreases activity.", "title": "Structure" }, { "paragraph_id": 20, "text": "A small number of RNA-based biological catalysts called ribozymes exist, which again can act alone or in complex with proteins. The most common of these is the ribosome, which is a complex of protein and catalytic RNA components.", "title": "Structure" }, { "paragraph_id": 21, "text": "Enzymes must bind their substrates before they can catalyse any chemical reaction. Enzymes are usually very specific as to which substrates they bind and which chemical reactions they catalyse. Specificity is achieved by binding pockets with complementary shape, charge and hydrophilic/hydrophobic characteristics to the substrates. Enzymes can therefore distinguish between very similar substrate molecules to be chemoselective, regioselective and stereospecific.", "title": "Mechanism" }, { "paragraph_id": 22, "text": "Some of the enzymes showing the highest specificity and accuracy are involved in the copying and expression of the genome. Some of these enzymes have \"proof-reading\" mechanisms. Here, an enzyme such as DNA polymerase catalyzes a reaction in a first step and then checks that the product is correct in a second step. This two-step process results in average error rates of less than 1 error in 100 million reactions in high-fidelity mammalian polymerases. Similar proofreading mechanisms are also found in RNA polymerase, aminoacyl tRNA synthetases and ribosomes.", "title": "Mechanism" }, { "paragraph_id": 23, "text": "Conversely, some enzymes display enzyme promiscuity, having broad specificity and acting on a range of different physiologically relevant substrates. Many enzymes possess small side activities which arose fortuitously (i.e. neutrally), which may be the starting point for the evolutionary selection of a new function.", "title": "Mechanism" }, { "paragraph_id": 24, "text": "To explain the observed specificity of enzymes, in 1894 Emil Fischer proposed that both the enzyme and the substrate possess specific complementary geometric shapes that fit exactly into one another. This is often referred to as \"the lock and key\" model. This early model explains enzyme specificity, but fails to explain the stabilization of the transition state that enzymes achieve.", "title": "Mechanism" }, { "paragraph_id": 25, "text": "In 1958, Daniel Koshland suggested a modification to the lock and key model: since enzymes are rather flexible structures, the active site is continuously reshaped by interactions with the substrate as the substrate interacts with the enzyme. As a result, the substrate does not simply bind to a rigid active site; the amino acid side-chains that make up the active site are molded into the precise positions that enable the enzyme to perform its catalytic function. In some cases, such as glycosidases, the substrate molecule also changes shape slightly as it enters the active site. The active site continues to change until the substrate is completely bound, at which point the final shape and charge distribution is determined. 
Induced fit may enhance the fidelity of molecular recognition in the presence of competition and noise via the conformational proofreading mechanism.", "title": "Mechanism" }, { "paragraph_id": 26, "text": "Enzymes can accelerate reactions in several ways, all of which lower the activation energy (ΔG‡, the Gibbs free energy of activation).", "title": "Mechanism" }, { "paragraph_id": 27, "text": "Enzymes may use several of these mechanisms simultaneously. For example, proteases such as trypsin perform covalent catalysis using a catalytic triad, stabilise charge build-up on the transition states using an oxyanion hole, and complete hydrolysis using an oriented water substrate.", "title": "Mechanism" }, { "paragraph_id": 28, "text": "Enzymes are not rigid, static structures; instead they have complex internal dynamic motions – that is, movements of parts of the enzyme's structure such as individual amino acid residues, groups of residues forming a protein loop or unit of secondary structure, or even an entire protein domain. These motions give rise to a conformational ensemble of slightly different structures that interconvert with one another at equilibrium. Different states within this ensemble may be associated with different aspects of an enzyme's function. For example, different conformations of the enzyme dihydrofolate reductase are associated with the substrate binding, catalysis, cofactor release, and product release steps of the catalytic cycle, consistent with catalytic resonance theory.", "title": "Mechanism" }, { "paragraph_id": 29, "text": "Substrate presentation is a process where the enzyme is sequestered away from its substrate. Enzymes can be sequestered to the plasma membrane away from a substrate in the nucleus or cytosol. Within the membrane, an enzyme can also be sequestered into lipid rafts away from its substrate in the disordered region. When the enzyme is released it mixes with its substrate. Alternatively, the enzyme can be sequestered near its substrate to activate the enzyme. For example, the enzyme can be soluble and upon activation bind to a lipid in the plasma membrane and then act upon molecules in the plasma membrane.", "title": "Mechanism" }, { "paragraph_id": 30, "text": "Allosteric sites are pockets on the enzyme, distinct from the active site, that bind to molecules in the cellular environment. These molecules then cause a change in the conformation or dynamics of the enzyme that is transduced to the active site and thus affects the reaction rate of the enzyme. In this way, allosteric interactions can either inhibit or activate enzymes. Allosteric interactions with metabolites upstream or downstream in an enzyme's metabolic pathway cause feedback regulation, altering the activity of the enzyme according to the flux through the rest of the pathway.", "title": "Mechanism" }, { "paragraph_id": 31, "text": "Some enzymes do not need additional components to show full activity. Others require non-protein molecules called cofactors to be bound for activity. Cofactors can be either inorganic (e.g., metal ions and iron–sulfur clusters) or organic compounds (e.g., flavin and heme). These cofactors serve many purposes; for instance, metal ions can help in stabilizing nucleophilic species within the active site. Organic cofactors can be either coenzymes, which are released from the enzyme's active site during the reaction, or prosthetic groups, which are tightly bound to an enzyme.
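The link between a lower activation energy and a faster rate can be made quantitative with the Arrhenius relation k = A exp(-Ea/RT); the sketch below is illustrative only, and the 30 kJ/mol reduction is an assumed example value, not a number from this article:

    import math

    # Fold rate-enhancement implied by lowering the activation energy by
    # delta_Ea, via the Arrhenius form k = A * exp(-Ea / (R * T)).
    R = 8.314        # gas constant, J/(mol*K)
    T = 310.0        # roughly physiological temperature, K
    delta_Ea = 30e3  # assumed reduction in activation energy, J/mol

    fold_enhancement = math.exp(delta_Ea / (R * T))
    print(f"~{fold_enhancement:.1e}-fold rate enhancement")  # ~1.1e+05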
Organic prosthetic groups can be covalently bound (e.g., biotin in enzymes such as pyruvate carboxylase).", "title": "Cofactors" }, { "paragraph_id": 32, "text": "An example of an enzyme that contains a cofactor is carbonic anhydrase, which uses a zinc cofactor bound as part of its active site. These tightly bound ions or molecules are usually found in the active site and are involved in catalysis. For example, flavin and heme cofactors are often involved in redox reactions.", "title": "Cofactors" }, { "paragraph_id": 33, "text": "Enzymes that require a cofactor but do not have one bound are called apoenzymes or apoproteins. An enzyme together with the cofactor(s) required for activity is called a holoenzyme (or haloenzyme). The term holoenzyme can also be applied to enzymes that contain multiple protein subunits, such as the DNA polymerases; here the holoenzyme is the complete complex containing all the subunits needed for activity.", "title": "Cofactors" }, { "paragraph_id": 34, "text": "Coenzymes are small organic molecules that can be loosely or tightly bound to an enzyme. Coenzymes transport chemical groups from one enzyme to another. Examples include NADH, NADPH and adenosine triphosphate (ATP). Some coenzymes, such as flavin mononucleotide (FMN), flavin adenine dinucleotide (FAD), thiamine pyrophosphate (TPP), and tetrahydrofolate (THF), are derived from vitamins. These coenzymes cannot be synthesized by the body de novo and closely related compounds (vitamins) must be acquired from the diet. The chemical groups carried include the hydride ion (H−) carried by NAD or NADP, the phosphate group carried by ATP, the acetyl group carried by coenzyme A, formyl, methenyl or methyl groups carried by folic acid, and the methyl group carried by S-adenosylmethionine.", "title": "Cofactors" }, { "paragraph_id": 35, "text": "Since coenzymes are chemically changed as a consequence of enzyme action, it is useful to consider coenzymes to be a special class of substrates, or second substrates, which are common to many different enzymes. For example, about 1000 enzymes are known to use the coenzyme NADH.", "title": "Cofactors" }, { "paragraph_id": 36, "text": "Coenzymes are usually continuously regenerated and their concentrations maintained at a steady level inside the cell. For example, NADPH is regenerated through the pentose phosphate pathway and S-adenosylmethionine by methionine adenosyltransferase. This continuous regeneration means that small amounts of coenzymes can be used very intensively. For example, the human body turns over its own weight in ATP each day.", "title": "Cofactors" }, { "paragraph_id": 37, "text": "As with all catalysts, enzymes do not alter the position of the chemical equilibrium of the reaction. In the presence of an enzyme, the reaction runs in the same direction as it would without the enzyme, just more quickly. For example, carbonic anhydrase catalyzes its reaction in either direction depending on the concentration of its reactants: CO2 + H2O ⇌ H2CO3, running toward carbonic acid in tissues, where the CO2 concentration is high, and in reverse in the lungs, where it is low.", "title": "Thermodynamics" }, { "paragraph_id": 38, "text": "The rate of a reaction is dependent on the activation energy needed to form the transition state which then decays into products. Enzymes increase reaction rates by lowering the energy of the transition state. First, binding forms a low energy enzyme-substrate complex (ES). Second, the enzyme stabilises the transition state such that it requires less energy to achieve compared to the uncatalyzed reaction (ES‡).
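The claim above that the body turns over its own weight in ATP each day can be sanity-checked with rough numbers; every figure in this sketch is an order-of-magnitude assumption for illustration only:

    # Order-of-magnitude check on daily ATP turnover (all values assumed).
    body_mass_kg = 70.0      # assumed body mass
    atp_molar_mass = 0.507   # kg/mol (ATP is ~507 g/mol)
    atp_pool_mol = 0.1       # assumed ATP present in the body at any instant

    turnover_mol = body_mass_kg / atp_molar_mass   # ~138 mol of ATP per day
    recycles = turnover_mol / atp_pool_mol         # ~1400 cycles per molecule
    print(f"each ATP molecule is recycled ~{recycles:.0f} times per day")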
Finally, the enzyme-product complex (EP) dissociates to release the products.", "title": "Thermodynamics" }, { "paragraph_id": 39, "text": "Enzymes can couple two or more reactions, so that a thermodynamically favorable reaction can be used to \"drive\" a thermodynamically unfavorable one, so that the combined energy of the products is lower than that of the substrates. For example, the hydrolysis of ATP is often used to drive other chemical reactions.", "title": "Thermodynamics" }, { "paragraph_id": 40, "text": "Enzyme kinetics is the investigation of how enzymes bind substrates and turn them into products. The rate data used in kinetic analyses are commonly obtained from enzyme assays. In 1913 Leonor Michaelis and Maud Leonora Menten proposed a quantitative theory of enzyme kinetics, which is referred to as Michaelis–Menten kinetics. The major contribution of Michaelis and Menten was to think of enzyme reactions in two stages. In the first, the substrate binds reversibly to the enzyme, forming the enzyme-substrate complex. This is sometimes called the Michaelis–Menten complex in their honor. The enzyme then catalyzes the chemical step in the reaction and releases the product. This work was further developed by G. E. Briggs and J. B. S. Haldane, who derived kinetic equations that are still widely used today.", "title": "Kinetics" }, { "paragraph_id": 41, "text": "Enzyme rates depend on solution conditions and substrate concentration. To find the maximum speed of an enzymatic reaction, the substrate concentration is increased until a constant rate of product formation is seen. This behavior is described by a saturation curve. Saturation happens because, as substrate concentration increases, more and more of the free enzyme is converted into the substrate-bound ES complex. At the maximum reaction rate (Vmax) of the enzyme, all the enzyme active sites are bound to substrate, and the amount of ES complex is the same as the total amount of enzyme.", "title": "Kinetics" }, { "paragraph_id": 42, "text": "Vmax is only one of several important kinetic parameters. The amount of substrate needed to achieve a given rate of reaction is also important. This is given by the Michaelis–Menten constant (Km), which is the substrate concentration required for an enzyme to reach one-half its maximum reaction rate; generally, each enzyme has a characteristic Km for a given substrate. Another useful constant is kcat, also called the turnover number, which is the number of substrate molecules handled by one active site per second.", "title": "Kinetics" }, { "paragraph_id": 43, "text": "The efficiency of an enzyme can be expressed in terms of kcat/Km. This is also called the specificity constant and incorporates the rate constants for all steps in the reaction up to and including the first irreversible step. Because the specificity constant reflects both affinity and catalytic ability, it is useful for comparing different enzymes against each other, or the same enzyme with different substrates. The theoretical maximum for the specificity constant is called the diffusion limit and is about 10⁸ to 10⁹ M⁻¹s⁻¹. At this point every collision of the enzyme with its substrate will result in catalysis, and the rate of product formation is not limited by the reaction rate but by the diffusion rate. Enzymes with this property are called catalytically perfect or kinetically perfect.
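The saturation behavior described above follows directly from the Michaelis–Menten rate law v = Vmax[S]/(Km + [S]). The following sketch evaluates it with arbitrary example parameters; note that at [S] = Km the rate is exactly Vmax/2:

    # Michaelis-Menten rate law with arbitrary illustrative parameters.
    def mm_rate(s: float, vmax: float = 100.0, km: float = 5.0) -> float:
        """Reaction rate at substrate concentration s (same units as km)."""
        return vmax * s / (km + s)

    for s in (0.5, 5.0, 50.0, 500.0):
        print(f"[S] = {s:6.1f}  ->  v = {mm_rate(s):5.1f}")
    # Output rises from ~9.1 through 50.0 (= Vmax/2 at [S] = Km)
    # toward the Vmax plateau of 100 as the enzyme saturates.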
Examples of such enzymes are triose-phosphate isomerase, carbonic anhydrase, acetylcholinesterase, catalase, fumarase, β-lactamase, and superoxide dismutase. The turnover of such enzymes can reach several million reactions per second. But most enzymes are far from perfect: the average values of kcat/Km and kcat are about 10⁵ s⁻¹M⁻¹ and 10 s⁻¹, respectively.", "title": "Kinetics" }, { "paragraph_id": 44, "text": "Michaelis–Menten kinetics relies on the law of mass action, which is derived from the assumptions of free diffusion and thermodynamically driven random collision. Many biochemical or cellular processes deviate significantly from these conditions, because of macromolecular crowding and constrained molecular movement. More recent, complex extensions of the model attempt to correct for these effects.", "title": "Kinetics" }, { "paragraph_id": 45, "text": "Enzyme reaction rates can be decreased by various types of enzyme inhibitors.", "title": "Inhibition" }, { "paragraph_id": 46, "text": "A competitive inhibitor and substrate cannot bind to the enzyme at the same time. Often competitive inhibitors strongly resemble the real substrate of the enzyme. For example, the drug methotrexate is a competitive inhibitor of the enzyme dihydrofolate reductase, which catalyzes the reduction of dihydrofolate to tetrahydrofolate. The structures of dihydrofolate and this drug are closely similar. This type of inhibition can be overcome with high substrate concentration. In some cases, the inhibitor can bind to a site other than the binding-site of the usual substrate and exert an allosteric effect to change the shape of the usual binding-site.", "title": "Inhibition" }, { "paragraph_id": 47, "text": "A non-competitive inhibitor binds to a site other than where the substrate binds. The substrate still binds with its usual affinity and hence Km remains the same. However the inhibitor reduces the catalytic efficiency of the enzyme so that Vmax is reduced. In contrast to competitive inhibition, non-competitive inhibition cannot be overcome with high substrate concentration.", "title": "Inhibition" }, { "paragraph_id": 48, "text": "An uncompetitive inhibitor cannot bind to the free enzyme, only to the enzyme-substrate complex; hence, these types of inhibitors are most effective at high substrate concentration. In the presence of the inhibitor, the enzyme-substrate complex is inactive. This type of inhibition is rare.", "title": "Inhibition" }, { "paragraph_id": 49, "text": "A mixed inhibitor binds to an allosteric site and the binding of the substrate and the inhibitor affect each other. The enzyme's function is reduced but not eliminated when bound to the inhibitor. This type of inhibitor does not follow the Michaelis–Menten equation.", "title": "Inhibition" }, { "paragraph_id": 50, "text": "An irreversible inhibitor permanently inactivates the enzyme, usually by forming a covalent bond to the protein. Penicillin and aspirin are common drugs that act in this manner.", "title": "Inhibition" }, { "paragraph_id": 51, "text": "In many organisms, inhibitors may act as part of a feedback mechanism.
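The differences between these reversible-inhibition types can be summarized with the standard textbook mixed-inhibition form of the rate law, v = Vmax[S]/(αKm + α′[S]), where α = 1 + [I]/Ki reflects inhibitor binding to free enzyme and α′ = 1 + [I]/Ki′ reflects binding to the ES complex. This parameterization is a general textbook treatment, not taken from this article, and the values below are arbitrary:

    # General reversible-inhibition rate law: competitive raises apparent Km;
    # uncompetitive lowers apparent Vmax and Km together; pure non-competitive
    # lowers Vmax with Km unchanged.
    def inhibited_rate(s, vmax=100.0, km=5.0, alpha=1.0, alpha_prime=1.0):
        return vmax * s / (alpha * km + alpha_prime * s)

    s = 5.0  # substrate concentration equal to Km, for comparison
    print("uninhibited:    ", inhibited_rate(s))                          # 50.0
    print("competitive:    ", inhibited_rate(s, alpha=3.0))               # 25.0
    print("uncompetitive:  ", inhibited_rate(s, alpha_prime=3.0))         # 25.0
    print("non-competitive:", inhibited_rate(s, alpha=3.0, alpha_prime=3.0))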
If an enzyme produces too much of one substance in the organism, that substance may act as an inhibitor for the enzyme at the beginning of the pathway that produces it, causing production of the substance to slow down or stop when there is a sufficient amount. This is a form of negative feedback. Major metabolic pathways such as the citric acid cycle make use of this mechanism.", "title": "Inhibition" }, { "paragraph_id": 52, "text": "Since inhibitors modulate the function of enzymes they are often used as drugs. Many such drugs are reversible competitive inhibitors that resemble the enzyme's native substrate, similar to methotrexate above; other well-known examples include statins used to treat high cholesterol, and protease inhibitors used to treat retroviral infections such as HIV. A common example of an irreversible inhibitor that is used as a drug is aspirin, which inhibits the COX-1 and COX-2 enzymes that produce the inflammation messenger prostaglandin. Other enzyme inhibitors are poisons. For example, the poison cyanide is an irreversible enzyme inhibitor that combines with the copper and iron in the active site of the enzyme cytochrome c oxidase and blocks cellular respiration.", "title": "Inhibition" }, { "paragraph_id": 53, "text": "As enzymes are made up of proteins, their actions are sensitive to change in many physicochemical factors such as pH, temperature, substrate concentration, etc.", "title": "Factors affecting enzyme activity" }, { "paragraph_id": 54, "text": "Each enzyme has a characteristic pH optimum: pepsin, for example, works best in the acidic environment of the stomach (around pH 2), while trypsin works best in the mildly alkaline small intestine (around pH 8).", "title": "Factors affecting enzyme activity" }, { "paragraph_id": 55, "text": "Enzymes serve a wide variety of functions inside living organisms. They are indispensable for signal transduction and cell regulation, often via kinases and phosphatases. They also generate movement, with myosin hydrolyzing adenosine triphosphate (ATP) to generate muscle contraction, and also transport cargo around the cell as part of the cytoskeleton. Other ATPases in the cell membrane are ion pumps involved in active transport. Enzymes are also involved in more exotic functions, such as luciferase generating light in fireflies. Viruses can also contain enzymes for infecting cells, such as the HIV integrase and reverse transcriptase, or for viral release from cells, like the influenza virus neuraminidase.", "title": "Biological function" }, { "paragraph_id": 56, "text": "An important function of enzymes is in the digestive systems of animals. Enzymes such as amylases and proteases break down large molecules (starch or proteins, respectively) into smaller ones, so they can be absorbed by the intestines. Starch molecules, for example, are too large to be absorbed from the intestine, but enzymes hydrolyze the starch chains into smaller molecules such as maltose and eventually glucose, which can then be absorbed. Different enzymes digest different food substances. In ruminants, which have herbivorous diets, microorganisms in the gut produce another enzyme, cellulase, to break down the cellulose cell walls of plant fiber.", "title": "Biological function" }, { "paragraph_id": 57, "text": "Several enzymes can work together in a specific order, creating metabolic pathways. In a metabolic pathway, one enzyme takes the product of another enzyme as a substrate. After the catalytic reaction, the product is then passed on to another enzyme.
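A toy simulation illustrates how the end-product feedback inhibition described above settles production at a steady level; the model and all of its constants are illustrative assumptions, not values from this article:

    # Toy model: the pathway's first enzyme is inhibited by the end product
    # P, so production falls as P accumulates and P settles near a steady
    # level instead of growing without bound.
    p = 0.0          # end-product concentration
    ki = 2.0         # assumed feedback-inhibition constant
    v_in = 10.0      # assumed uninhibited production rate (first enzyme)
    k_out = 1.0      # assumed first-order consumption of P

    for _ in range(50):
        production = v_in / (1.0 + p / ki)   # inhibited first step
        p += production - k_out * p          # one coarse time step
    print(f"steady end-product level ~ {p:.2f}")  # converges near 3.6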
Sometimes more than one enzyme can catalyze the same reaction in parallel; this can allow more complex regulation: with, for example, a low constant activity provided by one enzyme but an inducible high activity from a second enzyme.", "title": "Biological function" }, { "paragraph_id": 58, "text": "Enzymes determine what steps occur in these pathways. Without enzymes, metabolism would not progress through the same steps, nor could it be regulated to serve the needs of the cell. Most central metabolic pathways are regulated at a few key steps, typically through enzymes whose activity involves the hydrolysis of ATP. Because this reaction releases so much energy, other reactions that are thermodynamically unfavorable can be coupled to ATP hydrolysis, driving the overall series of linked metabolic reactions.", "title": "Biological function" }, { "paragraph_id": 59, "text": "There are five main ways that enzyme activity is controlled in the cell.", "title": "Biological function" }, { "paragraph_id": 60, "text": "Enzymes can be either activated or inhibited by other molecules. For example, the end product(s) of a metabolic pathway are often inhibitors for one of the first enzymes of the pathway (usually the first irreversible step, called committed step), thus regulating the amount of end product made by the pathways. Such a regulatory mechanism is called a negative feedback mechanism, because the amount of the end product produced is regulated by its own concentration. Negative feedback mechanisms can effectively adjust the rate of synthesis of intermediate metabolites according to the demands of the cells. This helps with effective allocations of materials and energy economy, and it prevents the excess manufacture of end products. Like other homeostatic devices, the control of enzymatic action helps to maintain a stable internal environment in living organisms.", "title": "Biological function" }, { "paragraph_id": 61, "text": "Examples of post-translational modification include phosphorylation, myristoylation and glycosylation. For example, in the response to insulin, the phosphorylation of multiple enzymes, including glycogen synthase, helps control the synthesis or degradation of glycogen and allows the cell to respond to changes in blood sugar. Another example of post-translational modification is the cleavage of the polypeptide chain. Chymotrypsin, a digestive protease, is produced in inactive form as chymotrypsinogen in the pancreas and transported in this form to the small intestine, where it is activated. This stops the enzyme from digesting the pancreas or other tissues before it enters the gut. This type of inactive precursor to an enzyme is known as a zymogen or proenzyme.", "title": "Biological function" }, { "paragraph_id": 62, "text": "Enzyme production (transcription and translation of enzyme genes) can be enhanced or diminished by a cell in response to changes in the cell's environment. This form of gene regulation is called enzyme induction. For example, bacteria may become resistant to antibiotics such as penicillin because enzymes called beta-lactamases are induced that hydrolyse the crucial beta-lactam ring within the penicillin molecule. Another example comes from enzymes in the liver called cytochrome P450 oxidases, which are important in drug metabolism. Induction or inhibition of these enzymes can cause drug interactions. Enzyme levels can also be regulated by changing the rate of enzyme degradation.
The opposite of enzyme induction is enzyme repression.", "title": "Biological function" }, { "paragraph_id": 63, "text": "Enzymes can be compartmentalized, with different metabolic pathways occurring in different cellular compartments. For example, fatty acids are synthesized by one set of enzymes in the cytosol, endoplasmic reticulum and Golgi and used by a different set of enzymes as a source of energy in the mitochondrion, through β-oxidation. In addition, trafficking of the enzyme to different compartments may change the degree of protonation (e.g., the neutral cytoplasm and the acidic lysosome) or oxidative state (e.g., oxidizing periplasm or reducing cytoplasm) which in turn affects enzyme activity. In contrast to partitioning into membrane-bound organelles, enzyme subcellular localisation may also be altered through polymerisation of enzymes into macromolecular cytoplasmic filaments.", "title": "Biological function" }, { "paragraph_id": 64, "text": "In multicellular eukaryotes, cells in different organs and tissues have different patterns of gene expression and therefore have different sets of enzymes (known as isozymes) available for metabolic reactions. This provides a mechanism for regulating the overall metabolism of the organism. For example, hexokinase, the first enzyme in the glycolysis pathway, has a specialized form called glucokinase expressed in the liver and pancreas that has a lower affinity for glucose yet is more sensitive to glucose concentration. This enzyme is involved in sensing blood sugar and regulating insulin production.", "title": "Biological function" }, { "paragraph_id": 65, "text": "Since the tight control of enzyme activity is essential for homeostasis, any malfunction (mutation, overproduction, underproduction or deletion) of a single critical enzyme can lead to a genetic disease. The malfunction of just one type of enzyme out of the thousands of types present in the human body can be fatal. An example of a fatal genetic disease due to enzyme insufficiency is Tay–Sachs disease, in which patients lack the enzyme hexosaminidase.", "title": "Biological function" }, { "paragraph_id": 66, "text": "One example of enzyme deficiency is the most common type of phenylketonuria. Many different single amino acid mutations in the enzyme phenylalanine hydroxylase, which catalyzes the first step in the degradation of phenylalanine, result in build-up of phenylalanine and related products. Some mutations are in the active site, directly disrupting binding and catalysis, but many are far from the active site and reduce activity by destabilising the protein structure, or affecting correct oligomerisation. This can lead to intellectual disability if the disease is untreated. Another example is pseudocholinesterase deficiency, in which the body's ability to break down choline ester drugs is impaired. Oral administration of enzymes can be used to treat some functional enzyme deficiencies, such as pancreatic insufficiency and lactose intolerance.", "title": "Biological function" }, { "paragraph_id": 67, "text": "Another way enzyme malfunctions can cause disease comes from germline mutations in genes coding for DNA repair enzymes. Defects in these enzymes cause cancer because cells are less able to repair mutations in their genomes, leading to a slow accumulation of mutations that results in the development of cancer.
An example of such a hereditary cancer syndrome is xeroderma pigmentosum, which causes the development of skin cancers in response to even minimal exposure to ultraviolet light.", "title": "Biological function" }, { "paragraph_id": 68, "text": "Similar to any other protein, enzymes change over time through mutations and sequence divergence. Given their central role in metabolism, enzyme evolution plays a critical role in adaptation. A key question is therefore whether and how enzymes can change their enzymatic activities as they evolve. It is generally accepted that many new enzyme activities have evolved through gene duplication and mutation of the duplicate copies although evolution can also happen without duplication. One example of an enzyme that has changed its activity is the ancestor of methionyl aminopeptidase (MAP) and creatine amidinohydrolase (creatinase), which are clearly homologous but catalyze very different reactions (MAP removes the amino-terminal methionine in new proteins while creatinase hydrolyses creatine to sarcosine and urea). In addition, MAP is metal-ion dependent while creatinase is not; hence, this property was also lost over time. Small changes of enzymatic activity are extremely common among enzymes. In particular, substrate binding specificity (see above) can easily and quickly change with single amino acid changes in their substrate binding pockets. This is frequently seen in the main enzyme classes such as kinases.", "title": "Evolution" }, { "paragraph_id": 69, "text": "Artificial (in vitro) evolution is now commonly used to modify enzyme activity or specificity for industrial applications (see below).", "title": "Evolution" }, { "paragraph_id": 70, "text": "Enzymes are used in the chemical industry and other industrial applications when extremely specific catalysts are required. Enzymes in general are limited in the number of reactions they have evolved to catalyze and also by their lack of stability in organic solvents and at high temperatures. As a consequence, protein engineering is an active area of research and involves attempts to create new enzymes with novel properties, either through rational design or in vitro evolution. These efforts have begun to be successful, and a few enzymes have now been designed \"from scratch\" to catalyze reactions that do not occur in nature.", "title": "Industrial applications" }, { "paragraph_id": 71, "text": "", "title": "External links" } ]
Enzymes are proteins that act as biological catalysts by accelerating chemical reactions. The molecules upon which enzymes may act are called substrates, and the enzyme converts the substrates into different molecules known as products. Almost all metabolic processes in the cell need enzyme catalysis in order to occur at rates fast enough to sustain life. Metabolic pathways depend upon enzymes to catalyze individual steps. The study of enzymes is called enzymology and the field of pseudoenzyme analysis recognizes that during evolution, some enzymes have lost the ability to carry out biological catalysis, which is often reflected in their amino acid sequences and unusual 'pseudocatalytic' properties. Enzymes are known to catalyze more than 5,000 biochemical reaction types. Other biocatalysts are catalytic RNA molecules, called ribozymes. An enzyme's specificity comes from its unique three-dimensional structure. Like all catalysts, enzymes increase the reaction rate by lowering its activation energy. Some enzymes can make their conversion of substrate to product occur many millions of times faster. An extreme example is orotidine 5'-phosphate decarboxylase, which allows a reaction that would otherwise take millions of years to occur in milliseconds. Chemically, enzymes are like any catalyst and are not consumed in chemical reactions, nor do they alter the equilibrium of a reaction. Enzymes differ from most other catalysts by being much more specific. Enzyme activity can be affected by other molecules: inhibitors are molecules that decrease enzyme activity, and activators are molecules that increase activity. Many therapeutic drugs and poisons are enzyme inhibitors. An enzyme's activity decreases markedly outside its optimal temperature and pH, and many enzymes are (permanently) denatured when exposed to excessive heat, losing their structure and catalytic properties. Some enzymes are used commercially, for example, in the synthesis of antibiotics. Some household products use enzymes to speed up chemical reactions: enzymes in biological washing powders break down protein, starch or fat stains on clothes, and enzymes in meat tenderizer break down proteins into smaller molecules, making the meat easier to chew.
2001-09-29T16:53:25Z
2023-12-11T01:39:16Z
[ "Template:Main", "Template:Multiple image", "Template:Portal", "Template:Col-2-of-2", "Template:Col-end", "Template:IPAc-en", "Template:Ety", "Template:See also", "Template:Enzymes", "Template:Rp", "Template:Cite book", "Template:Cite journal", "Template:Featured article", "Template:Authority control", "Template:Redirect", "Template:Reflist", "Template:Col-1-of-2", "Template:Col-begin", "Template:Open access", "Template:Cite web", "Template:Food chemistry", "Template:Use dmy dates", "Template:Commonscatinline", "Template:NumBlk", "Template:Short description", "Template:Pp-move-indef", "Template:PDB2", "Template:Toclimit", "Template:More citations needed section", "Template:Pp-vandalism", "Template:PDB", "Template:Biochemistry sidebar" ]
https://en.wikipedia.org/wiki/Enzyme
9,258
Ethics
Ethics or moral philosophy is a branch of philosophy that "involves systematizing, defending, and recommending concepts of right and wrong behavior". The field of ethics, along with aesthetics, concerns matters of value; these fields comprise the branch of philosophy called axiology. Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual inquiry, moral philosophy is related to the fields of moral psychology, descriptive ethics, and value theory. Three major areas of study within ethics recognized today are metaethics, normative ethics, and applied ethics. The English word ethics is derived from the Ancient Greek word ēthikós (ἠθικός), meaning "relating to one's character", which itself comes from the root word êthos (ἦθος) meaning "character, moral nature". This word was transferred into Latin as ethica and then into French as éthique, from which it was transferred into English. Rushworth Kidder states that "standard definitions of ethics have typically included such phrases as 'the science of the ideal human character' or 'the science of moral duty'". Richard William Paul and Linda Elder define ethics as "a set of concepts and principles that guide us in determining what behavior helps or harms sentient creatures". The Cambridge Dictionary of Philosophy states that the word "ethics" is "commonly used interchangeably with 'morality' ... and sometimes it is used more narrowly to mean the moral principles of a particular tradition, group or individual." Paul and Elder state that most people confuse ethics with behaving in accordance with social conventions, religious beliefs, or the law, and do not treat ethics as a stand-alone concept. The word ethics in English refers to several things. It can refer to philosophical ethics or moral philosophy—a project that attempts to use reason to answer various kinds of ethical questions. As the English moral philosopher Bernard Williams writes, attempting to explain moral philosophy: "What makes an inquiry a philosophical one is reflective generality and a style of argument that claims to be rationally persuasive." Williams describes the content of this area of inquiry as addressing the very broad question, "how one should live". Ethics can also refer to a common human ability to think about ethical problems that is not particular to philosophy. As bioethicist Larry Churchill has written: "Ethics, understood as the capacity to think critically about moral values and direct our actions in terms of such values, is a generic human capacity." Metaethics is the branch of ethics that examines the nature, foundations, and scope of moral judgments, concepts, and values. It is not interested in what actions are right or wrong but in what it means for an action to be right or wrong and whether moral judgments are objective and can be true at all. It further examines the meaning of morality and moral terms. Metaethics is a metatheory that operates on a higher level of abstraction than normative ethics by investigating its underlying background assumptions. Metaethical theories usually do not directly take substantive positions regarding normative ethical theories but they can influence them nonetheless by questioning the foundational principles on which they rest. Metaethics overlaps with various branches of philosophy. On the level of ontology, it is concerned with the metaphysical status of moral values and principles.
In relation to semantics, it asks what the meaning of moral terms is and whether moral statements have a truth value. The epistemological side of metaethics discusses whether and how people can acquire moral knowledge. Metaethics further covers psychological and anthropological considerations in regard to how moral judgments motivate people to act and how to explain cross-cultural differences in moral assessments. A key debate in metaethics concerns the ontological status of morality and encompasses the question of whether ethical values and principles form part of reality. It examines whether moral properties exist as objective features independent of the human mind and culture rather than as subjective constructs or expressions of personal preferences and cultural norms. Moral realists accept the claim that there are objective moral facts. This view implies that moral values are mind-independent aspects of reality and that there is an absolute fact about whether a given action is right or wrong. A consequence of this view is that moral requirements have the same ontological status as non-moral facts: it is an objective fact whether there is an obligation to keep a promise just as there is an objective fact whether a thing has a black color. Moral realism is often associated with the claim that there are universal ethical principles that apply equally to everyone. It implies that if two people disagree about a moral evaluation then at least one of them is wrong. This observation is sometimes taken as an argument against moral realism since moral disagreement is widespread and concerns most fields. Moral relativists reject the idea that morality is an objective feature of reality. They argue instead that moral principles are human inventions. This means that a behavior is not objectively right or wrong but only subjectively right or wrong relative to a certain standpoint. Moral standpoints may differ between persons, cultures, and historical periods. For example, moral statements like "slavery is wrong" or "suicide is permitted" may be true in one culture and false in another. This position can be understood in analogy to Einstein's theory of relativity, which states that the magnitude of physical properties like mass, length, and duration depends on the frame of reference of the observer. Some moral relativists hold that moral systems are constructed to serve certain goals such as social coordination. According to this view, different societies and different social groups within a society construct different moral systems based on their diverging purposes. A different explanation states that morality arises from moral emotions, which people project onto the external world. Moral nihilists deny the existence of moral facts. They are opposed to both objective moral facts defended by moral realism and subjective moral facts defended by moral relativism. They believe that the basic assumptions underlying moral claims are misguided. Some moral nihilists, like Friedrich Nietzsche, conclude from this that anything is allowed. A slightly different view emphasizes that moral nihilism is not itself a moral position about what is allowed and prohibited but the rejection of any moral position. Moral nihilism agrees with moral relativism that there are different standpoints according to which people judge actions to be right or wrong. However, it disagrees that this practice involves a form of morality and understands it instead as one among many types of human practices. 
An influential debate among moral realists is between naturalism and non-naturalism. Naturalism states that moral properties are natural properties and are in this respect similar to the natural properties accessible to empirical observation and investigated by the natural sciences, like color and shape. Some moral naturalists hold that moral properties are a unique and basic type of natural property. Another view states that moral properties are real but not a fundamental part of reality and can be reduced to other natural properties, for example, concerning what causes pleasure and pain. Non-naturalism accepts that moral properties form part of reality and argues that moral features are not identical or reducible to natural properties. This view is usually motivated by the idea that moral properties are unique because they express normative features or what should be the case. Proponents of this position often emphasize this uniqueness by claiming that it is a fallacy to define ethics in terms of natural entities or to infer prescriptive from descriptive statements. The metaethical debate between cognitivism and non-cognitivism belongs to the field of semantics and concerns the meaning of moral statements. According to cognitivism, moral statements like "Abortion is morally wrong" and "Going to war is never morally justified" are truth-apt. This means that they all have a truth value: they are either true or false. Cognitivism only claims that moral statements have a truth value but is not interested in which truth value they have. It is often seen as the default position since moral statements resemble other statements, like "Abortion is a medical procedure" or "Going to war is a political decision", which have a truth value. The semantic position of cognitivism is closely related to the ontological position of moral realism and philosophers who accept one often accept the other as well. An exception is J. L. Mackie's error theory, which combines cognitivism with moral nihilism by claiming that all moral statements are false because there are no moral facts. Non-cognitivism is the view that moral statements lack a truth value. According to this view, the statement "Murder is wrong" is neither true nor false. Some non-cognitivists claim that moral statements have no meaning at all. A different interpretation is that they express other types of meaning contents. Emotivism holds that they articulate emotional attitudes. According to this view, the statement "Murder is wrong" expresses that the speaker has negative moral attitudes towards murder or dislikes it. Prescriptivism, by contrast, understands moral statements as commands. According to this view, stating that "Murder is wrong" expresses a command like "Do not commit murder". The epistemology of ethics studies whether or how one can know moral truths. Foundationalist views state that some moral beliefs are basic and do not require further justification. Ethical intuitionism is one foundationalist view that states that humans have a special cognitive faculty through which they can know right from wrong. Intuitionists often argue that general moral truths, like "lying is wrong", are self-evident and that it is possible to know them a priori without relying on empirical experience. A different foundationalist view relies not on general intuitions but on particular observations. It holds that if people are confronted with a concrete moral situation, they can perceive whether right or wrong conduct was involved. 
In contrast to foundationalists, coherentists hold that there are no basic moral beliefs. They argue that beliefs form a complex network and mutually support and justify one another. According to this view, a moral belief can only amount to knowledge if it coheres with the rest of the beliefs in the network. Moral skeptics reject the idea that moral knowledge is possible by arguing that people are unable to distinguish between right and wrong behavior. Moral skepticism is often criticized based on the claim that it leads to immoral behavior. On the level of psychology, metaethics is interested in how moral beliefs and experiences affect behavior. According to motivational internalists, there is a direct link between moral judgments and action. This means that every judgment about what is right motivates the person to act accordingly. For example, Socrates defends a strong form of motivational internalism by holding that a person can only perform an evil deed if they are unaware that it is evil. Weaker forms of motivational internalism allow that people can act against moral judgments, for example, because of weakness of the will. Motivational externalists accept that people can judge a behavior to be morally required without feeling a reason to engage in it. This means that moral judgments do not always provide motivational force. The debate between internalism and externalism is relevant for explaining the behavior of psychopaths or sociopaths, who fail either to judge that a behavior is wrong or to translate their judgment into action. A closely related question is whether moral judgments can provide motivation on their own or need to be accompanied by other mental states, such as a desire to act morally. Normative ethics is the study of ethical action. It is the branch of ethics that investigates the set of questions that arise when considering how one ought to act, morally speaking. Normative ethics is distinct from metaethics because normative ethics examines standards for the rightness and wrongness of actions, while metaethics studies the meaning of moral language and the metaphysics of moral facts. Normative ethics is also distinct from descriptive ethics, as the latter is an empirical investigation of people's moral beliefs. To put it another way, descriptive ethics would be concerned with determining what proportion of people believe that killing is always wrong, while normative ethics is concerned with whether it is correct to hold such a belief. Hence, normative ethics is sometimes called prescriptive rather than descriptive. However, on certain versions of the metaethical view called moral realism, moral facts are both descriptive and prescriptive at the same time. Traditionally, normative ethics (also known as moral theory) was the study of what makes actions right and wrong. These theories offered an overarching moral principle one could appeal to in resolving difficult moral decisions. At the turn of the 20th century, moral theories became more complex and were no longer concerned solely with rightness and wrongness, but were interested in many different kinds of moral status. During the middle of the century, the study of normative ethics declined as metaethics grew in prominence. This focus on metaethics was in part caused by an intense linguistic focus in analytic philosophy and by the popularity of logical positivism. 
Virtue ethics describes the character of a moral agent as a driving force for ethical behavior, and it is used to describe the ethics of early Greek philosophers such as Socrates and Aristotle, and ancient Indian philosophers such as Valluvar. Socrates (469–399 BC) was one of the first Greek philosophers to encourage both scholars and the common citizen to turn their attention from the outside world to the condition of humankind. In this view, knowledge bearing on human life was placed highest, while all other knowledge was secondary. Self-knowledge was considered necessary for success and inherently an essential good. A self-aware person will act to the full extent of his capabilities, while an ignorant person will flounder and encounter difficulty. To Socrates, a person must become aware of every fact (and its context) relevant to his existence, if he wishes to attain self-knowledge. He posited that people will naturally do what is good if they know what is right. Evil or bad actions are the results of ignorance. If a criminal was truly aware of the intellectual and spiritual consequences of his or her actions, he or she would neither commit nor even consider committing those actions. Any person who knows what is truly right will automatically do it, according to Socrates. While he correlated knowledge with virtue, he similarly equated virtue with joy. The truly wise man will know what is right, do what is good, and therefore be happy. Aristotle (384–322 BC) posited an ethical system that may be termed "virtuous." In Aristotle's view, when a person acts in accordance with virtue this person will do good and be content. Unhappiness and frustration are caused by doing wrong, leading to failed goals and a poor life. Therefore, it is imperative for people to act in accordance with virtue, which is only attainable by the practice of the virtues in order to be content and complete. Happiness was held to be the ultimate goal. All other things, such as civic life or wealth, were only made worthwhile and of benefit when employed in the practice of the virtues. The practice of the virtues is the surest path to happiness. Aristotle asserted that the soul of man had three natures: body (physical/metabolism), animal (emotional/appetite), and rational (mental/conceptual). Physical nature can be assuaged through exercise and care; emotional nature through indulgence of instinct and urges; and mental nature through human reason and developed potential. Rational development was considered the most important, as essential to philosophical self-awareness, and as uniquely human. Moderation was encouraged, with the extremes seen as degraded and immoral. For example, courage is the moderate virtue between the extremes of cowardice and recklessness. Man should not simply live, but live well with conduct governed by virtue. This is regarded as difficult, as virtue denotes doing the right thing, in the right way, at the right time, for the right reason. Valluvar (before 5th century CE) keeps virtue, or aṟam (dharma) as he calls it, as the cornerstone throughout the writing of the Kural literature. While religious scriptures generally consider aṟam as divine in nature, Valluvar describes it as a way of life rather than any spiritual observance, a way of harmonious living that leads to universal happiness. Contrary to what other contemporary works say, Valluvar holds that aṟam is common for all, irrespective of whether the person is a bearer of palanquin or the rider in it.
Valluvar considered justice as a facet of aṟam. While ancient Greek philosophers such as Plato, Aristotle, and their descendants opined that justice cannot be defined and that it was a divine mystery, Valluvar positively suggested that a divine origin is not required to define the concept of justice. In the words of V. R. Nedunchezhiyan, justice according to Valluvar "dwells in the minds of those who have knowledge of the standard of right and wrong; so too deceit dwells in the minds which breed fraud." The Stoic philosopher Epictetus posited that the greatest good was contentment and serenity. Peace of mind, or apatheia, was of the highest value; self-mastery over one's desires and emotions leads to spiritual peace. The "unconquerable will" is central to this philosophy. The individual's will should be independent and inviolate. Allowing a person to disturb the mental equilibrium is, in essence, offering yourself in slavery. If a person is free to anger you at will, you have no control over your internal world, and therefore no freedom. Freedom from material attachments is also necessary. If a thing breaks, the person should not be upset, but realize it was a thing that could break. Similarly, if someone should die, those close to them should hold to their serenity because the loved one was made of flesh and blood destined to death. Stoic philosophy says to accept things that cannot be changed, resigning oneself to the existence and enduring in a rational fashion. Death is not feared. People do not "lose" their life, but instead "return", for they are returning to God (who initially gave what the person is as a person). Epictetus said difficult problems in life should not be avoided, but rather embraced. They are spiritual exercises needed for the health of the spirit, just as physical exercise is required for the health of the body. He also stated that sex and sexual desire are to be avoided as the greatest threat to the integrity and equilibrium of a man's mind. Abstinence is highly desirable. Epictetus said remaining abstinent in the face of temptation was a victory for which a man could be proud. Modern virtue ethics was popularized during the late 20th century in large part due to a revival of Aristotelianism, and as a response to G.E.M. Anscombe's "Modern Moral Philosophy". Anscombe argues that consequentialist and deontological ethics are only feasible as universal theories if the two schools ground themselves in divine law. As a deeply devoted Christian herself, Anscombe proposed that either those who do not give ethical credence to notions of divine law take up virtue ethics, which does not necessitate universal laws as agents themselves are investigated for virtue or vice and held up to "universal standards", or that those who wish to be utilitarian or consequentialist ground their theories in religious conviction. Alasdair MacIntyre, who wrote the book After Virtue, was a key contributor and proponent of modern virtue ethics, although some claim that MacIntyre supports a relativistic account of virtue based on cultural norms, not objective standards. Martha Nussbaum, a contemporary virtue ethicist, objects to MacIntyre's relativism, among that of others, and responds to relativist objections to form an objective account in her work "Non-Relative Virtues: An Aristotelian Approach". However, Nussbaum's accusation of relativism appears to be a misreading. 
In Whose Justice? Which Rationality?, MacIntyre's ambition of taking a rational path beyond relativism was quite clear when he stated "rival claims made by different traditions […] are to be evaluated […] without relativism" (p. 354) because indeed "rational debate between and rational choice among rival traditions is possible" (p. 352). Complete Conduct Principles for the 21st Century blended the Eastern virtue ethics and the Western virtue ethics, with some modifications to suit the 21st Century, and formed a part of contemporary virtue ethics. Mortimer J. Adler described Aristotle's Nicomachean Ethics as a "unique book in the Western tradition of moral philosophy, the only ethics that is sound, practical, and undogmatic." One major trend in contemporary virtue ethics is the Modern Stoicism movement. Hedonism posits that the principal ethic is maximizing pleasure and minimizing pain. There are several schools of Hedonist thought ranging from those advocating the indulgence of even momentary desires to those teaching a pursuit of spiritual bliss. In their consideration of consequences, they range from those advocating self-gratification regardless of the pain and expense to others, to those stating that the most ethical pursuit maximizes pleasure and happiness for the most people. Founded by Aristippus of Cyrene, Cyrenaics supported immediate gratification or pleasure. "Eat, drink and be merry, for tomorrow we die." Even fleeting desires should be indulged, for fear the opportunity should be forever lost. There was little to no concern with the future, the present dominating in the pursuit of immediate pleasure. Cyrenaic hedonism encouraged the pursuit of enjoyment and indulgence without hesitation, believing pleasure to be the only good. Epicurean ethics is a hedonist form of virtue ethics. Epicurus "presented a sustained argument that pleasure, correctly understood, will coincide with virtue." He rejected the extremism of the Cyrenaics, believing some pleasures and indulgences to be detrimental to human beings. Epicureans observed that indiscriminate indulgence sometimes resulted in negative consequences. Some experiences were therefore rejected out of hand, and some unpleasant experiences endured in the present to ensure a better life in the future. To Epicurus, the summum bonum, or greatest good, was prudence, exercised through moderation and caution. Excessive indulgence can be destructive to pleasure and can even lead to pain. For example, eating one food too often makes a person lose a taste for it. Eating too much food at once leads to discomfort and ill-health. Pain and fear were to be avoided. Living was essentially good, barring pain and illness. Death was not to be feared. Fear was considered the source of most unhappiness. Conquering the fear of death would naturally lead to a happier life. Epicurus reasoned that if there were an afterlife and immortality, the fear of death was irrational. If there was no life after death, then the person would not be alive to suffer, fear, or worry; he would be non-existent in death. It is irrational to fret over circumstances that do not exist, such as one's state of death in the absence of an afterlife. State consequentialism, also known as Mohist consequentialism, is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the basic goods of a state.
The Stanford Encyclopedia of Philosophy describes Mohist consequentialism, dating back to the 5th century BC, as "a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare". Unlike utilitarianism, which views pleasure as a moral good, "the basic goods in Mohist consequentialist thinking are … order, material wealth, and increase in population". During Mozi's era, war and famines were common, and population growth was seen as a moral necessity for a harmonious society. The "material wealth" of Mohist consequentialism refers to basic needs like shelter and clothing, and the "order" of Mohist consequentialism refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability. Stanford sinologist David Shepherd Nivison, in The Cambridge History of Ancient China, writes that the moral goods of Mohism "are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth … if people have plenty, they would be good, filial, kind, and so on unproblematically." The Mohists believed that morality is based on "promoting the benefit of all under heaven and eliminating harm to all under heaven". In contrast to Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic. The importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain. Consequentialism refers to moral theories that hold the consequences of a particular action form the basis for any valid moral judgment about that action (or create a structure for judgment, see rule consequentialism). Thus, from a consequentialist standpoint, morally right action is one that produces a good outcome, or consequence. This view is often expressed as the aphorism "The ends justify the means". The term "consequentialism" was coined by G.E.M. Anscombe in her essay "Modern Moral Philosophy" in 1958, to describe what she saw as the central error of certain moral theories, such as those propounded by Mill and Sidgwick. Since then, the term has become common in English-language ethical theory. The defining feature of consequentialist moral theories is the weight given to the consequences in evaluating the rightness and wrongness of actions. In consequentialist theories, the consequences of an action or rule generally outweigh other considerations. Apart from this basic outline, there is little else that can be unequivocally said about consequentialism as such. However, there are some questions that many consequentialist theories address. One way to divide various consequentialisms is by the many types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in a positive effect, and the best action is one that results in that effect for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral "pleasure".
Other theories adopt a package of several goods, all to be promoted equally. Whether a particular consequentialist theory focuses on a single good or many, conflicts and tensions between different good states of affairs are to be expected and must be adjudicated. Utilitarianism is an ethical theory that argues that the proper course of action is one that maximizes a positive effect, such as "happiness", "welfare", or the ability to live according to personal preferences. Jeremy Bentham and John Stuart Mill are influential proponents of this school of thought. In A Fragment on Government, Bentham says 'it is the greatest happiness of the greatest number that is the measure of right and wrong' and describes this as a fundamental axiom. In An Introduction to the Principles of Morals and Legislation, he talks of 'the principle of utility' but later prefers "the greatest happiness principle". Utilitarianism is the paradigmatic example of a consequentialist moral theory, holding that the morally correct action is the one that produces the best outcome for all people affected by the action. John Stuart Mill, in his exposition of utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures. Other noteworthy proponents of utilitarianism are neuroscientist Sam Harris, author of The Moral Landscape, and moral philosopher Peter Singer, author of, amongst other works, Practical Ethics. The major division within utilitarianism is between act utilitarianism and rule utilitarianism. In act utilitarianism, the principle of utility applies directly to each alternative act in a situation of choice. The right act is the one that brings about the best results (or the least bad results). In rule utilitarianism, the principle of utility determines the validity of rules of conduct (moral principles). A rule like promise-keeping is established by looking at the consequences of a world in which people break promises at will and a world in which promises are binding. Right and wrong are the following or breaking of rules that are sanctioned by their utilitarian value. A proposed "middle ground" between these two types is two-level utilitarianism, where rules are applied in ordinary circumstances, but with an allowance to choose actions outside of such rules when unusual situations call for it. Deontological ethics or deontology (from Greek δέον, deon, "obligation, duty"; and -λογία, -logia) is an approach to ethics that determines goodness or rightness by examining acts, or the rules and duties that the person doing the act strove to fulfill. This is in contrast to consequentialism, in which rightness is based on the consequences of an act, and not the act by itself. Under deontology, an act may be considered right even if it produces a bad consequence, if it follows the rule or moral law. According to the deontological view, people have a duty to act in ways that are deemed inherently good ("truth-telling", for example), or to follow an objectively obligatory rule (as in rule utilitarianism). Immanuel Kant's theory of ethics is considered deontological for several different reasons. First, Kant argues that to act in the morally right way, people must act from duty (Pflicht). Second, Kant argued that it was not the consequences of actions that make them right or wrong but the motives of the person who carries out the action.
Kant's argument that to act in the morally right way one must act purely from duty begins with an argument that the highest good must be both good in itself and good without qualification. Something is "good in itself" when it is intrinsically good, and "good without qualification" when the addition of that thing never makes a situation ethically worse. Kant then argues that those things that are usually thought to be good, such as intelligence, perseverance and pleasure, fail to be either intrinsically good or good without qualification. Pleasure, for example, appears not to be good without qualification, because when people take pleasure in watching someone suffer, this seems to make the situation ethically worse. He concludes that there is only one thing that is truly good: Nothing in the world—indeed nothing even beyond the world—can possibly be conceived which could be called good without qualification except a good will. Kant then argues that the consequences of an act of willing cannot be used to determine that the person has a good will; good consequences could arise by accident from an action that was motivated by a desire to cause harm to an innocent person, and bad consequences could arise from an action that was well-motivated. Instead, he claims, a person has a good will when he 'acts out of respect for the moral law'. People 'act out of respect for the moral law' when they act in some way because they have a duty to do so. So, the only thing that is truly good in itself is a good will, and a good will is only good when the willer chooses to do something because it is that person's duty, i.e. out of "respect" for the law. He defines respect as "the concept of a worth which thwarts my self-love". Kant's three significant formulations of the categorical imperative are: act only according to that maxim whereby you can at the same time will that it should become a universal law; act so that you treat humanity, whether in your own person or in that of another, always as an end and never merely as a means; and act as though you were, through your maxims, a law-making member of a kingdom of ends. Kant argued that the only absolutely good thing is a good will, and so the single determining factor of whether an action is morally right is the will, or motive, of the person doing it. If they are acting on a bad maxim, e.g. "I will lie", then their action is wrong, even if some good consequences come of it. In his essay On a Supposed Right to Lie Because of Philanthropic Concerns, arguing against the position taken by Benjamin Constant in Des réactions politiques, Kant states that "Hence a lie defined merely as an intentionally untruthful declaration to another man does not require the additional condition that it must do harm to another, as jurists require in their definition (mendacium est falsiloquium in praeiudicium alterius). For a lie always harms another; if not some human being, then it nevertheless does harm to humanity in general, inasmuch as it vitiates the very source of right [Rechtsquelle] ... All practical principles of right must contain rigorous truth ... This is because such exceptions would destroy the universality on account of which alone they bear the name of principles." Although not all deontologists are religious, some believe in the 'divine command theory', which is actually a cluster of related theories which essentially state that an action is right if God has decreed that it is right. According to Ralph Cudworth, an English philosopher, William of Ockham, René Descartes, and eighteenth-century Calvinists all accepted various versions of this moral theory, as they all held that moral obligations arise from God's commands.
The Divine Command Theory is a form of deontology because, according to it, the rightness of any action depends upon that action being performed because it is a duty, not because of any good consequences arising from that action. If God commands people not to work on the Sabbath, then people act rightly if they do not work on the Sabbath because God has commanded that they do not do so. If they do not work on the Sabbath because they are lazy, then their action is not, strictly speaking, "right", even though the actual physical action performed is the same. If God commands not to covet a neighbor's goods, this theory holds that it would be immoral to do so, even if coveting provides the beneficial outcome of a drive to succeed or do well. One thing that clearly distinguishes Kantian deontologism from divine command deontology is that Kantianism maintains that man, as a rational being, makes the moral law universal, whereas divine command maintains that God makes the moral law universal. German philosopher Jürgen Habermas has proposed a theory of discourse ethics that he states is a descendant of Kantian ethics. He proposes that action should be based on communication between those involved, in which their interests and intentions are discussed so they can be understood by all. Rejecting any form of coercion or manipulation, Habermas believes that agreement between the parties is crucial for a moral decision to be reached. Like Kantian ethics, discourse ethics is a cognitive ethical theory, in that it supposes that truth and falsity can be attributed to ethical propositions. It also formulates a rule by which ethical actions can be determined and proposes that ethical actions should be universalizable, in a similar way to Kant's ethics. Habermas argues that his ethical theory is an improvement on Kant's ethics. He rejects the dualistic framework of Kant's ethics. Kant distinguished between the phenomenal world, which can be sensed and experienced by humans, and the noumenal, or spiritual, world, which is inaccessible to humans. This dichotomy was necessary for Kant because it could explain the autonomy of a human agent: although a human is bound in the phenomenal world, their actions are free in the noumenal world. For Habermas, morality arises from discourse, which is made necessary by people's rationality and needs, rather than by their freedom. Associated with the pragmatists Charles Sanders Peirce, William James, and especially John Dewey, pragmatic ethics holds that moral correctness evolves similarly to scientific knowledge: socially over the course of many lifetimes. Thus, we should prioritize social reform over attempts to account for consequences, individual virtue or duty (although these may be worthwhile attempts, if social reform is provided for). Care ethics contrasts with better-known ethical models, such as consequentialist theories (e.g., utilitarianism) and deontological theories (e.g., Kantian ethics) in that it seeks to incorporate traditionally feminized virtues and values that—proponents of care ethics contend—are absent in such traditional models of ethics. These values include the importance of empathetic relationships and compassion. Care-focused feminism is a branch of feminist thought, informed primarily by the ethics of care as developed by Carol Gilligan and Nel Noddings. This body of theory is critical of how caring is socially assigned to women, and consequently devalued.
They write, "Care-focused feminists regard women's capacity for care as a human strength," that should be taught to and expected of men as well as women. Noddings proposes that ethical caring has the potential to be a more concrete evaluative model of moral dilemma than an ethic of justice. Noddings’ care-focused feminism requires practical application of relational ethics, predicated on an ethic of care. The 'metafeminist' theory of the matrixial gaze and the matrixial time-space, coined and developed Bracha L. Ettinger since 1985, articulates a revolutionary philosophical approach that, in "daring to approach", to use Griselda Pollock's description of Ettinger's ethical turn, "the prenatal with the pre-maternal encounter", violence toward women at war, and the Shoah, has philosophically established the rights of each female subject over her own reproductive body, and offered a language to relate to human experiences which escape the phallic domain. The matrixial sphere is a psychic and symbolic dimension that the 'phallic' language and regulations cannot control. In Ettinger's model, the relations between self and other are of neither assimilation nor rejection but 'coemergence'. In her conversation with Emmanuel Levinas, 1991, Ettinger prooses that the source of human Ethics is feminine-maternal and feminine-pre-maternal matrixial encounter-event. Sexuality and maternality coexist and are not in contradiction (the contradiction established by Sigmund Freud and Jacques Lacan), and the feminine is not an absolute alterity (the alterity established by Jacques Lacan and Emmanuel Levinas). With the 'originary response-ability', 'wit(h)nessing', 'borderlinking', 'communicaring', 'com-passion', 'seduction into life' and other processes invested by affects that occur in the Ettingerian matrixial time-space, the feminine is presented as the source of humanized Ethics in all genders. Compassion and Seduction into life occurs earlier than the primary seduction which passes through enigmatic signals from the maternal sexuality according to Jean Laplanche, since it is active in 'coemergence' in 'withnessing' for any born subject, earlier to its birth. Ettinger suggests to Emanuel Levinas in their conversations in 1991, that the feminine understood via the matrixial perspective is the heart and the source of Ethics. At the beginning of life, an originary 'fascinance' felt by the infant is related to the passage from response-ability to responsibility, from com-passion to compassion, and from wit(h)nessing to witnessing operated and transmitted by the m/Other. The 'differentiation in jointness' that is at the heart of the matrixial borderspace has deep implications in the relational field and for the ethics of care. The matrixial theory that proposes new ways to rethink sexual difference through the fluidity of boundaries informs aesthetics and ethics of compassion, carrying and non-abandonment in 'subjectivity as encounter-event'. It has become significant in Psychoanalysis and in transgender studies. Role ethics is an ethical theory based on family roles. Unlike virtue ethics, role ethics is not individualistic. Morality is derived from a person's relationship with their community. Confucian ethics is an example of role ethics though this is not straightforwardly uncontested. Confucian roles center around the concept of filial piety or xiao, a respect for family members. According to Roger T. Ames and Henry Rosemont, "Confucian normativity is defined by living one's family roles to maximum effect." 
Morality is determined through a person's fulfillment of a role, such as that of a parent or a child. Confucian roles are not rational, and originate through the xin, or human emotions. Anarchist ethics is an ethical theory based on the studies of anarchist thinkers. The biggest contributor to anarchist ethics is Peter Kropotkin. Starting from the premise that the goal of ethical philosophy should be to help humans adapt and thrive in evolutionary terms, Kropotkin's ethical framework uses biology and anthropology as a basis – in order to scientifically establish what will best enable a given social order to thrive biologically and socially – and advocates certain behavioural practices to enhance humanity's capacity for freedom and well-being, namely practices which emphasise solidarity, equality, and justice. Kropotkin argues that ethics itself is evolutionary, and is inherited as a sort of a social instinct through cultural history, and in so doing, he rejects any religious or transcendental explanation of morality. The origin of ethical feeling in both animals and humans can be found, he claims, in the natural fact of "sociality" (mutualistic symbiosis), which humans can then combine with the instinct for justice (i.e. equality) and then with the practice of reason to construct a non-supernatural and anarchistic system of ethics. Kropotkin suggests that the principle of equality at the core of anarchism is the same as the Golden Rule: "This principle of treating others as one wishes to be treated oneself, what is it but the very same principle as equality, the fundamental principle of anarchism? And how can any one manage to believe himself an anarchist unless he practices it? We do not wish to be ruled. And by this very fact, do we not declare that we ourselves wish to rule nobody? We do not wish to be deceived, we wish always to be told nothing but the truth. And by this very fact, do we not declare that we ourselves do not wish to deceive anybody, that we promise to always tell the truth, nothing but the truth, the whole truth? We do not wish to have the fruits of our labor stolen from us. And by that very fact, do we not declare that we respect the fruits of others' labor? By what right indeed can we demand that we should be treated in one fashion, reserving it to ourselves to treat others in a fashion entirely different? Our sense of equality revolts at such an idea." Antihumanists such as Louis Althusser and Michel Foucault, and structuralists such as Roland Barthes, challenged the possibilities of individual agency and the coherence of the notion of the 'individual' itself. This was on the basis that personal identity was, for the most part, a social construction. As critical theory developed in the later 20th century, post-structuralism sought to problematize human relationships to knowledge and 'objective' reality. Jacques Derrida argued that access to meaning and the 'real' was always deferred, and sought to demonstrate via recourse to the linguistic realm that "there is no outside-text" ("il n'y a pas de hors-texte", often mistranslated as "there is nothing outside the text"); at the same time, Jean Baudrillard theorised that signs and symbols or simulacra mask reality (and eventually the absence of reality itself), particularly in the consumer world. Post-structuralism and postmodernism argue that ethics must study the complex and relational conditions of actions. A simple alignment of ideas of right and particular acts is not possible.
There will always be an ethical remainder that cannot be taken into account or often even recognized. Such theorists find narrative (or, following Nietzsche and Foucault, genealogy) to be a helpful tool for understanding ethics because narrative is always about particular lived experiences in all their complexity rather than the assignment of an idea or norm to separate and individual actions. Zygmunt Bauman says postmodernity is best described as modernity without illusion, the illusion being the belief that humanity can be repaired by some ethical principle. Postmodernity can be seen in this light as accepting the messy nature of humanity as unchangeable. In this postmodern world, the means to act collectively and globally to solve large-scale problems have been all but discredited, dismantled or lost. Problems can be handled only locally and each on its own. All problem-handling means building a mini-order at the expense of order elsewhere, and at the cost of rising global disorder as well as depleting the shrinking supplies of resources which make ordering possible. He considers Emmanuel Levinas's ethics as postmodern. Unlike the modern ethical philosophy which leaves the Other on the outside of the self as an ambivalent presence, Levinas's philosophy readmits her as a neighbor and as a crucial character in the process through which the moral self comes into its own. David Couzens Hoy states that Emmanuel Levinas's writings on the face of the Other and Derrida's meditations on the relevance of death to ethics are signs of the "ethical turn" in Continental philosophy that occurred in the 1980s and 1990s. Hoy describes post-critique ethics as the "obligations that present themselves as necessarily to be fulfilled but are neither forced on one nor enforceable". Hoy's post-critique model uses the term ethical resistance. Examples of this would be an individual's resistance to consumerism in a retreat to a simpler but perhaps harder lifestyle, or an individual's resistance to a terminal illness. Hoy describes Levinas's account as "not the attempt to use power against itself, or to mobilize sectors of the population to exert their political power; the ethical resistance is instead the resistance of the powerless". Hoy concludes that "the ethical resistance of the powerless others to our capacity to exert power over them is therefore what imposes unenforceable obligations on us. The obligations are unenforceable precisely because of the other's lack of power. That actions are at once obligatory and at the same time unenforceable is what put them in the category of the ethical. Obligations that were enforced would, by the virtue of the force behind them, not be freely undertaken and would not be in the realm of the ethical." Applied ethics, also known as practical ethics, is the branch of ethics and applied philosophy that examines concrete moral problems encountered in real-life situations. Unlike normative ethics, it is not concerned with discovering or justifying universal ethical principles. Instead, it studies how those principles can be applied to specific domains of practical life, what consequences they have in these fields, and whether other considerations are relevant. One of the main challenges of applied ethics is to bridge the gap between abstract universal theories and their application to concrete situations. For example, an in-depth understanding of Kantianism or utilitarianism is usually not sufficient to decide how to analyze the moral implications of a medical procedure.
One reason is that it may not be clear how the procedure affects the Kantian requirement of respecting everyone's personhood and what the consequences of the procedure are in terms of the greatest good for the greatest number. This difficulty is particularly relevant to applied ethicists who employ a top-down methodology by starting from universal ethical principles and applying them to particular cases within a specific domain. A different approach is to use a bottom-up methodology, which relies on many observations of particular cases to arrive at an understanding of the moral principles relevant to this particular domain. In either case, inquiry into applied ethics is often triggered by ethical dilemmas: cases in which a person is subject to conflicting moral requirements. Applied ethics covers issues pertaining to both the private sphere, like right conduct in the family and close relationships, and the public sphere, like moral problems posed by new technologies and international duties toward future generations. Major branches include bioethics, business ethics, and professional ethics. There are many other branches, and their domains of inquiry often overlap. Bioethics is a wide field that covers moral problems associated with living organisms and biological disciplines. A key problem in bioethics concerns the moral status of entities and to what extent this status depends on features such as consciousness, being able to feel pleasure and pain, rationality, and personhood. These debates concern, for example, how to treat non-living entities like rocks and non-sentient entities like plants in contrast to animals, and whether humans have a different moral status than other animals. According to anthropocentrism, only humans have a basic moral status. This implies that all other entities only have a derivative moral status to the extent that they affect human life. Sentientism, by contrast, extends an inherent moral status to all sentient beings. Further positions include biocentrism, which also covers non-sentient lifeforms, and ecocentrism, which states that all of nature has a basic moral status. Bioethics is relevant to various aspects of life and to many professions. It covers a wide range of moral problems associated with topics like abortion, cloning, stem cell research, euthanasia, suicide, animal testing, intensive animal farming, nuclear waste, and air pollution. Bioethics can be divided into medical ethics, animal ethics, and environmental ethics based on whether the ethical problems relate to humans, other animals, or nature in general. Medical ethics is the oldest branch of bioethics and has its origins in the Hippocratic Oath, which establishes ethical guidelines for medical practitioners, like a prohibition against harming the patient. A central topic in medical ethics concerns issues associated with the beginning and the end of life. One debate focuses on the question of whether a fetus is a full-fledged person with all the rights associated with this status; some proponents of this view argue that abortion is therefore a form of murder. In relation to the end of life, there are ethical dilemmas concerning whether a person has a right to end their own life in cases of terminal illness and whether a medical practitioner may assist them in doing so. Other topics in medical ethics include medical confidentiality, informed consent, research on human beings, organ transplantation, and access to healthcare. Animal ethics examines how humans should treat other animals.
An influential consideration in this field emphasizes the importance of animal welfare while arguing that humans should avoid or minimize the harm done to animals. There is wide agreement that it is wrong to torture animals for fun. The situation is more complicated in cases where harm is inflicted on animals as a side effect of the pursuit of human interests. This happens, for example, in factory farming, in the use of animals as food, and in research experiments on animals. A key topic in animal ethics is the formulation of animal rights. Animal rights theorists assert that animals have a certain moral status and that humans have an obligation to respect this status when interacting with them. Examples of suggested animal rights include the right to life, the right to be free from unnecessary suffering, and the right to natural behavior in a suitable environment. Environmental ethics deals with moral problems relating to the natural environment, including animals, plants, natural resources, and ecosystems. In its widest sense, it also covers the whole biosphere and the cosmos. In the domain of agriculture, this concerns questions such as the circumstances under which it is acceptable to clear the vegetation of an area to use it for farming, and the implications of using genetically modified crops. On a wider scale, environmental ethics addresses the problem of global warming and how people are responsible for this both on an individual and a collective level. Environmental ethicists often promote sustainable practices and policies directed at protecting and conserving ecosystems and biodiversity. Business ethics examines the moral implications of business conduct and investigates how ethical principles apply to corporations and organizations. A key topic is corporate social responsibility, which is the responsibility of corporations to act in a manner that benefits society at large. Corporate social responsibility is a complex issue since many stakeholders are directly and indirectly involved in corporate decisions, such as the CEO, the board of directors, and the shareholders. A closely related topic concerns the question of whether corporations themselves, and not just their stakeholders, have moral agency. Business ethics further examines the role of truthfulness, honesty, and fairness in business practices as well as the moral implications of bribery, conflicts of interest, protection of investors and consumers, workers' rights, ethical leadership, and corporate philanthropy. Professional ethics is a closely related field that studies ethical principles applying to members of a specific profession, like engineers, medical doctors, lawyers, and teachers. It is a diverse field since different professions often have different responsibilities. Principles applying to many professions include that the professional has the required expertise for the intended work and that they have personal integrity and are trustworthy. Further principles are to serve the interest of their target group, follow client confidentiality, and respect and uphold the client's rights, such as informed consent. More precise requirements often vary between professions. A cornerstone of engineering ethics is to protect the public's safety, health, and well-being. Legal ethics emphasizes the importance of respect for justice, personal integrity, and confidentiality. Key factors in journalism ethics include accuracy, truthfulness, independence, and impartiality as well as proper attribution to avoid plagiarism.
Many other fields of applied ethics are discussed in the academic literature. Communication ethics covers moral principles in relation to communicative conduct. Two key issues in this field are freedom of speech and speech responsibility. Freedom of speech concerns the ability to articulate one's opinions and ideas without the threat of punishment or censorship. Speech responsibility is about being accountable for the consequences of communicative action and inaction. A closely related field is information ethics, which focuses on the moral implications of creating, controlling, disseminating, and using information. The ethics of technology has implications for both communication ethics and information ethics in regard to communication and information technologies. In its widest sense, it examines the moral issues associated with any artifacts created and used for instrumental means, from simple artifacts like spears to high-tech computers and nanotechnology. Central topics in the ethics of technology include the risks associated with creating new technologies, their responsible use, and questions surrounding the issue of human enhancement through technological means, such as prosthetic limbs, performance-enhancing drugs, and genetic enhancement. Important subfields include computer ethics, ethics of artificial intelligence, machine ethics, ethics of nanotechnology, and nuclear ethics. The ethics of war investigates moral problems in relation to war and violent conflicts. According to just war theory, waging war is morally justified if it fulfills certain conditions. These conditions are commonly divided into requirements concerning the cause for initiating violent activities, such as self-defense, and the way those violent activities are conducted, such as avoiding excessive harm to civilians in the pursuit of legitimate military targets. Military ethics is a closely related field that is interested in the conduct of military personnel. It addresses questions about the circumstances under which they are permitted to kill enemies, destroy infrastructure, and put the lives of their own troops at risk. Additional topics are the recruitment, training, and discharge of military personnel as well as the procurement of military equipment. Further fields of applied ethics include political ethics, which examines the moral dimensions of political decisions, educational ethics, which covers ethical issues related to proper teaching practices, and sexual ethics, which addresses the moral implications of sexual behavior. Moral psychology is a field of study that began as an issue in philosophy and is now properly considered part of the discipline of psychology. Some use the term "moral psychology" relatively narrowly to refer to the study of moral development. However, others tend to use the term more broadly to include any topics at the intersection of ethics and psychology (and philosophy of mind). Such topics are ones that involve the mind and are relevant to moral issues. Some of the main topics of the field are moral responsibility, moral development, moral character (especially as related to virtue ethics), altruism, psychological egoism, moral luck, and moral disagreement. Evolutionary ethics concerns approaches to ethics (morality) based on the role of evolution in shaping human psychology and behavior. Such approaches may be based in scientific fields such as evolutionary psychology or sociobiology, with a focus on understanding and explaining observed ethical preferences and choices.
Descriptive ethics is on the less philosophical end of the spectrum since it seeks to gather particular information about how people live and to draw general conclusions based on observed patterns. Abstract and theoretical questions that are more clearly philosophical—such as, "Is ethical knowledge possible?"—are not central to descriptive ethics. Descriptive ethics offers a value-free approach to ethics, which defines it as a social science rather than a humanities discipline. Its examination of ethics does not start with a preconceived theory but rather investigates observations of actual choices made by moral agents in practice. Some philosophers rely on descriptive ethics and on the choices made and left unchallenged by a society or culture to derive categories, which typically vary by context. This can lead to situational ethics and situated ethics. These philosophers often view aesthetics, etiquette, and arbitration as more fundamental, percolating "bottom up" to imply the existence of, rather than explicitly prescribe, theories of value or of conduct. The study of descriptive ethics may include examinations of ethical codes applied by various groups, informal theories of etiquette, practices in arbitration and law, and the choices made by ordinary people without expert aid or advice. The history of ethics studies how moral philosophy has developed and evolved over the course of history. It has its origins in ancient civilizations. In ancient Egypt, the concept of Maat was used as an ethical principle to guide behavior and maintain order by emphasizing the importance of truth, balance, and harmony. In ancient India, the Vedas and Upanishads were written as the foundational texts of Hindu philosophy and discussed the role of duty and the consequences of one's actions. Buddhist ethics also originated in ancient India and advocated compassion, non-violence, and the pursuit of enlightenment. Ancient China saw the emergence of Confucianism, which focuses on moral conduct and self-cultivation by acting in accordance with virtues, and Daoism, which teaches that human behavior should be in harmony with the natural order of the universe. In ancient Greece, Socrates emphasized the importance of inquiry into what a good life is by critically questioning established ideas and exploring concepts like virtue, justice, courage, and wisdom. According to Plato, to lead a good life means that the different parts of the soul are in harmony with each other. For Aristotle, a good life is associated with being happy by cultivating virtues and flourishing. The close relation between right action and happiness was also explored by the Hellenistic schools of Epicureanism, which recommended a simple lifestyle without indulging in sensory pleasures, and Stoicism, which advocated living in tune with reason and virtue while practicing self-mastery and becoming immune to disturbing emotions. Ethical thought in the medieval period was strongly influenced by religious teachings. Christian philosophers interpreted moral principles as divine commands originating from God. Thomas Aquinas developed natural law ethics by claiming that ethical behavior consists in following the laws and order of nature, which he believed were created by God. In the Islamic world, philosophers like Al-Farabi and Avicenna synthesized ancient Greek philosophy with the ethical teachings of Islam while emphasizing the harmony between reason and faith. In medieval India, philosophers like Adi Shankara and Ramanuja saw the practice of spirituality to attain liberation as the highest goal of human behavior. Moral philosophy in the modern period was characterized by a shift toward a secular approach to ethics.
Thomas Hobbes identified self-interest as the primary drive of humans. He concluded that it would lead to "a war of every man against every man" unless a social contract was established to avoid this outcome. David Hume thought that only moral sentiments, like empathy, can motivate ethical actions, while he saw reason not as a motivating factor but only as what anticipates the consequences of possible actions. Immanuel Kant, by contrast, saw reason as the source of morality. He formulated a deontological theory, according to which the ethical value of actions depends on their conformity with moral laws independent of their outcome. These laws take the form of categorical imperatives, which are universal requirements that apply to every situation. Another influential development in this period was the formulation of utilitarianism by Jeremy Bentham and John Stuart Mill. According to the utilitarian doctrine, actions should promote happiness while reducing suffering, and the right action is the one that produces the greatest good for the greatest number of people. An important development in 20th-century analytic philosophy was the emergence of metaethics. Significant early contributions to this field were made by G. E. Moore, who argued that moral values are essentially different from other properties found in the natural world. R. M. Hare followed this idea in formulating his prescriptivism, which states that moral statements are commands that, unlike regular judgments, are neither true nor false. An influential argument for moral realism was made by Derek Parfit, who argued that morality concerns objective features of reality that give people reasons to act in one way or another. Another development in this period was the revival of ancient virtue ethics by philosophers like Philippa Foot. In the field of political philosophy, John Rawls relied on Kantian ethics to analyze social justice as a form of fairness. In continental philosophy, phenomenologists such as Max Scheler and Nicolai Hartmann built ethical systems based on the claim that values have an objective reality that can be investigated using the phenomenological method. Existentialists like Jean-Paul Sartre, by contrast, held that values are created by humans and explored the consequences of this view in relation to individual freedom, responsibility, and authenticity. This period also saw the emergence of feminist ethics, which questions traditional ethical assumptions associated with a male perspective and puts alternative concepts, like care, at the center.
[ { "paragraph_id": 0, "text": "Ethics or moral philosophy is a branch of philosophy that \"involves systematizing, defending, and recommending concepts of right and wrong behavior\". The field of ethics, along with aesthetics, concerns matters of value; these fields comprise the branch of philosophy called axiology.", "title": "" }, { "paragraph_id": 1, "text": "Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual inquiry, moral philosophy is related to the fields of moral psychology, descriptive ethics, and value theory.", "title": "" }, { "paragraph_id": 2, "text": "Three major areas of study within ethics recognized today are:", "title": "" }, { "paragraph_id": 3, "text": "The English word ethics is derived from the Ancient Greek word ēthikós (ἠθικός), meaning \"relating to one's character\", which itself comes from the root word êthos (ἦθος) meaning \"character, moral nature\". This word was transferred into Latin as ethica and then into French as éthique, from which it was transferred into English.", "title": "Definition " }, { "paragraph_id": 4, "text": "Rushworth Kidder states that \"standard definitions of ethics have typically included such phrases as 'the science of the ideal human character' or 'the science of moral duty'\". Richard William Paul and Linda Elder define ethics as \"a set of concepts and principles that guide us in determining what behavior helps or harms sentient creatures\". The Cambridge Dictionary of Philosophy states that the word \"ethics\" is \"commonly used interchangeably with 'morality' ... and sometimes it is used more narrowly to mean the moral principles of a particular tradition, group or individual.\" Paul and Elder state that most people confuse ethics with behaving in accordance with social conventions, religious beliefs, the law, and do not treat ethics as a stand-alone concept.", "title": "Definition " }, { "paragraph_id": 5, "text": "The word ethics in English refers to several things. It can refer to philosophical ethics or moral philosophy—a project that attempts to use reason to answer various kinds of ethical questions. As the English moral philosopher Bernard Williams writes, attempting to explain moral philosophy: \"What makes an inquiry a philosophical one is reflective generality and a style of argument that claims to be rationally persuasive.\" Williams describes the content of this area of inquiry as addressing the very broad question, \"how one should live\". Ethics can also refer to a common human ability to think about ethical problems that is not particular to philosophy. As bioethicist Larry Churchill has written: \"Ethics, understood as the capacity to think critically about moral values and direct our actions in terms of such values, is a generic human capacity.\"", "title": "Definition " }, { "paragraph_id": 6, "text": "Metaethics is the branch of ethics that examines the nature, foundations, and scope of moral judgments, concepts, and values. It is not interested in what actions are right or wrong but in what it means for an action to be right or wrong and whether moral judgments are objective and can be true at all. It further examines the meaning of morality and moral terms. Metaethics is a metatheory that operates on a higher level of abstraction than normative ethics by investigating its underlying background assumptions. 
Metaethical theories usually do not directly take substantive positions regarding normative ethical theories but they can influence them nonetheless by questioning the foundational principles on which they rest.", "title": "Metaethics" }, { "paragraph_id": 7, "text": "Metaethics overlaps with various branches of philosophy. On the level of ontology, it is concerned with the metaphysical status of moral values and principles. In relation to semantics, it asks what the meaning of moral terms is and whether moral statements have a truth value. The epistemological side of metaethics discusses whether and how people can acquire moral knowledge. Metaethics further covers psychological and anthropological considerations in regard to how moral judgments motivate people to act and how to explain cross-cultural differences in moral assessments.", "title": "Metaethics" }, { "paragraph_id": 8, "text": "A key debate in metaethics concerns the ontological status of morality and encompasses the question of whether ethical values and principles form part of reality. It examines whether moral properties exist as objective features independent of the human mind and culture rather than as subjective constructs or expressions of personal preferences and cultural norms.", "title": "Metaethics" }, { "paragraph_id": 9, "text": "Moral realists accept the claim that there are objective moral facts. This view implies that moral values are mind-independent aspects of reality and that there is an absolute fact about whether a given action is right or wrong. A consequence of this view is that moral requirements have the same ontological status as non-moral facts: it is an objective fact whether there is an obligation to keep a promise just as there is an objective fact whether a thing has a black color. Moral realism is often associated with the claim that there are universal ethical principles that apply equally to everyone. It implies that if two people disagree about a moral evaluation then at least one of them is wrong. This observation is sometimes taken as an argument against moral realism since moral disagreement is widespread and concerns most fields.", "title": "Metaethics" }, { "paragraph_id": 10, "text": "Moral relativists reject the idea that morality is an objective feature of reality. They argue instead that moral principles are human inventions. This means that a behavior is not objectively right or wrong but only subjectively right or wrong relative to a certain standpoint. Moral standpoints may differ between persons, cultures, and historical periods. For example, moral statements like \"slavery is wrong\" or \"suicide is permitted\" may be true in one culture and false in another. This position can be understood in analogy to Einstein's theory of relativity, which states that the magnitude of physical properties like mass, length, and duration depends on the frame of reference of the observer. Some moral relativists hold that moral systems are constructed to serve certain goals such as social coordination. According to this view, different societies and different social groups within a society construct different moral systems based on their diverging purposes. A different explanation states that morality arises from moral emotions, which people project onto the external world.", "title": "Metaethics" }, { "paragraph_id": 11, "text": "Moral nihilists deny the existence of moral facts. 
They are opposed to both objective moral facts defended by moral realism and subjective moral facts defended by moral relativism. They believe that the basic assumptions underlying moral claims are misguided. Some moral nihilists, like Friedrich Nietzsche, conclude from this that anything is allowed. A slightly different view emphasizes that moral nihilism is not itself a moral position about what is allowed and prohibited but the rejection of any moral position. Moral nihilism agrees with moral relativism that there are different standpoints according to which people judge actions to be right or wrong. However, it disagrees that this practice involves a form of morality and understands it instead as one among many types of human practices.", "title": "Metaethics" }, { "paragraph_id": 12, "text": "An influential debate among moral realists is between naturalism and non-naturalism. Naturalism states that moral properties are natural properties and are in this respect similar to the natural properties accessible to empirical observation and investigated by the natural sciences, like color and shape. Some moral naturalists hold that moral properties are a unique and basic type of natural property. Another view states that moral properties are real but not a fundamental part of reality and can be reduced to other natural properties, for example, concerning what causes pleasure and pain.", "title": "Metaethics" }, { "paragraph_id": 13, "text": "Non-naturalism accepts that moral properties form part of reality and argues that moral features are not identical or reducible to natural properties. This view is usually motivated by the idea that moral properties are unique because they express normative features or what should be the case. Proponents of this position often emphasize this uniqueness by claiming that it is a fallacy to define ethics in terms of natural entities or to infer prescriptive from descriptive statements.", "title": "Metaethics" }, { "paragraph_id": 14, "text": "The metaethical debate between cognitivism and non-cognitivism belongs to the field of semantics and concerns the meaning of moral statements. According to cognitivism, moral statements like \"Abortion is morally wrong\" and \"Going to war is never morally justified\" are truth-apt. This means that they all have a truth value: they are either true or false. Cognitivism only claims that moral statements have a truth value but is not interested in which truth value they have. It is often seen as the default position since moral statements resemble other statements, like \"Abortion is a medical procedure\" or \"Going to war is a political decision\", which have a truth value.", "title": "Metaethics" }, { "paragraph_id": 15, "text": "The semantic position of cognitivism is closely related to the ontological position of moral realism and philosophers who accept one often accept the other as well. An exception is J. L. Mackie's error theory, which combines cognitivism with moral nihilism by claiming that all moral statements are false because there are no moral facts.", "title": "Metaethics" }, { "paragraph_id": 16, "text": "Non-cognitivism is the view that moral statements lack a truth value. According to this view, the statement \"Murder is wrong\" is neither true nor false. Some non-cognitivists claim that moral statements have no meaning at all. A different interpretation is that they express other types of meaning contents. Emotivism holds that they articulate emotional attitudes. 
According to this view, the statement \"Murder is wrong\" expresses that the speaker has negative moral attitudes towards murder or dislikes it. Prescriptivism, by contrast, understands moral statements as commands. According to this view, stating that \"Murder is wrong\" expresses a command like \"Do not commit murder\".", "title": "Metaethics" }, { "paragraph_id": 17, "text": "The epistemology of ethics studies whether or how one can know moral truths. Foundationalist views state that some moral beliefs are basic and do not require further justification. Ethical intuitionism is one foundationalist view that states that humans have a special cognitive faculty through which they can know right from wrong. Intuitionists often argue that general moral truths, like \"lying is wrong\", are self-evident and that it is possible to know them a priori without relying on empirical experience. A different foundationalist view relies not on general intuitions but on particular observations. It holds that if people are confronted with a concrete moral situation, they can perceive whether right or wrong conduct was involved.", "title": "Metaethics" }, { "paragraph_id": 18, "text": "In contrast to foundationalists, coherentists hold that there are no basic moral beliefs. They argue that beliefs form a complex network and mutually support and justify one another. According to this view, a moral belief can only amount to knowledge if it coheres with the rest of the beliefs in the network. Moral skeptics reject the idea that moral knowledge is possible by arguing that people are unable to distinguish between right and wrong behavior. Moral skepticism is often criticized based on the claim that it leads to immoral behavior.", "title": "Metaethics" }, { "paragraph_id": 19, "text": "On the level of psychology, metaethics is interested in how moral beliefs and experiences affect behavior. According to motivational internalists, there is a direct link between moral judgments and action. This means that every judgment about what is right motivates the person to act accordingly. For example, Socrates defends a strong form of motivational internalism by holding that a person can only perform an evil deed if they are unaware that it is evil. Weaker forms of motivational internalism allow that people can act against moral judgments, for example, because of weakness of the will. Motivational externalists accept that people can judge a behavior to be morally required without feeling a reason to engage in it. This means that moral judgments do not always provide motivational force. The debate between internalism and externalism is relevant for explaining the behavior of psychopaths or sociopaths, who fail either to judge that a behavior is wrong or to translate their judgment into action. A closely related question is whether moral judgments can provide motivation on their own or need to be accompanied by other mental states, such as a desire to act morally.", "title": "Metaethics" }, { "paragraph_id": 20, "text": "Normative ethics is the study of ethical action. It is the branch of ethics that investigates the set of questions that arise when considering how one ought to act, morally speaking. Normative ethics is distinct from metaethics because normative ethics examines standards for the rightness and wrongness of actions, while metaethics studies the meaning of moral language and the metaphysics of moral facts. 
Normative ethics is also distinct from descriptive ethics, as the latter is an empirical investigation of people's moral beliefs. To put it another way, descriptive ethics would be concerned with determining what proportion of people believe that killing is always wrong, while normative ethics is concerned with whether it is correct to hold such a belief. Hence, normative ethics is sometimes called prescriptive rather than descriptive. However, on certain versions of the metaethical view called moral realism, moral facts are both descriptive and prescriptive at the same time.", "title": "Normative ethics" }, { "paragraph_id": 21, "text": "Traditionally, normative ethics (also known as moral theory) was the study of what makes actions right and wrong. These theories offered an overarching moral principle one could appeal to in resolving difficult moral decisions.", "title": "Normative ethics" }, { "paragraph_id": 22, "text": "At the turn of the 20th century, moral theories became more complex and were no longer concerned solely with rightness and wrongness, but were interested in many different kinds of moral status. During the middle of the century, the study of normative ethics declined as metaethics grew in prominence. This focus on metaethics was in part caused by an intense linguistic focus in analytic philosophy and by the popularity of logical positivism.", "title": "Normative ethics" }, { "paragraph_id": 23, "text": "Virtue ethics describes the character of a moral agent as a driving force for ethical behavior, and it is used to describe the ethics of early Greek philosophers such as Socrates and Aristotle, and ancient Indian philosophers such as Valluvar. Socrates (469–399 BC) was one of the first Greek philosophers to encourage both scholars and the common citizen to turn their attention from the outside world to the condition of humankind. In this view, knowledge bearing on human life was placed highest, while all other knowledge was secondary. Self-knowledge was considered necessary for success and inherently an essential good. A self-aware person will act completely within his capabilities to his pinnacle, while an ignorant person will flounder and encounter difficulty. To Socrates, a person must become aware of every fact (and its context) relevant to his existence, if he wishes to attain self-knowledge. He posited that people will naturally do what is good if they know what is right. Evil or bad actions are the results of ignorance. If a criminal was truly aware of the intellectual and spiritual consequences of his or her actions, he or she would neither commit nor even consider committing those actions. Any person who knows what is truly right will automatically do it, according to Socrates. While he correlated knowledge with virtue, he similarly equated virtue with joy. The truly wise man will know what is right, do what is good, and therefore be happy.", "title": "Normative ethics" }, { "paragraph_id": 24, "text": "Aristotle (384–323 BC) posited an ethical system that may be termed \"virtuous.\" In Aristotle's view, when a person acts in accordance with virtue this person will do good and be content. Unhappiness and frustration are caused by doing wrong, leading to failed goals and a poor life. Therefore, it is imperative for people to act in accordance with virtue, which is only attainable by the practice of the virtues in order to be content and complete. Happiness was held to be the ultimate goal. 
All other things, such as civic life or wealth, were only made worthwhile and of benefit when employed in the practice of the virtues. The practice of the virtues is the surest path to happiness. Aristotle asserted that the soul of man had three natures: body (physical/metabolism), animal (emotional/appetite), and rational (mental/conceptual). Physical nature can be assuaged through exercise and care; emotional nature through indulgence of instinct and urges; and mental nature through human reason and developed potential. Rational development was considered the most important, as essential to philosophical self-awareness, and as uniquely human. Moderation was encouraged, with the extremes seen as degraded and immoral. For example, courage is the moderate virtue between the extremes of cowardice and recklessness. Man should not simply live, but live well with conduct governed by virtue. This is regarded as difficult, as virtue denotes doing the right thing, in the right way, at the right time, for the right reason.", "title": "Normative ethics" }, { "paragraph_id": 25, "text": "Valluvar (before 5th century CE) keeps virtue, or aṟam (dharma) as he calls it, as the cornerstone throughout the writing of the Kural literature. While religious scriptures generally consider aṟam as divine in nature, Valluvar describes it as a way of life rather than any spiritual observance, a way of harmonious living that leads to universal happiness. Contrary to what other contemporary works say, Valluvar holds that aṟam is common for all, irrespective of whether the person is a bearer of palanquin or the rider in it. Valluvar considered justice as a facet of aṟam. While ancient Greek philosophers such as Plato, Aristotle, and their descendants opined that justice cannot be defined and that it was a divine mystery, Valluvar positively suggested that a divine origin is not required to define the concept of justice. In the words of V. R. Nedunchezhiyan, justice according to Valluvar \"dwells in the minds of those who have knowledge of the standard of right and wrong; so too deceit dwells in the minds which breed fraud.\"", "title": "Normative ethics" }, { "paragraph_id": 26, "text": "The Stoic philosopher Epictetus posited that the greatest good was contentment and serenity. Peace of mind, or apatheia, was of the highest value; self-mastery over one's desires and emotions leads to spiritual peace. The \"unconquerable will\" is central to this philosophy. The individual's will should be independent and inviolate. Allowing a person to disturb the mental equilibrium is, in essence, offering yourself in slavery. If a person is free to anger you at will, you have no control over your internal world, and therefore no freedom. Freedom from material attachments is also necessary. If a thing breaks, the person should not be upset, but realize it was a thing that could break. Similarly, if someone should die, those close to them should hold to their serenity because the loved one was made of flesh and blood destined to death. Stoic philosophy says to accept things that cannot be changed, resigning oneself to the existence and enduring in a rational fashion. Death is not feared. People do not \"lose\" their life, but instead \"return\", for they are returning to God (who initially gave what the person is as a person). Epictetus said difficult problems in life should not be avoided, but rather embraced. 
They are spiritual exercises needed for the health of the spirit, just as physical exercise is required for the health of the body. He also stated that sex and sexual desire are to be avoided as the greatest threat to the integrity and equilibrium of a man's mind. Abstinence is highly desirable. Epictetus said remaining abstinent in the face of temptation was a victory for which a man could be proud.", "title": "Normative ethics" }, { "paragraph_id": 27, "text": "Modern virtue ethics was popularized during the late 20th century in large part due to a revival of Aristotelianism, and as a response to G.E.M. Anscombe's \"Modern Moral Philosophy\". Anscombe argues that consequentialist and deontological ethics are only feasible as universal theories if the two schools ground themselves in divine law. As a deeply devoted Christian herself, Anscombe proposed that either those who do not give ethical credence to notions of divine law take up virtue ethics, which does not necessitate universal laws as agents themselves are investigated for virtue or vice and held up to \"universal standards\", or that those who wish to be utilitarian or consequentialist ground their theories in religious conviction. Alasdair MacIntyre, who wrote the book After Virtue, was a key contributor and proponent of modern virtue ethics, although some claim that MacIntyre supports a relativistic account of virtue based on cultural norms, not objective standards. Martha Nussbaum, a contemporary virtue ethicist, objects to MacIntyre's relativism, among that of others, and responds to relativist objections to form an objective account in her work \"Non-Relative Virtues: An Aristotelian Approach\". However, Nussbaum's accusation of relativism appears to be a misreading. In Whose Justice? Which Rationality?, MacIntyre's ambition of taking a rational path beyond relativism was quite clear when he stated \"rival claims made by different traditions […] are to be evaluated […] without relativism\" (p. 354) because indeed \"rational debate between and rational choice among rival traditions is possible\" (p. 352). Complete Conduct Principles for the 21st Century blended Eastern and Western virtue ethics, with some modifications to suit the 21st century, and formed a part of contemporary virtue ethics. Mortimer J. Adler described Aristotle's Nicomachean Ethics as a \"unique book in the Western tradition of moral philosophy, the only ethics that is sound, practical, and undogmatic.\"", "title": "Normative ethics" }, { "paragraph_id": 28, "text": "One major trend in contemporary virtue ethics is the Modern Stoicism movement.", "title": "Normative ethics" }, { "paragraph_id": 29, "text": "Hedonism posits that the principal ethic is maximizing pleasure and minimizing pain. There are several schools of Hedonist thought ranging from those advocating the indulgence of even momentary desires to those teaching a pursuit of spiritual bliss. In their consideration of consequences, they range from those advocating self-gratification regardless of the pain and expense to others, to those stating that the most ethical pursuit maximizes pleasure and happiness for the most people.", "title": "Normative ethics" }, { "paragraph_id": 30, "text": "Founded by Aristippus of Cyrene, Cyrenaics supported immediate gratification or pleasure. \"Eat, drink and be merry, for tomorrow we die.\" Even fleeting desires should be indulged, for fear the opportunity should be forever lost.
There was little to no concern with the future, the present dominating in the pursuit of immediate pleasure. Cyrenaic hedonism encouraged the pursuit of enjoyment and indulgence without hesitation, believing pleasure to be the only good.", "title": "Normative ethics" }, { "paragraph_id": 31, "text": "Epicurean ethics is a hedonist form of virtue ethics. Epicurus \"presented a sustained argument that pleasure, correctly understood, will coincide with virtue.\" He rejected the extremism of the Cyrenaics, believing some pleasures and indulgences to be detrimental to human beings. Epicureans observed that indiscriminate indulgence sometimes resulted in negative consequences. Some experiences were therefore rejected out of hand, and some unpleasant experiences endured in the present to ensure a better life in the future. To Epicurus, the summum bonum, or greatest good, was prudence, exercised through moderation and caution. Excessive indulgence can be destructive to pleasure and can even lead to pain. For example, eating one food too often makes a person lose a taste for it. Eating too much food at once leads to discomfort and ill-health. Pain and fear were to be avoided. Living was essentially good, barring pain and illness. Death was not to be feared. Fear was considered the source of most unhappiness. Conquering the fear of death would naturally lead to a happier life. Epicurus reasoned if there were an afterlife and immortality, the fear of death was irrational. If there was no life after death, then the person would not be alive to suffer, fear, or worry; he would be non-existent in death. It is irrational to fret over circumstances that do not exist, such as one's state of death in the absence of an afterlife.", "title": "Normative ethics" }, { "paragraph_id": 32, "text": "State consequentialism, also known as Mohist consequentialism, is an ethical theory that evaluates the moral worth of an action based on how much it contributes to the basic goods of a state. The Stanford Encyclopedia of Philosophy describes Mohist consequentialism, dating back to the 5th century BC, as \"a remarkably sophisticated version based on a plurality of intrinsic goods taken as constitutive of human welfare\". Unlike utilitarianism, which views pleasure as a moral good, \"the basic goods in Mohist consequentialist thinking are … order, material wealth, and increase in population\". During Mozi's era, war and famines were common, and population growth was seen as a moral necessity for a harmonious society. The \"material wealth\" of Mohist consequentialism refers to basic needs like shelter and clothing, and the \"order\" of Mohist consequentialism refers to Mozi's stance against warfare and violence, which he viewed as pointless and a threat to social stability.", "title": "Normative ethics" }, { "paragraph_id": 33, "text": "Stanford sinologist David Shepherd Nivison, in The Cambridge History of Ancient China, writes that the moral goods of Mohism \"are interrelated: more basic wealth, then more reproduction; more people, then more production and wealth … if people have plenty, they would be good, filial, kind, and so on unproblematically.\" The Mohists believed that morality is based on \"promoting the benefit of all under heaven and eliminating harm to all under heaven\". In contrast to Bentham's views, state consequentialism is not utilitarian because it is not hedonistic or individualistic. 
The importance of outcomes that are good for the community outweighs the importance of individual pleasure and pain.", "title": "Normative ethics" }, { "paragraph_id": 34, "text": "Consequentialism refers to moral theories that hold that the consequences of a particular action form the basis for any valid moral judgment about that action (or create a structure for judgment, see rule consequentialism). Thus, from a consequentialist standpoint, a morally right action is one that produces a good outcome, or consequence. This view is often expressed as the aphorism \"The ends justify the means\".", "title": "Normative ethics" }, { "paragraph_id": 35, "text": "The term \"consequentialism\" was coined by G.E.M. Anscombe in her essay \"Modern Moral Philosophy\" in 1958, to describe what she saw as the central error of certain moral theories, such as those propounded by Mill and Sidgwick. Since then, the term has become common in English-language ethical theory.", "title": "Normative ethics" }, { "paragraph_id": 36, "text": "The defining feature of consequentialist moral theories is the weight given to the consequences in evaluating the rightness and wrongness of actions. In consequentialist theories, the consequences of an action or rule generally outweigh other considerations. Apart from this basic outline, there is little else that can be unequivocally said about consequentialism as such. However, there are some questions that many consequentialist theories address:", "title": "Normative ethics" }, { "paragraph_id": 37, "text": "One way to divide various consequentialisms is by the many types of consequences that are taken to matter most, that is, which consequences count as good states of affairs. According to utilitarianism, a good action is one that results in an increase in a positive effect, and the best action is one that results in that effect for the greatest number. Closely related is eudaimonic consequentialism, according to which a full, flourishing life, which may or may not be the same as enjoying a great deal of pleasure, is the ultimate aim. Similarly, one might adopt an aesthetic consequentialism, in which the ultimate aim is to produce beauty. However, one might fix on non-psychological goods as the relevant effect. Thus, one might pursue an increase in material equality or political liberty instead of something like the more ephemeral \"pleasure\". Other theories adopt a package of several goods, all to be promoted equally. Whether a particular consequentialist theory focuses on a single good or many, conflicts and tensions between different good states of affairs are to be expected and must be adjudicated.", "title": "Normative ethics" }, { "paragraph_id": 38, "text": "Utilitarianism is an ethical theory that argues the proper course of action is one that maximizes a positive effect, such as \"happiness\", \"welfare\", or the ability to live according to personal preferences. Jeremy Bentham and John Stuart Mill are influential proponents of this school of thought. In A Fragment on Government Bentham says 'it is the greatest happiness of the greatest number that is the measure of right and wrong' and describes this as a fundamental axiom. In An Introduction to the Principles of Morals and Legislation he talks of 'the principle of utility' but later prefers \"the greatest happiness principle\".", "title": "Normative ethics" }, { "paragraph_id": 39, "text": "Utilitarianism is the paradigmatic example of a consequentialist moral theory.
This form of utilitarianism holds that the morally correct action is the one that produces the best outcome for all people affected by the action. John Stuart Mill, in his exposition of utilitarianism, proposed a hierarchy of pleasures, meaning that the pursuit of certain kinds of pleasure is more highly valued than the pursuit of other pleasures. Other noteworthy proponents of utilitarianism are neuroscientist Sam Harris, author of The Moral Landscape, and moral philosopher Peter Singer, author of, amongst other works, Practical Ethics.", "title": "Normative ethics" }, { "paragraph_id": 40, "text": "The major division within utilitarianism is between act utilitarianism and rule utilitarianism. In act utilitarianism, the principle of utility applies directly to each alternative act in a situation of choice. The right act is the one that brings about the best results (or the least bad results). In rule utilitarianism, the principle of utility determines the validity of rules of conduct (moral principles). A rule like promise-keeping is established by looking at the consequences of a world in which people break promises at will and a world in which promises are binding. Right and wrong are the following or breaking of rules that are sanctioned by their utilitarian value. A proposed \"middle ground\" between these two types is two-level utilitarianism, where rules are applied in ordinary circumstances, but with an allowance to choose actions outside of such rules when unusual situations call for it.", "title": "Normative ethics" }, { "paragraph_id": 41, "text": "Deontological ethics or deontology (from Greek δέον, deon, \"obligation, duty\"; and -λογία, -logia) is an approach to ethics that determines goodness or rightness from examining acts, or the rules and duties that the person doing the act strove to fulfill. This is in contrast to consequentialism, in which rightness is based on the consequences of an act, and not the act by itself. Under deontology, an act may be considered right even if it produces a bad consequence, if it follows the rule or moral law. According to the deontological view, people have a duty to act in ways that are deemed inherently good (\"truth-telling\" for example), or follow an objectively obligatory rule (as in rule utilitarianism).", "title": "Normative ethics" }, { "paragraph_id": 42, "text": "Immanuel Kant's theory of ethics is considered deontological for several different reasons. First, Kant argues that to act in the morally right way, people must act from duty (Pflicht). Second, Kant argued that it was not the consequences of actions that make them right or wrong but the motives of the person who carries out the action.", "title": "Normative ethics" }, { "paragraph_id": 43, "text": "Kant's argument that to act in the morally right way one must act purely from duty begins with an argument that the highest good must be both good in itself and good without qualification. Something is \"good in itself\" when it is intrinsically good, and \"good without qualification\", when the addition of that thing never makes a situation ethically worse. Kant then argues that those things that are usually thought to be good, such as intelligence, perseverance and pleasure, fail to be either intrinsically good or good without qualification. Pleasure, for example, appears not to be good without qualification, because when people take pleasure in watching someone suffer, this seems to make the situation ethically worse.
He concludes that there is only one thing that is truly good:", "title": "Normative ethics" }, { "paragraph_id": 44, "text": "Nothing in the world—indeed nothing even beyond the world—can possibly be conceived which could be called good without qualification except a good will.", "title": "Normative ethics" }, { "paragraph_id": 45, "text": "Kant then argues that the consequences of an act of willing cannot be used to determine that the person has a good will; good consequences could arise by accident from an action that was motivated by a desire to cause harm to an innocent person, and bad consequences could arise from an action that was well-motivated. Instead, he claims, a person has a good will when he 'acts out of respect for the moral law'. People 'act out of respect for the moral law' when they act in some way because they have a duty to do so. So, the only thing that is truly good in itself is a good will, and a good will is only good when the willer chooses to do something because it is that person's duty, i.e. out of \"respect\" for the law. He defines respect as \"the concept of a worth which thwarts my self-love\".", "title": "Normative ethics" }, { "paragraph_id": 46, "text": "Kant's three significant formulations of the categorical imperative are:", "title": "Normative ethics" }, { "paragraph_id": 47, "text": "Kant argued that the only absolutely good thing is a good will, and so the single determining factor of whether an action is morally right is the will, or motive of the person doing it. If they are acting on a bad maxim, e.g. \"I will lie\", then their action is wrong, even if some good consequences come of it. In his essay, On a Supposed Right to Lie Because of Philanthropic Concerns, arguing against the position of Benjamin Constant, Des réactions politiques, Kant states that \"Hence a lie defined merely as an intentionally untruthful declaration to another man does not require the additional condition that it must do harm to another, as jurists require in their definition (mendacium est falsiloquium in praeiudicium alterius). For a lie always harms another; if not some human being, then it nevertheless does harm to humanity in general, inasmuch as it vitiates the very source of right [Rechtsquelle] ... All practical principles of right must contain rigorous truth ... This is because such exceptions would destroy the universality on account of which alone they bear the name of principles.\"", "title": "Normative ethics" }, { "paragraph_id": 48, "text": "Although not all deontologists are religious, some believe in the 'divine command theory', which is actually a cluster of related theories which essentially state that an action is right if God has decreed that it is right. According to Ralph Cudworth, an English philosopher, William of Ockham, René Descartes, and eighteenth-century Calvinists all accepted various versions of this moral theory, as they all held that moral obligations arise from God's commands. The Divine Command Theory is a form of deontology because, according to it, the rightness of any action depends upon that action being performed because it is a duty, not because of any good consequences arising from that action. If God commands people not to work on Sabbath, then people act rightly if they do not work on Sabbath because God has commanded that they do not do so. If they do not work on Sabbath because they are lazy, then their action is not truly speaking \"right\", even though the actual physical action performed is the same.
If God commands not to covet a neighbor's goods, this theory holds that it would be immoral to do so, even if coveting provides the beneficial outcome of a drive to succeed or do well.", "title": "Normative ethics" }, { "paragraph_id": 49, "text": "One thing that clearly distinguishes Kantian deontologism from divine command deontology is that Kantianism maintains that man, as a rational being, makes the moral law universal, whereas divine command maintains that God makes the moral law universal.", "title": "Normative ethics" }, { "paragraph_id": 50, "text": "German philosopher Jürgen Habermas has proposed a theory of discourse ethics that he states is a descendant of Kantian ethics. He proposes that action should be based on communication between those involved, in which their interests and intentions are discussed so they can be understood by all. Rejecting any form of coercion or manipulation, Habermas believes that agreement between the parties is crucial for a moral decision to be reached. Like Kantian ethics, discourse ethics is a cognitive ethical theory, in that it supposes that truth and falsity can be attributed to ethical propositions. It also formulates a rule by which ethical actions can be determined and proposes that ethical actions should be universalizable, in a similar way to Kant's ethics.", "title": "Normative ethics" }, { "paragraph_id": 51, "text": "Habermas argues that his ethical theory is an improvement on Kant's ethics. He rejects the dualistic framework of Kant's ethics. Kant distinguished between the phenomenal world, which can be sensed and experienced by humans, and the noumenal, or spiritual, world, which is inaccessible to humans. This dichotomy was necessary for Kant because it could explain the autonomy of a human agent: although a human is bound in the phenomenal world, their actions are free in the noumenal world. For Habermas, morality arises from discourse, which is made necessary by humans' rationality and needs, rather than by their freedom.", "title": "Normative ethics" }, { "paragraph_id": 52, "text": "Associated with the pragmatists Charles Sanders Peirce, William James, and especially John Dewey, pragmatic ethics holds that moral correctness evolves similarly to scientific knowledge: socially over the course of many lifetimes. Thus, we should prioritize social reform over attempts to account for consequences, individual virtue or duty (although these may be worthwhile attempts, if social reform is provided for).", "title": "Normative ethics" }, { "paragraph_id": 53, "text": "Care ethics contrasts with more well-known ethical models, such as consequentialist theories (e.g. utilitarianism) and deontological theories (e.g., Kantian ethics) in that it seeks to incorporate traditionally feminized virtues and values that—proponents of care ethics contend—are absent in such traditional models of ethics. These values include the importance of empathetic relationships and compassion.", "title": "Normative ethics" }, { "paragraph_id": 54, "text": "Care-focused feminism is a branch of feminist thought, informed primarily by ethics of care as developed by Carol Gilligan and Nel Noddings. This body of theory is critical of how caring is socially assigned to women, and consequently devalued. They write, \"Care-focused feminists regard women's capacity for care as a human strength,\" that should be taught to and expected of men as well as women.
Noddings proposes that ethical caring has the potential to be a more concrete evaluative model of moral dilemma than an ethic of justice. Noddings' care-focused feminism requires practical application of relational ethics, predicated on an ethic of care.", "title": "Normative ethics" }, { "paragraph_id": 55, "text": "The 'metafeminist' theory of the matrixial gaze and the matrixial time-space, coined and developed by Bracha L. Ettinger since 1985, articulates a revolutionary philosophical approach that, in \"daring to approach\", to use Griselda Pollock's description of Ettinger's ethical turn, \"the prenatal with the pre-maternal encounter\", violence toward women at war, and the Shoah, has philosophically established the rights of each female subject over her own reproductive body, and offered a language to relate to human experiences which escape the phallic domain. The matrixial sphere is a psychic and symbolic dimension that the 'phallic' language and regulations cannot control. In Ettinger's model, the relations between self and other are of neither assimilation nor rejection but 'coemergence'. In her 1991 conversation with Emmanuel Levinas, Ettinger proposes that the source of human Ethics is the feminine-maternal and feminine-pre-maternal matrixial encounter-event. Sexuality and maternality coexist and are not in contradiction (the contradiction established by Sigmund Freud and Jacques Lacan), and the feminine is not an absolute alterity (the alterity established by Jacques Lacan and Emmanuel Levinas). With the 'originary response-ability', 'wit(h)nessing', 'borderlinking', 'communicaring', 'com-passion', 'seduction into life' and other processes invested by affects that occur in the Ettingerian matrixial time-space, the feminine is presented as the source of humanized Ethics in all genders. Compassion and seduction into life occur earlier than the primary seduction, which passes through enigmatic signals from the maternal sexuality according to Jean Laplanche, since they are active in 'coemergence' in 'withnessing' for any born subject, prior to its birth. Ettinger suggests to Emmanuel Levinas in their conversations in 1991 that the feminine understood via the matrixial perspective is the heart and the source of Ethics. At the beginning of life, an originary 'fascinance' felt by the infant is related to the passage from response-ability to responsibility, from com-passion to compassion, and from wit(h)nessing to witnessing operated and transmitted by the m/Other. The 'differentiation in jointness' that is at the heart of the matrixial borderspace has deep implications in the relational field and for the ethics of care. The matrixial theory that proposes new ways to rethink sexual difference through the fluidity of boundaries informs aesthetics and ethics of compassion, carrying and non-abandonment in 'subjectivity as encounter-event'. It has become significant in psychoanalysis and in transgender studies.", "title": "Normative ethics" }, { "paragraph_id": 56, "text": "Role ethics is an ethical theory based on family roles. Unlike virtue ethics, role ethics is not individualistic. Morality is derived from a person's relationship with their community. Confucian ethics is an example of role ethics, though this characterization is not uncontested. Confucian roles center around the concept of filial piety or xiao, a respect for family members. According to Roger T.
Ames and Henry Rosemont, \"Confucian normativity is defined by living one's family roles to maximum effect.\" Morality is determined through a person's fulfillment of a role, such as that of a parent or a child. Confucian roles are not rational, and originate through the xin, or human emotions.", "title": "Normative ethics" }, { "paragraph_id": 57, "text": "Anarchist ethics is an ethical theory based on the studies of anarchist thinkers. The biggest contributor to anarchist ethics is Peter Kropotkin.", "title": "Normative ethics" }, { "paragraph_id": 58, "text": "Starting from the premise that the goal of ethical philosophy should be to help humans adapt and thrive in evolutionary terms, Kropotkin's ethical framework uses biology and anthropology as a basis – in order to scientifically establish what will best enable a given social order to thrive biologically and socially – and advocates certain behavioural practices to enhance humanity's capacity for freedom and well-being, namely practices which emphasise solidarity, equality, and justice.", "title": "Normative ethics" }, { "paragraph_id": 59, "text": "Kropotkin argues that ethics itself is evolutionary, and is inherited as a sort of a social instinct through cultural history, and in so doing, he rejects any religious and transcendental explanation of morality. The origin of ethical feeling in both animals and humans can be found, he claims, in the natural fact of \"sociality\" (mutualistic symbiosis), which humans can then combine with the instinct for justice (i.e. equality) and then with the practice of reason to construct a non-supernatural and anarchistic system of ethics. Kropotkin suggests that the principle of equality at the core of anarchism is the same as the Golden rule:", "title": "Normative ethics" }, { "paragraph_id": 60, "text": "This principle of treating others as one wishes to be treated oneself, what is it but the very same principle as equality, the fundamental principle of anarchism? And how can any one manage to believe himself an anarchist unless he practices it? We do not wish to be ruled. And by this very fact, do we not declare that we ourselves wish to rule nobody? We do not wish to be deceived, we wish always to be told nothing but the truth. And by this very fact, do we not declare that we ourselves do not wish to deceive anybody, that we promise to always tell the truth, nothing but the truth, the whole truth? We do not wish to have the fruits of our labor stolen from us. And by that very fact, do we not declare that we respect the fruits of others' labor? By what right indeed can we demand that we should be treated in one fashion, reserving it to ourselves to treat others in a fashion entirely different? Our sense of equality revolts at such an idea.", "title": "Normative ethics" }, { "paragraph_id": 61, "text": "Antihumanists such as Louis Althusser, Michel Foucault and structuralists such as Roland Barthes challenged the possibilities of individual agency and the coherence of the notion of the 'individual' itself. This was on the basis that personal identity was, for the most part, a social construction. As critical theory developed in the later 20th century, post-structuralism sought to problematize human relationships to knowledge and 'objective' reality.
Jacques Derrida argued that access to meaning and the 'real' was always deferred, and sought to demonstrate via recourse to the linguistic realm that \"there is no outside-text/non-text\" (\"il n'y a pas de hors-texte\" is often mistranslated as \"there is nothing outside the text\"); at the same time, Jean Baudrillard theorised that signs and symbols or simulacra mask reality (and eventually the absence of reality itself), particularly in the consumer world.", "title": "Normative ethics" }, { "paragraph_id": 62, "text": "Post-structuralism and postmodernism argue that ethics must study the complex and relational conditions of actions. A simple alignment of ideas of right and particular acts is not possible. There will always be an ethical remainder that cannot be taken into account or often even recognized. Such theorists find narrative (or, following Nietzsche and Foucault, genealogy) to be a helpful tool for understanding ethics because narrative is always about particular lived experiences in all their complexity rather than the assignment of an idea or norm to separate and individual actions.", "title": "Normative ethics" }, { "paragraph_id": 63, "text": "Zygmunt Bauman says postmodernity is best described as modernity without illusion, the illusion being the belief that humanity can be repaired by some ethical principle. Postmodernity can be seen in this light as accepting the messy nature of humanity as unchangeable. In this postmodern world, the means to act collectively and globally to solve large-scale problems have been all but discredited, dismantled or lost. Problems can be handled only locally and each on its own. All problem-handling means building a mini-order at the expense of order elsewhere, and at the cost of rising global disorder as well as depleting the shrinking supplies of resources which make ordering possible. He considers Emmanuel Levinas's ethics as postmodern. Unlike the modern ethical philosophy which leaves the Other on the outside of the self as an ambivalent presence, Levinas's philosophy readmits her as a neighbor and as a crucial character in the process through which the moral self comes into its own.", "title": "Normative ethics" }, { "paragraph_id": 64, "text": "David Couzens Hoy states that Emmanuel Levinas's writings on the face of the Other and Derrida's meditations on the relevance of death to ethics are signs of the \"ethical turn\" in Continental philosophy that occurred in the 1980s and 1990s. Hoy describes post-critique ethics as the \"obligations that present themselves as necessarily to be fulfilled but are neither forced on one nor enforceable\".", "title": "Normative ethics" }, { "paragraph_id": 65, "text": "Hoy's post-critique model uses the term ethical resistance. Examples of this would be an individual's resistance to consumerism in a retreat to a simpler but perhaps harder lifestyle, or an individual's resistance to a terminal illness. Hoy describes Levinas's account as \"not the attempt to use power against itself, or to mobilize sectors of the population to exert their political power; the ethical resistance is instead the resistance of the powerless\".", "title": "Normative ethics" }, { "paragraph_id": 66, "text": "Hoy concludes that", "title": "Normative ethics" }, { "paragraph_id": 67, "text": "The ethical resistance of the powerless others to our capacity to exert power over them is therefore what imposes unenforceable obligations on us. The obligations are unenforceable precisely because of the other's lack of power.
That actions are at once obligatory and at the same time unenforceable is what puts them in the category of the ethical. Obligations that were enforced would, by virtue of the force behind them, not be freely undertaken and would not be in the realm of the ethical.", "title": "Normative ethics" }, { "paragraph_id": 68, "text": "Applied ethics, also known as practical ethics, is the branch of ethics and applied philosophy that examines concrete moral problems encountered in real-life situations. Unlike normative ethics, it is not concerned with discovering or justifying universal ethical principles. Instead, it studies how those principles can be applied to specific domains of practical life, what consequences they have in these fields, and whether other considerations are relevant.", "title": "Applied ethics" }, { "paragraph_id": 69, "text": "One of the main challenges of applied ethics is to bridge the gap between abstract universal theories and their application to concrete situations. For example, an in-depth understanding of Kantianism or utilitarianism is usually not sufficient to decide how to analyze the moral implications of a medical procedure. One reason is that it may not be clear how the procedure affects the Kantian requirement of respecting everyone's personhood and what the consequences of the procedure are in terms of the greatest good for the greatest number. This difficulty is particularly relevant to applied ethicists who employ a top-down methodology by starting from universal ethical principles and applying them to particular cases within a specific domain. A different approach is to use a bottom-up methodology, which relies on many observations of particular cases to arrive at an understanding of the moral principles relevant to this particular domain. In either case, inquiry into applied ethics is often triggered by ethical dilemmas: cases in which a person is subject to conflicting moral requirements.", "title": "Applied ethics" }, { "paragraph_id": 70, "text": "Applied ethics covers issues pertaining to both the private sphere, like right conduct in the family and close relationships, and the public sphere, like moral problems posed by new technologies and international duties toward future generations. Major branches include bioethics, business ethics, and professional ethics. There are many other branches and their domains of inquiry often overlap.", "title": "Applied ethics" }, { "paragraph_id": 71, "text": "Bioethics is a wide field that covers moral problems associated with living organisms and biological disciplines. A key problem in bioethics concerns the moral status of entities and to what extent this status depends on features such as consciousness, being able to feel pleasure and pain, rationality, and personhood. These differences concern, for example, how to treat non-living entities like rocks and non-sentient entities like plants in contrast to animals and whether humans have a different moral status than other animals. According to anthropocentrism, only humans have a basic moral status. This implies that all other entities only have a derivative moral status to the extent that they affect human life. Sentientism, by contrast, extends an inherent moral status to all sentient beings.
Further positions include biocentrism, which also covers non-sentient lifeforms, and ecocentrism, which states that all of nature has a basic moral status.", "title": "Applied ethics" }, { "paragraph_id": 72, "text": "Bioethics is relevant to various aspects of life and to many professions. It covers a wide range of moral problems associated with topics like abortion, cloning, stem cell research, euthanasia, suicide, animal testing, intensive animal farming, nuclear waste, and air pollution.", "title": "Applied ethics" }, { "paragraph_id": 73, "text": "Bioethics can be divided into medical ethics, animal ethics, and environmental ethics based on whether the ethical problems relate to humans, other animals, or nature in general. Medical ethics is the oldest branch of bioethics and has its origins in the Hippocratic Oath, which establishes ethical guidelines for medical practitioners like a prohibition to harm the patient. A central topic in medical ethics concerns issues associated with the beginning and the end of life. One debate focuses on the question of whether a fetus is a full-fledged person with all the rights associated with this status. For example, some proponents of this view argue that abortion is a form of murder. In relation to the end of life, there are ethical dilemmas concerning whether a person has a right to end their own life in cases of terminal illness and whether a medical practitioner may assist them in doing so. Other topics in medical ethics include medical confidentiality, informed consent, research on human beings, organ transplantation, and access to healthcare.", "title": "Applied ethics" }, { "paragraph_id": 74, "text": "Animal ethics examines how humans should treat other animals. An influential consideration in this field emphasizes the importance of animal welfare while arguing that humans should avoid or minimize the harm done to animals. There is wide agreement that it is wrong to torture animals for fun. The situation is more complicated in cases where harm is inflicted on animals as a side effect of the pursuit of human interests. This happens, for example, during factory farming, when using animals as food, and for research experiments on animals. A key topic in animal ethics is the formulation of animal rights. Animal rights theorists assert that animals have a certain moral status and that humans have an obligation to respect this status when interacting with them. Examples of suggested animal rights include the right to life, the right to be free from unnecessary suffering, and the right to natural behavior in a suitable environment.", "title": "Applied ethics" }, { "paragraph_id": 75, "text": "Environmental ethics deals with moral problems relating to the natural environment including animals, plants, natural resources, and ecosystems. In its widest sense, it also covers the whole biosphere and the cosmos. In the domain of agriculture, this concerns questions like under what circumstances it is acceptable to clear the vegetation of an area to use it for farming and the implications of using genetically modified crops. On a wider scale, environmental ethics addresses the problem of global warming and how people are responsible for this both on an individual and a collective level. 
Environmental ethicists often promote sustainable practices and policies directed at protecting and conserving ecosystems and biodiversity.", "title": "Applied ethics" }, { "paragraph_id": 76, "text": "Business ethics examines the moral implications of business conduct and investigates how ethical principles apply to corporations and organizations. A key topic is corporate social responsibility, which is the responsibility of corporations to act in a manner that benefits society at large. Corporate social responsibility is a complex issue since many stakeholders are directly and indirectly involved in corporate decisions, such as the CEO, the board of directors, and the shareholders. A closely related topic concerns the question of whether corporations themselves, and not just their stakeholders, have moral agency. Business ethics further examines the role of truthfulness, honesty, and fairness in business practices as well as the moral implications of bribery, conflict of interest, protection of investors and consumers, workers' rights, ethical leadership, and corporate philanthropy.", "title": "Applied ethics" }, { "paragraph_id": 77, "text": "Professional ethics is a closely related field that studies ethical principles applying to members of a specific profession, like engineers, medical doctors, lawyers, and teachers. It is a diverse field since different professions often have different responsibilities. Principles applying to many professions include that the professional has the required expertise for the intended work and that they have personal integrity and are trustworthy. Further principles are to serve the interest of their target group, follow client confidentiality, and respect and uphold the client's rights, such as informed consent. More precise requirements often vary between professions. A cornerstone of engineering ethics is to protect the public's safety, health, and wellbeing. Legal ethics emphasizes the importance of respect for justice, personal integrity, and confidentiality. Key factors in journalism ethics include accuracy, truthfulness, independence, and impartiality as well as proper attribution to avoid plagiarism.", "title": "Applied ethics" }, { "paragraph_id": 78, "text": "Many other fields of applied ethics are discussed in the academic literature. Communication ethics covers moral principles in relation to communicative conduct. Two key issues in it are freedom of speech and speech responsibility. Freedom of speech concerns the ability to articulate one's opinions and ideas without the threats of punishment and censorship. Speech responsibility is about being accountable for the consequences of communicative action and inaction. A closely related field is information ethics, which focuses on the moral implications of creating, controlling, disseminating, and using information.", "title": "Applied ethics" }, { "paragraph_id": 79, "text": "The ethics of technology has implications for both communication ethics and information ethics in regard to communication and information technologies. In its widest sense, it examines the moral issues associated with any artifacts created and used for instrumental means, from simple artifacts like spears to high-tech computers and nanotechnology.
Central topics in the ethics of technology include the risks associated with creating new technologies, their responsible use, and questions surrounding the issue of human enhancement through technological means, such as prosthetic limbs, performance-enhancing drugs, and genetic enhancement. Important subfields include computer ethics, ethics of artificial intelligence, machine ethics, ethics of nanotechnology, and nuclear ethics.", "title": "Applied ethics" }, { "paragraph_id": 80, "text": "The ethics of war investigates moral problems in relation to war and violent conflicts. According to just war theory, waging war is morally justified if it fulfills certain conditions. They are commonly divided into requirements concerning the cause to initiate violent activities, such as self-defense, and the way those violent activities are conducted, such as avoiding excessive harm to civilians in the pursuit of legitimate military targets. Military ethics is a closely related field that is interested in the conduct of military personnel. It governs questions of the circumstances under which they are permitted to kill enemies, destroy infrastructure, and put the lives of their own troops at risk. Additional topics are recruitment, training, and discharge of military personnel as well as the procurement of military equipment.", "title": "Applied ethics" }, { "paragraph_id": 81, "text": "Further fields of applied ethics include political ethics, which examines the moral dimensions of political decisions, educational ethics, which covers ethical issues related to proper teaching practices, and sexual ethics, which addresses the moral implications of sexual behavior.", "title": "Applied ethics" }, { "paragraph_id": 82, "text": "Moral psychology is a field of study that began as an issue in philosophy and that is now properly considered part of the discipline of psychology. Some use the term \"moral psychology\" relatively narrowly to refer to the study of moral development. However, others tend to use the term more broadly to include any topics at the intersection of ethics and psychology (and philosophy of mind). Such topics are ones that involve the mind and are relevant to moral issues. Some of the main topics of the field are moral responsibility, moral development, moral character (especially as related to virtue ethics), altruism, psychological egoism, moral luck, and moral disagreement.", "title": "Moral psychology" }, { "paragraph_id": 83, "text": "Evolutionary ethics concerns approaches to ethics (morality) based on the role of evolution in shaping human psychology and behavior. Such approaches may be based in scientific fields such as evolutionary psychology or sociobiology, with a focus on understanding and explaining observed ethical preferences and choices.", "title": "Moral psychology" }, { "paragraph_id": 84, "text": "Descriptive ethics is on the less philosophical end of the spectrum since it seeks to gather particular information about how people live and draw general conclusions based on observed patterns. Abstract and theoretical questions that are more clearly philosophical—such as, \"Is ethical knowledge possible?\"—are not central to descriptive ethics. Descriptive ethics offers a value-free approach to ethics, which defines it as a social science rather than a humanities discipline. Its examination of ethics does not start with a preconceived theory but rather investigates observations of actual choices made by moral agents in practice. 
Some philosophers rely on descriptive ethics and choices made and unchallenged by a society or culture to derive categories, which typically vary by context. This can lead to situational ethics and situated ethics. These philosophers often view aesthetics, etiquette, and arbitration as more fundamental, percolating \"bottom up\" to imply the existence of, rather than explicitly prescribe, theories of value or of conduct. The study of descriptive ethics may include examinations of the following:", "title": "Descriptive ethics" }, { "paragraph_id": 85, "text": "The history of ethics studies how moral philosophy has developed and evolved in the course of history. It has its origin in the ancient civilizations. In ancient Egypt, the concept of Maat was used as an ethical principle to guide behavior and maintain order by emphasizing the importance of truth, balance, and harmony. In ancient India, the Vedas and Upanishads were written as the foundational texts of Hindu philosophy and discussed the role of duty and the consequences of one's actions. Buddhist ethics also originated in ancient India and advocated compassion, non-violence, and the pursuit of enlightenment. Ancient China saw the emergence of Confucianism, which focuses on moral conduct and self-cultivation by acting in accordance with virtues, and Daoism, which teaches that human behavior should be in harmony with the natural order of the universe.", "title": "History" }, { "paragraph_id": 86, "text": "In ancient Greece, Socrates emphasized the importance of inquiry into what a good life is by critically questioning established ideas and exploring concepts like virtue, justice, courage, and wisdom. According to Plato, to lead a good life means that the different parts of the soul are in harmony with each other. For Aristotle, a good life is associated with being happy by cultivating virtues and flourishing. The close relation between right action and happiness was also explored by Hellenistic schools of Epicureanism, which recommended a simple lifestyle without indulging in sensory pleasures, and Stoicism, which advocated living in tune with reason and virtue while practicing self-mastery and becoming immune to disturbing emotions.", "title": "History" }, { "paragraph_id": 87, "text": "Ethical thought in the medieval period was strongly influenced by religious teachings. Christian philosophers interpreted moral principles as divine commands originating from God. Thomas Aquinas developed natural law ethics by claiming that ethical behavior consists in following the laws and order of nature, which he believed were created by God. In the Islamic world, philosophers like Al-Farabi and Avicenna synthesized ancient Greek philosophy with the ethical teachings of Islam while emphasizing the harmony between reason and faith. In medieval India, philosophers like Adi Shankara and Ramanuja saw the practice of spirituality to attain liberation as the highest goal of human behavior.", "title": "History" }, { "paragraph_id": 88, "text": "Moral philosophy in the modern period was characterized by a shift toward a secular approach to ethics. Thomas Hobbes identified self-interest as the primary drive of humans. He concluded that it would lead to \"a war of every man against every man\" unless a social contract is established to avoid this outcome. David Hume thought that only moral sentiments, like empathy, can motivate ethical actions while he saw reason not as a motivating factor but only as what anticipates the consequences of possible actions. 
Immanuel Kant, by contrast, saw reason as the source of morality. He formulated a deontological theory, according to which the ethical value of actions depends on their conformity with moral laws independent of their outcome. These laws take the form of categorical imperatives, which are universal requirements that apply to every situation. Another influential development in this period was the formulation of utilitarianism by Jeremy Bentham and John Stuart Mill. According to the utilitarian doctrine, actions should promote happiness while reducing suffering and the right action is the one that produces the greatest good for the greatest number of people.", "title": "History" }, { "paragraph_id": 89, "text": "An important development in 20th-century analytic philosophy was the emergence of metaethics. Significant early contributions to this field were made by G. E. Moore, who argued that moral values are essentially different from other properties found in the natural world. R. M. Hare followed this idea in formulating his prescriptivism, which states that moral statements are commands that, unlike regular judgments, are neither true nor false. An influential argument for moral realism was made by Derek Parfit, who argued that morality concerns objective features of reality that give people reasons to act in one way or another. Another development in this period was the revival of ancient virtue ethics by philosophers like Philippa Foot. In the field of political philosophy, John Rawls relied on Kantian ethics to analyze social justice as a form of fairness. In continental philosophy, phenomenologists such as Max Scheler and Nicolai Hartmann built ethical systems based on the claim that values have objective reality that can be investigated using the phenomenological method. Existentialists like Jean-Paul Sartre, by contrast, held that values are created by humans and explored the consequences of this view in relation to individual freedom, responsibility, and authenticity. This period also saw the emergence of feminist ethics, which questions traditional ethical assumptions associated with a male perspective and puts alternative concepts, like care, at the center.", "title": "History" } ]
Ethics or moral philosophy is a branch of philosophy that "involves systematizing, defending, and recommending concepts of right and wrong behavior". The field of ethics, along with aesthetics, concerns matters of value; these fields comprise the branch of philosophy called axiology. Ethics seeks to resolve questions of human morality by defining concepts such as good and evil, right and wrong, virtue and vice, justice and crime. As a field of intellectual inquiry, moral philosophy is related to the fields of moral psychology, descriptive ethics, and value theory. Three major areas of study within ethics recognized today are: Metaethics, concerning the theoretical meaning and reference of moral propositions, and how their truth values can be determined; Normative ethics, concerning the practical means of determining a moral course of action; Applied ethics, concerning what a person is obligated to do in a specific situation or a particular domain of action.
2001-08-28T06:03:53Z
2023-12-31T07:22:45Z
[ "Template:Short description", "Template:-\"", "Template:Dead link", "Template:PhilPapers", "Template:Snd", "Template:Cite web", "Template:Cite journal", "Template:Cite dictionary", "Template:Other uses", "Template:Anchor", "Template:Blockquote", "Template:Synthesis", "Template:Cite news", "Template:Multiref", "Template:Refend", "Template:Lang", "Template:Library resources box", "Template:Use mdy dates", "Template:Sfn", "Template:Ethics", "Template:Authority control", "Template:See also", "Template:Cite IEP", "Template:Librivox book", "Template:Webarchive", "Template:Harvnb", "Template:Ethical frameworks sidebar", "Template:Rp", "Template:Div col", "Template:Reflist", "Template:Sister project links", "Template:InPho", "Template:Philosophy sidebar", "Template:Main", "Template:Div col end", "Template:Refbegin", "Template:ISBN", "Template:Philosophy topics", "Template:Cn", "Template:Citation needed", "Template:Quote", "Template:Cite book" ]
https://en.wikipedia.org/wiki/Ethics
9,259
Equivalence relation
In mathematics, an equivalence relation is a binary relation that is reflexive, symmetric and transitive. The equipollence relation between line segments in geometry is a common example of an equivalence relation. A simpler example is equality. Any number a is equal to itself (reflexive). If a = b, then b = a (symmetric). If a = b and b = c, then a = c (transitive). Each equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class. Various notations are used in the literature to denote that two elements a and b of a set are equivalent with respect to an equivalence relation R; the most common are "a ∼ b" and "a ≡ b", which are used when R is implicit, and variations of "a ∼_R b", "a ≡_R b", or "a R b" to specify R explicitly. Non-equivalence may be written "a ≁ b" or "a ≢ b". A binary relation ∼ on a set X is said to be an equivalence relation if and only if it is reflexive, symmetric and transitive. That is, for all a, b, and c in X: a ∼ a (reflexivity); if a ∼ b, then b ∼ a (symmetry); and if a ∼ b and b ∼ c, then a ∼ c (transitivity). X together with the relation ∼ is called a setoid. The equivalence class of a under ∼, denoted [a], is defined as [a] = {x ∈ X : x ∼ a}. In relational algebra, if R ⊆ X × Y and S ⊆ Y × Z are relations, then the composite relation SR ⊆ X × Z is defined so that x SR z if and only if there is a y ∈ Y such that x R y and y S z. This definition is a generalisation of the definition of functional composition. The defining properties of an equivalence relation R on a set X can then be reformulated as follows: On the set X = {a, b, c}, the relation R = {(a, a), (b, b), (c, c), (b, c), (c, b)} is an equivalence relation. The following sets are equivalence classes of this relation: {a} and {b, c}. The set of all equivalence classes for R is {{a}, {b, c}}. This set is a partition of the set X with respect to R. The following relations are all equivalence relations: If ∼ is an equivalence relation on X, and P(x) is a property of elements of X such that whenever x ∼ y, P(x) is true if P(y) is true, then the property P is said to be well-defined or a class invariant under the relation ∼.
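The worked example above can be checked mechanically. Below is a minimal Python sketch, with the relation represented as a set of ordered pairs; the helper names is_equivalence and equivalence_classes are illustrative, not from any library:

```python
def is_equivalence(X, R):
    """Check that R ⊆ X × X is reflexive, symmetric and transitive."""
    reflexive = all((x, x) in R for x in X)
    symmetric = all((b, a) in R for (a, b) in R)
    transitive = all((a, c) in R
                     for (a, b) in R for (b2, c) in R if b == b2)
    return reflexive and symmetric and transitive

def equivalence_classes(X, R):
    """Return the set of equivalence classes [x] = {y in X : (x, y) in R}."""
    return {frozenset(y for y in X if (x, y) in R) for x in X}

X = {"a", "b", "c"}
R = {("a", "a"), ("b", "b"), ("c", "c"), ("b", "c"), ("c", "b")}
assert is_equivalence(X, R)
print(equivalence_classes(X, R))  # the partition {{a}, {b, c}}
```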
A frequent particular case occurs when f is a function from X to another set Y; if x₁ ∼ x₂ implies f(x₁) = f(x₂), then f is said to be a morphism for ∼, a class invariant under ∼, or simply invariant under ∼. This occurs, e.g., in the character theory of finite groups. The latter case with the function f can be expressed by a commutative triangle. See also invariant. Some authors use "compatible with ∼" or just "respects ∼" instead of "invariant under ∼". More generally, a function may map equivalent arguments (under an equivalence relation ∼_A) to equivalent values (under an equivalence relation ∼_B). Such a function is known as a morphism from ∼_A to ∼_B. Let a, b ∈ X, and let ∼ be an equivalence relation. Some key definitions and terminology follow: A subset Y of X such that a ∼ b holds for all a and b in Y, and never for a in Y and b outside Y, is called an equivalence class of X by ∼. Let [a] := {x ∈ X : a ∼ x} denote the equivalence class to which a belongs. All elements of X equivalent to each other are also elements of the same equivalence class. The set of all equivalence classes of X by ∼, denoted X/∼ := {[x] : x ∈ X}, is the quotient set of X by ∼. If X is a topological space, there is a natural way of transforming X/∼ into a topological space; see quotient space for the details. The projection of ∼ is the function π : X → X/∼ defined by π(x) = [x], which maps elements of X into their respective equivalence classes by ∼. The equivalence kernel of a function f is the equivalence relation ∼ defined by x ∼ y if and only if f(x) = f(y). The equivalence kernel of an injection is the identity relation. A partition of X is a set P of nonempty subsets of X, such that every element of X is an element of a single element of P. Each element of P is a cell of the partition. Moreover, the elements of P are pairwise disjoint and their union is X. Let X be a finite set with n elements. Since every equivalence relation over X corresponds to a partition of X, and vice versa, the number of equivalence relations on X equals the number of distinct partitions of X, which is the nth Bell number B_n: A key result links equivalence relations and partitions: In both cases, the cells of the partition of X are the equivalence classes of X by ∼. Since each element of X belongs to a unique cell of any partition of X, and since each cell of the partition is identical to an equivalence class of X by ∼, each element of X belongs to a unique equivalence class of X by ∼. Thus there is a natural bijection between the set of all equivalence relations on X and the set of all partitions of X.
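Two of the notions above lend themselves to short executable illustrations: the equivalence kernel of a function, and the count of equivalence relations on an n-element set via Bell numbers. A Python sketch under those definitions (kernel_classes and bell are ad hoc names):

```python
from collections import defaultdict

def kernel_classes(X, f):
    """Partition X by the equivalence kernel of f: x ~ y iff f(x) == f(y)."""
    cells = defaultdict(set)
    for x in X:
        cells[f(x)].add(x)
    return list(cells.values())

# x ~ y iff x and y leave the same remainder mod 3: three cells.
print(kernel_classes(range(9), lambda x: x % 3))
# [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]

def bell(n):
    """nth Bell number via the Bell triangle: the number of partitions
    of an n-element set, hence of equivalence relations on it."""
    row = [1]
    for _ in range(n):
        new_row = [row[-1]]
        for v in row:
            new_row.append(new_row[-1] + v)
        row = new_row
    return row[0]

print([bell(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
```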
If ∼ and ≈ are two equivalence relations on the same set S, and a ∼ b implies a ≈ b for all a, b ∈ S, then ≈ is said to be a coarser relation than ∼, and ∼ is a finer relation than ≈. Equivalently, ∼ is finer than ≈ if every equivalence class of ∼ is a subset of an equivalence class of ≈, so that every equivalence class of ≈ is a union of equivalence classes of ∼.

The equality equivalence relation is the finest equivalence relation on any set, while the universal relation, which relates all pairs of elements, is the coarsest.

The relation "∼ is finer than ≈" on the collection of all equivalence relations on a fixed set is itself a partial order relation, which makes the collection a geometric lattice.

Much of mathematics is grounded in the study of equivalences and order relations. Lattice theory captures the mathematical structure of order relations. Even though equivalence relations are as ubiquitous in mathematics as order relations, the algebraic structure of equivalences is not as well known as that of orders. The former structure draws primarily on group theory and, to a lesser extent, on the theory of lattices, categories, and groupoids.

Just as order relations are grounded in ordered sets, sets closed under pairwise supremum and infimum, equivalence relations are grounded in partitioned sets, which are sets closed under bijections that preserve partition structure. Since all such bijections map an equivalence class onto itself, such bijections are also known as permutations. Hence permutation groups (also known as transformation groups) and the related notion of orbit shed light on the mathematical structure of equivalence relations.

Let "∼" denote an equivalence relation over some nonempty set A, called the universe or underlying set. Let G denote the set of bijective functions over A that preserve the partition structure of A, meaning that for all x ∈ A and g ∈ G, g(x) ∈ [x]. Three connected theorems then hold: ∼ partitions A into equivalence classes; given such a partition, G is a transformation group under composition whose orbits are the cells of the partition; and, given a transformation group over A, its orbits induce an equivalence relation on A. In sum, given an equivalence relation ∼ over A, there exists a transformation group G over A whose orbits are the equivalence classes of A under ∼.

This transformation group characterisation of equivalence relations differs fundamentally from the way lattices characterize order relations. The arguments of the lattice theory operations meet and join are elements of some universe A. Meanwhile, the arguments of the transformation group operations composition and inverse are elements of a set of bijections, A → A.

Moving to groups in general, let H be a subgroup of some group G. Let ∼ be an equivalence relation on G such that a ∼ b if and only if ab⁻¹ ∈ H. The equivalence classes of ∼, also called the orbits of the action of H on G, are the right cosets of H in G. Interchanging a and b yields the left cosets. Related thinking can be found in Rosen (2008: chpt. 10).

Let G be a set and let "∼" denote an equivalence relation over G. Then we can form a groupoid representing this equivalence relation as follows. The objects are the elements of G, and for any two elements x and y of G, there exists a unique morphism from x to y if and only if x ∼ y.
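A small sketch of two points from this passage, in plain Python under assumed illustrative data: the coset construction for the additive group Z₁₂ with subgroup H = {0, 4, 8} (in an abelian group the left and right cosets coincide), and the "finer than" comparison, which for relations viewed as sets of pairs is just set inclusion.

```python
n, H = 12, {0, 4, 8}
G = range(n)

def related(a, b):
    # a ~ b iff a - b (mod 12) lies in the subgroup H
    return (a - b) % n in H

cosets = {frozenset(b for b in G if related(a, b)) for a in G}
print(sorted(sorted(c) for c in cosets))
# [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]] -- the cosets of H in Z_12

# "Finer than" as set inclusion on pairs: congruence mod 4 is finer than mod 2.
R4 = {(a, b) for a in G for b in G if (a - b) % 4 == 0}
R2 = {(a, b) for a in G for b in G if (a - b) % 2 == 0}
print(R4 <= R2)  # True: every pair related mod 4 is also related mod 2
```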
There are advantages to regarding an equivalence relation as a special case of a groupoid.

The equivalence relations on any set X, when ordered by set inclusion, form a complete lattice, called Con X by convention. The canonical map ker : X^X → Con X relates the monoid X^X of all functions on X and Con X. ker is surjective but not injective. Less formally, ker takes each function f : X → X to its kernel ker f, the equivalence relation it induces on X. Likewise, ker(ker) is an equivalence relation on X^X.

Equivalence relations are a ready source of examples or counterexamples. For example, an equivalence relation with exactly two infinite equivalence classes is an easy example of a theory which is ω-categorical, but not categorical for any larger cardinal number.

An implication of model theory is that the properties defining a relation can be proved independent of each other (and hence necessary parts of the definition) if and only if, for each property, examples can be found of relations not satisfying the given property while satisfying all the other properties. Hence the three defining properties of equivalence relations can be proved mutually independent by three examples such as the following: the relation ≤ on the integers is reflexive and transitive, but not symmetric; the relation "|x − y| ≤ 1" on the real numbers is reflexive and symmetric, but not transitive; and the empty relation on a nonempty set is symmetric and transitive, but not reflexive.

Properties definable in first-order logic that an equivalence relation may or may not possess include, for example, the number of equivalence classes being finite or being equal to a given natural number n, and every equivalence class being infinite.
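The independence of the three axioms is easy to demonstrate mechanically on a finite set. Below is a minimal Python sketch (the witness relations are standard textbook choices restricted to {0, 1, 2, 3}, not taken from any particular source) in which each example relation satisfies exactly two of the three properties.

```python
X = set(range(4))

def reflexive(R):
    return all((x, x) in R for x in X)

def symmetric(R):
    return all((y, x) in R for (x, y) in R)

def transitive(R):
    return all((x, w) in R for (x, y) in R for (z, w) in R if y == z)

examples = {
    "x <= y": {(x, y) for x in X for y in X if x <= y},                 # fails symmetry
    "|x - y| <= 1": {(x, y) for x in X for y in X if abs(x - y) <= 1},  # fails transitivity
    "empty relation": set(),                                            # fails reflexivity
}

for name, R in examples.items():
    print(f"{name}: reflexive={reflexive(R)} symmetric={symmetric(R)} transitive={transitive(R)}")
# x <= y: reflexive=True symmetric=False transitive=True
# |x - y| <= 1: reflexive=True symmetric=True transitive=False
# empty relation: reflexive=False symmetric=True transitive=True
```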
[ { "paragraph_id": 0, "text": "In mathematics, an equivalence relation is a binary relation that is reflexive, symmetric and transitive. The equipollence relation between line segments in geometry is a common example of an equivalence relation. A simpler example is equality. Any number a is equal to itself (reflexive). If a = b, then b = a (symmetric). If a = b and b = c, then a = c (transitive).", "title": "" }, { "paragraph_id": 1, "text": "Each equivalence relation provides a partition of the underlying set into disjoint equivalence classes. Two elements of the given set are equivalent to each other if and only if they belong to the same equivalence class.", "title": "" }, { "paragraph_id": 2, "text": "Various notations are used in the literature to denote that two elements a {\\displaystyle a} and b {\\displaystyle b} of a set are equivalent with respect to an equivalence relation R ; {\\displaystyle R;} the most common are \" a ∼ b {\\displaystyle a\\sim b} \" and \"a ≡ b\", which are used when R {\\displaystyle R} is implicit, and variations of \" a ∼ R b {\\displaystyle a\\sim _{R}b} \", \"a ≡R b\", or \" a R b {\\displaystyle {a\\mathop {R} b}} \" to specify R {\\displaystyle R} explicitly. Non-equivalence may be written \"a ≁ b\" or \" a ≢ b {\\displaystyle a\\not \\equiv b} \".", "title": "Notation" }, { "paragraph_id": 3, "text": "A binary relation ∼ {\\displaystyle \\,\\sim \\,} on a set X {\\displaystyle X} is said to be an equivalence relation, if and only if it is reflexive, symmetric and transitive. That is, for all a , b , {\\displaystyle a,b,} and c {\\displaystyle c} in X : {\\displaystyle X:}", "title": "Definition" }, { "paragraph_id": 4, "text": "X {\\displaystyle X} together with the relation ∼ {\\displaystyle \\,\\sim \\,} is called a setoid. The equivalence class of a {\\displaystyle a} under ∼ , {\\displaystyle \\,\\sim ,} denoted [ a ] , {\\displaystyle [a],} is defined as [ a ] = { x ∈ X : x ∼ a } . {\\displaystyle [a]=\\{x\\in X:x\\sim a\\}.}", "title": "Definition" }, { "paragraph_id": 5, "text": "In relational algebra, if R ⊆ X × Y {\\displaystyle R\\subseteq X\\times Y} and S ⊆ Y × Z {\\displaystyle S\\subseteq Y\\times Z} are relations, then the composite relation S R ⊆ X × Z {\\displaystyle SR\\subseteq X\\times Z} is defined so that x S R z {\\displaystyle x\\,SR\\,z} if and only if there is a y ∈ Y {\\displaystyle y\\in Y} such that x R y {\\displaystyle x\\,R\\,y} and y S z {\\displaystyle y\\,S\\,z} . This definition is a generalisation of the definition of functional composition. The defining properties of an equivalence relation R {\\displaystyle R} on a set X {\\displaystyle X} can then be reformulated as follows:", "title": "Definition" }, { "paragraph_id": 6, "text": "On the set X = { a , b , c } {\\displaystyle X=\\{a,b,c\\}} , the relation R = { ( a , a ) , ( b , b ) , ( c , c ) , ( b , c ) , ( c , b ) } {\\displaystyle R=\\{(a,a),(b,b),(c,c),(b,c),(c,b)\\}} is an equivalence relation. The following sets are equivalence classes of this relation:", "title": "Examples" }, { "paragraph_id": 7, "text": "The set of all equivalence classes for R {\\displaystyle R} is { { a } , { b , c } } . 
{\\displaystyle \\{\\{a\\},\\{b,c\\}\\}.} This set is a partition of the set X {\\displaystyle X} with respect to R {\\displaystyle R} .", "title": "Examples" }, { "paragraph_id": 8, "text": "The following relations are all equivalence relations:", "title": "Examples" }, { "paragraph_id": 9, "text": "If ∼ {\\displaystyle \\,\\sim \\,} is an equivalence relation on X , {\\displaystyle X,} and P ( x ) {\\displaystyle P(x)} is a property of elements of X , {\\displaystyle X,} such that whenever x ∼ y , {\\displaystyle x\\sim y,} P ( x ) {\\displaystyle P(x)} is true if P ( y ) {\\displaystyle P(y)} is true, then the property P {\\displaystyle P} is said to be well-defined or a class invariant under the relation ∼ . {\\displaystyle \\,\\sim .}", "title": "Well-definedness under an equivalence relation" }, { "paragraph_id": 10, "text": "A frequent particular case occurs when f {\\displaystyle f} is a function from X {\\displaystyle X} to another set Y ; {\\displaystyle Y;} if x 1 ∼ x 2 {\\displaystyle x_{1}\\sim x_{2}} implies f ( x 1 ) = f ( x 2 ) {\\displaystyle f\\left(x_{1}\\right)=f\\left(x_{2}\\right)} then f {\\displaystyle f} is said to be a morphism for ∼ , {\\displaystyle \\,\\sim ,} a class invariant under ∼ , {\\displaystyle \\,\\sim ,} or simply invariant under ∼ . {\\displaystyle \\,\\sim .} This occurs, e.g. in the character theory of finite groups. The latter case with the function f {\\displaystyle f} can be expressed by a commutative triangle. See also invariant. Some authors use \"compatible with ∼ {\\displaystyle \\,\\sim } \" or just \"respects ∼ {\\displaystyle \\,\\sim } \" instead of \"invariant under ∼ {\\displaystyle \\,\\sim } \".", "title": "Well-definedness under an equivalence relation" }, { "paragraph_id": 11, "text": "More generally, a function may map equivalent arguments (under an equivalence relation ∼ A {\\displaystyle \\,\\sim _{A}} ) to equivalent values (under an equivalence relation ∼ B {\\displaystyle \\,\\sim _{B}} ). Such a function is known as a morphism from ∼ A {\\displaystyle \\,\\sim _{A}} to ∼ B . {\\displaystyle \\,\\sim _{B}.}", "title": "Well-definedness under an equivalence relation" }, { "paragraph_id": 12, "text": "Let a , b ∈ X {\\displaystyle a,b\\in X} , and ∼ {\\displaystyle \\sim } be an equivalence relation. Some key definitions and terminology follow:", "title": "Related important definitions" }, { "paragraph_id": 13, "text": "A subset Y of X such that a ∼ b {\\displaystyle a\\sim b} holds for all a and b in Y, and never for a in Y and b outside Y, is called an equivalence class of X by ~. Let [ a ] := { x ∈ X : a ∼ x } {\\displaystyle [a]:=\\{x\\in X:a\\sim x\\}} denote the equivalence class to which a belongs. All elements of X equivalent to each other are also elements of the same equivalence class.", "title": "Related important definitions" }, { "paragraph_id": 14, "text": "The set of all equivalence classes of X by ~, denoted X / ∼ := { [ x ] : x ∈ X } , {\\displaystyle X/{\\mathord {\\sim }}:=\\{[x]:x\\in X\\},} is the quotient set of X by ~. 
If X is a topological space, there is a natural way of transforming X / ∼ {\\displaystyle X/\\sim } into a topological space; see quotient space for the details.", "title": "Related important definitions" }, { "paragraph_id": 15, "text": "The projection of ∼ {\\displaystyle \\,\\sim \\,} is the function π : X → X / ∼ {\\displaystyle \\pi :X\\to X/{\\mathord {\\sim }}} defined by π ( x ) = [ x ] {\\displaystyle \\pi (x)=[x]} which maps elements of X {\\displaystyle X} into their respective equivalence classes by ∼ . {\\displaystyle \\,\\sim .}", "title": "Related important definitions" }, { "paragraph_id": 16, "text": "The equivalence kernel of a function f {\\displaystyle f} is the equivalence relation ~ defined by x ∼ y if and only if f ( x ) = f ( y ) . {\\displaystyle x\\sim y{\\text{ if and only if }}f(x)=f(y).} The equivalence kernel of an injection is the identity relation.", "title": "Related important definitions" }, { "paragraph_id": 17, "text": "A partition of X is a set P of nonempty subsets of X, such that every element of X is an element of a single element of P. Each element of P is a cell of the partition. Moreover, the elements of P are pairwise disjoint and their union is X.", "title": "Related important definitions" }, { "paragraph_id": 18, "text": "Let X be a finite set with n elements. Since every equivalence relation over X corresponds to a partition of X, and vice versa, the number of equivalence relations on X equals the number of distinct partitions of X, which is the nth Bell number Bn:", "title": "Related important definitions" }, { "paragraph_id": 19, "text": "A key result links equivalence relations and partitions:", "title": "Fundamental theorem of equivalence relations" }, { "paragraph_id": 20, "text": "In both cases, the cells of the partition of X are the equivalence classes of X by ~. Since each element of X belongs to a unique cell of any partition of X, and since each cell of the partition is identical to an equivalence class of X by ~, each element of X belongs to a unique equivalence class of X by ~. Thus there is a natural bijection between the set of all equivalence relations on X and the set of all partitions of X.", "title": "Fundamental theorem of equivalence relations" }, { "paragraph_id": 21, "text": "If ∼ {\\displaystyle \\sim } and ≈ {\\displaystyle \\approx } are two equivalence relations on the same set S {\\displaystyle S} , and a ∼ b {\\displaystyle a\\sim b} implies a ≈ b {\\displaystyle a\\approx b} for all a , b ∈ S , {\\displaystyle a,b\\in S,} then ≈ {\\displaystyle \\approx } is said to be a coarser relation than ∼ {\\displaystyle \\sim } , and ∼ {\\displaystyle \\sim } is a finer relation than ≈ {\\displaystyle \\approx } . Equivalently,", "title": "Comparing equivalence relations" }, { "paragraph_id": 22, "text": "The equality equivalence relation is the finest equivalence relation on any set, while the universal relation, which relates all pairs of elements, is the coarsest.", "title": "Comparing equivalence relations" }, { "paragraph_id": 23, "text": "The relation \" ∼ {\\displaystyle \\sim } is finer than ≈ {\\displaystyle \\approx } \" on the collection of all equivalence relations on a fixed set is itself a partial order relation, which makes the collection a geometric lattice.", "title": "Comparing equivalence relations" }, { "paragraph_id": 24, "text": "Much of mathematics is grounded in the study of equivalences, and order relations. Lattice theory captures the mathematical structure of order relations. 
Even though equivalence relations are as ubiquitous in mathematics as order relations, the algebraic structure of equivalences is not as well known as that of orders. The former structure draws primarily on group theory and, to a lesser extent, on the theory of lattices, categories, and groupoids.", "title": "Algebraic structure" }, { "paragraph_id": 25, "text": "Just as order relations are grounded in ordered sets, sets closed under pairwise supremum and infimum, equivalence relations are grounded in partitioned sets, which are sets closed under bijections that preserve partition structure. Since all such bijections map an equivalence class onto itself, such bijections are also known as permutations. Hence permutation groups (also known as transformation groups) and the related notion of orbit shed light on the mathematical structure of equivalence relations.", "title": "Algebraic structure" }, { "paragraph_id": 26, "text": "Let '~' denote an equivalence relation over some nonempty set A, called the universe or underlying set. Let G denote the set of bijective functions over A that preserve the partition structure of A, meaning that for all x ∈ A {\\displaystyle x\\in A} and g ∈ G , g ( x ) ∈ [ x ] . {\\displaystyle g\\in G,g(x)\\in [x].} Then the following three connected theorems hold:", "title": "Algebraic structure" }, { "paragraph_id": 27, "text": "In sum, given an equivalence relation ~ over A, there exists a transformation group G over A whose orbits are the equivalence classes of A under ~.", "title": "Algebraic structure" }, { "paragraph_id": 28, "text": "This transformation group characterisation of equivalence relations differs fundamentally from the way lattices characterize order relations. The arguments of the lattice theory operations meet and join are elements of some universe A. Meanwhile, the arguments of the transformation group operations composition and inverse are elements of a set of bijections, A → A.", "title": "Algebraic structure" }, { "paragraph_id": 29, "text": "Moving to groups in general, let H be a subgroup of some group G. Let ~ be an equivalence relation on G, such that a ∼ b if and only if a b − 1 ∈ H . {\\displaystyle a\\sim b{\\text{ if and only if }}ab^{-1}\\in H.} The equivalence classes of ~—also called the orbits of the action of H on G—are the right cosets of H in G. Interchanging a and b yields the left cosets.", "title": "Algebraic structure" }, { "paragraph_id": 30, "text": "Related thinking can be found in Rosen (2008: chpt. 10).", "title": "Algebraic structure" }, { "paragraph_id": 31, "text": "Let G be a set and let \"~\" denote an equivalence relation over G. Then we can form a groupoid representing this equivalence relation as follows. The objects are the elements of G, and for any two elements x and y of G, there exists a unique morphism from x to y if and only if x ∼ y . {\\displaystyle x\\sim y.}", "title": "Algebraic structure" }, { "paragraph_id": 32, "text": "The advantages of regarding an equivalence relation as a special case of a groupoid include:", "title": "Algebraic structure" }, { "paragraph_id": 33, "text": "The equivalence relations on any set X, when ordered by set inclusion, form a complete lattice, called Con X by convention. The canonical map ker: X^X → Con X, relates the monoid X^X of all functions on X and Con X. ker is surjective but not injective. Less formally, the equivalence relation ker on X, takes each function f: X→X to its kernel ker f. 
Likewise, ker(ker) is an equivalence relation on X^X.", "title": "Algebraic structure" }, { "paragraph_id": 34, "text": "Equivalence relations are a ready source of examples or counterexamples. For example, an equivalence relation with exactly two infinite equivalence classes is an easy example of a theory which is ω-categorical, but not categorical for any larger cardinal number.", "title": "Equivalence relations and mathematical logic" }, { "paragraph_id": 35, "text": "An implication of model theory is that the properties defining a relation can be proved independent of each other (and hence necessary parts of the definition) if and only if, for each property, examples can be found of relations not satisfying the given property while satisfying all the other properties. Hence the three defining properties of equivalence relations can be proved mutually independent by the following three examples:", "title": "Equivalence relations and mathematical logic" }, { "paragraph_id": 36, "text": "Properties definable in first-order logic that an equivalence relation may or may not possess include:", "title": "Equivalence relations and mathematical logic" } ]
2001-10-09T21:43:21Z
2023-11-28T15:15:49Z
[ "Template:Springer", "Template:Set theory", "Template:Redirect", "Template:Stack", "Template:Em", "Template:Main", "Template:See also", "Template:ISBN", "Template:Short description", "Template:Citation", "Template:Math", "Template:Reflist", "Template:Cite book", "Template:Mathematical logic", "Template:Authority control", "Template:About", "Template:Annotated link", "Template:Cite web", "Template:OEIS el" ]
https://en.wikipedia.org/wiki/Equivalence_relation
9,260
Equivalence class
In mathematics, when the elements of some set S have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set S into equivalence classes. These equivalence classes are constructed so that elements a and b belong to the same equivalence class if, and only if, they are equivalent.

Formally, given a set S and an equivalence relation ∼ on S, the equivalence class of an element a in S, often denoted by [a], is the set of elements of S that are equivalent to a. The definition of equivalence relations implies that the equivalence classes form a partition of S, meaning that every element of the set belongs to exactly one equivalence class. The set of the equivalence classes is sometimes called the quotient set or the quotient space of S by ∼, and is denoted by S/∼.

When the set S has some structure (such as a group operation or a topology) and the equivalence relation ∼ is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories.

An equivalence relation on a set X is a binary relation ∼ on X satisfying the three properties of reflexivity (a ∼ a for all a ∈ X), symmetry (a ∼ b implies b ∼ a), and transitivity (a ∼ b and b ∼ c together imply a ∼ c).

The equivalence class of an element a is often denoted [a] or [a]∼, and is defined as the set {x ∈ X : a ∼ x} of elements that are related to a by ∼. The word "class" in the term "equivalence class" may generally be considered as a synonym of "set", although some equivalence classes are not sets but proper classes. For example, "being isomorphic" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets.

The set of all equivalence classes in X with respect to an equivalence relation R is denoted as X/R, and is called X modulo R (or the quotient set of X by R). The surjective map x ↦ [x] from X onto X/R, which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection.

Every element of an equivalence class characterizes the class, and may be used to represent it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from X/R to X. Since its composition with the canonical surjection is the identity of X/R, such an injection is called a section, when using the terminology of category theory.

Sometimes, there is a section that is more "natural" than the other ones. In this case, the representatives are called canonical representatives.
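As a small illustration of representatives and sections, the following plain-Python sketch (the names pi and s are illustrative) takes the integers 0 to 9 under "has the same parity as", chooses the minimum of each class as its representative, and checks that composing the section with the canonical surjection gives the identity on the quotient.

```python
S = range(10)

def pi(x):
    # canonical surjection: x -> [x], with each class encoded as a frozenset
    return frozenset(y for y in S if y % 2 == x % 2)

quotient = {pi(x) for x in S}        # S/~ : the set of equivalence classes
s = {c: min(c) for c in quotient}    # a section: one chosen representative per class

assert all(pi(s[c]) == c for c in quotient)  # pi composed with s is the identity on S/~
print({tuple(sorted(c)): s[c] for c in quotient})
# {(0, 2, 4, 6, 8): 0, (1, 3, 5, 7, 9): 1}
```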
For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, under which two integers a and b are equivalent (in this case, one says congruent) if m divides a − b; this is denoted a ≡ b (mod m). Each class contains a unique non-negative integer smaller than m, and these integers are the canonical representatives.

The use of representatives for representing classes makes it possible to avoid considering the classes explicitly as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted a mod m, and produces the remainder of the Euclidean division of a by m.

Every element x of X is a member of the equivalence class [x]. Every two equivalence classes [x] and [y] are either equal or disjoint. Therefore, the set of all equivalence classes of X forms a partition of X: every element of X belongs to one and only one equivalence class. Conversely, every partition of X comes from an equivalence relation in this way, according to which x ∼ y if and only if x and y belong to the same set of the partition.

It follows from the properties of an equivalence relation that x ∼ y if and only if [x] = [y]. In other words, if ∼ is an equivalence relation on a set X, and x and y are two elements of X, then these statements are equivalent: x ∼ y; [x] = [y]; and [x] ∩ [y] ≠ ∅.

An undirected graph may be associated to any symmetric relation on a set X, where the vertices are the elements of X, and two vertices s and t are joined if and only if s ∼ t. Among these graphs are the graphs of equivalence relations. These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques.

If ∼ is an equivalence relation on X, and P(x) is a property of elements of X such that whenever x ∼ y, P(x) is true if P(y) is true, then the property P is said to be an invariant of ∼, or well-defined under the relation ∼.

A frequent particular case occurs when f is a function from X to another set Y; if f(x₁) = f(x₂) whenever x₁ ∼ x₂, then f is said to be class invariant under ∼, or simply invariant under ∼. This occurs, for example, in the character theory of finite groups. Some authors use "compatible with ∼" or just "respects ∼" instead of "invariant under ∼".
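A brief sketch of this example for m = 7, using Python's % operator, which returns the remainder of Euclidean division and hence the canonical representative:

```python
m = 7
a, b = 23, 65

print((a - b) % m == 0)  # True: 7 divides 23 - 65 = -42, so 23 ≡ 65 (mod 7)
print(a % m, b % m)      # 2 2: both map to the same canonical representative
```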
Any function f : X → Y is class invariant under ∼, according to which x₁ ∼ x₂ if and only if f(x₁) = f(x₂). The equivalence class of x is the set of all elements in X which get mapped to f(x); that is, the class [x] is the inverse image of f(x). This equivalence relation is known as the kernel of f.

More generally, a function may map equivalent arguments (under an equivalence relation ∼X on X) to equivalent values (under an equivalence relation ∼Y on Y). Such a function is a morphism of sets equipped with an equivalence relation.

In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes.

In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra. In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action.

The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation.

A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously.

Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set X, either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on X, or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above.
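As a sketch of the kernel construction, the snippet below (an arbitrary illustrative choice of f, here squaring on a small range of integers) recovers each class [x] as the inverse image of f(x):

```python
X = range(-4, 5)

def f(x):
    return x * x

def fiber(x):
    # [x] = f^{-1}(f(x)), the inverse image of f(x)
    return frozenset(y for y in X if f(y) == f(x))

classes = {fiber(x) for x in X}
print(sorted(sorted(c) for c in classes))
# [[-4, 4], [-3, 3], [-2, 2], [-1, 1], [0]] -- the classes of the kernel of f(x) = x**2
```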
[ { "paragraph_id": 0, "text": "In mathematics, when the elements of some set S {\\displaystyle S} have a notion of equivalence (formalized as an equivalence relation), then one may naturally split the set S {\\displaystyle S} into equivalence classes. These equivalence classes are constructed so that elements a {\\displaystyle a} and b {\\displaystyle b} belong to the same equivalence class if, and only if, they are equivalent.", "title": "" }, { "paragraph_id": 1, "text": "Formally, given a set S {\\displaystyle S} and an equivalence relation ∼ {\\displaystyle \\,\\sim \\,} on S , {\\displaystyle S,} the equivalence class of an element a {\\displaystyle a} in S , {\\displaystyle S,} often denoted by [ a ] . {\\displaystyle [a].} The definition of equivalence relations implies that the equivalence classes form a partition of S , {\\displaystyle S,} meaning, that every element of the set belongs to exactly one equivalence class. The set of the equivalence classes is sometimes called the quotient set or the quotient space of S {\\displaystyle S} by ∼ , {\\displaystyle \\,\\sim \\,,} and is denoted by S / ∼ ′ {\\displaystyle S/{\\sim }'}", "title": "" }, { "paragraph_id": 2, "text": "When the set S {\\displaystyle S} has some structure (such as a group operation or a topology) and the equivalence relation ∼ {\\displaystyle \\,\\sim \\,} is compatible with this structure, the quotient set often inherits a similar structure from its parent set. Examples include quotient spaces in linear algebra, quotient spaces in topology, quotient groups, homogeneous spaces, quotient rings, quotient monoids, and quotient categories.", "title": "" }, { "paragraph_id": 3, "text": "An equivalence relation on a set X {\\displaystyle X} is a binary relation ∼ {\\displaystyle \\,\\sim \\,} on X {\\displaystyle X} satisfying the three properties:", "title": "Definition and notation" }, { "paragraph_id": 4, "text": "The equivalence class of an element a {\\displaystyle a} is often denoted [ a ] {\\displaystyle [a]} or [ a ] ∼ , {\\displaystyle [a]_{\\sim },} and is defined as the set { x ∈ X : a ∼ x } {\\displaystyle \\{x\\in X:a\\sim x\\}} of elements that are related to a {\\displaystyle a} by ∼ . {\\displaystyle \\,\\sim .} The word \"class\" in the term \"equivalence class\" may generally be considered as a synonym of \"set\", although some equivalence classes are not sets but proper classes. For example, \"being isomorphic\" is an equivalence relation on groups, and the equivalence classes, called isomorphism classes, are not sets.", "title": "Definition and notation" }, { "paragraph_id": 5, "text": "The set of all equivalence classes in X {\\displaystyle X} with respect to an equivalence relation R {\\displaystyle R} is denoted as X / R , {\\displaystyle X/R,} and is called X {\\displaystyle X} modulo R {\\displaystyle R} (or the quotient set of X {\\displaystyle X} by R {\\displaystyle R} ). The surjective map x ↦ [ x ] {\\displaystyle x\\mapsto [x]} from X {\\displaystyle X} onto X / R , {\\displaystyle X/R,} which maps each element to its equivalence class, is called the canonical surjection, or the canonical projection.", "title": "Definition and notation" }, { "paragraph_id": 6, "text": "Every element of an equivalence class characterizes the class, and may be used to represent it. When such an element is chosen, it is called a representative of the class. The choice of a representative in each class defines an injection from X / R {\\displaystyle X/R} to X. 
Since its composition with the canonical surjection is the identity of X / R , {\\displaystyle X/R,} such an injection is called a section, when using the terminology of category theory.", "title": "Definition and notation" }, { "paragraph_id": 7, "text": "Sometimes, there is a section that is more \"natural\" than the other ones. In this case, the representatives are called canonical representatives. For example, in modular arithmetic, for every integer m greater than 1, the congruence modulo m is an equivalence relation on the integers, for which two integers a and b are equivalent—in this case, one says congruent —if m divides a − b ; {\\displaystyle a-b;} this is denoted a ≡ b ( mod m ) . {\\textstyle a\\equiv b{\\pmod {m}}.} Each class contains a unique non-negative integer smaller than m , {\\displaystyle m,} and these integers are the canonical representatives.", "title": "Definition and notation" }, { "paragraph_id": 8, "text": "The use of representatives for representing classes allows avoiding to consider explicitly classes as sets. In this case, the canonical surjection that maps an element to its class is replaced by the function that maps an element to the representative of its class. In the preceding example, this function is denoted a mod m , {\\displaystyle a{\\bmod {m}},} and produces the remainder of the Euclidean division of a by m.", "title": "Definition and notation" }, { "paragraph_id": 9, "text": "Every element x {\\displaystyle x} of X {\\displaystyle X} is a member of the equivalence class [ x ] . {\\displaystyle [x].} Every two equivalence classes [ x ] {\\displaystyle [x]} and [ y ] {\\displaystyle [y]} are either equal or disjoint. Therefore, the set of all equivalence classes of X {\\displaystyle X} forms a partition of X {\\displaystyle X} : every element of X {\\displaystyle X} belongs to one and only one equivalence class. Conversely, every partition of X {\\displaystyle X} comes from an equivalence relation in this way, according to which x ∼ y {\\displaystyle x\\sim y} if and only if x {\\displaystyle x} and y {\\displaystyle y} belong to the same set of the partition.", "title": "Properties" }, { "paragraph_id": 10, "text": "It follows from the properties of an equivalence relation that", "title": "Properties" }, { "paragraph_id": 11, "text": "if and only if [ x ] = [ y ] . {\\displaystyle [x]=[y].}", "title": "Properties" }, { "paragraph_id": 12, "text": "In other words, if ∼ {\\displaystyle \\,\\sim \\,} is an equivalence relation on a set X , {\\displaystyle X,} and x {\\displaystyle x} and y {\\displaystyle y} are two elements of X , {\\displaystyle X,} then these statements are equivalent:", "title": "Properties" }, { "paragraph_id": 13, "text": "An undirected graph may be associated to any symmetric relation on a set X , {\\displaystyle X,} where the vertices are the elements of X , {\\displaystyle X,} and two vertices s {\\displaystyle s} and t {\\displaystyle t} are joined if and only if s ∼ t . {\\displaystyle s\\sim t.} Among these graphs are the graphs of equivalence relations. 
These graphs, called cluster graphs, are characterized as the graphs such that the connected components are cliques.", "title": "Graphical representation" }, { "paragraph_id": 14, "text": "If ∼ {\\displaystyle \\,\\sim \\,} is an equivalence relation on X , {\\displaystyle X,} and P ( x ) {\\displaystyle P(x)} is a property of elements of X {\\displaystyle X} such that whenever x ∼ y , {\\displaystyle x\\sim y,} P ( x ) {\\displaystyle P(x)} is true if P ( y ) {\\displaystyle P(y)} is true, then the property P {\\displaystyle P} is said to be an invariant of ∼ , {\\displaystyle \\,\\sim \\,,} or well-defined under the relation ∼ . {\\displaystyle \\,\\sim .}", "title": "Invariants" }, { "paragraph_id": 15, "text": "A frequent particular case occurs when f {\\displaystyle f} is a function from X {\\displaystyle X} to another set Y {\\displaystyle Y} ; if f ( x 1 ) = f ( x 2 ) {\\displaystyle f\\left(x_{1}\\right)=f\\left(x_{2}\\right)} whenever x 1 ∼ x 2 , {\\displaystyle x_{1}\\sim x_{2},} then f {\\displaystyle f} is said to be class invariant under ∼ , {\\displaystyle \\,\\sim \\,,} or simply invariant under ∼ . {\\displaystyle \\,\\sim .} This occurs, for example, in the character theory of finite groups. Some authors use \"compatible with ∼ {\\displaystyle \\,\\sim \\,} \" or just \"respects ∼ {\\displaystyle \\,\\sim \\,} \" instead of \"invariant under ∼ {\\displaystyle \\,\\sim \\,} \".", "title": "Invariants" }, { "paragraph_id": 16, "text": "Any function f : X → Y {\\displaystyle f:X\\to Y} is class invariant under ∼ , {\\displaystyle \\,\\sim \\,,} according to which x 1 ∼ x 2 {\\displaystyle x_{1}\\sim x_{2}} if and only if f ( x 1 ) = f ( x 2 ) . {\\displaystyle f\\left(x_{1}\\right)=f\\left(x_{2}\\right).} The equivalence class of x {\\displaystyle x} is the set of all elements in X {\\displaystyle X} which get mapped to f ( x ) , {\\displaystyle f(x),} that is, the class [ x ] {\\displaystyle [x]} is the inverse image of f ( x ) . {\\displaystyle f(x).} This equivalence relation is known as the kernel of f . {\\displaystyle f.}", "title": "Invariants" }, { "paragraph_id": 17, "text": "More generally, a function may map equivalent arguments (under an equivalence relation ∼ X {\\displaystyle \\sim _{X}} on X {\\displaystyle X} ) to equivalent values (under an equivalence relation ∼ Y {\\displaystyle \\sim _{Y}} on Y {\\displaystyle Y} ). Such a function is a morphism of sets equipped with an equivalence relation.", "title": "Invariants" }, { "paragraph_id": 18, "text": "In topology, a quotient space is a topological space formed on the set of equivalence classes of an equivalence relation on a topological space, using the original space's topology to create the topology on the set of equivalence classes.", "title": "Quotient space in topology" }, { "paragraph_id": 19, "text": "In abstract algebra, congruence relations on the underlying set of an algebra allow the algebra to induce an algebra on the equivalence classes of the relation, called a quotient algebra. In linear algebra, a quotient space is a vector space formed by taking a quotient group, where the quotient homomorphism is a linear map. By extension, in abstract algebra, the term quotient space may be used for quotient modules, quotient rings, quotient groups, or any quotient algebra. 
However, the use of the term for the more general cases can as often be by analogy with the orbits of a group action.", "title": "Quotient space in topology" }, { "paragraph_id": 20, "text": "The orbits of a group action on a set may be called the quotient space of the action on the set, particularly when the orbits of the group action are the right cosets of a subgroup of a group, which arise from the action of the subgroup on the group by left translations, or respectively the left cosets as orbits under right translation.", "title": "Quotient space in topology" }, { "paragraph_id": 21, "text": "A normal subgroup of a topological group, acting on the group by translation action, is a quotient space in the senses of topology, abstract algebra, and group actions simultaneously.", "title": "Quotient space in topology" }, { "paragraph_id": 22, "text": "Although the term can be used for any equivalence relation's set of equivalence classes, possibly with further structure, the intent of using the term is generally to compare that type of equivalence relation on a set X , {\\displaystyle X,} either to an equivalence relation that induces some structure on the set of equivalence classes from a structure of the same kind on X , {\\displaystyle X,} or to the orbits of a group action. Both the sense of a structure preserved by an equivalence relation, and the study of invariants under group actions, lead to the definition of invariants of equivalence relations given above.", "title": "Quotient space in topology" } ]
2001-08-20T13:12:50Z
2023-12-12T08:55:23Z
[ "Template:Vanchor", "Template:Reflist", "Template:Commons category-inline", "Template:Set theory", "Template:Redirect", "Template:Em", "Template:Anchor", "Template:Annotated link", "Template:Harvnb", "Template:About", "Template:Mvar", "Template:Math", "Template:Main", "Template:Citation", "Template:Authority control", "Template:Short description", "Template:Cite web" ]
https://en.wikipedia.org/wiki/Equivalence_class
9,262
Entertainment
Entertainment is a form of activity that holds the attention and interest of an audience or gives pleasure and delight. It can be an idea or a task, but it is more likely to be one of the activities or events that have developed over thousands of years specifically for the purpose of keeping an audience's attention. Although people's attention is held by different things because individuals have different preferences, most forms of entertainment are recognisable and familiar. Storytelling, music, drama, dance, and different kinds of performance exist in all cultures, were supported in royal courts, and developed into sophisticated forms over time, becoming available to all citizens. The process has been accelerated in modern times by an entertainment industry that records and sells entertainment products. Entertainment evolves and can be adapted to suit any scale, ranging from an individual who chooses private entertainment from a now enormous array of pre-recorded products, to a banquet adapted for two, to any size or type of party with appropriate music and dance, to performances intended for thousands, and even for a global audience. The experience of being entertained has come to be strongly associated with amusement, so that one common understanding of the idea is fun and laughter, although many entertainments have a serious purpose. This may be the case in various forms of ceremony, celebration, religious festival, or satire, for example. Hence, there is the possibility that what appears to be entertainment may also be a means of achieving insight or intellectual growth. An important aspect of entertainment is the audience, which turns a private recreation or leisure activity into entertainment. The audience may have a passive role, as in the case of people watching a play, opera, television show, or film; or the audience role may be active, as in the case of games, where the participant and audience roles may be routinely reversed. Entertainment can be public or private, involving formal, scripted performances, as in the case of theatre or concerts, or unscripted and spontaneous, as in the case of children's games. Most forms of entertainment have persisted over many centuries, evolving due to changes in culture, technology, and fashion, as with stage magic. Films and video games, although they use newer media, continue to tell stories, present drama, and play music. Festivals devoted to music, film, or dance allow audiences to be entertained over a number of consecutive days. Some entertainment, such as public executions, is now illegal in most countries. Activities such as fencing or archery, once used in hunting or war, have become spectator sports. In the same way, other activities, such as cooking, have developed into performances among professionals, staged as global competitions, and then broadcast for entertainment. What is entertainment for one group or individual may be regarded as work or an act of cruelty by another. The familiar forms of entertainment have the capacity to cross over into different media and have demonstrated a seemingly unlimited potential for creative remix. This has ensured the continuity and longevity of many themes, images, and structures. The Oxford English Dictionary gives Latin and French origins for the word "entertain", including inter (among) + tenir (to hold) as derivations, giving translations of "to hold mutually" or "to hold intertwined" and "to engage, keep occupied, the attention, thoughts, or time (of a person)". 
It also provides words like "merry-making", "pleasure", and "delight", as well as "to receive as a guest and show hospitality to". It cites a 1490 usage by William Caxton.

Entertainment can be distinguished from other activities such as education and marketing even though both fields have learned how to use the appeal of entertainment to achieve their different goals. Sometimes entertainment can be a mixture of both. The importance and impact of entertainment are recognised by scholars, and its increasing sophistication has influenced practices in other fields such as museology. Psychologists say the function of media entertainment is "the attainment of gratification". No other results or measurable benefits are usually expected from it (except perhaps the final score in a sporting entertainment). This is in contrast to education (which is designed with the purpose of developing understanding or helping people to learn) and marketing (which aims to encourage people to purchase commercial products). However, the distinctions become blurred when education seeks to be more "entertaining" and entertainment or marketing seek to be more "educational". Such mixtures are often known by the neologisms "edutainment" or "infotainment". The psychology of entertainment as well as of learning has been applied to all these fields. Some education-entertainment is a serious attempt to combine the best features of the two. Some people are entertained by others' pain or the idea of their unhappiness (schadenfreude).

An entertainment might go beyond gratification and produce some insight in its audience. Entertainment may skilfully consider universal philosophical questions such as: "What does it mean to be human?"; "What is the right thing to do?"; or "How do I know what I know?". "The meaning of life", for example, is the subject of a wide range of entertainment forms, including film, music and literature. Questions such as these drive many narratives and dramas, whether they are presented in the form of a story, film, play, poem, book, dance, comic, or game. Dramatic examples include Shakespeare's influential play Hamlet, whose hero articulates these concerns in poetry; and films, such as The Matrix, which explores the nature of knowledge and was released worldwide. Novels give great scope for investigating these themes while they entertain their readers. An example of a creative work that considers philosophical questions so entertainingly that it has been presented in a very wide range of forms is The Hitchhiker's Guide to the Galaxy. Originally a radio comedy, this story became so popular that it has also appeared as a novel, film, television series, stage show, comic, audiobook, LP record, adventure game, and online game; its ideas have become popular references (see Phrases from The Hitchhiker's Guide to the Galaxy), and it has been translated into many languages. Its themes encompass the meaning of life, as well as "the ethics of entertainment, artificial intelligence, multiple worlds, God, and philosophical method".

The "ancient craft of communicating events and experiences, using words, images, sounds and gestures" by telling a story is not only the means by which people have passed on their cultural values, traditions, and history from one generation to another, but also an important part of most forms of entertainment since the earliest times. Stories are still told in the early forms, for example, around a fire while camping, or when listening to the stories of another culture as a tourist.
"The earliest storytelling sequences we possess, now of course, committed to writing, were undoubtedly originally a speaking from mouth to ear and their force as entertainment derived from the very same elements we today enjoy in films and novels." Storytelling is an activity that has evolved and developed "toward variety". Many entertainments, including storytelling but especially music and drama, remain familiar but have developed into a wide variety of form to suit a very wide range of personal preferences and cultural expression. Many types are blended or supported by other forms. For example, drama, stories and banqueting (or dining) are commonly enhanced by music; sport and games are incorporated into other activities to increase appeal. Some may have evolved from serious or necessary activities (such as running and jumping) into competition and then become entertainment. It is said, for example, that pole vaulting "may have originated in the Netherlands, where people used long poles to vault over wide canals rather than wear out their clogs walking miles to the nearest bridge. Others maintain that pole vaulting was used in warfare to vault over fortress walls during battle." The equipment for such sports has become increasingly sophisticated. Vaulting poles, for example, were originally made from woods such as ash, hickory or hazel; in the 19th century bamboo was used and in the 21st century poles can be made of carbon fibre. Other activities, such as walking on stilts, are still seen in circus performances in the 21st century. Gladiatorial combats, also known as "gladiatorial games", popular during Roman times, provide a good example of an activity that is a combination of sport, punishment, and entertainment. Changes to what is regarded as entertainment can occur in response to cultural or historical shifts. Hunting wild animals, for example, was introduced into the Roman Empire from Carthage and became a popular public entertainment and spectacle, supporting an international trade in wild animals. Entertainment also evolved into different forms and expressions as a result of social upheavals such as wars and revolutions. During the Chinese Cultural Revolution, for example, Revolutionary opera was sanctioned by the Communist party and World War I, the Great Depression and the Russian Revolution all affected entertainment. Relatively minor changes to the form and venue of an entertainment continue to come and go as they are affected by the period, fashion, culture, technology, and economics. For example, a story told in dramatic form can be presented in an open-air theatre, a music hall, a movie theatre, a multiplex, or as technological possibilities advanced, via a personal electronic device such as a tablet computer. Entertainment is provided for mass audiences in purpose-built structures such as a theatre, auditorium, or stadium. One of the most famous venues in the Western world, the Colosseum, "dedicated AD 80 with a hundred days of games, held fifty thousand spectators," and in it audiences "enjoyed blood sport with the trappings of stage shows". Spectacles, competitions, races, and sports were once presented in this purpose-built arena as public entertainment. New stadia continue to be built to suit the ever more sophisticated requirements of global audiences. Imperial and royal courts have provided training grounds and support for professional entertainers, with different cultures using palaces, castles and forts in different ways. 
In the Maya city states, for example, "spectacles often took place in large plazas in front of palaces; the crowds gathered either there or in designated places from which they could watch at a distance." Court entertainments also crossed cultures. For example, the durbar was introduced to India by the Mughals, and passed on to the British Empire, which then followed Indian tradition: "institutions, titles, customs, ceremonies by which a Maharaja or Nawab were installed ... the exchange of official presents ... the order of precedence", for example, were "all inherited from ... the Emperors of Delhi". In Korea, the "court entertainment dance" was "originally performed in the palace for entertainment at court banquets."

Court entertainment often moved from being associated with the court to more general use among commoners. This was the case with "masked dance-dramas" in Korea, which "originated in conjunction with village shaman rituals and eventually became largely an entertainment form for commoners". Nautch dancers in the Mughal Empire performed in Indian courts and palaces. Another evolution, similar to that from courtly entertainment to common practice, was the transition from religious ritual to secular entertainment, such as happened during the Goryeo dynasty with the Narye festival. Originally "solely religious or ritualistic, a secular component was added at the conclusion". Former courtly entertainments, such as jousting, often also survived in children's games.

In some courts, such as those during the Byzantine Empire, the genders were segregated among the upper classes, so that "at least before the period of the Komnenoi" (1081–1185) men were separated from women at ceremonies where there was entertainment such as receptions and banquets.

Court ceremonies, palace banquets, and the spectacles associated with them have been used not only to entertain but also to demonstrate wealth and power. Such events reinforce the relationship between ruler and ruled; between those with power and those without, serving to "dramatise the differences between ordinary families and that of the ruler". This is as much the case for traditional courts as it is for contemporary ceremonials, such as the Hong Kong handover ceremony in 1997, at which an array of entertainments (including a banquet, a parade, fireworks, a festival performance and an art spectacle) was put to the service of highlighting a change in political power. Court entertainments were typically performed for royalty and courtiers as well as "for the pleasure of local and visiting dignitaries". Royal courts, such as the Korean one, also supported traditional dances. In Sudan, musical instruments such as the so-called "slit" or "talking" drums, once "part of the court orchestra of a powerful chief", had multiple purposes: they were used to make music; "speak" at ceremonies; mark community events; send long-distance messages; and call men to hunt or war.

Courtly entertainments also demonstrate the complex relationship between entertainer and spectator: individuals may be either an entertainer or part of the audience, or they may swap roles even during the course of one entertainment. In the court at the Palace of Versailles, "thousands of courtiers, including men and women who inhabited its apartments, acted as both performers and spectators in daily rituals that reinforced the status hierarchy".
Like court entertainment, royal occasions such as coronations and weddings provided opportunities to entertain both the aristocracy and the people. For example, the splendid 1595 Accession Day celebrations of Queen Elizabeth I offered tournaments and jousting and other events performed "not only before the assembled court, in all their finery, but also before thousands of Londoners eager for a good day's entertainment. Entry for the day's events at the Tiltyard in Whitehall was set at 12d".

Although most forms of entertainment have evolved and continued over time, some once-popular forms are no longer as acceptable. For example, during earlier centuries in Europe, watching or participating in the punishment of criminals or social outcasts was an accepted and popular form of entertainment. Many forms of public humiliation also offered local entertainment in the past. Even capital punishments such as hanging and beheading, offered to the public as a warning, were also regarded partly as entertainment. Capital punishments that lasted longer, such as stoning and drawing and quartering, afforded a greater public spectacle. "A hanging was a carnival that diverted not merely the unemployed but the unemployable. Good bourgeois or curious aristocrats who could afford it watched it from a carriage or rented a room." Public punishment as entertainment lasted until the 19th century, by which time "the awesome event of a public hanging aroused the[ir] loathing of writers and philosophers". Both Dickens and Thackeray wrote about a hanging in Newgate Prison in 1840, and "taught an even wider public that executions are obscene entertainments".

Children's entertainment is centred on play and is significant for children's growth. It often mimics adult activities, such as watching performances (on television); prepares them for adult responsibilities, such as child rearing or social interaction (through dolls, pets and group games); or develops skills such as motor skills (such as a game of marbles), needed for sports and music. In the modern day, it often involves sedentary engagement with a television or tablet computer. Entertainment is also provided to children or taught to them by adults, and many activities that appeal to them, such as puppets, clowns, pantomimes and cartoons, are also enjoyed by adults.

Children have always played games. It is accepted that as well as being entertaining, playing games helps children's development. One of the most famous visual accounts of children's games is a painting by Pieter Bruegel the Elder called Children's Games, painted in 1560. It depicts children playing a range of games that presumably were typical of the time. Many of these games, such as marbles, hide-and-seek, blowing soap bubbles and piggyback riding, continue to be played.

Most forms of entertainment can be or are modified to suit children's needs and interests. During the 20th century, starting with the often criticised but nonetheless important work of G. Stanley Hall, who "promoted the link between the study of development and the 'new' laboratory psychology", and especially with the work of Jean Piaget, who "saw cognitive development as being analogous to biological development", it became understood that the psychological development of children occurs in stages and that their capacities differ from those of adults. Hence, stories and activities, whether in books, film, or video games, were developed specifically for child audiences.
Countries have responded to the special needs of children and the rise of digital entertainment by developing systems such as television content rating systems to guide the public and the entertainment industry. In the 21st century, as with adult products, much entertainment is available for children on the internet for private use. This constitutes a significant change from earlier times. The amount of time expended by children indoors on screen-based entertainment and the "remarkable collapse of children's engagement with nature" have drawn criticism for their negative effects on imagination, adult cognition and psychological well-being.

Banquets have been a venue for amusement, entertainment or pleasure since ancient times, continuing into the modern era and into the 21st century, when they are still used for many of their original purposes – to impress visitors, especially important ones; to show hospitality; and as an occasion to showcase supporting entertainments such as music, dancing, or both. They were an integral part of court entertainments and helped entertainers develop their skills. They are also important components of celebrations such as coronations, weddings, birthdays, civic or political achievements, military engagements or victories, as well as religious obligations; one of the most famous banqueting venues is the Banqueting House, Whitehall, in London. In modern times, banquets are available privately, or commercially in restaurants, sometimes combined with a dramatic performance in dinner theatres. Cooking by professional chefs has also become a form of entertainment as part of global competitions such as the Bocuse d'Or.

Music is a supporting component of many kinds of entertainment and most kinds of performance. For example, it is used to enhance storytelling, it is indispensable in dance and opera, and it is usually incorporated into dramatic film or theatre productions. Music is also a universal and popular type of entertainment on its own, constituting an entire performance such as when concerts are given. Depending on the rhythm, instrument, performance and style, music is divided into many genres, such as classical, jazz, folk, rock, pop or traditional. Since the 20th century, performed music, once available only to those who could pay for the performers, has been available cheaply to individuals by the entertainment industry, which broadcasts it or pre-records it for sale.

The wide variety of musical performances, whether or not they are artificially amplified, all provide entertainment irrespective of whether the performance is from soloists, choral or orchestral groups, or ensembles. Live performances use specialised venues, which might be small or large; indoors or outdoors; free or expensive. The audiences have different expectations of the performers as well as of their own role in the performance. For example, some audiences expect to listen silently and are entertained by the excellence of the music, its rendition or its interpretation. Other audiences of live performances are entertained by the ambience and the chance to participate. Even more listeners are entertained by pre-recorded music and listen privately.

Musical entertainment may use the human voice alone, instruments alone, or some combination of the two. Whether the performance is given by vocalists or instrumentalists, the performers may be soloists or part of a small or large group, in turn entertaining an audience that might be individual, passing by, small or large.
Singing is generally accompanied by instruments, although some forms, notably a cappella and overtone singing, are unaccompanied. Modern concerts often use various special effects and other theatrics to accompany performances of singing and dancing.

Games are played for entertainment – sometimes purely for recreation, sometimes for achievement or reward as well. They can be played alone, in teams, or online; by amateurs or by professionals. The players may have an audience of non-players, such as when people are entertained by watching a chess championship. On the other hand, players in a game may constitute their own audience as they take their turn to play. Often, part of the entertainment for children playing a game is deciding who is part of their audience and who is a player.

Equipment varies with the game. Board games, such as Go, Monopoly or backgammon, need a board and markers. One of the oldest known board games is Senet, a game played in Ancient Egypt and enjoyed by the pharaoh Tutankhamun. Card games, such as whist, poker and bridge, have long been played as evening entertainment among friends. For these games, all that is needed is a deck of playing cards. Other games, such as bingo, played with numerous strangers, have been organised to involve the participation of non-players via gambling. Many games are geared to children and can be played outdoors, including hopscotch, hide and seek, or blind man's bluff. The list of ball games is quite extensive. It includes, for example, croquet, lawn bowling and paintball as well as many sports using various forms of balls. The options cater to a wide range of skill and fitness levels. Physical games can develop agility and competence in motor skills. Number games such as Sudoku and puzzle games like the Rubik's Cube can develop mental prowess.

Video games, popular across the world, are played using a controller to create results on a screen. They can also be played online, with participants joining in remotely. In the second half of the 20th century and in the 21st century, the number of such games increased enormously, providing a wide variety of entertainment to players around the world.

French poet Louise Labé (1520/1522–1566) provided what has been described as "a profound and timeless insight into reading's innate power":

"The past gives us pleasure and is of more service than the present; but the delight of what we once felt is dimly lost never to return and its memory is as distressing as the events themselves were then delectable ... But when we happen to put our thoughts in writing, how easily, later on, does our mind race through an infinity of events, incessantly alive, so that a long time afterwards when we take up those written pages we can return to the same place and to the same disposition in which we once found ourselves."
– quote from and commentary by Fischer (2003)

The young Saint Teresa of Ávila (1515–1582) read chivalrous novels and wrote about the "rapture" that books provided:

"I became accustomed to reading [novels] and that small fault made me cool my desire and will to do other tasks. I thought nothing of spending many hours a day and night in this vain exercise, hidden from my father. My rapture in this was so great, that unless I had a new book to read, it seemed to me that I could not be happy."
– quoted in Fischer (2003)

Reading has been a source of entertainment for a very long time, especially when other forms, such as performance entertainments, were (or are) either unavailable or too costly.
Even when the primary purpose of the writing is to inform or instruct, reading is well known for its capacity to distract from everyday worries. Both stories and information have been passed on through the tradition of orality, and oral traditions survive in the form of performance poetry, for example. However, they have drastically declined. "Once literacy had arrived in strength, there was no return to the oral prerogative." The advent of printing, the reduction in costs of books and increasing literacy all served to enhance the mass appeal of reading. Furthermore, as fonts were standardised and texts became clearer, "reading ceased being a painful process of decipherment and became an act of pure pleasure". By the 16th century in Europe, the appeal of reading for entertainment was well established.

Among literature's many genres are some designed, in whole or in part, purely for entertainment. Limericks, for example, use verse in a strict, predictable rhyme and rhythm to create humour and to amuse an audience of listeners or readers. Interactive books such as "choose your own adventure" can make literary entertainment more participatory.

Comics and editorial cartoons are literary genres that use drawings or graphics, usually in combination with text, to convey an entertaining narrative. Many contemporary comics have elements of fantasy and are produced by companies that are part of the entertainment industry. Others have unique authors who offer a more personal, philosophical view of the world and the problems people face. Comics about superheroes such as Superman are of the first type. Examples of the second sort include the work of Charles M. Schulz, who over 50 years produced a popular comic called Peanuts about the relationships among a cast of child characters, and Michael Leunig, who entertains by producing whimsical cartoons that also incorporate social criticism. The Japanese manga style differs from the Western approach in that it encompasses a wide range of genres and themes for a readership of all ages. Caricature uses a kind of graphic entertainment for purposes ranging from merely putting a smile on the viewer's face, to raising social awareness, to highlighting the moral characteristics of a person being caricatured.

Comedy is both a genre of entertainment and a component of it, providing laughter and amusement, whether the comedy is the sole purpose or used as a form of contrast in an otherwise serious piece. It is a valued contributor to many forms of entertainment, including literature, theatre, opera, film and games. In royal courts, such as the Byzantine court, and presumably also in its wealthy households, "mimes were the focus of orchestrated humour, expected or obliged to make fun of all at court, not even excepting the emperor and members of the imperial family. This highly structured role of jester consisted of verbal humour, including teasing, jests, insult, ridicule, and obscenity, and non-verbal humour such as slapstick and horseplay in the presence of an audience." In medieval times, comic types – the buffoon, jester, hunchback, dwarf and jokester – were all "considered to be essentially of one comic type: the fool", who, while not necessarily funny, represented "the shortcomings of the individual". Shakespeare wrote seventeen comedies that incorporate many techniques still used by performers and writers of comedy, such as jokes, puns, parody, wit, observational humour, and the unexpected effect of irony.
One-liner jokes and satire are also used to comedic effect in literature. In farce, the comedy is the primary purpose. The meaning of the word "comedy" and the audience's expectations of it have changed over time and vary according to culture. Simple physical comedy such as slapstick is entertaining to a broad range of people of all ages. However, as cultures become more sophisticated, national nuances appear in the style and references, so that what is amusing in one culture may be unintelligible in another.

Live performances before an audience constitute a major form of entertainment, one that was especially important before the invention of audio and video recording. Performance takes a wide range of forms, including theatre, music and drama. In the 16th and 17th centuries, European royal courts presented masques that were complex theatrical entertainments involving dancing, singing and acting. Opera is a similarly demanding performance style that remains popular. It also encompasses all three forms, demanding a high level of musical and dramatic skill, collaboration and, like the masque, production expertise as well.

Audiences generally show their appreciation of an entertaining performance with applause. However, all performers run the risk of failing to hold their audience's attention and thus failing to entertain. Audience dissatisfaction is often brutally honest and direct.

"Of course you all ought to know that while singing a good song or giving a good recitation ... helps to arrest the company's attention ... Such at least was the case with me – the publican devised a plan to bring my entertainment to an end abruptly, and the plan was, he told the waiter to throw a wet towel at me, which, of course, the waiter did ... and I received the wet towel, full force, in the face, which staggered me ... and had the desired effect of putting an end to me giving any more entertainments in the house."
– William McGonagall (performance artist and poet)

Storytelling is an ancient form of entertainment that has influenced almost all other forms. It is "not only entertainment, it is also thinking through human conflicts and contradictions". Hence, although stories may be delivered directly to a small listening audience, they are also presented as entertainment and used as a component of any piece that relies on a narrative, such as film, drama, ballet, and opera. Written stories have been enhanced by illustrations, often to a very high artistic standard, for example, on illuminated manuscripts and on ancient scrolls such as Japanese ones. Stories remain a common way of entertaining a group that is on a journey. Showing how stories are used to pass the time and entertain an audience of travellers, Chaucer used pilgrims in his literary work The Canterbury Tales in the 14th century, as did Wu Cheng'en in the 16th century in Journey to the West. Even though journeys can now be completed much faster, stories are still told to passengers en route in cars and aeroplanes, either orally or delivered by some form of technology.

The power of stories to entertain is evident in one of the most famous ones – Scheherazade – a story in the Persian professional storytelling tradition, of a woman who saves her own life by telling stories. The connections between the different types of entertainment are shown by the way that stories like this inspire a retelling in another medium, such as music, film or games.
For example, composers Rimsky-Korsakov, Ravel and Szymanowski have each been inspired by the Scheherazade story and turned it into an orchestral work; director Pasolini made a film adaptation; and there is an innovative video game based on the tale. Stories may be told wordlessly, in music, dance or puppetry, for example, such as in the Javanese tradition of wayang, in which the performance is accompanied by a gamelan orchestra, or in the similarly traditional Punch and Judy show.

Epic narratives, poems, sagas and allegories from all cultures tell such gripping tales that they have inspired countless other stories in all forms of entertainment. Examples include the Hindu Ramayana and Mahabharata; Homer's Odyssey and Iliad; the first Arabic novel Hayy ibn Yaqdhan; the Persian epic Shahnameh; the Sagas of Icelanders and the celebrated Tale of Genji. Collections of stories, such as Grimms' Fairy Tales or those by Hans Christian Andersen, have been similarly influential. Originally published in the early 19th century, the Grimms' collection of folk stories significantly influenced modern popular culture, which subsequently used its themes, images, symbols, and structural elements to create new entertainment forms.

Some of the most powerful and long-lasting stories are the foundation stories, also called origin or creation myths, such as the Dreamtime myths of the Australian Aborigines, the Mesopotamian Epic of Gilgamesh, or the Hawaiian stories of the origin of the world. These too are developed into books, films, music and games in a way that increases their longevity and enhances their entertainment value.

Theatre performances, typically dramatic or musical, are presented on a stage for an audience and have a history that goes back to Hellenistic times, when "leading musicians and actors" performed widely at "poetical competitions", for example at "Delphi, Delos, Ephesus". Aristotle and his teacher Plato both wrote on the theory and purpose of theatre. Aristotle posed questions such as "What is the function of the arts in shaping character? Should a member of the ruling class merely watch performances or be a participant and perform? What kind of entertainment should be provided for those who do not belong to the elite?" The "Ptolemys in Egypt, the Seleucids in Pergamum" also had a strong theatrical tradition, and later, wealthy patrons in Rome staged "far more lavish productions".

Audiences' expectations about the performance and their engagement with it have changed over time. For example, in England during the 18th century, "the prejudice against actresses had faded", and in Europe generally, going to the theatre, once a socially dubious activity, became "a more respectable middle-class pastime" in the late 19th and early 20th centuries, when the variety of popular entertainments increased. Operetta and music halls became available, and new drama theatres such as the Moscow Art Theatre and the Suvorin Theatre in Russia opened. At the same time, commercial newspapers "began to carry theatre columns and reviews" that helped make theatre "a legitimate subject of intellectual debate" in general discussions about art and culture. Audiences began to gather to "appreciate creative achievement, to marvel at, and be entertained by, the prominent 'stars'." Vaudeville and music halls, popular at this time in the United States, England, Canada, Australia and New Zealand, were themselves eventually superseded.
Plays, musicals, monologues, pantomimes, and performance poetry are part of the very long history of theatre, which is also the venue for the type of performance known as stand-up comedy. In the 20th century, radio and television, often broadcast live, extended the theatrical tradition that continued to exist alongside the new forms.

The stage and the spaces set out in front of it for an audience create a theatre. All types of stage are used with all types of seating for the audience, including the impromptu or improvised; the temporary; the elaborate; or the traditional and permanent. They are erected indoors or outdoors. The skill of managing, organising and preparing the stage for a performance is known as stagecraft. The audience's experience of the entertainment is affected by their expectations, the stagecraft, the type of stage, and the type and standard of seating provided.

Films are a major form of entertainment, although not all films have entertainment as their primary purpose: documentary film, for example, aims to create a record or inform, although the two purposes often work together. The medium was a global business from the beginning: "The Lumière brothers were the first to send cameramen throughout the world, instructing them to film everything which could be of interest for the public." In 1908, Pathé launched and distributed newsreels, and by World War I, films were meeting an enormous need for mass entertainment. "In the first decade of the [20th] century cinematic programmes combined, at random, fictions and newsfilms." The Americans first "contrived a way of producing an illusion of motion through successive images," but "the French were able to transform a scientific principle into a commercially lucrative spectacle". Film therefore became a part of the entertainment industry from its early days.

Increasingly sophisticated techniques have been used in the film medium to delight and entertain audiences. Animation, for example, which involves the display of rapid movement in an art work, is one of these techniques that particularly appeals to younger audiences. The advent of computer-generated imagery (CGI) in the 21st century made it "possible to do spectacle" more cheaply and "on a scale never dreamed of" by Cecil B. DeMille. From the 1930s to 1950s, movies and radio were the "only mass entertainment", but by the second decade of the 21st century, technological changes, economic decisions, risk aversion and globalisation reduced both the quality and range of films being produced. Sophisticated visual effects and CGI techniques, for example, rather than humans, were used not only to create realistic images of people, landscapes and events (both real and fantastic) but also to animate non-living items such as Lego bricks, normally used for entertainment as a physical game. Creators of The Lego Movie "wanted the audience to believe they were looking at actual Lego bricks on a tabletop that were shot with a real camera, not what we actually did, which was create vast environments with digital bricks inside the computer."

The convergence of computers and film has allowed entertainment to be presented in a new way, and the technology has also allowed those with the personal resources to screen films in a home theatre, recreating in a private venue the quality and experience of a public theatre.
This is similar to the way in which, in earlier centuries, the nobility could stage private musical performances or use domestic theatres in large homes to perform private plays. Films also re-imagine entertainment from other forms, turning stories, books and plays, for example, into new entertainments. The Story of Film, a documentary about the history of film, gives a survey of global achievements and innovations in the medium, as well as changes in the conception of film-making. It demonstrates that while some films, particularly those in the Hollywood tradition that combines "realism and melodramatic romanticism", are intended as a form of escapism, others require a deeper engagement or more thoughtful response from their audiences. For example, the award-winning Senegalese film Xala takes government corruption as its theme. Charlie Chaplin's film The Great Dictator was a brave and innovative parody, also on a political theme. Stories that are thousands of years old, such as Noah, have been re-interpreted in film, applying familiar literary devices such as allegory and personification with new techniques such as CGI to explore big themes such as "human folly", good and evil, courage and despair, love, faith, and death – themes that have been a mainstay of entertainment across all its forms.

As in other media, excellence and achievement in films are recognised through a range of awards, including ones from the American Academy of Motion Picture Arts and Sciences, the British Academy of Film and Television Arts, the Cannes International Film Festival in France and the Asia Pacific Screen Awards.

The many forms of dance provide entertainment for all age groups and cultures. Dance can be serious in tone, such as when it is used to express a culture's history or important stories; it may be provocative; or it may be put in the service of comedy. Since it combines many forms of entertainment – music, movement, storytelling, theatre – it provides a good example of the various ways that these forms can be combined to create entertainment for different purposes and audiences.

Dance is "a form of cultural representation" that involves not just dancers, but "choreographers, audience members, patrons and impresarios ... coming from all over the globe and from vastly varied time periods. Whether from Africa, Asia or Europe, dance is constantly negotiating the realms of political, social, spiritual and artistic influence." Even though dance traditions may be limited to one cultural group, they all develop. For example, in Africa, there are "Dahomean dances, Hausa dances, Masai dances and so forth." Ballet is an example of a highly developed Western form of dance that moved to the theatres from the French court during the time of Louis XIV, the dancers becoming professional theatrical performers. Some dances, such as the quadrille, a square dance that "emerged during the Napoleonic years in France", and other country dances were once popular at social gatherings like balls, but are now rarely performed. On the other hand, many folk dances (such as Scottish Highland dancing and Irish dancing) have evolved into competitions, which, by adding to their audiences, have increased their entertainment value. "Irish dance theatre, which sometimes features traditional Irish steps and music, has developed into a major dance form with an international reputation."
Since dance is often "associated with the female body and women's experiences", female dancers, who dance to entertain, have in some cases been regarded as distinct from "decent" women because they "use their bodies to make a living instead of hiding them as much as possible". Society's attitudes to female dancers depend on the culture, its history and the entertainment industry itself. For example, while some cultures regard any dancing by women as "the most shameful form of entertainment", other cultures have established venues such as strip clubs where deliberately erotic or sexually provocative dances such as striptease are performed in public by professional women dancers for mostly male audiences.

Various political regimes have sought to control or ban dancing or specific types of dancing, sometimes because of disapproval of the music or clothes associated with it. Nationalism, authoritarianism and racism have played a part in banning dances or dancing. For example, during the Nazi regime, American dances such as swing, regarded as "completely un-German", had "become a public offense and needed to be banned". Similarly, in Shanghai, China, in the 1930s, "dancing and nightclubs had come to symbolise the excess that plagued Chinese society" and officials wondered if "other forms of entertainment such as brothels" should also be banned. Banning had the effect of making "the dance craze" even greater. In Ireland, the Public Dance Hall Act of 1935 "banned – but did not stop – dancing at the crossroads and other popular dance forms such as house and barn dances." In the US, various dances were once banned, either because, like burlesque, they were suggestive, or because, like the Twist, they were associated with African Americans. "African American dancers were typically banned from performing in minstrel shows until after the American Civil War."

Dances can be performed solo, in pairs, in groups, or by massed performers. They might be improvised or highly choreographed; spontaneous for personal entertainment (such as when children begin dancing for themselves); or performed for a private audience, a paying audience, a world audience, or an audience interested in a particular dance genre. They might be a part of a celebration, such as a wedding or New Year, or a cultural ritual with a specific purpose, such as a dance by warriors like the haka. Some dances, such as traditional dance and ballet, need a very high level of skill and training; others, such as the can-can, require a very high level of energy and physical fitness. Entertaining the audience is a normal part of dance, but its physicality often also produces joy for the dancers themselves.

Animals have been used for the purposes of entertainment for millennia. They have been hunted for entertainment (as opposed to hunted for food); displayed while they hunt for prey; watched when they compete with each other; and watched while they perform a trained routine for human amusement. The Romans, for example, were entertained both by competitions involving wild animals and by acts performed by trained animals. They watched as "lions and bears danced to the music of pipes and cymbals; horses were trained to kneel, bow, dance and prance ... acrobats turning handsprings over wild lions and vaulting over wild leopards." There were "violent confrontations with wild beasts" and "performances over time became more brutal and bloodier".
Animals that perform trained routines or "acts" for human entertainment include fleas in flea circuses, dolphins in dolphinaria, and monkeys doing tricks for an audience on behalf of the player of a street organ. Animals kept in zoos in ancient times were often kept there for later use in the arena as entertainment or for their entertainment value as exotica.

Many contests between animals are now regarded as sports – for example, horse racing is regarded as both a sport and an important source of entertainment. Its economic impact means that it is also considered a global industry, one in which horses are carefully transported around the world to compete in races. In Australia, Melbourne Cup Day, on which the Melbourne Cup horse race is run, is a public holiday, and the public regards the race as an important annual event. Like horse racing, camel racing requires human riders, while greyhound racing does not. People find it entertaining to watch animals race competitively, whether they are trained, like horses, camels or dogs, or untrained, like cockroaches.

The use of animals for entertainment is sometimes controversial, especially the hunting of wild animals. Some contests between animals, once popular entertainment for the public, have become illegal because of the cruelty involved. Among these are blood sports such as bear-baiting, dog fighting and cockfighting. Other contests involving animals remain controversial and have both supporters and detractors. For example, the conflict between opponents of pigeon shooting, who view it as "a cruel and moronic exercise in marksmanship", and proponents, who view it as entertainment, has been tested in a court of law. Fox hunting, which involves the use of horses as well as hounds, and bullfighting, which has a strong theatrical component, are two entertainments that have a long and significant cultural history. They both involve animals and are variously regarded as sport, entertainment or cultural tradition. Among the organisations set up to advocate for the rights of animals are some whose concerns include the use of animals for entertainment. However, "in many cases of animal advocacy groups versus organisations accused of animal abuse, both sides have cultural claims."

A circus, described as "one of the most brazen of entertainment forms", is a special type of theatrical performance, involving a variety of physical skills such as acrobatics and juggling, and sometimes performing animals. Usually thought of as a travelling show performed in a big top, the circus was first performed in permanent venues. Philip Astley is regarded as the founder of the modern circus in the second half of the 18th century, and Jules Léotard is the French performer credited with developing the art of the trapeze, considered synonymous with circuses. Astley brought together performances that were generally familiar in traditional British fairs "at least since the beginning of the 17th century": "tumbling, rope-dancing, juggling, animal tricks and so on". It has been claimed that "there is no direct link between the Roman circus and the circus of modern times. ... Between the demise of the Roman 'circus' and the foundation of Astley's Amphitheatre in London some 1300 years later, the nearest thing to a circus ring was the rough circle formed by the curious onlookers who gathered around the itinerant tumbler or juggler on a village green."
The form of entertainment known as stage magic or conjuring, recognisable as a performance, is based on traditions and texts of magical rites and dogmas that have been a part of most cultural traditions since ancient times. (References to magic, for example, can be found in the Bible, in Hermeticism, in Zoroastrianism, in the Kabbalistic tradition, in mysticism and in the sources of Freemasonry.) Stage magic is performed for an audience in a variety of media and locations: on stage, on television, in the street, and live at parties or events. It is often combined with other forms of entertainment, such as comedy or music, and showmanship is often an essential part of magic performances. Performance magic relies on deception, psychological manipulation, sleight of hand and other forms of trickery to give an audience the illusion that a performer can achieve the impossible. Audiences amazed at the stunt performances and escape acts of Harry Houdini, for example, regarded him as a magician.

Fantasy magicians have held an important place in literature for centuries, offering entertainment to millions of readers. Famous wizards such as Merlin in the Arthurian legends have been written about since the 5th and 6th centuries, while in the 21st century, the young wizard Harry Potter became a global entertainment phenomenon when the book series about him sold about 450 million copies (as of June 2011), making it the best-selling book series in history.

Street entertainment, street performance, or "busking" are forms of performance that have been meeting the public's need for entertainment for centuries. It was "an integral aspect of London's life", for example, when the city in the early 19th century was "filled with spectacle and diversion". Minstrels or troubadours are part of the tradition. The art and practice of busking is still celebrated at annual busking festivals.

There are three basic forms of contemporary street performance. The first form is the "circle show". It tends to gather a crowd, usually has a distinct beginning and end, and is done in conjunction with street theatre, puppeteering, magicians, comedians, acrobats, jugglers and sometimes musicians. This type has the potential to be the most lucrative for the performer because there are likely to be more donations from larger audiences if they are entertained by the act. Good buskers control the crowd so patrons do not obstruct foot traffic. The second form, the walk-by act, has no distinct beginning or end. Typically, the busker provides an entertaining ambience, often with an unusual instrument, and the audience may not stop to watch or form a crowd. Sometimes a walk-by act spontaneously turns into a circle show. The third form, café busking, is performed mostly in restaurants, pubs, bars and cafés. This type of act occasionally uses public transport as a venue.

Parades are held for a range of purposes, often more than one. Whether their mood is sombre or festive, being public events that are designed to attract attention and activities that necessarily divert normal traffic, parades have a clear entertainment value to their audiences. Cavalcades and the modern variant, the motorcade, are examples of public processions. Some people watching the parade or procession may have made a special effort to attend, while others become part of the audience by happenstance. Whatever their mood or primary purpose, parades attract and entertain people who watch them pass by.
Occasionally, a parade takes place in an improvised theatre space (such as Trooping the Colour in London), and tickets are sold to the physical audience while the global audience participates via broadcast.

One of the earliest forms of parade was the "triumph" – a grand and sensational display of foreign treasures and spoils, given by triumphant Roman generals to celebrate their victories. Triumphs presented conquered peoples and nations, exalting the prestige of the victor. "In the summer of 46 BCE Julius Caesar chose to celebrate four triumphs held on different days extending for about one month." In Europe from the Middle Ages to the Baroque era, the Royal Entry celebrated the formal visit of the monarch to the city with a parade through elaborately decorated streets, passing various shows and displays. The annual Lord Mayor's Show in London is an example of a civic parade that has survived since medieval times.

Many religious festivals (especially those that incorporate processions, such as Holy Week processions or the Indian festival of Holi) have some entertainment appeal in addition to their serious purpose. Sometimes, religious rituals have been adapted or evolved into secular entertainments, or, like the Festa del Redentore in Venice, have managed to grow in popularity while holding both secular and sacred purposes in balance. However, pilgrimages, such as the Roman Catholic pilgrimage of the Way of St. James, the Muslim Hajj and the Hindu Kumbh Mela, which may appear to the outsider as an entertaining parade or procession, are not intended as entertainment: they are instead about an individual's spiritual journey. Hence, the relationship between spectator and participant, unlike in entertainments proper, is different. The manner in which the Kumbh Mela, for example, "is divorced from its cultural context and repackaged for Western consumption ... renders the presence of voyeurs deeply problematic."

Parades generally impress and delight, often by including unusual, colourful costumes. Sometimes they also commemorate or celebrate. Sometimes they have a serious purpose, such as when the context is military, when the intention is sometimes to intimidate; or religious, when the audience might participate or have a role to play. Even if a parade uses new technology and is some distance away, it is likely to have a strong appeal, draw the attention of onlookers and entertain them.

Fireworks are a part of many public entertainments and have retained an enduring popularity since they became a "crowning feature of elaborate celebrations" in the 17th century. First used in China, classical antiquity and Europe for military purposes, fireworks were most popular in the 18th century, and high prices were paid for pyrotechnists, especially the skilled Italian ones, who were summoned to other countries to organise displays. Fire and water were important aspects of court spectacles because the displays "inspired by means of fire, sudden noise, smoke and general magnificence the sentiments thought fitting for the subject to entertain of his sovereign: awe, fear and a vicarious sense of glory in his might. Birthdays, name-days, weddings and anniversaries provided the occasion for celebration." One of the most famous courtly uses of fireworks was the display celebrating the end of the War of the Austrian Succession; although the fireworks themselves caused a fire, the accompanying Music for the Royal Fireworks, written by Handel, has been popular ever since.
Aside from their contribution to entertainments related to military successes, courtly displays and personal celebrations, fireworks are also used as part of religious ceremonies. For example, during the Indian Dashavatara Kala of Gomantaka, "the temple deity is taken around in a procession with a lot of singing, dancing and display of fireworks". The "fire, sudden noise and smoke" of fireworks is still a significant part of public celebration and entertainment. For example, fireworks were one of the primary forms of display chosen to celebrate the turn of the millennium around the world. As the clock struck midnight and 1999 became 2000, firework displays and open-air parties greeted the New Year as the time zones changed over to the next century. Fireworks, carefully planned and choreographed, were let off against the backdrop of many of the world's most famous buildings, including the Sydney Harbour Bridge, the Pyramids of Giza in Egypt, the Acropolis in Athens, Red Square in Moscow, Vatican City in Rome, the Brandenburg Gate in Berlin, the Eiffel Tower in Paris, and Elizabeth Tower in London.

Sporting competitions have always provided entertainment for crowds. To distinguish the players from the audience, the latter are often known as spectators. Developments in stadium and auditorium design, as well as in recording and broadcast technology, have allowed off-site spectators to watch sport, with the result that the size of the audience has grown ever larger and spectator sport has become increasingly popular. Two of the most popular sports with global appeal are association football and cricket. Their ultimate international competitions, the FIFA World Cup and the Cricket World Cup, are broadcast around the world. Beyond the very large numbers involved in playing these sports, they are notable for being a major source of entertainment for many millions of non-players worldwide. A comparable multi-stage, long-form sport with global appeal is the Tour de France, unusual in that it takes place outside of special stadia, being run instead in the countryside.

Aside from sports that have worldwide appeal and competitions, such as the Olympic Games, the entertainment value of a sport depends on the culture and country where people play it. For example, in the United States, baseball and basketball games are popular forms of entertainment; in Bhutan, the national sport is archery; in New Zealand, it is rugby union; in Iran, it is freestyle wrestling. Japan's unique sumo wrestling contains ritual elements that derive from its long history. In some cases, such as the international running group Hash House Harriers, participants create a blend of sport and entertainment for themselves, largely independent of spectator involvement, where the social component is more important than the competitive.

The evolution of an activity into a sport and then an entertainment is also affected by the local climate and conditions. For example, the modern sport of surfing is associated with Hawaii, and that of snow skiing probably evolved in Scandinavia. While these sports and the entertainment they offer to spectators have spread around the world, people in the two originating regions remain well known for their prowess. Sometimes the climate offers a chance to adapt another sport, as in the case of ice hockey – an important entertainment in Canada.
Fairs and exhibitions have existed since ancient and medieval times, displaying wealth, innovations and objects for trade and offering specific entertainments as well as being places of entertainment in themselves. Whether in a medieval market or a small shop, "shopping always offered forms of exhilaration that took one away from the everyday". However, in the modern world, "merchandising has become entertainment: spinning signs, flashing signs, thumping music ... video screens, interactive computer kiosks, day care ... cafés".

By the 19th century, "expos" that encouraged arts, manufactures and commerce had become international. They were not only hugely popular but also affected international ideas. For example, the 1878 Paris Exposition facilitated international cooperation on ideas, innovations and standards. From London 1851 to Paris 1900, "in excess of 200 million visitors had entered the turnstiles in London, Paris, Vienna, Philadelphia, Chicago and a myriad of smaller shows around the world." Since World War II, "well over 500 million visits have been recorded through world expo turnstiles". As a form of spectacle and entertainment, expositions influenced "everything from architecture, to patterns of globalisation, to fundamental matters of human identity" and in the process established the close relationship between "fairs, the rise of department stores and art museums", the modern world of mass consumption and the entertainment industry.

Some entertainments, such as at large festivals (whether religious or secular), concerts, clubs, parties and celebrations, involve big crowds. From earliest times, crowds at an entertainment have brought with them associated hazards and dangers, especially when combined with the recreational consumption of intoxicants such as alcohol. The Ancient Greeks had Dionysian Mysteries, for example, and the Romans had Saturnalia. The combination of excess and crowds can produce breaches of social norms of behaviour, sometimes causing injury or even death, as, for example, at the Altamont Free Concert, an outdoor rock festival. The list of serious incidents at nightclubs includes those caused by stampede; overcrowding; terrorism, such as the 2002 Bali bombings that targeted a nightclub; and especially fire. Investigations, such as that carried out in the US after The Station nightclub fire, often demonstrate that lessons learned "regarding fire safety in nightclubs" from earlier events such as the Cocoanut Grove fire do "not necessarily result in lasting effective change". Efforts to prevent such incidents include appointing special officers, such as the medieval Lord of Misrule or, in modern times, security officers who control access, and the ongoing improvement of relevant standards such as those for building safety. The tourism industry now regards safety and security at entertainment venues as an important management task.

Entertainment is big business, especially in the United States, but ubiquitous in all cultures. Although kings, rulers and powerful people have always been able to pay for entertainment to be provided for them, and in many cases have paid for public entertainment, people generally have made their own entertainment or, when possible, attended a live performance. Technological developments in the 20th century, especially in the area of mass media, meant that entertainment could be produced independently of the audience, packaged and sold on a commercial basis by an entertainment industry.
Sometimes referred to as show business, the industry relies on business models to produce, market, broadcast or otherwise distribute many of its traditional forms, including performances of all types. The industry became so sophisticated that its economics became a separate area of academic study. The film industry is a part of the entertainment industry. Components of it include the Hollywood and Bollywood film industries, as well as the cinema of the United Kingdom and all the cinemas of Europe, including those of France, Germany, Spain, Italy and others. The sex industry is another component of the entertainment industry, applying the same forms and media (for example, film, books, dance and other performances) to the development, marketing and sale of sex products on a commercial basis.

Amusement parks entertain paying guests with rides, such as roller coasters, ridable miniature railways, water rides, and dark rides, as well as other events and associated attractions. The parks are built on a large area subdivided into themed areas named "lands". Sometimes the whole amusement park is based on one theme, such as the various SeaWorld parks that focus on the theme of sea life.

One of the consequences of the development of the entertainment industry has been the creation of new types of employment. While jobs such as writer, musician and composer exist as they always have, people doing this work are likely to be employed by a company rather than by a patron, as they once would have been. New jobs have appeared, such as gaffer or special effects supervisor in the film industry, and attendants in an amusement park. Prestigious awards are given by the industry for excellence in the various types of entertainment: for example, there are awards for music, games (including video games), comics, comedy, theatre, television, film, dance and magic. Sporting awards are made for results and skill rather than for entertainment value.

Purpose-built structures serving as venues for entertainment that accommodate audiences have produced many famous and innovative buildings, among the most recognisable of which are theatre structures. For the ancient Greeks, "the architectural importance of the theatre is a reflection of their importance to the community, made apparent in their monumentality, in the effort put into their design, and in the care put into their detail." The Romans subsequently developed the stadium in an oval form known as a circus. In modern times, some of the grandest buildings for entertainment have brought fame to their cities as well as to their designers. The Sydney Opera House, for example, is a World Heritage Site, and The O₂ in London is an entertainment precinct that contains an indoor arena, a music club, a cinema and exhibition space. The Bayreuth Festspielhaus in Germany is a theatre designed and built for performances of one specific musical composition.

Two of the chief architectural concerns for the design of venues for mass audiences are speed of egress and safety. The speed at which a venue empties is important both for amenity and safety, because large crowds take a long time to disperse from a badly designed venue, which creates a safety risk. The Hillsborough disaster is an example of how poor aspects of building design can contribute to audience deaths. Sightlines and acoustics are also important design considerations in most theatrical venues. In the 21st century, entertainment venues, especially stadia, are "likely to figure among the leading architectural genres".
However, they require "a whole new approach" to design, because they need to be "sophisticated entertainment centres, multi-experience venues, capable of being enjoyed in many diverse ways". Hence, architects now have to design "with two distinct functions in mind, as sports and entertainment centres playing host to live audiences, and as sports and entertainment studios serving the viewing and listening requirements of the remote audience".

Architects who push the boundaries of design or construction sometimes create buildings that are entertaining because they exceed the expectations of the public and the client and are aesthetically outstanding. Buildings such as the Guggenheim Museum Bilbao, designed by Frank Gehry, are of this type, becoming a tourist attraction as well as a significant international museum. Other apparently usable buildings are really follies, deliberately constructed for a decorative purpose and never intended to be practical.

On the other hand, sometimes architecture is entertainment while pretending to be functional. The tourism industry, for example, creates or renovates buildings as "attractions" that have either never been used or can never be used for their ostensible purpose. They are instead re-purposed to entertain visitors, often by simulating cultural experiences. Buildings, history and sacred spaces are thus made into commodities for purchase. Such intentional tourist attractions divorce buildings from the past so that "the difference between historical authenticity and contemporary entertainment venues/theme parks becomes hard to define". Examples include "the preservation of the Alcázar of Toledo, with its grim Civil War History, the conversion of slave dungeons into tourist attractions in Ghana, [such as, for example, Cape Coast Castle] and the presentation of indigenous culture in Libya". The specially constructed buildings in amusement parks represent the park's theme and are usually neither authentic nor completely functional.

By the second half of the 20th century, developments in electronic media made possible the delivery of entertainment products to mass audiences across the globe. The technology enabled people to see, hear and participate in all the familiar forms – stories, theatre, music, dance – wherever they live. The rapid development of entertainment technology was assisted by improvements in data storage devices such as cassette tapes or compact discs, along with increasing miniaturisation. Computerisation and the development of barcodes also made ticketing easier, faster and global.

In the 1940s, radio was the electronic medium for family entertainment and information. In the 1950s, it was television that was the new medium, and it rapidly became global, bringing visual entertainment, first in black and white, then in colour, to the world. By the 1970s, games could be played electronically; then hand-held devices provided mobile entertainment; and by the last decade of the 20th century, games could also be played via networked play. In combination with products from the entertainment industry, all the traditional forms of entertainment became available personally. People could not only select an entertainment product such as a piece of music, film or game, but could also choose the time and place to use it. The "proliferation of portable media players and the emphasis on the computer as a site for film consumption" together have significantly changed how audiences encounter films.
One of the most notable consequences of the rise of electronic entertainment has been the rapid obsolescence of the various recording and storage methods. As an example of the speed of change driven by electronic media, over the course of one generation, television as a medium for receiving standardised entertainment products went from unknown, to novel, to ubiquitous and finally to superseded. One estimate was that by 2011 over 30 percent of households in the US would own a Wii console, "about the same percentage that owned a television in 1953". Some expected that, halfway through the second decade of the 21st century, online entertainment would have completely replaced television – which did not happen. The so-called "digital revolution" has produced an increasingly transnational marketplace that has caused difficulties for governments, business, industries, and individuals as they all try to keep up. Even the sports stadium of the future will increasingly compete with television viewing "... in terms of comfort, safety and the constant flow of audio-visual information and entertainment available." Other flow-on effects of the shift are likely to include those on public architecture such as hospitals and nursing homes, where television, regarded as an essential entertainment service for patients and residents, will need to be replaced by access to the internet. At the same time, the ongoing need for entertainers as "professional engagers" shows the continuity of traditional entertainment.

By the second decade of the 21st century, analogue recording was being replaced by digital recording and all forms of electronic entertainment began to converge. For example, convergence is challenging standard practices in the film industry: whereas "success or failure used to be determined by the first weekend of its run. Today, ... a series of exhibition 'windows', such as DVD, pay-per-view, and fibre-optic video-on-demand are used to maximise profits." Part of the industry's adjustment is its release of new commercial product directly via video hosting services. Media convergence is said to be more than technological: the convergence is cultural as well. It is also "the result of a deliberate effort to protect the interests of business entities, policy institutions and other groups". Globalisation and cultural imperialism are two of the cultural consequences of convergence. Others include fandom and interactive storytelling, as well as the way that single franchises are distributed through, and affect, a range of delivery methods. The "greater diversity in the ways that signals may be received and packaged for the viewer, via terrestrial, satellite or cable television, and of course, via the Internet" also affects entertainment venues, such as sports stadia, which now need to be designed so that both live and remote audiences can interact in increasingly sophisticated ways – for example, audiences can "watch highlights, call up statistics", "order tickets and merchandise" and generally "tap into the stadium's resources at any time of the day or night".

The introduction of television altered the availability, cost, variety and quality of entertainment products for the public, and the convergence of online entertainment is having a similar effect. For example, the possibility and popularity of user-generated content, as distinct from commercial product, creates a "networked audience model [that] makes programming obsolete".
Individuals and corporations use video hosting services to broadcast content that is equally accepted by the public as legitimate entertainment. While technology increases the demand for entertainment products and the speed of their delivery, the forms that make up the content are, in themselves, relatively stable: storytelling, music, theatre, dance and games are recognisably the same as in earlier centuries.
[ { "paragraph_id": 0, "text": "Entertainment is a form of activity that holds the attention and interest of an audience or gives pleasure and delight. It can be an idea or a task, but it is more likely to be one of the activities or events that have developed over thousands of years specifically for the purpose of keeping an audience's attention.", "title": "" }, { "paragraph_id": 1, "text": "Although people's attention is held by different things because individuals have different preferences, most forms of entertainment are recognisable and familiar. Storytelling, music, drama, dance, and different kinds of performance exist in all cultures, were supported in royal courts, and developed into sophisticated forms over time, becoming available to all citizens. The process has been accelerated in modern times by an entertainment industry that records and sells entertainment products. Entertainment evolves and can be adapted to suit any scale, ranging from an individual who chooses private entertainment from a now enormous array of pre-recorded products, to a banquet adapted for two, to any size or type of party with appropriate music and dance, to performances intended for thousands, and even for a global audience.", "title": "" }, { "paragraph_id": 2, "text": "The experience of being entertained has come to be strongly associated with amusement, so that one common understanding of the idea is fun and laughter, although many entertainments have a serious purpose. This may be the case in various forms of ceremony, celebration, religious festival, or satire, for example. Hence, there is the possibility that what appears to be entertainment may also be a means of achieving insight or intellectual growth.", "title": "" }, { "paragraph_id": 3, "text": "An important aspect of entertainment is the audience, which turns a private recreation or leisure activity into entertainment. The audience may have a passive role, as in the case of people watching a play, opera, television show, or film; or the audience role may be active, as in the case of games, where the participant and audience roles may be routinely reversed. Entertainment can be public or private, involving formal, scripted performances, as in the case of theatre or concerts, or unscripted and spontaneous, as in the case of children's games. Most forms of entertainment have persisted over many centuries, evolving due to changes in culture, technology, and fashion, as with stage magic. Films and video games, although they use newer media, continue to tell stories, present drama, and play music. Festivals devoted to music, film, or dance allow audiences to be entertained over a number of consecutive days.", "title": "" }, { "paragraph_id": 4, "text": "Some entertainment, such as public executions, is now illegal in most countries. Activities such as fencing or archery, once used in hunting or war, have become spectator sports. In the same way, other activities, such as cooking, have developed into performances among professionals, staged as global competitions, and then broadcast for entertainment. What is entertainment for one group or individual may be regarded as work or an act of cruelty by another.", "title": "" }, { "paragraph_id": 5, "text": "The familiar forms of entertainment have the capacity to cross over into different media and have demonstrated a seemingly unlimited potential for creative remix. 
This has ensured the continuity and longevity of many themes, images, and structures.", "title": "" }, { "paragraph_id": 6, "text": "The Oxford English Dictionary gives Latin and French origins for the word \"entertain\", including inter (among) + tenir (to hold) as derivations, giving translations of \"to hold mutually\" or \"to hold intertwined\" and \"to engage, keep occupied, the attention, thoughts, or time (of a person)\". It also provides words like \"merry-making\", \"pleasure\", and \"delight\", as well as \"to receive as a guest and show hospitality to\". It cites a 1490 usage by William Caxton.", "title": "Etymology" }, { "paragraph_id": 7, "text": "Entertainment can be distinguished from other activities such as education and marketing even though they have learned how to use the appeal of entertainment to achieve their different goals. Sometimes entertainment can be a mixture for both. The importance and impact of entertainment is recognised by scholars and its increasing sophistication has influenced practices in other fields such as museology.", "title": "Psychology and philosophy" }, { "paragraph_id": 8, "text": "Psychologists say the function of media entertainment is \"the attainment of gratification\". No other results or measurable benefits are usually expected from it (except perhaps the final score in a sporting entertainment). This is in contrast to education (which is designed with the purpose of developing understanding or helping people to learn) and marketing (which aims to encourage people to purchase commercial products). However, the distinctions become blurred when education seeks to be more \"entertaining\" and entertainment or marketing seek to be more \"educational\". Such mixtures are often known by the neologisms \"edutainment\" or \"infotainment\". The psychology of entertainment as well as of learning has been applied to all these fields. Some education-entertainment is a serious attempt to combine the best features of the two. Some people are entertained by others' pain or the idea of their unhappiness (schadenfreude).", "title": "Psychology and philosophy" }, { "paragraph_id": 9, "text": "An entertainment might go beyond gratification and produce some insight in its audience. Entertainment may skilfully consider universal philosophical questions such as: \"What does it mean to be human?\"; \"What is the right thing to do?\"; or \"How do I know what I know?\". \"The meaning of life\", for example, is the subject in a wide range of entertainment forms, including film, music and literature. Questions such as these drive many narratives and dramas, whether they are presented in the form of a story, film, play, poem, book, dance, comic, or game. Dramatic examples include Shakespeare's influential play Hamlet, whose hero articulates these concerns in poetry; and films, such as The Matrix, which explores the nature of knowledge and was released worldwide. Novels give great scope for investigating these themes while they entertain their readers. An example of a creative work that considers philosophical questions so entertainingly that it has been presented in a very wide range of forms is The Hitchhiker's Guide to the Galaxy. Originally a radio comedy, this story became so popular that it has also appeared as a novel, film, television series, stage show, comic, audiobook, LP record, adventure game and online game, its ideas became popular references (see Phrases from The Hitchhiker's Guide to the Galaxy) and has been translated into many languages. 
Its themes encompass the meaning of life, as well as \"the ethics of entertainment, artificial intelligence, multiple worlds, God, and philosophical method\".", "title": "Psychology and philosophy" }, { "paragraph_id": 10, "text": "The \"ancient craft of communicating events and experiences, using words, images, sounds and gestures\" by telling a story is not only the means by which people passed on their cultural values and traditions and history from one generation to another, it has been an important part of most forms of entertainment ever since the earliest times. Stories are still told in the early forms, for example, around a fire while camping, or when listening to the stories of another culture as a tourist. \"The earliest storytelling sequences we possess, now of course, committed to writing, were undoubtedly originally a speaking from mouth to ear and their force as entertainment derived from the very same elements we today enjoy in films and novels.\" Storytelling is an activity that has evolved and developed \"toward variety\". Many entertainments, including storytelling but especially music and drama, remain familiar but have developed into a wide variety of form to suit a very wide range of personal preferences and cultural expression. Many types are blended or supported by other forms. For example, drama, stories and banqueting (or dining) are commonly enhanced by music; sport and games are incorporated into other activities to increase appeal. Some may have evolved from serious or necessary activities (such as running and jumping) into competition and then become entertainment. It is said, for example, that pole vaulting \"may have originated in the Netherlands, where people used long poles to vault over wide canals rather than wear out their clogs walking miles to the nearest bridge. Others maintain that pole vaulting was used in warfare to vault over fortress walls during battle.\" The equipment for such sports has become increasingly sophisticated. Vaulting poles, for example, were originally made from woods such as ash, hickory or hazel; in the 19th century bamboo was used and in the 21st century poles can be made of carbon fibre. Other activities, such as walking on stilts, are still seen in circus performances in the 21st century. Gladiatorial combats, also known as \"gladiatorial games\", popular during Roman times, provide a good example of an activity that is a combination of sport, punishment, and entertainment.", "title": "History" }, { "paragraph_id": 11, "text": "Changes to what is regarded as entertainment can occur in response to cultural or historical shifts. Hunting wild animals, for example, was introduced into the Roman Empire from Carthage and became a popular public entertainment and spectacle, supporting an international trade in wild animals.", "title": "History" }, { "paragraph_id": 12, "text": "Entertainment also evolved into different forms and expressions as a result of social upheavals such as wars and revolutions. During the Chinese Cultural Revolution, for example, Revolutionary opera was sanctioned by the Communist party and World War I, the Great Depression and the Russian Revolution all affected entertainment.", "title": "History" }, { "paragraph_id": 13, "text": "Relatively minor changes to the form and venue of an entertainment continue to come and go as they are affected by the period, fashion, culture, technology, and economics. 
For example, a story told in dramatic form can be presented in an open-air theatre, a music hall, a movie theatre, a multiplex, or as technological possibilities advanced, via a personal electronic device such as a tablet computer. Entertainment is provided for mass audiences in purpose-built structures such as a theatre, auditorium, or stadium. One of the most famous venues in the Western world, the Colosseum, \"dedicated AD 80 with a hundred days of games, held fifty thousand spectators,\" and in it audiences \"enjoyed blood sport with the trappings of stage shows\". Spectacles, competitions, races, and sports were once presented in this purpose-built arena as public entertainment. New stadia continue to be built to suit the ever more sophisticated requirements of global audiences.", "title": "History" }, { "paragraph_id": 14, "text": "Imperial and royal courts have provided training grounds and support for professional entertainers, with different cultures using palaces, castles and forts in different ways. In the Maya city states, for example, \"spectacles often took place in large plazas in front of palaces; the crowds gathered either there or in designated places from which they could watch at a distance.\" Court entertainments also crossed cultures. For example, the durbar was introduced to India by the Mughals, and passed onto the British Empire, which then followed Indian tradition: \"institutions, titles, customs, ceremonies by which a Maharaja or Nawab were installed ... the exchange of official presents ... the order of precedence\", for example, were \"all inherited from ... the Emperors of Delhi\". In Korea, the \"court entertainment dance\" was \"originally performed in the palace for entertainment at court banquets.\"", "title": "History" }, { "paragraph_id": 15, "text": "Court entertainment often moved from being associated with the court to more general use among commoners. This was the case with \"masked dance-dramas\" in Korea, which \"originated in conjunction with village shaman rituals and eventually became largely an entertainment form for commoners\". Nautch dancers in the Mughal Empire performed in Indian courts and palaces. Another evolution, similar to that from courtly entertainment to common practice, was the transition from religious ritual to secular entertainment, such as happened during the Goryeo dynasty with the Narye festival. Originally \"solely religious or ritualistic, a secular component was added at the conclusion\". Former courtly entertainments, such as jousting, often also survived in children's games.", "title": "History" }, { "paragraph_id": 16, "text": "In some courts, such as those during the Byzantine Empire, the genders were segregated among the upper classes, so that \"at least before the period of the Komnenoi\" (1081–1185) men were separated from women at ceremonies where there was entertainment such as receptions and banquets.", "title": "History" }, { "paragraph_id": 17, "text": "Court ceremonies, palace banquets and the spectacles associated with them, have been used not only to entertain but also to demonstrate wealth and power. Such events reinforce the relationship between ruler and ruled; between those with power and those without, serving to \"dramatise the differences between ordinary families and that of the ruler\". 
This is the case as much as for traditional courts as it is for contemporary ceremonials, such as the Hong Kong handover ceremony in 1997, at which an array of entertainments (including a banquet, a parade, fireworks, a festival performance and an art spectacle) were put to the service of highlighting a change in political power. Court entertainments were typically performed for royalty and courtiers as well as \"for the pleasure of local and visiting dignitaries\". Royal courts, such as the Korean one, also supported traditional dances. In Sudan, musical instruments such as the so-called \"slit\" or \"talking\" drums, once \"part of the court orchestra of a powerful chief\", had multiple purposes: they were used to make music; \"speak\" at ceremonies; mark community events; send long-distance messages; and call men to hunt or war.", "title": "History" }, { "paragraph_id": 18, "text": "Courtly entertainments also demonstrate the complex relationship between entertainer and spectator: individuals may be either an entertainer or part of the audience, or they may swap roles even during the course of one entertainment. In the court at the Palace of Versailles, \"thousands of courtiers, including men and women who inhabited its apartments, acted as both performers and spectators in daily rituals that reinforced the status hierarchy\".", "title": "History" }, { "paragraph_id": 19, "text": "Like court entertainment, royal occasions such as coronations and weddings provided opportunities to entertain both the aristocracy and the people. For example, the splendid 1595 Accession Day celebrations of Queen Elizabeth I offered tournaments and jousting and other events performed \"not only before the assembled court, in all their finery, but also before thousands of Londoners eager for a good day's entertainment. Entry for the day's events at the Tiltyard in Whitehall was set at 12d\".", "title": "History" }, { "paragraph_id": 20, "text": "Although most forms of entertainment have evolved and continued over time, some once-popular forms are no longer as acceptable. For example, during earlier centuries in Europe, watching or participating in the punishment of criminals or social outcasts was an accepted and popular form of entertainment. Many forms of public humiliation also offered local entertainment in the past. Even capital punishment such as hanging and beheading, offered to the public as a warning, were also regarded partly as entertainment. Capital punishments that lasted longer, such as stoning and drawing and quartering, afforded a greater public spectacle. \"A hanging was a carnival that diverted not merely the unemployed but the unemployable. Good bourgeois or curious aristocrats who could afford it watched it from a carriage or rented a room.\" Public punishment as entertainment lasted until the 19th century by which time \"the awesome event of a public hanging aroused the[ir] loathing of writers and philosophers\". Both Dickens and Thackeray wrote about a hanging in Newgate Prison in 1840, and \"taught an even wider public that executions are obscene entertainments\".", "title": "History" }, { "paragraph_id": 21, "text": "Children's entertainment is centred on play and is significant for their growth. 
It often mimics adult activities, such as watching performances (on television); prepares them for adult responsibilities, such as child rearing or social interaction (through dolls, pets and group games); or develops skills such as motor skills (such as a game of marbles), needed for sports and music. In the modern day, it often involves sedentary engagement with television or tablet computer.", "title": "Children" }, { "paragraph_id": 22, "text": "Entertainment is also provided to children or taught to them by adults and many activities that appeal to them such as puppets, clowns, pantomimes and cartoons are also enjoyed by adults.", "title": "Children" }, { "paragraph_id": 23, "text": "Children have always played games. It is accepted that as well as being entertaining, playing games helps children's development. One of the most famous visual accounts of children's games is a painting by Pieter Bruegel the Elder called Children's Games, painted in 1560. It depicts children playing a range of games that presumably were typical of the time. Many of these games, such as marbles, hide-and-seek, blowing soap bubbles and piggyback riding continue to be played.", "title": "Children" }, { "paragraph_id": 24, "text": "Most forms of entertainment can be or are modified to suit children's needs and interests. During the 20th century, starting with the often criticised but nonetheless important work of G. Stanley Hall, who \"promoted the link between the study of development and the 'new' laboratory psychology\", and especially with the work of Jean Piaget, who \"saw cognitive development as being analogous to biological development\", it became understood that the psychological development of children occurs in stages and that their capacities differ from adults. Hence, stories and activities, whether in books, film, or video games were developed specifically for child audiences. Countries have responded to the special needs of children and the rise of digital entertainment by developing systems such as television content rating systems, to guide the public and the entertainment industry.", "title": "Children" }, { "paragraph_id": 25, "text": "In the 21st century, as with adult products, much entertainment is available for children on the internet for private use. This constitutes a significant change from earlier times. The amount of time expended by children indoors on screen-based entertainment and the \"remarkable collapse of children's engagement with nature\" has drawn criticism for its negative effects on imagination, adult cognition and psychological well-being.", "title": "Children" }, { "paragraph_id": 26, "text": "Banquets have been a venue for amusement, entertainment or pleasure since ancient times, continuing into the modern era. until the 21st century when they are still being used for many of their original purposes – to impress visitors, especially important ones; to show hospitality; as an occasion to showcase supporting entertainments such as music or dancing, or both. They were an integral part of court entertainments and helped entertainers develop their skills. They are also important components of celebrations such as coronations, weddings, birthdays civic or political achievements, military engagements or victories as well as religious obligations, one of the most famous being the Banqueting House, Whitehall in London. In modern times, banquets are available privately, or commercially in restaurants, sometimes combined with a dramatic performance in dinner theatres. 
Cooking by professional chefs has also become a form of entertainment as part of global competitions such as the Bocuse d'Or.", "title": "Forms" }, { "paragraph_id": 27, "text": "Music is a supporting component of many kinds of entertainment and most kinds of performance. For example, it is used to enhance storytelling, it is indispensable in dance and opera, and is usually incorporated into dramatic film or theatre productions.", "title": "Forms" }, { "paragraph_id": 28, "text": "Music is also a universal and popular type of entertainment on its own, constituting an entire performance such as when concerts are given. Depending on the rhythm, instrument, performance and style, music is divided into many genres, such as classical, jazz, folk, rock, pop music or traditional. Since the 20th century, performed music, once available only to those who could pay for the performers, has been available cheaply to individuals by the entertainment industry, which broadcasts it or pre-records it for sale.", "title": "Forms" }, { "paragraph_id": 29, "text": "The wide variety of musical performances, whether or not they are artificially amplified, all provide entertainment irrespective of whether the performance is from soloists, choral or orchestral groups, or ensemble. Live performances use specialised venues, which might be small or large; indoors or outdoors; free or expensive. The audiences have different expectations of the performers as well as of their own role in the performance. For example, some audiences expect to listen silently and are entertained by the excellence of the music, its rendition or its interpretation. Other audiences of live performances are entertained by the ambience and the chance to participate. Even more listeners are entertained by pre-recorded music and listen privately.", "title": "Forms" }, { "paragraph_id": 30, "text": "The instruments used in musical entertainment are either solely the human voice or solely instrumental or some combination of the two. Whether the performance is given by vocalists or instrumentalists, the performers may be soloists or part of a small or large group, in turn entertaining an audience that might be individual, passing by, small or large. Singing is generally accompanied by instruments although some forms, notably a cappella and overtone singing, are unaccompanied. Modern concerts often use various special effects and other theatrics to accompany performances of singing and dancing.", "title": "Forms" }, { "paragraph_id": 31, "text": "Games are played for entertainment – sometimes purely for recreation, sometimes for achievement or reward as well. They can be played alone, in teams, or online; by amateurs or by professionals. The players may have an audience of non-players, such as when people are entertained by watching a chess championship. On the other hand, players in a game may constitute their own audience as they take their turn to play. Often, part of the entertainment for children playing a game is deciding who is part of their audience and who is a player.", "title": "Forms" }, { "paragraph_id": 32, "text": "Equipment varies with the game. Board games, such as Go, Monopoly or backgammon need a board and markers. One of the oldest known board games is Senet, a game played in Ancient Egypt, enjoyed by the pharaoh Tutankhamun. Card games, such as whist, poker and Bridge have long been played as evening entertainment among friends. For these games, all that is needed is a deck of playing cards. 
Other games, such as bingo, played with numerous strangers, have been organised to involve the participation of non-players via gambling. Many are geared for children, and can be played outdoors, including hopscotch, hide and seek, or Blind man's bluff. The list of ball games is quite extensive. It includes, for example, croquet, lawn bowling and paintball as well as many sports using various forms of balls. The options cater to a wide range of skill and fitness levels. Physical games can develop agility and competence in motor skills. Number games such as Sudoku and puzzle games like the Rubik's cube can develop mental prowess.", "title": "Forms" }, { "paragraph_id": 33, "text": "Video games are played using a controller to create results on a screen. They can also be played online with participants joining in remotely. In the second half of the 20th century and in the 21st century the number of such games increased enormously, providing a wide variety of entertainment to players around the world. Video games are popular across the world.", "title": "Forms" }, { "paragraph_id": 34, "text": "French poet Louise Labé (1520/1522–1566) wrote \"a profound and timeless insight into reading's innate power\".", "title": "Forms" }, { "paragraph_id": 35, "text": "The past gives us pleasure and is of more service than the present; but the delight of what we once felt is dimly lost never to return and its memory is as distressing as the events themselves were then delectable ... But when we happen to put our thoughts in writing, how easily, later on, does our mind race through an infinity of events, incessantly alive, so that a long time afterwards when we take up those written pages we can return to the same place and to the same disposition in which we once found ourselves. quote from and commentary by Fischer (2003)", "title": "Forms" }, { "paragraph_id": 36, "text": "The young Saint Teresa of Ávila (1515–1582) read chivalrous novels and wrote about the \"rapture\" that books provided.", "title": "Forms" }, { "paragraph_id": 37, "text": "I became accustomed to reading [novels] and that small fault made me cool my desire and will to do other tasks. I thought nothing of spending many hours a day and night in this vain exercise, hidden from my father. My rapture in this was so great, that unless I had a new book to read, it seemed to me that I could not be happy. quoted in Fischer (2003)", "title": "Forms" }, { "paragraph_id": 38, "text": "Reading has been a source of entertainment for a very long time, especially when other forms, such as performance entertainments, were (or are) either unavailable or too costly. Even when the primary purpose of the writing is to inform or instruct, reading is well known for its capacity to distract from everyday worries. Both stories and information have been passed on through the tradition of orality and oral traditions survive in the form of performance poetry for example. However, they have drastically declined. \"Once literacy had arrived in strength, there was no return to the oral prerogative.\" The advent of printing, the reduction in costs of books and an increasing literacy all served to enhance the mass appeal of reading. Furthermore, as fonts were standardised and texts became clearer, \"reading ceased being a painful process of decipherment and became an act of pure pleasure\". 
By the 16th century in Europe, the appeal of reading for entertainment was well established.", "title": "Forms" }, { "paragraph_id": 39, "text": "Among literature's many genres are some designed, in whole or in part, purely for entertainment. Limericks, for example, use verse in a strict, predictable rhyme and rhythm to create humour and to amuse an audience of listeners or readers. Interactive books such as \"choose your own adventure\" can make literary entertainment more participatory.", "title": "Forms" }, { "paragraph_id": 40, "text": "Comics and editorial cartoons are literary genres that use drawings or graphics, usually in combination with text, to convey an entertaining narrative. Many contemporary comics have elements of fantasy and are produced by companies that are part of the entertainment industry. Others have unique authors who offer a more personal, philosophical view of the world and the problems people face. Comics about superheroes such as Superman are of the first type. Examples of the second sort include the individual work over 50 years of Charles M. Schulz who produced a popular comic called Peanuts about the relationships among a cast of child characters; and Michael Leunig who entertains by producing whimsical cartoons that also incorporate social criticism. The Japanese Manga style differs from the western approach in that it encompasses a wide range of genres and themes for a readership of all ages. Caricature uses a kind of graphic entertainment for purposes ranging from merely putting a smile on the viewer's face, to raising social awareness, to highlighting the moral characteristics of a person being caricatured.", "title": "Forms" }, { "paragraph_id": 41, "text": "Comedy is both a genre of entertainment and a component of it, providing laughter and amusement, whether the comedy is the sole purpose or used as a form of contrast in an otherwise serious piece. It is a valued contributor to many forms of entertainment, including in literature, theatre, opera, film and games. In royal courts, such as in the Byzantine court, and presumably, also in its wealthy households, \"mimes were the focus of orchestrated humour, expected or obliged to make fun of all at court, not even excepting the emperor and members of the imperial family. This highly structured role of jester consisted of verbal humour, including teasing, jests, insult, ridicule, and obscenity and non-verbal humour such as slapstick and horseplay in the presence of an audience.\" In medieval times, all comic types – the buffoon, jester, hunchback, dwarf, jokester, were all \"considered to be essentially of one comic type: the fool\", who while not necessarily funny, represented \"the shortcomings of the individual\".", "title": "Forms" }, { "paragraph_id": 42, "text": "Shakespeare wrote seventeen comedies that incorporate many techniques still used by performers and writers of comedy – such as jokes, puns, parody, wit, observational humor, or the unexpected effect of irony. One-liner jokes and satire are also used to comedic effect in literature. In farce, the comedy is a primary purpose.", "title": "Forms" }, { "paragraph_id": 43, "text": "The meaning of the word \"comedy\" and the audience's expectations of it have changed over time and vary according to culture. Simple physical comedy such as slapstick is entertaining to a broad range of people of all ages. 
However, as cultures become more sophisticated, national nuances appear in the style and references so that what is amusing in one culture may be unintelligible in another.", "title": "Forms" }, { "paragraph_id": 44, "text": "Live performances before an audience constitute a major form of entertainment, especially before the invention of audio and video recording. Performance takes a wide range of forms, including theatre, music and drama. In the 16th and 17th centuries, European royal courts presented masques that were complex theatrical entertainments involving dancing, singing and acting. Opera is a similarly demanding performance style that remains popular. It also encompass all three forms, demanding a high level of musical and dramatic skill, collaboration and like the masque, production expertise as well.", "title": "Forms" }, { "paragraph_id": 45, "text": "Audiences generally show their appreciation of an entertaining performance with applause. However, all performers run the risk of failing to hold their audience's attention and thus, failing to entertain. Audience dissatisfaction is often brutally honest and direct.", "title": "Forms" }, { "paragraph_id": 46, "text": "\"Of course you all ought to know that while singing a good song or, or giving a good recitation ... helps to arrest the company's attention ... Such at least was the case with me – the publican devised a plan to bring my entertainment to an end abruptly, and the plan was, he told the waiter to throw a wet towel at me, which, of course, the waiter did ... and I received the wet towel, full force, in the face, which staggered me ... and had the desired effect of putting an end to me giving any more entertainments in the house.\" William McGonagall (Performance artist and poet)", "title": "Forms" }, { "paragraph_id": 47, "text": "Storytelling is an ancient form of entertainment that has influenced almost all other forms. It is \"not only entertainment, it is also thinking through human conflicts and contradictions\". Hence, although stories may be delivered directly to a small listening audience, they are also presented as entertainment and used as a component of any piece that relies on a narrative, such as film, drama, ballet, and opera. Written stories have been enhanced by illustrations, often to a very high artistic standard, for example, on illuminated manuscripts and on ancient scrolls such as Japanese ones. Stories remain a common way of entertaining a group that is on a journey. Showing how stories are used to pass the time and entertain an audience of travellers, Chaucer used pilgrims in his literary work The Canterbury Tales in the 14th century, as did Wu Cheng'en in the 16th century in Journey to the West. Even though journeys can now be completed much faster, stories are still told to passengers en route in cars and aeroplanes either orally or delivered by some form of technology.", "title": "Forms" }, { "paragraph_id": 48, "text": "The power of stories to entertain is evident in one of the most famous ones – Scheherazade – a story in the Persian professional storytelling tradition, of a woman who saves her own life by telling stories. The connections between the different types of entertainment are shown by the way that stories like this inspire a retelling in another medium, such as music, film or games. 
For example, composers Rimsky-Korsakov, Ravel and Szymanowski have each been inspired by the Scheherazade story and turned it into an orchestral work; director Pasolini made a film adaptation; and there is an innovative video game based on the tale. Stories may be told wordlessly, in music, dance or puppetry for example, such as in the Javanese tradition of wayang, in which the performance is accompanied by a gamelan orchestra or the similarly traditional Punch and Judy show.", "title": "Forms" }, { "paragraph_id": 49, "text": "Epic narratives, poems, sagas and allegories from all cultures tell such gripping tales that they have inspired countless other stories in all forms of entertainment. Examples include the Hindu Ramayana and Mahabharata; Homer's Odyssey and Iliad; the first Arabic novel Hayy ibn Yaqdhan; the Persian epic Shahnameh; the Sagas of Icelanders and the celebrated Tale of the Genji. Collections of stories, such as Grimms' Fairy Tales or those by Hans Christian Andersen, have been similarly influential. Originally published in the early 19th century, this collection of folk stories significantly influence modern popular culture, which subsequently used its themes, images, symbols, and structural elements to create new entertainment forms.", "title": "Forms" }, { "paragraph_id": 50, "text": "Some of the most powerful and long-lasting stories are the foundation stories, also called origin or creation myths such as the Dreamtime myths of the Australian aborigines, the Mesopotamian Epic of Gilgamesh, or the Hawaiian stories of the origin of the world. These too are developed into books, films, music and games in a way that increases their longevity and enhances their entertainment value.", "title": "Forms" }, { "paragraph_id": 51, "text": "Theatre performances, typically dramatic or musical, are presented on a stage for an audience and have a history that goes back to Hellenistic times when \"leading musicians and actors\" performed widely at \"poetical competitions\", for example at \"Delphi, Delos, Ephesus\". Aristotle and his teacher Plato both wrote on the theory and purpose of theatre. Aristotle posed questions such as \"What is the function of the arts in shaping character? Should a member of the ruling class merely watch performances or be a participant and perform? What kind of entertainment should be provided for those who do not belong to the elite?\" The \"Ptolemys in Egypt, the Seleucids in Pergamum\" also had a strong theatrical tradition and later, wealthy patrons in Rome staged \"far more lavish productions\".", "title": "Forms" }, { "paragraph_id": 52, "text": "Expectations about the performance and their engagement with it have changed over time. For example, in England during the 18th century, \"the prejudice against actresses had faded\" and in Europe generally, going to the theatre, once a socially dubious activity, became \"a more respectable middle-class pastime\" in the late 19th and early 20th centuries, when the variety of popular entertainments increased. Operetta and music halls became available, and new drama theatres such as the Moscow Art Theatre and the Suvorin Theatre in Russia opened. At the same time, commercial newspapers \"began to carry theatre columns and reviews\" that helped make theatre \"a legitimate subject of intellectual debate\" in general discussions about art and culture. 
Audiences began to gather to \"appreciate creative achievement, to marvel at, and be entertained by, the prominent 'stars'.\" Vaudeville and music halls, popular at this time in the United States, England, Canada, Australia and New Zealand, were themselves eventually superseded.", "title": "Forms" }, { "paragraph_id": 53, "text": "Plays, musicals, monologues, pantomimes, and performance poetry are part of the very long history of theatre, which is also the venue for the type of performance known as stand-up comedy. In the 20th century, radio and television, often broadcast live, extended the theatrical tradition that continued to exist alongside the new forms.", "title": "Forms" }, { "paragraph_id": 54, "text": "The stage and the spaces set out in front of it for an audience create a theatre. All types of stage are used with all types of seating for the audience, including the impromptu or improvised; the temporary; the elaborate; or the traditional and permanent. They are erected indoors or outdoors. The skill of managing, organising and preparing the stage for a performance is known as stagecraft. The audience's experience of the entertainment is affected by their expectations, the stagecraft, the type of stage, and the type and standard of seating provided.", "title": "Forms" }, { "paragraph_id": 55, "text": "Films are a major form of entertainment, although not all films have entertainment as their primary purpose: documentary film, for example, aims to create a record or inform, although the two purposes often work together. The medium was a global business from the beginning: \"The Lumière brothers were the first to send cameramen throughout the world, instructing them to film everything which could be of interest for the public.\" In 1908, Pathé launched and distributed newsreels and by World War I, films were meeting an enormous need for mass entertainment. \"In the first decade of the [20th] century cinematic programmes combined, at random, fictions and newsfilms.\" The Americans first \"contrived a way of producing an illusion of motion through successive images,\" but \"the French were able to transform a scientific principle into a commercially lucrative spectacle\". Film therefore became a part of the entertainment industry from its early days. Increasingly sophisticated techniques have been used in the film medium to delight and entertain audiences. Animation, for example, which involves the display of rapid movement in an art work, is one of these techniques that particularly appeals to younger audiences. The advent of computer-generated imagery (CGI) in the 21st century made it \"possible to do spectacle\" more cheaply and \"on a scale never dreamed of\" by Cecil B. DeMille. From the 1930s to 1950s, movies and radio were the \"only mass entertainment\" but by the second decade of the 21st century, technological changes, economic decisions, risk aversion and globalisation reduced both the quality and range of films being produced. Sophisticated visual effects and CGI techniques, for example, rather than humans, were used not only to create realistic images of people, landscapes and events (both real and fantastic) but also to animate non-living items such as Lego normally used as entertainment as a game in physical form. 
Creators of The Lego Movie \"wanted the audience to believe they were looking at actual Lego bricks on a tabletop that were shot with a real camera, not what we actually did, which was create vast environments with digital bricks inside the computer.\" The convergence of computers and film has allowed entertainment to be presented in a new way and the technology has also allowed for those with the personal resources to screen films in a home theatre, recreating in a private venue the quality and experience of a public theatre. This is similar to the way that the nobility in earlier times could stage private musical performances or the use of domestic theatres in large homes to perform private plays in earlier centuries.", "title": "Forms" }, { "paragraph_id": 56, "text": "Films also re-imagine entertainment from other forms, turning stories, books and plays, for example, into new entertainments. The Story of Film, a documentary about the history of film, gives a survey of global achievements and innovations in the medium, as well as changes in the conception of film-making. It demonstrates that while some films, particularly those in the Hollywood tradition that combines \"realism and melodramatic romanticism\", are intended as a form of escapism, others require a deeper engagement or more thoughtful response from their audiences. For example, the award-winning Senegalese film Xala takes government corruption as its theme. Charlie Chaplin's film The Great Dictator was a brave and innovative parody, also on a political theme. Stories that are thousands of years old, such as Noah, have been re-interpreted in film, applying familiar literary devices such as allegory and personification with new techniques such as CGI to explore big themes such as \"human folly\", good and evil, courage and despair, love, faith, and death – themes that have been a main-stay of entertainment across all its forms.", "title": "Forms" }, { "paragraph_id": 57, "text": "As in other media, excellence and achievement in films is recognised through a range of awards, including ones from the American Academy of Motion Picture Arts and Sciences, the British Academy of Film and Television Arts, the Cannes International Film Festival in France and the Asia Pacific Screen Awards.", "title": "Forms" }, { "paragraph_id": 58, "text": "The many forms of dance provide entertainment for all age groups and cultures. Dance can be serious in tone, such as when it is used to express a culture's history or important stories; it may be provocative; or it may put in the service of comedy. Since it combines many forms of entertainment – music, movement, storytelling, theatre – it provides a good example of the various ways that these forms can be combined to create entertainment for different purposes and audiences.", "title": "Forms" }, { "paragraph_id": 59, "text": "Dance is \"a form of cultural representation\" that involves not just dancers, but \"choreographers, audience members, patrons and impresarios ... coming from all over the globe and from vastly varied time periods.\" Whether from Africa, Asia or Europe, dance is constantly negotiating the realms of political, social, spiritual and artistic influence.\" Even though dance traditions may be limited to one cultural group, they all develop. 
For example, in Africa, there are \"Dahomean dances, Hausa dances, Masai dances and so forth.\" Ballet is an example of a highly developed Western form of dance that moved to the theatres from the French court during the time of Louis XIV, the dancers becoming professional theatrical performers. Some dances, such as the quadrille, a square dance that \"emerged during the Napoleonic years in France\" and other country dances were once popular at social gatherings like balls, but are now rarely performed. On the other hand, many folk dances (such as Scottish Highland dancing and Irish dancing), have evolved into competitions, which by adding to their audiences, has increased their entertainment value. \"Irish dance theatre, which sometimes features traditional Irish steps and music, has developed into a major dance form with an international reputation.\"", "title": "Forms" }, { "paragraph_id": 60, "text": "Since dance is often \"associated with the female body and women's experiences\", female dancers, who dance to entertain, have in some cases been regarded as distinct from \"decent\" women because they \"use their bodies to make a living instead of hiding them as much as possible\". Society's attitudes to female dancers depend on the culture, its history and the entertainment industry itself. For example, while some cultures regard any dancing by women as \"the most shameful form of entertainment\", other cultures have established venues such as strip clubs where deliberately erotic or sexually provocative dances such as striptease are performed in public by professional women dancers for mostly male audiences.", "title": "Forms" }, { "paragraph_id": 61, "text": "Various political regimes have sought to control or ban dancing or specific types of dancing, sometimes because of disapproval of the music or clothes associated with it. Nationalism, authoritarianism and racism have played a part in banning dances or dancing. For example, during the Nazi regime, American dances such as swing, regarded as \"completely un-German\", had \"become a public offense and needed to be banned\". Similarly, in Shanghai, China, in the 1930s, \"dancing and nightclubs had come to symbolise the excess that plagued Chinese society\" and officials wondered if \"other forms of entertainment such as brothels\" should also be banned. Banning had the effect of making \"the dance craze\" even greater. In Ireland, the Public Dance Hall Act of 1935 \"banned – but did not stop – dancing at the crossroads and other popular dance forms such as house and barn dances.\" In the US, various dances were once banned, either because like burlesque, they were suggestive, or because, like the Twist, they were associated with African Americans. \"African American dancers were typically banned from performing in minstrel shows until after the American Civil War.\"", "title": "Forms" }, { "paragraph_id": 62, "text": "Dances can be performed solo, in pairs, in groups, or by massed performers. They might be improvised or highly choreographed; spontaneous for personal entertainment (such as when children begin dancing for themselves); a private audience, a paying audience, a world audience, or an audience interested in a particular dance genre. They might be a part of a celebration, such as a wedding or New Year, or a cultural ritual with a specific purpose, such as a dance by warriors like a haka. 
Some dances, such as traditional dance and ballet, need a very high level of skill and training; others, such as the can-can, require a very high level of energy and physical fitness. Entertaining the audience is a normal part of dance but its physicality often also produces joy for the dancers themselves.", "title": "Forms" }, { "paragraph_id": 63, "text": "Animals have been used for the purposes of entertainment for millennia. They have been hunted for entertainment (as opposed to hunted for food); displayed while they hunt for prey; watched when they compete with each other; and watched while they perform a trained routine for human amusement. The Romans, for example, were entertained both by competitions involving wild animals and acts performed by trained animals. They watched as \"lions and bears danced to the music of pipes and cymbals; horses were trained to kneel, bow, dance and prance ... acrobats turning handsprings over wild lions and vaulting over wild leopards.\" There were \"violent confrontations with wild beasts\" and \"performances over time became more brutal and bloodier\".", "title": "Forms" }, { "paragraph_id": 64, "text": "Animals that perform trained routines or \"acts\" for human entertainment include fleas in flea circuses, dolphins in dolphinaria, and monkeys doing tricks for an audience on behalf of the player of a street organ. Animals kept in zoos in ancient times were often kept there for later use in the arena as entertainment or for their entertainment value as exotica.", "title": "Forms" }, { "paragraph_id": 65, "text": "Many contests between animals are now regarded as sports – for example, horse racing is regarded as both a sport and an important source of entertainment. Its economic impact means that it is also considered a global industry, one in which horses are carefully transported around the world to compete in races. In Australia, the horse race run on Melbourne Cup Day is a public holiday and the public regards the race as an important annual event. Like horse racing, camel racing requires human riders, while greyhound racing does not. People find it entertaining to watch animals race competitively, whether they are trained, like horses, camels or dogs, or untrained, like cockroaches.", "title": "Forms" }, { "paragraph_id": 66, "text": "The use of animals for entertainment is sometimes controversial, especially the hunting of wild animals. Some contests between animals, once popular entertainment for the public, have become illegal because of the cruelty involved. Among these are blood sports such as bear-baiting, dog fighting and cockfighting. Other contests involving animals remain controversial and have both supporters and detractors. For example, the conflict between opponents of pigeon shooting who view it as \"a cruel and moronic exercise in marksmanship, and proponents, who view it as entertainment\" has been tested in a court of law. Fox hunting, which involves the use of horses as well as hounds, and bullfighting, which has a strong theatrical component, are two entertainments that have a long and significant cultural history. They both involve animals and are variously regarded as sport, entertainment or cultural tradition. Among the organisations set up to advocate for the rights of animals are some whose concerns include the use of animals for entertainment. 
However, \"in many cases of animal advocacy groups versus organisations accused of animal abuse, both sides have cultural claims.\"", "title": "Forms" }, { "paragraph_id": 67, "text": "A circus, described as \"one of the most brazen of entertainment forms\", is a special type of theatrical performance, involving a variety of physical skills such as acrobatics and juggling and sometimes performing animals. Usually thought of as a travelling show performed in a big top, circus was first performed in permanent venues. Philip Astley is regarded as the founder of the modern circus in the second half of the 18th century and Jules Léotard is the French performer credited with developing the art of the trapeze, considered synonymous with circuses. Astley brought together performances that were generally familiar in traditional British fairs \"at least since the beginning of the 17th century\": \"tumbling, rope-dancing, juggling, animal tricks and so on\". It has been claimed that \"there is no direct link between the Roman circus and the circus of modern times. ... Between the demise of the Roman 'circus' and the foundation of Astley's Amphitheatre in London some 1300 years later, the nearest thing to a circus ring was the rough circle formed by the curious onlookers who gathered around the itinerant tumbler or juggler on a village green.\"", "title": "Forms" }, { "paragraph_id": 68, "text": "The form of entertainment known as stage magic or conjuring and recognisable as performance, is based on traditions and texts of magical rites and dogmas that have been a part of most cultural traditions since ancient times. (References to magic, for example, can be found in the Bible, in Hermeticism, in Zoroastrianism, in the Kabbalistic tradition, in mysticism and in the sources of Freemasonry.)", "title": "Forms" }, { "paragraph_id": 69, "text": "Stage magic is performed for an audience in a variety of media and locations: on stage, on television, in the street, and live at parties or events. It is often combined with other forms of entertainment, such as comedy or music and showmanship is often an essential part of magic performances. Performance magic relies on deception, psychological manipulation, sleight of hand and other forms of trickery to give an audience the illusion that a performer can achieve the impossible. Audiences amazed at the stunt performances and escape acts of Harry Houdini, for example, regarded him as a magician.", "title": "Forms" }, { "paragraph_id": 70, "text": "Fantasy magicians have held an important place in literature for centuries, offering entertainment to millions of readers. Famous wizards such as Merlin in the Arthurian legends have been written about since the 5th and 6th centuries, while in the 21st century, the young wizard Harry Potter became a global entertainment phenomenon when the book series about him sold about 450 million copies (as at June 2011), making it the best-selling book series in history.", "title": "Forms" }, { "paragraph_id": 71, "text": "Street entertainment, street performance, or \"busking\" are forms of performance that have been meeting the public's need for entertainment for centuries. It was \"an integral aspect of London's life\", for example, when the city in the early 19th century was \"filled with spectacle and diversion\". Minstrels or troubadours are part of the tradition. 
The art and practice of busking is still celebrated at annual busking festivals.", "title": "Forms" }, { "paragraph_id": 72, "text": "There are three basic forms of contemporary street performance. The first form is the \"circle show\". It tends to gather a crowd, usually has a distinct beginning and end, and is done in conjunction with street theatre, puppeteering, magicians, comedians, acrobats, jugglers and sometimes musicians. This type has the potential to be the most lucrative for the performer because there are likely to be more donations from larger audiences if they are entertained by the act. Good buskers control the crowd so patrons do not obstruct foot traffic. The second form, the walk-by act, has no distinct beginning or end. Typically, the busker provides an entertaining ambience, often with an unusual instrument, and the audience may not stop to watch or form a crowd. Sometimes a walk-by act spontaneously turns into a circle show. The third form, café busking, is performed mostly in restaurants, pubs, bars and cafés. This type of act occasionally uses public transport as a venue.", "title": "Forms" }, { "paragraph_id": 73, "text": "Parades are held for a range of purposes, often more than one. Whether their mood is sombre or festive, being public events that are designed to attract attention and activities that necessarily divert normal traffic, parades have a clear entertainment value to their audiences. Cavalcades and the modern variant, the motorcade, are examples of public processions. Some people watching the parade or procession may have made a special effort to attend, while others become part of the audience by happenstance. Whatever their mood or primary purpose, parades attract and entertain people who watch them pass by. Occasionally, a parade takes place in an improvised theatre space (such as the Trooping the Colour in ) and tickets are sold to the physical audience while the global audience participates via broadcast.", "title": "Forms" }, { "paragraph_id": 74, "text": "One of the earliest forms of parade were \"triumphs\" – grand and sensational displays of foreign treasures and spoils, given by triumphant Roman generals to celebrate their victories. They presented conquered peoples and nations that exalted the prestige of the victor. \"In the summer of 46 BCE Julius Caesar chose to celebrate four triumphs held on different days extending for about one month.\" In Europe from the Middle Ages to the Baroque the Royal Entry celebrated the formal visit of the monarch to the city with a parade through elaborately decorated streets, passing various shows and displays. The annual Lord Mayor's Show in London is an example of a civic parade that has survived since medieval times.", "title": "Forms" }, { "paragraph_id": 75, "text": "Many religious festivals (especially those that incorporate processions, such as Holy Week processions or the Indian festival of Holi) have some entertainment appeal in addition to their serious purpose. Sometimes, religious rituals have been adapted or evolved into secular entertainments, or like the Festa del Redentore in Venice, have managed to grow in popularity while holding both secular and sacred purposes in balance. However, pilgrimages, such as the Roman Catholic pilgrimage of the Way of St. James, the Muslim Hajj and the Hindu Kumbh Mela, which may appear to the outsider as an entertaining parade or procession, are not intended as entertainment: they are instead about an individual's spiritual journey. 
Hence, the relationship between spectator and participant differs from that in entertainments proper. The manner in which the Kumbh Mela, for example, \"is divorced from its cultural context and repackaged for Western consumption – renders the presence of voyeurs deeply problematic.\"", "title": "Forms" }, { "paragraph_id": 76, "text": "Parades generally impress and delight, often by including unusual, colourful costumes. Sometimes they also commemorate or celebrate. Sometimes they have a serious purpose, such as when the context is military, when the intention is sometimes to intimidate; or religious, when the audience might participate or have a role to play. Even if a parade uses new technology and is some distance away, it is likely to have a strong appeal, draw the attention of onlookers and entertain them.", "title": "Forms" }, { "paragraph_id": 77, "text": "Fireworks are a part of many public entertainments and have retained an enduring popularity since they became a \"crowning feature of elaborate celebrations\" in the 17th century. First used in China, classical antiquity and Europe for military purposes, fireworks were most popular in the 18th century and high prices were paid for pyrotechnists, especially the skilled Italian ones, who were summoned to other countries to organise displays. Fire and water were important aspects of court spectacles because the displays \"inspired by means of fire, sudden noise, smoke and general magnificence the sentiments thought fitting for the subject to entertain of his sovereign: awe, fear and a vicarious sense of glory in his might. Birthdays, name-days, weddings and anniversaries provided the occasion for celebration.\" One of the most famous courtly uses of fireworks was one used to celebrate the end of the War of the Austrian Succession and while the fireworks themselves caused a fire, the accompanying Music for the Royal Fireworks written by Handel has been popular ever since. Aside from their contribution to entertainments related to military successes, courtly displays and personal celebrations, fireworks are also used as part of religious ceremony. For example, during the Indian Dashavatara Kala of Gomantaka \"the temple deity is taken around in a procession with a lot of singing, dancing and display of fireworks\".", "title": "Forms" }, { "paragraph_id": 78, "text": "The \"fire, sudden noise and smoke\" of fireworks is still a significant part of public celebration and entertainment. For example, fireworks were one of the primary forms of display chosen to celebrate the turn of the millennium around the world. As the clock struck midnight and 1999 became 2000, firework displays and open-air parties greeted the New Year as the time zones changed over to the next century. Fireworks, carefully planned and choreographed, were let off against the backdrop of many of the world's most famous buildings, including the Sydney Harbour Bridge, the Pyramids of Giza in Egypt, the Acropolis in Athens, Red Square in Moscow, Vatican City in Rome, the Brandenburg Gate in Berlin, the Eiffel Tower in Paris, and Elizabeth Tower in London.", "title": "Forms" }, { "paragraph_id": 79, "text": "Sporting competitions have always provided entertainment for crowds. To distinguish the players from the audience, the latter are often known as spectators.
Developments in stadium and auditorium design, as well as in recording and broadcast technology, have allowed off-site spectators to watch sport, with the result that the size of the audience has grown ever larger and spectator sport has become increasingly popular. Two of the most popular sports with global appeal are association football and cricket. Their ultimate international competitions, the FIFA World Cup and the Cricket World Cup, are broadcast around the world. Beyond the very large numbers involved in playing these sports, they are notable for being a major source of entertainment for many millions of non-players worldwide. A comparable multi-stage, long-form sport with global appeal is the Tour de France, unusual in that it takes place outside of special stadia, being run instead in the countryside.", "title": "Forms" }, { "paragraph_id": 80, "text": "Aside from sports that have worldwide appeal and competitions, such as the Olympic Games, the entertainment value of a sport depends on the culture and country where people play it. For example, in the United States, baseball and basketball games are popular forms of entertainment; in Bhutan, the national sport is archery; in New Zealand, it is rugby union; in Iran, it is freestyle wrestling. Japan's unique sumo wrestling contains ritual elements that derive from its long history. In some cases, such as the international running group Hash House Harriers, participants create a blend of sport and entertainment for themselves, largely independent of spectator involvement, where the social component is more important than the competitive.", "title": "Forms" }, { "paragraph_id": 81, "text": "The evolution of an activity into a sport and then an entertainment is also affected by the local climate and conditions. For example, the modern sport of surfing is associated with Hawaii and that of snow skiing probably evolved in Scandinavia. While these sports and the entertainment they offer to spectators have spread around the world, people in the two originating countries remain well known for their prowess. Sometimes the climate offers a chance to adapt another sport such as in the case of ice hockey – an important entertainment in Canada.", "title": "Forms" }, { "paragraph_id": 82, "text": "Fairs and exhibitions have existed since ancient and medieval times, displaying wealth, innovations and objects for trade and offering specific entertainments as well as being places of entertainment in themselves. Whether in a medieval market or a small shop, \"shopping always offered forms of exhilaration that took one away from the everyday\". However, in the modern world, \"merchandising has become entertainment: spinning signs, flashing signs, thumping music ... video screens, interactive computer kiosks, day care ... cafés\".", "title": "Forms" }, { "paragraph_id": 83, "text": "By the 19th century, \"expos\" that encouraged arts, manufactures and commerce had become international. They were not only hugely popular but affected international ideas. For example, the 1878 Paris Exposition facilitated international cooperation about ideas, innovations and standards. From London 1851 to Paris 1900, \"in excess of 200 million visitors had entered the turnstiles in London, Paris, Vienna, Philadelphia, Chicago and a myriad of smaller shows around the world.\" Since World War II \"well over 500 million visits have been recorded through world expo turnstiles\".
As a form of spectacle and entertainment, expositions influenced \"everything from architecture, to patterns of globalisation, to fundamental matters of human identity\" and in the process established the close relationship between \"fairs, the rise of department stores and art museums\", the modern world of mass consumption and the entertainment industry.", "title": "Forms" }, { "paragraph_id": 84, "text": "Some entertainments, such as at large festivals (whether religious or secular), concerts, clubs, parties and celebrations, involve big crowds. From earliest times, crowds at an entertainment have associated hazards and dangers, especially when combined with the recreational consumption of intoxicants such as alcohol. The Ancient Greeks had Dionysian Mysteries, for example, and the Romans had Saturnalia. The combination of excess and crowds can produce breaches of social norms of behaviour, sometimes causing injury or even death, such as, for example, at the Altamont Free Concert, an outdoor rock festival. The list of serious incidents at nightclubs includes those caused by stampede; overcrowding; terrorism, such as the 2002 Bali bombings that targeted a nightclub; and especially fire. Investigations, such as that carried out in the US after The Station nightclub fire, often demonstrate that lessons learned \"regarding fire safety in nightclubs\" from earlier events such as the Cocoanut Grove fire do \"not necessarily result in lasting effective change\". Efforts to prevent such incidents include appointing special officers, such as the medieval Lord of Misrule or, in modern times, security officers who control access; and also ongoing improvement of relevant standards such as those for building safety. The tourism industry now regards safety and security at entertainment venues as an important management task.", "title": "Safety" }, { "paragraph_id": 85, "text": "Entertainment is big business, especially in the United States, but ubiquitous in all cultures. Although kings, rulers and powerful people have always been able to pay for entertainment to be provided for them and in many cases have paid for public entertainment, people generally have made their own entertainment or, when possible, attended a live performance. Technological developments in the 20th century, especially in the area of mass media, meant that entertainment could be produced independently of the audience, packaged and sold on a commercial basis by an entertainment industry. Sometimes referred to as show business, the industry relies on business models to produce, market, broadcast or otherwise distribute many of its traditional forms, including performances of all types. The industry became so sophisticated that its economics became a separate area of academic study.", "title": " Industry" }, { "paragraph_id": 86, "text": "The film industry is a part of the entertainment industry. Components of it include the Hollywood and Bollywood film industries, as well as the cinema of the United Kingdom and all the cinemas of Europe, including France, Germany, Spain, Italy and others.
The sex industry is another component of the entertainment industry, applying the same forms and media (for example, film, books, dance and other performances) to the development, marketing and sale of sex products on a commercial basis.", "title": " Industry" }, { "paragraph_id": 87, "text": "Amusement parks entertain paying guests with rides, such as roller coasters, ridable miniature railways, water rides, and dark rides, as well as other events and associated attractions. The parks are built on a large area subdivided into themed areas named \"lands\". Sometimes the whole amusement park is based on one theme, such as the various SeaWorld parks that focus on the theme of sea life.", "title": " Industry" }, { "paragraph_id": 88, "text": "One of the consequences of the development of the entertainment industry has been the creation of new types of employment. While jobs such as writer, musician and composer exist as they always have, people doing this work are likely to be employed by a company rather than a patron as they once would have been. New jobs have appeared, such as gaffer or special effects supervisor in the film industry, and attendants in an amusement park.", "title": " Industry" }, { "paragraph_id": 89, "text": "Prestigious awards are given by the industry for excellence in the various types of entertainment. For example, there are awards for Music, Games (including video games), Comics, Comedy, Theatre, Television, Film, Dance and Magic. Sporting awards are made for the results and skill, rather than for the entertainment value.", "title": " Industry" }, { "paragraph_id": 90, "text": "Purpose-built structures as venues for entertainment that accommodate audiences have produced many famous and innovative buildings, among the most recognisable of which are theatre structures. For the ancient Greeks, \"the architectural importance of the theatre is a reflection of their importance to the community, made apparent in their monumentality, in the effort put into their design, and in the care put into their detail.\" The Romans subsequently developed the stadium in an oval form known as a circus. In modern times, some of the grandest buildings for entertainment have brought fame to their cities as well as their designers. The Sydney Opera House, for example, is a World Heritage Site and The O₂ in London is an entertainment precinct that contains an indoor arena, a music club, a cinema and exhibition space. The Bayreuth Festspielhaus in Germany is a theatre designed and built for performances of one specific musical composition.", "title": "Architecture" }, { "paragraph_id": 91, "text": "Two of the chief architectural concerns for the design of venues for mass audiences are speed of egress and safety. The speed at which the venue empties is important both for amenity and safety, because large crowds take a long time to disperse from a badly designed venue, which creates a safety risk. The Hillsborough disaster is an example of how poor aspects of building design can contribute to audience deaths. Sightlines and acoustics are also important design considerations in most theatrical venues.", "title": "Architecture" }, { "paragraph_id": 92, "text": "In the 21st century, entertainment venues, especially stadia, are \"likely to figure among the leading architectural genres\". However, they require \"a whole new approach\" to design, because they need to be \"sophisticated entertainment centres, multi-experience venues, capable of being enjoyed in many diverse ways\".
Hence, architects now have to design \"with two distinct functions in mind, as sports and entertainment centres playing host to live audiences, and as sports and entertainment studios serving the viewing and listening requirements of the remote audience\".", "title": "Architecture" }, { "paragraph_id": 93, "text": "Architects who push the boundaries of design or construction sometimes create buildings that are entertaining because they exceed the expectations of the public and the client and are aesthetically outstanding. Buildings such as the Guggenheim Museum Bilbao, designed by Frank Gehry, are of this type, becoming a tourist attraction as well as a significant international museum. Other apparently usable buildings are really follies, deliberately constructed for a decorative purpose and never intended to be practical.", "title": "Architecture" }, { "paragraph_id": 94, "text": "On the other hand, sometimes architecture is entertainment, while pretending to be functional. The tourism industry, for example, creates or renovates buildings as \"attractions\" that have either never been used or can never be used for their ostensible purpose. They are instead re-purposed to entertain visitors, often by simulating cultural experiences. Buildings, history and sacred spaces are thus made into commodities for purchase. Such intentional tourist attractions divorce buildings from the past so that \"the difference between historical authenticity and contemporary entertainment venues/theme parks becomes hard to define\". Examples include \"the preservation of the Alcázar of Toledo, with its grim Civil War History, the conversion of slave dungeons into tourist attractions in Ghana, [such as, for example, Cape Coast Castle] and the presentation of indigenous culture in Libya\". The specially constructed buildings in amusement parks represent the park's theme and are usually neither authentic nor completely functional.", "title": "Architecture" }, { "paragraph_id": 95, "text": "By the second half of the 20th century, developments in electronic media made possible the delivery of entertainment products to mass audiences across the globe. The technology enabled people to see, hear and participate in all the familiar forms – stories, theatre, music, dance – wherever they live. The rapid development of entertainment technology was assisted by improvements in data storage devices such as cassette tapes or compact discs, along with increasing miniaturisation. Computerisation and the development of barcodes also made ticketing easier, faster and global.", "title": "Effects of developments in electronic media" }, { "paragraph_id": 96, "text": "In the 1940s, radio was the electronic medium for family entertainment and information. In the 1950s, it was television that was the new medium and it rapidly became global, bringing visual entertainment, first in black and white, then in colour, to the world. By the 1970s, games could be played electronically, then hand-held devices provided mobile entertainment, and by the last decade of the 20th century, via networked play. In combination with products from the entertainment industry, all the traditional forms of entertainment became available personally. People could not only select an entertainment product such as a piece of music, film or game, they could choose the time and place to use it. The \"proliferation of portable media players and the emphasis on the computer as a site for film consumption\" together have significantly changed how audiences encounter films.
One of the most notable consequences of the rise of electronic entertainment has been the rapid obsolescence of the various recording and storage methods. As an example of speed of change driven by electronic media, over the course of one generation, television as a medium for receiving standardised entertainment products went from unknown, to novel, to ubiquitous and finally to superseded. One estimate was that by 2011 over 30 percent of households in the US would own a Wii console, \"about the same percentage that owned a television in 1953\". Some expected that halfway through the second decade of the 21st century, online entertainment would have completely replaced television – which did not happen. The so-called \"digital revolution\" has produced an increasingly transnational marketplace that has caused difficulties for governments, business, industries, and individuals, as they all try to keep up. Even the sports stadium of the future will increasingly compete with television viewing \"...in terms of comfort, safety and the constant flow of audio-visual information and entertainment available.\" Other flow-on effects of the shift are likely to include those on public architecture such as hospitals and nursing homes, where television, regarded as an essential entertainment service for patients and residents, will need to be replaced by access to the internet. At the same time, the ongoing need for entertainers as \"professional engagers\" shows the continuity of traditional entertainment.", "title": "Effects of developments in electronic media" }, { "paragraph_id": 97, "text": "By the second decade of the 21st century, analogue recording was being replaced by digital recording and all forms of electronic entertainment began to converge. For example, convergence is challenging standard practices in the film industry: whereas \"success or failure used to be determined by the first weekend of its run. Today, ... a series of exhibition 'windows', such as DVD, pay-per-view, and fibre-optic video-on-demand are used to maximise profits.\" Part of the industry's adjustment is its release of new commercial product directly via video hosting services. Media convergence is said to be more than technological: the convergence is cultural as well. It is also \"the result of a deliberate effort to protect the interests of business entities, policy institutions and other groups\". Globalisation and cultural imperialism are two of the cultural consequences of convergence. Others include fandom and interactive storytelling as well as the way that single franchises are distributed through and affect a range of delivery methods. The \"greater diversity in the ways that signals may be received and packaged for the viewer, via terrestrial, satellite or cable television, and of course, via the Internet\" also affects entertainment venues, such as sports stadia, which now need to be designed so that both live and remote audiences can interact in increasingly sophisticated ways – for example, audiences can \"watch highlights, call up statistics\", \"order tickets and merchandise\" and generally \"tap into the stadium's resources at any time of the day or night\".", "title": "Effects of developments in electronic media" }, { "paragraph_id": 98, "text": "The introduction of television altered the availability, cost, variety and quality of entertainment products for the public and the convergence of online entertainment is having a similar effect.
For example, the possibility and popularity of user-generated content, as distinct from commercial product, creates a \"networked audience model [that] makes programming obsolete\". Individuals and corporations use video hosting services to broadcast content that is equally accepted by the public as legitimate entertainment.", "title": "Effects of developments in electronic media" }, { "paragraph_id": 99, "text": "While technology increases demand for entertainment products and offers increased speed of delivery, the forms that make up the content are, in themselves, relatively stable. Storytelling, music, theatre, dance and games are recognisably the same as in earlier centuries.", "title": "Effects of developments in electronic media" } ]
Entertainment is a form of activity that holds the attention and interest of an audience or gives pleasure and delight. It can be an idea or a task, but it is more likely to be one of the activities or events that have developed over thousands of years specifically for the purpose of keeping an audience's attention. Although people's attention is held by different things because individuals have different preferences, most forms of entertainment are recognisable and familiar. Storytelling, music, drama, dance, and different kinds of performance exist in all cultures, were supported in royal courts, and developed into sophisticated forms over time, becoming available to all citizens. The process has been accelerated in modern times by an entertainment industry that records and sells entertainment products. Entertainment evolves and can be adapted to suit any scale, ranging from an individual who chooses private entertainment from a now enormous array of pre-recorded products, to a banquet adapted for two, to any size or type of party with appropriate music and dance, to performances intended for thousands, and even for a global audience. The experience of being entertained has come to be strongly associated with amusement, so that one common understanding of the idea is fun and laughter, although many entertainments have a serious purpose. This may be the case in various forms of ceremony, celebration, religious festival, or satire, for example. Hence, there is the possibility that what appears to be entertainment may also be a means of achieving insight or intellectual growth. An important aspect of entertainment is the audience, which turns a private recreation or leisure activity into entertainment. The audience may have a passive role, as in the case of people watching a play, opera, television show, or film; or the audience role may be active, as in the case of games, where the participant and audience roles may be routinely reversed. Entertainment can be public or private, involving formal, scripted performances, as in the case of theatre or concerts, or unscripted and spontaneous, as in the case of children's games. Most forms of entertainment have persisted over many centuries, evolving due to changes in culture, technology, and fashion, as with stage magic. Films and video games, although they use newer media, continue to tell stories, present drama, and play music. Festivals devoted to music, film, or dance allow audiences to be entertained over a number of consecutive days. Some entertainment, such as public executions, is now illegal in most countries. Activities such as fencing or archery, once used in hunting or war, have become spectator sports. In the same way, other activities, such as cooking, have developed into performances among professionals, staged as global competitions, and then broadcast for entertainment. What is entertainment for one group or individual may be regarded as work or an act of cruelty by another. The familiar forms of entertainment have the capacity to cross over into different media and have demonstrated a seemingly unlimited potential for creative remix. This has ensured the continuity and longevity of many themes, images, and structures.
2001-10-26T01:14:13Z
2023-12-13T19:11:10Z
[ "Template:Short description", "Template:Redirect", "Template:Listen", "Template:Snd", "Template:Cite web", "Template:Webarchive", "Template:Commons category", "Template:Pp-pc", "Template:Circa", "Template:Quote box", "Template:Blockquote", "Template:Anchor", "Template:Citation", "Template:Aesthetics", "Template:More citations needed", "Template:Use dmy dates", "Template:Reflist", "Template:Cite journal", "Template:Wikiquote", "Template:Authority control", "Template:Main", "Template:Sfnp", "Template:Sfn", "Template:Further", "Template:Cite book", "Template:Cite news" ]
https://en.wikipedia.org/wiki/Entertainment
9,263
Ether
In organic chemistry, ethers are a class of compounds that contain an ether group—an oxygen atom connected to two alkyl or aryl groups. They have the general formula R−O−R′, where R and R′ represent the alkyl or aryl groups. Ethers can again be classified into two varieties: if the alkyl or aryl groups are the same on both sides of the oxygen atom, then it is a simple or symmetrical ether, whereas if they are different, the ethers are called mixed or unsymmetrical ethers. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as "ether" (CH3−CH2−O−CH2−CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin. Ethers feature bent C–O–C linkages. In dimethyl ether, the bond angle is 111° and C–O distances are 141 pm. The barrier to rotation about the C–O bonds is low. The bonding of oxygen in ethers, alcohols, and water is similar. In the language of valence bond theory, the hybridization at oxygen is sp³. Oxygen is more electronegative than carbon, thus the alpha hydrogens of ethers are more acidic than those of simple hydrocarbons. They are far less acidic than alpha hydrogens of carbonyl groups (such as in ketones or aldehydes), however. Ethers can be symmetrical of the type ROR or unsymmetrical of the type ROR'. Examples of the former are dimethyl ether, diethyl ether, dipropyl ether etc. Illustrative unsymmetrical ethers are anisole (methoxybenzene) and dimethoxyethane. Vinyl- and acetylenic ethers are far less common than alkyl or aryl ethers. Vinylethers, often called enol ethers, are important intermediates in organic synthesis. Acetylenic ethers are especially rare. Di-tert-butoxyacetylene is the most common example of this rare class of compounds. In the IUPAC Nomenclature system, ethers are named using the general formula "alkoxyalkane", for example CH3–CH2–O–CH3 is methoxyethane. If the ether is part of a more-complex molecule, it is described as an alkoxy substituent, so –OCH3 would be considered a "methoxy-" group. The simpler alkyl radical is written in front, so CH3–O–CH2CH3 would be given as methoxy(CH3O)ethane(CH2CH3). IUPAC rules are often not followed for simple ethers. The trivial names for simple ethers (i.e., those with none or few other functional groups) are a composite of the two substituents followed by "ether". For example, ethyl methyl ether (CH3OC2H5), diphenylether (C6H5OC6H5). As for other organic compounds, very common ethers acquired names before rules for nomenclature were formalized. Diethyl ether is simply called ether, but was once called sweet oil of vitriol. Methyl phenyl ether is anisole, because it was originally found in aniseed. The aromatic ethers include furans. Acetals (α-alkoxy ethers R–CH(–OR)–O–R) are another class of ethers with characteristic properties. Polyethers are generally polymers containing ether linkages in their main chain. The term polyol generally refers to polyether polyols with one or more functional end-groups such as a hydroxyl group. The term "oxide" or other terms are used for high molar mass polymer when end-groups no longer affect polymer properties. Crown ethers are cyclic polyethers. Some toxins produced by dinoflagellates such as brevetoxin and ciguatoxin are extremely large and are known as cyclic or ladder polyethers.
The phenyl ether polymers are a class of aromatic polyethers containing aromatic cycles in their main chain: polyphenyl ether (PPE) and poly(p-phenylene oxide) (PPO). Many classes of compounds with C–O–C linkages are not considered ethers: Esters (R–C(=O)–O–R′), hemiacetals (R–CH(–OH)–O–R′), carboxylic acid anhydrides (RC(=O)–O–C(=O)R′). Ethers have boiling points similar to those of the analogous alkanes. Simple ethers are generally colorless. The C–O bonds that comprise simple ethers are strong. They are unreactive toward all but the strongest bases. Although generally of low chemical reactivity, they are more reactive than alkanes. Specialized ethers such as epoxides, ketals, and acetals are unrepresentative classes of ethers and are discussed in separate articles. Important reactions are listed below. Although ethers resist hydrolysis, they are cleaved by hydrobromic acid and hydroiodic acid. Hydrogen chloride cleaves ethers only slowly. Methyl ethers typically afford methyl halides: R−O−CH3 + HBr → CH3Br + R−OH. These reactions proceed via onium intermediates, i.e. [RO(H)CH3]⁺Br⁻. Some ethers undergo rapid cleavage with boron tribromide (even aluminium chloride is used in some cases) to give the alkyl bromide. Depending on the substituents, some ethers can be cleaved with a variety of reagents, e.g. strong base. Despite these difficulties, the chemical paper pulping processes are based on cleavage of ether bonds in the lignin. When stored in the presence of air or oxygen, ethers tend to form explosive peroxides, such as diethyl ether hydroperoxide. The reaction is accelerated by light, metal catalysts, and aldehydes. In addition to avoiding storage conditions likely to form peroxides, it is recommended, when an ether is used as a solvent, not to distill it to dryness, as any peroxides that may have formed, being less volatile than the original ether, will become concentrated in the last few drops of liquid. The presence of peroxide in old samples of ethers may be detected by shaking them with a freshly prepared solution of ferrous sulfate followed by addition of KSCN. Appearance of blood red color indicates presence of peroxides. The dangerous properties of ether peroxides are the reason that diethyl ether and other peroxide-forming ethers like tetrahydrofuran (THF) or ethylene glycol dimethyl ether (1,2-dimethoxyethane) are avoided in industrial processes. Ethers serve as Lewis bases. For instance, diethyl ether forms a complex with boron trifluoride, i.e. diethyl etherate (BF3·OEt2). Ethers also coordinate to the Mg center in Grignard reagents. Tetrahydrofuran is more basic than acyclic ethers. It forms complexes with many metal halides. This reactivity is similar to the tendency of ethers with alpha hydrogen atoms to form peroxides. Reaction with chlorine produces alpha-chloroethers. Ethers can be prepared by numerous routes. In general, alkyl ethers form more readily than aryl ethers, with the latter species often requiring metal catalysts. The synthesis of diethyl ether by a reaction between ethanol and sulfuric acid has been known since the 13th century. The dehydration of alcohols affords ethers: 2 R−OH → R−O−R + H2O. This direct nucleophilic substitution reaction requires elevated temperatures (about 125 °C). The reaction is catalyzed by acids, usually sulfuric acid. The method is effective for generating symmetrical ethers, but not unsymmetrical ethers, since either OH can be protonated, which would give a mixture of products. Diethyl ether is produced from ethanol by this method.
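As a concrete instance of this dehydration route (a sketch of the overall stoichiometry only, using the temperature quoted above; in practice conditions are tuned so that ether formation wins out over elimination to the alkene):

$$2\,\mathrm{CH_3CH_2OH} \;\xrightarrow{\ \mathrm{H_2SO_4},\ \approx 125\,^{\circ}\mathrm{C}\ }\; \mathrm{CH_3CH_2OCH_2CH_3} + \mathrm{H_2O}$$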
Cyclic ethers are readily generated by this approach. Elimination reactions compete with dehydration of the alcohol: R−CH2−CH2−OH → R−CH=CH2 + H2O. The dehydration route often requires conditions incompatible with delicate molecules. Several milder methods exist to produce ethers, notably the nucleophilic displacement of alkyl halides by alkoxides: R−O⁻ + R′−X → R−O−R′ + X⁻. This reaction is called the Williamson ether synthesis. It involves treatment of a parent alcohol with a strong base to form the alkoxide, followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R–X). Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method usually does not work well for aryl halides (e.g. bromobenzene, see Ullmann condensation below). Likewise, this method only gives the best yields for primary halides. Secondary and tertiary halides are prone to undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction due to steric hindrance from the large alkyl groups. In a related reaction, alkyl halides undergo nucleophilic displacement by phenoxides. The alkyl halide R–X cannot react directly with the alcohol itself. However, phenols can be used in place of the alcohol, while the alkyl halide is retained. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion will then substitute the –X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism. The Ullmann condensation is similar to the Williamson method except that the substrate is an aryl halide. Such reactions generally require a catalyst, such as copper. Alcohols add to electrophilically activated alkenes: R2C=CR2 + ROH → R2CH−CR2(OR). Acid catalysis is required for this reaction. Often, mercury trifluoroacetate (Hg(OCOCF3)2) is used as a catalyst for the reaction generating an ether with Markovnikov regiochemistry. Using similar reactions, tetrahydropyranyl ethers are used as protective groups for alcohols. Epoxides are typically prepared by oxidation of alkenes. The most important epoxide in terms of industrial scale is ethylene oxide, which is produced by oxidation of ethylene with oxygen. Other epoxides are produced by one of two routes:
[ { "paragraph_id": 0, "text": "In organic chemistry, ethers are a class of compounds that contain an ether group—an oxygen atom connected to two alkyl or aryl groups. They have the general formula R−O−R′, where R and R′ represent the alkyl or aryl groups. Ethers can again be classified into two varieties: if the alkyl or aryl groups are the same on both sides of the oxygen atom, then it is a simple or symmetrical ether, whereas if they are different, the ethers are called mixed or unsymmetrical ethers. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as \"ether\" (CH3−CH2−O−CH2−CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin.", "title": "" }, { "paragraph_id": 1, "text": "Ethers feature bent C–O–C linkages. In dimethyl ether, the bond angle is 111° and C–O distances are 141 pm. The barrier to rotation about the C–O bonds is low. The bonding of oxygen in ethers, alcohols, and water is similar. In the language of valence bond theory, the hybridization at oxygen is sp.", "title": "Structure and bonding" }, { "paragraph_id": 2, "text": "Oxygen is more electronegative than carbon, thus the alpha hydrogens of ethers are more acidic than those of simple hydrocarbons. They are far less acidic than alpha hydrogens of carbonyl groups (such as in ketones or aldehydes), however.", "title": "Structure and bonding" }, { "paragraph_id": 3, "text": "Ethers can be symmetrical of the type ROR or unsymmetrical of the type ROR'. Examples of the former are dimethyl ether, diethyl ether, dipropyl ether etc. Illustrative unsymmetrical ethers are anisole (methoxybenzene) and dimethoxyethane.", "title": "Structure and bonding" }, { "paragraph_id": 4, "text": "Vinyl- and acetylenic ethers are far less common than alkyl or aryl ethers. Vinylethers, often called enol ethers, are important intermediates in organic synthesis. Acetylenic ethers are especially rare. Di-tert-butoxyacetylene is the most common example of this rare class of compounds.", "title": "Structure and bonding" }, { "paragraph_id": 5, "text": "In the IUPAC Nomenclature system, ethers are named using the general formula \"alkoxyalkane\", for example CH3–CH2–O–CH3 is methoxyethane. If the ether is part of a more-complex molecule, it is described as an alkoxy substituent, so –OCH3 would be considered a \"methoxy-\" group. The simpler alkyl radical is written in front, so CH3–O–CH2CH3 would be given as methoxy(CH3O)ethane(CH2CH3).", "title": "Nomenclature" }, { "paragraph_id": 6, "text": "IUPAC rules are often not followed for simple ethers. The trivial names for simple ethers (i.e., those with none or few other functional groups) are a composite of the two substituents followed by \"ether\". For example, ethyl methyl ether (CH3OC2H5), diphenylether (C6H5OC6H5). As for other organic compounds, very common ethers acquired names before rules for nomenclature were formalized. Diethyl ether is simply called ether, but was once called sweet oil of vitriol. Methyl phenyl ether is anisole, because it was originally found in aniseed. The aromatic ethers include furans. Acetals (α-alkoxy ethers R–CH(–OR)–O–R) are another class of ethers with characteristic properties.", "title": "Nomenclature" }, { "paragraph_id": 7, "text": "Polyethers are generally polymers containing ether linkages in their main chain. 
The term polyol generally refers to polyether polyols with one or more functional end-groups such as a hydroxyl group. The term \"oxide\" or other terms are used for high molar mass polymer when end-groups no longer affect polymer properties.", "title": "Nomenclature" }, { "paragraph_id": 8, "text": "Crown ethers are cyclic polyethers. Some toxins produced by dinoflagellates such as brevetoxin and ciguatoxin are extremely large and are known as cyclic or ladder polyethers.", "title": "Nomenclature" }, { "paragraph_id": 9, "text": "The phenyl ether polymers are a class of aromatic polyethers containing aromatic cycles in their main chain: polyphenyl ether (PPE) and poly(p-phenylene oxide) (PPO).", "title": "Nomenclature" }, { "paragraph_id": 10, "text": "Many classes of compounds with C–O–C linkages are not considered ethers: Esters (R–C(=O)–O–R′), hemiacetals (R–CH(–OH)–O–R′), carboxylic acid anhydrides (RC(=O)–O–C(=O)R′).", "title": "Nomenclature" }, { "paragraph_id": 11, "text": "Ethers have boiling points similar to those of the analogous alkanes. Simple ethers are generally colorless.", "title": "Physical properties" }, { "paragraph_id": 12, "text": "The C–O bonds that comprise simple ethers are strong. They are unreactive toward all but the strongest bases. Although generally of low chemical reactivity, they are more reactive than alkanes.", "title": "Reactions" }, { "paragraph_id": 13, "text": "Specialized ethers such as epoxides, ketals, and acetals are unrepresentative classes of ethers and are discussed in separate articles. Important reactions are listed below.", "title": "Reactions" }, { "paragraph_id": 14, "text": "Although ethers resist hydrolysis, they are cleaved by hydrobromic acid and hydroiodic acid. Hydrogen chloride cleaves ethers only slowly. Methyl ethers typically afford methyl halides:", "title": "Reactions" }, { "paragraph_id": 15, "text": "These reactions proceed via onium intermediates, i.e. [RO(H)CH3]⁺Br⁻.", "title": "Reactions" }, { "paragraph_id": 16, "text": "Some ethers undergo rapid cleavage with boron tribromide (even aluminium chloride is used in some cases) to give the alkyl bromide. Depending on the substituents, some ethers can be cleaved with a variety of reagents, e.g. strong base.", "title": "Reactions" }, { "paragraph_id": 17, "text": "Despite these difficulties, the chemical paper pulping processes are based on cleavage of ether bonds in the lignin.", "title": "Reactions" }, { "paragraph_id": 18, "text": "When stored in the presence of air or oxygen, ethers tend to form explosive peroxides, such as diethyl ether hydroperoxide. The reaction is accelerated by light, metal catalysts, and aldehydes. In addition to avoiding storage conditions likely to form peroxides, it is recommended, when an ether is used as a solvent, not to distill it to dryness, as any peroxides that may have formed, being less volatile than the original ether, will become concentrated in the last few drops of liquid. The presence of peroxide in old samples of ethers may be detected by shaking them with a freshly prepared solution of ferrous sulfate followed by addition of KSCN. Appearance of blood red color indicates presence of peroxides. The dangerous properties of ether peroxides are the reason that diethyl ether and other peroxide-forming ethers like tetrahydrofuran (THF) or ethylene glycol dimethyl ether (1,2-dimethoxyethane) are avoided in industrial processes.", "title": "Reactions" }, { "paragraph_id": 19, "text": "Ethers serve as Lewis bases. 
For instance, diethyl ether forms a complex with boron trifluoride, i.e. diethyl etherate (BF3·OEt2). Ethers also coordinate to the Mg center in Grignard reagents. Tetrahydrofuran is more basic than acyclic ethers. It forms complexes with many metal halides.", "title": "Reactions" }, { "paragraph_id": 20, "text": "This reactivity is similar to the tendency of ethers with alpha hydrogen atoms to form peroxides. Reaction with chlorine produces alpha-chloroethers.", "title": "Reactions" }, { "paragraph_id": 21, "text": "Ethers can be prepared by numerous routes. In general, alkyl ethers form more readily than aryl ethers, with the latter species often requiring metal catalysts.", "title": "Synthesis" }, { "paragraph_id": 22, "text": "The synthesis of diethyl ether by a reaction between ethanol and sulfuric acid has been known since the 13th century.", "title": "Synthesis" }, { "paragraph_id": 23, "text": "The dehydration of alcohols affords ethers:", "title": "Synthesis" }, { "paragraph_id": 24, "text": "This direct nucleophilic substitution reaction requires elevated temperatures (about 125 °C). The reaction is catalyzed by acids, usually sulfuric acid. The method is effective for generating symmetrical ethers, but not unsymmetrical ethers, since either OH can be protonated, which would give a mixture of products. Diethyl ether is produced from ethanol by this method. Cyclic ethers are readily generated by this approach. Elimination reactions compete with dehydration of the alcohol:", "title": "Synthesis" }, { "paragraph_id": 25, "text": "The dehydration route often requires conditions incompatible with delicate molecules. Several milder methods exist to produce ethers.", "title": "Synthesis" }, { "paragraph_id": 26, "text": "Nucleophilic displacement of alkyl halides by alkoxides", "title": "Synthesis" }, { "paragraph_id": 27, "text": "This reaction is called the Williamson ether synthesis. It involves treatment of a parent alcohol with a strong base to form the alkoxide, followed by addition of an appropriate aliphatic compound bearing a suitable leaving group (R–X). Suitable leaving groups (X) include iodide, bromide, or sulfonates. This method usually does not work well for aryl halides (e.g. bromobenzene, see Ullmann condensation below). Likewise, this method only gives the best yields for primary halides. Secondary and tertiary halides are prone to undergo E2 elimination on exposure to the basic alkoxide anion used in the reaction due to steric hindrance from the large alkyl groups.", "title": "Synthesis" }, { "paragraph_id": 28, "text": "In a related reaction, alkyl halides undergo nucleophilic displacement by phenoxides. The alkyl halide R–X cannot react directly with the alcohol itself. However, phenols can be used in place of the alcohol, while the alkyl halide is retained. Since phenols are acidic, they readily react with a strong base like sodium hydroxide to form phenoxide ions. The phenoxide ion will then substitute the –X group in the alkyl halide, forming an ether with an aryl group attached to it in a reaction with an SN2 mechanism.", "title": "Synthesis" }, { "paragraph_id": 29, "text": "The Ullmann condensation is similar to the Williamson method except that the substrate is an aryl halide. Such reactions generally require a catalyst, such as copper.", "title": "Synthesis" }, { "paragraph_id": 30, "text": "Alcohols add to electrophilically activated alkenes.", "title": "Synthesis" }, { "paragraph_id": 31, "text": "Acid catalysis is required for this reaction. 
Often, mercury trifluoroacetate (Hg(OCOCF3)2) is used as a catalyst for the reaction generating an ether with Markovnikov regiochemistry. Using similar reactions, tetrahydropyranyl ethers are used as protective groups for alcohols.", "title": "Synthesis" }, { "paragraph_id": 32, "text": "Epoxides are typically prepared by oxidation of alkenes. The most important epoxide in terms of industrial scale is ethylene oxide, which is produced by oxidation of ethylene with oxygen. Other epoxides are produced by one of two routes:", "title": "Synthesis" } ]
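A worked instance of the Williamson ether synthesis described above may be useful; this is a representative textbook scheme, not taken from the source, with the reagents (NaH as the strong base, methyl bromide as the primary halide) chosen for illustration:

$$\mathrm{CH_3CH_2OH} \;\xrightarrow{\ \mathrm{NaH}\ }\; \mathrm{CH_3CH_2O^{-}Na^{+}} \;\xrightarrow{\ \mathrm{CH_3Br}\ }\; \mathrm{CH_3CH_2OCH_3} + \mathrm{NaBr}$$

A primary halide is chosen deliberately: with a secondary or tertiary halide, E2 elimination by the basic alkoxide would compete, as noted above.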
In organic chemistry, ethers are a class of compounds that contain an ether group—an oxygen atom connected to two alkyl or aryl groups. They have the general formula R−O−R′, where R and R′ represent the alkyl or aryl groups. Ethers can again be classified into two varieties: if the alkyl or aryl groups are the same on both sides of the oxygen atom, then it is a simple or symmetrical ether, whereas if they are different, the ethers are called mixed or unsymmetrical ethers. A typical example of the first group is the solvent and anaesthetic diethyl ether, commonly referred to simply as "ether" (CH3−CH2−O−CH2−CH3). Ethers are common in organic chemistry and even more prevalent in biochemistry, as they are common linkages in carbohydrates and lignin.
2001-11-04T11:48:02Z
2023-11-02T16:38:12Z
[ "Template:See also", "Template:Cite journal", "Template:Doi", "Template:Authority control", "Template:For multi", "Template:Main", "Template:Reflist", "Template:Cite book", "Template:OrgSynth", "Template:Cite EB1911", "Template:Short description", "Template:Chem2", "Template:GoldBookRef", "Template:Functional Groups" ]
https://en.wikipedia.org/wiki/Ether
9,264
Ecliptic
The ecliptic or ecliptic plane is the orbital plane of Earth around the Sun. From the perspective of an observer on Earth, the Sun's movement around the celestial sphere over the course of a year traces out a path along the ecliptic against the background of stars. The ecliptic is an important reference plane and is the basis of the ecliptic coordinate system. The ecliptic is the apparent path of the Sun throughout the course of a year. Because Earth takes one year to orbit the Sun, the apparent position of the Sun takes one year to make a complete circuit of the ecliptic. With slightly more than 365 days in one year, the Sun moves a little less than 1° eastward every day. This small difference in the Sun's position against the stars causes any particular spot on Earth's surface to catch up with (and stand directly north or south of) the Sun about four minutes later each day than it would if Earth did not orbit; a day on Earth is therefore 24 hours long rather than the approximately 23-hour 56-minute sidereal day. Again, this is a simplification, based on a hypothetical Earth that orbits at uniform speed around the Sun. The actual speed with which Earth orbits the Sun varies slightly during the year, so the speed with which the Sun seems to move along the ecliptic also varies. For example, the Sun is north of the celestial equator for about 185 days of each year, and south of it for about 180 days. The variation of orbital speed accounts for part of the equation of time. Because of the movement of Earth around the Earth–Moon center of mass, the apparent path of the Sun wobbles slightly, with a period of about one month. Because of further perturbations by the other planets of the Solar System, the Earth–Moon barycenter wobbles slightly around a mean position in a complex fashion. Because Earth's rotational axis is not perpendicular to its orbital plane, Earth's equatorial plane is not coplanar with the ecliptic plane, but is inclined to it by an angle of about 23.4°, which is known as the obliquity of the ecliptic. If the equator is projected outward to the celestial sphere, forming the celestial equator, it crosses the ecliptic at two points known as the equinoxes. The Sun, in its apparent motion along the ecliptic, crosses the celestial equator at these points, one from south to north, the other from north to south. The crossing from south to north is known as the vernal equinox, also known as the first point of Aries and the ascending node of the ecliptic on the celestial equator. The crossing from north to south is the autumnal equinox or descending node. The orientation of Earth's axis and equator are not fixed in space, but rotate about the poles of the ecliptic with a period of about 26,000 years, a process known as lunisolar precession, as it is due mostly to the gravitational effect of the Moon and Sun on Earth's equatorial bulge. Likewise, the ecliptic itself is not fixed. The gravitational perturbations of the other bodies of the Solar System cause a much smaller motion of the plane of Earth's orbit, and hence of the ecliptic, known as planetary precession. The combined action of these two motions is called general precession, and changes the position of the equinoxes by about 50 arc seconds (about 0.014°) per year. Once again, this is a simplification. Periodic motions of the Moon and apparent periodic motions of the Sun (actually of Earth in its orbit) cause short-term small-amplitude periodic oscillations of Earth's axis, and hence the celestial equator, known as nutation. 
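The figures in this description are easy to verify numerically. The following sketch uses round textbook values rather than a precise ephemeris, and the constant names are ours, chosen for illustration; it reproduces the Sun's mean daily motion along the ecliptic and the roughly four-minute gap between the solar and sidereal day:

```python
# Back-of-the-envelope check of the figures quoted above.
# Round textbook values only -- not a precise ephemeris.

TROPICAL_YEAR_DAYS = 365.2422        # days in one tropical year
SIDEREAL_DAY_MIN = 23 * 60 + 56.0    # sidereal day (~23 h 56 min), in minutes
SOLAR_DAY_MIN = 24 * 60              # mean solar day, in minutes

# Mean eastward motion of the Sun along the ecliptic per day:
daily_motion_deg = 360.0 / TROPICAL_YEAR_DAYS
print(f"Sun's mean daily motion: {daily_motion_deg:.4f} deg/day")   # ~0.9856

# Extra time each day before a given spot again stands under the Sun:
lag_min = SOLAR_DAY_MIN - SIDEREAL_DAY_MIN
print(f"Solar day exceeds sidereal day by about {lag_min:.1f} minutes")
```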
This adds a periodic component to the position of the equinoxes; the positions of the celestial equator and (vernal) equinox with fully updated precession and nutation are called the true equator and equinox; the positions without nutation are the mean equator and equinox. Obliquity of the ecliptic is the term used by astronomers for the inclination of Earth's equator with respect to the ecliptic, or of Earth's rotation axis to a perpendicular to the ecliptic. It is about 23.4° and is currently decreasing 0.013 degrees (47 arcseconds) per hundred years because of planetary perturbations. The angular value of the obliquity is found by observation of the motions of Earth and other planets over many years. Astronomers produce new fundamental ephemerides as the accuracy of observation improves and as the understanding of the dynamics increases, and from these ephemerides various astronomical values, including the obliquity, are derived. Until 1983 the obliquity for any date was calculated from work of Newcomb, who analyzed positions of the planets until about 1895: ε = 23°27′08.26″ − 46.845″ T − 0.0059″ T² + 0.00181″ T³ where ε is the obliquity and T is tropical centuries from B1900.0 to the date in question. From 1984, the Jet Propulsion Laboratory's DE series of computer-generated ephemerides took over as the fundamental ephemeris of the Astronomical Almanac. Obliquity based on DE200, which analyzed observations from 1911 to 1979, was calculated: ε = 23°26′21.45″ − 46.815″ T − 0.0006″ T² + 0.00181″ T³ where hereafter T is Julian centuries from J2000.0. JPL's fundamental ephemerides have been continually updated. The Astronomical Almanac for 2010 specifies: ε = 23°26′21.406″ − 46.836769″ T − 0.0001831″ T² + 0.00200340″ T³ − 0.576×10⁻⁶″ T⁴ − 4.34×10⁻⁸″ T⁵. These expressions for the obliquity are intended for high precision over a relatively short time span, perhaps several centuries. J. Laskar computed an expression to order T¹⁰ good to 0.04″/1000 years over 10,000 years. All of these expressions are for the mean obliquity, that is, without the nutation of the equator included. The true or instantaneous obliquity includes the nutation. Most of the major bodies of the Solar System orbit the Sun in nearly the same plane. This is likely due to the way in which the Solar System formed from a protoplanetary disk. Probably the closest current representation of the disk is known as the invariable plane of the Solar System. Earth's orbit, and hence, the ecliptic, is inclined a little more than 1° to the invariable plane, Jupiter's orbit is within a little more than ½° of it, and the other major planets are all within about 6°. Because of this, most Solar System bodies appear very close to the ecliptic in the sky. The invariable plane is defined by the angular momentum of the entire Solar System, essentially the vector sum of all of the orbital and rotational angular momenta of all the bodies of the system; more than 60% of the total comes from the orbit of Jupiter. That sum requires precise knowledge of every object in the system, making it a somewhat uncertain value. Because of the uncertainty regarding the exact location of the invariable plane, and because the ecliptic is well defined by the apparent motion of the Sun, the ecliptic is used as the reference plane of the Solar System both for precision and convenience. The only drawback of using the ecliptic instead of the invariable plane is that over geologic time scales, it will move against fixed reference points in the sky's distant background.
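The 2010 Astronomical Almanac polynomial above is straightforward to evaluate. A minimal sketch in Python, assuming only the coefficients quoted in the text (in arcseconds, with T in Julian centuries from J2000.0); the function name is ours:

```python
# Mean obliquity of the ecliptic, per the Astronomical Almanac 2010
# expression quoted above. T = Julian centuries from J2000.0.
# Mean value only: nutation is not included, and the polynomial is
# intended for dates within a few centuries of the present.

def mean_obliquity_deg(T: float) -> float:
    arcsec = (
        23 * 3600 + 26 * 60 + 21.406   # 23deg 26' 21.406" in arcseconds
        - 46.836769 * T
        - 0.0001831 * T**2
        + 0.00200340 * T**3
        - 0.576e-6 * T**4
        - 4.34e-8 * T**5
    )
    return arcsec / 3600.0

print(mean_obliquity_deg(0.0))   # ~23.4393 deg at J2000.0
print(mean_obliquity_deg(1.0))   # ~23.4263 deg at J2100.0, about 46.8" less
```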
The ecliptic forms one of the two fundamental planes used as reference for positions on the celestial sphere, the other being the celestial equator. Perpendicular to the ecliptic are the ecliptic poles, the north ecliptic pole being the pole north of the equator. Of the two fundamental planes, the ecliptic is closer to unmoving against the background stars, its motion due to planetary precession being roughly 1/100 that of the celestial equator. Spherical coordinates, known as ecliptic longitude and latitude or celestial longitude and latitude, are used to specify positions of bodies on the celestial sphere with respect to the ecliptic. Longitude is measured positively eastward 0° to 360° along the ecliptic from the vernal equinox, the same direction in which the Sun appears to move. Latitude is measured perpendicular to the ecliptic, to +90° northward or −90° southward to the poles of the ecliptic, the ecliptic itself being 0° latitude. For a complete spherical position, a distance parameter is also necessary. Different distance units are used for different objects. Within the Solar System, astronomical units are used, and for objects near Earth, Earth radii or kilometers are used. A corresponding right-handed rectangular coordinate system is also used occasionally; the x-axis is directed toward the vernal equinox, the y-axis 90° to the east, and the z-axis toward the north ecliptic pole; the astronomical unit is the unit of measure. Symbols for ecliptic coordinates are somewhat standardized; see the table. Ecliptic coordinates are convenient for specifying positions of Solar System objects, as most of the planets' orbits have small inclinations to the ecliptic, and therefore always appear relatively close to it on the sky. Because Earth's orbit, and hence the ecliptic, moves very little, it is a relatively fixed reference with respect to the stars. Because of the precessional motion of the equinox, the ecliptic coordinates of objects on the celestial sphere are continuously changing. Specifying a position in ecliptic coordinates requires specifying a particular equinox, that is, the equinox of a particular date, known as an epoch; the coordinates are referred to the direction of the equinox at that date. For instance, the Astronomical Almanac lists the heliocentric position of Mars at 0h Terrestrial Time, 4 January 2010 as: longitude 118°09′15.8″, latitude +1°43′16.7″, true heliocentric distance 1.6302454 AU, mean equinox and ecliptic of date. This specifies the mean equinox of 4 January 2010 0h TT as above, without the addition of nutation. Because the orbit of the Moon is inclined only about 5.145° to the ecliptic and the Sun is always very near the ecliptic, eclipses always occur on or near it. Because of the inclination of the Moon's orbit, eclipses do not occur at every conjunction and opposition of the Sun and Moon, but only when the Moon is near an ascending or descending node at the same time it is at conjunction (new) or opposition (full). The ecliptic is so named because the ancients noted that eclipses only occur when the Moon is crossing it. The exact instants of equinoxes and solstices are the times when the apparent ecliptic longitude (including the effects of aberration and nutation) of the Sun is 0°, 90°, 180°, and 270°. Because of perturbations of Earth's orbit and anomalies of the calendar, the dates of these are not fixed. 
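The spherical-to-rectangular convention just described can be captured in a few lines. A sketch, assuming the axis convention given above (x toward the vernal equinox, y 90° to the east, z toward the north ecliptic pole); the function name is ours, and the sample input is the Mars position quoted in the text:

```python
import math

def ecliptic_to_rectangular(lon_deg: float, lat_deg: float, dist: float):
    """Ecliptic longitude/latitude (degrees) and distance -> (x, y, z)."""
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    x = dist * math.cos(lat) * math.cos(lon)   # toward the vernal equinox
    y = dist * math.cos(lat) * math.sin(lon)   # 90 deg to the east
    z = dist * math.sin(lat)                   # toward the north ecliptic pole
    return x, y, z

# Mars at 0h TT, 4 January 2010 (heliocentric, from the text):
lon = 118 + 9/60 + 15.8/3600    # 118deg 09' 15.8"
lat = 1 + 43/60 + 16.7/3600     # +1deg 43' 16.7"
print(ecliptic_to_rectangular(lon, lat, 1.6302454))  # x, y, z in AU
```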
The ecliptic currently passes through the following constellations: Pisces, Aries, Taurus, Gemini, Cancer, Leo, Virgo, Libra, Scorpius, Ophiuchus, Sagittarius, Capricornus, and Aquarius. The constellations Cetus and Orion are not on the ecliptic, but are close enough that the Moon and planets can occasionally appear in them.

The ecliptic forms the center of the zodiac, a celestial belt about 20° wide in latitude through which the Sun, Moon, and planets always appear to move. Traditionally, this region is divided into 12 signs of 30° longitude, each of which approximates the Sun's motion in one month. In ancient times, the signs corresponded roughly to 12 of the constellations that straddle the ecliptic. These signs are sometimes still used in modern terminology. The "First Point of Aries" was named when the March equinox Sun was actually in the constellation Aries; it has since moved into Pisces because of precession of the equinoxes.
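Because the signs are exact 30° divisions measured from the First Point of Aries, finding the sign containing a given ecliptic longitude is integer arithmetic. A toy sketch in Python (the list and function are my own illustration):

    SIGNS = ["Aries", "Taurus", "Gemini", "Cancer", "Leo", "Virgo", "Libra",
             "Scorpio", "Sagittarius", "Capricorn", "Aquarius", "Pisces"]

    def zodiac_sign(longitude_deg: float) -> str:
        """Traditional 30-degree sign containing the given ecliptic longitude."""
        return SIGNS[int(longitude_deg % 360) // 30]

    print(zodiac_sign(118.15))   # fourth sign (90-120 degrees): Cancer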
List of former sovereign states
A historical sovereign state is a state that once existed, but has since been dissolved due to conflict, war, rebellion, annexation, or uprising. This page lists sovereign states, countries, nations, or empires that ceased to exist as political entities sometime after 1453, grouped geographically and by constitutional nature.

The criteria for inclusion in this list are similar to those of the list of states with limited recognition. To be included here, a polity must have claimed statehood and either:

This is not a list of all variant governments of a state, nor is it a list of variations of countries' official long-form names. For purposes of this list, the cutoff between medieval and early modern states is the fall of Constantinople in 1453.

In the Nordic countries, unions were personal, not unitary.

These states are now dissolved into a number of states, none of which retain the old name.

Four of the homelands, or bantustans, for black South Africans were granted nominal independence by the apartheid regime of South Africa. Not recognised by other nations, these were effectively puppet states and were re-incorporated in 1994.

These nations declared themselves independent, but failed to achieve it in fact or did not seek permanent independence, and were either re-incorporated into the mother country or incorporated into another country.

These nations, once separate, are now part of another country. Cases of voluntary accession are included.
Ellipse
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from e = 0 (the limiting case of a circle) to e = 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola).

An ellipse has a simple algebraic solution for its area, but only approximations for its perimeter (also known as circumference), for which integration is required to obtain an exact solution.

Analytically, the equation of a standard ellipse centered at the origin with width 2a and height 2b is:

x²/a² + y²/b² = 1.

Assuming a ≥ b, the foci are (±c, 0) with c = √(a² − b²). The standard parametric equation is:

(x, y) = (a cos t, b sin t), for 0 ≤ t ≤ 2π.

Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse.

An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity:

e = c/a = √(1 − b²/a²).

Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics.

The name, ἔλλειψις (élleipsis, "omission"), was given by Apollonius of Perga in his Conics.

An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane: given two fixed points F₁, F₂ (the foci) and a distance 2a greater than the distance between the foci, the ellipse is the set of points P such that the sum of the distances |PF₁| + |PF₂| equals 2a.

The midpoint C of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices V₁, V₂, which have distance a to the center. The distance c of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c/a is the eccentricity. The case F₁ = F₂ yields a circle and is included as a special type of ellipse.
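The defining focal-sum property is easy to verify numerically. A minimal illustrative sketch in Python (the function name is mine), sampling points of the standard parametrization and checking that the sum of distances to the foci is the constant 2a:

    import math

    def focal_sum(a, b, t):
        """Sum of distances from the point (a cos t, b sin t) to the two foci."""
        c = math.sqrt(a**2 - b**2)               # foci at (+c, 0) and (-c, 0)
        x, y = a * math.cos(t), b * math.sin(t)
        return math.hypot(x - c, y) + math.hypot(x + c, y)

    a, b = 5.0, 3.0
    for t in (0.0, 0.7, 2.0, 4.5):
        assert abs(focal_sum(a, b, t) - 2 * a) < 1e-12   # constant, equal to 2a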
The equation |PF₂| + |PF₁| = 2a can be viewed in a different way (see figure): if c₂ is the circle with center F₂ and radius 2a, then the distance of a point P on the ellipse to the circle c₂ equals its distance to the focus F₁. Here c₂ is called the circular directrix (related to focus F₂) of the ellipse. This property should not be confused with the definition of an ellipse using a directrix line below.

Using Dandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone.

The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, the x-axis is the major axis, and the foci are the points F₁ = (c, 0), F₂ = (−c, 0), so that the vertices are V₁ = (a, 0), V₂ = (−a, 0).

For an arbitrary point (x, y) the distance to the focus (c, 0) is √((x − c)² + y²) and to the other focus √((x + c)² + y²). Hence the point (x, y) is on the ellipse whenever:

√((x − c)² + y²) + √((x + c)² + y²) = 2a.

Removing the radicals by suitable squarings and using b² = a² − c² (see diagram) produces the standard equation of the ellipse, x²/a² + y²/b² = 1, or, solved for y:

y = ±(b/a) √(a² − x²).

The width and height parameters a, b are called the semi-major and semi-minor axes. The top and bottom points V₃ = (0, b), V₄ = (0, −b) are the co-vertices. The distances from a point (x, y) on the ellipse to the left and right foci are a + ex and a − ex. It follows from the equation that the ellipse is symmetric with respect to the coordinate axes and hence with respect to the origin.

Throughout this article, the semi-major and semi-minor axes are denoted a and b, respectively, i.e. a ≥ b > 0. In principle, the canonical ellipse equation x²/a² + y²/b² = 1 may have a < b (and hence the ellipse would be taller than it is wide). This form can be converted to the standard form by transposing the variable names x and y and the parameter names a and b.

The linear eccentricity is the distance from the center to a focus: c = √(a² − b²). The eccentricity can be expressed as:

e = c/a = √(1 − (b/a)²),

assuming a > b. An ellipse with equal axes (a = b) has zero eccentricity, and is a circle.

The length of the chord through one focus, perpendicular to the major axis, is called the latus rectum. One half of it is the semi-latus rectum ℓ. A calculation shows:

ℓ = b²/a = a (1 − e²).

The semi-latus rectum ℓ is equal to the radius of curvature at the vertices (see section curvature).

An arbitrary line g intersects an ellipse at 0, 1, or 2 points, respectively called an exterior line, tangent and secant. Through any point of an ellipse there is a unique tangent.
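The quantities just introduced fit in a few lines of code. An illustrative sketch in Python (variable names are mine) computing the linear eccentricity, eccentricity, and semi-latus rectum, and checking the focal-distance property a ± ex for a point on the ellipse:

    import math

    a, b = 5.0, 3.0
    c = math.sqrt(a**2 - b**2)      # linear eccentricity: 4.0
    e = c / a                       # eccentricity: 0.8
    ell = b**2 / a                  # semi-latus rectum, also a*(1 - e**2): 1.8

    x = 2.0
    y = b * math.sqrt(1 - x**2 / a**2)                       # upper-half point
    assert abs(math.hypot(x + c, y) - (a + e * x)) < 1e-12   # left focus
    assert abs(math.hypot(x - c, y) - (a - e * x)) < 1e-12   # right focus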
The tangent at a point (x₁, y₁) of the ellipse x²/a² + y²/b² = 1 has the coordinate equation:

(x₁/a²) x + (y₁/b²) y = 1.

A vector parametric equation of the tangent is:

(x, y) = (x₁, y₁) + s (−y₁a², x₁b²), s ∈ ℝ.

Proof: Let (x₁, y₁) be a point on an ellipse and (x, y) = (x₁, y₁) + s (u, v) be the equation of any line g containing (x₁, y₁). Inserting the line's equation into the ellipse equation and respecting x₁²/a² + y₁²/b² = 1 yields:

2s (x₁u/a² + y₁v/b²) + s² (u²/a² + v²/b²) = 0.

There are then two cases: (1) If x₁u/a² + y₁v/b² = 0, then s = 0 is a double solution, line g meets the ellipse only at (x₁, y₁), and g is a tangent. (2) If x₁u/a² + y₁v/b² ≠ 0, the equation has a second solution s ≠ 0, and line g is a secant. Using (1) one finds that (−y₁a², x₁b²) is a tangent vector at point (x₁, y₁), which proves the vector equation.

If (x₁, y₁) and (u, v) are two points of the ellipse such that x₁u/a² + y₁v/b² = 0, then the points lie on two conjugate diameters (see below). (If a = b, the ellipse is a circle and "conjugate" means "orthogonal".)

If the standard ellipse is shifted to have center (x∘, y∘), its equation is

(x − x∘)²/a² + (y − y∘)²/b² = 1.

The axes are still parallel to the x- and y-axes.

In analytic geometry, the ellipse is defined as a quadric: the set of points (x, y) of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation

Ax² + Bxy + Cy² + Dx + Ey + F = 0,

provided B² − 4AC < 0. To distinguish the degenerate cases from the non-degenerate case, let ∆ be the determinant

∆ = det [A, B/2, D/2; B/2, C, E/2; D/2, E/2, F].

Then the ellipse is a non-degenerate real ellipse if and only if C∆ < 0. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse.

The general equation's coefficients can be obtained from known semi-major axis a, semi-minor axis b, center coordinates (x∘, y∘), and rotation angle θ (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:

A = a² sin²θ + b² cos²θ,
B = 2 (b² − a²) sinθ cosθ,
C = a² cos²θ + b² sin²θ,
D = −2A x∘ − B y∘,
E = −B x∘ − 2C y∘,
F = A x∘² + B x∘y∘ + C y∘² − a²b².

These expressions can be derived from the canonical equation by a Euclidean transformation of the coordinates (X, Y):

X = (x − x∘) cosθ + (y − y∘) sinθ, Y = −(x − x∘) sinθ + (y − y∘) cosθ.

Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations:

a, b = −√( 2 (AE² + CD² − BDE + (B² − 4AC) F) ((A + C) ± √((A − C)² + B²)) ) / (B² − 4AC),
x∘ = (2CD − BE) / (B² − 4AC),
y∘ = (2AE − BD) / (B² − 4AC),
θ = ½ atan2(−B, C − A),

where atan2 is the 2-argument arctangent function.

Using trigonometric functions, a parametric representation of the standard ellipse x²/a² + y²/b² = 1 is:

(x, y) = (a cos t, b sin t), 0 ≤ t < 2π.

The parameter t (called the eccentric anomaly in astronomy) is not the angle of (x(t), y(t)) with the x-axis, but has a geometric meaning due to Philippe de La Hire (see § Drawing ellipses below).

With the substitution u = tan(t/2) and trigonometric formulae one obtains

cos t = (1 − u²)/(1 + u²), sin t = 2u/(1 + u²)

and the rational parametric equation of an ellipse

x(u) = a (1 − u²)/(1 + u²), y(u) = 2bu/(1 + u²), −∞ < u < ∞,

which covers any point of the ellipse x²/a² + y²/b² = 1 except the left vertex (−a, 0).
For u ∈ [0, 1], this formula represents the right upper quarter of the ellipse, moving counter-clockwise with increasing u. The left vertex is the limit of (x(u), y(u)) as u → ±∞, namely (−a, 0).

Alternately, if the parameter [u : v] is considered to be a point on the real projective line P(R), then the corresponding rational parametrization is

[u : v] ↦ ( a (v² − u²)/(v² + u²), 2buv/(v² + u²) ).

Then [1 : 0] ↦ (−a, 0). Rational representations of conic sections are commonly used in computer-aided design (see Bezier curve).

A parametric representation which uses the slope m of the tangent at a point of the ellipse can be obtained from the derivative of the standard representation x(t) = (a cos t, b sin t)ᵀ:

x′(t) = (−a sin t, b cos t)ᵀ, so m = −(b/a) cot t, i.e. cot t = −ma/b.

With help of trigonometric formulae one obtains:

cos t = ∓ ma/√(m²a² + b²), sin t = ± b/√(m²a² + b²).

Replacing cos t and sin t of the standard representation yields:

c±(m) = ( ∓ ma²/√(m²a² + b²), ± b²/√(m²a² + b²) ).

Here m is the slope of the tangent at the corresponding ellipse point, c₊ is the upper and c₋ the lower half of the ellipse. The vertices (±a, 0), having vertical tangents, are not covered by the representation.

The equation of the tangent at point c±(m) has the form y = mx + n. The still unknown n can be determined by inserting the coordinates of the corresponding ellipse point c±(m):

n = ±√(m²a² + b²), hence y = mx ± √(m²a² + b²).

This description of the tangents of an ellipse is an essential tool for the determination of the orthoptic of an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae.

Another definition of an ellipse uses affine transformations: any ellipse is the affine image of the unit circle x² + y² = 1. An affine transformation of the Euclidean plane has the form x ↦ f₀ + Ax, where A is a regular matrix (with non-zero determinant) and f₀ is an arbitrary vector. If f₁, f₂ are the column vectors of the matrix A, the unit circle (cos t, sin t), 0 ≤ t ≤ 2π, is mapped onto the ellipse:

x = p(t) = f₀ + f₁ cos t + f₂ sin t.

Here f₀ is the center and f₁, f₂ are the directions of two conjugate diameters, in general not perpendicular.

The four vertices of the ellipse are p(t₀), p(t₀ ± π/2), p(t₀ + π), for a parameter t₀ defined by:

cot(2t₀) = (f₁² − f₂²) / (2 f₁·f₂).

(If f₁·f₂ = 0, then t₀ = 0.) This is derived as follows.
The tangent vector at point p(t) is:

p′(t) = −f₁ sin t + f₂ cos t.

At a vertex parameter t = t₀, the tangent is perpendicular to the major/minor axes, so:

0 = p′(t)·(p(t) − f₀) = (−f₁ sin t + f₂ cos t)·(f₁ cos t + f₂ sin t).

Expanding and applying the identities cos²t − sin²t = cos 2t, 2 sin t cos t = sin 2t gives the equation

cot(2t₀) = (f₁² − f₂²)/(2 f₁·f₂)

for t = t₀.

From Apollonios theorem (see below) one obtains: The area of an ellipse x = f₀ + f₁ cos t + f₂ sin t is

A = π |det(f₁, f₂)|.

With the abbreviations M = f₁² + f₂², N = |det(f₁, f₂)|, the statements of Apollonios's theorem can be written as:

a² + b² = M, ab = N.

Solving this nonlinear system for a, b yields the semiaxes:

a = ½ (√(M + 2N) + √(M − 2N)), b = ½ (√(M + 2N) − √(M − 2N)).

Solving the parametric representation for cos t, sin t by Cramer's rule and using cos²t + sin²t − 1 = 0, one obtains the implicit representation

det(x − f₀, f₂)² + det(f₁, x − f₀)² − det(f₁, f₂)² = 0.

Conversely: If the equation of an ellipse centered at the origin is given, then two such vectors point to two conjugate points and the tools developed above are applicable.

Example: For the ellipse with equation x² + 2xy + 3y² − 1 = 0, one valid choice of vectors is

f₁ = (1, 0)ᵀ, f₂ = (1/√2) (−1, 1)ᵀ.

For f₀ = (0, 0), f₁ = a (cosθ, sinθ)ᵀ, f₂ = b (−sinθ, cosθ)ᵀ one obtains a parametric representation of the standard ellipse rotated by angle θ:

x(t) = a cosθ cos t − b sinθ sin t, y(t) = a sinθ cos t + b cosθ sin t.

The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allows f₀, f₁, f₂ to be vectors in space.

In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate θ measured from the major axis, the ellipse's equation is

r(θ) = ab / √((b cosθ)² + (a sinθ)²) = b / √(1 − (e cosθ)²),

where e is the eccentricity, not Euler's number.

If instead we use polar coordinates with the origin at one focus, with the angular coordinate θ = 0 still measured from the major axis, the ellipse's equation is

r(θ) = a (1 − e²) / (1 ± e cosθ),

where the sign in the denominator is negative if the reference direction θ = 0 points towards the center (as illustrated on the right), and positive if that direction points away from the center.

In the slightly more general case of an ellipse with one focus at the origin and the other focus at angular coordinate ϕ, the polar form is

r(θ) = a (1 − e²) / (1 − e cos(θ − ϕ)).

The angle θ in these formulas is called the true anomaly of the point. The numerator of these formulas is the semi-latus rectum ℓ = a(1 − e²).

Each of the two lines parallel to the minor axis, and at a distance of d = a²/c = a/e from it, is called a directrix of the ellipse (see diagram). For an arbitrary point P of the ellipse, the quotient of the distance to one focus and to the corresponding directrix equals the eccentricity:

|PF₁| / |Pl₁| = |PF₂| / |Pl₂| = e = c/a.
The proof for the pair F₁, l₁ follows from the fact that |PF₁|² = (x − c)² + y², |Pl₁|² = (x − a²/c)² and y² = b² − (b²/a²) x² satisfy the equation

|PF₁|² − e² |Pl₁|² = 0.

The second case is proven analogously.

The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola): for any point F (focus), any line l (directrix) not through F, and any real number e with 0 < e < 1, the set of points for which the quotient of the distances to the point and to the line is e, that is { P : |PF| = e |Pl| }, is an ellipse.

The extension to e = 0, which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. However, one may consider the directrix of a circle to be the line at infinity in the projective plane. (The choice e = 1 yields a parabola, and if e > 1, a hyperbola.)

Let F = (f, 0), e > 0, and assume (0, 0) is a point on the curve. The directrix l has equation x = −f/e. With P = (x, y), the relation |PF|² = e²|Pl|² produces the equations

(x − f)² + y² = e² (x + f/e)² = (ex + f)²,

and hence x² (1 − e²) + y² − 2xf (1 + e) = 0. The substitution p = f (1 + e) yields

x² (1 − e²) + y² − 2px = 0.

This is the equation of an ellipse (e < 1), or a parabola (e = 1), or a hyperbola (e > 1). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).

If e < 1, introduce new parameters a, b so that 1 − e² = b²/a², and p = b²/a, and then the equation above becomes

(x − a)²/a² + y²/b² = 1,

which is the equation of an ellipse with center (a, 0), the x-axis as major axis, and the major/minor semi axis a, b.

Because of c·(a²/c) = a², point L₁ of directrix l₁ (see diagram) and focus F₁ are inverse with respect to the circle inversion at circle x² + y² = a² (in diagram green). Hence L₁ can be constructed as shown in the diagram. Directrix l₁ is the perpendicular to the main axis at point L₁.

If the focus is F = (f₁, f₂) and the directrix ux + vy + w = 0, one obtains the equation

(x − f₁)² + (y − f₂)² = e² (ux + vy + w)² / (u² + v²).

(The right side of the equation uses the Hesse normal form of a line to calculate the distance |Pl|.)

An ellipse possesses the following property: the normal at a point P bisects the angle between the lines PF₁ and PF₂. Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram).

Let L be the point on the line PF₂ with distance 2a to the focus F₂, where a is the semi-major axis of the ellipse. Let line w be the external angle bisector of the lines PF₁ and PF₂. Take any other point Q on w.
By the triangle inequality and the angle bisector theorem, 2a = |LF₂| < |QF₂| + |QL| = |QF₂| + |QF₁|, therefore Q must be outside the ellipse. As this is true for every choice of Q, w only intersects the ellipse at the single point P, so it must be the tangent line.

The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (see whispering gallery).

A circle has the following property: the midpoints of parallel chords lie on a diameter. An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.)

Two diameters d₁, d₂ of an ellipse are conjugate if the midpoints of chords parallel to d₁ lie on d₂. From the diagram one finds: two diameters of an ellipse are conjugate whenever the tangents at the endpoints of one diameter are parallel to the other diameter.

Conjugate diameters in an ellipse generalize orthogonal diameters in a circle. In the parametric equation for a general ellipse given above, any pair of points p(t), p(t + π) belong to a diameter, and the pair p(t + π/2), p(t − π/2) belong to its conjugate diameter.

For the common parametric representation (a cos t, b sin t) of the ellipse with equation x²/a² + y²/b² = 1 one gets: the points (x₁, y₁) = (a cos t, b sin t) and (x₂, y₂) = (−a sin t, b cos t) lie on conjugate diameters, and any two such points satisfy x₁x₂/a² + y₁y₂/b² = 0. In case of a circle the last equation collapses to x₁x₂ + y₁y₂ = 0.

For an ellipse with semi-axes a, b the following is true (theorem of Apollonios): let c₁ and c₂ be halves of two conjugate diameters (see diagram); then

(1) c₁² + c₂² = a² + b²,
(2) the triangle formed by the center and the endpoints of c₁ and c₂ has the constant area A_Δ = ½ ab,
(3) the parallelogram of tangents adjacent to the given conjugate diameters has area Area₁₂ = 4ab.

Proof: Let the ellipse be in the canonical form with parametric equation p(t) = (a cos t, b sin t). The two points c₁ = p(t), c₂ = p(t + π/2) are on conjugate diameters (see previous section). From trigonometric formulae one obtains c₂ = (−a sin t, b cos t)ᵀ and

|c₁|² + |c₂|² = a² + b².

The area of the triangle generated by c₁, c₂ is

A_Δ = ½ |det(c₁, c₂)| = ½ ab,

and from the diagram it can be seen that the area of the parallelogram is 8 times that of A_Δ. Hence Area₁₂ = 4ab.

For the ellipse x²/a² + y²/b² = 1, the intersection points of orthogonal tangents lie on the circle x² + y² = a² + b². This circle is called orthoptic or director circle of the ellipse (not to be confused with the circular directrix defined above).

Ellipses appear in descriptive geometry as images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse (see the sketch below). However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principles of ellipsographs were known to Greek mathematicians such as Archimedes and Proklos.
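Drawing by computer, as mentioned above, amounts to sampling the parametric representation densely and joining the points. A minimal illustrative sketch in Python (names mine) generating polyline points for an ellipse with given center, semi-axes, and rotation angle:

    import math

    def ellipse_points(cx, cy, a, b, theta=0.0, n=256):
        """n points of the ellipse with center (cx, cy), semi-axes a, b,
        rotated by theta, via the form f0 + f1*cos(t) + f2*sin(t)."""
        ct, st = math.cos(theta), math.sin(theta)
        pts = []
        for k in range(n):
            t = 2 * math.pi * k / n
            x, y = a * math.cos(t), b * math.sin(t)   # axis-aligned point
            pts.append((cx + x * ct - y * st,         # rotate, then translate
                        cy + x * st + y * ct))
        return pts

    outline = ellipse_points(0.0, 0.0, 5.0, 3.0, theta=math.radians(30))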
If there is no ellipsograph available, one can draw an ellipse using an approximation by the four osculating circles at the vertices.

For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help of Rytz's construction the axes and semi-axes can be retrieved.

The following construction of single points of an ellipse is due to de La Hire. It is based on the standard parametric representation (a cos t, b sin t) of an ellipse: draw the two circles centered at the center of the ellipse with radii a and b; a ray from the center at angle t meets the large circle at (a cos t, a sin t) and the small circle at (b cos t, b sin t); the vertical line through the first point and the horizontal line through the second meet at the ellipse point (a cos t, b sin t).

The characterization of an ellipse as the locus of points so that the sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is 2a. The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed; thus it is called the gardener's ellipse. A similar method for drawing confocal ellipses with a closed string is due to the Irish bishop Charles Graves.

The two following methods rely on the parametric representation (see § Standard parametric representation, above). This representation can be modeled technically by two simple methods. In both cases the center, the axes and semi axes a, b have to be known.

The first method starts with a strip of paper of length a + b. The point where the semi axes meet is marked by P. If the strip slides with both ends on the axes of the desired ellipse, then point P traces the ellipse. For the proof one shows that point P has the parametric representation (a cos t, b sin t), where parameter t is the angle of the slope of the paper strip.

A technical realization of the motion of the paper strip can be achieved by a Tusi couple (see animation). The device is able to draw any ellipse with a fixed sum a + b, which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method.

A variation of the paper strip method 1 uses the observation that the midpoint N of the paper strip is moving on the circle with center M (of the ellipse) and radius (a + b)/2. Hence, the paper strip can be cut at point N into halves, connected again by a joint at N and the sliding end K fixed at the center M (see diagram). After this operation the movement of the unchanged half of the paper strip is unchanged. This variation requires only one sliding shoe.

The second method starts with a strip of paper of length a. One marks the point which divides the strip into two substrips of length b and a − b. The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved.
For the proof, one recognizes that the tracing point can be described parametrically by (a cos t, b sin t), where parameter t is the angle of slope of the paper strip. This method is the base for several ellipsographs (see section below).

Similar to the variation of the paper strip method 1, a variation of the paper strip method 2 can be established (see diagram) by cutting the part between the axes into halves.

Most ellipsograph drafting instruments are based on the second paper strip method.

From Metric properties below, one obtains that the radius of curvature at the vertices V₁, V₂ is b²/a and at the co-vertices V₃, V₄ it is a²/b. The diagram shows an easy way to find the centers of curvature C₁ = (a − b²/a, 0), C₃ = (0, b − a²/b) at vertex V₁ and co-vertex V₃, respectively: mark the auxiliary point H = (a, b), draw the line segment V₁V₃, and draw the line through H perpendicular to V₁V₃; its intersection points with the axes are the centers of curvature (proof: simple calculation). The centers for the remaining vertices are found by symmetry.

With help of a French curve one draws a curve, which has smooth contact to the osculating circles.

The following method to construct single points of an ellipse relies on the Steiner generation of a conic section: given two pencils of lines at two points U, V (all lines containing U and V, respectively) and a projective but not perspective mapping of one pencil onto the other, the intersection points of corresponding lines form a non-degenerate conic section.

For the generation of points of the ellipse x²/a² + y²/b² = 1 one uses the pencils at the vertices V₁, V₂. Let P = (0, b) be an upper co-vertex of the ellipse and A = (−a, 2b), B = (a, 2b). P is the center of the rectangle V₁, V₂, B, A. The side AB of the rectangle is divided into n equally spaced line segments and this division is projected parallel with the diagonal AV₂ as direction onto the line segment V₁B, and the division is assigned as shown in the diagram. The parallel projection together with the reverse of the orientation is part of the projective mapping between the pencils at V₁ and V₂ needed. The intersection points of any two related lines V₁Bᵢ and V₂Aᵢ are points of the uniquely defined ellipse. With help of the points C₁, … the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse.

Steiner generation can also be defined for hyperbolas and parabolas. It is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle.

The ellipse is a special case of the hypotrochoid when R = 2r, as shown in the adjacent image. The special case of a moving circle with radius r inside a circle with radius R = 2r is called a Tusi couple.

A circle with equation (x − x∘)² + (y − y∘)² = r² is uniquely determined by three points (x₁, y₁), (x₂, y₂), (x₃, y₃) not on a line.
A simple way to determine the parameters x∘, y∘, r uses the inscribed angle theorem for circles. Usually one measures inscribed angles by a degree or radian θ, but here the following measurement is more convenient: the angle between two lines with equations y = m₁x + d₁, y = m₂x + d₂, m₁ ≠ m₂, is measured by the quotient

(1 + m₁m₂) / (m₂ − m₁) = cot θ.

For four points Pᵢ = (xᵢ, yᵢ), i = 1, 2, 3, 4, no three of them on a line, we have the following (see diagram): the four points are on a circle if and only if the angles at P₃ and P₄, in the sense of this measurement, are equal. At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord. A consequence is the three-point form of the circle through P₁, P₂, P₃:

((x − x₁)(x − x₂) + (y − y₁)(y − y₂)) / ((y − y₁)(x − x₂) − (y − y₂)(x − x₁)) = ((x₃ − x₁)(x₃ − x₂) + (y₃ − y₁)(y₃ − y₂)) / ((y₃ − y₁)(x₃ − x₂) − (y₃ − y₂)(x₃ − x₁)).

For example, for P₁ = (2, 0), P₂ = (0, 1), P₃ = (0, 0) the three-point equation is:

(x − 2) x + y (y − 1) = 0, which can be rearranged to (x − 1)² + (y − ½)² = 5/4.

Using vectors, dot products and determinants this formula can be arranged more clearly, letting x = (x, y):

((x − x₁)·(x − x₂)) / det(x − x₁, x − x₂) = ((x₃ − x₁)·(x₃ − x₂)) / det(x₃ − x₁, x₃ − x₂).

The center of the circle (x∘, y∘) is equidistant from the three points and therefore lies on the perpendicular bisectors of the chords P₁P₂ and P₂P₃. The radius is the distance between any of the three points and the center.

This section considers the family of ellipses defined by equations (x − x∘)²/a² + (y − y∘)²/b² = 1 with a fixed eccentricity e. It is convenient to use the parameter

q = a²/b² = 1/(1 − e²)

and to write the ellipse equation as:

(x − x∘)² + q (y − y∘)² = a²,

where q is fixed and x∘, y∘, a vary over the real numbers. (Such ellipses have their axes parallel to the coordinate axes: if q > 1, the major axis is parallel to the x-axis; if q < 1, it is parallel to the y-axis.) Like a circle, such an ellipse is determined by three points not on a line.

For this family of ellipses, one introduces the following q-analog angle measure, which is not a function of the usual angle measure θ: the angle between two lines with equations y = m₁x + d₁, y = m₂x + d₂, m₁ ≠ m₂, is measured by the quotient

(1 + q m₁m₂) / (m₂ − m₁).

The inscribed angle theorem then reads: four points Pᵢ, no three of them on a line, lie on an ellipse of this family if and only if the angles at P₃ and P₄, in the sense of this measurement, are equal. At first the measure is available only for chords which are not parallel to the y-axis. But the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin.

For example, for P₁ = (2, 0), P₂ = (0, 1), P₃ = (0, 0) and q = 4 one obtains the three-point form

(x − 2) x + 4 y (y − 1) = 0, equivalently (x − 1)² + 4 (y − ½)² = 2.

Analogously to the circle case, the equation can be written more clearly using vectors:

((x − x₁) ∗ (x − x₂)) / det(x − x₁, x − x₂) = ((x₃ − x₁) ∗ (x₃ − x₂)) / det(x₃ − x₁, x₃ − x₂),

where ∗ is the modified dot product u ∗ v = uₓvₓ + q uᵧvᵧ.

Any ellipse can be described in a suitable coordinate system by an equation x²/a² + y²/b² = 1. The equation of the tangent at a point P₁ = (x₁, y₁) of the ellipse is x₁x/a² + y₁y/b² = 1. If one allows point P₁ = (x₁, y₁) to be an arbitrary point different from the origin, then point P₁ ≠ (0, 0) is mapped onto the line x₁x/a² + y₁y/b² = 1, which does not pass through the center of the ellipse. This relation between points and lines is a bijection.

The inverse function maps the line y = mx + d, d ≠ 0, onto the point (−ma²/d, b²/d), and the line x = c, c ≠ 0, onto the point (a²/c, 0).

Such a relation between points and lines generated by a conic is called pole-polar relation or polarity. The pole is the point; the polar the line.
By calculation one can confirm the following properties of the pole-polar relation of the ellipse: for a point (pole) on the ellipse, the polar is the tangent at this point; for a pole outside the ellipse, the intersection points of its polar with the ellipse are the tangency points of the two tangents passing through the pole; for a point within the ellipse, the polar has no point in common with the ellipse. Pole-polar relations exist for hyperbolas and parabolas as well.

All metric properties given below refer to an ellipse with equation

x²/a² + y²/b² = 1    (1)

except for the section on the area enclosed by a tilted ellipse, where the generalized form of Eq.(1) will be given.

The area A_ellipse enclosed by an ellipse is:

A_ellipse = πab,

where a and b are the lengths of the semi-major and semi-minor axes, respectively. The area formula πab is intuitive: start with a circle of radius b (so its area is πb²) and stretch it by a factor a/b to make an ellipse. This scales the area by the same factor: πb²(a/b) = πab. However, using the same approach for the circumference would be fallacious – compare the integrals ∫ f(x) dx and ∫ √(1 + f′²(x)) dx.

It is also easy to rigorously prove the area formula using integration as follows. Equation (1) can be rewritten as y(x) = b √(1 − x²/a²). For x ∈ [−a, a], this curve is the top half of the ellipse. So twice the integral of y(x) over the interval [−a, a] will be the area of the ellipse:

A_ellipse = 2 ∫₋ₐ⁺ᵃ b √(1 − x²/a²) dx = (b/a) · 2 ∫₋ₐ⁺ᵃ √(a² − x²) dx.

The second integral is the area of a circle of radius a, that is, πa². So

A_ellipse = (b/a) · πa² = πab.

An ellipse defined implicitly by Ax² + Bxy + Cy² = 1 has area 2π/√(4AC − B²).

The area can also be expressed in terms of eccentricity and the length of the semi-major axis as a²π√(1 − e²) (obtained by solving for flattening, then computing the semi-minor axis).

So far we have dealt with erect ellipses, whose major and minor axes are parallel to the x and y axes. However, some applications require tilted ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its emittance. In this case a simple formula still applies, namely

A = π y_int x_max = π x_int y_max,

where y_int, x_int are intercepts and x_max, y_max are maximum values. It follows directly from Apollonios's theorem.

The circumference C of an ellipse is:

C = 4a ∫₀^(π/2) √(1 − e² sin²θ) dθ = 4a E(e),

where again a is the length of the semi-major axis, e = √(1 − b²/a²) is the eccentricity, and the function E is the complete elliptic integral of the second kind, which is in general not an elementary function. The circumference of the ellipse may be evaluated in terms of E(e) using Gauss's arithmetic-geometric mean; this is a quadratically converging iterative method.

The exact infinite series is:

C = 2πa [ 1 − Σₙ₌₁^∞ ( (2n − 1)!! / (2n)!! )² e²ⁿ / (2n − 1) ],

where n!! is the double factorial (extended to negative odd integers by the recurrence relation (2n − 1)!! = (2n + 1)!!/(2n + 1), for n ≤ 0).
The exact infinite series is

$$C = 2\pi a \left[ 1 - \left(\frac{1}{2}\right)^{\!2} e^2 - \left(\frac{1 \cdot 3}{2 \cdot 4}\right)^{\!2} \frac{e^4}{3} - \left(\frac{1 \cdot 3 \cdot 5}{2 \cdot 4 \cdot 6}\right)^{\!2} \frac{e^6}{5} - \cdots \right] = -2\pi a \sum_{n=0}^{\infty} \left(\frac{(2n-1)!!}{(2n)!!}\right)^{\!2} \frac{e^{2n}}{2n - 1} ,$$

where $n!!$ is the double factorial (extended to negative odd integers by the recurrence relation $(2n-1)!! = (2n+1)!!/(2n+1)$, for $n \leq 0$). This series converges, but by expanding in terms of $h = (a - b)^2/(a + b)^2$, James Ivory and Bessel derived an expression that converges much more rapidly:

$$C = \pi (a + b) \sum_{n=0}^{\infty} \binom{1/2}{n}^{\!2} h^n = \pi (a + b) \left[ 1 + \frac{h}{4} + \frac{h^2}{64} + \frac{h^3}{256} + \cdots \right] .$$

Srinivasa Ramanujan gave two close approximations for the circumference in §16 of "Modular Equations and Approximations to $\pi$"; they are

$$C \approx \pi \left[ 3(a + b) - \sqrt{(3a + b)(a + 3b)} \right]$$

and

$$C \approx \pi (a + b) \left( 1 + \frac{3h}{10 + \sqrt{4 - 3h}} \right) ,$$

where $h$ takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order $h^3$ and $h^5$, respectively.
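As a quick check on the stated error orders, both approximations are easy to evaluate. The sketch below (function names are ours) compares them for $a = 2$, $b = 1$, where $h = 1/9$ and the exact circumference is $9.68844822\ldots$

```python
import math

def ramanujan_1(a, b):
    # C ~ pi * (3(a + b) - sqrt((3a + b)(a + 3b))), error of order h^3
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))

def ramanujan_2(a, b):
    # C ~ pi * (a + b) * (1 + 3h / (10 + sqrt(4 - 3h))), error of order h^5
    h = ((a - b) / (a + b)) ** 2
    return math.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + math.sqrt(4.0 - 3.0 * h)))

print(ramanujan_1(2.0, 1.0))   # ~ 9.6884211, off by ~ 2.7e-5
print(ramanujan_2(2.0, 1.0))   # ~ 9.6884484, off by ~ 2e-7
```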
More generally, the arc length of a portion of the circumference, as a function of the angle subtended (or the x-coordinates of any two points on the upper half of the ellipse), is given by an incomplete elliptic integral. The upper half of an ellipse is parameterized by

$$y = b \sqrt{1 - \frac{x^2}{a^2}} .$$

Then the arc length $s$ from $x_1$ to $x_2$ is

$$s = a \left[ E\!\left( \arcsin\frac{x_2}{a} \;\middle|\; e^2 \right) - E\!\left( \arcsin\frac{x_1}{a} \;\middle|\; e^2 \right) \right] ,$$

where $E(z \mid m)$ is the incomplete elliptic integral of the second kind with parameter $m = k^2$, here with $k = e$.

Some lower and upper bounds on the circumference of the canonical ellipse $x^2/a^2 + y^2/b^2 = 1$ with $a \geq b$ are

$$2\pi b \;\leq\; C \;\leq\; 2\pi a , \qquad \pi (a + b) \;\leq\; C \;\leq\; 4 (a + b) , \qquad 4\sqrt{a^2 + b^2} \;\leq\; C \;\leq\; \pi \sqrt{2\left(a^2 + b^2\right)} .$$

Here the upper bound $2\pi a$ is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound $4\sqrt{a^2 + b^2}$ is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and the minor axes.

The curvature is given by

$$\kappa = \frac{1}{a^2 b^2} \left( \frac{x^2}{a^4} + \frac{y^2}{b^4} \right)^{-\frac{3}{2}} ,$$

and the radius of curvature at point $(x, y)$ is its reciprocal:

$$\rho = a^2 b^2 \left( \frac{x^2}{a^4} + \frac{y^2}{b^4} \right)^{\frac{3}{2}} .$$

The radius of curvature at the two vertices $(\pm a, 0)$ and the corresponding centers of curvature are

$$\rho_0 = \frac{b^2}{a} = \ell , \qquad \left( \pm \frac{c^2}{a},\ 0 \right) ;$$

the radius of curvature at the two co-vertices $(0, \pm b)$ and the corresponding centers of curvature are

$$\rho_1 = \frac{a^2}{b} , \qquad \left( 0,\ \pm \frac{b^2 - a^2}{b} \right) .$$

Ellipses appear in triangle geometry as the Steiner ellipse (the ellipse through the vertices of the triangle, with the centroid as its center) and as inellipses (ellipses that touch the sides of a triangle), notably the Steiner inellipse and the Mandart inellipse. Ellipses also appear as plane sections of quadrics: ellipsoids, elliptic cones, elliptic cylinders, elliptic paraboloids, and hyperboloids of one or two sheets.

If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.

Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center, all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners.

Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; an exhibit on sound at the Museum of Science and Industry in Chicago; the space in front of the Foellinger Auditorium at the University of Illinois at Urbana–Champaign; and a side chamber of the Palace of Charles V, in the Alhambra.

In the 17th century, Johannes Kepler discovered, in his first law of planetary motion, that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus. Later, Isaac Newton explained this as a corollary of his law of universal gravitation.

More generally, in the gravitational two-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus.

Keplerian elliptical orbits are the result of any radially directed attractive force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.)

For elliptical orbits, useful relations involving the eccentricity $e$ are

$$e = \frac{r_a - r_p}{r_a + r_p} = \frac{r_a - r_p}{2a} , \qquad r_a = (1 + e)\,a , \qquad r_p = (1 - e)\,a ,$$

where $r_a$ is the radius at apoapsis (the farthest distance from the focus) and $r_p$ is the radius at periapsis (the closest distance). Also, in terms of $r_a$ and $r_p$, the semi-major axis $a$ is their arithmetic mean, the semi-minor axis $b$ is their geometric mean, and the semi-latus rectum $\ell$ is their harmonic mean. In other words,

$$a = \frac{r_a + r_p}{2} , \qquad b = \sqrt{r_a\, r_p} , \qquad \ell = \frac{2\, r_a\, r_p}{r_a + r_p} .$$
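A small sketch of these relations in Python (the function name is ours), applied to rough apsis distances for Earth's orbit:

```python
import math

def orbit_from_apsides(r_a, r_p):
    """Shape parameters of a Keplerian orbit from the apoapsis distance r_a
    and the periapsis distance r_p, both measured from the occupied focus."""
    e = (r_a - r_p) / (r_a + r_p)          # eccentricity
    a = 0.5 * (r_a + r_p)                  # semi-major axis: arithmetic mean
    b = math.sqrt(r_a * r_p)               # semi-minor axis: geometric mean
    ell = 2.0 * r_a * r_p / (r_a + r_p)    # semi-latus rectum: harmonic mean
    return e, a, b, ell

# Earth's orbit, with approximate apsis distances in km
# (aphelion ~ 152.1e6, perihelion ~ 147.1e6):
e, a, b, ell = orbit_from_apsides(152.1e6, 147.1e6)
print(round(e, 4))   # ~ 0.0167
```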
The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under the influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these "harmonic orbits" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion.

In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the Lissajous figure display is an ellipse, rather than a straight line, the two signals are out of phase.

Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage. Elliptical bicycle gears also make it easier for the chain to slide off the cog when changing gears. An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine: the bobbin would need to wind faster when the thread is near the apex than when it is near the base.

In statistics, a bivariate random vector $(X, Y)$ is jointly elliptically distributed if its iso-density contours (loci of equal values of the density function) are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. Elliptical distributions are important in finance because, if rates of return on assets are jointly elliptically distributed, then all portfolios can be characterized completely by their mean and variance: any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.

Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967. Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken. In 1970 Danny Cohen presented a linear algorithm for drawing ellipses and circles at the "Computer Graphics 1970" conference in England, and in 1971 L. B. Smith published similar algorithms for all conic sections and proved them to have good properties. These algorithms need only a few multiplications and additions to calculate each vector.

It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent "jaggedness" of the approximation.

Composite Bézier curves may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bézier curves behave appropriately under such transformations; a sketch of this construction closes the section. It is sometimes useful to find the minimum bounding ellipse of a set of points; the ellipsoid method is quite useful for solving this problem.
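As an illustration of the composite-Bézier approach just mentioned, the following sketch builds an axis-aligned ellipse from four cubic arcs by scaling the standard four-arc circle approximation; the constant $\kappa = \tfrac{4}{3}(\sqrt{2} - 1)$ and the control-point layout are the usual quarter-circle fit, not tied to any particular graphics API.

```python
import math

# Cubic-Bezier constant for a quarter circle: 4*(sqrt(2)-1)/3 ~ 0.5522847498
KAPPA = 4.0 * (math.sqrt(2.0) - 1.0) / 3.0

def ellipse_as_beziers(cx, cy, a, b):
    """Axis-aligned ellipse with center (cx, cy) and semi-axes a, b as four
    cubic Bezier arcs, each returned as (start, control1, control2, end).

    Scaling the unit-circle approximation by (a, b) is legitimate because
    Bezier curves commute with affine maps: transforming the control points
    transforms the whole curve."""
    k_a, k_b = KAPPA * a, KAPPA * b
    p_e, p_n = (cx + a, cy), (cx, cy + b)     # east and north vertices
    p_w, p_s = (cx - a, cy), (cx, cy - b)     # west and south vertices
    return [
        (p_e, (cx + a, cy + k_b), (cx + k_a, cy + b), p_n),   # quadrant I
        (p_n, (cx - k_a, cy + b), (cx - a, cy + k_b), p_w),   # quadrant II
        (p_w, (cx - a, cy - k_b), (cx - k_a, cy - b), p_s),   # quadrant III
        (p_s, (cx + k_a, cy - b), (cx + a, cy - k_b), p_e),   # quadrant IV
    ]

# Example: the four arcs of the ellipse x^2/4 + y^2 = 1
for arc in ellipse_as_beziers(0.0, 0.0, 2.0, 1.0):
    print(arc)
```

The underlying circle fit has a maximum radial error of roughly 0.03 percent of the radius, which is sufficient for most display purposes.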
[ { "paragraph_id": 0, "text": "In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e {\\displaystyle e} , a number ranging from e = 0 {\\displaystyle e=0} (the limiting case of a circle) to e = 1 {\\displaystyle e=1} (the limiting case of infinite elongation, no longer an ellipse but a parabola).", "title": "" }, { "paragraph_id": 1, "text": "An ellipse has a simple algebraic solution for its area, but only approximations for its perimeter (also known as circumference), for which integration is required to obtain an exact solution.", "title": "" }, { "paragraph_id": 2, "text": "Analytically, the equation of a standard ellipse centered at the origin with width 2 a {\\displaystyle 2a} and height 2 b {\\displaystyle 2b} is:", "title": "" }, { "paragraph_id": 3, "text": "Assuming a ≥ b {\\displaystyle a\\geq b} , the foci are ( ± c , 0 ) {\\displaystyle (\\pm c,0)} for c = a 2 − b 2 {\\textstyle c={\\sqrt {a^{2}-b^{2}}}} . The standard parametric equation is:", "title": "" }, { "paragraph_id": 4, "text": "Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane (see figure). Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse.", "title": "" }, { "paragraph_id": 5, "text": "An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity:", "title": "" }, { "paragraph_id": 6, "text": "Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point (more precisely, the focus is the barycenter of the Sun–planet pair). The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics.", "title": "" }, { "paragraph_id": 7, "text": "The name, ἔλλειψις (élleipsis, \"omission\"), was given by Apollonius of Perga in his Conics.", "title": "" }, { "paragraph_id": 8, "text": "An ellipse can be defined geometrically as a set or locus of points in the Euclidean plane:", "title": "Definition as locus of points" }, { "paragraph_id": 9, "text": "The midpoint C {\\displaystyle C} of the line segment joining the foci is called the center of the ellipse. The line through the foci is called the major axis, and the line perpendicular to it through the center is the minor axis. The major axis intersects the ellipse at two vertices V 1 , V 2 {\\displaystyle V_{1},V_{2}} , which have distance a {\\displaystyle a} to the center. 
The distance c {\\displaystyle c} of the foci to the center is called the focal distance or linear eccentricity. The quotient e = c a {\\displaystyle e={\\tfrac {c}{a}}} is the eccentricity.", "title": "Definition as locus of points" }, { "paragraph_id": 10, "text": "The case F 1 = F 2 {\\displaystyle F_{1}=F_{2}} yields a circle and is included as a special type of ellipse.", "title": "Definition as locus of points" }, { "paragraph_id": 11, "text": "The equation | P F 2 | + | P F 1 | = 2 a {\\displaystyle \\left|PF_{2}\\right|+\\left|PF_{1}\\right|=2a} can be viewed in a different way (see figure):", "title": "Definition as locus of points" }, { "paragraph_id": 12, "text": "c 2 {\\displaystyle c_{2}} is called the circular directrix (related to focus F 2 {\\displaystyle F_{2}} ) of the ellipse. This property should not be confused with the definition of an ellipse using a directrix line below.", "title": "Definition as locus of points" }, { "paragraph_id": 13, "text": "Using Dandelin spheres, one can prove that any section of a cone with a plane is an ellipse, assuming the plane does not contain the apex and has slope less than that of the lines on the cone.", "title": "Definition as locus of points" }, { "paragraph_id": 14, "text": "The standard form of an ellipse in Cartesian coordinates assumes that the origin is the center of the ellipse, the x-axis is the major axis, and:", "title": "In Cartesian coordinates" }, { "paragraph_id": 15, "text": "For an arbitrary point ( x , y ) {\\displaystyle (x,y)} the distance to the focus ( c , 0 ) {\\displaystyle (c,0)} is ( x − c ) 2 + y 2 {\\textstyle {\\sqrt {(x-c)^{2}+y^{2}}}} and to the other focus ( x + c ) 2 + y 2 {\\textstyle {\\sqrt {(x+c)^{2}+y^{2}}}} . Hence the point ( x , y ) {\\displaystyle (x,\\,y)} is on the ellipse whenever:", "title": "In Cartesian coordinates" }, { "paragraph_id": 16, "text": "Removing the radicals by suitable squarings and using b 2 = a 2 − c 2 {\\displaystyle b^{2}=a^{2}-c^{2}} (see diagram) produces the standard equation of the ellipse:", "title": "In Cartesian coordinates" }, { "paragraph_id": 17, "text": "or, solved for y:", "title": "In Cartesian coordinates" }, { "paragraph_id": 18, "text": "The width and height parameters a , b {\\displaystyle a,\\;b} are called the semi-major and semi-minor axes. The top and bottom points V 3 = ( 0 , b ) , V 4 = ( 0 , − b ) {\\displaystyle V_{3}=(0,\\,b),\\;V_{4}=(0,\\,-b)} are the co-vertices. The distances from a point ( x , y ) {\\displaystyle (x,\\,y)} on the ellipse to the left and right foci are a + e x {\\displaystyle a+ex} and a − e x {\\displaystyle a-ex} .", "title": "In Cartesian coordinates" }, { "paragraph_id": 19, "text": "It follows from the equation that the ellipse is symmetric with respect to the coordinate axes and hence with respect to the origin.", "title": "In Cartesian coordinates" }, { "paragraph_id": 20, "text": "Throughout this article, the semi-major and semi-minor axes are denoted a {\\displaystyle a} and b {\\displaystyle b} , respectively, i.e. a ≥ b > 0 . {\\displaystyle a\\geq b>0\\ .}", "title": "In Cartesian coordinates" }, { "paragraph_id": 21, "text": "In principle, the canonical ellipse equation x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} may have a < b {\\displaystyle a<b} (and hence the ellipse would be taller than it is wide). 
This form can be converted to the standard form by transposing the variable names x {\\displaystyle x} and y {\\displaystyle y} and the parameter names a {\\displaystyle a} and b . {\\displaystyle b.}", "title": "In Cartesian coordinates" }, { "paragraph_id": 22, "text": "This is the distance from the center to a focus: c = a 2 − b 2 {\\displaystyle c={\\sqrt {a^{2}-b^{2}}}} .", "title": "In Cartesian coordinates" }, { "paragraph_id": 23, "text": "The eccentricity can be expressed as:", "title": "In Cartesian coordinates" }, { "paragraph_id": 24, "text": "assuming a > b . {\\displaystyle a>b.} An ellipse with equal axes ( a = b {\\displaystyle a=b} ) has zero eccentricity, and is a circle.", "title": "In Cartesian coordinates" }, { "paragraph_id": 25, "text": "The length of the chord through one focus, perpendicular to the major axis, is called the latus rectum. One half of it is the semi-latus rectum ℓ {\\displaystyle \\ell } . A calculation shows:", "title": "In Cartesian coordinates" }, { "paragraph_id": 26, "text": "The semi-latus rectum ℓ {\\displaystyle \\ell } is equal to the radius of curvature at the vertices (see section curvature).", "title": "In Cartesian coordinates" }, { "paragraph_id": 27, "text": "An arbitrary line g {\\displaystyle g} intersects an ellipse at 0, 1, or 2 points, respectively called an exterior line, tangent and secant. Through any point of an ellipse there is a unique tangent. The tangent at a point ( x 1 , y 1 ) {\\displaystyle (x_{1},\\,y_{1})} of the ellipse x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} has the coordinate equation:", "title": "In Cartesian coordinates" }, { "paragraph_id": 28, "text": "A vector parametric equation of the tangent is:", "title": "In Cartesian coordinates" }, { "paragraph_id": 29, "text": "Proof: Let ( x 1 , y 1 ) {\\displaystyle (x_{1},\\,y_{1})} be a point on an ellipse and x → = ( x 1 y 1 ) + s ( u v ) {\\textstyle {\\vec {x}}={\\begin{pmatrix}x_{1}\\\\y_{1}\\end{pmatrix}}+s{\\begin{pmatrix}u\\\\v\\end{pmatrix}}} be the equation of any line g {\\displaystyle g} containing ( x 1 , y 1 ) {\\displaystyle (x_{1},\\,y_{1})} . Inserting the line's equation into the ellipse equation and respecting x 1 2 a 2 + y 1 2 b 2 = 1 {\\textstyle {\\frac {x_{1}^{2}}{a^{2}}}+{\\frac {y_{1}^{2}}{b^{2}}}=1} yields:", "title": "In Cartesian coordinates" }, { "paragraph_id": 30, "text": "There are then cases:", "title": "In Cartesian coordinates" }, { "paragraph_id": 31, "text": "Using (1) one finds that ( − y 1 a 2 x 1 b 2 ) {\\displaystyle {\\begin{pmatrix}-y_{1}a^{2}&x_{1}b^{2}\\end{pmatrix}}} is a tangent vector at point ( x 1 , y 1 ) {\\displaystyle (x_{1},\\,y_{1})} , which proves the vector equation.", "title": "In Cartesian coordinates" }, { "paragraph_id": 32, "text": "If ( x 1 , y 1 ) {\\displaystyle (x_{1},y_{1})} and ( u , v ) {\\displaystyle (u,v)} are two points of the ellipse such that x 1 u a 2 + y 1 v b 2 = 0 {\\textstyle {\\frac {x_{1}u}{a^{2}}}+{\\tfrac {y_{1}v}{b^{2}}}=0} , then the points lie on two conjugate diameters (see below). 
(If a = b {\\displaystyle a=b} , the ellipse is a circle and \"conjugate\" means \"orthogonal\".)", "title": "In Cartesian coordinates" }, { "paragraph_id": 33, "text": "If the standard ellipse is shifted to have center ( x ∘ , y ∘ ) {\\displaystyle \\left(x_{\\circ },\\,y_{\\circ }\\right)} , its equation is", "title": "In Cartesian coordinates" }, { "paragraph_id": 34, "text": "The axes are still parallel to the x- and y-axes.", "title": "In Cartesian coordinates" }, { "paragraph_id": 35, "text": "In analytic geometry, the ellipse is defined as a quadric: the set of points ( x , y ) {\\displaystyle (x,\\,y)} of the Cartesian plane that, in non-degenerate cases, satisfy the implicit equation", "title": "In Cartesian coordinates" }, { "paragraph_id": 36, "text": "provided B 2 − 4 A C < 0. {\\displaystyle B^{2}-4AC<0.}", "title": "In Cartesian coordinates" }, { "paragraph_id": 37, "text": "To distinguish the degenerate cases from the non-degenerate case, let ∆ be the determinant", "title": "In Cartesian coordinates" }, { "paragraph_id": 38, "text": "Then the ellipse is a non-degenerate real ellipse if and only if C∆ < 0. If C∆ > 0, we have an imaginary ellipse, and if ∆ = 0, we have a point ellipse.", "title": "In Cartesian coordinates" }, { "paragraph_id": 39, "text": "The general equation's coefficients can be obtained from known semi-major axis a {\\displaystyle a} , semi-minor axis b {\\displaystyle b} , center coordinates ( x ∘ , y ∘ ) {\\displaystyle \\left(x_{\\circ },\\,y_{\\circ }\\right)} , and rotation angle θ {\\displaystyle \\theta } (the angle from the positive horizontal axis to the ellipse's major axis) using the formulae:", "title": "In Cartesian coordinates" }, { "paragraph_id": 40, "text": "These expressions can be derived from the canonical equation", "title": "In Cartesian coordinates" }, { "paragraph_id": 41, "text": "by a Euclidean transformation of the coordinates ( X , Y ) {\\displaystyle (X,\\,Y)} :", "title": "In Cartesian coordinates" }, { "paragraph_id": 42, "text": "Conversely, the canonical form parameters can be obtained from the general-form coefficients by the equations:", "title": "In Cartesian coordinates" }, { "paragraph_id": 43, "text": "where atan2 is the 2-argument arctangent function.", "title": "In Cartesian coordinates" }, { "paragraph_id": 44, "text": "Using trigonometric functions, a parametric representation of the standard ellipse x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} is:", "title": "Parametric representation" }, { "paragraph_id": 45, "text": "The parameter t (called the eccentric anomaly in astronomy) is not the angle of ( x ( t ) , y ( t ) ) {\\displaystyle (x(t),y(t))} with the x-axis, but has a geometric meaning due to Philippe de La Hire (see § Drawing ellipses below).", "title": "Parametric representation" }, { "paragraph_id": 46, "text": "With the substitution u = tan ( t 2 ) {\\textstyle u=\\tan \\left({\\frac {t}{2}}\\right)} and trigonometric formulae one obtains", "title": "Parametric representation" }, { "paragraph_id": 47, "text": "and the rational parametric equation of an ellipse", "title": "Parametric representation" }, { "paragraph_id": 48, "text": "which covers any point of the ellipse x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} except the left vertex ( − a , 0 ) {\\displaystyle (-a,\\,0)} .", "title": "Parametric representation" }, { "paragraph_id": 49, "text": "For u ∈ [ 0 , 1 ] , {\\displaystyle u\\in [0,\\,1],} this formula 
represents the right upper quarter of the ellipse moving counter-clockwise with increasing u . {\\displaystyle u.} The left vertex is the limit lim u → ± ∞ ( x ( u ) , y ( u ) ) = ( − a , 0 ) . {\\textstyle \\lim _{u\\to \\pm \\infty }(x(u),\\,y(u))=(-a,\\,0)\\;.}", "title": "Parametric representation" }, { "paragraph_id": 50, "text": "Alternately, if the parameter [ u : v ] {\\displaystyle [u:v]} is considered to be a point on the real projective line P ( R ) {\\textstyle \\mathbf {P} (\\mathbf {R} )} , then the corresponding rational parametrization is", "title": "Parametric representation" }, { "paragraph_id": 51, "text": "Then [ 1 : 0 ] ↦ ( − a , 0 ) . {\\textstyle [1:0]\\mapsto (-a,\\,0).}", "title": "Parametric representation" }, { "paragraph_id": 52, "text": "Rational representations of conic sections are commonly used in computer-aided design (see Bezier curve).", "title": "Parametric representation" }, { "paragraph_id": 53, "text": "A parametric representation, which uses the slope m {\\displaystyle m} of the tangent at a point of the ellipse can be obtained from the derivative of the standard representation x → ( t ) = ( a cos t , b sin t ) T {\\displaystyle {\\vec {x}}(t)=(a\\cos t,\\,b\\sin t)^{\\mathsf {T}}} :", "title": "Parametric representation" }, { "paragraph_id": 54, "text": "With help of trigonometric formulae one obtains:", "title": "Parametric representation" }, { "paragraph_id": 55, "text": "Replacing cos t {\\displaystyle \\cos t} and sin t {\\displaystyle \\sin t} of the standard representation yields:", "title": "Parametric representation" }, { "paragraph_id": 56, "text": "Here m {\\displaystyle m} is the slope of the tangent at the corresponding ellipse point, c → + {\\displaystyle {\\vec {c}}_{+}} is the upper and c → − {\\displaystyle {\\vec {c}}_{-}} the lower half of the ellipse. The vertices ( ± a , 0 ) {\\displaystyle (\\pm a,\\,0)} , having vertical tangents, are not covered by the representation.", "title": "Parametric representation" }, { "paragraph_id": 57, "text": "The equation of the tangent at point c → ± ( m ) {\\displaystyle {\\vec {c}}_{\\pm }(m)} has the form y = m x + n {\\displaystyle y=mx+n} . The still unknown n {\\displaystyle n} can be determined by inserting the coordinates of the corresponding ellipse point c → ± ( m ) {\\displaystyle {\\vec {c}}_{\\pm }(m)} :", "title": "Parametric representation" }, { "paragraph_id": 58, "text": "This description of the tangents of an ellipse is an essential tool for the determination of the orthoptic of an ellipse. The orthoptic article contains another proof, without differential calculus and trigonometric formulae.", "title": "Parametric representation" }, { "paragraph_id": 59, "text": "Another definition of an ellipse uses affine transformations:", "title": "Parametric representation" }, { "paragraph_id": 60, "text": "An affine transformation of the Euclidean plane has the form x → ↦ f → 0 + A x → {\\displaystyle {\\vec {x}}\\mapsto {\\vec {f}}\\!_{0}+A{\\vec {x}}} , where A {\\displaystyle A} is a regular matrix (with non-zero determinant) and f → 0 {\\displaystyle {\\vec {f}}\\!_{0}} is an arbitrary vector. 
If f → 1 , f → 2 {\\displaystyle {\\vec {f}}\\!_{1},{\\vec {f}}\\!_{2}} are the column vectors of the matrix A {\\displaystyle A} , the unit circle ( cos ( t ) , sin ( t ) ) {\\displaystyle (\\cos(t),\\sin(t))} , 0 ≤ t ≤ 2 π {\\displaystyle 0\\leq t\\leq 2\\pi } , is mapped onto the ellipse:", "title": "Parametric representation" }, { "paragraph_id": 61, "text": "Here f → 0 {\\displaystyle {\\vec {f}}\\!_{0}} is the center and f → 1 , f → 2 {\\displaystyle {\\vec {f}}\\!_{1},\\;{\\vec {f}}\\!_{2}} are the directions of two conjugate diameters, in general not perpendicular.", "title": "Parametric representation" }, { "paragraph_id": 62, "text": "The four vertices of the ellipse are p → ( t 0 ) , p → ( t 0 ± π 2 ) , p → ( t 0 + π ) {\\displaystyle {\\vec {p}}(t_{0}),\\;{\\vec {p}}\\left(t_{0}\\pm {\\tfrac {\\pi }{2}}\\right),\\;{\\vec {p}}\\left(t_{0}+\\pi \\right)} , for a parameter t = t 0 {\\displaystyle t=t_{0}} defined by:", "title": "Parametric representation" }, { "paragraph_id": 63, "text": "(If f → 1 ⋅ f → 2 = 0 {\\displaystyle {\\vec {f}}\\!_{1}\\cdot {\\vec {f}}\\!_{2}=0} , then t 0 = 0 {\\displaystyle t_{0}=0} .) This is derived as follows. The tangent vector at point p → ( t ) {\\displaystyle {\\vec {p}}(t)} is:", "title": "Parametric representation" }, { "paragraph_id": 64, "text": "At a vertex parameter t = t 0 {\\displaystyle t=t_{0}} , the tangent is perpendicular to the major/minor axes, so:", "title": "Parametric representation" }, { "paragraph_id": 65, "text": "Expanding and applying the identities cos 2 t − sin 2 t = cos 2 t , 2 sin t cos t = sin 2 t {\\displaystyle \\;\\cos ^{2}t-\\sin ^{2}t=\\cos 2t,\\ \\ 2\\sin t\\cos t=\\sin 2t\\;} gives the equation for t = t 0 . {\\displaystyle t=t_{0}\\;.}", "title": "Parametric representation" }, { "paragraph_id": 66, "text": "From Apollonios theorem (see below) one obtains: The area of an ellipse x → = f → 0 + f → 1 cos t + f → 2 sin t {\\displaystyle \\;{\\vec {x}}={\\vec {f}}_{0}+{\\vec {f}}_{1}\\cos t+{\\vec {f}}_{2}\\sin t\\;} is", "title": "Parametric representation" }, { "paragraph_id": 67, "text": "With the abbreviations M = f → 1 2 + f → 2 2 , N = | det ( f → 1 , f → 2 ) | {\\displaystyle \\;M={\\vec {f}}_{1}^{2}+{\\vec {f}}_{2}^{2},\\ N=\\left|\\det({\\vec {f}}_{1},{\\vec {f}}_{2})\\right|} the statements of Apollonios's theorem can be written as:", "title": "Parametric representation" }, { "paragraph_id": 68, "text": "Solving this nonlinear system for a , b {\\displaystyle a,b} yields the semiaxes:", "title": "Parametric representation" }, { "paragraph_id": 69, "text": "Solving the parametric representation for cos t , sin t {\\displaystyle \\;\\cos t,\\sin t\\;} by Cramer's rule and using cos 2 t + sin 2 t − 1 = 0 {\\displaystyle \\;\\cos ^{2}t+\\sin ^{2}t-1=0\\;} , one obtains the implicit representation", "title": "Parametric representation" }, { "paragraph_id": 70, "text": "Conversely: If the equation", "title": "Parametric representation" }, { "paragraph_id": 71, "text": "of an ellipse centered at the origin is given, then the two vectors", "title": "Parametric representation" }, { "paragraph_id": 72, "text": "point to two conjugate points and the tools developed above are applicable.", "title": "Parametric representation" }, { "paragraph_id": 73, "text": "Example: For the ellipse with equation x 2 + 2 x y + 3 y 2 − 1 = 0 {\\displaystyle \\;x^{2}+2xy+3y^{2}-1=0\\;} the vectors are", "title": "Parametric representation" }, { "paragraph_id": 74, "text": "For f → 0 = ( 0 0 ) , f → 1 = a ( cos θ sin θ ) , f → 2 = b ( 
− sin θ cos θ ) {\\displaystyle {\\vec {f}}_{0}={0 \\choose 0},\\;{\\vec {f}}_{1}=a{\\cos \\theta \\choose \\sin \\theta },\\;{\\vec {f}}_{2}=b{-\\sin \\theta \\choose \\;\\cos \\theta }} one obtains a parametric representation of the standard ellipse rotated by angle θ {\\displaystyle \\theta } :", "title": "Parametric representation" }, { "paragraph_id": 75, "text": "The definition of an ellipse in this section gives a parametric representation of an arbitrary ellipse, even in space, if one allows f → 0 , f → 1 , f → 2 {\\displaystyle {\\vec {f}}\\!_{0},{\\vec {f}}\\!_{1},{\\vec {f}}\\!_{2}} to be vectors in space.", "title": "Parametric representation" }, { "paragraph_id": 76, "text": "In polar coordinates, with the origin at the center of the ellipse and with the angular coordinate θ {\\displaystyle \\theta } measured from the major axis, the ellipse's equation is", "title": "Polar forms" }, { "paragraph_id": 77, "text": "where e {\\displaystyle e} is the eccentricity, not Euler's number", "title": "Polar forms" }, { "paragraph_id": 78, "text": "If instead we use polar coordinates with the origin at one focus, with the angular coordinate θ = 0 {\\displaystyle \\theta =0} still measured from the major axis, the ellipse's equation is", "title": "Polar forms" }, { "paragraph_id": 79, "text": "where the sign in the denominator is negative if the reference direction θ = 0 {\\displaystyle \\theta =0} points towards the center (as illustrated on the right), and positive if that direction points away from the center.", "title": "Polar forms" }, { "paragraph_id": 80, "text": "In the slightly more general case of an ellipse with one focus at the origin and the other focus at angular coordinate ϕ {\\displaystyle \\phi } , the polar form is", "title": "Polar forms" }, { "paragraph_id": 81, "text": "The angle θ {\\displaystyle \\theta } in these formulas is called the true anomaly of the point. The numerator of these formulas is the semi-latus rectum ℓ = a ( 1 − e 2 ) {\\displaystyle \\ell =a(1-e^{2})} .", "title": "Polar forms" }, { "paragraph_id": 82, "text": "Each of the two lines parallel to the minor axis, and at a distance of d = a 2 c = a e {\\textstyle d={\\frac {a^{2}}{c}}={\\frac {a}{e}}} from it, is called a directrix of the ellipse (see diagram).", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 83, "text": "The proof for the pair F 1 , l 1 {\\displaystyle F_{1},l_{1}} follows from the fact that | P F 1 | 2 = ( x − c ) 2 + y 2 , | P l 1 | 2 = ( x − a 2 c ) 2 {\\textstyle \\left|PF_{1}\\right|^{2}=(x-c)^{2}+y^{2},\\ \\left|Pl_{1}\\right|^{2}=\\left(x-{\\tfrac {a^{2}}{c}}\\right)^{2}} and y 2 = b 2 − b 2 a 2 x 2 {\\displaystyle y^{2}=b^{2}-{\\tfrac {b^{2}}{a^{2}}}x^{2}} satisfy the equation", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 84, "text": "The second case is proven analogously.", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 85, "text": "The converse is also true and can be used to define an ellipse (in a manner similar to the definition of a parabola):", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 86, "text": "The extension to e = 0 {\\displaystyle e=0} , which is the eccentricity of a circle, is not allowed in this context in the Euclidean plane. 
However, one may consider the directrix of a circle to be the line at infinity in the projective plane.", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 87, "text": "(The choice e = 1 {\\displaystyle e=1} yields a parabola, and if e > 1 {\\displaystyle e>1} , a hyperbola.)", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 88, "text": "Let F = ( f , 0 ) , e > 0 {\\displaystyle F=(f,\\,0),\\ e>0} , and assume ( 0 , 0 ) {\\displaystyle (0,\\,0)} is a point on the curve. The directrix l {\\displaystyle l} has equation x = − f e {\\displaystyle x=-{\\tfrac {f}{e}}} . With P = ( x , y ) {\\displaystyle P=(x,\\,y)} , the relation | P F | 2 = e 2 | P l | 2 {\\displaystyle |PF|^{2}=e^{2}|Pl|^{2}} produces the equations", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 89, "text": "The substitution p = f ( 1 + e ) {\\displaystyle p=f(1+e)} yields", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 90, "text": "This is the equation of an ellipse ( e < 1 {\\displaystyle e<1} ), or a parabola ( e = 1 {\\displaystyle e=1} ), or a hyperbola ( e > 1 {\\displaystyle e>1} ). All of these non-degenerate conics have, in common, the origin as a vertex (see diagram).", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 91, "text": "If e < 1 {\\displaystyle e<1} , introduce new parameters a , b {\\displaystyle a,\\,b} so that 1 − e 2 = b 2 a 2 , and p = b 2 a {\\displaystyle 1-e^{2}={\\tfrac {b^{2}}{a^{2}}},{\\text{ and }}\\ p={\\tfrac {b^{2}}{a}}} , and then the equation above becomes", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 92, "text": "which is the equation of an ellipse with center ( a , 0 ) {\\displaystyle (a,\\,0)} , the x-axis as major axis, and the major/minor semi axis a , b {\\displaystyle a,\\,b} .", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 93, "text": "Because of c ⋅ a 2 c = a 2 {\\displaystyle c\\cdot {\\tfrac {a^{2}}{c}}=a^{2}} point L 1 {\\displaystyle L_{1}} of directrix l 1 {\\displaystyle l_{1}} (see diagram) and focus F 1 {\\displaystyle F_{1}} are inverse with respect to the circle inversion at circle x 2 + y 2 = a 2 {\\displaystyle x^{2}+y^{2}=a^{2}} (in diagram green). Hence L 1 {\\displaystyle L_{1}} can be constructed as shown in the diagram. Directrix l 1 {\\displaystyle l_{1}} is the perpendicular to the main axis at point L 1 {\\displaystyle L_{1}} .", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 94, "text": "If the focus is F = ( f 1 , f 2 ) {\\displaystyle F=\\left(f_{1},\\,f_{2}\\right)} and the directrix u x + v y + w = 0 {\\displaystyle ux+vy+w=0} , one obtains the equation", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 95, "text": "(The right side of the equation uses the Hesse normal form of a line to calculate the distance | P l | {\\displaystyle |Pl|} .)", "title": "Eccentricity and the directrix property" }, { "paragraph_id": 96, "text": "An ellipse possesses the following property:", "title": "Focus-to-focus reflection property" }, { "paragraph_id": 97, "text": "Because the tangent line is perpendicular to the normal, an equivalent statement is that the tangent is the external angle bisector of the lines to the foci (see diagram). 
Let L {\\displaystyle L} be the point on the line P F 2 ¯ {\\displaystyle {\\overline {PF_{2}}}} with distance 2 a {\\displaystyle 2a} to the focus F 2 {\\displaystyle F_{2}} , where a {\\displaystyle a} is the semi-major axis of the ellipse. Let line w {\\displaystyle w} be the external angle bisector of the lines P F 1 ¯ {\\displaystyle {\\overline {PF_{1}}}} and P F 2 ¯ . {\\displaystyle {\\overline {PF_{2}}}.} Take any other point Q {\\displaystyle Q} on w . {\\displaystyle w.} By the triangle inequality and the angle bisector theorem, 2 a = | L F 2 | < {\\displaystyle 2a=\\left|LF_{2}\\right|<{}} | Q F 2 | + | Q L | = {\\displaystyle \\left|QF_{2}\\right|+\\left|QL\\right|={}} | Q F 2 | + | Q F 1 | , {\\displaystyle \\left|QF_{2}\\right|+\\left|QF_{1}\\right|,} therefore Q {\\displaystyle Q} must be outside the ellipse. As this is true for every choice of Q , {\\displaystyle Q,} w {\\displaystyle w} only intersects the ellipse at the single point P {\\displaystyle P} so must be the tangent line.", "title": "Focus-to-focus reflection property" }, { "paragraph_id": 98, "text": "The rays from one focus are reflected by the ellipse to the second focus. This property has optical and acoustic applications similar to the reflective property of a parabola (see whispering gallery).", "title": "Focus-to-focus reflection property" }, { "paragraph_id": 99, "text": "A circle has the following property:", "title": "Conjugate diameters" }, { "paragraph_id": 100, "text": "An affine transformation preserves parallelism and midpoints of line segments, so this property is true for any ellipse. (Note that the parallel chords and the diameter are no longer orthogonal.)", "title": "Conjugate diameters" }, { "paragraph_id": 101, "text": "Two diameters d 1 , d 2 {\\displaystyle d_{1},\\,d_{2}} of an ellipse are conjugate if the midpoints of chords parallel to d 1 {\\displaystyle d_{1}} lie on d 2 . {\\displaystyle d_{2}\\ .}", "title": "Conjugate diameters" }, { "paragraph_id": 102, "text": "From the diagram one finds:", "title": "Conjugate diameters" }, { "paragraph_id": 103, "text": "Conjugate diameters in an ellipse generalize orthogonal diameters in a circle.", "title": "Conjugate diameters" }, { "paragraph_id": 104, "text": "In the parametric equation for a general ellipse given above,", "title": "Conjugate diameters" }, { "paragraph_id": 105, "text": "any pair of points p → ( t ) , p → ( t + π ) {\\displaystyle {\\vec {p}}(t),\\ {\\vec {p}}(t+\\pi )} belong to a diameter, and the pair p → ( t + π 2 ) , p → ( t − π 2 ) {\\displaystyle {\\vec {p}}\\left(t+{\\tfrac {\\pi }{2}}\\right),\\ {\\vec {p}}\\left(t-{\\tfrac {\\pi }{2}}\\right)} belong to its conjugate diameter.", "title": "Conjugate diameters" }, { "paragraph_id": 106, "text": "For the common parametric representation ( a cos t , b sin t ) {\\displaystyle (a\\cos t,b\\sin t)} of the ellipse with equation x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} one gets: The points", "title": "Conjugate diameters" }, { "paragraph_id": 107, "text": "In case of a circle the last equation collapses to x 1 x 2 + y 1 y 2 = 0 . 
{\\displaystyle x_{1}x_{2}+y_{1}y_{2}=0\\ .}", "title": "Conjugate diameters" }, { "paragraph_id": 108, "text": "For an ellipse with semi-axes a , b {\\displaystyle a,\\,b} the following is true:", "title": "Conjugate diameters" }, { "paragraph_id": 109, "text": "Let the ellipse be in the canonical form with parametric equation", "title": "Conjugate diameters" }, { "paragraph_id": 110, "text": "The two points c → 1 = p → ( t ) , c → 2 = p → ( t + π 2 ) {\\textstyle {\\vec {c}}_{1}={\\vec {p}}(t),\\ {\\vec {c}}_{2}={\\vec {p}}\\left(t+{\\frac {\\pi }{2}}\\right)} are on conjugate diameters (see previous section). From trigonometric formulae one obtains c → 2 = ( − a sin t , b cos t ) T {\\displaystyle {\\vec {c}}_{2}=(-a\\sin t,\\,b\\cos t)^{\\mathsf {T}}} and", "title": "Conjugate diameters" }, { "paragraph_id": 111, "text": "The area of the triangle generated by c → 1 , c → 2 {\\displaystyle {\\vec {c}}_{1},\\,{\\vec {c}}_{2}} is", "title": "Conjugate diameters" }, { "paragraph_id": 112, "text": "and from the diagram it can be seen that the area of the parallelogram is 8 times that of A Δ {\\displaystyle A_{\\Delta }} . Hence", "title": "Conjugate diameters" }, { "paragraph_id": 113, "text": "For the ellipse x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} the intersection points of orthogonal tangents lie on the circle x 2 + y 2 = a 2 + b 2 {\\displaystyle x^{2}+y^{2}=a^{2}+b^{2}} .", "title": "Orthogonal tangents" }, { "paragraph_id": 114, "text": "This circle is called orthoptic or director circle of the ellipse (not to be confused with the circular directrix defined above).", "title": "Orthogonal tangents" }, { "paragraph_id": 115, "text": "Ellipses appear in descriptive geometry as images (parallel or central projection) of circles. There exist various tools to draw an ellipse. Computers provide the fastest and most accurate method for drawing an ellipse. However, technical tools (ellipsographs) to draw an ellipse without a computer exist. The principle of ellipsographs were known to Greek mathematicians such as Archimedes and Proklos.", "title": "Drawing ellipses" }, { "paragraph_id": 116, "text": "If there is no ellipsograph available, one can draw an ellipse using an approximation by the four osculating circles at the vertices.", "title": "Drawing ellipses" }, { "paragraph_id": 117, "text": "For any method described below, knowledge of the axes and the semi-axes is necessary (or equivalently: the foci and the semi-major axis). If this presumption is not fulfilled one has to know at least two conjugate diameters. With help of Rytz's construction the axes and semi-axes can be retrieved.", "title": "Drawing ellipses" }, { "paragraph_id": 118, "text": "The following construction of single points of an ellipse is due to de La Hire. It is based on the standard parametric representation ( a cos t , b sin t ) {\\displaystyle (a\\cos t,\\,b\\sin t)} of an ellipse:", "title": "Drawing ellipses" }, { "paragraph_id": 119, "text": "The characterization of an ellipse as the locus of points so that sum of the distances to the foci is constant leads to a method of drawing one using two drawing pins, a length of string, and a pencil. In this method, pins are pushed into the paper at two points, which become the ellipse's foci. A string is tied at each end to the two pins; its length after tying is 2 a {\\displaystyle 2a} . The tip of the pencil then traces an ellipse if it is moved while keeping the string taut. 
Using two pegs and a rope, gardeners use this procedure to outline an elliptical flower bed—thus it is called the gardener's ellipse.", "title": "Drawing ellipses" }, { "paragraph_id": 120, "text": "A similar method for drawing confocal ellipses with a closed string is due to the Irish bishop Charles Graves.", "title": "Drawing ellipses" }, { "paragraph_id": 121, "text": "The two following methods rely on the parametric representation (see § Standard parametric representation, above):", "title": "Drawing ellipses" }, { "paragraph_id": 122, "text": "This representation can be modeled technically by two simple methods. In both cases center, the axes and semi axes a , b {\\displaystyle a,\\,b} have to be known.", "title": "Drawing ellipses" }, { "paragraph_id": 123, "text": "The first method starts with", "title": "Drawing ellipses" }, { "paragraph_id": 124, "text": "The point, where the semi axes meet is marked by P {\\displaystyle P} . If the strip slides with both ends on the axes of the desired ellipse, then point P {\\displaystyle P} traces the ellipse. For the proof one shows that point P {\\displaystyle P} has the parametric representation ( a cos t , b sin t ) {\\displaystyle (a\\cos t,\\,b\\sin t)} , where parameter t {\\displaystyle t} is the angle of the slope of the paper strip.", "title": "Drawing ellipses" }, { "paragraph_id": 125, "text": "A technical realization of the motion of the paper strip can be achieved by a Tusi couple (see animation). The device is able to draw any ellipse with a fixed sum a + b {\\displaystyle a+b} , which is the radius of the large circle. This restriction may be a disadvantage in real life. More flexible is the second paper strip method.", "title": "Drawing ellipses" }, { "paragraph_id": 126, "text": "A variation of the paper strip method 1 uses the observation that the midpoint N {\\displaystyle N} of the paper strip is moving on the circle with center M {\\displaystyle M} (of the ellipse) and radius a + b 2 {\\displaystyle {\\tfrac {a+b}{2}}} . Hence, the paperstrip can be cut at point N {\\displaystyle N} into halves, connected again by a joint at N {\\displaystyle N} and the sliding end K {\\displaystyle K} fixed at the center M {\\displaystyle M} (see diagram). After this operation the movement of the unchanged half of the paperstrip is unchanged. This variation requires only one sliding shoe.", "title": "Drawing ellipses" }, { "paragraph_id": 127, "text": "The second method starts with", "title": "Drawing ellipses" }, { "paragraph_id": 128, "text": "One marks the point, which divides the strip into two substrips of length b {\\displaystyle b} and a − b {\\displaystyle a-b} . The strip is positioned onto the axes as described in the diagram. Then the free end of the strip traces an ellipse, while the strip is moved. 
For the proof, one recognizes that the tracing point can be described parametrically by ( a cos t , b sin t ) {\\displaystyle (a\\cos t,\\,b\\sin t)} , where parameter t {\\displaystyle t} is the angle of slope of the paper strip.", "title": "Drawing ellipses" }, { "paragraph_id": 129, "text": "This method is the base for several ellipsographs (see section below).", "title": "Drawing ellipses" }, { "paragraph_id": 130, "text": "Similar to the variation of the paper strip method 1 a variation of the paper strip method 2 can be established (see diagram) by cutting the part between the axes into halves.", "title": "Drawing ellipses" }, { "paragraph_id": 131, "text": "Most ellipsograph drafting instruments are based on the second paperstrip method.", "title": "Drawing ellipses" }, { "paragraph_id": 132, "text": "From Metric properties below, one obtains:", "title": "Drawing ellipses" }, { "paragraph_id": 133, "text": "The diagram shows an easy way to find the centers of curvature C 1 = ( a − b 2 a , 0 ) , C 3 = ( 0 , b − a 2 b ) {\\displaystyle C_{1}=\\left(a-{\\tfrac {b^{2}}{a}},0\\right),\\,C_{3}=\\left(0,b-{\\tfrac {a^{2}}{b}}\\right)} at vertex V 1 {\\displaystyle V_{1}} and co-vertex V 3 {\\displaystyle V_{3}} , respectively:", "title": "Drawing ellipses" }, { "paragraph_id": 134, "text": "(proof: simple calculation.)", "title": "Drawing ellipses" }, { "paragraph_id": 135, "text": "The centers for the remaining vertices are found by symmetry.", "title": "Drawing ellipses" }, { "paragraph_id": 136, "text": "With help of a French curve one draws a curve, which has smooth contact to the osculating circles.", "title": "Drawing ellipses" }, { "paragraph_id": 137, "text": "The following method to construct single points of an ellipse relies on the Steiner generation of a conic section:", "title": "Drawing ellipses" }, { "paragraph_id": 138, "text": "For the generation of points of the ellipse x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} one uses the pencils at the vertices V 1 , V 2 {\\displaystyle V_{1},\\,V_{2}} . Let P = ( 0 , b ) {\\displaystyle P=(0,\\,b)} be an upper co-vertex of the ellipse and A = ( − a , 2 b ) , B = ( a , 2 b ) {\\displaystyle A=(-a,\\,2b),\\,B=(a,\\,2b)} .", "title": "Drawing ellipses" }, { "paragraph_id": 139, "text": "P {\\displaystyle P} is the center of the rectangle V 1 , V 2 , B , A {\\displaystyle V_{1},\\,V_{2},\\,B,\\,A} . The side A B ¯ {\\displaystyle {\\overline {AB}}} of the rectangle is divided into n equal spaced line segments and this division is projected parallel with the diagonal A V 2 {\\displaystyle AV_{2}} as direction onto the line segment V 1 B ¯ {\\displaystyle {\\overline {V_{1}B}}} and assign the division as shown in the diagram. The parallel projection together with the reverse of the orientation is part of the projective mapping between the pencils at V 1 {\\displaystyle V_{1}} and V 2 {\\displaystyle V_{2}} needed. The intersection points of any two related lines V 1 B i {\\displaystyle V_{1}B_{i}} and V 2 A i {\\displaystyle V_{2}A_{i}} are points of the uniquely defined ellipse. With help of the points C 1 , … {\\displaystyle C_{1},\\,\\dotsc } the points of the second quarter of the ellipse can be determined. Analogously one obtains the points of the lower half of the ellipse.", "title": "Drawing ellipses" }, { "paragraph_id": 140, "text": "Steiner generation can also be defined for hyperbolas and parabolas. 
It is sometimes called a parallelogram method because one can use other points rather than the vertices, which starts with a parallelogram instead of a rectangle.", "title": "Drawing ellipses" }, { "paragraph_id": 141, "text": "The ellipse is a special case of the hypotrochoid when R = 2 r {\\displaystyle R=2r} , as shown in the adjacent image. The special case of a moving circle with radius r {\\displaystyle r} inside a circle with radius R = 2 r {\\displaystyle R=2r} is called a Tusi couple.", "title": "Drawing ellipses" }, { "paragraph_id": 142, "text": "A circle with equation ( x − x ∘ ) 2 + ( y − y ∘ ) 2 = r 2 {\\displaystyle \\left(x-x_{\\circ }\\right)^{2}+\\left(y-y_{\\circ }\\right)^{2}=r^{2}} is uniquely determined by three points ( x 1 , y 1 ) , ( x 2 , y 2 ) , ( x 3 , y 3 ) {\\displaystyle \\left(x_{1},y_{1}\\right),\\;\\left(x_{2},\\,y_{2}\\right),\\;\\left(x_{3},\\,y_{3}\\right)} not on a line. A simple way to determine the parameters x ∘ , y ∘ , r {\\displaystyle x_{\\circ },y_{\\circ },r} uses the inscribed angle theorem for circles:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 143, "text": "Usually one measures inscribed angles by a degree or radian θ, but here the following measurement is more convenient:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 144, "text": "For four points P i = ( x i , y i ) , i = 1 , 2 , 3 , 4 , {\\displaystyle P_{i}=\\left(x_{i},\\,y_{i}\\right),\\ i=1,\\,2,\\,3,\\,4,\\,} no three of them on a line, we have the following (see diagram):", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 145, "text": "At first the measure is available only for chords not parallel to the y-axis, but the final formula works for any chord.", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 146, "text": "For example, for P 1 = ( 2 , 0 ) , P 2 = ( 0 , 1 ) , P 3 = ( 0 , 0 ) {\\displaystyle P_{1}=(2,\\,0),\\;P_{2}=(0,\\,1),\\;P_{3}=(0,\\,0)} the three-point equation is:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 147, "text": "Using vectors, dot products and determinants this formula can be arranged more clearly, letting x → = ( x , y ) {\\displaystyle {\\vec {x}}=(x,\\,y)} :", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 148, "text": "The center of the circle ( x ∘ , y ∘ ) {\\displaystyle \\left(x_{\\circ },\\,y_{\\circ }\\right)} satisfies:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 149, "text": "The radius is the distance between any of the three points and the center.", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 150, "text": "This section considers the family of ellipses defined by equations ( x − x ∘ ) 2 a 2 + ( y − y ∘ ) 2 b 2 = 1 {\\displaystyle {\\tfrac {\\left(x-x_{\\circ }\\right)^{2}}{a^{2}}}+{\\tfrac {\\left(y-y_{\\circ }\\right)^{2}}{b^{2}}}=1} with a fixed eccentricity e {\\displaystyle e} . It is convenient to use the parameter:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 151, "text": "and to write the ellipse equation as:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 152, "text": "where q is fixed and x ∘ , y ∘ , a {\\displaystyle x_{\\circ },\\,y_{\\circ },\\,a} vary over the real numbers. 
(Such ellipses have their axes parallel to the coordinate axes: if q < 1 {\\displaystyle q<1} , the major axis is parallel to the x-axis; if q > 1 {\\displaystyle q>1} , it is parallel to the y-axis.)", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 153, "text": "Like a circle, such an ellipse is determined by three points not on a line.", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 154, "text": "For this family of ellipses, one introduces the following q-analog angle measure, which is not a function of the usual angle measure θ:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 155, "text": "At first the measure is available only for chords which are not parallel to the y-axis. But the final formula works for any chord. The proof follows from a straightforward calculation. For the direction of proof given that the points are on an ellipse, one can assume that the center of the ellipse is the origin.", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 156, "text": "For example, for P 1 = ( 2 , 0 ) , P 2 = ( 0 , 1 ) , P 3 = ( 0 , 0 ) {\\displaystyle P_{1}=(2,\\,0),\\;P_{2}=(0,\\,1),\\;P_{3}=(0,\\,0)} and q = 4 {\\displaystyle q=4} one obtains the three-point form", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 157, "text": "Analogously to the circle case, the equation can be written more clearly using vectors:", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 158, "text": "where ∗ {\\displaystyle *} is the modified dot product u → ∗ v → = u x v x + q u y v y . {\\displaystyle {\\vec {u}}*{\\vec {v}}=u_{x}v_{x}+{\\color {blue}q}\\,u_{y}v_{y}.}", "title": "Inscribed angles and three-point form" }, { "paragraph_id": 159, "text": "Any ellipse can be described in a suitable coordinate system by an equation x 2 a 2 + y 2 b 2 = 1 {\\displaystyle {\\tfrac {x^{2}}{a^{2}}}+{\\tfrac {y^{2}}{b^{2}}}=1} . The equation of the tangent at a point P 1 = ( x 1 , y 1 ) {\\displaystyle P_{1}=\\left(x_{1},\\,y_{1}\\right)} of the ellipse is x 1 x a 2 + y 1 y b 2 = 1. {\\displaystyle {\\tfrac {x_{1}x}{a^{2}}}+{\\tfrac {y_{1}y}{b^{2}}}=1.} If one allows point P 1 = ( x 1 , y 1 ) {\\displaystyle P_{1}=\\left(x_{1},\\,y_{1}\\right)} to be an arbitrary point different from the origin, then", "title": "Pole-polar relation" }, { "paragraph_id": 160, "text": "This relation between points and lines is a bijection.", "title": "Pole-polar relation" }, { "paragraph_id": 161, "text": "The inverse function maps", "title": "Pole-polar relation" }, { "paragraph_id": 162, "text": "Such a relation between points and lines generated by a conic is called pole-polar relation or polarity. 
The pole is the point; the polar the line.", "title": "Pole-polar relation" }, { "paragraph_id": 163, "text": "By calculation one can confirm the following properties of the pole-polar relation of the ellipse:", "title": "Pole-polar relation" }, { "paragraph_id": 164, "text": "Pole-polar relations exist for hyperbolas and parabolas as well.", "title": "Pole-polar relation" }, { "paragraph_id": 165, "text": "All metric properties given below refer to an ellipse with equation", "title": "Metric properties" }, { "paragraph_id": 166, "text": "except for the section on the area enclosed by a tilted ellipse, where the generalized form of Eq.(1) will be given.", "title": "Metric properties" }, { "paragraph_id": 167, "text": "The area A ellipse {\\displaystyle A_{\\text{ellipse}}} enclosed by an ellipse is:", "title": "Metric properties" }, { "paragraph_id": 168, "text": "where a {\\displaystyle a} and b {\\displaystyle b} are the lengths of the semi-major and semi-minor axes, respectively. The area formula π a b {\\displaystyle \\pi ab} is intuitive: start with a circle of radius b {\\displaystyle b} (so its area is π b 2 {\\displaystyle \\pi b^{2}} ) and stretch it by a factor a / b {\\displaystyle a/b} to make an ellipse. This scales the area by the same factor: π b 2 ( a / b ) = π a b . {\\displaystyle \\pi b^{2}(a/b)=\\pi ab.} However, using the same approach for the circumference would be fallacious – compare the integrals ∫ f ( x ) d x {\\textstyle \\int f(x)\\,dx} and ∫ 1 + f ′ 2 ( x ) d x {\\textstyle \\int {\\sqrt {1+f'^{2}(x)}}\\,dx} . It is also easy to rigorously prove the area formula using integration as follows. Equation (1) can be rewritten as y ( x ) = b 1 − x 2 / a 2 . {\\textstyle y(x)=b{\\sqrt {1-x^{2}/a^{2}}}.} For x ∈ [ − a , a ] , {\\displaystyle x\\in [-a,a],} this curve is the top half of the ellipse. So twice the integral of y ( x ) {\\displaystyle y(x)} over the interval [ − a , a ] {\\displaystyle [-a,a]} will be the area of the ellipse:", "title": "Metric properties" }, { "paragraph_id": 169, "text": "The second integral is the area of a circle of radius a , {\\displaystyle a,} that is, π a 2 . {\\displaystyle \\pi a^{2}.} So", "title": "Metric properties" }, { "paragraph_id": 170, "text": "An ellipse defined implicitly by A x 2 + B x y + C y 2 = 1 {\\displaystyle Ax^{2}+Bxy+Cy^{2}=1} has area 2 π / 4 A C − B 2 . {\\displaystyle 2\\pi /{\\sqrt {4AC-B^{2}}}.}", "title": "Metric properties" }, { "paragraph_id": 171, "text": "The area can also be expressed in terms of eccentricity and the length of the semi-major axis as a 2 π 1 − e 2 {\\displaystyle a^{2}\\pi {\\sqrt {1-e^{2}}}} (obtained by solving for flattening, then computing the semi-minor axis).", "title": "Metric properties" }, { "paragraph_id": 172, "text": "So far we have dealt with erect ellipses, whose major and minor axes are parallel to the x {\\displaystyle x} and y {\\displaystyle y} axes. However, some applications require tilted ellipses. In charged-particle beam optics, for instance, the enclosed area of an erect or tilted ellipse is an important property of the beam, its emittance. In this case a simple formula still applies, namely", "title": "Metric properties" }, { "paragraph_id": 173, "text": "where y int {\\displaystyle y_{\\text{int}}} , x int {\\displaystyle x_{\\text{int}}} are intercepts and x max {\\displaystyle x_{\\text{max}}} , y max {\\displaystyle y_{\\text{max}}} are maximum values. 
{ "paragraph_id": 174, "text": "The circumference $C$ of an ellipse is $C=4a\\,E(e),$", "title": "Metric properties" }, { "paragraph_id": 175, "text": "where again $a$ is the length of the semi-major axis, $e=\\sqrt{1-b^{2}/a^{2}}$ is the eccentricity, and the function $E$ is the complete elliptic integral of the second kind, $E(e)=\\int_{0}^{\\pi/2}\\sqrt{1-e^{2}\\sin^{2}\\theta}\\;d\\theta,$", "title": "Metric properties" }, { "paragraph_id": 176, "text": "which is in general not an elementary function.", "title": "Metric properties" }, { "paragraph_id": 177, "text": "The circumference of the ellipse may be evaluated in terms of $E(e)$ using Gauss's arithmetic-geometric mean; this is a quadratically converging iterative method.", "title": "Metric properties" }, { "paragraph_id": 178, "text": "The exact infinite series is $C=2\\pi a\\left[1-\\sum_{n=1}^{\\infty}\\left(\\frac{(2n-1)!!}{(2n)!!}\\right)^{2}\\frac{e^{2n}}{2n-1}\\right],$", "title": "Metric properties" }, { "paragraph_id": 179, "text": "where $n!!$ is the double factorial (extended to negative odd integers by the recurrence relation $(2n-1)!!=(2n+1)!!/(2n+1)$, for $n\\leq 0$). This series converges, but by expanding in terms of $h=(a-b)^{2}/(a+b)^{2},$ James Ivory and Bessel derived an expression that converges much more rapidly: $C=\\pi(a+b)\\left[1+\\sum_{n=1}^{\\infty}\\left(\\frac{(2n-3)!!}{2^{n}n!}\\right)^{2}h^{n}\\right]=\\pi(a+b)\\left[1+\\frac{h}{4}+\\frac{h^{2}}{64}+\\frac{h^{3}}{256}+\\cdots\\right].$", "title": "Metric properties" }, { "paragraph_id": 180, "text": "Srinivasa Ramanujan gave two close approximations for the circumference in §16 of \"Modular Equations and Approximations to $\\pi$\"; they are $C\\approx\\pi\\left[3(a+b)-\\sqrt{(3a+b)(a+3b)}\\right]$", "title": "Metric properties" }, { "paragraph_id": 181, "text": "and $C\\approx\\pi(a+b)\\left(1+\\frac{3h}{10+\\sqrt{4-3h}}\\right),$", "title": "Metric properties" }, { "paragraph_id": 182, "text": "where $h$ takes on the same meaning as above. The errors in these approximations, which were obtained empirically, are of order $h^{3}$ and $h^{5},$ respectively.", "title": "Metric properties" }, { "paragraph_id": 183, "text": "More generally, the arc length of a portion of the circumference, as a function of the angle subtended (or x coordinates of any two points on the upper half of the ellipse), is given by an incomplete elliptic integral. The upper half of an ellipse is parameterized by $y=b\\sqrt{1-x^{2}/a^{2}}.$", "title": "Metric properties" }, { "paragraph_id": 184, "text": "Then the arc length $s$ from $x_{1}$ to $x_{2}$ is $s=\\int_{x_{1}}^{x_{2}}\\sqrt{1+(dy/dx)^{2}}\\;dx.$", "title": "Metric properties" }, { "paragraph_id": 185, "text": "This is equivalent to $s=a\\left[E\\left(\\arcsin\\tfrac{x}{a}\\,\\Big|\\,m\\right)\\right]_{x_{1}}^{x_{2}},$", "title": "Metric properties" }, { "paragraph_id": 186, "text": "where $E(z\\mid m)$ is the incomplete elliptic integral of the second kind with parameter $m=k^{2}.$", "title": "Metric properties" },
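Since SciPy exposes the complete elliptic integral of the second kind, the exact circumference C = 4a E(e) can be compared directly with Ramanujan's two approximations. A hedged sketch follows; note that scipy.special.ellipe takes the parameter m = e², not the modulus e, and the variable names are mine.

```python
import math
from scipy.special import ellipe   # complete elliptic integral E(m), m = e^2

a, b = 3.0, 2.0
m = 1 - (b / a) ** 2               # eccentricity squared
C_exact = 4 * a * ellipe(m)

h = (a - b) ** 2 / (a + b) ** 2
C_ram1 = math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))
C_ram2 = math.pi * (a + b) * (1 + 3 * h / (10 + math.sqrt(4 - 3 * h)))

# All three agree closely; the second approximation (error of order h^5)
# is markedly better than the first (order h^3):
print(C_exact, C_ram1, C_ram2)
```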
{ "paragraph_id": 187, "text": "Some lower and upper bounds on the circumference of the canonical ellipse $x^{2}/a^{2}+y^{2}/b^{2}=1$ with $a\\geq b$ are $4\\sqrt{a^{2}+b^{2}}\\leq C\\leq 2\\pi a.$", "title": "Metric properties" }, { "paragraph_id": 188, "text": "Here the upper bound $2\\pi a$ is the circumference of a circumscribed concentric circle passing through the endpoints of the ellipse's major axis, and the lower bound $4\\sqrt{a^{2}+b^{2}}$ is the perimeter of an inscribed rhombus with vertices at the endpoints of the major and the minor axes.", "title": "Metric properties" }, { "paragraph_id": 189, "text": "The curvature is given by $\\kappa=\\frac{1}{a^{2}b^{2}}\\left(\\frac{x^{2}}{a^{4}}+\\frac{y^{2}}{b^{4}}\\right)^{-\\frac{3}{2}},$ and the radius of curvature at point $(x,y)$ is $\\rho=\\frac{1}{\\kappa}=a^{2}b^{2}\\left(\\frac{x^{2}}{a^{4}}+\\frac{y^{2}}{b^{4}}\\right)^{\\frac{3}{2}}.$", "title": "Metric properties" }, { "paragraph_id": 190, "text": "Radius of curvature at the two vertices $(\\pm a,0)$: $\\rho_{0}=\\frac{b^{2}}{a},$ with centers of curvature $\\left(\\pm\\frac{a^{2}-b^{2}}{a},\\,0\\right).$", "title": "Metric properties" }, { "paragraph_id": 191, "text": "Radius of curvature at the two co-vertices $(0,\\pm b)$: $\\rho_{1}=\\frac{a^{2}}{b},$ with centers of curvature $\\left(0,\\,\\pm\\frac{b^{2}-a^{2}}{b}\\right).$", "title": "Metric properties" }, { "paragraph_id": 192, "text": "Ellipses appear in triangle geometry as Steiner ellipses (the ellipse through the vertices of the triangle, with its center at the centroid) and as inellipses, which touch the sides of a triangle; special cases are the Steiner inellipse and the Mandart inellipse.", "title": "In triangle geometry" }, { "paragraph_id": 193, "text": "Ellipses appear as plane sections of the following quadrics: ellipsoid, elliptic cone, elliptic cylinder, hyperboloid of one sheet, and hyperboloid of two sheets.", "title": "As plane sections of quadrics" }, { "paragraph_id": 194, "text": "If the water's surface is disturbed at one focus of an elliptical water tank, the circular waves of that disturbance, after reflecting off the walls, converge simultaneously to a single point: the second focus. This is a consequence of the total travel length being the same along any wall-bouncing path between the two foci.", "title": "Applications" }, { "paragraph_id": 195, "text": "Similarly, if a light source is placed at one focus of an elliptic mirror, all light rays on the plane of the ellipse are reflected to the second focus. Since no other smooth curve has such a property, it can be used as an alternative definition of an ellipse. (In the special case of a circle with a source at its center all light would be reflected back to the center.) If the ellipse is rotated along its major axis to produce an ellipsoidal mirror (specifically, a prolate spheroid), this property holds for all rays out of the source. Alternatively, a cylindrical mirror with elliptical cross-section can be used to focus light from a linear fluorescent lamp along a line of the paper; such mirrors are used in some document scanners.", "title": "Applications" }, { "paragraph_id": 196, "text": "Sound waves are reflected in a similar way, so in a large elliptical room a person standing at one focus can hear a person standing at the other focus remarkably well. The effect is even more evident under a vaulted roof shaped as a section of a prolate spheroid. Such a room is called a whisper chamber. The same effect can be demonstrated with two reflectors shaped like the end caps of such a spheroid, placed facing each other at the proper distance. Examples are the National Statuary Hall at the United States Capitol (where John Quincy Adams is said to have used this property for eavesdropping on political matters); the Mormon Tabernacle at Temple Square in Salt Lake City, Utah; at an exhibit on sound at the Museum of Science and Industry in Chicago; in front of the University of Illinois at Urbana–Champaign Foellinger Auditorium; and also at a side chamber of the Palace of Charles V, in the Alhambra.", "title": "Applications" },
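The closed-form radii of curvature at the vertices and co-vertices give a quick test of the curvature formula quoted above. A small self-contained sketch (names are mine):

```python
a, b = 3.0, 2.0

def curvature(x, y, a, b):
    """kappa = (1/(a^2 b^2)) * (x^2/a^4 + y^2/b^4)^(-3/2)."""
    return (x**2 / a**4 + y**2 / b**4) ** -1.5 / (a**2 * b**2)

# Radius of curvature rho = 1/kappa at the vertices and co-vertices:
assert abs(1 / curvature(a, 0.0, a, b) - b**2 / a) < 1e-12   # rho = b^2/a
assert abs(1 / curvature(0.0, b, a, b) - a**2 / b) < 1e-12   # rho = a^2/b
```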
{ "paragraph_id": 197, "text": "In the 17th century, Johannes Kepler discovered that the orbits along which the planets travel around the Sun are ellipses with the Sun [approximately] at one focus, in his first law of planetary motion. Later, Isaac Newton explained this as a corollary of his law of universal gravitation.", "title": "Applications" }, { "paragraph_id": 198, "text": "More generally, in the gravitational two-body problem, if the two bodies are bound to each other (that is, the total energy is negative), their orbits are similar ellipses with the common barycenter being one of the foci of each ellipse. The other focus of either ellipse has no known physical significance. The orbit of either body in the reference frame of the other is also an ellipse, with the other body at the same focus.", "title": "Applications" }, { "paragraph_id": 199, "text": "Keplerian elliptical orbits are the result of any radially directed attraction force whose strength is inversely proportional to the square of the distance. Thus, in principle, the motion of two oppositely charged particles in empty space would also be an ellipse. (However, this conclusion ignores losses due to electromagnetic radiation and quantum effects, which become significant when the particles are moving at high speed.)", "title": "Applications" }, { "paragraph_id": 200, "text": "For elliptical orbits, useful relations involving the eccentricity $e$ are $e=\\frac{r_{a}-r_{p}}{r_{a}+r_{p}}=\\frac{r_{a}-r_{p}}{2a},$ $r_{a}=(1+e)a,$ and $r_{p}=(1-e)a,$", "title": "Applications" }, { "paragraph_id": 201, "text": "where $r_{a}$ is the radius at apoapsis (the farthest distance) and $r_{p}$ is the radius at periapsis (the closest distance).", "title": "Applications" }, { "paragraph_id": 202, "text": "Also, in terms of $r_{a}$ and $r_{p}$, the semi-major axis $a$ is their arithmetic mean, the semi-minor axis $b$ is their geometric mean, and the semi-latus rectum $\\ell$ is their harmonic mean. In other words, $a=\\frac{r_{a}+r_{p}}{2},$ $b=\\sqrt{r_{a}r_{p}},$ and $\\ell=\\frac{2r_{a}r_{p}}{r_{a}+r_{p}}.$", "title": "Applications" }, { "paragraph_id": 203, "text": "The general solution for a harmonic oscillator in two or more dimensions is also an ellipse. Such is the case, for instance, of a long pendulum that is free to move in two dimensions; of a mass attached to a fixed point by a perfectly elastic spring; or of any object that moves under influence of an attractive force that is directly proportional to its distance from a fixed attractor. Unlike Keplerian orbits, however, these \"harmonic orbits\" have the center of attraction at the geometric center of the ellipse, and have fairly simple equations of motion.", "title": "Applications" }, { "paragraph_id": 204, "text": "In electronics, the relative phase of two sinusoidal signals can be compared by feeding them to the vertical and horizontal inputs of an oscilloscope. If the Lissajous figure display is an ellipse, rather than a straight line, the two signals are out of phase.", "title": "Applications" },
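The three "mean" relations above are internally consistent with the standard identities ℓ = a(1 − e²) and b² = aℓ, which the following sketch checks (hypothetical orbit values in arbitrary units; names are mine):

```python
import math

r_a, r_p = 10.0, 6.0                 # apoapsis and periapsis radii

a = (r_a + r_p) / 2                  # semi-major axis: arithmetic mean
b = math.sqrt(r_a * r_p)             # semi-minor axis: geometric mean
ell = 2 * r_a * r_p / (r_a + r_p)    # semi-latus rectum: harmonic mean
e = (r_a - r_p) / (r_a + r_p)        # eccentricity

# Internal consistency: l = a(1 - e^2) and b^2 = a*l.
assert math.isclose(ell, a * (1 - e**2))
assert math.isclose(b**2, a * ell)
```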
{ "paragraph_id": 205, "text": "Two non-circular gears with the same elliptical outline, each pivoting around one focus and positioned at the proper angle, turn smoothly while maintaining contact at all times. Alternatively, they can be connected by a link chain or timing belt, or in the case of a bicycle the main chainring may be elliptical, or an ovoid similar to an ellipse in form. Such elliptical gears may be used in mechanical equipment to produce variable angular speed or torque from a constant rotation of the driving axle, or in the case of a bicycle to allow a varying crank rotation speed with inversely varying mechanical advantage.", "title": "Applications" }, { "paragraph_id": 206, "text": "Elliptical bicycle gears make it easier for the chain to slide off the cog when changing gears.", "title": "Applications" }, { "paragraph_id": 207, "text": "An example gear application would be a device that winds thread onto a conical bobbin on a spinning machine. The bobbin would need to wind faster when the thread is near the apex than when it is near the base.", "title": "Applications" }, { "paragraph_id": 208, "text": "In statistics, a bivariate random vector $(X,Y)$ is jointly elliptically distributed if its iso-density contours—loci of equal values of the density function—are ellipses. The concept extends to an arbitrary number of elements of the random vector, in which case in general the iso-density contours are ellipsoids. A special case is the multivariate normal distribution. The elliptical distributions are important in finance because if rates of return on assets are jointly elliptically distributed then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return.", "title": "Applications" }, { "paragraph_id": 209, "text": "Drawing an ellipse as a graphics primitive is common in standard display libraries, such as the Macintosh QuickDraw API, and Direct2D on Windows. Jack Bresenham at IBM is most famous for the invention of 2D drawing primitives, including line and circle drawing, using only fast integer operations such as addition and branch on carry bit. M. L. V. Pitteway extended Bresenham's algorithm for lines to conics in 1967. Another efficient generalization to draw ellipses was invented in 1984 by Jerry Van Aken.", "title": "Applications" }, { "paragraph_id": 210, "text": "In 1970 Danny Cohen presented at the \"Computer Graphics 1970\" conference in England a linear algorithm for drawing ellipses and circles. In 1971, L. B. Smith published similar algorithms for all conic sections and proved them to have good properties. These algorithms need only a few multiplications and additions to calculate each vector.", "title": "Applications" }, { "paragraph_id": 211, "text": "It is beneficial to use a parametric formulation in computer graphics because the density of points is greatest where there is the most curvature. Thus, the change in slope between each successive point is small, reducing the apparent \"jaggedness\" of the approximation.", "title": "Applications" }, { "paragraph_id": 212, "text": "Composite Bézier curves may also be used to draw an ellipse to sufficient accuracy, since any ellipse may be construed as an affine transformation of a circle. The spline methods used to draw a circle may be used to draw an ellipse, since the constituent Bézier curves behave appropriately under such transformations.", "title": "Applications" },
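The point-density claim in the parametric-drawing paragraph can be observed directly: sampling the parameter t uniformly in (a cos t, b sin t) spaces the generated points most closely near (±a, 0), where the curvature is greatest. A short illustrative sketch (assuming NumPy; names are mine):

```python
import numpy as np

a, b = 3.0, 2.0
t = np.linspace(0.0, 2.0 * np.pi, 201)
pts = np.stack([a * np.cos(t), b * np.sin(t)], axis=1)

# Spacing between successive points is smallest near (+-a, 0),
# the points of greatest curvature, and largest near (0, +-b):
gaps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
print(gaps.min(), gaps.max())   # minimum occurs near t = 0 and t = pi
```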
{ "paragraph_id": 213, "text": "It is sometimes useful to find the minimum bounding ellipse of a set of points. The ellipsoid method is quite useful for solving this problem.", "title": "Applications" } ]
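The source only names the ellipsoid method; one common concrete alternative for the minimum-volume enclosing ellipse is Khachiyan's iterative algorithm, sketched below under that assumption (an illustrative implementation, not the article's prescribed method; the function name mvee and all variables are mine).

```python
import numpy as np

def mvee(points, tol=1e-7):
    """Return (A, c) describing the ellipse {x : (x-c)^T A (x-c) <= 1}
    that approximately encloses `points` (an (n, d) array) with minimum
    area/volume, via Khachiyan's algorithm."""
    n, d = points.shape
    Q = np.vstack([points.T, np.ones(n)])          # lift points to (d+1, n)
    u = np.full(n, 1.0 / n)                        # weights on the points
    err = tol + 1.0
    while err > tol:
        X = Q @ (u[:, None] * Q.T)                 # weighted moment matrix
        M = np.einsum('ij,ji->i', Q.T, np.linalg.solve(X, Q))
        j = int(np.argmax(M))                      # most "violating" point
        step = (M[j] - d - 1.0) / ((d + 1.0) * (M[j] - 1.0))
        u_new = (1.0 - step) * u
        u_new[j] += step
        err = np.linalg.norm(u_new - u)
        u = u_new
    c = points.T @ u                               # center of the ellipse
    S = points.T @ (u[:, None] * points) - np.outer(c, c)
    A = np.linalg.inv(S) / d                       # shape matrix
    return A, c

pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0], [1.5, 0.9]])
A, c = mvee(pts)
# All points satisfy (x-c)^T A (x-c) <= 1 (up to tolerance):
vals = np.einsum('ni,ij,nj->n', pts - c, A, pts - c)
assert np.all(vals <= 1.0 + 1e-3)
```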
In mathematics, an ellipse is a plane curve surrounding two focal points, such that for all points on the curve, the sum of the two distances to the focal points is a constant. It generalizes a circle, which is the special type of ellipse in which the two focal points are the same. The elongation of an ellipse is measured by its eccentricity e, a number ranging from e = 0 (the limiting case of a circle) to e = 1 (the limiting case of infinite elongation, no longer an ellipse but a parabola). An ellipse has a simple algebraic solution for its area, but for its perimeter integration is required to obtain an exact solution. Analytically, the equation of a standard ellipse centered at the origin with width 2a and height 2b is x²/a² + y²/b² = 1. Assuming a ≥ b, the foci are (±c, 0) for c = √(a² − b²). The standard parametric equation is (x, y) = (a cos(t), b sin(t)) for 0 ≤ t ≤ 2π. Ellipses are the closed type of conic section: a plane curve tracing the intersection of a cone with a plane. Ellipses have many similarities with the other two forms of conic sections, parabolas and hyperbolas, both of which are open and unbounded. An angled cross section of a right circular cylinder is also an ellipse. An ellipse may also be defined in terms of one focal point and a line outside the ellipse called the directrix: for all points on the ellipse, the ratio between the distance to the focus and the distance to the directrix is a constant. This constant ratio is the above-mentioned eccentricity e = c/a = √(1 − b²/a²). Ellipses are common in physics, astronomy and engineering. For example, the orbit of each planet in the Solar System is approximately an ellipse with the Sun at one focus point. The same is true for moons orbiting planets and all other systems of two astronomical bodies. The shapes of planets and stars are often well described by ellipsoids. A circle viewed from a side angle looks like an ellipse: that is, the ellipse is the image of a circle under parallel or perspective projection. The ellipse is also the simplest Lissajous figure formed when the horizontal and vertical motions are sinusoids with the same frequency: a similar effect leads to elliptical polarization of light in optics. The name, ἔλλειψις, was given by Apollonius of Perga in his Conics.
2001-10-23T15:58:12Z
2023-12-11T15:57:21Z
[ "Template:Short description", "Template:Lang", "Template:Nobr", "Template:Commons category-inline", "Template:Springer", "Template:Confusion", "Template:Math", "Template:Youtube", "Template:Anchor", "Template:Slink", "Template:NumBlk", "Template:Further", "Template:Div col end", "Template:Harvtxt", "Template:Cite book", "Template:PlanetMath", "Template:MathWorld", "Template:About", "Template:Nowrap", "Template:EquationNote", "Template:See also", "Template:Portal", "Template:Dlmf", "Template:Cite journal", "Template:Transl", "Template:Unbulleted list", "Template:Block indent", "Template:Rp", "Template:Reflist", "Template:Citation", "Template:Cite web", "Template:ISBN", "Template:For multi", "Template:Ndash", "Template:Main", "Template:Div col", "Template:Wikiquote-inline", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Ellipse
9,278
Extension
Extension, extend or extended may refer to:
[ { "paragraph_id": 0, "text": "Extension, extend or extended may refer to:", "title": "" } ]
Extension, extend or extended may refer to:
2001-03-22T23:00:18Z
2023-12-11T16:29:29Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Extension
9,279
Elephant
Elephants are the largest living land animals. Three living species are currently recognised: the African bush elephant (Loxodonta africana), the African forest elephant (L. cyclotis), and the Asian elephant (Elephas maximus). They are the only surviving members of the family Elephantidae and the order Proboscidea; extinct relatives include mammoths and mastodons. Distinctive features of elephants include a long proboscis called a trunk, tusks, large ear flaps, pillar-like legs, and tough but sensitive grey skin. The trunk is prehensile, bringing food and water to the mouth and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears and convex or level backs. Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia and are found in different habitats, including savannahs, forests, deserts, and marshes. They are herbivorous, and they stay near water when it is accessible. They are considered to be keystone species, due to their impact on their environments. Elephants have a fission–fusion society, in which multiple family groups come together to socialise. Females (cows) tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The leader of a female group, usually the oldest cow, is known as the matriarch. Males (bulls) leave their family groups when they reach puberty and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate. They enter a state of increased testosterone and aggression known as musth, which helps them gain dominance over other males as well as reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell, and sound; elephants use infrasound and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness, and possibly show concern for dying and dead individuals of their kind. African bush elephants and Asian elephants are listed as endangered and African forest elephants as critically endangered by the International Union for Conservation of Nature (IUCN). One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or employed for entertainment in circuses. Elephants have an iconic status in human culture, and have been widely featured in art, folklore, religion, literature, and popular culture. The word elephant is based on the Latin elephas (genitive elephantis) 'elephant', which is the Latinised form of the ancient Greek ἐλέφας (elephas) (genitive ἐλέφαντος (elephantos)), probably from a non-Indo-European language, likely Phoenician. It is attested in Mycenaean Greek as e-re-pa (genitive e-re-pa-to) in Linear B syllabic script. 
As in Mycenaean Greek, Homer used the Greek word to mean ivory, but after the time of Herodotus, it also referred to the animal. The word elephant appears in Middle English as olyfaunt (c. 1300) and was borrowed from Old French oliphant (12th century). Elephants belong to the family Elephantidae, the sole remaining family within the order Proboscidea. Their closest extant relatives are the sirenians (dugongs and manatees) and the hyraxes, with which they share the clade Paenungulata within the superorder Afrotheria. Elephants and sirenians are further grouped in the clade Tethytheria. Three species of living elephants are recognised: the African bush elephant (Loxodonta africana), forest elephant (Loxodonta cyclotis) and Asian elephant (Elephas maximus). African elephants were traditionally considered a single species, Loxodonta africana, but molecular studies have affirmed their status as separate species. Mammoths (Mammuthus) are nested within living elephants as they are more closely related to Asian elephants than to African elephants. Another extinct genus of elephant, Palaeoloxodon, is also recognised, which appears to have close affinities with African elephants and to have hybridised with African forest elephants. Over 180 extinct members of order Proboscidea have been described. The earliest proboscideans, the African Eritherium and Phosphatherium, are known from the late Paleocene. The Eocene included Numidotherium, Moeritherium and Barytherium from Africa. These animals were relatively small, and some, like Moeritherium and Barytherium, were probably amphibious. Later on, genera such as Phiomia and Palaeomastodon arose; the latter likely inhabited more forested areas. Proboscidean diversification changed little during the Oligocene. One notable species of this epoch was Eritreum melakeghebrekristosi of the Horn of Africa, which may have been an ancestor to several later species. A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia, during the Early Miocene, around 18–19 million years ago, allowing proboscideans to disperse from their African homeland across Eurasia and later, around 16–15 million years ago, into North America across the Bering Land Bridge. Proboscidean groups prominent during the Miocene include the deinotheres, along with the more advanced elephantimorphs, including mammutids (mastodons), gomphotheres, amebelodontids (which include the "shovel tuskers" like Platybelodon), choerolophodontids and stegodontids. Around 10 million years ago, the earliest members of the family Elephantidae emerged in Africa, having originated from gomphotheres. Elephantids are distinguished from earlier proboscideans by a major shift in the molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher-crowned (hypsodont) and more efficient in consuming grass. The Late Miocene saw major climatic changes, which resulted in the decline and extinction of many proboscidean groups. The earliest members of the modern genera of Elephantidae appeared during the latest Miocene–early Pliocene around 5 million years ago. The elephantid genera Elephas (which includes the living Asian elephant) and Mammuthus (mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago.
Over the course of the Early Pleistocene, all non-elephantid proboscidean genera outside of the Americas became extinct with the exception of Stegodon, with gomphotheres dispersing into South America as part of the Great American interchange, and mammoths migrating into North America around 1.5 million years ago. At the end of the Early Pleistocene, around 800,000 years ago, the elephantid genus Palaeoloxodon dispersed outside of Africa, becoming widely distributed in Eurasia. Proboscideans underwent a dramatic decline during the Late Pleistocene, with all remaining non-elephantid proboscideans (including Stegodon, mastodons, and the gomphotheres Cuvieronius and Notiomastodon) and Palaeoloxodon becoming extinct, while mammoths survived in relict populations on islands around the Bering Strait into the Holocene, persisting longest on Wrangel Island, where they lived until around 4,000 years ago. Over the course of their evolution, proboscideans grew in size. With that came longer limbs and wider feet with a more digitigrade stance, along with a larger head and shorter neck. The trunk evolved and grew longer to provide reach. The number of premolars, incisors, and canines decreased, and the cheek teeth (molars and premolars) became longer and more specialised. The incisors developed into tusks of different shapes and sizes. Several species of proboscideans became isolated on islands and experienced insular dwarfism, some dramatically reducing in body size, such as the 1 metre (3.3 ft) tall dwarf elephant species Palaeoloxodon falconeri. Elephants are the largest living terrestrial animals. The skeleton is made up of 326–351 bones. The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs. The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull a honeycomb-like appearance. By contrast, the lower jaw is dense. The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head. The skull is built to withstand great stress, particularly when fighting or using the tusks. The brain is surrounded by arches in the skull, which serve as protection. Because of the size of the head, the neck is relatively short to provide better support. Elephants are homeotherms and maintain their average body temperature at ~ 36 °C (97 °F), with a minimum of 35.2 °C (95.4 °F) during the cool season, and a maximum of 38.0 °C (100.4 °F) during the hot dry season. Elephant ear flaps, or pinnae, are 1–2 mm (0.039–0.079 in) thick in the middle with a thinner tip and supported by a thicker base. They contain numerous blood vessels called capillaries. Warm blood flows into the capillaries, releasing excess heat into the environment. This effect is increased by flapping the ears back and forth. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates and have the largest ear flaps. The ossicles are adapted for hearing low frequencies, being most sensitive at 1 kHz. Lacking a lacrimal apparatus (tear duct), the eye relies on the harderian gland in the orbit to keep it moist. A durable nictitating membrane shields the globe. The animal's field of vision is compromised by the location and limited mobility of the eyes.
Elephants are dichromats and they can see well in dim light but not in bright light. The elongated and prehensile trunk, or proboscis, consists of both the nose and upper lip, which fuse in early fetal development. This versatile appendage contains up to 150,000 separate muscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided into dorsal, ventral, and lateral muscles, while the latter are divided into transverse and radiating muscles. The muscles of the trunk connect to a bony opening in the skull. The nasal septum consists of small elastic muscles between the nostrils, which are divided by cartilage at the base. A unique proboscis nerve – a combination of the maxillary and facial nerves – lines each side of the appendage. As a muscular hydrostat, the trunk moves through finely controlled muscle contractions, working both with and against each other. Using three basic movements (bending, twisting, and longitudinal stretching or retracting), the trunk has near-unlimited flexibility. Objects grasped by the end of the trunk can be moved to the mouth by curving the appendage inward. The trunk can also bend at different points by creating stiffened "pseudo-joints". The tip can be moved in a way similar to the human hand. The skin is more elastic on the dorsal side of the elephant trunk than underneath, allowing the animal to stretch and coil while maintaining a strong grasp. African elephants have two finger-like extensions at the tip of the trunk that allow them to pluck small food. The Asian elephant has only one and relies more on wrapping around a food item. Asian elephant trunks have better motor coordination. The trunk's extreme flexibility allows the elephant to forage and wrestle other elephants with it. It is powerful enough to lift up to 350 kg (770 lb), but it also has the precision to crack a peanut shell without breaking the seed. With its trunk, an elephant can reach items up to 7 m (23 ft) high and dig for water in the mud or sand below. It also uses it to clean itself. Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right. Elephant trunks are capable of powerful siphoning. They can expand their nostrils by 30%, leading to a 64% greater nasal volume, and can breathe in almost 30 times faster than a human sneeze, at over 150 m/s (490 ft/s). They suck up water, which is squirted into the mouth or over the body. The trunk of an adult Asian elephant is capable of retaining 8.5 L (2.2 US gal) of water. They will also sprinkle dust or grass on themselves. When underwater, the elephant uses its trunk as a snorkel. The trunk also acts as a sense organ. Its sense of smell may be four times greater than a bloodhound's nose. The infraorbital nerve, which makes the trunk sensitive to touch, is thicker than both the optic and auditory nerves. Whiskers grow all along the trunk, and are particularly packed at the tip, where they contribute to its tactile sensitivity. Unlike those of many mammals, such as cats and rats, elephant whiskers do not move independently ("whisk") to sense the environment; the trunk itself must move to bring the whiskers into contact with nearby objects. Whiskers grow in rows along each side on the ventral surface of the trunk, which is thought to be essential in helping elephants balance objects there, whereas they are more evenly arranged on the dorsal surface.
The number and patterns of whiskers are distinctly different between species. Damaging the trunk would be detrimental to an elephant's survival, although in rare cases, individuals have survived with shortened ones. One trunkless elephant has been observed to graze using its lips with its hind legs in the air and balancing on its front knees. Floppy trunk syndrome is a condition of trunk paralysis recorded in African bush elephants and involves the degeneration of the peripheral nerves and muscles. The disorder has been linked to lead poisoning. Elephants usually have 26 teeth: the incisors, known as the tusks; 12 deciduous premolars; and 12 molars. Unlike in most mammals, teeth are not replaced by new ones emerging from the jaws vertically. Instead, new teeth start at the back of the mouth and push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. This is followed by four more tooth replacements at the ages of four to six, 9–15, 18–28, and finally in their early 40s. The final (usually sixth) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are more diamond-shaped in African elephants. The tusks of an elephant are modified second incisors in the upper jaw. They replace deciduous milk teeth at 6–12 months of age and keep growing at about 17 cm (7 in) a year. As the tusk develops, it is topped with smooth, cone-shaped enamel that eventually wears away. The dentine is known as ivory and has a cross-section of intersecting lines, known as "engine turning", which create diamond-shaped patterns. Being living tissue, tusks are fairly soft and about as dense as the mineral calcite. The tusk protrudes from a socket in the skull, and most of it is external. At least one-third of the tusk contains the pulp, and some have nerves that stretch even further. Thus, it would be difficult to remove it without harming the animal. When removed, ivory will dry up and crack if not kept cool and wet. Tusks function in digging, debarking, marking, moving objects, and fighting. Elephants are usually right- or left-tusked, similar to humans, who are typically right- or left-handed. The dominant, or "master" tusk, is typically more worn down, as it is shorter and blunter. For African elephants, tusks are present in both males and females, and are around the same length in both sexes, reaching up to 300 cm (9 ft 10 in), but those of males tend to be more massive. In the Asian species, only the males have large tusks. Female Asians have very small tusks, or none at all. Tuskless males exist and are particularly common among Sri Lankan elephants. Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was 302 cm (9 ft 11 in) long and weighed 39 kg (86 lb). Hunting for elephant ivory in Africa and Asia has led to natural selection for shorter tusks and tusklessness.
As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the tip of the tail and parts of the head and genitals. Normally, the skin of an Asian elephant is covered with more hair than its African counterpart. Their hair is thought to help them lose heat in their hot environments. Although tough, an elephant's skin is very sensitive and requires mud baths to maintain moisture and protect it from burning and insect bites. After bathing, the elephant will usually use its trunk to blow dust onto its body, which dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their low surface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs to expose their soles to the air. Elephants only have sweat glands between the toes, but the skin allows water to disperse and evaporate, cooling the animal. In addition, cracks in the skin may reduce dehydration and allow for increased thermal regulation in the long term. To support the animal's weight, an elephant's limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs have cancellous bones in place of medullary cavities. This strengthens the bones while still allowing haematopoiesis (blood cell creation). Both the front and hind limbs can support an elephant's weight, although 60% is borne by the front. The position of the limbs and leg bones allows an elephant to stand still for extended periods of time without tiring. Elephants are incapable of turning their manus as the ulna and radius of the front legs are secured in pronation. Elephants may also lack the pronator quadratus and pronator teres muscles or have very small ones. The circular feet of an elephant have soft tissues, or "cushion pads" beneath the manus or pes, which allow them to bear the animal's great mass. They appear to have a sesamoid, an extra "toe" similar in placement to a giant panda's extra "thumb", that also helps in weight distribution. As many as five toenails can be found on both the front and hind feet. Elephants can move both forward and backward, but are incapable of trotting, jumping, or galloping. They can move on land only by walking or ambling: a faster gait similar to running. In walking, the legs act as pendulums, with the hips and shoulders moving up and down while the foot is planted on the ground. The fast gait does not meet all the criteria of running, since there is no point where all the feet are off the ground, although the elephant uses its legs much like other running animals, and can move faster by quickening its stride. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of 25 km/h (16 mph). At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals. The cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving. Elephants are capable swimmers: they can swim for up to six hours while staying at the surface, moving at 2.1 km/h (1 mph) and traversing up to 48 km (30 mi) continuously. The brain of an elephant weighs 4.5–5.5 kg (10–12 lb) compared to 1.6 kg (4 lb) for a human brain. It is the largest of all terrestrial mammals. 
While the elephant brain is larger overall, it is proportionally smaller than the human brain. At birth, an elephant's brain already weighs 30–40% of its adult weight. The cerebrum and cerebellum are well developed, and the temporal lobes are so large that they bulge out laterally. Their temporal lobes are proportionally larger than those of other animals, including humans. The throat of an elephant appears to contain a pouch where it can store water for later use. The larynx of the elephant is the largest known among mammals. The vocal folds are anchored close to the epiglottis base. When comparing an elephant's vocal folds to those of a human, an elephant's are proportionally longer, thicker, with a greater cross-sectional area. In addition, they are located further up the vocal tract with an acute slope. The heart of an elephant weighs 12–21 kg (26–46 lb). Its apex has two pointed ends, an unusual trait among mammals. In addition, the ventricles of the heart split towards the top, a trait also found in sirenians. When upright, the elephant's heart beats around 28 beats per minute and actually speeds up to 35 beats when it lies down. The blood vessels are thick and wide and can hold up under high blood pressure. The lungs are attached to the diaphragm, and breathing relies less on the expanding of the ribcage. Connective tissue exists in place of the pleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air. Elephants breathe mostly with the trunk but also with the mouth. They have a hindgut fermentation system, and their large and small intestines together reach 35 m (115 ft) in length. Less than half of an elephant's food intake gets digested, despite the process lasting a day. An elephant's kidneys can produce more than 50 litres of urine per day. A male elephant's testes, like other Afrotheria, are internally located near the kidneys. The penis can be as long as 100 cm (39 in) with a 16 cm (6 in) wide base. It curves to an 'S' when fully erect and has an orifice shaped like a Y. The female's clitoris may be 40 cm (16 in). The vulva is found lower than in other herbivores, between the hind legs instead of under the tail. Determining pregnancy status can be difficult due to the animal's large belly. The female's mammary glands occupy the space between the front legs, which puts the suckling calf within reach of the female's trunk. Elephants have a unique organ, the temporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when in musth. Females have also been observed with these secretions. Elephants are herbivorous and will eat leaves, twigs, fruit, bark, grass, and roots. African elephants mostly browse, while Asian elephants mainly graze. They can eat as much as 300 kg (660 lb) of food and drink 40 L (11 US gal) of water in a day. Elephants tend to stay near water sources. They have morning, afternoon, and nighttime feeding sessions. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down. Elephants average 3–4 hours of sleep per day. Both males and family groups typically move no more than 20 km (12 mi) a day, but distances as far as 180 km (112 mi) have been recorded in the Etosha region of Namibia. Elephants go on seasonal migrations in response to changes in environmental conditions. 
In northern Botswana, they travel 325 km (202 mi) to the Chobe River after the local waterholes dry up in late August. Because of their large size, elephants have a huge impact on their environments and are considered keystone species. Their habit of uprooting trees and undergrowth can transform savannah into grasslands; smaller herbivores can access trees mowed down by elephants. When they dig for water during droughts, they create waterholes that can be used by other animals. When they use waterholes, they end up making them bigger. At Mount Elgon, elephants dig through caves and pave the way for ungulates, hyraxes, bats, birds and insects. Elephants are important seed dispersers; African forest elephants consume and deposit many seeds over great distances, with either no effect or a positive effect on germination. In Asian forests, large seeds require giant herbivores like elephants and rhinoceros for transport and dispersal. This ecological niche cannot be filled by the smaller Malayan tapir. Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such as dung beetles and monkeys. Elephants can have a negative impact on ecosystems. At Murchison Falls National Park in Uganda, elephant numbers have threatened several species of small birds that depend on woodlands. Their weight causes the soil to compress, leading to runoff and erosion. Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. Some aggressive interactions between elephants and rhinoceros have been recorded. The size of adult elephants makes them nearly invulnerable to predators. Calves may be preyed on by lions, spotted hyenas, and wild dogs in Africa and tigers in Asia. The lions of Savuti, Botswana, have adapted to hunting elephants, mostly calves, juveniles or even sub-adults. There are rare reports of adult Asian elephants falling prey to tigers. Elephants tend to have high numbers of parasites, particularly nematodes, compared to many other mammals. This is due to them being largely immune to predators, which would otherwise kill off many of the individuals with significant parasite loads. Elephants are generally gregarious animals. African bush elephants in particular have a complex, stratified social structure. Female elephants spend their entire lives in tight-knit matrilineal family groups. They are led by the matriarch, who is often the eldest female. She remains leader of the group until death or if she no longer has the energy for the role; a study on zoo elephants found that the death of the matriarch led to greater stress in the surviving elephants. When her tenure is over, the matriarch's eldest daughter takes her place instead of her sister (if present). One study found that younger matriarchs take potential threats less seriously. Large family groups may split if they cannot be supported by local resources. At Amboseli National Park, Kenya, female groups may consist of around ten members, including four adults and their dependent offspring. Here, a cow's life involves interaction with those outside her group. Two separate families may associate and bond with each other, forming what are known as bond groups. During the dry season, elephant families may aggregate into clans. These may number around nine groups, in which clans do not form strong bonds but defend their dry-season ranges against other clans. The Amboseli elephant population is further divided into the "central" and "peripheral" subpopulations. 
Female Asian elephants tend to have more fluid social associations. In Sri Lanka, there appear to be stable family units or "herds" and larger, looser "groups". They have been observed to have "nursing units" and "juvenile-care units". In southern India, elephant populations may contain family groups, bond groups and possibly clans. Family groups tend to be small, with only one or two adult females and their offspring. A group containing more than two cows and their offspring is known as a "joint family". Malay elephant populations have even smaller family units and do not reach levels above a bond group. Groups of African forest elephants typically consist of one cow with one to three offspring. These groups appear to interact with each other, especially at forest clearings. Adult males live separate lives. As he matures, a bull associates more with outside males or even other families. At Amboseli, young males may be away from their families 80% of the time by 14–15 years of age. When males permanently leave, they either live alone or with other males. The former is typical of bulls in dense forests. A dominance hierarchy exists among males, whether they are social or solitary. Dominance depends on age, size, and sexual condition. Male elephants can be quite sociable when not competing for mates and form vast and fluid social networks. Older bulls act as the leaders of these groups. The presence of older males appears to subdue the aggression and "deviant" behaviour of younger ones. The largest all-male groups can reach close to 150 individuals. Adult males and females come together to breed. Bulls will accompany family groups if a cow is in oestrous. Adult males enter a state of increased testosterone known as musth. In a population in southern India, males first enter musth at 15 years old, but it is not very intense until they are older than 25. At Amboseli, no bulls under 24 were found to be in musth, while half of those aged 25–35 and all those over 35 were. In some areas, there may be seasonal influences on the timing of musths. The main characteristic of a bull's musth is a fluid discharged from the temporal gland that runs down the side of his face. Behaviours associated with musth include walking with a high and swinging head, nonsynchronous ear flapping, picking at the ground with the tusks, marking, rumbling, and urinating in the sheath. The length of this varies between males of different ages and conditions, lasting from days to months. Males become extremely aggressive during musth. Size is the determining factor in agonistic encounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases, and minor sparring. Rarely do they full-on fight. Elephants are polygynous breeders, and most copulations occur during rainfall. An oestrous cow uses pheromones in her urine and vaginal secretions to signal her readiness to mate. A bull will follow a potential mate and assess her condition with the flehmen response, which requires him to collect a chemical sample with his trunk and taste it with the vomeronasal organ at the roof of the mouth. 
The oestrous cycle of a cow lasts 14–16 weeks, with the follicular phase lasting 4–6 weeks and the luteal phase lasting 8–10 weeks. While most mammals have one surge of luteinizing hormone during the follicular phase, elephants have two. The first (or anovulatory) surge appears to change the female's scent, signaling to males that she is in heat, but ovulation does not occur until the second (or ovulatory) surge. Cows over 45–50 years of age are less fertile. Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males. Most mate-guarding is done by musth males, and females seek them out, particularly older ones. Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths. For a young female, the approach of an older bull can be intimidating, so her relatives stay nearby for comfort. During copulation, the male rests his trunk on the female. The penis is mobile enough to move without the pelvis. Before mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involve pelvic thrusting or an ejaculatory pause. Homosexual behaviour is frequent in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting, and "championships" may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity, where they engage in mutual masturbation with their trunks. Gestation in elephants typically lasts between one and a half and two years, and the female will not give birth again for at least four years. The relatively long pregnancy is supported by several corpora lutea and gives the foetus more time to develop, particularly the brain and trunk. Births tend to take place during the wet season. Typically, only a single young is born, but twins sometimes occur. Calves are born roughly 85 cm (33 in) tall and with a weight of around 120 kg (260 lb). They are precocial and quickly stand and walk to follow their mother and family herd. A newborn calf will attract the attention of all the herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother limits access to her young. Alloparenting – where a calf is cared for by someone other than its mother – takes place in some family groups. Allomothers are typically aged two to twelve years. For the first few days, the newborn is unsteady on its feet and needs its mother's help. It relies on touch, smell, and hearing, as its eyesight is less developed. With little coordination in its trunk, it can only flop it around, which may cause it to trip. When it reaches its second week, the calf can walk with more balance and has more control over its trunk. After its first month, the trunk can grab and hold objects, but still lacks sucking abilities, and the calf must bend down to drink. It continues to stay near its mother as it is still reliant on her. For its first three months, a calf relies entirely on its mother's milk, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, there is progress in lip and leg movements. By nine months, mouth, trunk and foot coordination are mastered. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year. After a year, a calf is fully capable of grooming, drinking, and feeding itself.
It still needs its mother's milk and protection until it is at least two years old. Suckling after two years may improve growth, health and fertility. Play behaviour in calves differs between the sexes; females run or chase each other while males play-fight. The former are sexually mature by the age of nine years while the latter become mature around 14–15 years. Adulthood starts at about 18 years of age in both sexes. Elephants have long lifespans, reaching 60–70 years of age. Lin Wang, a captive male Asian elephant, lived for 86 years. Elephants communicate in various ways. Individuals greet one another by touching each other on the mouth, temporal glands and genitals. This allows them to pick up chemical cues. Older elephants use trunk-slaps, kicks, and shoves to control younger ones. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side or with their tails if the calf is behind them. A calf will press against its mother's front legs to signal it wants to rest and will touch her breast or leg when it wants to suckle. Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as tossing around dust and vegetation. They are usually bluffing when performing these actions. Excited elephants also raise their heads and spread their ears but additionally may raise their trunks. Submissive elephants will lower their heads and trunks, as well as flatten their ears against their necks, while those that are ready to fight will bend their ears in a V shape. Elephants produce several vocalisations—some of which pass through the trunk—for both short and long range communication. These include trumpeting, bellowing, roaring, growling, barking, snorting, and rumbling. Elephants can produce infrasonic rumbles. For Asian elephants, these calls have a frequency of 14–24 Hz, with sound pressure levels of 85–90 dB and last 10–15 seconds. For African elephants, calls range from 15 to 35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, possibly over 10 km (6 mi). Elephants are known to communicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. An individual foot stomping or mock charging can create seismic signals that can travel up to 32 km (20 mi). Seismic waveforms produced by rumbles travel 16 km (10 mi). Elephants are among the most intelligent animals. They exhibit mirror self-recognition, an indication of self-awareness and cognition that has also been demonstrated in some apes and dolphins. One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later. Elephants are among the species known to use tools. An Asian elephant has been observed fine-tuning branches for use as flyswatters. Tool modification by these animals is not as advanced as that of chimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly have cognitive maps which give them long-lasting memories of their environment on a wide scale.
Individuals may be able to remember where their family members are located. Scientists debate the extent to which elephants feel emotion. They are attracted to the bones of their own kind, regardless of whether they are related. As with chimpanzees and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing "concern"; however, the Oxford Companion to Animal Behaviour (1987) said that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion". African bush elephants were listed as Endangered by the International Union for Conservation of Nature (IUCN) in 2021, and African forest elephants were listed as Critically Endangered in the same year. In 1979, Africa had an estimated population of at least 1.3 million elephants, possibly as high as 3.0 million. A decade later, the population was estimated to be 609,000, with 277,000 in Central Africa, 110,000 in Eastern Africa, 204,000 in Southern Africa, and 19,000 in Western Africa. The population of rainforest elephants was lower than anticipated, at around 214,000 individuals. Between 1977 and 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers hastened, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa varied, with unconfirmed losses in Zambia, Mozambique and Angola, while populations grew in Botswana and Zimbabwe and were stable in South Africa. As of 2016, the IUCN estimated the total population in Africa at around 415,000 individuals for both species combined. African elephants receive at least some legal protection in every country where they are found. Successful conservation efforts in certain areas have led to high population densities, while failures have led to declines as high as 70% or more over the course of ten years. As of 2008, local numbers were controlled by contraception or translocation. Large-scale cullings stopped in the late 1980s and early 1990s. In 1989, the African elephant was listed under Appendix I by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia, and Zimbabwe in 1997 and South Africa in 2000. In some countries, sport hunting of the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies. In 2020, the IUCN listed the Asian elephant as endangered due to the population declining by half over "the last three generations". Asian elephants once ranged from Western to East Asia and south to Sumatra and Java. The species is now extinct in these areas, and the current range of Asian elephants is highly fragmented. The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. Around 60% of the population is in India. Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in Sri Lanka appears to have risen and elephant numbers in the Western Ghats may have stabilised. The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence.
The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence. Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use was comparable to that of gold. The ivory trade contributed to the fall of the African elephant population in the late 20th century. This prompted international bans on ivory imports, starting with the United States in June 1989 and followed by bans in other North American countries, western European countries, and Japan. Around the same time, Kenya destroyed all its ivory stocks. Ivory was banned internationally by CITES in 1990. Following the bans, unemployment rose in India and China, where the ivory industry was economically important. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not as badly affected. Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local populations were healthy, but only if their supplies came from culled individuals or those that had died of natural causes.
The ban allowed the elephant to recover in parts of Africa. Nevertheless, in February 2012, 650 elephants in Bouba Njida National Park, Cameroon, were slaughtered by Chadian raiders, in what has been called "one of the worst concentrated killings" since the ivory ban. Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such as Periyar National Park in India. China was the biggest market for poached ivory but announced in May 2015 that it would phase out the legal domestic manufacture and sale of ivory products; in September 2015, China and the United States said "they would enact a nearly complete ban on the import and export of ivory" because of the threat of extinction.
Other threats to elephants include habitat destruction and fragmentation. The Asian elephant lives in areas with some of the highest human populations and may be confined to small islands of forest amid human-dominated landscapes. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation. One proposed solution is the protection of wildlife corridors, which give populations greater interconnectivity and space. Chili pepper products, as well as guarding with defence tools, have been found to be effective in preventing crop-raiding by elephants; less effective tactics include beehive and electric fences.
Elephants have been working animals since at least the Indus Valley civilization over 4,000 years ago and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia in 2000. These animals are typically captured from the wild when they are 10–20 years old, an age at which they are both more trainable and can work for more years. They were traditionally captured with traps and lassos, but since 1950, tranquillisers have been used. Individuals of the Asian species have often been trained as working animals. Asian elephants are used to carry and pull both objects and people in and out of areas, as well as to lead people in religious celebrations. They are valued over mechanised tools as they can perform the same tasks in more difficult terrain, with strength, memory, and delicacy. Elephants can learn over 30 commands. Musth bulls are difficult and dangerous to work with and so are chained up until their condition passes.
In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected under The Prevention of Cruelty to Animals Act of 1960. In both Myanmar and Thailand, deforestation and other economic factors have resulted in sizable populations of unemployed elephants, leading to health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live.
The practice of working elephants has also been attempted in Africa. The taming of African elephants in the Belgian Congo began by decree of Leopold II of Belgium during the 19th century and continues to the present with the Api Elephant Domestication Centre.
Historically, elephants were considered formidable instruments of war. They were described in Sanskrit texts as far back as 1500 BC. From South Asia, the use of elephants in warfare spread west to Persia and east to Southeast Asia. The Persians used them during the Achaemenid Empire (between the 6th and 4th centuries BC), while Southeast Asian states first used war elephants possibly as early as the 5th century BC and continued to use them into the 20th century. War elephants were also employed in the Mediterranean and North Africa throughout the classical period, from the reign of Ptolemy II in Egypt onwards. The Carthaginian general Hannibal famously took African elephants across the Alps during his war with the Romans and reached the Po Valley in 218 BC with all of them alive, though they died of disease and combat a year later.
The elephants' heads and sides were equipped with armour; the trunk may have had a sword tied to it, and the tusks were sometimes covered with sharpened iron or brass. Trained elephants would attack both humans and horses with their tusks. They might have grasped an enemy soldier with the trunk and tossed him to their mahout, or pinned the soldier to the ground and speared him. Some shortcomings of war elephants included their great visibility, which made them easy to target, and their limited manoeuvrability compared to horses. Alexander the Great achieved victory over armies with war elephants by having his soldiers injure the animals' trunks and legs, which caused them to panic and become uncontrollable.
Elephants have traditionally been a major part of zoos and circuses around the world. In circuses, they are trained to perform tricks. The most famous circus elephant was probably Jumbo (1861 – 15 September 1885), who was a major attraction in the Barnum & Bailey Circus. These animals do not reproduce well in captivity, owing to the difficulty of handling musth bulls and the limited understanding of female oestrous cycles. Asian elephants have always been more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, imports of the species had almost stopped by the end of the 1980s. Subsequently, the US received many captive African elephants from Zimbabwe, which had an overabundance of the animals.
Keeping elephants in zoos has met with some controversy. Proponents of zoos argue that they allow easy access to the animals and provide funds and knowledge for preserving their natural habitats, as well as safekeeping for the species. Opponents claim that animals in zoos are under physical and mental stress. Elephants have been recorded displaying stereotypical behaviours in the form of wobbling the body or head and pacing the same route both forwards and backwards; this has been observed in 54% of individuals in UK zoos. Elephants in European zoos appear to have shorter lifespans than their wild counterparts, at only 17 years, although other studies suggest that zoo elephants live just as long.
The use of elephants in circuses has also been controversial; the Humane Society of the United States has accused circuses of mistreating and distressing their animals. In testimony to a US federal court in 2009, Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind their ears, under their chins and on their legs with metal-tipped prods called bull hooks or ankuses. Feld stated that these practices are necessary to protect circus workers, and acknowledged that an elephant trainer had been rebuked for using an electric prod on an elephant; despite this, he denied that any of these practices hurt the animals. Some trainers have tried to train elephants without the use of physical punishment; Ralph Helfer is known to have relied on positive reinforcement when training his animals. The Barnum & Bailey Circus retired its touring elephants in May 2016.
Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans. In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s; because of the timing, these attacks have been interpreted as vindictive. In parts of India, male elephants have entered villages at night, destroying homes and killing people. From 2000 to 2004, 300 people died in Jharkhand, and in Assam, 239 people were reportedly killed between 2001 and 2006. Throughout the country, 1,500 people were killed by elephants between 2019 and 2022, which led to 300 elephants being killed in retaliation. Local people have reported their belief that some elephants were drunk during their attacks, though officials have disputed this. Purportedly drunk elephants attacked an Indian village in December 2002, killing six people, which led to the retaliatory slaughter of about 200 elephants by locals.
Elephants have a universal presence in global culture. They have been represented in art since Paleolithic times. Africa, in particular, contains many examples of elephant rock art, especially in the Sahara and southern Africa. In Asia, the animals are depicted as motifs in Hindu and Buddhist shrines and temples. Elephants were often difficult to portray for people with no first-hand experience of them. The ancient Romans, who kept the animals in captivity, depicted elephants more accurately than medieval Europeans, who portrayed them more like fantasy creatures with horse, bovine and boar-like traits and trumpet-like trunks. As Europeans gained more access to captive elephants during the 15th century, depictions of them became more accurate, including one made by Leonardo da Vinci.
Elephants have been the subject of religious beliefs. The Mbuti people of central Africa believe that the souls of their dead ancestors reside in elephants. Similar ideas existed among other African societies, who believed that their chiefs would be reincarnated as elephants. During the 10th century AD, the people of Igbo-Ukwu, in modern-day Nigeria, placed elephant tusks underneath their dead leader's feet in the grave. The animals' importance is only totemic in Africa but is much more significant in Asia. In Sumatra, elephants have been associated with lightning; likewise, in Hinduism, they are linked with thunderstorms, as Airavata, the father of all elephants, represents both lightning and rainbows.
One of the most important Hindu deities, the elephant-headed Ganesha, is ranked equal with the supreme gods Shiva, Vishnu, and Brahma in some traditions. Ganesha is associated with writers and merchants, and it is believed that he can give people success and grant them their desires, but can also take these things away. In Buddhism, Buddha is said to have been a white elephant reincarnated as a human.
In Western popular culture, elephants symbolise the exotic, especially since – as with the giraffe, hippopotamus and rhinoceros – there are no similar animals familiar to Western audiences. As characters, elephants are most common in children's stories, where they are portrayed positively and typically serve as surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to or finding a family, such as "The Elephant's Child" from Rudyard Kipling's Just So Stories, Disney's Dumbo, and Kathryn and Byron Jackson's The Saggy Baggy Elephant. Other elephant heroes given human qualities include Jean de Brunhoff's Babar, David McKee's Elmer, and Dr. Seuss's Horton. Several cultural references emphasise the elephant's size and strangeness. For instance, a "white elephant" is a byword for something that is weird, unwanted, and without value. The expression "elephant in the room" refers to something that is being ignored but ultimately must be addressed. In the story of the blind men and an elephant, several blind men each touch a different part of an elephant and try to work out what the whole animal is.
[ { "paragraph_id": 0, "text": "Elephants are the largest living land animals. Three living species are currently recognised: the African bush elephant (Loxodonta africana), the African forest elephant (L. cyclotis), and the Asian elephant (Elephas maximus). They are the only surviving members of the family Elephantidae and the order Proboscidea; extinct relatives include mammoths and mastodons. Distinctive features of elephants include a long proboscis called a trunk, tusks, large ear flaps, pillar-like legs, and tough but sensitive grey skin. The trunk is prehensile, bringing food and water to the mouth and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears and convex or level backs.", "title": "" }, { "paragraph_id": 1, "text": "Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia and are found in different habitats, including savannahs, forests, deserts, and marshes. They are herbivorous, and they stay near water when it is accessible. They are considered to be keystone species, due to their impact on their environments. Elephants have a fission–fusion society, in which multiple family groups come together to socialise. Females (cows) tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The leader of a female group, usually the oldest cow, is known as the matriarch.", "title": "" }, { "paragraph_id": 2, "text": "Males (bulls) leave their family groups when they reach puberty and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate. They enter a state of increased testosterone and aggression known as musth, which helps them gain dominance over other males as well as reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell, and sound; elephants use infrasound and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness, and possibly show concern for dying and dead individuals of their kind.", "title": "" }, { "paragraph_id": 3, "text": "African bush elephants and Asian elephants are listed as endangered and African forest elephants as critically endangered by the International Union for Conservation of Nature (IUCN). One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or employed for entertainment in circuses. 
Elephants have an iconic status in human culture, and have been widely featured in art, folklore, religion, literature, and popular culture.", "title": "" }, { "paragraph_id": 4, "text": "The word elephant is based on the Latin elephas (genitive elephantis) 'elephant', which is the Latinised form of the ancient Greek ἐλέφας (elephas) (genitive ἐλέφαντος (elephantos)), probably from a non-Indo-European language, likely Phoenician. It is attested in Mycenaean Greek as e-re-pa (genitive e-re-pa-to) in Linear B syllabic script. As in Mycenaean Greek, Homer used the Greek word to mean ivory, but after the time of Herodotus, it also referred to the animal. The word elephant appears in Middle English as olyfaunt (c. 1300) and was borrowed from Old French oliphant (12th century).", "title": "Etymology" }, { "paragraph_id": 5, "text": "Elephants belong to the family Elephantidae, the sole remaining family within the order Proboscidea. Their closest extant relatives are the sirenians (dugongs and manatees) and the hyraxes, with which they share the clade Paenungulata within the superorder Afrotheria. Elephants and sirenians are further grouped in the clade Tethytheria.", "title": "Taxonomy" }, { "paragraph_id": 6, "text": "Three species of living elephants are recognised; the African bush elephant (Loxodonta africana), forest elephant (Loxodonta cyclotis) and Asian elephant (Elephas maximus). African elephants were traditionally considered a single species, Loxodonta africana, but molecular studies have affirmed their status as separate species. Mammoths (Mammuthus) are nested within living elephants as they are more closely related to Asian elephants than to African elephants. Another extinct genus of elephant, Palaeoloxodon, is also recognised, which appears to have close affinities with African elephants and to have hybridised with African forest elephants.", "title": "Taxonomy" }, { "paragraph_id": 7, "text": "Over 180 extinct members of order Proboscidea have been described. The earliest proboscideans, the African Eritherium and Phosphatherium are known from the late Paleocene. The Eocene included Numidotherium, Moeritherium and Barytherium from Africa. These animals were relatively small and, some, like Moeritherium and Barytherium were probably amphibious. Later on, genera such as Phiomia and Palaeomastodon arose; the latter likely inhabited more forested areas. Proboscidean diversification changed little during the Oligocene. One notable species of this epoch was Eritreum melakeghebrekristosi of the Horn of Africa, which may have been an ancestor to several later species.", "title": "Taxonomy" }, { "paragraph_id": 8, "text": "A major event in proboscidean evolution was the collision of Afro-Arabia with Eurasia, during the Early Miocene, around 18–19 million years ago, allowing proboscideans to disperse from their African homeland across Eurasia and later, around 16–15 million years ago into North America across the Bering Land Bridge. Proboscidean groups prominent during the Miocene include the deinotheres, along with the more advanced elephantimorphs, including mammutids (mastodons), gomphotheres, amebelodontids (which includes the \"shovel tuskers\" like Platybelodon), choerolophodontids and stegodontids. 
Around 10 million years ago, the earliest members of the family Elephantidae emerged in Africa, having originated from gomphotheres.", "title": "Taxonomy" }, { "paragraph_id": 9, "text": "Elephantids are distinguished from earlier proboscideans by a major shift in the molar morphology to parallel lophs rather than the cusps of earlier proboscideans, allowing them to become higher-crowned (hypsodont) and more efficient in consuming grass. The Late Miocene saw major climactic changes, which resulted in the decline and extinction of many proboscidean groups. The earliest members of the modern genera of Elephantidae appeared during the latest Miocene–early Pliocene around 5 million years ago. The elephantid genera Elephas (which includes the living Asian elephant) and Mammuthus (mammoths) migrated out of Africa during the late Pliocene, around 3.6 to 3.2 million years ago.", "title": "Taxonomy" }, { "paragraph_id": 10, "text": "Over the course of the Early Pleistocene, all non-elephantid probobscidean genera outside of the Americas became extinct with the exception of Stegodon, with gomphotheres dispersing into South America as part of the Great American interchange, and mammoths migrating into North America around 1.5 million years ago. At the end of the Early Pleistocene, around 800,000 years ago the elephantid genus Palaeoloxodon dispersed outside of Africa, becoming widely distributed in Eurasia. Proboscideans underwent a dramatic decline during the Late Pleistocene, with all remaining non-elephantid proboscideans (including Stegodon, mastodons, and the gomphotheres Cuvieronius and Notiomastodon) and Palaeoloxodon becoming extinct, with mammoths only surviving in relict populations on islands around the Bering Strait into the Holocene, with their latest survival being on Wrangel Island, where they persisted until around 4,000 years ago.", "title": "Taxonomy" }, { "paragraph_id": 11, "text": "Over the course of their evolution, probobscideans grew in size. With that came longer limbs and wider feet with a more digitigrade stance, along with a larger head and shorter neck. The trunk evolved and grew longer to provide reach. The number of premolars, incisors, and canines decreased, and the cheek teeth (molars and premolars) became longer and more specialised. The incisors developed into tusks of different shapes and sizes. Several species of proboscideans became isolated on islands and experienced insular dwarfism, some dramatically reducing in body size, such as the 1 metre (3.3 ft) tall dwarf elephant species Palaeoloxodon falconeri.", "title": "Taxonomy" }, { "paragraph_id": 12, "text": "Elephants are the largest living terrestrial animals. The skeleton is made up of 326–351 bones. The vertebrae are connected by tight joints, which limit the backbone's flexibility. African elephants have 21 pairs of ribs, while Asian elephants have 19 or 20 pairs. The skull contains air cavities (sinuses) that reduce the weight of the skull while maintaining overall strength. These cavities give the inside of the skull a honeycomb-like appearance. By contrast, the lower jaw is dense. The cranium is particularly large and provides enough room for the attachment of muscles to support the entire head. The skull is built to withstand great stress, particularly when fighting or using the tusks. The brain is surrounded by arches in the skull, which serve as protection. Because of the size of the head, the neck is relatively short to provide better support. 
Elephants are homeotherms and maintain their average body temperature at ~ 36 °C (97 °F), with a minimum of 35.2 °C (95.4 °F) during the cool season, and a maximum of 38.0 °C (100.4 °F) during the hot dry season.", "title": "Anatomy" }, { "paragraph_id": 13, "text": "Elephant ear flaps, or pinnae, are 1–2 mm (0.039–0.079 in) thick in the middle with a thinner tip and supported by a thicker base. They contain numerous blood vessels called capillaries. Warm blood flows into the capillaries, releasing excess heat into the environment. This effect is increased by flapping the ears back and forth. Larger ear surfaces contain more capillaries, and more heat can be released. Of all the elephants, African bush elephants live in the hottest climates and have the largest ear flaps. The ossicles are adapted for hearing low frequencies, being most sensitive at 1 kHz.", "title": "Anatomy" }, { "paragraph_id": 14, "text": "Lacking a lacrimal apparatus (tear duct), the eye relies on the harderian gland in the orbit to keep it moist. A durable nictitating membrane shields the globe. The animal's field of vision is compromised by the location and limited mobility of the eyes. Elephants are dichromats and they can see well in dim light but not in bright light.", "title": "Anatomy" }, { "paragraph_id": 15, "text": "The elongated and prehensile trunk, or proboscis, consists of both the nose and upper lip, which fuse in early fetal development. This versatile appendage contains up to 150,000 separate muscle fascicles, with no bone and little fat. These paired muscles consist of two major types: superficial (surface) and internal. The former are divided into dorsal, ventral, and lateral muscles, while the latter are divided into transverse and radiating muscles. The muscles of the trunk connect to a bony opening in the skull. The nasal septum consists of small elastic muscles between the nostrils, which are divided by cartilage at the base. A unique proboscis nerve – a combination of the maxillary and facial nerves – lines each side of the appendage.", "title": "Anatomy" }, { "paragraph_id": 16, "text": "As a muscular hydrostat, the trunk moves through finely controlled muscle contractions, working both with and against each other. Using three basic movements: bending, twisting, and longitudinal stretching or retracting, the trunk has near unlimited flexibility. Objects grasped by the end of the trunk can be moved to the mouth by curving the appendage inward. The trunk can also bend at different points by creating stiffened \"pseudo-joints\". The tip can be moved in a way similar to the human hand. The skin is more elastic on the dorsal side of the elephant trunk than underneath; allowing the animal to stretch and coil while maintaining a strong grasp. The African elephants have two finger-like extensions at the tip of the trunk that allow them to pluck small food. The Asian elephant has only one and relies more on wrapping around a food item. Asian elephant trunks have better motor coordination.", "title": "Anatomy" }, { "paragraph_id": 17, "text": "The trunk's extreme flexibility allows it to forage and wrestle other elephants with it. It is powerful enough to lift up to 350 kg (770 lb), but it also has the precision to crack a peanut shell without breaking the seed. With its trunk, an elephant can reach items up to 7 m (23 ft) high and dig for water in the mud or sand below. It also uses it to clean itself. 
Individuals may show lateral preference when grasping with their trunks: some prefer to twist them to the left, others to the right. Elephant trunks are capable of powerful siphoning. They can expand their nostrils by 30%, leading to a 64% greater nasal volume, and can breathe in almost 30 times faster than a human sneeze, at over 150 m/s (490 ft/s). They suck up water, which is squirted into the mouth or over the body. The trunk of an adult Asian elephant is capable of retaining 8.5 L (2.2 US gal) of water. They will also sprinkle dust or grass on themselves. When underwater, the elephant uses its trunk as a snorkel.", "title": "Anatomy" }, { "paragraph_id": 18, "text": "The trunk also acts as a sense organ. Its sense of smell may be four times greater than a bloodhound's nose. The infraorbital nerve, which makes the trunk sensitive to touch, is thicker than both the optic and auditory nerves. Whiskers grow all along the trunk, and are particularly packed at the tip, where they contribute to its tactile sensitivity. Unlike those of many mammals, such as cats and rats, elephant whiskers do not move independently (\"whisk\") to sense the environment; the trunk itself must move to bring the whiskers into contact with nearby objects. Whiskers grow in rows along each side on the ventral surface of the trunk, which is thought to be essential in helping elephants balance objects there, whereas they are more evenly arranged on the dorsal surface. The number and patterns of whiskers are distinctly different between species.", "title": "Anatomy" }, { "paragraph_id": 19, "text": "Damaging the trunk would be detrimental to an elephant's survival, although in rare cases, individuals have survived with shortened ones. One trunkless elephant has been observed to graze using its lips with its hind legs in the air and balancing on its front knees. Floppy trunk syndrome is a condition of trunk paralysis recorded in African bush elephants and involves the degeneration of the peripheral nerves and muscles. The disorder has been linked to lead poisoning.", "title": "Anatomy" }, { "paragraph_id": 20, "text": "Elephants usually have 26 teeth: the incisors, known as the tusks; 12 deciduous premolars; and 12 molars. Unlike most mammals, teeth are not replaced by new ones emerging from the jaws vertically. Instead, new teeth start at the back of the mouth and push out the old ones. The first chewing tooth on each side of the jaw falls out when the elephant is two to three years old. This is followed by four more tooth replacements at the ages of four to six, 9–15, 18–28, and finally in their early 40s. The final (usually sixth) set must last the elephant the rest of its life. Elephant teeth have loop-shaped dental ridges, which are more diamond-shaped in African elephants.", "title": "Anatomy" }, { "paragraph_id": 21, "text": "The tusks of an elephant have modified second incisors in the upper jaw. They replace deciduous milk teeth at 6–12 months of age and keep growing at about 17 cm (7 in) a year. As the tusk develops, it is topped with smooth, cone-shaped enamel that eventually wanes. The dentine is known as ivory and has a cross-section of intersecting lines, known as \"engine turning\", which create diamond-shaped patterns. Being living tissue, tusks are fairly soft and about as dense as the mineral calcite. The tusk protrudes from a socket in the skull, and most of it is external. At least one-third of the tusk contains the pulp, and some have nerves that stretch even further. 
Thus, it would be difficult to remove it without harming the animal. When removed, ivory will dry up and crack if not kept cool and wet. Tusks function in digging, debarking, marking, moving objects, and fighting.", "title": "Anatomy" }, { "paragraph_id": 22, "text": "Elephants are usually right- or left-tusked, similar to humans, who are typically right- or left-handed. The dominant, or \"master\" tusk, is typically more worn down, as it is shorter and blunter. For African elephants, tusks are present in both males and females, and are around the same length in both sexes, reaching up to 300 cm (9 ft 10 in), but those of males tend to be more massive. In the Asian species, only the males have large tusks. Female Asians have very small tusks, or none at all. Tuskless males exist and are particularly common among Sri Lankan elephants. Asian males can have tusks as long as Africans', but they are usually slimmer and lighter; the largest recorded was 302 cm (9 ft 11 in) long and weighed 39 kg (86 lb). Hunting for elephant ivory in Africa and Asia has led to natural selection for shorter tusks and tusklessness.", "title": "Anatomy" }, { "paragraph_id": 23, "text": "An elephant's skin is generally very tough, at 2.5 cm (1 in) thick on the back and parts of the head. The skin around the mouth, anus, and inside of the ear is considerably thinner. Elephants are typically grey, but African elephants look brown or reddish after rolling in coloured mud. Asian elephants have some patches of depigmentation, particularly on the head. Calves have brownish or reddish hair, with the head and back being particularly hairy. As elephants mature, their hair darkens and becomes sparser, but dense concentrations of hair and bristles remain on the tip of the tail and parts of the head and genitals. Normally, the skin of an Asian elephant is covered with more hair than its African counterpart. Their hair is thought to help them lose heat in their hot environments.", "title": "Anatomy" }, { "paragraph_id": 24, "text": "Although tough, an elephant's skin is very sensitive and requires mud baths to maintain moisture and protect it from burning and insect bites. After bathing, the elephant will usually use its trunk to blow dust onto its body, which dries into a protective crust. Elephants have difficulty releasing heat through the skin because of their low surface-area-to-volume ratio, which is many times smaller than that of a human. They have even been observed lifting up their legs to expose their soles to the air. Elephants only have sweat glands between the toes, but the skin allows water to disperse and evaporate, cooling the animal. In addition, cracks in the skin may reduce dehydration and allow for increased thermal regulation in the long term.", "title": "Anatomy" }, { "paragraph_id": 25, "text": "To support the animal's weight, an elephant's limbs are positioned more vertically under the body than in most other mammals. The long bones of the limbs have cancellous bones in place of medullary cavities. This strengthens the bones while still allowing haematopoiesis (blood cell creation). Both the front and hind limbs can support an elephant's weight, although 60% is borne by the front. The position of the limbs and leg bones allows an elephant to stand still for extended periods of time without tiring. Elephants are incapable of turning their manus as the ulna and radius of the front legs are secured in pronation. 
Elephants may also lack the pronator quadratus and pronator teres muscles or have very small ones. The circular feet of an elephant have soft tissues, or \"cushion pads\" beneath the manus or pes, which allow them to bear the animal's great mass. They appear to have a sesamoid, an extra \"toe\" similar in placement to a giant panda's extra \"thumb\", that also helps in weight distribution. As many as five toenails can be found on both the front and hind feet.", "title": "Anatomy" }, { "paragraph_id": 26, "text": "Elephants can move both forward and backward, but are incapable of trotting, jumping, or galloping. They can move on land only by walking or ambling: a faster gait similar to running. In walking, the legs act as pendulums, with the hips and shoulders moving up and down while the foot is planted on the ground. The fast gait does not meet all the criteria of running, since there is no point where all the feet are off the ground, although the elephant uses its legs much like other running animals, and can move faster by quickening its stride. Fast-moving elephants appear to 'run' with their front legs, but 'walk' with their hind legs and can reach a top speed of 25 km/h (16 mph). At this speed, most other quadrupeds are well into a gallop, even accounting for leg length. Spring-like kinetics could explain the difference between the motion of elephants and other animals. The cushion pads expand and contract, and reduce both the pain and noise that would come from a very heavy animal moving. Elephants are capable swimmers: they can swim for up to six hours while staying at the surface, moving at 2.1 km/h (1 mph) and traversing up to 48 km (30 mi) continuously.", "title": "Anatomy" }, { "paragraph_id": 27, "text": "The brain of an elephant weighs 4.5–5.5 kg (10–12 lb) compared to 1.6 kg (4 lb) for a human brain. It is the largest of all terrestrial mammals. While the elephant brain is larger overall, it is proportionally smaller than the human brain. At birth, an elephant's brain already weighs 30–40% of its adult weight. The cerebrum and cerebellum are well developed, and the temporal lobes are so large that they bulge out laterally. Their temporal lobes are proportionally larger than those of other animals, including humans. The throat of an elephant appears to contain a pouch where it can store water for later use. The larynx of the elephant is the largest known among mammals. The vocal folds are anchored close to the epiglottis base. When comparing an elephant's vocal folds to those of a human, an elephant's are proportionally longer, thicker, with a greater cross-sectional area. In addition, they are located further up the vocal tract with an acute slope.", "title": "Anatomy" }, { "paragraph_id": 28, "text": "The heart of an elephant weighs 12–21 kg (26–46 lb). Its apex has two pointed ends, an unusual trait among mammals. In addition, the ventricles of the heart split towards the top, a trait also found in sirenians. When upright, the elephant's heart beats around 28 beats per minute and actually speeds up to 35 beats when it lies down. The blood vessels are thick and wide and can hold up under high blood pressure. The lungs are attached to the diaphragm, and breathing relies less on the expanding of the ribcage. Connective tissue exists in place of the pleural cavity. This may allow the animal to deal with the pressure differences when its body is underwater and its trunk is breaking the surface for air. Elephants breathe mostly with the trunk but also with the mouth. 
They have a hindgut fermentation system, and their large and small intestines together reach 35 m (115 ft) in length. Less than half of an elephant's food intake gets digested, despite the process lasting a day. An elephant's kidneys can produce more than 50 litres of urine per day.", "title": "Anatomy" }, { "paragraph_id": 29, "text": "A male elephant's testes, like other Afrotheria, are internally located near the kidneys. The penis can be as long as 100 cm (39 in) with a 16 cm (6 in) wide base. It curves to an 'S' when fully erect and has an orifice shaped like a Y. The female's clitoris may be 40 cm (16 in). The vulva is found lower than in other herbivores, between the hind legs instead of under the tail. Determining pregnancy status can be difficult due to the animal's large belly. The female's mammary glands occupy the space between the front legs, which puts the suckling calf within reach of the female's trunk. Elephants have a unique organ, the temporal gland, located on both sides of the head. This organ is associated with sexual behaviour, and males secrete a fluid from it when in musth. Females have also been observed with these secretions.", "title": "Anatomy" }, { "paragraph_id": 30, "text": "Elephants are herbivorous and will eat leaves, twigs, fruit, bark, grass, and roots. African elephants mostly browse, while Asian elephants mainly graze. They can eat as much as 300 kg (660 lb) of food and drink 40 L (11 US gal) of water in a day. Elephants tend to stay near water sources. They have morning, afternoon, and nighttime feeding sessions. At midday, elephants rest under trees and may doze off while standing. Sleeping occurs at night while the animal is lying down. Elephants average 3–4 hours of sleep per day. Both males and family groups typically move no more than 20 km (12 mi) a day, but distances as far as 180 km (112 mi) have been recorded in the Etosha region of Namibia. Elephants go on seasonal migrations in response to changes in environmental conditions. In northern Botswana, they travel 325 km (202 mi) to the Chobe River after the local waterholes dry up in late August.", "title": "Behaviour and ecology" }, { "paragraph_id": 31, "text": "Because of their large size, elephants have a huge impact on their environments and are considered keystone species. Their habit of uprooting trees and undergrowth can transform savannah into grasslands; smaller herbivores can access trees mowed down by elephants. When they dig for water during droughts, they create waterholes that can be used by other animals. When they use waterholes, they end up making them bigger. At Mount Elgon, elephants dig through caves and pave the way for ungulates, hyraxes, bats, birds and insects. Elephants are important seed dispersers; African forest elephants consume and deposit many seeds over great distances, with either no effect or a positive effect on germination. In Asian forests, large seeds require giant herbivores like elephants and rhinoceros for transport and dispersal. This ecological niche cannot be filled by the smaller Malayan tapir. Because most of the food elephants eat goes undigested, their dung can provide food for other animals, such as dung beetles and monkeys. Elephants can have a negative impact on ecosystems. At Murchison Falls National Park in Uganda, elephant numbers have threatened several species of small birds that depend on woodlands. 
Their weight causes the soil to compress, leading to runoff and erosion.", "title": "Behaviour and ecology" }, { "paragraph_id": 32, "text": "Elephants typically coexist peacefully with other herbivores, which will usually stay out of their way. Some aggressive interactions between elephants and rhinoceros have been recorded. The size of adult elephants makes them nearly invulnerable to predators. Calves may be preyed on by lions, spotted hyenas, and wild dogs in Africa and tigers in Asia. The lions of Savuti, Botswana, have adapted to hunting elephants, mostly calves, juveniles or even sub-adults. There are rare reports of adult Asian elephants falling prey to tigers. Elephants tend to have high numbers of parasites, particularly nematodes, compared to many other mammals. This is due to them being largely immune to predators, which would otherwise kill off many of the individuals with significant parasite loads.", "title": "Behaviour and ecology" }, { "paragraph_id": 33, "text": "Elephants are generally gregarious animals. African bush elephants in particular have a complex, stratified social structure. Female elephants spend their entire lives in tight-knit matrilineal family groups. They are led by the matriarch, who is often the eldest female. She remains leader of the group until death or if she no longer has the energy for the role; a study on zoo elephants found that the death of the matriarch led to greater stress in the surviving elephants. When her tenure is over, the matriarch's eldest daughter takes her place instead of her sister (if present). One study found that younger matriarchs take potential threats less seriously. Large family groups may split if they cannot be supported by local resources.", "title": "Behaviour and ecology" }, { "paragraph_id": 34, "text": "At Amboseli National Park, Kenya, female groups may consist of around ten members, including four adults and their dependent offspring. Here, a cow's life involves interaction with those outside her group. Two separate families may associate and bond with each other, forming what are known as bond groups. During the dry season, elephant families may aggregate into clans. These may number around nine groups, in which clans do not form strong bonds but defend their dry-season ranges against other clans. The Amboseli elephant population is further divided into the \"central\" and \"peripheral\" subpopulations.", "title": "Behaviour and ecology" }, { "paragraph_id": 35, "text": "Female Asian elephants tend to have more fluid social associations. In Sri Lanka, there appear to be stable family units or \"herds\" and larger, looser \"groups\". They have been observed to have \"nursing units\" and \"juvenile-care units\". In southern India, elephant populations may contain family groups, bond groups and possibly clans. Family groups tend to be small, with only one or two adult females and their offspring. A group containing more than two cows and their offspring is known as a \"joint family\". Malay elephant populations have even smaller family units and do not reach levels above a bond group. Groups of African forest elephants typically consist of one cow with one to three offspring. These groups appear to interact with each other, especially at forest clearings.", "title": "Behaviour and ecology" }, { "paragraph_id": 36, "text": "Adult males live separate lives. As he matures, a bull associates more with outside males or even other families. 
At Amboseli, young males may be away from their families 80% of the time by 14–15 years of age. When males permanently leave, they either live alone or with other males. The former is typical of bulls in dense forests. A dominance hierarchy exists among males, whether they are social or solitary. Dominance depends on age, size, and sexual condition. Male elephants can be quite sociable when not competing for mates and form vast and fluid social networks. Older bulls act as the leaders of these groups. The presence of older males appears to subdue the aggression and \"deviant\" behaviour of younger ones. The largest all-male groups can reach close to 150 individuals. Adult males and females come together to breed. Bulls will accompany family groups if a cow is in oestrous.", "title": "Behaviour and ecology" }, { "paragraph_id": 37, "text": "Adult males enter a state of increased testosterone known as musth. In a population in southern India, males first enter musth at 15 years old, but it is not very intense until they are older than 25. At Amboseli, no bulls under 24 were found to be in musth, while half of those aged 25–35 and all those over 35 were. In some areas, there may be seasonal influences on the timing of musths. The main characteristic of a bull's musth is a fluid discharged from the temporal gland that runs down the side of his face. Behaviours associated with musth include walking with a high and swinging head, nonsynchronous ear flapping, picking at the ground with the tusks, marking, rumbling, and urinating in the sheath. The length of this varies between males of different ages and conditions, lasting from days to months.", "title": "Behaviour and ecology" }, { "paragraph_id": 38, "text": "Males become extremely aggressive during musth. Size is the determining factor in agonistic encounters when the individuals have the same condition. In contests between musth and non-musth individuals, musth bulls win the majority of the time, even when the non-musth bull is larger. A male may stop showing signs of musth when he encounters a musth male of higher rank. Those of equal rank tend to avoid each other. Agonistic encounters typically consist of threat displays, chases, and minor sparring. Rarely do they full-on fight.", "title": "Behaviour and ecology" }, { "paragraph_id": 39, "text": "Elephants are polygynous breeders, and most copulations occur during rainfall. An oestrous cow uses pheromones in her urine and vaginal secretions to signal her readiness to mate. A bull will follow a potential mate and assess her condition with the flehmen response, which requires him to collect a chemical sample with his trunk and taste it with the vomeronasal organ at the roof of the mouth. The oestrous cycle of a cow lasts 14–16 weeks, with the follicular phase lasting 4–6 weeks and the luteal phase lasting 8–10 weeks. While most mammals have one surge of luteinizing hormone during the follicular phase, elephants have two. The first (or anovulatory) surge, appears to change the female's scent, signaling to males that she is in heat, but ovulation does not occur until the second (or ovulatory) surge. Cows over 45–50 years of age are less fertile.", "title": "Behaviour and ecology" }, { "paragraph_id": 40, "text": "Bulls engage in a behaviour known as mate-guarding, where they follow oestrous females and defend them from other males. Most mate-guarding is done by musth males, and females seek them out, particularly older ones. 
Musth appears to signal to females the condition of the male, as weak or injured males do not have normal musths. For young females, the approach of an older bull can be intimidating, so her relatives stay nearby for comfort. During copulation, the male rests his trunk on the female. The penis is mobile enough to move without the pelvis. Before mounting, it curves forward and upward. Copulation lasts about 45 seconds and does not involve pelvic thrusting or an ejaculatory pause.", "title": "Behaviour and ecology" }, { "paragraph_id": 41, "text": "Homosexual behaviour is frequent in both sexes. As in heterosexual interactions, this involves mounting. Male elephants sometimes stimulate each other by playfighting, and \"championships\" may form between old bulls and younger males. Female same-sex behaviours have been documented only in captivity, where they engage in mutual masturbation with their trunks.", "title": "Behaviour and ecology" }, { "paragraph_id": 42, "text": "Gestation in elephants typically lasts between one and a half and two years and the female will not give birth again for at least four years. The relatively long pregnancy is supported by several corpus luteums and gives the foetus more time to develop, particularly the brain and trunk. Births tend to take place during the wet season. Typically, only a single young is born, but twins sometimes occur. Calves are born roughly 85 cm (33 in) tall and with a weight of around 120 kg (260 lb). They are precocial and quickly stand and walk to follow their mother and family herd. A newborn calf will attract the attention of all the herd members. Adults and most of the other young will gather around the newborn, touching and caressing it with their trunks. For the first few days, the mother limits access to her young. Alloparenting – where a calf is cared for by someone other than its mother – takes place in some family groups. Allomothers are typically aged two to twelve years.", "title": "Behaviour and ecology" }, { "paragraph_id": 43, "text": "For the first few days, the newborn is unsteady on its feet and needs its mother's help. It relies on touch, smell, and hearing, as its eyesight is less developed. With little coordination in its trunk, it can only flop it around which may cause it to trip. When it reaches its second week, the calf can walk with more balance and has more control over its trunk. After its first month, the trunk can grab and hold objects, but still lacks sucking abilities, and the calf must bend down to drink. It continues to stay near its mother as it is still reliant on her. For its first three months, a calf relies entirely on its mother's milk, after which it begins to forage for vegetation and can use its trunk to collect water. At the same time, there is progress in lip and leg movements. By nine months, mouth, trunk and foot coordination are mastered. Suckling bouts tend to last 2–4 min/hr for a calf younger than a year. After a year, a calf is fully capable of grooming, drinking, and feeding itself. It still needs its mother's milk and protection until it is at least two years old. Suckling after two years may improve growth, health and fertility.", "title": "Behaviour and ecology" }, { "paragraph_id": 44, "text": "Play behaviour in calves differs between the sexes; females run or chase each other while males play-fight. The former are sexually mature by the age of nine years while the latter become mature around 14–15 years. Adulthood starts at about 18 years of age in both sexes. 
Elephants have long lifespans, reaching 60–70 years of age. Lin Wang, a captive male Asian elephant, lived for 86 years.", "title": "Behaviour and ecology" }, { "paragraph_id": 45, "text": "Elephants communicate in various ways. Individuals greet one another by touching each other on the mouth, temporal glands and genitals. This allows them to pick up chemical cues. Older elephants use trunk-slaps, kicks, and shoves to control younger ones. Touching is especially important for mother–calf communication. When moving, elephant mothers will touch their calves with their trunks or feet when side-by-side or with their tails if the calf is behind them. A calf will press against its mother's front legs to signal it wants to rest and will touch her breast or leg when it wants to suckle.", "title": "Behaviour and ecology" }, { "paragraph_id": 46, "text": "Visual displays mostly occur in agonistic situations. Elephants will try to appear more threatening by raising their heads and spreading their ears. They may add to the display by shaking their heads and snapping their ears, as well as tossing around dust and vegetation. They are usually bluffing when performing these actions. Excited elephants also raise their heads and spread their ears but additionally may raise their trunks. Submissive elephants will lower their heads and trunks, as well as flatten their ears against their necks, while those that are ready to fight will bend their ears in a V shape.", "title": "Behaviour and ecology" }, { "paragraph_id": 47, "text": "Elephants produce several vocalisations—some of which pass though the trunk—for both short and long range communication. This includes trumpeting, bellowing, roaring, growling, barking, snorting, and rumbling. Elephants can produce infrasonic rumbles. For Asian elephants, these calls have a frequency of 14–24 Hz, with sound pressure levels of 85–90 dB and last 10–15 seconds. For African elephants, calls range from 15 to 35 Hz with sound pressure levels as high as 117 dB, allowing communication for many kilometres, possibly over 10 km (6 mi). Elephants are known to communicate with seismics, vibrations produced by impacts on the earth's surface or acoustical waves that travel through it. An individual foot stomping or mock charging can create seismic signals that can be heard at travel distances of up to 32 km (20 mi). Seismic waveforms produced by rumbles travel 16 km (10 mi).", "title": "Behaviour and ecology" }, { "paragraph_id": 48, "text": "Elephants are among the most intelligent animals. They exhibit mirror self-recognition, an indication of self-awareness and cognition that has also been demonstrated in some apes and dolphins. One study of a captive female Asian elephant suggested the animal was capable of learning and distinguishing between several visual and some acoustic discrimination pairs. This individual was even able to score a high accuracy rating when re-tested with the same visual pairs a year later. Elephants are among the species known to use tools. An Asian elephant has been observed fine-tuning branches for use as flyswatters. Tool modification by these animals is not as advanced as that of chimpanzees. Elephants are popularly thought of as having an excellent memory. This could have a factual basis; they possibly have cognitive maps which give them long lasting memories of their environment on a wide scale. 
Individuals may be able to remember where their family members are located.", "title": "Behaviour and ecology" }, { "paragraph_id": 49, "text": "Scientists debate the extent to which elephants feel emotion. They are attracted to the bones of their own kind, regardless of whether they are related. As with chimpanzees and dolphins, a dying or dead elephant may elicit attention and aid from others, including those from other groups. This has been interpreted as expressing \"concern\"; however, the Oxford Companion to Animal Behaviour (1987) said that \"one is well advised to study the behaviour rather than attempting to get at any underlying emotion\".", "title": "Behaviour and ecology" }, { "paragraph_id": 50, "text": "African bush elephants were listed as Endangered by the International Union for Conservation of Nature (IUCN) in 2021, and African forest elephants were listed as Critically Endangered in the same year. In 1979, Africa had an estimated population of at least 1.3 million elephants, possibly as high as 3.0 million. A decade later, the population was estimated to be 609,000; with 277,000 in Central Africa, 110,000 in Eastern Africa, 204,000 in Southern Africa, and 19,000 in Western Africa. The population of rainforest elephants was lower than anticipated, at around 214,000 individuals. Between 1977 and 1989, elephant populations declined by 74% in East Africa. After 1987, losses in elephant numbers hastened, and savannah populations from Cameroon to Somalia experienced a decline of 80%. African forest elephants had a total loss of 43%. Population trends in southern Africa were various, with unconfirmed losses in Zambia, Mozambique and Angola while populations grew in Botswana and Zimbabwe and were stable in South Africa. The IUCN estimated that total population in Africa is estimated at around to 415,000 individuals for both species combined as of 2016.", "title": "Conservation" }, { "paragraph_id": 51, "text": "African elephants receive at least some legal protection in every country where they are found. Successful conservation efforts in certain areas have led to high population densities while failures have led to declines as high as 70% or more of the course of ten years. As of 2008, local numbers were controlled by contraception or translocation. Large-scale cullings stopped in the late 1980s and early 1990s. In 1989, the African elephant was listed under Appendix I by the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), making trade illegal. Appendix II status (which allows restricted trade) was given to elephants in Botswana, Namibia, and Zimbabwe in 1997 and South Africa in 2000. In some countries, sport hunting of the animals is legal; Botswana, Cameroon, Gabon, Mozambique, Namibia, South Africa, Tanzania, Zambia, and Zimbabwe have CITES export quotas for elephant trophies.", "title": "Conservation" }, { "paragraph_id": 52, "text": "In 2020, the IUCN listed the Asian elephant as endangered due to the population declining by half over \"the last three generations\". Asian elephants once ranged from Western to East Asia and south to Sumatra. and Java. It is now extinct in these areas, and the current range of Asian elephants is highly fragmented. The total population of Asian elephants is estimated to be around 40,000–50,000, although this may be a loose estimate. Around 60% of the population is in India. 
Although Asian elephants are declining in numbers overall, particularly in Southeast Asia, the population in Sri Lanka appears to have risen and elephant numbers in the Western Ghats may have stablised.", "title": "Conservation" }, { "paragraph_id": 53, "text": "The poaching of elephants for their ivory, meat and hides has been one of the major threats to their existence. Historically, numerous cultures made ornaments and other works of art from elephant ivory, and its use was comparable to that of gold. The ivory trade contributed to the fall of the African elephant population in the late 20th century. This prompted international bans on ivory imports, starting with the United States in June 1989, and followed by bans in other North American countries, western European countries, and Japan. Around the same time, Kenya destroyed all its ivory stocks. Ivory was banned internationally by CITES in 1990. Following the bans, unemployment rose in India and China, where the ivory industry was important economically. By contrast, Japan and Hong Kong, which were also part of the industry, were able to adapt and were not as badly affected. Zimbabwe, Botswana, Namibia, Zambia, and Malawi wanted to continue the ivory trade and were allowed to, since their local populations were healthy, but only if their supplies were from culled individuals or those that died of natural causes.", "title": "Conservation" }, { "paragraph_id": 54, "text": "The ban allowed the elephant to recover in parts of Africa. In February 2012, 650 elephants in Bouba Njida National Park, Cameroon, were slaughtered by Chadian raiders. This has been called \"one of the worst concentrated killings\" since the ivory ban. Asian elephants are potentially less vulnerable to the ivory trade, as females usually lack tusks. Still, members of the species have been killed for their ivory in some areas, such as Periyar National Park in India. China was the biggest market for poached ivory but announced they would phase out the legal domestic manufacture and sale of ivory products in May 2015, and in September 2015, China and the United States said \"they would enact a nearly complete ban on the import and export of ivory\" due to causes of extinction.", "title": "Conservation" }, { "paragraph_id": 55, "text": "Other threats to elephants include habitat destruction and fragmentation. The Asian elephant lives in areas with some of the highest human populations and may be confined to small islands of forest among human-dominated landscapes. Elephants commonly trample and consume crops, which contributes to conflicts with humans, and both elephants and humans have died by the hundreds as a result. Mitigating these conflicts is important for conservation. One proposed solution is the protection of wildlife corridors which give populations greater interconnectivity and space. Chili pepper products as well as guarding with defense tools have been found to be effective in preventing crop-raiding by elephants. Less effective tactics include beehive and electric fences.", "title": "Conservation" }, { "paragraph_id": 56, "text": "Elephants have been working animals since at least the Indus Valley civilization over 4,000 years ago and continue to be used in modern times. There were 13,000–16,500 working elephants employed in Asia in 2000. These animals are typically captured from the wild when they are 10–20 years old when they are both more trainable and can work for more years. 
They were traditionally captured with traps and lassos, but since 1950, tranquillisers have been used. Individuals of the Asian species have often been trained as working animals. Asian elephants are used to carry and pull both objects and people in and out of areas, as well as to lead people in religious celebrations. They are valued over mechanised tools as they can perform the same tasks but in more difficult terrain, with strength, memory, and delicacy. Elephants can learn over 30 commands. Musth bulls are difficult and dangerous to work with and so are chained up until their condition passes.", "title": "Human relations" }, { "paragraph_id": 57, "text": "In India, many working elephants are alleged to have been subject to abuse. They and other captive elephants are thus protected under The Prevention of Cruelty to Animals Act of 1960. In both Myanmar and Thailand, deforestation and other economic factors have resulted in sizable populations of unemployed elephants, which has led to health problems for the elephants themselves as well as economic and safety problems for the people amongst whom they live.", "title": "Human relations" }, { "paragraph_id": 58, "text": "The practice of working elephants has also been attempted in Africa. The taming of African elephants in the Belgian Congo began by decree of Leopold II of Belgium during the 19th century and continues to the present with the Api Elephant Domestication Centre.", "title": "Human relations" }, { "paragraph_id": 59, "text": "Historically, elephants were considered formidable instruments of war. They were described in Sanskrit texts as far back as 1500 BC. From South Asia, the use of elephants in warfare spread west to Persia and east to Southeast Asia. The Persians used them during the Achaemenid Empire (between the 6th and 4th centuries BC), while Southeast Asian states first used war elephants possibly as early as the 5th century BC and continued to the 20th century. War elephants were also employed in the Mediterranean and North Africa throughout the classical period, beginning with the reign of Ptolemy II in Egypt. The Carthaginian general Hannibal famously took African elephants across the Alps during his war with the Romans and reached the Po Valley in 218 BC with all of them alive, but they died of disease and combat a year later.", "title": "Human relations" }, { "paragraph_id": 60, "text": "Their heads and sides were equipped with armour; the trunk may have had a sword tied to it, and tusks were sometimes covered with sharpened iron or brass. Trained elephants would attack both humans and horses with their tusks. They might have grasped an enemy soldier with the trunk and tossed him to their mahout, or pinned the soldier to the ground and speared him. Some shortcomings of war elephants included their great visibility, which made them easy to target, and limited maneuverability compared to horses. Alexander the Great achieved victory over armies with war elephants by having his soldiers injure the trunks and legs of the animals, which caused them to panic and become uncontrollable.", "title": "Human relations" }, { "paragraph_id": 61, "text": "Elephants have traditionally been a major part of zoos and circuses around the world. In circuses, they are trained to perform tricks. The most famous circus elephant was probably Jumbo (1861 – 15 September 1885), who was a major attraction in the Barnum & Bailey Circus.
These animals do not reproduce well in captivity, due to the difficulty of handling musth bulls and limited understanding of female oestrous cycles. Asian elephants were always more common than their African counterparts in modern zoos and circuses. After CITES listed the Asian elephant under Appendix I in 1975, imports of the species almost stopped by the end of the 1980s. Subsequently, the US received many captive African elephants from Zimbabwe, which had an overabundance of the animals.", "title": "Human relations" }, { "paragraph_id": 62, "text": "Keeping elephants in zoos has met with some controversy. Proponents of zoos argue that they allow easy access to the animals and provide funds and knowledge for preserving their natural habitats, as well as safekeeping for the species. Opponents claim that animals in zoos are under physical and mental stress. Elephants have been recorded displaying stereotypical behaviours in the form of wobbling the body or head and pacing the same route both forwards and backwards. This has been observed in 54% of individuals in UK zoos. Elephants in European zoos appear to have shorter lifespans than their wild counterparts, at only 17 years, although other studies suggest that zoo elephants live just as long.", "title": "Human relations" }, { "paragraph_id": 63, "text": "The use of elephants in circuses has also been controversial; the Humane Society of the United States has accused circuses of mistreating and distressing their animals. In testimony to a US federal court in 2009, Barnum & Bailey Circus CEO Kenneth Feld acknowledged that circus elephants are struck behind their ears, under their chins and on their legs with metal-tipped prods, called bull hooks or ankus. Feld stated that these practices are necessary to protect circus workers and acknowledged that an elephant trainer was rebuked for using an electric prod on an elephant. Despite this, he denied that any of these practices hurt the animals. Some trainers have tried to train elephants without the use of physical punishment. Ralph Helfer is known to have relied on positive reinforcement when training his animals. The Barnum & Bailey Circus retired its touring elephants in May 2016.", "title": "Human relations" }, { "paragraph_id": 64, "text": "Elephants can exhibit bouts of aggressive behaviour and engage in destructive actions against humans. In Africa, groups of adolescent elephants damaged homes in villages after cullings in the 1970s and 1980s. Because of the timing, these attacks have been interpreted as vindictive. In parts of India, male elephants have entered villages at night, destroying homes and killing people. From 2000 to 2004, 300 people died in Jharkhand, and in Assam, 239 people were reportedly killed between 2001 and 2006. Throughout the country, 1,500 people were killed by elephants between 2019 and 2022, which led to 300 elephants being killed in kind. Local people have reported their belief that some elephants were drunk during their attacks, though officials have disputed this. Purportedly drunk elephants attacked an Indian village in December 2002, killing six people, which led to the retaliatory slaughter of about 200 elephants by locals.", "title": "Human relations" }, { "paragraph_id": 65, "text": "Elephants have a universal presence in global culture. They have been represented in art since Paleolithic times. Africa, in particular, contains many examples of elephant rock art, especially in the Sahara and southern Africa.
In Asia, the animals are depicted as motifs in Hindu and Buddhist shrines and temples. Elephants were often difficult to portray for people with no first-hand experience of them. The ancient Romans, who kept the animals in captivity, depicted elephants more accurately than medieval Europeans, who portrayed them more like fantasy creatures, with horse-, bovine- and boar-like traits, and trumpet-like trunks. As Europeans gained more access to captive elephants during the 15th century, depictions of them became more accurate, including one made by Leonardo da Vinci.", "title": "Human relations" }, { "paragraph_id": 66, "text": "Elephants have been the subject of religious beliefs. The Mbuti people of central Africa believe that the souls of their dead ancestors reside in elephants. Similar ideas existed among other African societies, who believed that their chiefs would be reincarnated as elephants. During the 10th century AD, the people of Igbo-Ukwu, in modern-day Nigeria, placed elephant tusks underneath their dead leader's feet in the grave. The animals' importance is only totemic in Africa but is much more significant in Asia. In Sumatra, elephants have been associated with lightning. Likewise in Hinduism, they are linked with thunderstorms, as Airavata, the father of all elephants, represents both lightning and rainbows. One of the most important Hindu deities, the elephant-headed Ganesha, is ranked equal with the supreme gods Shiva, Vishnu, and Brahma in some traditions. Ganesha is associated with writers and merchants, and it is believed that he can give people success as well as grant them their desires, but can also take these things away. In Buddhism, Buddha is said to have been a white elephant reincarnated as a human.", "title": "Human relations" }, { "paragraph_id": 67, "text": "In Western popular culture, elephants symbolise the exotic, especially since – as with the giraffe, hippopotamus and rhinoceros – there are no similar animals familiar to Western audiences. As characters, elephants are most common in children's stories, where they are portrayed positively. They are typically surrogates for humans with ideal human values. Many stories tell of isolated young elephants returning to or finding a family, such as \"The Elephant's Child\" from Rudyard Kipling's Just So Stories, Disney's Dumbo, and Kathryn and Byron Jackson's The Saggy Baggy Elephant. Other elephant heroes given human qualities include Jean de Brunhoff's Babar, David McKee's Elmer, and Dr. Seuss's Horton.", "title": "Human relations" }, { "paragraph_id": 68, "text": "Several cultural references emphasise the elephant's size and strangeness. For instance, a \"white elephant\" is a byword for something that is weird, unwanted, and without value. The expression \"elephant in the room\" refers to something that is being ignored but ultimately must be addressed. The story of the blind men and an elephant involves blind men touching different parts of an elephant and trying to figure out what it is.", "title": "Human relations" }, { "paragraph_id": 69, "text": "", "title": "External links" } ]
Elephants are the largest living land animals. Three living species are currently recognised: the African bush elephant, the African forest elephant, and the Asian elephant. They are the only surviving members of the family Elephantidae and the order Proboscidea; extinct relatives include mammoths and mastodons. Distinctive features of elephants include a long proboscis called a trunk, tusks, large ear flaps, pillar-like legs, and tough but sensitive grey skin. The trunk is prehensile, bringing food and water to the mouth and grasping objects. Tusks, which are derived from the incisor teeth, serve both as weapons and as tools for moving objects and digging. The large ear flaps assist in maintaining a constant body temperature as well as in communication. African elephants have larger ears and concave backs, whereas Asian elephants have smaller ears and convex or level backs. Elephants are scattered throughout sub-Saharan Africa, South Asia, and Southeast Asia and are found in different habitats, including savannahs, forests, deserts, and marshes. They are herbivorous, and they stay near water when it is accessible. They are considered to be keystone species, due to their impact on their environments. Elephants have a fission–fusion society, in which multiple family groups come together to socialise. Females (cows) tend to live in family groups, which can consist of one female with her calves or several related females with offspring. The leader of a female group, usually the oldest cow, is known as the matriarch. Males (bulls) leave their family groups when they reach puberty and may live alone or with other males. Adult bulls mostly interact with family groups when looking for a mate. They enter a state of increased testosterone and aggression known as musth, which helps them gain dominance over other males as well as reproductive success. Calves are the centre of attention in their family groups and rely on their mothers for as long as three years. Elephants can live up to 70 years in the wild. They communicate by touch, sight, smell, and sound; elephants use infrasound and seismic communication over long distances. Elephant intelligence has been compared with that of primates and cetaceans. They appear to have self-awareness, and possibly show concern for dying and dead individuals of their kind. African bush elephants and Asian elephants are listed as endangered and African forest elephants as critically endangered by the International Union for Conservation of Nature (IUCN). One of the biggest threats to elephant populations is the ivory trade, as the animals are poached for their ivory tusks. Other threats to wild elephants include habitat destruction and conflicts with local people. Elephants are used as working animals in Asia. In the past, they were used in war; today, they are often controversially put on display in zoos, or employed for entertainment in circuses. Elephants have an iconic status in human culture, and have been widely featured in art, folklore, religion, literature, and popular culture.
2001-10-09T15:32:29Z
2023-12-25T03:09:03Z
[ "Template:About", "Template:Convert", "Template:LSJ", "Template:Pp-move-indef", "Template:Use dmy dates", "Template:Cite dictionary", "Template:Cite web", "Template:Cite magazine", "Template:Cbignore", "Template:Refend", "Template:Sister project links", "Template:Cvt", "Template:Redirect", "Template:Anchor", "Template:Cite news", "Template:Proboscidea", "Template:Authority control", "Template:Short description", "Template:Pp-vandalism", "Template:Circa", "Template:Cladogram", "Template:Main article", "Template:Further", "Template:Reflist", "Template:Elephants", "Template:Use British English", "Template:Paraphyletic group", "Template:Gloss", "Template:Multiple image", "Template:Main", "Template:Cite journal", "Template:Transl", "Template:Multiple images", "Template:Portal", "Template:Citation", "Template:Refbegin", "Template:See also", "Template:Proboscidea Genera", "Template:Lang", "Template:Cite book", "Template:Cite iucn", "Template:Featured article" ]
https://en.wikipedia.org/wiki/Elephant
9,281
Evolutionary linguistics
Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language. Evolutionary linguists consider linguistics as a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language. Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics. A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill in gaps of knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals. For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct; or that it depends on a single mutation which has caused a language organ to appear in the human brain. This is hypothesized to result in a crystalline grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing. Others, yet, liken languages to living organisms. Languages are considered analogous to a parasite or populations of mind-viruses. There is so far little scientific evidence for any of these claims, and some of them have been labelled as pseudoscience. Although pre-Darwinian theorists had compared languages to living organisms as a metaphor, the comparison was first taken literally in 1863 by the historical linguist August Schleicher, who was inspired by Charles Darwin's On the Origin of Species. At the time, there was not enough evidence to prove that Darwin's theory of natural selection was correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution of species. A review of Schleicher's book Darwinism as Tested by the Science of Language appeared in the first issue of the journal Nature in 1870. Darwin reiterated Schleicher's proposition in his 1871 book The Descent of Man, claiming that languages are comparable to species, and that language change occurs through natural selection as words 'struggle for life'. Darwin believed that languages had evolved from animal mating calls. Darwinists considered the concept of language creation unscientific. August Schleicher and his friend Ernst Haeckel were keen gardeners and regarded the study of cultures as a type of botany, with different species competing for the same living space. Similar ideas were later advocated by politicians who wanted to appeal to working-class voters, not least by the National Socialists, who subsequently included the concept of struggle for living space in their agenda. Highly influential until the end of World War II, social Darwinism was eventually banished from human sciences, leading to a strict separation of natural and sociocultural studies. This gave rise to the dominance of structural linguistics in Europe.
There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centre of humanistic thinking. In the United States, structuralism was, however, fended off by the advocates of behavioural psychology, a linguistics framework nicknamed 'American structuralism'. It was eventually replaced by the approach of Noam Chomsky, who published a modification of Louis Hjelmslev's formal structuralist theory, claiming that syntactic structures are innate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power following Spring 1968 at MIT. Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted the post-structuralists in the Science Wars of the late 1990s. The turn of the century saw a new academic funding policy where interdisciplinary research became favoured, effectively directing research funds to biological humanities. The decline of structuralism was evident by 2015, with the Sorbonne having lost its former spirit. Chomsky eventually claimed that syntactic structures are caused by a random mutation in the human genome, proposing a similar explanation for other human faculties such as ethics. But Steven Pinker argued in 1990 that they are the outcome of evolutionary adaptations. At the same time as the Chomskyan paradigm of biological determinism defeated humanism, it was losing its own clout within sociobiology. Likewise, it was reported in 2015 that generative grammar was under fire in applied linguistics and in the process of being replaced with usage-based linguistics, a derivative of Richard Dawkins's memetics that treats linguistic units as replicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestseller The Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky's Universal Grammar, grouped under different brands, including a framework called Cognitive Linguistics (with capitalised initials), and 'functional' (adaptational) linguistics (not to be confused with functional linguistics) to confront both Chomsky and the humanists. The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics and linguistic typology, while the generative approach has maintained its position in general linguistics, especially syntax, and in computational linguistics. Evolutionary linguistics is part of a wider framework of Universal Darwinism. In this view, linguistics is seen as an ecological environment for research traditions struggling for the same resources. According to David Hull, these traditions correspond to species in biology. Relationships between research traditions can be symbiotic, competitive or parasitic. An adaptation of Hull's theory in linguistics is proposed by William Croft. He argues that the Darwinian method is more advantageous than linguistic models based on physics, structuralist sociology, or hermeneutics.
Evolutionary linguistics is often divided into functionalism and formalism, concepts which are not to be confused with functionalism and formalism in the humanistic reference. Functional evolutionary linguistics considers languages as adaptations to the human mind. The formalist view regards them as crystallised or non-adaptational. The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated. It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning, which is a 'metaphorical' version of image-based reasoning. Language is not considered as a separate area of cognition, but as coinciding with general cognitive capacities, such as perception, attention, motor skills, and spatial and visual processing. It is argued to function according to the same principles as these. It is thought that the brain links action schemes to form–meaning pairs, which are called constructions. Cognitive linguistic approaches to syntax are called cognitive and construction grammar. Also deriving from memetics and other cultural replicator theories, these can study the natural or social selection and adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language as a population of linguistic units. The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been given. The units that correspond to replicators or mind-viruses in memetics are called linguemes in Croft's theory of Utterance Selection (TUS), and likewise linguemes or constructions in construction grammar and usage-based linguistics; and metaphors, frames or schemas in cognitive and construction grammar. The reference of memetics has been largely replaced with that of a Complex Adaptive System. In current linguistics, this term covers a wide range of evolutionary notions while maintaining the Neo-Darwinian concepts of replication and replicator population. Functional evolutionary linguistics is not to be confused with functional humanistic linguistics. Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th century advances in crystallography, Schleicher argued that different types of languages are like plants, animals and crystals. The idea of linguistic structures as frozen drops was revived in tagmemics, an approach to linguistics with the goal of uncovering divine symmetries underlying all languages, as if caused by the Creation. In modern biolinguistics, the X-bar tree is argued to be like natural systems such as ferromagnetic droplets and botanic forms. Generative grammar considers syntactic structures similar to snowflakes. It is hypothesised that such patterns are caused by a mutation in humans. The formal–structural evolutionary aspect of linguistics is not to be confused with structural linguistics. There was some hope of a breakthrough with the discovery of the FOXP2 gene. There is little support, however, for the idea that FOXP2 is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech. There is no evidence that people have a language instinct. Memetics is widely discredited as pseudoscience, and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience.
All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and brain structures are shaped by genes. Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics. Ferdinand de Saussure commented on 19th century evolutionary linguistics: "Language was considered a specific sphere, a fourth natural kingdom; this led to methods of reasoning which would have caused astonishment in other sciences. Today one cannot read a dozen lines written at that time without being struck by absurdities of reasoning and by the terminology used to justify these absurdities." Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics a positive development. Esa Itkonen nonetheless deems the revival of Darwinism a hopeless enterprise: "There is ... an application of intelligence in linguistic change which is absent in biological evolution; and this suffices to make the two domains totally disanalogous ... [Grammaticalisation depends on] cognitive processes, ultimately serving the goal of problem solving, which intelligent entities like humans must perform all the time, but which biological entities like genes cannot perform. Trying to eliminate this basic difference leads to confusion." Itkonen also points out that the principles of natural selection are not applicable because language innovation and acceptance have the same source, which is the speech community. In biological evolution, mutation and selection have different sources. This makes it possible for people to change their languages, but not their genotype.
[ { "paragraph_id": 0, "text": "Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language. Evolutionary linguists consider linguistics as a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language. Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics.", "title": "" }, { "paragraph_id": 1, "text": "A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill in gaps of knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals.", "title": "" }, { "paragraph_id": 2, "text": "For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct; or that it depends on a single mutation which has caused a language organ to appear in the human brain. This is hypothesized to result in a crystalline grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing. Others, yet, liken languages to living organisms. Languages are considered analogous to a parasite or populations of mind-viruses. There is so far little scientific evidence for any of these claims, and some of them have been labelled as pseudoscience.", "title": "" }, { "paragraph_id": 3, "text": "Although pre-Darwinian theorists had compared languages to living organisms as a metaphor, the comparison was first taken literally in 1863 by the historical linguist August Schleicher who was inspired by Charles Darwin's On the Origin of Species. At the time there was not enough evidence to prove that Darwin's theory of natural selection was correct. Schleicher proposed that linguistics could be used as a testing ground for the study of the evolution of species. A review of Schleicher's book Darwinism as Tested by the Science of Language appeared in the first issue of Nature journal in 1870. Darwin reiterated Schleicher's proposition in his 1871 book The Descent of Man, claiming that languages are comparable to species, and that language change occurs through natural selection as words 'struggle for life'. Darwin believed that languages had evolved from animal mating calls. Darwinists considered the concept of language creation as unscientific.", "title": "History" }, { "paragraph_id": 4, "text": "August Schleicher and his friend Ernst Haeckel were keen gardeners and regarded the study of cultures as a type of botany, with different species competing for the same living space. Similar ideas became later advocated by politicians who wanted to appeal to working class voters, not least by the national socialists who subsequently included the concept of struggle for living space in their agenda. 
Highly influential until the end of World War II, social Darwinism was eventually banished from human sciences, leading to a strict separation of natural and sociocultural studies.", "title": "History" }, { "paragraph_id": 5, "text": "This gave rise to the dominance of structural linguistics in Europe. There had long been a dispute between the Darwinists and the French intellectuals, with the topic of language evolution famously having been banned by the Paris Linguistic Society as early as 1866. Ferdinand de Saussure proposed structuralism to replace evolutionary linguistics in his Course in General Linguistics, published posthumously in 1916. The structuralists rose to academic political power in human and social sciences in the aftermath of the student revolts of Spring 1968, establishing the Sorbonne as an international centre of humanistic thinking.", "title": "History" }, { "paragraph_id": 6, "text": "In the United States, structuralism was, however, fended off by the advocates of behavioural psychology, a linguistics framework nicknamed 'American structuralism'. It was eventually replaced by the approach of Noam Chomsky, who published a modification of Louis Hjelmslev's formal structuralist theory, claiming that syntactic structures are innate. An active figure in peace demonstrations in the 1950s and 1960s, Chomsky rose to academic political power following Spring 1968 at MIT.", "title": "History" }, { "paragraph_id": 7, "text": "Chomsky became an influential opponent of the French intellectuals during the following decades, and his supporters successfully confronted the post-structuralists in the Science Wars of the late 1990s. The turn of the century saw a new academic funding policy where interdisciplinary research became favoured, effectively directing research funds to biological humanities. The decline of structuralism was evident by 2015, with the Sorbonne having lost its former spirit.", "title": "History" }, { "paragraph_id": 8, "text": "Chomsky eventually claimed that syntactic structures are caused by a random mutation in the human genome, proposing a similar explanation for other human faculties such as ethics. But Steven Pinker argued in 1990 that they are the outcome of evolutionary adaptations.", "title": "History" }, { "paragraph_id": 9, "text": "At the same time as the Chomskyan paradigm of biological determinism defeated humanism, it was losing its own clout within sociobiology. Likewise, it was reported in 2015 that generative grammar was under fire in applied linguistics and in the process of being replaced with usage-based linguistics, a derivative of Richard Dawkins's memetics that treats linguistic units as replicators. Following the publication of memetics in Dawkins's 1976 nonfiction bestseller The Selfish Gene, many biologically inclined linguists, frustrated with the lack of evidence for Chomsky's Universal Grammar, grouped under different brands, including a framework called Cognitive Linguistics (with capitalised initials), and 'functional' (adaptational) linguistics (not to be confused with functional linguistics) to confront both Chomsky and the humanists.
The replicator approach is today dominant in evolutionary linguistics, applied linguistics, cognitive linguistics and linguistic typology, while the generative approach has maintained its position in general linguistics, especially syntax, and in computational linguistics.", "title": "History" }, { "paragraph_id": 10, "text": "Evolutionary linguistics is part of a wider framework of Universal Darwinism. In this view, linguistics is seen as an ecological environment for research traditions struggling for the same resources. According to David Hull, these traditions correspond to species in biology. Relationships between research traditions can be symbiotic, competitive or parasitic. An adaptation of Hull's theory in linguistics is proposed by William Croft. He argues that the Darwinian method is more advantageous than linguistic models based on physics, structuralist sociology, or hermeneutics.", "title": "View of linguistics" }, { "paragraph_id": 11, "text": "Evolutionary linguistics is often divided into functionalism and formalism, concepts which are not to be confused with functionalism and formalism in the humanistic reference. Functional evolutionary linguistics considers languages as adaptations to the human mind. The formalist view regards them as crystallised or non-adaptational.", "title": "Approaches" }, { "paragraph_id": 12, "text": "The adaptational view of language is advocated by various frameworks of cognitive and evolutionary linguistics, with the terms 'functionalism' and 'Cognitive Linguistics' often being equated. It is hypothesised that the evolution of the animal brain provides humans with a mechanism of abstract reasoning, which is a 'metaphorical' version of image-based reasoning. Language is not considered as a separate area of cognition, but as coinciding with general cognitive capacities, such as perception, attention, motor skills, and spatial and visual processing. It is argued to function according to the same principles as these.", "title": "Approaches" }, { "paragraph_id": 13, "text": "It is thought that the brain links action schemes to form–meaning pairs, which are called constructions. Cognitive linguistic approaches to syntax are called cognitive and construction grammar. Also deriving from memetics and other cultural replicator theories, these can study the natural or social selection and adaptation of linguistic units. Adaptational models reject a formal systemic view of language and consider language as a population of linguistic units.", "title": "Approaches" }, { "paragraph_id": 14, "text": "The bad reputation of social Darwinism and memetics has been discussed in the literature, and recommendations for new terminology have been given. The units that correspond to replicators or mind-viruses in memetics are called linguemes in Croft's theory of Utterance Selection (TUS), and likewise linguemes or constructions in construction grammar and usage-based linguistics; and metaphors, frames or schemas in cognitive and construction grammar. The reference of memetics has been largely replaced with that of a Complex Adaptive System.
In current linguistics, this term covers a wide range of evolutionary notions while maintaining the Neo-Darwinian concepts of replication and replicator population.", "title": "Approaches" }, { "paragraph_id": 15, "text": "Functional evolutionary linguistics is not to be confused with functional humanistic linguistics.", "title": "Approaches" }, { "paragraph_id": 16, "text": "Advocates of formal evolutionary explanation in linguistics argue that linguistic structures are crystallised. Inspired by 19th century advances in crystallography, Schleicher argued that different types of languages are like plants, animals and crystals. The idea of linguistic structures as frozen drops was revived in tagmemics, an approach to linguistics with the goal of uncovering divine symmetries underlying all languages, as if caused by the Creation.", "title": "Approaches" }, { "paragraph_id": 17, "text": "In modern biolinguistics, the X-bar tree is argued to be like natural systems such as ferromagnetic droplets and botanic forms. Generative grammar considers syntactic structures similar to snowflakes. It is hypothesised that such patterns are caused by a mutation in humans.", "title": "Approaches" }, { "paragraph_id": 18, "text": "The formal–structural evolutionary aspect of linguistics is not to be confused with structural linguistics.", "title": "Approaches" }, { "paragraph_id": 19, "text": "There was some hope of a breakthrough with the discovery of the FOXP2 gene. There is little support, however, for the idea that FOXP2 is 'the grammar gene' or that it had much to do with the relatively recent emergence of syntactical speech. There is no evidence that people have a language instinct. Memetics is widely discredited as pseudoscience, and neurological claims made by evolutionary cognitive linguists have been likened to pseudoscience. All in all, there does not appear to be any evidence for the basic tenets of evolutionary linguistics beyond the fact that language is processed by the brain, and brain structures are shaped by genes.", "title": "Evidence" }, { "paragraph_id": 20, "text": "Evolutionary linguistics has been criticised by advocates of (humanistic) structural and functional linguistics. Ferdinand de Saussure commented on 19th century evolutionary linguistics:", "title": "Criticism" }, { "paragraph_id": 21, "text": "\"Language was considered a specific sphere, a fourth natural kingdom; this led to methods of reasoning which would have caused astonishment in other sciences. Today one cannot read a dozen lines written at that time without being struck by absurdities of reasoning and by the terminology used to justify these absurdities.\"", "title": "Criticism" }, { "paragraph_id": 22, "text": "Mark Aronoff, however, argues that historical linguistics had its golden age during the time of Schleicher and his supporters, enjoying a place among the hard sciences, and considers the return of Darwinian linguistics a positive development. Esa Itkonen nonetheless deems the revival of Darwinism a hopeless enterprise:", "title": "Criticism" }, { "paragraph_id": 23, "text": "\"There is ... an application of intelligence in linguistic change which is absent in biological evolution; and this suffices to make the two domains totally disanalogous ... [Grammaticalisation depends on] cognitive processes, ultimately serving the goal of problem solving, which intelligent entities like humans must perform all the time, but which biological entities like genes cannot perform.
Trying to eliminate this basic difference leads to confusion.\"", "title": "Criticism" }, { "paragraph_id": 24, "text": "Itkonen also points out that the principles of natural selection are not applicable because language innovation and acceptance have the same source, which is the speech community. In biological evolution, mutation and selection have different sources. This makes it possible for people to change their languages, but not their genotype.", "title": "Criticism" } ]
Evolutionary linguistics or Darwinian linguistics is a sociobiological approach to the study of language. Evolutionary linguists consider linguistics as a subfield of sociobiology and evolutionary psychology. The approach is also closely linked with evolutionary anthropology, cognitive linguistics and biolinguistics. Studying languages as the products of nature, it is interested in the biological origin and development of language. Evolutionary linguistics is contrasted with humanistic approaches, especially structural linguistics. A main challenge in this research is the lack of empirical data: there are no archaeological traces of early human language. Computational biological modelling and clinical research with artificial languages have been employed to fill in gaps of knowledge. Although biology is understood to shape the brain, which processes language, there is no clear link between biology and specific human language structures or linguistic universals. For lack of a breakthrough in the field, there have been numerous debates about what kind of natural phenomenon language might be. Some researchers focus on the innate aspects of language. It is suggested that grammar has emerged adaptationally from the human genome, bringing about a language instinct; or that it depends on a single mutation which has caused a language organ to appear in the human brain. This is hypothesized to result in a crystalline grammatical structure underlying all human languages. Others suggest language is not crystallized, but fluid and ever-changing. Others, yet, liken languages to living organisms. Languages are considered analogous to a parasite or populations of mind-viruses. There is so far little scientific evidence for any of these claims, and some of them have been labelled as pseudoscience.
2001-03-23T01:07:48Z
2023-12-29T04:56:29Z
[ "Template:Linguistics", "Template:Blockquote", "Template:Distinguish", "Template:Col div", "Template:Colend", "Template:Cite journal", "Template:Webarchive", "Template:Short description", "Template:Evolutionary biology", "Template:Cite web", "Template:Animal communication", "Template:Portal bar", "Template:Portal", "Template:Reflist", "Template:Cite book", "Template:Evolutionary psychology" ]
https://en.wikipedia.org/wiki/Evolutionary_linguistics
9,282
ECHELON
ECHELON, originally a secret government code name, is a surveillance program (signals intelligence/SIGINT collection and analysis network) operated by the five signatory states to the UKUSA Security Agreement: Australia, Canada, New Zealand, the United Kingdom and the United States, also known as the Five Eyes. Created in the late 1960s to monitor the military and diplomatic communications of the Soviet Union and its Eastern Bloc allies during the Cold War, the ECHELON project was formally established in 1971. By the end of the 20th century, it had greatly expanded. The UKUSA intelligence community was assessed by the European Parliament (EP) in 2000 to include the signals intelligence agencies of each of the member states: Former NSA analyst Perry Fellwock, under the pseudonym Winslow Peck, first blew the whistle on ECHELON to Ramparts in 1972, when he revealed the existence of a global network of listening posts and told of his experiences working there. He also revealed the existence of nuclear weapons in Israel in 1972, the widespread involvement of CIA and NSA personnel in drugs and human smuggling, and CIA operatives leading Nationalist Chinese (Taiwan) commandos in burning villages inside PRC borders. In 1982, James Bamford, investigative journalist and author, wrote The Puzzle Palace, an in-depth look inside the workings of the NSA, then a super-secret agency, and the massive eavesdropping operation under the codename "SHAMROCK". The NSA has used many codenames, and SHAMROCK was the codename used for ECHELON prior to 1975. In 1988, Margaret Newsham, a Lockheed employee under NSA contract, disclosed the ECHELON surveillance system to members of Congress. Newsham told a member of the US Congress that the telephone calls of Strom Thurmond, a Republican US senator, were being collected by the NSA. Congressional investigators determined that "targeting of US political figures would not occur by accident, but was designed into the system from the start." Also in 1988, an article titled "Somebody's Listening", written by investigative journalist Duncan Campbell in the New Statesman, described the signals intelligence gathering activities of a program code-named "ECHELON". James Bamford describes the system as the software controlling the collection and distribution of civilian telecommunications traffic conveyed using communication satellites, with the collection being undertaken by ground stations located in the footprint of the downlink leg. A detailed description of ECHELON was provided by the New Zealand journalist Nicky Hager in his 1996 book Secret Power: New Zealand's Role in the International Spy Network. Two years later, Hager's book was cited by the European Parliament in a report titled "An Appraisal of the Technology of Political Control" (PE 168.184). In March 1999, for the first time in history, the Australian government admitted that news reports about the top secret UKUSA Agreement were true. Martin Brady, the director of Australia's Defence Signals Directorate (DSD, now known as Australian Signals Directorate, or ASD), told the Australian broadcasting channel Nine Network that the DSD "does co-operate with counterpart signals intelligence organisations overseas under the UKUSA relationship." In 2000, James Woolsey, the former Director of the US Central Intelligence Agency, confirmed that US intelligence uses interception systems and keyword searches to monitor European businesses.
Lawmakers in the United States feared that the ECHELON system could be used to monitor US citizens. According to The New York Times, the ECHELON system has been "shrouded in such secrecy that its very existence has been difficult to prove." Critics said the ECHELON system emerged from the Cold War as a "Big Brother without a cause". The program's capabilities and political implications were investigated by a committee of the European Parliament during 2000 and 2001, with a report published in 2001. In July 2000, the Temporary Committee on the ECHELON Interception System was established by the European Parliament to investigate the surveillance network. It was chaired by the Portuguese politician Carlos Coelho, who was in charge of supervising investigations throughout 2000 and 2001. In May 2001, as the committee finalised its report on the ECHELON system, a delegation travelled to Washington, D.C. to attend meetings with US officials from the following agencies and departments: All meetings were cancelled by the US government, and the committee was forced to end its trip prematurely. According to a BBC correspondent in May 2001, "The US Government still refuses to admit that Echelon even exists." In July 2001, the Committee released its final report. The EP report concluded that it seemed likely that ECHELON is a method of sorting captured signal traffic, rather than a comprehensive analysis tool. On 5 September 2001, the European Parliament voted to accept the report. The European Parliament stated in its report that the term ECHELON is used in a number of contexts, but that the evidence presented indicates that it was the name for a signals intelligence collection system. The report concludes that, on the basis of information presented, ECHELON was capable of interception and content inspection of telephone calls, fax, e-mail and other data traffic globally through the interception of communication bearers including satellite transmission, public switched telephone networks (which once carried most Internet traffic), and microwave links. Two internal NSA newsletters from January 2011 and July 2012, published as part of Edward Snowden's leaks by the website The Intercept on 3 August 2015, for the first time confirmed that the NSA used the code word ECHELON and provided some details about the scope of the program: ECHELON was part of an umbrella program with the code name FROSTING, which was established by the NSA in 1966 to collect and process data from communications satellites. FROSTING had two sub-programs: The European Parliament's Temporary Committee on the ECHELON Interception System stated, "It seems likely, in view of the evidence and the consistent pattern of statements from a very wide range of individuals and organisations, including American sources, that its name is in fact ECHELON, although this is a relatively minor detail". The US intelligence community uses many code names (see, for example, CIA cryptonym). Former NSA employee Margaret Newsham said that she worked on the configuration and installation of software that makes up the ECHELON system while employed at Lockheed Martin, from 1974 to 1984 in Sunnyvale, California, in the United States, and in Menwith Hill, England, in the UK. At that time, according to Newsham, the code name ECHELON was NSA's term for the computer network itself. Lockheed called it P415. The software programs were called SILKWORTH and SIRE. A satellite named VORTEX intercepted communications.
An image available on the internet of a fragment apparently torn from a job description shows Echelon listed along with several other code names. Britain's The Guardian newspaper summarized the capabilities of the ECHELON system as follows: A global network of electronic spy stations that can eavesdrop on telephones, faxes and computers. It can even track bank accounts. This information is stored in Echelon computers, which can keep millions of records on individuals. Officially, however, Echelon doesn't exist. Documents leaked by the former NSA contractor Edward Snowden revealed that the ECHELON system's collection of satellite data is also referred to as FORNSAT, an abbreviation for "Foreign Satellite Collection". First revealed by the European Parliament report (p. 54 ff) and confirmed later by the Edward Snowden disclosures, the following ground stations presently have, or have had, a role in intercepting transmissions from satellites and other means of communication: The ability to intercept communications depends on the medium used, be it radio, satellite, microwave, cellular or fiber-optic. During World War II and through the 1950s, high-frequency ("short-wave") radio was widely used for military and diplomatic communication and could be intercepted at great distances. The rise of geostationary communications satellites in the 1960s presented new possibilities for intercepting international communications. In 1964, plans for the establishment of the ECHELON network took off after dozens of countries agreed to establish the International Telecommunications Satellite Organization (Intelsat), which would own and operate a global constellation of communications satellites. In 1966, the first Intelsat satellite was launched into orbit. From 1970 to 1971, the Government Communications Headquarters (GCHQ) of Britain began to operate a secret signal station at Morwenstow, near Bude in Cornwall, England. The station intercepted satellite communications over the Atlantic and Indian Oceans. Soon afterwards, the US National Security Agency (NSA) built a second signal station at Yakima, near Seattle, for the interception of satellite communications over the Pacific Ocean. In 1981, GCHQ and the NSA started the construction of the first global wide area network (WAN). Soon after, Australia, Canada, and New Zealand joined the ECHELON system. The report to the European Parliament of 2001 states: "If UKUSA states operate listening stations in the relevant regions of the earth, in principle they can intercept all telephone, fax, and data traffic transmitted via such satellites." Most reports on ECHELON focus on satellite interception. Testimony before the European Parliament indicated that separate but similar UKUSA systems are in place to monitor communication through undersea cables, microwave transmissions, and other lines. The report to the European Parliament points out that interception of private communications by foreign intelligence services is not necessarily limited to the US or British foreign intelligence services. The role of satellites in point-to-point voice and data communications has largely been supplanted by fiber optics. In 2006, 99% of the world's long-distance voice and data traffic was carried over optical fiber. The proportion of international communications accounted for by satellite links is said to have decreased substantially to an amount between 0.4% and 5% in Central Europe.
Even in less-developed parts of the world, communications satellites are used largely for point-to-multipoint applications, such as video. Thus, the majority of communications can no longer be intercepted by earth stations; they can only be collected by tapping cables and intercepting line-of-sight microwave signals, which is possible only to a limited extent. British journalist Duncan Campbell and New Zealand journalist Nicky Hager said in the 1990s that the United States was exploiting ECHELON traffic for industrial espionage, rather than military and diplomatic purposes. Examples alleged by the journalists include the gearless wind turbine technology designed by the German firm Enercon and the speech technology developed by the Belgian firm Lernout & Hauspie. In 2001, the Temporary Committee on the ECHELON Interception System recommended to the European Parliament that citizens of member states routinely use cryptography in their communications to protect their privacy, because economic espionage with ECHELON has been conducted by US intelligence agencies. American author James Bamford provides an alternative view, highlighting that legislation prohibits the use of intercepted communications for commercial purposes, although he does not elaborate on how intercepted communications are used as part of an all-source intelligence process. In its report, the committee of the European Parliament stated categorically that the Echelon network was being used to intercept not only military communications, but also private and business ones. In its epigraph to the report, the parliamentary committee quoted Juvenal: "Sed quis custodiet ipsos custodes?" ("But who will watch the watchers?"). James Bamford, in The Guardian in May 2001, warned that if Echelon were to continue unchecked, it could become a "cyber secret police, without courts, juries, or the right to a defence". Alleged examples of espionage conducted by the members of the "Five Eyes" include: The first United States satellite ground station for the ECHELON collection program was built in 1971 at a military firing and training center near Yakima, Washington. The facility, which was codenamed JACKKNIFE, was an investment of ca. 21.3 million dollars and employed around 90 people. Satellite traffic was intercepted by a 30-meter single-dish antenna. The station became fully operational on 4 October 1974. It was connected with NSA headquarters at Fort Meade by a 75-baud secure Teletype orderwire channel. In 1999, the Australian Senate Joint Standing Committee on Treaties was told by Professor Desmond Ball that the Pine Gap facility was used as a ground station for a satellite-based interception network. The satellites were said to be large radio dishes between 20 and 100 meters in diameter in geostationary orbits. The original purpose of the network was to monitor telemetry from 1970s Soviet weapons, the capabilities of air defence and other radars, transmissions from satellite ground stations, and ground-based microwave communications. In 1999, Enercon, a German company and leading manufacturer of wind energy equipment, developed a breakthrough generator for wind turbines. After applying for a US patent, it learned that Kenetech, an American rival, had submitted an almost identical patent application shortly before. A former NSA employee later claimed that the NSA had secretly intercepted and monitored Enercon's data communications and conference calls and passed information regarding the new generator to Kenetech.
However, later German media reports contradicted this story, as it was revealed that the American patent in question was actually filed three years before the alleged wiretapping was said to have taken place. As German intelligence services are forbidden from engaging in industrial or economic espionage, German companies have complained that this leaves them defenceless against industrial espionage from the United States or Russia. According to Wolfgang Hoffmann, a former manager at Bayer, German intelligence services know which companies are being targeted by US intelligence agencies, but refuse to inform the companies involved.
[ { "paragraph_id": 0, "text": "ECHELON, originally a secret government code name, is a surveillance program (signals intelligence/SIGINT collection and analysis network) operated by the five signatory states to the UKUSA Security Agreement: Australia, Canada, New Zealand, the United Kingdom and the United States, also known as the Five Eyes.", "title": "" }, { "paragraph_id": 1, "text": "Created in the late 1960s to monitor the military and diplomatic communications of the Soviet Union and its Eastern Bloc allies during the Cold War, the ECHELON project became formally established in 1971. By the end of the 20th century, it had greatly expanded.", "title": "" }, { "paragraph_id": 2, "text": "The UKUSA intelligence community was assessed by the European Parliament (EP) in 2000 to include the signals intelligence agencies of each of the member states:", "title": "Organization" }, { "paragraph_id": 3, "text": "Former NSA analyst Perry Fellwock, under the pseudonym Winslow Peck, first blew the whistle on ECHELON to Ramparts in 1972, when he revealed the existence of a global network of listening posts and told of his experiences working there. He also revealed the existence of nuclear weapons in Israel in 1972, the widespread involvement of CIA and NSA personnel in drugs and human smuggling, and CIA operatives leading Nationalist Chinese (Taiwan) commandos in burning villages inside PRC borders.", "title": "Reporting and disclosures" }, { "paragraph_id": 4, "text": "In 1982, James Bamford, investigative journalist and author wrote The Puzzle Palace, an in-depth look inside the workings of the NSA, then a super-secret agency, and the massive eavesdropping operation under the codename \"SHAMROCK\". The NSA has used many codenames, and SHAMROCK was the codename used for ECHELON prior to 1975.", "title": "Reporting and disclosures" }, { "paragraph_id": 5, "text": "In 1988, Margaret Newsham, a Lockheed employee under NSA contract, disclosed the ECHELON surveillance system to members of Congress. Newsham told a member of the US Congress that the telephone calls of Strom Thurmond, a Republican US senator, were being collected by the NSA. Congressional investigators determined that \"targeting of US political figures would not occur by accident, but was designed into the system from the start.\"", "title": "Reporting and disclosures" }, { "paragraph_id": 6, "text": "Also in 1988, an article titled \"Somebody's Listening\", written by investigative journalist Duncan Campbell in the New Statesman, described the signals intelligence gathering activities of a program code-named \"ECHELON\". James Bamford describes the system as the software controlling the collection and distribution of civilian telecommunications traffic conveyed using communication satellites, with the collection being undertaken by ground stations located in the footprint of the downlink leg.", "title": "Reporting and disclosures" }, { "paragraph_id": 7, "text": "A detailed description of ECHELON was provided by the New Zealand journalist Nicky Hager in his 1996 book Secret Power: New Zealand's Role in the International Spy Network. Two years later, Hager's book was cited by the European Parliament in a report titled \"An Appraisal of the Technology of Political Control\" (PE 168.184).", "title": "Reporting and disclosures" }, { "paragraph_id": 8, "text": "In March 1999, for the first time in history, the Australian government admitted that news reports about the top secret UKUSA Agreement were true. 
Martin Brady, the director of Australia's Defence Signals Directorate (DSD, now known as Australian Signals Directorate, or ASD) told the Australian broadcasting channel Nine Network that the DSD \"does co-operate with counterpart signals intelligence organisations overseas under the UKUSA relationship.\"", "title": "Reporting and disclosures" }, { "paragraph_id": 9, "text": "In 2000, James Woolsey, the former Director of the US Central Intelligence Agency, confirmed that US intelligence uses interception systems and keyword searches to monitor European businesses.", "title": "Reporting and disclosures" }, { "paragraph_id": 10, "text": "Lawmakers in the United States feared that the ECHELON system could be used to monitor US citizens. According to The New York Times, the ECHELON system has been \"shrouded in such secrecy that its very existence has been difficult to prove.\" Critics said the ECHELON system emerged from the Cold War as a \"Big Brother without a cause\".", "title": "Reporting and disclosures" }, { "paragraph_id": 11, "text": "The program's capabilities and political implications were investigated by a committee of the European Parliament during 2000 and 2001 with a report published in 2001. In July 2000, the Temporary Committee on the ECHELON Interception System was established by the European parliament to investigate the surveillance network. It was chaired by the Portuguese politician Carlos Coelho, who was in charge of supervising investigations throughout 2000 and 2001.", "title": "Reporting and disclosures" }, { "paragraph_id": 12, "text": "In May 2001, as the committee finalised its report on the ECHELON system, a delegation travelled to Washington, D.C. to attend meetings with US officials from the following agencies and departments:", "title": "Reporting and disclosures" }, { "paragraph_id": 13, "text": "All meetings were cancelled by the US government and the committee was forced to end its trip prematurely. According to a BBC correspondent in May 2001, \"The US Government still refuses to admit that Echelon even exists.\"", "title": "Reporting and disclosures" }, { "paragraph_id": 14, "text": "In July 2001, the Committee released its final report. The EP report concluded that it seemed likely that ECHELON is a method of sorting captured signal traffic, rather than a comprehensive analysis tool. On 5 September 2001, the European parliament voted to accept the report.", "title": "Reporting and disclosures" }, { "paragraph_id": 15, "text": "The European Parliament stated in its report that the term ECHELON is used in a number of contexts, but that the evidence presented indicates that it was the name for a signals intelligence collection system. 
The report concludes that, on the basis of information presented, ECHELON was capable of interception and content inspection of telephone calls, fax, e-mail and other data traffic globally through the interception of communication bearers including satellite transmission, public switched telephone networks (which once carried most Internet traffic), and microwave links.", "title": "Reporting and disclosures" }, { "paragraph_id": 16, "text": "Two internal NSA newsletters from January 2011 and July 2012, published as part of Edward Snowden's leaks by the website The Intercept on 3 August 2015, for the first time confirmed that NSA used the code word ECHELON and provided some details about the scope of the program: ECHELON was part of an umbrella program with the code name FROSTING, which was established by the NSA in 1966 to collect and process data from communications satellites. FROSTING had two sub-programs:", "title": "Reporting and disclosures" }, { "paragraph_id": 17, "text": "The European Parliament's Temporary Committee on the ECHELON Interception System stated, \"It seems likely, in view of the evidence and the consistent pattern of statements from a very wide range of individuals and organisations, including American sources, that its name is in fact ECHELON, although this is a relatively minor detail\". The US intelligence community uses many code names (see, for example, CIA cryptonym).", "title": "Reporting and disclosures" }, { "paragraph_id": 18, "text": "Former NSA employee Margaret Newsham said that she worked on the configuration and installation of software that makes up the ECHELON system while employed at Lockheed Martin, from 1974 to 1984 in Sunnyvale, California, in the United States, and in Menwith Hill, England, in the UK. At that time, according to Newsham, the code name ECHELON was NSA's term for the computer network itself. Lockheed called it P415. The software programs were called SILKWORTH and SIRE. A satellite named VORTEX intercepted communications. An image available on the internet of a fragment apparently torn from a job description shows Echelon listed along with several other code names.", "title": "Reporting and disclosures" }, { "paragraph_id": 19, "text": "Britain's The Guardian newspaper summarized the capabilities of the ECHELON system as follows:", "title": "Reporting and disclosures" }, { "paragraph_id": 20, "text": "A global network of electronic spy stations that can eavesdrop on telephones, faxes and computers. It can even track bank accounts. This information is stored in Echelon computers, which can keep millions of records on individuals. Officially, however, Echelon doesn't exist.", "title": "Reporting and disclosures" }, { "paragraph_id": 21, "text": "Documents leaked by the former NSA contractor Edward Snowden revealed that the ECHELON system's collection of satellite data is also referred to as FORNSAT - an abbreviation for \"Foreign Satellite Collection\".", "title": "Reporting and disclosures" }, { "paragraph_id": 22, "text": "First revealed by the European Parliament report (p. 54 ff) and confirmed later by the Edward Snowden disclosures the following ground stations presently have, or have had, a role in intercepting transmissions from Satellite and other means of communication:", "title": "Intercept stations" }, { "paragraph_id": 23, "text": "The ability to intercept communications depends on the medium used, be it radio, satellite, microwave, cellular or fiber-optic. 
During World War II and through the 1950s, high-frequency (\"short-wave\") radio was widely used for military and diplomatic communication and could be intercepted at great distances. The rise of geostationary communications satellites in the 1960s presented new possibilities for intercepting international communications. In 1964, plans for the establishment of the ECHELON network took off after dozens of countries agreed to establish the International Telecommunications Satellite Organization (Intelsat), which would own and operate a global constellation of communications satellites.", "title": "History and context" }, { "paragraph_id": 24, "text": "In 1966, the first Intelsat satellite was launched into orbit. From 1970 to 1971, the Government Communications Headquarters (GCHQ) of Britain began to operate a secret signal station at Morwenstow, near Bude in Cornwall, England. The station intercepted satellite communications over the Atlantic and Indian Oceans. Soon afterwards, the US National Security Agency (NSA) built a second signal station at Yakima, near Seattle, for the interception of satellite communications over the Pacific Ocean. In 1981, GCHQ and the NSA started the construction of the first global wide area network (WAN). Soon after Australia, Canada, and New Zealand joined the ECHELON system. The report to the European Parliament of 2001 states: \"If UKUSA states operate listening stations in the relevant regions of the earth, in principle they can intercept all telephone, fax, and data traffic transmitted via such satellites.\"", "title": "History and context" }, { "paragraph_id": 25, "text": "Most reports on ECHELON focus on satellite interception. Testimony before the European Parliament indicated that separate but similar UKUSA systems are in place to monitor communication through undersea cables, microwave transmissions, and other lines. The report to the European Parliament points out that interception of private communications by foreign intelligence services is not necessarily limited to the US or British foreign intelligence services. The role of satellites in point-to-point voice and data communications has largely been supplanted by fiber optics. In 2006, 99% of the world's long-distance voice and data traffic was carried over optical-fiber. The proportion of international communications accounted for by satellite links is said to have decreased substantially to an amount between 0.4% and 5% in Central Europe. Even in less-developed parts of the world, communications satellites are used largely for point-to-multipoint applications, such as video. Thus, the majority of communications can no longer be intercepted by earth stations; they can only be collected by tapping cables and intercepting line-of-sight microwave signals, which is possible only to a limited extent.", "title": "History and context" }, { "paragraph_id": 26, "text": "British journalist Duncan Campbell and New Zealand journalist Nicky Hager said in the 1990s that the United States was exploiting ECHELON traffic for industrial espionage, rather than military and diplomatic purposes. 
Examples alleged by the journalists include the gear-less wind turbine technology designed by the German firm Enercon and the speech technology developed by the Belgian firm Lernout & Hauspie.", "title": "Concerns" }, { "paragraph_id": 27, "text": "In 2001, the Temporary Committee on the ECHELON Interception System recommended to the European Parliament that citizens of member states routinely use cryptography in their communications to protect their privacy, because economic espionage with ECHELON has been conducted by the US intelligence agencies.", "title": "Concerns" }, { "paragraph_id": 28, "text": "American author James Bamford provides an alternative view, highlighting that legislation prohibits the use of intercepted communications for commercial purposes, although he does not elaborate on how intercepted communications are used as part of an all-source intelligence process.", "title": "Concerns" }, { "paragraph_id": 29, "text": "In its report, the committee of the European Parliament stated categorically that the Echelon network was being used to intercept not only military communications, but also private and business ones. In its epigraph to the report, the parliamentary committee quoted Juvenal, \"Sed quis custodiet ipsos custodes.\" (\"But who will watch the watchers\"). James Bamford, in The Guardian in May 2001, warned that if Echelon were to continue unchecked, it could become a \"cyber secret police, without courts, juries, or the right to a defence\".", "title": "Concerns" }, { "paragraph_id": 30, "text": "Alleged examples of espionage conducted by the members of the \"Five Eyes\" include:", "title": "Concerns" }, { "paragraph_id": 31, "text": "The first United States satellite ground station for the ECHELON collection program was built in 1971 at a military firing and training center near Yakima, Washington. The facility, which was codenamed JACKKNIFE, was an investment of ca. 21.3 million dollars and had around 90 people. Satellite traffic was intercepted by a 30-meter single-dish antenna. The station became fully operational on 4 October 1974. It was connected with NSA headquarters at Fort Meade by a 75-baud secure Teletype orderwire channel.", "title": "Workings" }, { "paragraph_id": 32, "text": "In 1999 the Australian Senate Joint Standing Committee on Treaties was told by Professor Desmond Ball that the Pine Gap facility was used as a ground station for a satellite-based interception network. The satellites were said to be large radio dishes between 20 and 100 meters in diameter in geostationary orbits. The original purpose of the network was to monitor the telemetry from 1970s Soviet weapons, air defence and other radars' capabilities, satellites' ground stations' transmissions and ground-based microwave communications.", "title": "Workings" }, { "paragraph_id": 33, "text": "In 1999, Enercon, a German company and leading manufacturer of wind energy equipment, developed a breakthrough generator for wind turbines. After applying for a US patent, it had learned that Kenetech, an American rival, had submitted an almost identical patent application shortly before. By the statement of a former NSA employee, it was later claimed that the NSA had secretly intercepted and monitored Enercon's data communications and conference calls and passed information regarding the new generator to Kenetech. 
However, later German media reports contradicted this story, as it was revealed that the American patent in question was actually filed three years before the alleged wiretapping was said to have taken place. As German intelligence services are forbidden from engaging in industrial or economic espionage, German companies have complained that this leaves them defenceless against industrial espionage from the United States or Russia. According to Wolfgang Hoffmann, a former manager at Bayer, German intelligence services know which companies are being targeted by US intelligence agencies, but refuse to inform the companies involved.", "title": "Workings" } ]
ECHELON, originally a secret government code name, is a surveillance program operated by the five signatory states to the UKUSA Security Agreement: Australia, Canada, New Zealand, the United Kingdom and the United States, also known as the Five Eyes. Created in the late 1960s to monitor the military and diplomatic communications of the Soviet Union and its Eastern Bloc allies during the Cold War, the ECHELON project was formally established in 1971. By the end of the 20th century, it had greatly expanded.
2001-01-29T11:28:15Z
2023-11-07T17:53:14Z
[ "Template:Flagicon", "Template:Blockquote", "Template:Reflist", "Template:Espionage", "Template:Intelligence cycle management", "Template:Other uses", "Template:Global surveillance", "Template:Cite book", "Template:Short description", "Template:ISBN", "Template:Cite web", "Template:Cite news", "Template:Commons category", "Template:Signals intelligence agencies", "Template:Flag", "Template:-\"", "Template:Webarchive", "Template:Cite journal", "Template:Use dmy dates" ]
https://en.wikipedia.org/wiki/ECHELON
9,284
Equation
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation. Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables. The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length. An equation is written as two expressions, connected by an equals sign ("="). The expressions on the two sides of the equals sign are called the "left-hand side" and "right-hand side" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides. The most common type of equation is a polynomial equation (also commonly called an algebraic equation) in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms. For example, the equation $Ax^{2}+Bx+C-y=0$ has left-hand side $Ax^{2}+Bx+C-y$, which has four terms, and right-hand side $0$, consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables). An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount of grain must be removed from the other pan to keep the scale in balance. More generally, an equation remains in balance if the same operation is performed on both of its sides. Two equations or two systems of equations are equivalent if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to: If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. For example, the equation $x=1$ has the solution $x=1$. Raising both sides to the exponent of 2 (which means applying the function $f(s)=s^{2}$ to both sides of the equation) changes the equation to $x^{2}=1$, which not only has the previous solution but also introduces the extraneous solution $x=-1$.
Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation. The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination. An equation is analogous to a weighing scale, balance, or seesaw. Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation). In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same. Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters. An example of an equation involving x and y as unknowns and the parameter R is $x^{2}+y^{2}=R^{2}$. When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation of the circle of radius 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle. Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written $ax^{2}+bx+c=0$. The process of finding the solutions, or, in the case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions. A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. For example, the system has the unique solution x = −1, y = 1. An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable. In algebra, an example of an identity is the difference of two squares: $x^{2}-y^{2}=(x+y)(x-y)$, which is true for all x and y. Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are $\sin^{2}(\theta)+\cos^{2}(\theta)=1$ and $\sin(2\theta)=2\sin(\theta)\cos(\theta)$, which are both true for all values of θ. For example, to solve for the value of θ that satisfies the equation: where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give: yielding the following solution for θ: Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.
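The extraneous-solution behaviour described above is easy to reproduce with a computer algebra system. The following minimal sketch uses Python's sympy library; the tool choice is illustrative, not something the article prescribes:

    # Squaring both sides of x = 1 introduces the extraneous solution x = -1.
    import sympy as sp

    x = sp.symbols('x')
    original = sp.Eq(x, 1)
    print(sp.solve(original, x))          # [1]

    # Applying f(s) = s**2 to both sides yields x**2 = 1 ...
    squared = sp.Eq(x**2, 1)
    print(sp.solve(squared, x))           # [-1, 1]

    # ... so every candidate must be checked against the original equation.
    print([s for s in sp.solve(squared, x) if original.subs(x, s)])   # [1]

Checking candidates back against the original equation is the standard safeguard whenever a non-injective function has been applied to both sides.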
Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution and, if solutions exist, to count their number. In general, an algebraic equation or polynomial equation is an equation of the form $P=Q$, where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.). For example, is a univariate algebraic (polynomial) equation with integer coefficients and is a multivariate polynomial equation over the rational numbers. Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates. A large amount of research has been devoted to efficiently computing accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations). A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example, is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by since it makes all three equations valid. The word "system" indicates that the equations are to be considered collectively, rather than individually. In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations.
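As a concrete companion to the linear-system discussion above, here is a minimal numerical sketch using NumPy. The particular 3 × 3 system is a stand-in chosen for illustration, since the article's own example system did not survive extraction:

    # Solve the system  3x + 2y - z = 1,  2x - 2y + 4z = -2,  -x + y/2 - z = 0.
    import numpy as np

    A = np.array([[ 3.0,  2.0, -1.0],
                  [ 2.0, -2.0,  4.0],
                  [-1.0,  0.5, -1.0]])   # coefficient matrix
    b = np.array([1.0, -2.0, 0.0])       # right-hand sides

    solution = np.linalg.solve(A, b)     # LU factorization, i.e. Gaussian elimination
    print(solution)                      # [ 1. -2. -2.]

    # A solution must satisfy all equations simultaneously.
    print(np.allclose(A @ solution, b))  # True

np.linalg.solve relies on the elimination-style factorizations mentioned above, which is why it expects a square, non-singular coefficient matrix; rectangular or singular systems call for least-squares or symbolic methods instead.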
A plane in three-dimensional space can be expressed as the solution set of an equation of the form $ax+by+cz+d=0$, where $a,b,c$ and $d$ are real numbers and $x,y,z$ are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values $a,b,c$ are the coordinates of a vector perpendicular to the plane defined by the equation. A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in $\mathbb{R}^{2}$ or as the solution set of two linear equations with values in $\mathbb{R}^{3}$. A conic section is the intersection of a cone with equation $x^{2}+y^{2}=z^{2}$ and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the foci of a conic. The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians. Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra. In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics. One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines). The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation $x^{2}+y^{2}=4$. A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example, $x=\cos t$ and $y=\sin t$ are parametric equations for the unit circle, where t is the parameter. Together, these equations are called a parametric representation of the curve.
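The implicit and parametric descriptions of the circle given above can be checked against each other with a few lines of plain Python; the sampling grid below is an arbitrary illustrative choice:

    # Points from the parametric form x = cos(t), y = sin(t) satisfy x**2 + y**2 = 1.
    import math

    for k in range(8):                       # sample 8 parameter values around the circle
        t = 2 * math.pi * k / 8
        x, y = math.cos(t), math.sin(t)      # parametric representation
        assert math.isclose(x**2 + y**2, 1)  # implicit (Cartesian) equation

    print("all sampled parametric points lie on the unit circle")

Scaling both coordinate functions by 2 (x = 2cos t, y = 2sin t) and testing against x² + y² = 4 recovers the radius-2 Cartesian example in the text.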
The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.). A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of a linear Diophantine equation is $ax+by=c$, where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns. Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it. The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis. An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental. Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. More advanced questions involve the topology of the curve and relations between the curves given by different equations. A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.
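Because the simplest way to see how a differential equation ties a quantity to its rate of change is to integrate one numerically, here is a minimal forward-Euler sketch in Python. The decay equation dy/dt = -y and the step count are illustrative choices, not taken from the article:

    # Forward Euler for dy/dt = -y with y(0) = 1; the exact solution is y(t) = exp(-t).
    import math

    def euler(f, y0, t_end, steps):
        """Approximate y(t_end) for dy/dt = f(t, y), starting from y(0) = y0."""
        h = t_end / steps
        t, y = 0.0, y0
        for _ in range(steps):
            y += h * f(t, y)    # step along the slope the equation prescribes
            t += h
        return y

    approx = euler(lambda t, y: -y, 1.0, 1.0, 1000)
    print(approx, math.exp(-1.0))   # ~0.367695 vs 0.367879...

As the following paragraphs note, such numerical methods only approximate the solution: halving the step size roughly halves forward Euler's error.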
In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions — the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form. If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy. An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term "ordinary" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable. Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions can be obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form; instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions. A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model. PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations. Equations can be classified according to the types of operations and quantities involved. Important types include:
[ { "paragraph_id": 0, "text": "In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation.", "title": "" }, { "paragraph_id": 1, "text": "Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables.", "title": "" }, { "paragraph_id": 2, "text": "The \"=\" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.", "title": "" }, { "paragraph_id": 3, "text": "An equation is written as two expressions, connected by an equals sign (\"=\"). The expressions on the two sides of the equals sign are called the \"left-hand side\" and \"right-hand side\" of the equation. Very often the right-hand side of an equation is assumed to be zero. This does not reduce the generality, as this can be realized by subtracting the right-hand side from both sides.", "title": "Description" }, { "paragraph_id": 4, "text": "The most common type of equation is a polynomial equation (commonly called also an algebraic equation) in which the two sides are polynomials. The sides of a polynomial equation contain one or more terms. For example, the equation", "title": "Description" }, { "paragraph_id": 5, "text": "has left-hand side A x 2 + B x + C − y {\\displaystyle Ax^{2}+Bx+C-y} , which has four terms, and right-hand side 0 {\\displaystyle 0} , consisting of just one term. The names of the variables suggest that x and y are unknowns, and that A, B, and C are parameters, but this is normally fixed by the context (in some contexts, y may be a parameter, or A, B, and C may be ordinary variables).", "title": "Description" }, { "paragraph_id": 6, "text": "An equation is analogous to a scale into which weights are placed. When equal weights of something (e.g., grain) are placed into the two pans, the two weights cause the scale to be in balance and are said to be equal. If a quantity of grain is removed from one pan of the balance, an equal amount of grain must be removed from the other pan to keep the scale in balance. More generally, an equation remains in balance if the same operation is performed on its both sides.", "title": "Description" }, { "paragraph_id": 7, "text": "Two equations or two systems of equations are equivalent, if they have the same set of solutions. The following operations transform an equation or a system of equations into an equivalent one – provided that the operations are meaningful for the expressions they are applied to:", "title": "Properties" }, { "paragraph_id": 8, "text": "If some function is applied to both sides of an equation, the resulting equation has the solutions of the initial equation among its solutions, but may have further solutions called extraneous solutions. 
For example, the equation x = 1 {\\displaystyle x=1} has the solution x = 1. {\\displaystyle x=1.} Raising both sides to the exponent of 2 (which means applying the function f ( s ) = s 2 {\\displaystyle f(s)=s^{2}} to both sides of the equation) changes the equation to x 2 = 1 {\\displaystyle x^{2}=1} , which not only has the previous solution but also introduces the extraneous solution, x = − 1. {\\displaystyle x=-1.} Moreover, if the function is not defined at some values (such as 1/x, which is not defined for x = 0), solutions existing at those values may be lost. Thus, caution must be exercised when applying such a transformation to an equation.", "title": "Properties" }, { "paragraph_id": 9, "text": "The above transformations are the basis of most elementary methods for equation solving, as well as some less elementary ones, like Gaussian elimination.", "title": "Properties" }, { "paragraph_id": 10, "text": "An equation is analogous to a weighing scale, balance, or seesaw.", "title": "Examples" }, { "paragraph_id": 11, "text": "Each side of the equation corresponds to one side of the balance. Different quantities can be placed on each side: if the weights on the two sides are equal, the scale balances, and in analogy, the equality that represents the balance is also balanced (if not, then the lack of balance corresponds to an inequality represented by an inequation).", "title": "Examples" }, { "paragraph_id": 12, "text": "In the illustration, x, y and z are all different quantities (in this case real numbers) represented as circular weights, and each of x, y, and z has a different weight. Addition corresponds to adding weight, while subtraction corresponds to removing weight from what is already there. When equality holds, the total weight on each side is the same.", "title": "Examples" }, { "paragraph_id": 13, "text": "Equations often contain terms other than the unknowns. These other terms, which are assumed to be known, are usually called constants, coefficients or parameters.", "title": "Examples" }, { "paragraph_id": 14, "text": "An example of an equation involving x and y as unknowns and the parameter R is", "title": "Examples" }, { "paragraph_id": 15, "text": "When R is chosen to have the value of 2 (R = 2), this equation would be recognized in Cartesian coordinates as the equation for the circle of radius of 2 around the origin. Hence, the equation with R unspecified is the general equation for the circle.", "title": "Examples" }, { "paragraph_id": 16, "text": "Usually, the unknowns are denoted by letters at the end of the alphabet, x, y, z, w, ..., while coefficients (parameters) are denoted by letters at the beginning, a, b, c, d, ... . For example, the general quadratic equation is usually written ax + bx + c = 0.", "title": "Examples" }, { "paragraph_id": 17, "text": "The process of finding the solutions, or, in case of parameters, expressing the unknowns in terms of the parameters, is called solving the equation. Such expressions of the solutions in terms of the parameters are also called solutions.", "title": "Examples" }, { "paragraph_id": 18, "text": "A system of equations is a set of simultaneous equations, usually in several unknowns for which the common solutions are sought. Thus, a solution to the system is a set of values for each of the unknowns, which together form a solution to each equation in the system. 
For example, the system", "title": "Examples" }, { "paragraph_id": 19, "text": "has the unique solution x = −1, y = 1.", "title": "Examples" }, { "paragraph_id": 20, "text": "An identity is an equation that is true for all possible values of the variable(s) it contains. Many identities are known in algebra and calculus. In the process of solving an equation, an identity is often used to simplify an equation, making it more easily solvable.", "title": "Examples" }, { "paragraph_id": 21, "text": "In algebra, an example of an identity is the difference of two squares:", "title": "Examples" }, { "paragraph_id": 22, "text": "which is true for all x and y.", "title": "Examples" }, { "paragraph_id": 23, "text": "Trigonometry is an area where many identities exist; these are useful in manipulating or solving trigonometric equations. Two of many that involve the sine and cosine functions are:", "title": "Examples" }, { "paragraph_id": 24, "text": "and", "title": "Examples" }, { "paragraph_id": 25, "text": "which are both true for all values of θ.", "title": "Examples" }, { "paragraph_id": 26, "text": "For example, to solve for the value of θ that satisfies the equation:", "title": "Examples" }, { "paragraph_id": 27, "text": "where θ is limited to between 0 and 45 degrees, one may use the above identity for the product to give:", "title": "Examples" }, { "paragraph_id": 28, "text": "yielding the following solution for θ:", "title": "Examples" }, { "paragraph_id": 29, "text": "Since the sine function is a periodic function, there are infinitely many solutions if there are no restrictions on θ. In this example, restricting θ to be between 0 and 45 degrees would restrict the solution to only one number.", "title": "Examples" }, { "paragraph_id": 30, "text": "Algebra studies two main families of equations: polynomial equations and, among them, the special case of linear equations. When there is only one variable, polynomial equations have the form P(x) = 0, where P is a polynomial, and linear equations have the form ax + b = 0, where a and b are parameters. To solve equations from either family, one uses algorithmic or geometric techniques that originate from linear algebra or mathematical analysis. Algebra also studies Diophantine equations where the coefficients and solutions are integers. The techniques used are different and come from number theory. These equations are difficult in general; one often searches just to find the existence or absence of a solution, and, if they exist, to count the number of solutions.", "title": "Algebra" }, { "paragraph_id": 31, "text": "In general, an algebraic equation or polynomial equation is an equation of the form", "title": "Algebra" }, { "paragraph_id": 32, "text": "where P and Q are polynomials with coefficients in some field (e.g., rational numbers, real numbers, complex numbers). An algebraic equation is univariate if it involves only one variable. 
On the other hand, a polynomial equation may involve several variables, in which case it is called multivariate (multiple variables, x, y, z, etc.).", "title": "Algebra" }, { "paragraph_id": 33, "text": "For example,", "title": "Algebra" }, { "paragraph_id": 34, "text": "is a univariate algebraic (polynomial) equation with integer coefficients and", "title": "Algebra" }, { "paragraph_id": 35, "text": "is a multivariate polynomial equation over the rational numbers.", "title": "Algebra" }, { "paragraph_id": 36, "text": "Some polynomial equations with rational coefficients have a solution that is an algebraic expression, with a finite number of operations involving just those coefficients (i.e., can be solved algebraically). This can be done for all such equations of degree one, two, three, or four; but equations of degree five or more cannot always be solved in this way, as the Abel–Ruffini theorem demonstrates.", "title": "Algebra" }, { "paragraph_id": 37, "text": "A large amount of research has been devoted to compute efficiently accurate approximations of the real or complex solutions of a univariate algebraic equation (see Root finding of polynomials) and of the common solutions of several multivariate polynomial equations (see System of polynomial equations).", "title": "Algebra" }, { "paragraph_id": 38, "text": "A system of linear equations (or linear system) is a collection of linear equations involving one or more variables. For example,", "title": "Algebra" }, { "paragraph_id": 39, "text": "is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by", "title": "Algebra" }, { "paragraph_id": 40, "text": "since it makes all three equations valid. The word \"system\" indicates that the equations are to be considered collectively, rather than individually.", "title": "Algebra" }, { "paragraph_id": 41, "text": "In mathematics, the theory of linear systems is a fundamental part of linear algebra, a subject which is used in many parts of modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in physics, engineering, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.", "title": "Algebra" }, { "paragraph_id": 42, "text": "In Euclidean geometry, it is possible to associate a set of coordinates to each point in space, for example by an orthogonal grid. This method allows one to characterize geometric figures by equations. A plane in three-dimensional space can be expressed as the solution set of an equation of the form a x + b y + c z + d = 0 {\\displaystyle ax+by+cz+d=0} , where a , b , c {\\displaystyle a,b,c} and d {\\displaystyle d} are real numbers and x , y , z {\\displaystyle x,y,z} are the unknowns that correspond to the coordinates of a point in the system given by the orthogonal grid. The values a , b , c {\\displaystyle a,b,c} are the coordinates of a vector perpendicular to the plane defined by the equation. 
A line is expressed as the intersection of two planes, that is as the solution set of a single linear equation with values in R 2 {\\displaystyle \\mathbb {R} ^{2}} or as the solution set of two linear equations with values in R 3 . {\\displaystyle \\mathbb {R} ^{3}.}", "title": "Geometry" }, { "paragraph_id": 43, "text": "A conic section is the intersection of a cone with equation x 2 + y 2 = z 2 {\\displaystyle x^{2}+y^{2}=z^{2}} and a plane. In other words, in space, all conics are defined as the solution set of an equation of a plane and of the equation of a cone just given. This formalism allows one to determine the positions and the properties of the focuses of a conic.", "title": "Geometry" }, { "paragraph_id": 44, "text": "The use of equations allows one to call on a large area of mathematics to solve geometric questions. The Cartesian coordinate system transforms a geometric problem into an analysis problem, once the figures are transformed into equations; thus the name analytic geometry. This point of view, outlined by Descartes, enriches and modifies the type of geometry conceived of by the ancient Greek mathematicians.", "title": "Geometry" }, { "paragraph_id": 45, "text": "Currently, analytic geometry designates an active branch of mathematics. Although it still uses equations to characterize figures, it also uses other sophisticated techniques such as functional analysis and linear algebra.", "title": "Geometry" }, { "paragraph_id": 46, "text": "In Cartesian geometry, equations are used to describe geometric figures. As the equations that are considered, such as implicit equations or parametric equations, have infinitely many solutions, the objective is now different: instead of giving the solutions explicitly or counting them, which is impossible, one uses equations for studying properties of figures. This is the starting idea of algebraic geometry, an important area of mathematics.", "title": "Geometry" }, { "paragraph_id": 47, "text": "One can use the same principle to specify the position of any point in three-dimensional space by the use of three Cartesian coordinates, which are the signed distances to three mutually perpendicular planes (or, equivalently, by its perpendicular projection onto three mutually perpendicular lines).", "title": "Geometry" }, { "paragraph_id": 48, "text": "The invention of Cartesian coordinates in the 17th century by René Descartes revolutionized mathematics by providing the first systematic link between Euclidean geometry and algebra. Using the Cartesian coordinate system, geometric shapes (such as curves) can be described by Cartesian equations: algebraic equations involving the coordinates of the points lying on the shape. For example, a circle of radius 2 in a plane, centered on a particular point called the origin, may be described as the set of all points whose coordinates x and y satisfy the equation x + y = 4.", "title": "Geometry" }, { "paragraph_id": 49, "text": "A parametric equation for a curve expresses the coordinates of the points of the curve as functions of a variable, called a parameter. For example,", "title": "Geometry" }, { "paragraph_id": 50, "text": "are parametric equations for the unit circle, where t is the parameter. 
Together, these equations are called a parametric representation of the curve.", "title": "Geometry" }, { "paragraph_id": 51, "text": "The notion of parametric equation has been generalized to surfaces, manifolds and algebraic varieties of higher dimension, with the number of parameters being equal to the dimension of the manifold or variety, and the number of equations being equal to the dimension of the space in which the manifold or variety is considered (for curves the dimension is one and one parameter is used, for surfaces dimension two and two parameters, etc.).", "title": "Geometry" }, { "paragraph_id": 52, "text": "A Diophantine equation is a polynomial equation in two or more unknowns for which only the integer solutions are sought (an integer solution is a solution such that all the unknowns take integer values). A linear Diophantine equation is an equation between two sums of monomials of degree zero or one. An example of linear Diophantine equation is ax + by = c where a, b, and c are constants. An exponential Diophantine equation is one for which exponents of the terms of the equation can be unknowns.", "title": "Number theory" }, { "paragraph_id": 53, "text": "Diophantine problems have fewer equations than unknown variables and involve finding integers that work correctly for all equations. In more technical language, they define an algebraic curve, algebraic surface, or more general object, and ask about the lattice points on it.", "title": "Number theory" }, { "paragraph_id": 54, "text": "The word Diophantine refers to the Hellenistic mathematician of the 3rd century, Diophantus of Alexandria, who made a study of such equations and was one of the first mathematicians to introduce symbolism into algebra. The mathematical study of Diophantine problems that Diophantus initiated is now called Diophantine analysis.", "title": "Number theory" }, { "paragraph_id": 55, "text": "An algebraic number is a number that is a solution of a non-zero polynomial equation in one variable with rational coefficients (or equivalently — by clearing denominators — with integer coefficients). Numbers such as π that are not algebraic are said to be transcendental. Almost all real and complex numbers are transcendental.", "title": "Number theory" }, { "paragraph_id": 56, "text": "Algebraic geometry is a branch of mathematics, classically studying solutions of polynomial equations. Modern algebraic geometry is based on more abstract techniques of abstract algebra, especially commutative algebra, with the language and the problems of geometry.", "title": "Number theory" }, { "paragraph_id": 57, "text": "The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are: plane algebraic curves, which include lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves and quartic curves like lemniscates, and Cassini ovals. A point of the plane belongs to an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of the points of special interest like the singular points, the inflection points and the points at infinity. 
More advanced questions involve the topology of the curve and relations between the curves given by different equations.", "title": "Number theory" }, { "paragraph_id": 58, "text": "A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two. They are solved by finding an expression for the function that does not involve derivatives. Differential equations are used to model processes that involve the rates of change of the variable, and are used in areas such as physics, chemistry, biology, and economics.", "title": "Differential equations" }, { "paragraph_id": 59, "text": "In pure mathematics, differential equations are studied from several different perspectives, mostly concerned with their solutions: the set of functions that satisfy the equation. Only the simplest differential equations are solvable by explicit formulas; however, some properties of solutions of a given differential equation may be determined without finding their exact form.", "title": "Differential equations" }, { "paragraph_id": 60, "text": "If a self-contained formula for the solution is not available, the solution may be numerically approximated using computers. The theory of dynamical systems puts emphasis on qualitative analysis of systems described by differential equations, while many numerical methods have been developed to determine solutions with a given degree of accuracy.", "title": "Differential equations" }, { "paragraph_id": 61, "text": "An ordinary differential equation or ODE is an equation containing a function of one independent variable and its derivatives. The term \"ordinary\" is used in contrast with the term partial differential equation, which may be with respect to more than one independent variable.", "title": "Differential equations" }, { "paragraph_id": 62, "text": "Linear differential equations, which have solutions that can be added and multiplied by coefficients, are well-defined and understood, and exact closed-form solutions can be obtained. By contrast, ODEs that lack additive solutions are nonlinear, and solving them is far more intricate, as one can rarely represent them by elementary functions in closed form; instead, exact and analytic solutions of ODEs are in series or integral form. Graphical and numerical methods, applied by hand or by computer, may approximate solutions of ODEs and perhaps yield useful information, often sufficing in the absence of exact, analytic solutions.", "title": "Differential equations" }, { "paragraph_id": 63, "text": "A partial differential equation (PDE) is a differential equation that contains unknown multivariable functions and their partial derivatives. (This is in contrast to ordinary differential equations, which deal with functions of a single variable and their derivatives.) PDEs are used to formulate problems involving functions of several variables, and are either solved by hand, or used to create a relevant computer model.", "title": "Differential equations" }, { "paragraph_id": 64, "text": "PDEs can be used to describe a wide variety of phenomena such as sound, heat, electrostatics, electrodynamics, fluid flow, elasticity, or quantum mechanics. These seemingly distinct physical phenomena can be formalised similarly in terms of PDEs. 
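As a small illustration of the numerical approximation mentioned above, here is a minimal Python sketch of the forward Euler method, one standard scheme among many (the function name euler_solve is an assumption), applied to the test equation dy/dt = -2y, whose exact solution y(t) = e^(-2t) is known and can be used to check the error.

import math

def euler_solve(f, y0, t0, t1, n):
    # Approximate y(t1) for the ODE y'(t) = f(t, y) with y(t0) = y0,
    # taking n forward Euler steps of size h = (t1 - t0) / n.
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# Test equation y' = -2y, y(0) = 1, exact solution y(t) = exp(-2t).
approx = euler_solve(lambda t, y: -2.0 * y, 1.0, 0.0, 1.0, 1000)
print(approx, math.exp(-2.0))  # ~0.1351 vs ~0.1353; the error shrinks as n grows

This mirrors the qualitative point in the text: even when no closed-form expression is sought, a solution can be computed to a chosen degree of accuracy by refining the step size.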
Just as ordinary differential equations often model one-dimensional dynamical systems, partial differential equations often model multidimensional systems. PDEs find their generalisation in stochastic partial differential equations.", "title": "Differential equations" }, { "paragraph_id": 65, "text": "Equations can be classified according to the types of operations and quantities involved. Important types include:", "title": "Types of equations" } ]
In mathematics, an equation is a mathematical formula that expresses the equality of two expressions, by connecting them with the equals sign =. The word equation and its cognates in other languages may have subtly different meanings; for example, in French an équation is defined as containing one or more variables, while in English, any well-formed formula consisting of two expressions related with an equals sign is an equation. Solving an equation containing variables consists of determining which values of the variables make the equality true. The variables for which the equation has to be solved are also called unknowns, and the values of the unknowns that satisfy the equality are called solutions of the equation. There are two kinds of equations: identities and conditional equations. An identity is true for all values of the variables. A conditional equation is only true for particular values of the variables. The "=" symbol, which appears in every equation, was invented in 1557 by Robert Recorde, who considered that nothing could be more equal than parallel straight lines with the same length.
2001-03-24T15:13:11Z
2023-11-15T17:05:59Z
[ "Template:Short description", "Template:Other uses", "Template:Math", "Template:Div col", "Template:Div col end", "Template:Reflist", "Template:Lang", "Template:Nowrap", "Template:Pi", "Template:Cite web", "Template:Cite book", "Template:Ill", "Template:P.", "Template:Authority control", "Template:See also", "Template:Not a typo", "Template:Expand French", "Template:Char", "Template:Mvar", "Template:Main", "Template:Efn", "Template:Notelist" ]
https://en.wikipedia.org/wiki/Equation
9,285
Ethical naturalism
Ethical naturalism (also called moral naturalism or naturalistic cognitivistic definism) is the meta-ethical view which claims that: ethical sentences express propositions; some such propositions are true; those propositions are made true by objective features of the world; and these moral features of the world are reducible to some set of non-moral features. The versions of ethical naturalism which have received the most sustained philosophical interest, for example, Cornell realism, differ from the position that "the way things are is always the way they ought to be", which few ethical naturalists hold. Ethical naturalism does, however, reject the fact-value distinction: it suggests that inquiry into the natural world can increase our moral knowledge in just the same way it increases our scientific knowledge. Indeed, proponents of ethical naturalism have argued that humanity needs to invest in the science of morality, a broad and loosely defined field that uses evidence from biology, primatology, anthropology, psychology, neuroscience, and other areas to classify and describe moral behavior. Ethical naturalism encompasses any reduction of ethical properties, such as 'goodness', to non-ethical properties; there are many different examples of such reductions, and thus many different varieties of ethical naturalism. Hedonism, for example, is the view that goodness is ultimately just pleasure. Ethical naturalism has been criticized most prominently by ethical non-naturalist G. E. Moore, who formulated the open-question argument. Garner and Rosen say that a common definition of "natural property" is one "which can be discovered by sense observation or experience, experiment, or through any of the available means of science." They also say that a good definition of "natural property" is problematic but that "it is only in criticism of naturalism, or in an attempt to distinguish between naturalistic and nonnaturalistic definist theories, that such a concept is needed." R. M. Hare also criticised ethical naturalism because of what he considered its fallacious definition of the terms 'good' or 'right', saying that value-terms being part of our prescriptive moral language are not reducible to descriptive terms: "Value-terms have a special function in language, that of commending; and so they plainly cannot be defined in terms of other words which themselves do not perform this function". Moral nihilists maintain that there are no such entities as objective values or objective moral facts. Proponents of moral science like Ronald A. Lindsay have counter-argued that their way of understanding "morality" as a practical enterprise is the way we ought to have understood it in the first place. He holds the position that the alternative seems to be the elaborate philosophical reduction of the word "moral" into a vacuous, useless term. Lindsay adds that it is important to reclaim the specific word "morality" because of the connotations it holds with many individuals. Author Sam Harris has argued that we overestimate the relevance of many arguments against the science of morality, arguments he believes scientists happily and rightly disregard in other domains of science like physics. For example, scientists may find themselves attempting to argue against philosophical skeptics, when Harris says they should be practically asking – as they would in any other domain – "why would we listen to a solipsist in the first place?" This, Harris contends, is part of what it means to practice a science of morality. In modern times, many thinkers discussing the fact–value distinction and the is–ought problem have settled on the idea that one cannot derive ought from is. 
Conversely, Harris maintains that the fact-value distinction is a confusion, proposing that values are really a certain kind of fact. Specifically, Harris suggests that values amount to empirical statements about "the flourishing of conscious creatures in a society". He argues that there are objective answers to moral questions, even if some are difficult or impossible to possess in practice. In this way, he says, science can tell us what to value. Harris adds that we do not demand absolute certainty from predictions in physics, so we should not demand that of a science studying morality (see The Moral Landscape). Physicist Sean Carroll believes that conceiving of morality as a science could be a case of scientific imperialism and insists that what is "good for conscious creatures" is not an adequate working definition of "moral". In opposition, Vice President at the Center for Inquiry, John Shook, claims that this working definition is more than adequate for science at present, and that disagreement should not immobilize the scientific study of ethics.
[ { "paragraph_id": 0, "text": "Ethical naturalism (also called moral naturalism or naturalistic cognitivistic definism) is the meta-ethical view which claims that:", "title": "" }, { "paragraph_id": 1, "text": "The versions of ethical naturalism which have received the most sustained philosophical interest, for example, Cornell realism, differ from the position that \"the way things are is always the way they ought to be\", which few ethical naturalists hold. Ethical naturalism does, however, reject the fact-value distinction: it suggests that inquiry into the natural world can increase our moral knowledge in just the same way it increases our scientific knowledge. Indeed, proponents of ethical naturalism have argued that humanity needs to invest in the science of morality, a broad and loosely defined field that uses evidence from biology, primatology, anthropology, psychology, neuroscience, and other areas to classify and describe moral behavior.", "title": "Overview" }, { "paragraph_id": 2, "text": "Ethical naturalism encompasses any reduction of ethical properties, such as 'goodness', to non-ethical properties; there are many different examples of such reductions, and thus many different varieties of ethical naturalism. Hedonism, for example, is the view that goodness is ultimately just pleasure.", "title": "Overview" }, { "paragraph_id": 3, "text": "Ethical naturalism has been criticized most prominently by ethical non-naturalist G. E. Moore, who formulated the open-question argument. Garner and Rosen say that a common definition of \"natural property\" is one \"which can be discovered by sense observation or experience, experiment, or through any of the available means of science.\" They also say that a good definition of \"natural property\" is problematic but that \"it is only in criticism of naturalism, or in an attempt to distinguish between naturalistic and nonnaturalistic definist theories, that such a concept is needed.\" R. M. Hare also criticised ethical naturalism because of what he considered its fallacious definition of the terms 'good' or 'right', saying that value-terms being part of our prescriptive moral language are not reducible to descriptive terms: \"Value-terms have a special function in language, that of commending; and so they plainly cannot be defined in terms of other words which themselves do not perform this function\".", "title": "Criticisms" }, { "paragraph_id": 4, "text": "Moral nihilists maintain that there are no such entities as objective values or objective moral facts. Proponents of moral science like Ronald A. Lindsay have counter-argued that their way of understanding \"morality\" as a practical enterprise is the way we ought to have understood it in the first place. He holds the position that the alternative seems to be the elaborate philosophical reduction of the word \"moral\" into a vacuous, useless term. Lindsay adds that it is important to reclaim the specific word \"morality\" because of the connotations it holds with many individuals.", "title": "Criticisms" }, { "paragraph_id": 5, "text": "Author Sam Harris has argued that we overestimate the relevance of many arguments against the science of morality, arguments he believes scientists happily and rightly disregard in other domains of science like physics. 
For example, scientists may find themselves attempting to argue against philosophical skeptics, when Harris says they should be practically asking – as they would in any other domain – \"why would we listen to a solipsist in the first place?\" This, Harris contends, is part of what it means to practice a science of morality.", "title": "Morality as a science" }, { "paragraph_id": 6, "text": "In modern times, many thinkers discussing the fact–value distinction and the is–ought problem have settled on the idea that one cannot derive ought from is. Conversely, Harris maintains that the fact-value distinction is a confusion, proposing that values are really a certain kind of fact. Specifically, Harris suggests that values amount to empirical statements about \"the flourishing of conscious creatures in a society\". He argues that there are objective answers to moral questions, even if some are difficult or impossible to possess in practice. In this way, he says, science can tell us what to value. Harris adds that we do not demand absolute certainty from predictions in physics, so we should not demand that of a science studying morality (see The Moral Landscape).", "title": "Morality as a science" }, { "paragraph_id": 7, "text": "Physicist Sean Carroll believes that conceiving of morality as a science could be a case of scientific imperialism and insists that what is \"good for conscious creatures\" is not an adequate working definition of \"moral\". In opposition, Vice President at the Center for Inquiry, John Shook, claims that this working definition is more than adequate for science at present, and that disagreement should not immobilize the scientific study of ethics.", "title": "Morality as a science" } ]
Ethical naturalism is the meta-ethical view which claims that: Ethical sentences express propositions. Some such propositions are true. Those propositions are made true by objective features of the world. These moral features of the world are reducible to some set of non-moral features.
2001-02-20T20:22:32Z
2023-12-14T12:02:43Z
[ "Template:Cite web", "Template:Cite book", "Template:Cite SEP", "Template:Short description", "Template:Main", "Template:Reflist", "Template:Harvnb", "Template:Essay", "Template:According to whom?", "Template:Cite news", "Template:Ethics" ]
https://en.wikipedia.org/wiki/Ethical_naturalism
9,286
Ethical non-naturalism
Ethical non-naturalism (or moral non-naturalism) is the meta-ethical view which claims that: ethical sentences express propositions; some such propositions are true; those propositions are made true by objective features of the world, independent of human opinion; and these moral features of the world are not reducible to any set of non-moral features. This makes ethical non-naturalism a non-definist form of moral realism, which is in turn a form of cognitivism. Ethical non-naturalism stands in opposition to ethical naturalism, which claims that moral terms and properties are reducible to non-moral terms and properties, as well as to all forms of moral anti-realism, including ethical subjectivism (which denies that moral propositions refer to objective facts), error theory (which denies that any moral propositions are true), and non-cognitivism (which denies that moral sentences express propositions at all). According to G. E. Moore, "Goodness is a simple, undefinable, non-natural property." To call goodness "non-natural" does not mean that it is supernatural or divine. It does mean, however, that goodness cannot be reduced to natural properties such as needs, wants or pleasures. Moore also stated that a reduction of ethical properties to a divine command would be the same as stating their naturalness. This would be an example of what he referred to as "the naturalistic fallacy." Moore claimed that goodness is "indefinable", i.e., it cannot be defined in any other terms. This is the central claim of non-naturalism. Thus, the meaning of sentences containing the word "good" cannot be explained entirely in terms of sentences not containing the word "good." One cannot substitute words referring to pleasure, needs or anything else in place of "good." Some properties, such as hardness, roundness and dampness, are clearly natural properties. We encounter them in the real world and can perceive them. On the other hand, other properties, such as being good and being right, are not so obvious. A great novel is considered to be a good thing; goodness may be said to be a property of that novel. Paying one's debts and telling the truth are generally held to be right things to do; rightness may be said to be a property of certain human actions. However, these two types of property are quite different. Those natural properties, such as hardness and roundness, can be perceived and encountered in the real world. On the other hand, it is not immediately clear how to physically see, touch or measure the goodness of a novel or the rightness of an action. Moore did not consider goodness and rightness to be natural properties, i.e., they cannot be defined in terms of any natural properties. How, then, can we know that anything is good and how can we distinguish good from bad? Moral epistemology, the part of epistemology (and/or ethics) that studies how we know moral facts and how moral beliefs are justified, has proposed an answer. British epistemologists, following Moore, suggested that humans have a special faculty, a faculty of moral intuition, which tells us what is good and bad, right and wrong. Ethical intuitionists assert that, if we see a good person or a right action, and our faculty of moral intuition is sufficiently developed and unimpaired, we simply intuit that the person is good or that the action is right. Moral intuition is supposed to be a mental process different from other, more familiar faculties like sense-perception; moral judgments are its outputs. When someone judges something to be good, or some action to be right, then the person is using the faculty of moral intuition. The faculty is attuned to those non-natural properties. Perhaps the best ordinary notion that approximates moral intuition would be the idea of a conscience. 
Moore also introduced what is called the open-question argument, a position he later rejected. Suppose a definition of "good" is "pleasure-causing." In other words, if something is good, it causes pleasure; if it causes pleasure, then it is, by definition, good. Moore asserted, however, that we could always ask, "But are pleasure-causing things good?" This would always be an open question. There is no foregone conclusion that, indeed, pleasure-causing things are good. In his initial argument, Moore concluded that any similar definition of goodness could be criticized in the same way.
[ { "paragraph_id": 0, "text": "Ethical non-naturalism (or moral non-naturalism) is the meta-ethical view which claims that:", "title": "" }, { "paragraph_id": 1, "text": "This makes ethical non-naturalism a non-definist form of moral realism, which is in turn a form of cognitivism. Ethical non-naturalism stands in opposition to ethical naturalism, which claims that moral terms and properties are reducible to non-moral terms and properties, as well as to all forms of moral anti-realism, including ethical subjectivism (which denies that moral propositions refer to objective facts), error theory (which denies that any moral propositions are true), and non-cognitivism (which denies that moral sentences express propositions at all).", "title": "" }, { "paragraph_id": 2, "text": "According to G. E. Moore, \"Goodness is a simple, undefinable, non-natural property.\" To call goodness \"non-natural\" does not mean that it is supernatural or divine. It does mean, however, that goodness cannot be reduced to natural properties such as needs, wants or pleasures. Moore also stated that a reduction of ethical properties to a divine command would be the same as stating their naturalness. This would be an example of what he referred to as \"the naturalistic fallacy.\"", "title": "Definitions and examples" }, { "paragraph_id": 3, "text": "Moore claimed that goodness is \"indefinable\", i.e., it cannot be defined in any other terms. This is the central claim of non-naturalism. Thus, the meaning of sentences containing the word \"good\" cannot be explained entirely in terms of sentences not containing the word \"good.\" One cannot substitute words referring to pleasure, needs or anything else in place of \"good.\"", "title": "Definitions and examples" }, { "paragraph_id": 4, "text": "Some properties, such as hardness, roundness and dampness, are clearly natural properties. We encounter them in the real world and can perceive them. On the other hand, other properties, such as being good and being right, are not so obvious. A great novel is considered to be a good thing; goodness may be said to be a property of that novel. Paying one's debts and telling the truth are generally held to be right things to do; rightness may be said to be a property of certain human actions.", "title": "Definitions and examples" }, { "paragraph_id": 5, "text": "However, these two types of property are quite different. Those natural properties, such as hardness and roundness, can be perceived and encountered in the real world. On the other hand, it is not immediately clear how to physically see, touch or measure the goodness of a novel or the rightness of an action.", "title": "Definitions and examples" }, { "paragraph_id": 6, "text": "Moore did not consider goodness and rightness to be natural properties, i.e., they cannot be defined in terms of any natural properties. How, then, can we know that anything is good and how can we distinguish good from bad?", "title": "A difficult question" }, { "paragraph_id": 7, "text": "Moral epistemology, the part of epistemology (and/or ethics) that studies how we know moral facts and how moral beliefs are justified, has proposed an answer. 
British epistemologists, following Moore, suggested that humans have a special faculty, a faculty of moral intuition, which tells us what is good and bad, right and wrong.", "title": "A difficult question" }, { "paragraph_id": 8, "text": "Ethical intuitionists assert that, if we see a good person or a right action, and our faculty of moral intuition is sufficiently developed and unimpaired, we simply intuit that the person is good or that the action is right. Moral intuition is supposed to be a mental process different from other, more familiar faculties like sense-perception; moral judgments are its outputs. When someone judges something to be good, or some action to be right, then the person is using the faculty of moral intuition. The faculty is attuned to those non-natural properties. Perhaps the best ordinary notion that approximates moral intuition would be the idea of a conscience.", "title": "A difficult question" }, { "paragraph_id": 9, "text": "Moore also introduced what is called the open-question argument, a position he later rejected.", "title": "Another argument for non-naturalism" }, { "paragraph_id": 10, "text": "Suppose a definition of \"good\" is \"pleasure-causing.\" In other words, if something is good, it causes pleasure; if it causes pleasure, then it is, by definition, good. Moore asserted, however, that we could always ask, \"But are pleasure-causing things good?\" This would always be an open question. There is no foregone conclusion that, indeed, pleasure-causing things are good. In his initial argument, Moore concluded that any similar definition of goodness could be criticized in the same way.", "title": "Another argument for non-naturalism" } ]
Ethical non-naturalism is the meta-ethical view which claims that: Ethical sentences express propositions. Some such propositions are true. Those propositions are made true by objective features of the world, independent of human opinion. These moral features of the world are not reducible to any set of non-moral features. This makes ethical non-naturalism a non-definist form of moral realism, which is in turn a form of cognitivism. Ethical non-naturalism stands in opposition to ethical naturalism, which claims that moral terms and properties are reducible to non-moral terms and properties, as well as to all forms of moral anti-realism, including ethical subjectivism, error theory, and non-cognitivism.
2001-03-24T19:05:02Z
2023-11-17T19:00:56Z
[ "Template:Short description", "Template:Multiple issues", "Template:SEP", "Template:Ethics" ]
https://en.wikipedia.org/wiki/Ethical_non-naturalism
9,288
Elvis Presley
Elvis Aaron Presley (January 8, 1935 – August 16, 1977), also known mononymously as Elvis, was an American singer and actor. Known as the "King of Rock and Roll", he is regarded as one of the most significant cultural figures of the 20th century. Presley's energized interpretations of songs and sexually provocative performance style, combined with a singularly potent mix of influences across color lines during a transformative era in race relations, brought both great success and initial controversy. Presley was born in Tupelo, Mississippi; his family relocated to Memphis, Tennessee, when he was 13. His music career began there in 1954, at Sun Records with producer Sam Phillips, who wanted to bring the sound of African-American music to a wider audience. Presley, on guitar and accompanied by lead guitarist Scotty Moore and bassist Bill Black, was a pioneer of rockabilly, an uptempo, backbeat-driven fusion of country music and rhythm and blues. In 1955, drummer D. J. Fontana joined to complete the lineup of Presley's classic quartet, and RCA Victor acquired his contract in a deal arranged by Colonel Tom Parker, who would manage him for more than two decades. Presley's first RCA single, "Heartbreak Hotel", was released in January 1956 and became a number-one hit in the United States. Within a year, RCA would sell ten million Presley singles. With a series of successful television appearances and chart-topping records, Presley became the leading figure of the newly popular rock and roll, though his performative style and promotion of the then-marginalized sound of African Americans led to him being widely considered a threat to the moral well-being of white American youth. In November 1956, Presley made his film debut in Love Me Tender. Drafted into military service in 1958, he relaunched his recording career two years later with some of his most commercially successful work. Presley held few concerts, however, and guided by Parker, proceeded to devote much of the 1960s to making Hollywood films and soundtrack albums, most of them critically derided. Some of his most famous films included Jailhouse Rock (1957), Blue Hawaii (1961), and Viva Las Vegas (1964). In 1968, following a seven-year break from live performances, he returned to the stage in the acclaimed television comeback special Elvis, which led to an extended Las Vegas concert residency and a string of highly profitable tours. In 1973, Presley gave the first concert by a solo artist to be broadcast around the world, Aloha from Hawaii. However, years of prescription drug abuse and unhealthy eating habits severely compromised his health, and Presley died suddenly in 1977 at his Graceland estate at the age of 42. Having sold roughly 500 million records worldwide, Presley is one of the best-selling music artists of all time. He was commercially successful in many genres, including pop, country, rhythm & blues, adult contemporary, and gospel. He won three Grammy Awards, received the Grammy Lifetime Achievement Award at age 36, and has been inducted into multiple music halls of fame. He also holds several records, including the most RIAA-certified gold and platinum albums, the most albums charted on the Billboard 200, the most number-one albums by a solo artist on the UK Albums Chart, and the most number-one singles by any act on the UK Singles Chart. In 2018, Presley was posthumously awarded the Presidential Medal of Freedom. 
Elvis Aaron Presley was born on January 8, 1935, in Tupelo, Mississippi, to Vernon Presley and Gladys Love (née Smith) Presley. Elvis' twin Jesse Garon was delivered stillborn. Presley became close to both parents, especially his mother. The family attended an Assembly of God church, where he found his initial musical inspiration. Vernon moved from one odd job to the next, and the family often relied on neighbors and government food assistance. In 1938 they lost their home after Vernon was found guilty of altering a check and jailed for eight months. In September 1941, Presley entered first grade at East Tupelo Consolidated, where his teachers regarded him as "average". His first public performance was a singing contest at the Mississippi–Alabama Fair and Dairy Show on October 3, 1945, when he was 10; he sang "Old Shep" and recalled placing fifth. A few months later, Presley received his first guitar for his birthday; he received guitar lessons from two uncles and a pastor at the family's church. Presley recalled, "I took the guitar, and I watched people, and I learned to play a little bit. But I would never sing in public. I was very shy about it." In September 1946, Presley entered a new school, Milam, for sixth grade. The following year, he began singing and playing his guitar at school. He was often teased as a "trashy" kid who played hillbilly music. Presley was a devotee of Mississippi Slim's radio show. He was described as "crazy about music" by Slim's younger brother, one of Presley's classmates. Slim showed Presley chord techniques. When his protégé was 12, Slim scheduled him for two on-air performances. Presley was overcome by stage fright the first time but performed the following week. In November 1948, the family moved to Memphis, Tennessee. Enrolled at L. C. Humes High School, Presley received a C in music in eighth grade. When his music teacher said he had no aptitude for singing, he brought in his guitar and sang a recent hit, "Keep Them Cold Icy Fingers Off Me". He was usually too shy to perform openly and was occasionally bullied by classmates for being a "mama's boy". In 1950, Presley began practicing guitar under the tutelage of Lee Denson, a neighbor. They and three other boys—including two future rockabilly pioneers, brothers Dorsey and Johnny Burnette—formed a loose musical collective. During his junior year, Presley began to stand out among his classmates, largely because of his appearance: he grew his sideburns and styled his hair. He would head down to Beale Street, the heart of Memphis' thriving blues scene, and admire the wild, flashy clothes at Lansky Brothers. By his senior year, he was wearing those clothes. He competed in Humes' Annual "Minstrel" Show in 1953, singing and playing "Till I Waltz Again with You", a recent hit for Teresa Brewer. Presley recalled that the performance did much for his reputation: I wasn't popular in school ... I failed music—only thing I ever failed. And then they entered me in this talent show ... when I came onstage, I heard people kind of rumbling and whispering and so forth, 'cause nobody knew I even sang. It was amazing how popular I became in school after that. Presley, who could not read music, played by ear and frequented record stores that provided jukeboxes and listening booths. He knew all of Hank Snow's songs, and he loved records by other country singers such as Roy Acuff, Ernest Tubb, Ted Daffan, Jimmie Rodgers, Jimmie Davis, and Bob Wills. 
The Southern gospel singer Jake Hess, one of his favorite performers, was a significant influence on his ballad-singing style. Presley was a regular audience member at the monthly All-Night Singings downtown, where many of the white gospel groups that performed reflected the influence of African American spirituals. Presley listened to regional radio stations, such as WDIA, that played what were then called "race records": spirituals, blues, and the modern, backbeat-heavy rhythm and blues. Like some of his peers, he may have attended blues venues only on nights designated for exclusively white audiences. Many of his future recordings were inspired by local African-American musicians such as Arthur Crudup and Rufus Thomas. B.B. King recalled that he had known Presley before he was popular when they both used to frequent Beale Street. By the time he graduated high school in June 1953, Presley had singled out music as his future. In August 1953, Presley checked into Memphis Recording Service, the company run by Sam Phillips before he started Sun Records. He aimed to pay for studio time to record a two-sided acetate disc: "My Happiness" and "That's When Your Heartaches Begin". He later claimed that he intended the record as a birthday gift for his mother, or that he was merely interested in what he "sounded like". Biographer Peter Guralnick argued that Presley chose Sun in the hope of being discovered. In January 1954, Presley cut a second acetate at Sun—"I'll Never Stand in Your Way" and "It Wouldn't Be the Same Without You"—but again nothing came of it. Not long after, he failed an audition for a local vocal quartet, the Songfellows, and another for the band of Eddie Bond. Phillips, meanwhile, was always on the lookout for someone who could bring to a broader audience the sound of the black musicians on whom Sun focused. In June, he acquired a demo recording by Jimmy Sweeney of a ballad, "Without You", that he thought might suit Presley. The teenaged singer came by the studio but was unable to do it justice. Despite this, Phillips asked Presley to sing other numbers and was sufficiently affected by what he heard to invite two local musicians, guitarist Winfield "Scotty" Moore and upright bass player Bill Black, to work with Presley for a recording session. The session, held the evening of July 5, proved entirely unfruitful until late in the night. As they were about to abort and go home, Presley launched into a 1946 blues number, Arthur Crudup's "That's All Right". Moore recalled, "All of a sudden, Elvis just started singing this song, jumping around and acting the fool, and then Bill picked up his bass, and he started acting the fool, too, and I started playing with them." Phillips quickly began taping; this was the sound he had been looking for. Three days later, popular Memphis disc jockey Dewey Phillips (no relation to Sam Phillips) played "That's All Right" on his Red, Hot, and Blue show. Listener interest was such that Phillips played the record repeatedly during the remaining two hours of his show. Interviewing Presley on-air, Phillips asked him what high school he attended to clarify his color for the many callers who had assumed that he was black. During the next few days, the trio recorded a bluegrass song, Bill Monroe's "Blue Moon of Kentucky", again in a distinctive style and employing a jury-rigged echo effect that Sam Phillips dubbed "slapback". A single was pressed with "That's All Right" on the A-side and "Blue Moon of Kentucky" on the reverse. 
The trio played publicly for the first time at the Bon Air club on July 17, 1954. Later that month, they appeared at the Overton Park Shell, with Slim Whitman headlining. Here Elvis pioneered "Rubber Legs", his signature dance movement. A combination of his strong response to rhythm and nervousness led Presley to shake his legs as he performed: His wide-cut pants emphasized his movements, causing young women in the audience to start screaming. Moore recalled, "During the instrumental parts, he would back off from the mike and be playing and shaking, and the crowd would just go wild." Soon after, Moore and Black left their old band to play with Presley regularly, and disc jockey/promoter Bob Neal became the trio's manager. From August through October, they played frequently at the Eagle's Nest club, a dance venue in Memphis. When Presley played, teenagers rushed from the pool to fill the club, then left again as the house western swing band resumed. Presley quickly grew more confident on stage. According to Moore, "His movement was a natural thing, but he was also very conscious of what got a reaction. He'd do something one time and then he would expand on it real quick." Amid these live performances, Presley returned to Sun studio for more recording sessions. Presley made what would be his only appearance on Nashville's Grand Ole Opry on October 2; Opry manager Jim Denny told Phillips that his singer was "not bad" but did not suit the program. In November 1954, Presley performed on Louisiana Hayride—the Opry's chief, and more adventurous, rival. The show was broadcast to 198 radio stations in 28 states. His nervous first set drew a muted reaction. A more composed and energetic second set inspired an enthusiastic response. Soon after the show, the Hayride engaged Presley for a year's worth of Saturday-night appearances. Trading in his old guitar for $8, he purchased a Martin instrument for $175 (equivalent to $1,900 in 2022) and his trio began playing in new locales, including Houston, Texas, and Texarkana, Arkansas. Presley made his first television appearance on the KSLA-TV broadcast of Louisiana Hayride. Soon after, he failed an audition for Arthur Godfrey's Talent Scouts on the CBS television network. By early 1955, Presley's regular Hayride appearances, constant touring, and well-received record releases had made him a regional star. In January, Neal signed a formal management contract with Presley and brought him to the attention of Colonel Tom Parker, whom he considered the best promoter in the music business. Having successfully managed the top country star Eddy Arnold, Parker was working with the new number-one country singer, Hank Snow. Parker booked Presley on Snow's February tour. By August, Sun had released ten sides credited to "Elvis Presley, Scotty and Bill"; the latest recordings included a drummer. Some of the songs, like "That's All Right", were in what one Memphis journalist described as the "R&B idiom of negro field jazz"; others, like "Blue Moon of Kentucky", were "more in the country field", "but there was a curious blending of the two different musics in both". This blend of styles made it difficult for Presley's music to find radio airplay. According to Neal, many country-music disc jockeys would not play it because Presley sounded too much like a black artist and none of the R&B stations would touch him because "he sounded too much like a hillbilly." The blend came to be known as "rockabilly". 
At the time, Presley was billed as "The King of Western Bop", "The Hillbilly Cat", and "The Memphis Flash". Presley renewed Neal's management contract in August 1955, simultaneously appointing Parker as his special adviser. The group maintained an extensive touring schedule. Neal recalled, "It was almost frightening, the reaction that came to Elvis from the teenaged boys. So many of them, through some sort of jealousy, would practically hate him. There were occasions in some towns in Texas when we'd have to be sure to have a police guard because somebody'd always try to take a crack at him." The trio became a quartet when Hayride drummer Fontana joined as a full member. In mid-October, they played a few shows in support of Bill Haley, whose "Rock Around the Clock" track had been a number-one hit the previous year. Haley observed that Presley had a natural feel for rhythm, and advised him to sing fewer ballads. At the Country Disc Jockey Convention in early November, Presley was voted the year's most promising male artist. After three major labels made offers of up to $25,000, Parker and Phillips struck a deal with RCA Victor on November 21 to acquire Presley's Sun contract for an unprecedented $40,000. Presley, aged 20, was legally still a minor, so his father signed the contract. Parker arranged with the owners of Hill & Range Publishing, Jean and Julian Aberbach, to create two entities, Elvis Presley Music and Gladys Music, to handle all the new material recorded by Presley. Songwriters were obliged to forgo one-third of their customary royalties in exchange for having Presley perform their compositions. By December, RCA had begun to heavily promote its new singer, and before month's end had reissued many of his Sun recordings. On January 10, 1956, Presley made his first recordings for RCA in Nashville. Extending his by-now customary backup of Moore, Black, Fontana, and Hayride pianist Floyd Cramer—who had been performing at live club dates with Presley—RCA enlisted guitarist Chet Atkins and three background singers, including Gordon Stoker of the popular Jordanaires quartet. The session produced the moody "Heartbreak Hotel", released as a single on January 27. Parker brought Presley to national television, booking him on CBS's Stage Show for six appearances over two months. The program, produced in New York City, was hosted on alternate weeks by big band leaders and brothers Tommy and Jimmy Dorsey. After his first appearance on January 28, Presley stayed in town to record at RCA Victor's New York studio. The sessions yielded eight songs, including a cover of Carl Perkins' rockabilly anthem "Blue Suede Shoes". In February, Presley's "I Forgot to Remember to Forget", a Sun recording released the previous August, reached the top of the Billboard country chart. Neal's contract was terminated and Parker became Presley's manager. RCA released Presley's self-titled debut album on March 23. Joined by five previously unreleased Sun recordings, its seven recently recorded tracks included two country songs, a bouncy pop tune, and what would centrally define the evolving sound of rock and roll: "Blue Suede Shoes"—"an improvement over Perkins' in almost every way", according to critic Robert Hilburn—and three R&B numbers that had been part of Presley's stage repertoire, covers of Little Richard, Ray Charles, and The Drifters. As described by Hilburn, these were the most revealing of all. Unlike many white artists ... 
who watered down the gritty edges of the original R&B versions of songs in the '50s, Presley reshaped them. He not only injected the tunes with his own vocal character but also made guitar, not piano, the lead instrument in all three cases. It became the first rock and roll album to top the Billboard chart, a position it held for ten weeks. While Presley was not an innovative guitarist like Moore or contemporary African American rockers Bo Diddley and Chuck Berry, cultural historian Gilbert B. Rodman argued that the album's cover image, "of Elvis having the time of his life on stage with a guitar in his hands played a crucial role in positioning the guitar ... as the instrument that best captured the style and spirit of this new music." On April 3, Presley made the first of two appearances on NBC's The Milton Berle Show. His performance, on the deck of the USS Hancock in San Diego, California, prompted cheers and screams from an audience of sailors and their dates. A few days later, Presley and his band were flying to Nashville for a recording session when an engine died and the plane almost went down over Arkansas. Twelve weeks after its original release, "Heartbreak Hotel" became Presley's first number-one pop hit. In late April he began a two-week residency at the New Frontier Hotel and Casino on the Las Vegas Strip. The shows were poorly received by the conservative, middle-aged hotel guests—"like a jug of corn liquor at a champagne party", wrote a critic for Newsweek. Amid his Vegas tenure, Presley, who had acting ambitions, signed a seven-year contract with Paramount Pictures. He began a tour of the Midwest in mid-May, covering fifteen cities in as many days. He had attended several shows by Freddie Bell and the Bellboys in Vegas and was struck by their cover of "Hound Dog", a hit in 1953 for blues singer Big Mama Thornton by songwriters Jerry Leiber and Mike Stoller. It became his new closing number. After a show in La Crosse, Wisconsin, an urgent message on the letterhead of the local Catholic diocese's newspaper was sent to FBI director J. Edgar Hoover. It warned that Presley is a definite danger to the security of the United States. ... [His] actions and motions were such as to rouse the sexual passions of teenaged youth. ... After the show, more than 1,000 teenagers tried to gang into Presley's room at the auditorium. ... Indications of the harm Presley did just in La Crosse were the two high school girls ... whose abdomen and thigh had Presley's autograph. Presley's second Milton Berle Show appearance came on June 5 at NBC's Hollywood studio, amid another hectic tour. Milton Berle persuaded Presley to leave his guitar backstage. During the performance, Presley abruptly halted an uptempo rendition of "Hound Dog" and launched into a slow, grinding version accentuated with exaggerated body movements. His gyrations created a storm of controversy. Television critics were outraged: Jack Gould of The New York Times wrote, Mr. Presley has no discernible singing ability. ... His phrasing, if it can be called that, consists of the stereotyped variations that go with a beginner's aria in a bathtub. ... His one specialty is an accented movement of the body ... primarily identified with the repertoire of the blond bombshells of the burlesque runway. Ben Gross of the New York Daily News opined that popular music "has reached its lowest depths in the 'grunt and groin' antics of one Elvis Presley. ... Elvis, who rotates his pelvis ... 
gave an exhibition that was suggestive and vulgar, tinged with the kind of animalism that should be confined to dives and bordellos". Ed Sullivan, whose variety show was the nation's most popular, declared Presley "unfit for family viewing". To Presley's displeasure, he soon found himself being referred to as "Elvis the Pelvis", which he called "childish". The Berle shows drew such high ratings that Presley was booked for a July 1 appearance on NBC's The Steve Allen Show in New York. Allen, no fan of rock and roll, introduced a "new Elvis" in a white bowtie and black tails. Presley sang "Hound Dog" for less than a minute to a basset hound wearing a top hat and bowtie. As described by television historian Jake Austen, "Allen thought Presley was talentless and absurd ... [he] set things up so that Presley would show his contrition". Allen later wrote that he found Presley's "strange, gangly, country-boy charisma, his hard-to-define cuteness, and his charming eccentricity intriguing" and worked him into the "comedy fabric" of his program. Just before the final rehearsal for the show, Presley told a reporter, "I don't want to do anything to make people dislike me. I think TV is important so I'm going to go along, but I won't be able to give the kind of show I do in a personal appearance." Presley would refer back to the Allen show as the most ridiculous performance of his career. Later that night, he appeared on Hy Gardner Calling, a popular local television show. Pressed on whether he had learned anything from the criticism of him, Presley responded, "No, I haven't... I don't see how any type of music would have any bad influence on people when it's only music. ... how would rock 'n' roll music make anyone rebel against their parents?" The next day, Presley recorded "Hound Dog", "Any Way You Want Me" and "Don't Be Cruel". The Jordanaires sang harmony, as they had on The Steve Allen Show; they would work with Presley through the 1960s. A few days later, Presley made an outdoor concert appearance in Memphis, at which he announced, "You know, those people in New York are not gonna change me none. I'm gonna show you what the real Elvis is like tonight." In August, a judge in Jacksonville, Florida, ordered Presley to tame his act. Throughout the following performance, he largely kept still, except for wiggling his little finger suggestively in mockery of the order. The single pairing "Don't Be Cruel" with "Hound Dog" ruled the top of the charts for eleven weeks—a mark that would not be surpassed for thirty-six years. Recording sessions for Presley's second album took place in Hollywood in early September. Leiber and Stoller, the writers of "Hound Dog", contributed "Love Me". Allen's show with Presley had, for the first time, beaten The Ed Sullivan Show in the ratings. Sullivan booked Presley for three appearances for an unprecedented $50,000. The first, on September 9, 1956, was seen by approximately 60 million viewers—a record 82.6 percent of the television audience. Actor Charles Laughton hosted the show, filling in while Sullivan was recovering from a car accident. According to legend, Presley was shot only from the waist up. Watching clips of the Allen and Berle shows, Sullivan had opined that Presley "got some kind of device hanging down below the crotch of his pants—so when he moves his legs back and forth you can see the outline of his cock. ... I think it's a Coke bottle. ... We just can't have this on a Sunday night. This is a family show!" 
Sullivan publicly told TV Guide, "As for his gyrations, the whole thing can be controlled with camera shots." In fact, Presley was shown head-to-toe. Though the camerawork was relatively discreet during his debut, with leg-concealing closeups when he danced, the studio audience reacted with screams. Presley's performance of his forthcoming single, the ballad "Love Me Tender", prompted a record-shattering million advance orders. More than any other single event, it was this first appearance on The Ed Sullivan Show that made Presley a national celebrity. Accompanying Presley's rise to fame, a cultural shift was taking place that he both helped inspire and came to symbolize. The historian Marty Jezer wrote that Presley began the "biggest pop craze" since Glenn Miller and Frank Sinatra and brought rock and roll to mainstream culture: As Presley set the artistic pace, other artists followed. ... Presley, more than anyone else, gave the young a belief in themselves as a distinct and somehow unified generation—the first in America ever to feel the power of an integrated youth culture. The audience response at Presley's live shows became increasingly fevered. Moore recalled, "He'd start out, 'You ain't nothin' but a Hound Dog,' and they'd just go to pieces. They'd always react the same way. There'd be a riot every time." At the two concerts he performed in September at the Mississippi–Alabama Fair and Dairy Show, fifty National Guardsmen were added to the police detail to prevent a ruckus. Elvis, Presley's second RCA album, was released in October and quickly rose to number one. The album includes "Old Shep", which he sang at the talent show in 1945, and which now marked the first time he played piano on an RCA session. According to Guralnick, "the halting chords and the somewhat stumbling rhythm" showed "the unmistakable emotion and the equally unmistakable valuing of emotion over technique." Assessing the musical and cultural impact of Presley's recordings from "That's All Right" through Elvis, rock critic Dave Marsh wrote that "these records, more than any others, contain the seeds of what rock & roll was, has been and most likely what it may foreseeably become." Presley returned to The Ed Sullivan Show, hosted this time by its namesake, on October 28. After the performance, crowds in Nashville and St. Louis burned him in effigy. His first motion picture, Love Me Tender, was released on November 21. Though he was not top-billed, the film's original title—The Reno Brothers—was changed to capitalize on his latest number-one record: "Love Me Tender" had hit the top of the charts earlier that month. To further take advantage of Presley's popularity, four musical numbers were added to what was originally a straight acting role. The film was panned by critics but did very well at the box office. Presley would receive top billing on every subsequent film he made. On December 4, Presley dropped into Sun Records, where Carl Perkins and Jerry Lee Lewis were recording, and had an impromptu jam session along with Johnny Cash. Though Phillips no longer had the right to release any Presley material, he made sure that the session was captured on tape. The results, none officially released for twenty-five years, became known as the "Million Dollar Quartet" recordings. 
The year ended with a front-page story in The Wall Street Journal reporting that Presley merchandise had brought in $22 million on top of his record sales, and Billboard's declaration that he had placed more songs in the top 100 than any other artist since records were first charted. In his first full year at RCA Victor, then the record industry's largest company, Presley had accounted for over fifty percent of the label's singles sales. Presley made his third and final Ed Sullivan Show appearance on January 6, 1957—on this occasion indeed shot only down to the waist. Some commentators have claimed that Parker orchestrated an appearance of censorship to generate publicity. In any event, as critic Greil Marcus describes, Presley "did not tie himself down. Leaving behind the bland clothes he had worn on the first two shows, he stepped out in the outlandish costume of a pasha, if not a harem girl. From the make-up over his eyes, the hair falling in his face, the overwhelmingly sexual cast of his mouth, he was playing Rudolph Valentino in The Sheik, with all stops out." To close, displaying his range and defying Sullivan's wishes, Presley sang a gentle black spiritual, "Peace in the Valley". At the end of the show, Sullivan declared Presley "a real decent, fine boy". Two days later, the Memphis draft board announced that Presley would be classified 1-A and would probably be drafted sometime that year. Each of the three Presley singles released in the first half of 1957 went to number one: "Too Much", "All Shook Up", and "(Let Me Be Your) Teddy Bear". Already an international star, he was attracting fans even where his music was not officially released: The New York Times reported that pressings of his music on discarded X-ray plates were commanding high prices in Leningrad. Presley purchased his 18-room mansion, Graceland, on March 19, 1957. Before the purchase, Elvis recorded Loving You—the soundtrack to his second film, which was released in July. It was his third straight number-one album. The title track was written by Leiber and Stoller, who were then retained to write four of the six songs recorded at the sessions for Jailhouse Rock, Presley's next film. The songwriting team effectively produced the Jailhouse sessions and developed a close working relationship with Presley, who came to regard them as his "good-luck charm". "He was fast," said Leiber. "Any demo you gave him he knew by heart in ten minutes." The title track became another number-one hit, as was the Jailhouse Rock EP. Presley undertook three brief tours during the year, continuing to generate a crazed audience response. A Detroit newspaper suggested that "the trouble with going to see Elvis Presley is that you're liable to get killed". Villanova students pelted the singer with eggs in Philadelphia, and in Vancouver the crowd rioted after the show ended, destroying the stage. Frank Sinatra, who had inspired the swooning and screaming of teenage girls in the 1940s, decried rock and roll as "brutal, ugly, degenerate, vicious. ... It fosters almost totally negative and destructive reactions in young people. It smells phoney and false. It is sung, played and written, for the most part, by cretinous goons. ... This rancid-smelling aphrodisiac I deplore." Asked for a response, Presley said, "I admire the man. He has a right to say what he wants to say. He is a great success and a fine actor, but I think he shouldn't have said it. ... This is a trend, just the same as he faced when he started years ago." 
Leiber and Stoller were again in the studio for the recording of Elvis' Christmas Album. Toward the end of the session, they wrote a song on the spot at Presley's request: "Santa Claus Is Back in Town", an innuendo-laden blues. The holiday release stretched Presley's string of number-one albums to four and would become the best-selling Christmas album ever in the United States, with eventual sales of over 20 million worldwide. After the session, Moore and Black—drawing only modest weekly salaries, sharing in none of Presley's massive financial success—resigned, though they were brought back on a per diem basis a few weeks later. On December 20, Presley received his draft notice, though he was granted a deferment to finish the forthcoming film King Creole. A couple of weeks into the new year, "Don't", another Leiber and Stoller tune, became Presley's tenth number-one seller. Recording sessions for the King Creole soundtrack were held in Hollywood in mid-January 1958. Leiber and Stoller provided three songs, but it would be the last time Presley and the duo worked closely together. As Stoller later recalled, Presley's manager and entourage sought to wall him off. A brief soundtrack session on February 11 marked the final occasion on which Black was to perform with Presley. On March 24, 1958, Presley was drafted into the United States Army at Fort Chaffee in Arkansas. His arrival was a major media event. Hundreds of people descended on Presley as he stepped from the bus; photographers accompanied him into the installation. Presley announced that he was looking forward to his military service, saying that he did not want to be treated any differently from anyone else. Between March 28 and September 17, 1958, Presley completed basic and advanced training at Fort Hood, Texas, where he was temporarily assigned to Company A, 2d Medium Tank Battalion, 37th Armor. During the two weeks' leave between his basic and advanced training in early June, he recorded five songs in Nashville. In early August, Presley's mother was diagnosed with hepatitis, and her condition rapidly worsened. Presley was granted emergency leave to visit her and arrived in Memphis on August 12. Two days later, she died of heart failure at age 46. Presley was devastated and never the same; their relationship had remained extremely close—even into his adulthood, they would use baby talk with each other and Presley would address her with pet names. On October 1, 1958, Presley was assigned to the 1st Medium Tank Battalion, 32d Armor, 3d Armored Division, at Ray Barracks, West Germany, where he served as an armor intelligence specialist. On November 27, he was promoted to private first class and on June 1, 1959, to specialist fourth class. While on maneuvers, Presley was introduced to amphetamines and became "practically evangelical about their benefits", not only for energy but for "strength" and weight loss. Karate became a lifelong interest: he studied with Jürgen Seydel, and later included it in his live performances. Fellow soldiers have attested to Presley's wish to be seen as an able, ordinary soldier despite his fame, and to his generosity. He donated his Army pay to charity, purchased television sets for the base, and bought an extra set of fatigues for everyone in his outfit. Presley was promoted to sergeant on February 11, 1960. While in Bad Nauheim, Presley, aged 24, met 14-year-old Priscilla Beaulieu. They would marry after a seven-and-a-half-year courtship. 
In her autobiography, Priscilla said that Presley was concerned that his 24 months in the military would ruin his career. In Special Services, he would have been able to perform and remain in touch with the public, but Parker had convinced him that to gain popular respect, he should serve as a regular soldier. Media reports echoed Presley's concerns about his career, but RCA producer Steve Sholes and Freddy Bienstock of Hill and Range had carefully prepared: armed with a substantial amount of unreleased material, they kept up a regular stream of successful releases. Between his induction and discharge, Presley had ten top-40 hits, including "Wear My Ring Around Your Neck", the bestselling "Hard Headed Woman", and "One Night" in 1958, and "(Now and Then There's) A Fool Such as I" and the number-one "A Big Hunk o' Love" in 1959. RCA also generated four albums compiling previously issued material during this period, most successfully Elvis' Golden Records (1958), which hit number three on the LP chart. Presley returned to the U.S. on March 2, 1960, and was honorably discharged three days later. The train that carried him from New Jersey to Tennessee was mobbed all the way, and Presley was called upon to appear at scheduled stops to please his fans. On the night of March 20, he entered RCA's Nashville studio to cut tracks for a new album along with a single, "Stuck on You", which was rushed into release and swiftly became a number-one hit. Another Nashville session two weeks later yielded a pair of bestselling singles, the ballads "It's Now or Never" and "Are You Lonesome Tonight?", along with the rest of Elvis Is Back! The album features several songs described by Greil Marcus as full of Chicago blues "menace, driven by Presley's own super-miked acoustic guitar, brilliant playing by Scotty Moore, and demonic sax work from Boots Randolph. Elvis' singing wasn't sexy, it was pornographic." The record "conjured up the vision of a performer who could be all things", according to music historian John Robertson: "a flirtatious teenage idol with a heart of gold; a tempestuous, dangerous lover; a gutbucket blues singer; a sophisticated nightclub entertainer; [a] raucous rocker". Released only days after recording was complete, it reached number two on the album chart. Presley returned to television on May 12 as a guest on The Frank Sinatra Timex Special. Also known as Welcome Home Elvis, the show had been taped in late March, the only time all year Presley performed in front of an audience. Parker secured an unheard-of $125,000 for eight minutes of singing. The broadcast drew an enormous viewership. G.I. Blues, the soundtrack to Presley's first film since his return, was a number-one album in October. His first LP of sacred material, His Hand in Mine, followed two months later; it reached number 13 on the U.S. pop chart and number 3 in the United Kingdom, remarkable figures for a gospel album. In February 1961, Presley performed two shows at a Memphis benefit supporting twenty-four local charities. During a luncheon preceding the event, RCA presented him with a plaque certifying worldwide sales of over 75 million records. A twelve-hour Nashville session in mid-March yielded nearly all of Presley's next studio album, Something for Everybody. According to John Robertson, it exemplifies the Nashville sound, the restrained, cosmopolitan style that would define country music in the 1960s.
Presaging much of what was to come from Presley over the next half-decade, the album is largely "a pleasant, unthreatening pastiche of the music that had once been Elvis' birthright". It would be his sixth number-one LP. Another benefit concert, for a Pearl Harbor memorial, was staged on March 25 in Hawaii. It was to be Presley's last public performance for seven years. Parker had by now pushed Presley into a heavy filmmaking schedule, focused on formulaic, modestly budgeted musical comedies. Presley initially insisted on pursuing more serious roles, but when two films in a more dramatic vein—Flaming Star (1960) and Wild in the Country (1961)—were less commercially successful, he reverted to the formula. Among the twenty-seven films he made during the 1960s, there were a few further exceptions. His films were almost universally panned; critic Andrew Caine dismissed them as a "pantheon of bad taste". Nonetheless, they were virtually all profitable. Hal Wallis, who produced nine, declared, "A Presley picture is the only sure thing in Hollywood." Of Presley's films in the 1960s, fifteen were accompanied by soundtrack albums and another five by soundtrack EPs. The films' rapid production and release schedules—Presley frequently starred in three a year—affected his music. According to Jerry Leiber, the soundtrack formula was already evident before Presley left for the Army: "three ballads, one medium-tempo [number], one up-tempo, and one break blues boogie". As the decade wore on, the quality of the soundtrack songs grew "progressively worse". Julie Parrish, who appeared in Paradise, Hawaiian Style (1966), says that Presley disliked many of the songs. The Jordanaires' Gordon Stoker describes how he would retreat from the studio microphone: "The material was so bad that he felt like he couldn't sing it." Most of the film albums featured a song or two from respected writers such as the team of Doc Pomus and Mort Shuman. But by and large, according to biographer Jerry Hopkins, the numbers seemed to be "written on order by men who never really understood Elvis or rock and roll". In the first half of the decade, three of Presley's soundtrack albums were ranked number one on the pop charts, and a few of his most popular songs came from his films, such as "Can't Help Falling in Love" (1961) and "Return to Sender" (1962). However, the commercial returns steadily diminished. From 1964 through 1968, Presley had only one top-ten hit: "Crying in the Chapel" (1965), a gospel number recorded in 1960. As for non-film albums, between the June 1962 release of Pot Luck and the November 1968 release of the soundtrack to the television special that signaled his comeback, only one LP of new material by Presley was issued: the gospel album How Great Thou Art (1967). It won him his first Grammy Award, for Best Sacred Performance. As Marsh described, Presley was "arguably the greatest white gospel singer of his time [and] really the last rock & roll artist to make gospel as vital a component of his musical personality as his secular songs". Shortly before Christmas 1966, more than seven years since they first met, Presley proposed to Priscilla Beaulieu. They were married on May 1, 1967, in a brief ceremony in their suite at the Aladdin Hotel in Las Vegas. The flow of formulaic films and assembly-line soundtracks continued. It was not until October 1967, when the Clambake soundtrack LP registered record low sales for a new Presley album, that RCA executives recognized a problem.
"By then, of course, the damage had been done", as historians Connie Kirchberg and Marc Hendrickx put it. "Elvis was viewed as a joke by serious music lovers and a has-been to all but his most loyal fans." Presley's only child, Lisa Marie, was born on February 1, 1968, during a period when he had grown deeply unhappy with his career. Of the eight Presley singles released between January 1967 and May 1968, only two charted in the top 40, none higher than number 28. His forthcoming soundtrack album, Speedway, would rank at number 82. Parker had already shifted his plans to television: he maneuvered a deal with NBC that committed the network to finance a theatrical feature and broadcast a Christmas special. Recorded in late June in Burbank, California, the special, simply called Elvis, aired on December 3, 1968. Later known as the '68 Comeback Special, the show featured lavishly staged studio productions as well as songs performed with a band in front of a small audience—Presley's first live performances since 1961. The live segments saw Presley dressed in tight black leather, singing and playing guitar in an uninhibited style reminiscent of his early rock and roll days. Director and co-producer Steve Binder worked hard to produce a show that was far from the hour of Christmas songs Parker had originally planned. The show, NBC's highest-rated that season, captured forty-two percent of the total viewing audience. Jon Landau of Eye magazine remarked, "There is something magical about watching a man who has lost himself find his way back home. He sang with the kind of power people no longer expect of rock 'n' roll singers. He moved his body with a lack of pretension and effort that must have made Jim Morrison green with envy." Marsh calls the performance one of "emotional grandeur and historical resonance". By January 1969, the single "If I Can Dream", written for the special, reached number 12. The soundtrack album rose into the top ten. According to friend Jerry Schilling, the special reminded Presley of what "he had not been able to do for years, being able to choose the people; being able to choose what songs and not being told what had to be on the soundtrack. ... He was out of prison, man." Binder said of Presley's reaction, "I played Elvis the 60-minute show, and he told me in the screening room, 'Steve, it's the greatest thing I've ever done in my life. I give you my word I will never sing a song I don't believe in.'" Buoyed by the experience of the Comeback Special, Presley engaged in a prolific series of recording sessions at American Sound Studio, which led to the acclaimed From Elvis in Memphis. Released in June 1969, it was his first secular, non-soundtrack album from a dedicated period in the studio in eight years. As described by Marsh, it is "a masterpiece in which Presley immediately catches up with pop music trends that had seemed to pass him by during the movie years. He sings country songs, soul songs and rockers with real conviction, a stunning achievement." The album featured the hit single "In the Ghetto", issued in April, which reached number three on the pop chart—Presley's first non-gospel top ten hit since "Bossa Nova Baby" in 1963. Further hit singles were culled from the American Sound sessions: "Suspicious Minds", "Don't Cry Daddy", and "Kentucky Rain". Presley was keen to resume regular live performing. Following the success of the Comeback Special, offers came in from around the world. 
The London Palladium offered Parker US$28,000 (equivalent to $223,000 in 2022) for a one-week engagement. He responded, "That's fine for me, now how much can you get for Elvis?" In May, the brand-new International Hotel in Las Vegas, boasting the largest showroom in the city, booked Presley for fifty-seven shows over four weeks, beginning July 31. Moore, Fontana, and the Jordanaires declined to participate, afraid of losing the lucrative session work they had in Nashville. Presley assembled new, top-notch accompaniment, led by guitarist James Burton and including two gospel groups, The Imperials and Sweet Inspirations. Costume designer Bill Belew, responsible for the intense leather styling of the Comeback Special, created a new stage look for Presley, inspired by his passion for karate. Nonetheless, Presley was nervous: his only previous Las Vegas engagement, in 1956, had been dismal. Parker oversaw a major promotional push, and International Hotel owner Kirk Kerkorian arranged to send his own plane to New York to fly in rock journalists for the debut performance. Presley took to the stage without introduction. The audience of 2,200, including many celebrities, gave him a standing ovation before he sang a note and another after his performance. A third followed his encore, "Can't Help Falling in Love" (which would be his closing number for much of his remaining life). At a press conference after the show, when a journalist referred to him as "The King", Presley gestured toward Fats Domino, who was taking in the scene. "No," Presley said, "that's the real king of rock and roll." The next day, Parker's negotiations with the hotel resulted in a five-year contract for Presley to play each February and August, at an annual salary of $1 million. Newsweek commented, "There are several unbelievable things about Elvis, but the most incredible is his staying power in a world where meteoric careers fade like shooting stars." Rolling Stone called Presley "supernatural, his own resurrection." In November, Presley's final non-concert film, Change of Habit, opened. The double album From Memphis to Vegas/From Vegas to Memphis came out the same month; the first LP consisted of live performances from the International, the second of more cuts from the American Sound sessions. "Suspicious Minds" reached the top of the charts—Presley's first U.S. pop number-one in over seven years, and his last. Cassandra Peterson, later television's Elvira, met Presley during this period in Las Vegas. She recalled of their encounter, "He was so anti-drug when I met him. I mentioned to him that I smoked marijuana, and he was just appalled." Presley also rarely drank—several of his family members had been alcoholics, a fate he intended to avoid. Presley returned to the International early in 1970 for the first of the year's two-month-long engagements, performing two shows a night. Recordings from these shows were issued on the album On Stage. In late February, Presley performed six attendance-record–breaking shows at the Houston Astrodome. In April, the single "The Wonder of You" was issued—a number one hit in the UK, it topped the U.S. adult contemporary chart as well. Metro-Goldwyn-Mayer (MGM) filmed rehearsal and concert footage at the International during August for the documentary Elvis: That's the Way It Is. Presley was performing in a jumpsuit, which would become a trademark of his live act. During this engagement, he was threatened with murder unless US$50,000 (equivalent to $377,000 in 2022) was paid. 
Presley had been the target of many threats since the 1950s, often without his knowledge. The FBI took the threat seriously and security was increased for the next two shows. Presley went onstage with a Derringer in his right boot and a .45 caliber pistol in his waistband, but the concerts passed without incident. That's the Way It Is, produced to accompany the documentary and featuring both studio and live recordings, marked a stylistic shift. As music historian John Robertson noted, "The authority of Presley's singing helped disguise the fact that the album stepped decisively away from the American-roots inspiration of the Memphis sessions towards a more middle-of-the-road sound. With country put on the back burner, and soul and R&B left in Memphis, what was left was very classy, very clean white pop—perfect for the Las Vegas crowd, but a definite retrograde step for Elvis." After the end of his International engagement on September 7, Presley embarked on a week-long concert tour, largely of the South, his first since 1958. Another week-long tour, of the West Coast, followed in November. On December 21, 1970, Presley engineered a meeting with U.S. President Richard Nixon at the White House, where he explained how he believed he could reach out to the hippies to help combat the drug culture he and the president abhorred. He asked Nixon for a Bureau of Narcotics and Dangerous Drugs badge, to signify official sanction of his efforts. Nixon, who apparently found the encounter awkward, expressed a belief that Presley could send a positive message to young people and that it was, therefore, important that he "retain his credibility". Presley told Nixon that the Beatles, whose songs he regularly performed in concert during the era, exemplified what he saw as a trend of anti-Americanism. Presley and his friends previously had a four-hour get-together with the Beatles at his home in Bel Air, California, in August 1965. Paul McCartney later said that he "felt a bit betrayed. ... The great joke was that we were taking [illegal] drugs, and look what happened to him", a reference to Presley's early death linked to prescription drug abuse. The U.S. Junior Chamber of Commerce named Presley one of its annual Ten Most Outstanding Young Men of the Nation on January 16, 1971. Not long after, the City of Memphis named the stretch of Highway 51 South on which Graceland is located "Elvis Presley Boulevard". The same year, Presley became the first rock and roll singer to be awarded the Grammy Lifetime Achievement Award (then known as the Bing Crosby Award). Three new, non-film Presley studio albums were released in 1971. Best received by critics was Elvis Country, a concept record that focused on genre standards. The biggest seller was Elvis Sings The Wonderful World of Christmas. According to Greil Marcus, "In the midst of ten painfully genteel Christmas songs, every one sung with appalling sincerity and humility, one could find Elvis tom-catting his way through six blazing minutes of 'Merry Christmas Baby', a raunchy old Charles Brown blues. [...] If [Presley's] sin was his lifelessness, it was his sinfulness that brought him to life." MGM filmed Presley in April 1972 for Elvis on Tour, which went on to win that year's Golden Globe Award for Best Documentary Film. His gospel album He Touched Me, released that month, would earn him his second Grammy Award for Best Inspirational Performance.
A fourteen-date tour commenced with an unprecedented four consecutive sold-out shows at New York's Madison Square Garden. The evening concert on July 10 was issued in LP form a week later. Elvis: As Recorded at Madison Square Garden became one of Presley's biggest-selling albums. After the tour, the single "Burning Love" was released—Presley's last top ten hit on the U.S. pop chart. "The most exciting single Elvis has made since 'All Shook Up'", wrote rock critic Robert Christgau. Presley and his wife had become increasingly distant, barely cohabiting. In 1971, an affair he had with Joyce Bova resulted—unbeknownst to him—in her pregnancy and an abortion. He often raised the possibility of Joyce moving into Graceland. The Presleys separated on February 23, 1972, after Priscilla disclosed her relationship with Mike Stone, a karate instructor Presley had recommended to her. Priscilla related that when she told him, Presley forcefully made love to her, declaring, "This is how a real man makes love to his woman". She later stated in an interview that she regretted her choice of words in describing the incident, and said it had been an overstatement. Five months later, Presley's new girlfriend, Linda Thompson, a songwriter and one-time Memphis beauty queen, moved in with him. Presley and his wife filed for divorce on August 18. According to Joe Moscheo of the Imperials, the failure of Presley's marriage "was a blow from which he never recovered". At a rare press conference that June, a reporter had asked Presley whether he was satisfied with his image. Presley replied, "Well, the image is one thing and the human being another ... it's very hard to live up to an image." In January 1973, Presley performed two benefit concerts for the Kui Lee Cancer Fund in connection with a groundbreaking television special, Aloha from Hawaii, which would be the first concert by a solo artist to be aired globally. The first show served as a practice run and backup should technical problems affect the live broadcast two days later. On January 14, Aloha from Hawaii aired live via satellite to prime-time audiences in Japan, South Korea, Thailand, the Philippines, Australia, and New Zealand, as well as to U.S. servicemen based across Southeast Asia. In Japan, where it capped a nationwide Elvis Presley Week, it smashed viewing records. The next night, it was simulcast to twenty-eight European countries, and in April an extended version aired in the U.S., receiving a fifty-seven percent share of the TV audience. Over time, Parker's claim that it was seen by one billion or more people would be broadly accepted, but that figure appeared to have been sheer invention. Presley's stage costume became the most recognized example of the elaborate concert garb with which his latter-day persona became closely associated. As described by Bobbie Ann Mason, "At the end of the show, when he spreads out his American Eagle cape, with the full stretched wings of the eagle studded on the back, he becomes a god figure." The accompanying double album, released in February, went to number one and eventually sold over 5 million copies in the U.S. It was Presley's last U.S. number-one pop album during his lifetime. At a midnight show that same month, four men rushed onto the stage in an apparent attack. Security personnel came to Presley's defense, and he ejected one invader from the stage himself. Following the show, Presley became obsessed with the idea that the men had been sent by Mike Stone to kill him. 
Though they were shown to have been only overexuberant fans, Presley raged, "There's too much pain in me ... Stone [must] die." His outbursts continued with such intensity that a physician was unable to calm him, despite administering large doses of medication. After another two full days of raging, Red West, his friend and bodyguard, felt compelled to get a price for a contract killing and was relieved when Presley decided, "Aw hell, let's just leave it for now. Maybe it's a bit heavy." Presley's divorce was finalized on October 9, 1973. By then, his health was in serious decline. Twice during the year he overdosed on barbiturates, spending three days in a coma in his hotel suite after the first incident. In late 1973, he was hospitalized from the effects of a pethidine addiction. According to his primary care physician, George C. Nichopoulos, Presley "felt that by getting drugs from a doctor, he wasn't the common everyday junkie getting something off the street". Since his comeback, he had staged more live shows with each passing year, and 1973 saw 168 concerts, his busiest schedule ever. Despite his failing health, he undertook another intensive touring schedule in 1974. Presley's condition declined precipitously that September. Keyboardist Tony Brown remembered his arrival at a University of Maryland concert: "He fell out of the limousine, to his knees. People jumped to help, and he pushed them away like, 'Don't help me.' He walked on stage and held onto the mic for the first thirty minutes like it was a post. Everybody's looking at each other like, 'Is the tour gonna happen?'" Guitarist John Wilkinson recalled, "He was all gut. He was slurring. He was so fucked up. ... It was obvious he was drugged. It was obvious there was something terribly wrong with his body. It was so bad the words to the songs were barely intelligible. ... I remember crying. He could barely get through the introductions." RCA began to grow anxious as his interest in the recording studio waned. After a session in December 1973 that produced eighteen songs, enough for almost two albums, Presley made no official studio recordings in 1974. Parker delivered RCA another concert record, Elvis Recorded Live on Stage in Memphis. Recorded on March 20, it included a version of "How Great Thou Art" that won Presley his third and final Grammy Award for Best Inspirational Performance. All three of his competitive Grammy wins – out of fourteen total nominations – were for gospel recordings. Presley returned to the recording studio in March 1975, but Parker's attempts to arrange another session toward the end of the year were unsuccessful. In 1976, RCA sent a mobile recording unit to Graceland that made possible two full-scale recording sessions. However, the recording process had become a struggle for him. Journalist Tony Scherman wrote that, by early 1977, "Presley had become a grotesque caricature of his sleek, energetic former self. Grossly overweight, his mind dulled by the pharmacopoeia he daily ingested, he was barely able to pull himself through his abbreviated concerts." According to Andy Greene of Rolling Stone, Presley's final performances were mostly "sad, sloppy affairs where a bloated, drugged Presley struggled to remember his lyrics and get through the night without collapsing ... Most everything from the final three years of his life is sad and hard to watch." In Alexandria, Louisiana, he was on stage for less than an hour and "was impossible to understand".
On March 31, he canceled a performance in Baton Rouge, unable to get out of his hotel bed; four shows had to be canceled and rescheduled. Despite the accelerating deterioration of his health, Presley fulfilled most of his touring commitments. According to Guralnick, fans "were becoming increasingly voluble about their disappointment, but it all seemed to go right past Presley, whose world was now confined almost entirely to his room and his spiritualism books". Presley's cousin, Billy Smith, recalled how he would sit in his room and chat for hours, sometimes recounting favorite Monty Python sketches and his past escapades, but more often gripped by paranoid obsessions. "Way Down", Presley's last single issued during his lifetime, was released on June 6, 1977. That month, CBS taped two concerts for a television special, Elvis in Concert, to be broadcast in October. In the first, shot in Omaha on June 19, Presley's voice, Guralnick writes, "is almost unrecognizable, a small, childlike instrument in which he talks more than sings most of the songs, casts about uncertainly for the melody in others, and is virtually unable to articulate or project". Two days later, in Rapid City, South Dakota, "he looked healthier, seemed to have lost a little weight, and sounded better, too", though, by the conclusion of the performance, his face was "framed in a helmet of blue-black hair from which sweat sheets down over pale, swollen cheeks". Presley's final concert was held in Indianapolis at Market Square Arena, on June 26, 1977. On August 16, 1977, Presley was scheduled on an evening flight out of Memphis to Portland, Maine, to begin another tour. That afternoon, however, his fiancée Ginger Alden discovered him unresponsive on the bathroom floor of his Graceland mansion. Biographer Joel Williamson suggests that a reaction to the codeine he had taken, combined with his attempts to move his bowels, meant that "he experienced pain and fright while sitting on the toilet. Alarmed, he stood up ... and fell face down in the fetal position." Drooling on the rug and "unable to breathe, he died." Attempts to revive him failed, and he was pronounced dead at Baptist Memorial Hospital at 3:30 p.m. He was 42. President Jimmy Carter issued a statement that credited Presley with having "permanently changed the face of American popular culture". Thousands of people gathered outside Graceland to view the open casket. One of Presley's cousins, Billy Mann, accepted US$18,000 (equivalent to $87,000 in 2022) to secretly photograph the body; the picture appeared on the cover of the National Enquirer's biggest-selling issue ever. Alden struck a $105,000 (equivalent to $507,000 in 2022) deal with the Enquirer for her story, but settled for less when she broke her exclusivity agreement. Presley left her nothing in his will. Presley's funeral was held at Graceland on August 18. Outside the gates, a car plowed into a group of fans, killing two young women and critically injuring a third. About 80,000 people lined the processional route to Forest Hill Cemetery, where Presley was buried next to his mother. Within a few weeks, "Way Down" topped the country and UK singles charts. Following an attempt to steal Presley's body in late August, the remains of both Presley and his mother were exhumed and reburied in Graceland's Meditation Garden on October 2.
While an autopsy, undertaken the same day Presley died, was still in progress, Memphis medical examiner Jerry Francisco announced that the immediate cause of death was cardiac arrest and declared that "drugs played no role in Presley's death". In fact, "drug use was heavily implicated" in Presley's death, writes Guralnick. The pathologists conducting the autopsy thought it possible, for instance, that he had suffered "anaphylactic shock brought on by the codeine pills he had gotten from his dentist, to which he was known to have had a mild allergy". Lab reports filed two months later strongly suggested that polypharmacy was the primary cause of death; one reported "fourteen drugs in Elvis' system, ten in significant quantity". In 1979, forensic pathologist Cyril Wecht reviewed the reports and concluded that a combination of depressants had resulted in Presley's accidental death. Forensic historian and pathologist Michael Baden viewed the situation as complicated: "Elvis had had an enlarged heart for a long time. That, together with his drug habit, caused his death. But he was difficult to diagnose; it was a judgment call." The competence and ethics of two of the centrally involved medical professionals were seriously questioned. Francisco had offered a cause of death before the autopsy was complete; claimed the underlying ailment was cardiac arrhythmia, a condition that can be determined only in a living person; and denied drugs played any part in Presley's death before the toxicology results were known. Allegations of a cover-up were widespread. While a 1981 trial of Presley's main physician, George C. Nichopoulos, exonerated him of criminal liability, the facts were startling: "In the first eight months of 1977 alone, he had [prescribed] more than 10,000 doses of sedatives, amphetamines, and narcotics: all in Elvis' name." Nichopoulos' license was suspended for three months. It was permanently revoked in the 1990s after the Tennessee Medical Board brought new charges of over-prescription. In 1994, the Presley autopsy report was reopened. Joseph Davis, who had conducted thousands of autopsies as Miami-Dade County coroner, declared at its completion, "There is nothing in any of the data that supports a death from drugs. In fact, everything points to a sudden, violent heart attack." More recent research has revealed that Francisco did not speak for the entire pathology team. Other staff "could say nothing with confidence until they got the results back from the laboratories, if then." One of the examiners, E. Eric Muirhead, could not believe his ears. Francisco had not only presumed to speak for the hospital's team of pathologists, he had announced a conclusion that they had not reached. ... Early on, a meticulous dissection of the body ... confirmed [that] Elvis was chronically ill with diabetes, glaucoma, and constipation. As they proceeded, the doctors saw evidence that his body had been wracked over a span of years by a large and constant stream of drugs. They had also studied his hospital records, which included two admissions for drug detoxification and methadone treatments. According to biographer Frank Coffey, "other plausible causes" include "the Valsalva maneuver (essentially straining on the toilet leading to heart stoppage—plausible because Elvis suffered constipation, a common reaction to drug use)". Dr. Warlick, who attended the autopsy, agreed. Between 1977 and 1981, six of Presley's posthumously released singles were top-ten country hits.
Graceland was opened to the public in 1982. Attracting over half a million visitors annually, it became the second-most-visited home in the United States, after the White House. The residence was declared a National Historic Landmark in 2006. Presley has been inducted into five music halls of fame: the Rock and Roll Hall of Fame (1986), the Country Music Hall of Fame (1998), the Gospel Music Hall of Fame (2001), the Rockabilly Hall of Fame (2007), and the Memphis Music Hall of Fame (2012). In 1984, he received the W. C. Handy Award from the Blues Foundation and the Academy of Country Music's first Golden Hat Award. In 1987, he received the American Music Awards' Award of Merit. A Junkie XL remix of Presley's "A Little Less Conversation" (credited as "Elvis Vs JXL") was used in a Nike advertising campaign during the 2002 FIFA World Cup. It topped the charts in over twenty countries and was included in a compilation of Presley's number-one hits, ELV1S, which was also an international success. The album returned Presley to the top of the Billboard chart for the first time in almost three decades. In 2003, a remix of "Rubberneckin'", a 1969 recording, topped the U.S. sales chart, as did a 50th-anniversary re-release of "That's All Right" the following year. The latter was an outright hit in Britain, debuting at number three on the pop chart; it also made the top ten in Canada. In 2005, another three reissued singles, "Jailhouse Rock", "One Night"/"I Got Stung", and "It's Now or Never", went to number one in the UK. They were part of a campaign that saw the re-release of all eighteen of Presley's previous chart-topping UK singles. The first, "All Shook Up", came with a collectors' box that made it ineligible to chart again; each of the other seventeen reissues hit the British top five. In 2005, Forbes magazine named Presley the top-earning deceased celebrity for the fifth straight year, with a gross income of $45 million. He was placed second in 2006, returned to the top spot the next two years, and ranked fourth in 2009. The following year, he was ranked second, with his highest annual income ever—$60 million—spurred by the celebration of his 75th birthday and the launch of Cirque du Soleil's Viva Elvis show in Las Vegas. In November 2010, Viva Elvis: The Album was released, setting his voice to newly recorded instrumental tracks. As of mid-2011, there were an estimated 15,000 licensed Presley products, and he was again the second-highest-earning deceased celebrity. Six years later, he ranked fourth with earnings of $35 million, up $8 million from 2016 due in part to the opening of a new entertainment complex, Elvis Presley's Memphis, and a hotel, The Guest House at Graceland. In 2018, RCA/Legacy released Elvis Presley – Where No One Stands Alone, a new album focused on Elvis' love of gospel music. Produced by Joel Weinshanker, Lisa Marie Presley and Andy Childs, the album introduced newly recorded instrumentation along with vocals from singers who had performed in the past with Elvis. It included a reimagined duet with Lisa Marie on the album's title track. In 2022, Baz Luhrmann's biographical film Elvis was released. Presley is portrayed by Austin Butler and Parker by Tom Hanks. As of August 2022, the film had grossed $261.8 million worldwide on an $85 million budget, becoming the second-highest-grossing music biopic of all time behind Bohemian Rhapsody (2018), and the fifth-highest-grossing Australian-produced film.
For his portrayal of Elvis, Butler won the Golden Globe and was nominated for the Oscar for Best Actor. In January 2023, Presley's 1962 Lockheed 1329 JetStar sold at auction for $260,000. Presley's earliest musical influence came from gospel. His mother recalled that from the age of two, at the Assembly of God church in Tupelo attended by the family, "he would slide down off my lap, run into the aisle and scramble up to the platform. There he would stand looking at the choir and trying to sing with them." In Memphis, Presley frequently attended all-night gospel singings at the Ellis Auditorium, where groups such as the Statesmen Quartet led the music in a style that, Guralnick suggests, sowed the seeds of Presley's future stage act: "The Statesmen were an electric combination ... featuring some of the most thrillingly emotive singing and daringly unconventional showmanship in the entertainment world ... dressed in suits that might have come out of the window of Lansky's. ... Bass singer Jim Wetherington, known universally as the Big Chief, maintained a steady bottom, ceaselessly jiggling first his left leg, then his right, with the material of the pants leg ballooning out and shimmering. 'He went about as far as you could go in gospel music,' said Jake Hess. 'The women would jump up, just like they do for the pop shows.' Preachers frequently objected to the lewd movements ... but audiences reacted with screams and swoons." As a teenager, Presley had wide-ranging musical interests, and he was deeply informed about both white and African-American musical idioms. Though he never had any formal training, he had a remarkable memory, and his musical knowledge was already considerable by the time he made his first professional recordings aged 19 in 1954. When Jerry Leiber and Mike Stoller met him two years later, they were astonished at his encyclopedic understanding of the blues, and, as Stoller put it, "He certainly knew a lot more than we did about country music and gospel music." At a press conference the following year, he proudly declared, "I know practically every religious song that's ever been written." Presley played guitar, bass, and piano; he received his first guitar when he was 11 years old. He could not read or write music and had no formal lessons, and played everything by ear. Presley often played an instrument on his recordings and produced his own music. He played rhythm acoustic guitar on most of his Sun recordings and his 1950s RCA albums, and piano on songs such as "Old Shep" and "First in Line" from his 1956 album Elvis. He is credited with playing piano on later albums such as From Elvis in Memphis and Moody Blue, and on "Unchained Melody", which was one of the last songs that he recorded. Presley played lead guitar on his hit single "Are You Lonesome Tonight?". In the '68 Comeback Special, Presley took over on lead electric guitar, the first time he had ever been seen with the instrument in public, playing it on songs such as "Baby What You Want Me to Do" and "One Night". The album Elvis Is Back! features Presley playing a good deal of acoustic guitar on songs such as "I Will Be Home Again" and "Like a Baby". Presley was a central figure in the development of rockabilly, according to music historians. "Rockabilly crystallized into a recognizable style in 1954 with Elvis Presley's first release, on the Sun label," writes Craig Morrison. Paul Friedlander described rockabilly as "essentially ...
an Elvis Presley construction", with the defining elements as "the raw, emotive, and slurred vocal style and emphasis on rhythmic feeling [of] the blues with the string band and strummed rhythm guitar [of] country". In "That's All Right", the Presley trio's first record, Scotty Moore's guitar solo, "a combination of Merle Travis–style country finger-picking, double-stop slides from acoustic boogie, and blues-based bent-note, single-string work, is a microcosm of this fusion". While Katherine Charlton calls Presley "rockabilly's originator", Carl Perkins, another pioneer of rock and roll, said that "[Sam] Phillips, Elvis, and I didn't create rockabilly". According to Michael Campbell, the first major rockabilly song was recorded by Bill Haley. In Moore's view, "It had been there for quite a while, really. Carl Perkins was doing basically the same sort of thing up around Jackson, and I know for a fact Jerry Lee Lewis had been playing that kind of music ever since he was ten years old." At RCA Victor, Presley's rock and roll sound grew distinct from rockabilly with group chorus vocals, more heavily amplified electric guitars and a tougher, more intense manner. While he was known for taking songs from various sources and giving them a rockabilly/rock and roll treatment, he also recorded songs in other genres from early in his career, from the pop standard "Blue Moon" at Sun Records to the country ballad "How's the World Treating You?" on his second RCA Victor LP to the blues of "Santa Claus Is Back in Town". In 1957, his first gospel record was released, the four-song EP Peace in the Valley. Certified as a million-seller, it became the top-selling gospel EP in recording history. Presley would record gospel periodically for the rest of his life. After his return from military service in 1960, Presley continued to perform rock and roll, but the characteristic style was substantially toned down. His first post-Army single, the number-one hit "Stuck on You", is typical of this shift. RCA Victor publicity referred to its "mild rock beat"; discographer Ernst Jorgensen calls it "upbeat pop". The number-five hit "She's Not You" (1962) "integrates the Jordanaires so completely, it's practically doo-wop". The modern blues/R&B sound captured with success on Elvis Is Back! was essentially abandoned for six years until such 1966–67 recordings as "Down in the Alley" and "Hi-Heel Sneakers". Presley's output during most of the 1960s emphasized pop music, often in the form of ballads such as "Are You Lonesome Tonight?", a number-one hit in 1960. "It's Now or Never", which also topped the chart that year, was a classically influenced variation of pop based on the Neapolitan song "'O sole mio" and concluding with a "full-voiced operatic cadence". These were both dramatic numbers, but most of what Presley recorded for his many film soundtracks was in a much lighter vein. While Presley performed several of his classic ballads for the '68 Comeback Special, the sound of the show was dominated by aggressive rock and roll. He recorded few new straight rock and roll songs thereafter; as he explained, they had become "hard to find". A significant exception was "Burning Love", his last major hit on the pop charts. Like his work of the 1950s, Presley's subsequent recordings reworked pop and country songs, but in markedly different permutations. His stylistic range now began to embrace a more contemporary rock sound as well as soul and funk.
Much of From Elvis in Memphis, as well as "Suspicious Minds", cut at the same sessions, reflected this new rock and soul fusion. In the mid-1970s, many of his singles found a home on country radio, the field where he first became a star. The developmental arc of Presley's singing voice, as described by critic Dave Marsh, goes from "high and thrilled in the early days, [to] lower and perplexed in the final months." Marsh credits Presley with the introduction of the "vocal stutter" on 1955's "Baby Let's Play House". When on "Don't Be Cruel", Presley "slides into a 'mmmmm' that marks the transition between the first two verses," he shows "how masterful his relaxed style really is." Marsh describes the vocal performance on "Can't Help Falling in Love" as one of "gentle insistence and delicacy of phrasing", with the line "'Shall I stay' pronounced as if the words are fragile as crystal". Jorgensen calls the 1966 recording of "How Great Thou Art" "an extraordinary fulfillment of his vocal ambitions", as Presley "crafted for himself an ad-hoc arrangement in which he took every part of the four-part vocal, from [the] bass intro to the soaring heights of the song's operatic climax", becoming "a kind of one-man quartet". Guralnick finds "Stand by Me" from the same gospel sessions "a beautifully articulated, almost nakedly yearning performance", but, by contrast, feels that Presley reaches beyond his powers on "Where No One Stands Alone", resorting "to a kind of inelegant bellowing to push out a sound" that Jake Hess of the Statesmen Quartet had in his command. Hess himself thought that while others might have voices the equal of Presley's, "he had that certain something that everyone searches for all during their lifetime." Guralnick attempts to pinpoint that something: "The warmth of his voice, his controlled use of both vibrato technique and natural falsetto range, the subtlety and deeply felt conviction of his singing were all qualities recognizably belonging to his talent but just as recognizably not to be achieved without sustained dedication and effort." Marsh praises his 1968 reading of "U.S. Male", "bearing down on the hard guy lyrics, not sending them up or overplaying them but tossing them around with that astonishingly tough yet gentle assurance that he brought to his Sun records." The performance on "In the Ghetto" is, according to Jorgensen, "devoid of any of his characteristic vocal tricks or mannerisms", instead relying on the exceptional "clarity and sensitivity of his voice". Guralnick describes the song's delivery as of "almost translucent eloquence ... so quietly confident in its simplicity". On "Suspicious Minds", Guralnick hears essentially the same "remarkable mixture of tenderness and poise", but supplemented with "an expressive quality somewhere between stoicism (at suspected infidelity) and anguish (over impending loss)". Music critic Henry Pleasants observes that "Presley has been described variously as a baritone and a tenor. An extraordinary compass ... and a very wide range of vocal color have something to do with this divergence of opinion." He identifies Presley as a high baritone, calculating his range as two octaves and a third, "from the baritone low G to the tenor high B, with an upward extension in falsetto to at least a D-flat. Presley's best octave is in the middle, D-flat to D-flat, granting an extra full step up or down."
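Pleasants' interval arithmetic can be made explicit. The following is a minimal sketch, assuming the conventional scientific-pitch placement of the "baritone low G" at G2 and the "tenor high B" at B4; these placements are not specified in the quotation itself.

\[
\mathrm{G_2} \to \mathrm{B_4} \;=\; 28 \text{ semitones} \;=\; \underbrace{24 \text{ semitones}}_{\text{two octaves}} \;+\; \underbrace{4 \text{ semitones}}_{\text{major third, } \mathrm{G} \to \mathrm{B}}
\]

\[
\text{with the falsetto extension:}\quad \mathrm{B_4} \to \mathrm{D\flat_5} = 2 \text{ semitones}, \qquad \mathrm{G_2} \to \mathrm{D\flat_5} = 30 \text{ semitones} = 2\tfrac{1}{2} \text{ octaves}
\]

Under these assumptions, the basic span checks out as "two octaves and a third", with the falsetto notes extending it to roughly two and a half octaves.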
In Pleasants' view, his voice was "variable and unpredictable" at the bottom, "often brilliant" at the top, with the capacity for "full-voiced high Gs and As that an opera baritone might envy". Scholar Lindsay Waters, who figures Presley's range as two-and-a-quarter octaves, emphasizes that "his voice had an emotional range from tender whispers to sighs down to shouts, grunts, grumbles, and sheer gruffness that could move the listener from calmness and surrender, to fear. His voice can not be measured in octaves, but in decibels; even that misses the problem of how to measure delicate whispers that are hardly audible at all." Presley was always "able to duplicate the open, hoarse, ecstatic, screaming, shouting, wailing, reckless sound of the black rhythm-and-blues and gospel singers", writes Pleasants, and also demonstrated a remarkable ability to assimilate many other vocal styles. When Dewey Phillips first aired "That's All Right" on Memphis' WHBQ, many listeners who contacted the station to ask for it again assumed that its singer was black. From the beginning of his national fame, Presley expressed respect for African-American performers and their music, and disregard for the segregation and racial prejudice then prevalent in the South. Interviewed in 1956, he recalled how in his childhood he would listen to blues musician Arthur Crudup—the originator of "That's All Right"—"bang his box the way I do now, and I said if I ever got to the place where I could feel all old Arthur felt, I'd be a music man like nobody ever saw." The Memphis World, an African-American newspaper, reported that Presley "cracked Memphis' segregation laws" by attending the local amusement park on what was designated as its "colored night". Such statements and actions led Presley to be generally hailed in the black community during his early stardom. In contrast, many white adults "did not like him, and condemned him as depraved. Anti-negro prejudice doubtless figured in adult antagonism. Regardless of whether parents were aware of the Negro sexual origins of the phrase 'rock 'n' roll', Presley impressed them as the visual and aural embodiment of sex." Despite the largely positive view of Presley held by African Americans, a rumor spread in mid-1957 that he had announced, "The only thing Negroes can do for me is buy my records and shine my shoes." A journalist with the national African American weekly Jet, Louie Robinson, pursued the story. On the set of Jailhouse Rock, Presley granted Robinson an interview, though he was no longer dealing with the mainstream press. He denied making such a statement: "I never said anything like that, and people who know me know that I wouldn't have said it. ... A lot of people seem to think I started this business. But rock 'n' roll was here a long time before I came along. Nobody can sing that kind of music like colored people. Let's face it: I can't sing like Fats Domino can. I know that." Robinson found no evidence that the remark had ever been made, and elicited testimony from many individuals indicating that Presley was anything but racist. Blues singer Ivory Joe Hunter, who had heard the rumor before he visited Graceland, reported of Presley, "He showed me every courtesy, and I think he's one of the greatest." Though the rumored remark was discredited, it was still being used against Presley decades later.
The persistence of such attitudes was fueled by resentment over the fact that Presley, whose musical and visual performance idiom owed much to African-American sources, achieved the cultural acknowledgement and commercial success largely denied his black peers. Into the 21st century, the notion that Presley had "stolen" black music still found adherents. Notable among African-American entertainers expressly rejecting this view was Jackie Wilson, who argued, "A lot of people have accused Elvis of stealing the black man's music, when in fact, almost every black solo entertainer copied his stage mannerisms from Elvis." Moreover, Presley acknowledged his debt to African-American musicians throughout his career. Addressing his '68 Comeback Special audience, he said, "Rock 'n' roll music is basically gospel or rhythm and blues, or it sprang from that. People have been adding to it, adding instruments to it, experimenting with it, but it all boils down to [that]." Nine years earlier, he had said, "Rock 'n' roll has been around for many years. It used to be called rhythm and blues." Presley's physical attractiveness and sexual appeal were widely acknowledged. "He was once beautiful, astonishingly beautiful", according to critic Mark Feeney. Television director Steve Binder reported, "I'm straight as an arrow and I got to tell you, you stop, whether you're male or female, to look at him. He was that good looking. And if you never knew he was a superstar, it wouldn't make any difference; if he'd walked in the room, you'd know somebody special was in your presence." His performance style was equally responsible for Presley's eroticized image. Critic George Melly described him as "the master of the sexual simile, treating his guitar as both phallus and girl". In his Presley obituary, Lester Bangs credited him with bringing "overt blatant vulgar sexual frenzy to the popular arts in America". Ed Sullivan's declaration that he perceived a soda bottle in Presley's trousers was echoed by rumors involving a similarly positioned toilet roll tube or lead bar. While Presley was marketed as an icon of heterosexuality, some critics have argued that his image was ambiguous. In 1959, Sight and Sound's Peter John Dyer described his onscreen persona as "aggressively bisexual in appeal". Brett Farmer places the "orgasmic gyrations" of the title dance sequence in Jailhouse Rock within a lineage of cinematic musical numbers that offer a "spectacular eroticization, if not homoeroticization, of the male image". In the analysis of Yvonne Tasker, "Elvis was an ambivalent figure who articulated a peculiar feminised, objectifying version of white working-class masculinity as aggressive sexual display." Reinforcing Presley's image as a sex symbol were the reports of his dalliances with Hollywood stars and starlets, from Natalie Wood in the 1950s to Connie Stevens and Ann-Margret in the 1960s to Candice Bergen and Cybill Shepherd in the 1970s. June Juanico of Memphis, one of Presley's early girlfriends, later blamed Parker for encouraging him to choose his dating partners with publicity in mind. Presley never grew comfortable with the Hollywood scene, and most of these relationships were insubstantial. Critic Robert Christgau wrote on December 24, 1985: "I know he invented rock and roll, in a manner of speaking, but ... that's not why he's worshiped as a god today.
He's worshiped as a god today because in addition to inventing rock and roll he was the greatest ballad singer this side of Frank Sinatra—because the spiritual translucence and reined-in gut sexuality of his slow weeper and torchy pop blues still activate the hormones and slavish devotion of millions of female human beings worldwide." Presley's rise to national attention in 1956 transformed the field of popular music and had a huge effect on the broader scope of popular culture. As the catalyst for the cultural revolution that was rock and roll, he was central not only to defining it as a musical genre but in making it a touchstone of youth culture and rebellious attitude. With its racially mixed origins—repeatedly affirmed by Presley—rock and roll's occupation of a central position in mainstream American culture facilitated a new acceptance and appreciation of black culture. In this regard, Little Richard said of Presley, "He was an integrator. Elvis was a blessing. They wouldn't let black music through. He opened the door for black music." Al Green agreed: "He broke the ice for all of us." President Jimmy Carter remarked on Presley's legacy in 1977: "His music and his personality, fusing the styles of white country and black rhythm and blues, permanently changed the face of American popular culture." Presley also heralded the vastly expanded reach of celebrity in the era of mass communication: within a year of his first appearance on American network television, he was regarded as one of the most famous people in the world. Presley's name, image, and voice are recognized around the world. He has inspired a legion of impersonators. In polls and surveys, he is ranked as one of the most important popular music artists and influential Americans. American composer and conductor Leonard Bernstein said, "Elvis Presley is the greatest cultural force in the twentieth century. He introduced the beat to everything and he changed everything—music, language, clothes." John Lennon said, "Nothing really affected me until Elvis." Bob Dylan described the sensation of first hearing Presley as "like busting out of jail". For much of his adult life, Presley, with his rise from poverty to riches and fame, had seemed to epitomize the American Dream. In his final years, and following the revelations about his circumstances after his death, he became a symbol of excess and gluttony. Increasing attention was paid to his appetite for the rich, heavy Southern cooking of his upbringing, foods such as chicken-fried steak and biscuits and gravy. In particular, his love of fried peanut butter, banana, and (sometimes) bacon sandwiches, now known as "Elvis sandwiches", came to symbolize this characteristic. Since 1977, there have been numerous alleged sightings of Presley. A long-standing conspiracy theory among some fans is that he faked his death. Adherents cite alleged discrepancies in the death certificate, reports of a wax dummy in his original coffin, and accounts of Presley planning a diversion so he could retire in peace. An unusually large number of fans have domestic shrines devoted to Presley and journey to sites with which he is connected, however faintly. On the anniversary of his death, thousands of people gather outside Graceland for a candlelight ritual. "With Elvis, it is not just his music that has survived death", writes Ted Harrison. "He himself has been raised, like a medieval saint, to a figure of cultic status.
It is as if he has been canonized by acclamation." On the 25th anniversary of Presley's death, The New York Times asserted: "All the talentless impersonators and appalling black velvet paintings on display can make him seem little more than a perverse and distant memory. But before Elvis was camp, he was its opposite: a genuine cultural force. ... Elvis' breakthroughs are underappreciated because in this rock-and-roll age, his hard-rocking music and sultry style have triumphed so completely."

Not only Presley's achievements but also his failings are seen by some cultural observers as adding to the power of his legacy, as in this description by Greil Marcus: "Elvis Presley is a supreme figure in American life, one whose presence, no matter how banal or predictable, brooks no real comparisons. ... The cultural range of his music has expanded to the point where it includes not only the hits of the day, but also patriotic recitals, pure country gospel, and really dirty blues. ... Elvis has emerged as a great artist, a great rocker, a great purveyor of schlock, a great heart throb, a great bore, a great symbol of potency, a great ham, a great nice person, and, yes, a great American."

Having sold about 500 million records worldwide, Presley is one of the best-selling music artists of all time. According to chart statistician Joel Whitburn, Presley holds the records for most songs charting in Billboard's top 40 (115) and top 100 (152); Presley historian Adam Victor puts the top-40 figure at 139. Presley's rankings for top ten and number-one hits vary depending on how the double-sided "Hound Dog/Don't Be Cruel" and "Don't/I Beg of You" singles, which precede the inception of Billboard's unified Hot 100 chart, are analyzed. According to Whitburn's analysis, Presley holds the record for most top-ten hits with 38, tied with Madonna; per Billboard's current assessment, he ranks second with 36. Whitburn and Billboard concur that the Beatles hold the record for most number-one hits with 20, and that Mariah Carey is second with 19. Whitburn has Presley with 18; Billboard has him third with 17. Billboard credits Presley with 79 cumulative weeks at number one; Whitburn and the Rock and Roll Hall of Fame count 80. Only Mariah Carey has spent more weeks at number one, with 91. On the UK chart, Presley holds the records for most number-one singles (21) and most top-ten singles (76).

As an album artist, Presley is credited by Billboard with the record for the most albums charting in the Billboard 200: 129, far ahead of second-place Frank Sinatra's 82. He also holds the record for most time spent at number one on the Billboard 200: 67 weeks. In 2015 and 2016, two albums setting Presley's vocals against music by the Royal Philharmonic Orchestra, If I Can Dream and The Wonder of You, reached number one in the UK. These gave him a new record for number-one UK albums by a solo artist (13) and extended his record for the longest span between number-one albums by any artist; Presley had first topped the British chart in 1956 with his self-titled debut.

As of 2023, the Recording Industry Association of America (RIAA) credits Presley with 146.5 million certified album sales in the US, third all time behind the Beatles and Garth Brooks. He holds the records for most gold albums (101, nearly double second-place Barbra Streisand's 51) and most platinum albums (57). His total of 25 multi-platinum albums is second behind the Beatles' 26.
His total of 197 album certification awards (including one diamond award) far outpaces the Beatles' second-best 122. He has the 9th-most gold singles (54, tied with Justin Bieber) and the 16th-most platinum singles (27).

In 2012, the spider Paradonea presleyi was named in his honor. In 2018, President Donald Trump awarded Presley the Presidential Medal of Freedom posthumously.

A vast number of recordings have been issued under Presley's name. The number of his original master recordings has been variously calculated as 665 and 711. His career began, and he was most successful, during an era when singles were the primary commercial medium for pop music. For his albums, the distinction between "official" studio records and other forms is often blurred.

The correct spelling of his middle name has long been a matter of debate. The physician who delivered him wrote "Elvis Aaron Presley" in his ledger. The state-issued birth certificate reads "Elvis Aron Presley". The name was chosen after the Presleys' friend and fellow congregation member Aaron Kennedy, though Presley's parents probably intended the single-A spelling to parallel the middle name of Presley's stillborn twin brother, Jesse Garon. It reads Aron on most official documents produced during his lifetime, including his high school diploma, RCA Victor record contract, and marriage license, and this was generally taken to be the proper spelling. In 1966, Presley expressed the desire to his father that the more traditional biblical rendering, Aaron, be used henceforth, "especially on legal documents". Five years later, the Jaycees citation honoring him as one of the country's Outstanding Young Men used Aaron. Late in his life, he sought to officially change the spelling to Aaron and discovered that state records already listed it that way. Knowing his son's wishes, Presley's father chose Aaron for the tombstone, and it is the spelling his estate has designated as official.
[ { "paragraph_id": 0, "text": "Elvis Aaron Presley (January 8, 1935 – August 16, 1977), also known mononymously as Elvis, was an American singer and actor. Known as the \"King of Rock and Roll\", he is regarded as one of the most significant cultural figures of the 20th century. Presley's energized interpretations of songs and sexually provocative performance style, combined with a singularly potent mix of influences across color lines during a transformative era in race relations, brought both great success and initial controversy.", "title": "" }, { "paragraph_id": 1, "text": "Presley was born in Tupelo, Mississippi; his family relocated to Memphis, Tennessee, when he was 13. His music career began there in 1954, at Sun Records with producer Sam Phillips, who wanted to bring the sound of African-American music to a wider audience. Presley, on guitar and accompanied by lead guitarist Scotty Moore and bassist Bill Black, was a pioneer of rockabilly, an uptempo, backbeat-driven fusion of country music and rhythm and blues. In 1955, drummer D. J. Fontana joined to complete the lineup of Presley's classic quartet and RCA Victor acquired his contract in a deal arranged by Colonel Tom Parker, who would manage him for more than two decades. Presley's first RCA single, \"Heartbreak Hotel\", was released in January 1956 and became a number-one hit in the United States. Within a year, RCA would sell ten million Presley singles. With a series of successful television appearances and chart-topping records, Presley became the leading figure of the newly popular rock and roll; though his performative style and promotion of the then-marginalized sound of African Americans led to him being widely considered a threat to the moral well-being of white American youth.", "title": "" }, { "paragraph_id": 2, "text": "In November 1956, Presley made his film debut in Love Me Tender. Drafted into military service in 1958, he relaunched his recording career two years later with some of his most commercially successful work. Presley held few concerts, however, and guided by Parker, proceeded to devote much of the 1960s to making Hollywood films and soundtrack albums, most of them critically derided. Some of his most famous films included Jailhouse Rock (1957), Blue Hawaii (1961), and Viva Las Vegas (1964). In 1968, following a seven-year break from live performances, he returned to the stage in the acclaimed television comeback special Elvis, which led to an extended Las Vegas concert residency and a string of highly profitable tours. In 1973, Presley gave the first concert by a solo artist to be broadcast around the world, Aloha from Hawaii. However, years of prescription drug abuse and unhealthy eating habits severely compromised his health, and Presley died suddenly in 1977 at his Graceland estate at the age of 42.", "title": "" }, { "paragraph_id": 3, "text": "Having sold roughly 500 million records worldwide, Presley is one of the best-selling music artists of all time. He was commercially successful in many genres, including pop, country, rhythm & blues, adult contemporary, and gospel. He won three Grammy Awards, received the Grammy Lifetime Achievement Award at age 36, and has been inducted into multiple music halls of fame. He also holds several records, including the most RIAA-certified gold and platinum albums, the most albums charted on the Billboard 200, the most number-one albums by a solo artist on the UK Albums Chart, and the most number-one singles by any act on the UK Singles Chart. 
In 2018, Presley was posthumously awarded the Presidential Medal of Freedom.", "title": "" }, { "paragraph_id": 4, "text": "Elvis Aaron Presley was born on January 8, 1935, in Tupelo, Mississippi, to Vernon Presley and Gladys Love (née Smith) Presley. Elvis' twin Jesse Garon was delivered stillborn. Presley became close to both parents, especially his mother. The family attended an Assembly of God church, where he found his initial musical inspiration. Vernon moved from one odd job to the next, and the family often relied on neighbors and government food assistance. In 1938 they lost their home after Vernon was found guilty of altering a check and jailed for eight months. In September 1941, Presley entered first grade at East Tupelo Consolidated, where his teachers regarded him as \"average\". His first public performance was a singing contest at the Mississippi–Alabama Fair and Dairy Show on October 3, 1945, when he was 10; he sang \"Old Shep\" and recalled placing fifth. A few months later, Presley received his first guitar for his birthday; he received guitar lessons from two uncles and a pastor at the family's church. Presley recalled, \"I took the guitar, and I watched people, and I learned to play a little bit. But I would never sing in public. I was very shy about it.\"", "title": "Life and career" }, { "paragraph_id": 5, "text": "In September 1946, Presley entered a new school, Milam, for sixth grade. The following year, he began singing and playing his guitar at school. He was often teased as a \"trashy\" kid who played hillbilly music. Presley was a devotee of Mississippi Slim's radio show. He was described as \"crazy about music\" by Slim's younger brother, one of Presley's classmates. Slim showed Presley chord techniques. When his protégé was 12, Slim scheduled him for two on-air performances. Presley was overcome by stage fright the first time but performed the following week.", "title": "Life and career" }, { "paragraph_id": 6, "text": "In November 1948, the family moved to Memphis, Tennessee. Enrolled at L. C. Humes High School, Presley received a C in music in eighth grade. When his music teacher said he had no aptitude for singing, he brought in his guitar and sang a recent hit, \"Keep Them Cold Icy Fingers Off Me\". He was usually too shy to perform openly and was occasionally bullied by classmates for being a \"mama's boy\". In 1950, Presley began practicing guitar under the tutelage of Lee Denson, a neighbor. They and three other boys—including two future rockabilly pioneers, brothers Dorsey and Johnny Burnette—formed a loose musical collective.", "title": "Life and career" }, { "paragraph_id": 7, "text": "During his junior year, Presley began to stand out among his classmates, largely because of his appearance: he grew his sideburns and styled his hair. He would head down to Beale Street, the heart of Memphis' thriving blues scene, and admire the wild, flashy clothes at Lansky Brothers. By his senior year, he was wearing those clothes. He competed in Humes' Annual \"Minstrel\" Show in 1953, singing and playing \"Till I Waltz Again with You\", a recent hit for Teresa Brewer. Presley recalled that the performance did much for his reputation:", "title": "Life and career" }, { "paragraph_id": 8, "text": "I wasn't popular in school ... I failed music—only thing I ever failed. And then they entered me in this talent show ... when I came onstage, I heard people kind of rumbling and whispering and so forth, 'cause nobody knew I even sang. 
It was amazing how popular I became in school after that.", "title": "Life and career" }, { "paragraph_id": 9, "text": "Presley, who could not read music, played by ear and frequented record stores that provided jukeboxes and listening booths. He knew all of Hank Snow's songs, and he loved records by other country singers such as Roy Acuff, Ernest Tubb, Ted Daffan, Jimmie Rodgers, Jimmie Davis, and Bob Wills. The Southern gospel singer Jake Hess, one of his favorite performers, was a significant influence on his ballad-singing style. Presley was a regular audience member at the monthly All-Night Singings downtown, where many of the white gospel groups that performed reflected the influence of African American spirituals. Presley listened to regional radio stations, such as WDIA, that played what were then called \"race records\": spirituals, blues, and the modern, backbeat-heavy rhythm and blues. Like some of his peers, he may have attended blues venues only on nights designated for exclusively white audiences. Many of his future recordings were inspired by local African-American musicians such as Arthur Crudup and Rufus Thomas. B.B. King recalled that he had known Presley before he was popular when they both used to frequent Beale Street. By the time he graduated high school in June 1953, Presley had singled out music as his future.", "title": "Life and career" }, { "paragraph_id": 10, "text": "In August 1953, Presley checked into Memphis Recording Service, the company run by Sam Phillips before he started Sun Records. He aimed to pay for studio time to record a two-sided acetate disc: \"My Happiness\" and \"That's When Your Heartaches Begin\". He later claimed that he intended the record as a birthday gift for his mother, or that he was merely interested in what he \"sounded like\". Biographer Peter Guralnick argued that Presley chose Sun in the hope of being discovered. In January 1954, Presley cut a second acetate at Sun—\"I'll Never Stand in Your Way\" and \"It Wouldn't Be the Same Without You\"—but again nothing came of it. Not long after, he failed an audition for a local vocal quartet, the Songfellows, and another for the band of Eddie Bond.", "title": "Life and career" }, { "paragraph_id": 11, "text": "Phillips, meanwhile, was always on the lookout for someone who could bring to a broader audience the sound of the black musicians on whom Sun focused. In June, he acquired a demo recording by Jimmy Sweeney of a ballad, \"Without You\", that he thought might suit Presley. The teenaged singer came by the studio but was unable to do it justice. Despite this, Phillips asked Presley to sing other numbers and was sufficiently affected by what he heard to invite two local musicians, guitarist Winfield \"Scotty\" Moore and upright bass player Bill Black, to work with Presley for a recording session. The session, held the evening of July 5, proved entirely unfruitful until late in the night. As they were about to abort and go home, Presley launched into a 1946 blues number, Arthur Crudup's \"That's All Right\". Moore recalled, \"All of a sudden, Elvis just started singing this song, jumping around and acting the fool, and then Bill picked up his bass, and he started acting the fool, too, and I started playing with them.\" Phillips quickly began taping; this was the sound he had been looking for. Three days later, popular Memphis disc jockey Dewey Phillips (no relation to Sam Phillips) played \"That's All Right\" on his Red, Hot, and Blue show. 
Listener interest was such that Phillips played the record repeatedly during the remaining two hours of his show. Interviewing Presley on-air, Phillips asked him what high school he attended to clarify his color for the many callers who had assumed that he was black. During the next few days, the trio recorded a bluegrass song, Bill Monroe's \"Blue Moon of Kentucky\", again in a distinctive style and employing a jury-rigged echo effect that Sam Phillips dubbed \"slapback\". A single was pressed with \"That's All Right\" on the A-side and \"Blue Moon of Kentucky\" on the reverse.", "title": "Life and career" }, { "paragraph_id": 12, "text": "The trio played publicly for the first time at the Bon Air club on July 17, 1954. Later that month, they appeared at the Overton Park Shell, with Slim Whitman headlining. Here Elvis pioneered \"Rubber Legs\", his signature dance movement. A combination of his strong response to rhythm and nervousness led Presley to shake his legs as he performed: His wide-cut pants emphasized his movements, causing young women in the audience to start screaming. Moore recalled, \"During the instrumental parts, he would back off from the mike and be playing and shaking, and the crowd would just go wild.\"", "title": "Life and career" }, { "paragraph_id": 13, "text": "Soon after, Moore and Black left their old band to play with Presley regularly, and disc jockey/promoter Bob Neal became the trio's manager. From August through October, they played frequently at the Eagle's Nest club, a dance venue in Memphis. When Presley played, teenagers rushed from the pool to fill the club, then left again as the house western swing band resumed. Presley quickly grew more confident on stage. According to Moore, \"His movement was a natural thing, but he was also very conscious of what got a reaction. He'd do something one time and then he would expand on it real quick.\" Amid these live performances, Presley returned to Sun studio for more recording sessions. Presley made what would be his only appearance on Nashville's Grand Ole Opry on October 2; Opry manager Jim Denny told Phillips that his singer was \"not bad\" but did not suit the program.", "title": "Life and career" }, { "paragraph_id": 14, "text": "In November 1954, Presley performed on Louisiana Hayride—the Opry's chief, and more adventurous, rival. The show was broadcast to 198 radio stations in 28 states. His nervous first set drew a muted reaction. A more composed and energetic second set inspired an enthusiastic response. Soon after the show, the Hayride engaged Presley for a year's worth of Saturday-night appearances. Trading in his old guitar for $8, he purchased a Martin instrument for $175 (equivalent to $1,900 in 2022) and his trio began playing in new locales, including Houston, Texas, and Texarkana, Arkansas. Presley made his first television appearance on the KSLA-TV broadcast of Louisiana Hayride. Soon after, he failed an audition for Arthur Godfrey's Talent Scouts on the CBS television network. By early 1955, Presley's regular Hayride appearances, constant touring, and well-received record releases had made him a regional star.", "title": "Life and career" }, { "paragraph_id": 15, "text": "In January, Neal signed a formal management contract with Presley and brought him to the attention of Colonel Tom Parker, whom he considered the best promoter in the music business. Having successfully managed the top country star Eddy Arnold, Parker was working with the new number-one country singer, Hank Snow. 
Parker booked Presley on Snow's February tour.", "title": "Life and career" }, { "paragraph_id": 16, "text": "By August, Sun had released ten sides credited to \"Elvis Presley, Scotty and Bill\"; the latest recordings included a drummer. Some of the songs, like \"That's All Right\", were in what one Memphis journalist described as the \"R&B idiom of negro field jazz\"; others, like \"Blue Moon of Kentucky\", were \"more in the country field\", \"but there was a curious blending of the two different musics in both\". This blend of styles made it difficult for Presley's music to find radio airplay. According to Neal, many country-music disc jockeys would not play it because Presley sounded too much like a black artist and none of the R&B stations would touch him because \"he sounded too much like a hillbilly.\" The blend came to be known as \"rockabilly\". At the time, Presley was billed as \"The King of Western Bop\", \"The Hillbilly Cat\", and \"The Memphis Flash\".", "title": "Life and career" }, { "paragraph_id": 17, "text": "Presley renewed Neal's management contract in August 1955, simultaneously appointing Parker as his special adviser. The group maintained an extensive touring schedule. Neal recalled, \"It was almost frightening, the reaction that came to Elvis from the teenaged boys. So many of them, through some sort of jealousy, would practically hate him. There were occasions in some towns in Texas when we'd have to be sure to have a police guard because somebody'd always try to take a crack at him.\" The trio became a quartet when Hayride drummer Fontana joined as a full member. In mid-October, they played a few shows in support of Bill Haley, whose \"Rock Around the Clock\" track had been a number-one hit the previous year. Haley observed that Presley had a natural feel for rhythm, and advised him to sing fewer ballads.", "title": "Life and career" }, { "paragraph_id": 18, "text": "At the Country Disc Jockey Convention in early November, Presley was voted the year's most promising male artist. After three major labels made offers of up to $25,000, Parker and Phillips struck a deal with RCA Victor on November 21 to acquire Presley's Sun contract for an unprecedented $40,000. Presley, aged 20, was legally still a minor, so his father signed the contract. Parker arranged with the owners of Hill & Range Publishing, Jean and Julian Aberbach, to create two entities, Elvis Presley Music and Gladys Music, to handle all the new material recorded by Presley. Songwriters were obliged to forgo one-third of their customary royalties in exchange for having Presley perform their compositions. By December, RCA had begun to heavily promote its new singer, and before month's end had reissued many of his Sun recordings.", "title": "Life and career" }, { "paragraph_id": 19, "text": "On January 10, 1956, Presley made his first recordings for RCA in Nashville. Extending his by-now customary backup of Moore, Black, Fontana, and Hayride pianist Floyd Cramer—who had been performing at live club dates with Presley—RCA enlisted guitarist Chet Atkins and three background singers, including Gordon Stoker of the popular Jordanaires quartet. The session produced the moody \"Heartbreak Hotel\", released as a single on January 27. Parker brought Presley to national television, booking him on CBS's Stage Show for six appearances over two months. The program, produced in New York City, was hosted on alternate weeks by big band leaders and brothers Tommy and Jimmy Dorsey. 
After his first appearance on January 28, Presley stayed in town to record at RCA Victor's New York studio. The sessions yielded eight songs, including a cover of Carl Perkins' rockabilly anthem \"Blue Suede Shoes\". In February, Presley's \"I Forgot to Remember to Forget\", a Sun recording released the previous August, reached the top of the Billboard country chart. Neal's contract was terminated and Parker became Presley's manager.", "title": "Life and career" }, { "paragraph_id": 20, "text": "RCA released Presley's self-titled debut album on March 23. Joined by five previously unreleased Sun recordings, its seven recently recorded tracks included two country songs, a bouncy pop tune, and what would centrally define the evolving sound of rock and roll: \"Blue Suede Shoes\"—\"an improvement over Perkins' in almost every way\", according to critic Robert Hilburn—and three R&B numbers that had been part of Presley's stage repertoire, covers of Little Richard, Ray Charles, and The Drifters. As described by Hilburn, these", "title": "Life and career" }, { "paragraph_id": 21, "text": "were the most revealing of all. Unlike many white artists ... who watered down the gritty edges of the original R&B versions of songs in the '50s, Presley reshaped them. He not only injected the tunes with his own vocal character but also made guitar, not piano, the lead instrument in all three cases.", "title": "Life and career" }, { "paragraph_id": 22, "text": "It became the first rock and roll album to top the Billboard chart, a position it held for ten weeks. While Presley was not an innovative guitarist like Moore or contemporary African American rockers Bo Diddley and Chuck Berry, cultural historian Gilbert B. Rodman argued that the album's cover image, \"of Elvis having the time of his life on stage with a guitar in his hands played a crucial role in positioning the guitar ... as the instrument that best captured the style and spirit of this new music.\"", "title": "Life and career" }, { "paragraph_id": 23, "text": "On April 3, Presley made the first of two appearances on NBC's The Milton Berle Show. His performance, on the deck of the USS Hancock in San Diego, California, prompted cheers and screams from an audience of sailors and their dates. A few days later, Presley and his band were flying to Nashville for a recording session when an engine died and the plane almost went down over Arkansas. Twelve weeks after its original release, \"Heartbreak Hotel\" became Presley's first number-one pop hit. In late April he began a two-week residency at the New Frontier Hotel and Casino on the Las Vegas Strip. The shows were poorly received by the conservative, middle-aged hotel guests—\"like a jug of corn liquor at a champagne party\", wrote a critic for Newsweek. Amid his Vegas tenure, Presley, who had acting ambitions, signed a seven-year contract with Paramount Pictures. He began a tour of the Midwest in mid-May, covering fifteen cities in as many days. He had attended several shows by Freddie Bell and the Bellboys in Vegas and was struck by their cover of \"Hound Dog\", a hit in 1953 for blues singer Big Mama Thornton by songwriters Jerry Leiber and Mike Stoller. It became his new closing number.", "title": "Life and career" }, { "paragraph_id": 24, "text": "After a show in La Crosse, Wisconsin, an urgent message on the letterhead of the local Catholic diocese's newspaper was sent to FBI director J. Edgar Hoover. 
It warned that", "title": "Life and career" }, { "paragraph_id": 25, "text": "Presley is a definite danger to the security of the United States. ... [His] actions and motions were such as to rouse the sexual passions of teenaged youth. ... After the show, more than 1,000 teenagers tried to gang into Presley's room at the auditorium. ... Indications of the harm Presley did just in La Crosse were the two high school girls ... whose abdomen and thigh had Presley's autograph.", "title": "Life and career" }, { "paragraph_id": 26, "text": "Presley's second Milton Berle Show appearance came on June 5 at NBC's Hollywood studio, amid another hectic tour. Milton Berle persuaded Presley to leave his guitar backstage. During the performance, Presley abruptly halted an uptempo rendition of \"Hound Dog\" and launched into a slow, grinding version accentuated with exaggerated body movements. His gyrations created a storm of controversy. Television critics were outraged: Jack Gould of The New York Times wrote,", "title": "Life and career" }, { "paragraph_id": 27, "text": "Mr. Presley has no discernible singing ability. ... His phrasing, if it can be called that, consists of the stereotyped variations that go with a beginner's aria in a bathtub. ... His one specialty is an accented movement of the body ... primarily identified with the repertoire of the blond bombshells of the burlesque runway.", "title": "Life and career" }, { "paragraph_id": 28, "text": "Ben Gross of the New York Daily News opined that popular music \"has reached its lowest depths in the 'grunt and groin' antics of one Elvis Presley. ... Elvis, who rotates his pelvis ... gave an exhibition that was suggestive and vulgar, tinged with the kind of animalism that should be confined to dives and bordellos\". Ed Sullivan, whose variety show was the nation's most popular, declared Presley \"unfit for family viewing\". To Presley's displeasure, he soon found himself being referred to as \"Elvis the Pelvis\", which he called \"childish\".", "title": "Life and career" }, { "paragraph_id": 29, "text": "The Berle shows drew such high ratings that Presley was booked for a July 1 appearance on NBC's The Steve Allen Show in New York. Allen, no fan of rock and roll, introduced a \"new Elvis\" in a white bowtie and black tails. Presley sang \"Hound Dog\" for less than a minute to a basset hound wearing a top hat and bowtie. As described by television historian Jake Austen, \"Allen thought Presley was talentless and absurd ... [he] set things up so that Presley would show his contrition\". Allen later wrote that he found Presley's \"strange, gangly, country-boy charisma, his hard-to-define cuteness, and his charming eccentricity intriguing\" and worked him into the \"comedy fabric\" of his program. Just before the final rehearsal for the show, Presley told a reporter, \"I don't want to do anything to make people dislike me. I think TV is important so I'm going to go along, but I won't be able to give the kind of show I do in a personal appearance.\" Presley would refer back to the Allen show as the most ridiculous performance of his career. Later that night, he appeared on Hy Gardner Calling, a popular local television show. Pressed on whether he had learned anything from the criticism of him, Presley responded, \"No, I haven't... I don't see how any type of music would have any bad influence on people when it's only music. ... 
how would rock 'n' roll music make anyone rebel against their parents?\"", "title": "Life and career" }, { "paragraph_id": 30, "text": "The next day, Presley recorded \"Hound Dog\", \"Any Way You Want Me\" and \"Don't Be Cruel\". The Jordanaires sang harmony, as they had on The Steve Allen Show; they would work with Presley through the 1960s. A few days later, Presley made an outdoor concert appearance in Memphis, at which he announced, \"You know, those people in New York are not gonna change me none. I'm gonna show you what the real Elvis is like tonight.\" In August, a judge in Jacksonville, Florida, ordered Presley to tame his act. Throughout the following performance, he largely kept still, except for wiggling his little finger suggestively in mockery of the order. The single pairing \"Don't Be Cruel\" with \"Hound Dog\" ruled the top of the charts for eleven weeks—a mark that would not be surpassed for thirty-six years. Recording sessions for Presley's second album took place in Hollywood in early September. Leiber and Stoller, the writers of \"Hound Dog\", contributed \"Love Me\".", "title": "Life and career" }, { "paragraph_id": 31, "text": "Allen's show with Presley had, for the first time, beaten The Ed Sullivan Show in the ratings. Sullivan booked Presley for three appearances for an unprecedented $50,000. The first, on September 9, 1956, was seen by approximately 60 million viewers—a record 82.6 percent of the television audience. Actor Charles Laughton hosted the show, filling in while Sullivan was recovering from a car accident. According to legend, Presley was shot only from the waist up. Watching clips of the Allen and Berle shows, Sullivan had opined that Presley \"got some kind of device hanging down below the crotch of his pants—so when he moves his legs back and forth you can see the outline of his cock. ... I think it's a Coke bottle. ... We just can't have this on a Sunday night. This is a family show!\" Sullivan publicly told TV Guide, \"As for his gyrations, the whole thing can be controlled with camera shots.\" In fact, Presley was shown head-to-toe. Though the camerawork was relatively discreet during his debut, with leg-concealing closeups when he danced, the studio audience reacted with screams. Presley's performance of his forthcoming single, the ballad \"Love Me Tender\", prompted a record-shattering million advance orders. More than any other single event, it was this first appearance on The Ed Sullivan Show that made Presley a national celebrity.", "title": "Life and career" }, { "paragraph_id": 32, "text": "Accompanying Presley's rise to fame, a cultural shift was taking place that he both helped inspire and came to symbolize. The historian Marty Jezer wrote that Presley began the \"biggest pop craze\" since Glenn Miller and Frank Sinatra and brought rock and roll to mainstream culture:", "title": "Life and career" }, { "paragraph_id": 33, "text": "As Presley set the artistic pace, other artists followed. ... Presley, more than anyone else, gave the young a belief in themselves as a distinct and somehow unified generation—the first in America ever to feel the power of an integrated youth culture.", "title": "Life and career" }, { "paragraph_id": 34, "text": "The audience response at Presley's live shows became increasingly fevered. Moore recalled, \"He'd start out, 'You ain't nothin' but a Hound Dog,' and they'd just go to pieces. They'd always react the same way. 
There'd be a riot every time.\" At the two concerts he performed in September at the Mississippi–Alabama Fair and Dairy Show, fifty National Guardsmen were added to the police detail to prevent a ruckus. Elvis, Presley's second RCA album, was released in October and quickly rose to number one. The album includes \"Old Shep\", which he sang at the talent show in 1945, and which now marked the first time he played piano on an RCA session. According to Guralnick, \"the halting chords and the somewhat stumbling rhythm\" showed \"the unmistakable emotion and the equally unmistakable valuing of emotion over technique.\" Assessing the musical and cultural impact of Presley's recordings from \"That's All Right\" through Elvis, rock critic Dave Marsh wrote that \"these records, more than any others, contain the seeds of what rock & roll was, has been and most likely what it may foreseeably become.\"", "title": "Life and career" }, { "paragraph_id": 35, "text": "Presley returned to The Ed Sullivan Show, hosted this time by its namesake, on October 28. After the performance, crowds in Nashville and St. Louis burned him in effigy. His first motion picture, Love Me Tender, was released on November 21. Though he was not top-billed, the film's original title—The Reno Brothers—was changed to capitalize on his latest number-one record: \"Love Me Tender\" had hit the top of the charts earlier that month. To further take advantage of Presley's popularity, four musical numbers were added to what was originally a straight acting role. The film was panned by critics but did very well at the box office. Presley would receive top billing on every subsequent film he made.", "title": "Life and career" }, { "paragraph_id": 36, "text": "On December 4, Presley dropped into Sun Records, where Carl Perkins and Jerry Lee Lewis were recording, and had an impromptu jam session along with Johnny Cash. Though Phillips no longer had the right to release any Presley material, he made sure that the session was captured on tape. The results, none officially released for twenty-five years, became known as the \"Million Dollar Quartet\" recordings. The year ended with a front-page story in The Wall Street Journal reporting that Presley merchandise had brought in $22 million on top of his record sales, and Billboard's declaration that he had placed more songs in the top 100 than any other artist since records were first charted. In his first full year at RCA Victor, then the record industry's largest company, Presley had accounted for over fifty percent of the label's singles sales.", "title": "Life and career" }, { "paragraph_id": 37, "text": "Presley made his third and final Ed Sullivan Show appearance on January 6, 1957—on this occasion indeed shot only down to the waist. Some commentators have claimed that Parker orchestrated an appearance of censorship to generate publicity. In any event, as critic Greil Marcus describes, Presley \"did not tie himself down. Leaving behind the bland clothes he had worn on the first two shows, he stepped out in the outlandish costume of a pasha, if not a harem girl. From the make-up over his eyes, the hair falling in his face, the overwhelmingly sexual cast of his mouth, he was playing Rudolph Valentino in The Sheik, with all stops out.\" To close, displaying his range and defying Sullivan's wishes, Presley sang a gentle black spiritual, \"Peace in the Valley\". At the end of the show, Sullivan declared Presley \"a real decent, fine boy\". 
Two days later, the Memphis draft board announced that Presley would be classified 1-A and would probably be drafted sometime that year.", "title": "Life and career" }, { "paragraph_id": 38, "text": "Each of the three Presley singles released in the first half of 1957 went to number one: \"Too Much\", \"All Shook Up\", and \"(Let Me Be Your) Teddy Bear\". Already an international star, he was attracting fans even where his music was not officially released: The New York Times reported that pressings of his music on discarded X-ray plates were commanding high prices in Leningrad. Presley purchased his 18-room mansion, Graceland, on March 19, 1957. Before the purchase, Elvis recorded Loving You—the soundtrack to his second film, which was released in July. It was his third straight number-one album. The title track was written by Leiber and Stoller, who were then retained to write four of the six songs recorded at the sessions for Jailhouse Rock, Presley's next film. The songwriting team effectively produced the Jailhouse sessions and developed a close working relationship with Presley, who came to regard them as his \"good-luck charm\". \"He was fast,\" said Leiber. \"Any demo you gave him he knew by heart in ten minutes.\" The title track became another number-one hit, as was the Jailhouse Rock EP.", "title": "Life and career" }, { "paragraph_id": 39, "text": "Presley undertook three brief tours during the year, continuing to generate a crazed audience response. A Detroit newspaper suggested that \"the trouble with going to see Elvis Presley is that you're liable to get killed\". Villanova students pelted the singer with eggs in Philadelphia, and in Vancouver the crowd rioted after the show ended, destroying the stage. Frank Sinatra, who had inspired the swooning and screaming of teenage girls in the 1940s, decried rock and roll as \"brutal, ugly, degenerate, vicious. ... It fosters almost totally negative and destructive reactions in young people. It smells phoney and false. It is sung, played and written, for the most part, by cretinous goons. ... This rancid-smelling aphrodisiac I deplore.\" Asked for a response, Presley said, \"I admire the man. He has a right to say what he wants to say. He is a great success and a fine actor, but I think he shouldn't have said it. ... This is a trend, just the same as he faced when he started years ago.\"", "title": "Life and career" }, { "paragraph_id": 40, "text": "Leiber and Stoller were again in the studio for the recording of Elvis' Christmas Album. Toward the end of the session, they wrote a song on the spot at Presley's request: \"Santa Claus Is Back in Town\", an innuendo-laden blues. The holiday release stretched Presley's string of number-one albums to four and would become the best-selling Christmas album ever in the United States, with eventual sales of over 20 million worldwide. After the session, Moore and Black—drawing only modest weekly salaries, sharing in none of Presley's massive financial success—resigned, though they were brought back on a per diem basis a few weeks later.", "title": "Life and career" }, { "paragraph_id": 41, "text": "On December 20, Presley received his draft notice, though he was granted a deferment to finish the forthcoming film King Creole. A couple of weeks into the new year, \"Don't\", another Leiber and Stoller tune, became Presley's tenth number-one seller. Recording sessions for the King Creole soundtrack were held in Hollywood in mid-January 1958. 
Leiber and Stoller provided three songs, but it would be the last time Presley and the duo worked closely together. As Stoller later recalled, Presley's manager and entourage sought to wall him off. A brief soundtrack session on February 11 marked the final occasion on which Black was to perform with Presley.", "title": "Life and career" }, { "paragraph_id": 42, "text": "On March 24, 1958, Presley was drafted into the United States Army at Fort Chaffee in Arkansas. His arrival was a major media event. Hundreds of people descended on Presley as he stepped from the bus; photographers accompanied him into the installation. Presley announced that he was looking forward to his military service, saying that he did not want to be treated any differently from anyone else.", "title": "Life and career" }, { "paragraph_id": 43, "text": "Between March 28 and September 17, 1958, Presley completed basic and advanced training at Fort Hood, Texas, where he was temporarily assigned to Company A, 2d Medium Tank Battalion, 37th Armor. During the two weeks' leave between his basic and advanced training in early June, he recorded five songs in Nashville. In early August, Presley's mother was diagnosed with hepatitis, and her condition rapidly worsened. Presley was granted emergency leave to visit her and arrived in Memphis on August 12. Two days later, she died of heart failure at age 46. Presley was devastated and never the same; their relationship had remained extremely close—even into his adulthood, they would use baby talk with each other and Presley would address her with pet names.", "title": "Life and career" }, { "paragraph_id": 44, "text": "On October 1, 1958, Presley was assigned to the 1st Medium Tank Battalion, 32d Armor, 3d Armored Division, at Ray Barracks, West Germany, where he served as an armor intelligence specialist. On November 27, he was promoted to private first class and on June 1, 1959, to specialist fourth class. While on maneuvers, Presley was introduced to amphetamines and became \"practically evangelical about their benefits\", not only for energy but for \"strength\" and weight loss. Karate became a lifelong interest: he studied with Jürgen Seydel, and later included it in his live performances. Fellow soldiers have attested to Presley's wish to be seen as an able, ordinary soldier despite his fame, and to his generosity. He donated his Army pay to charity, purchased television sets for the base, and bought an extra set of fatigues for everyone in his outfit. Presley was promoted to sergeant on February 11, 1960.", "title": "Life and career" }, { "paragraph_id": 45, "text": "While in Bad Nauheim, Presley, aged 24, met 14-year-old Priscilla Beaulieu. They would marry after a seven-and-a-half-year courtship. In her autobiography, Priscilla said that Presley was concerned that his 24 months in the military would ruin his career. In Special Services, he would have been able to perform and remain in touch with the public, but Parker had convinced him that to gain popular respect, he should serve as a regular soldier. Media reports echoed Presley's concerns about his career, but RCA producer Steve Sholes and Freddy Bienstock of Hill and Range had carefully prepared: armed with a substantial amount of unreleased material, they kept up a regular stream of successful releases. 
Between his induction and discharge, Presley had ten top-40 hits, including \"Wear My Ring Around Your Neck\", the bestselling \"Hard Headed Woman\", and \"One Night\" in 1958, and \"(Now and Then There's) A Fool Such as I\" and the number-one \"A Big Hunk o' Love\" in 1959. RCA also generated four albums compiling previously issued material during this period, most successfully Elvis' Golden Records (1958), which hit number three on the LP chart.", "title": "Life and career" }, { "paragraph_id": 46, "text": "Presley returned to the U.S. on March 2, 1960, and was honorably discharged three days later. The train that carried him from New Jersey to Tennessee was mobbed all the way, and Presley was called upon to appear at scheduled stops to please his fans. On the night of March 20, he entered RCA's Nashville studio to cut tracks for a new album along with a single, \"Stuck on You\", which was rushed into release and swiftly became a number-one hit. Another Nashville session two weeks later yielded a pair of bestselling singles, the ballads \"It's Now or Never\" and \"Are You Lonesome Tonight?\", along with the rest of Elvis Is Back! The album features several songs described by Greil Marcus as full of Chicago blues \"menace, driven by Presley's own super-miked acoustic guitar, brilliant playing by Scotty Moore, and demonic sax work from Boots Randolph. Elvis' singing wasn't sexy, it was pornographic.\" The record \"conjured up the vision of a performer who could be all things\", according to music historian John Robertson: \"a flirtatious teenage idol with a heart of gold; a tempestuous, dangerous lover; a gutbucket blues singer; a sophisticated nightclub entertainer; [a] raucous rocker\". Released only days after recording was complete, it reached number two on the album chart.", "title": "Life and career" }, { "paragraph_id": 47, "text": "Presley returned to television on May 12 as a guest on The Frank Sinatra Timex Special. Also known as Welcome Home Elvis, the show had been taped in late March, the only time all year Presley performed in front of an audience. Parker secured an unheard-of $125,000 for eight minutes of singing. The broadcast drew an enormous viewership.", "title": "Life and career" }, { "paragraph_id": 48, "text": "G.I. Blues, the soundtrack to Presley's first film since his return, was a number-one album in October. His first LP of sacred material, His Hand in Mine, followed two months later; it reached number 13 on the U.S. pop chart and number 3 in the United Kingdom, remarkable figures for a gospel album. In February 1961, Presley performed two shows in Memphis, for a benefit for twenty-four local charities. During a luncheon preceding the event, RCA presented him with a plaque certifying worldwide sales of over 75 million records. A twelve-hour Nashville session in mid-March yielded nearly all of Presley's next studio album, Something for Everybody. According to John Robertson, it exemplifies the Nashville sound, the restrained, cosmopolitan style that would define country music in the 1960s. Presaging much of what was to come from Presley over the next half-decade, the album is largely \"a pleasant, unthreatening pastiche of the music that had once been Elvis' birthright\". It would be his sixth number-one LP. Another benefit concert, for a Pearl Harbor memorial, was staged on March 25 in Hawaii. 
It was to be Presley's last public performance for seven years.", "title": "Life and career" }, { "paragraph_id": 49, "text": "Parker had by now pushed Presley into a heavy filmmaking schedule, focused on formulaic, modestly budgeted musical comedies. Presley initially insisted on pursuing higher roles, but when two films in a more dramatic vein—Flaming Star (1960) and Wild in the Country (1961)—were less commercially successful, he reverted to the formula. Among the twenty-seven films he made during the 1960s, there were a few further exceptions. His films were almost universally panned; critic Andrew Caine dismissed them as a \"pantheon of bad taste\". Nonetheless, they were virtually all profitable. Hal Wallis, who produced nine, declared, \"A Presley picture is the only sure thing in Hollywood.\"", "title": "Life and career" }, { "paragraph_id": 50, "text": "Of Presley's films in the 1960s, fifteen were accompanied by soundtrack albums and another five by soundtrack EPs. The films' rapid production and release schedules—Presley frequently starred in three a year—affected his music. According to Jerry Leiber, the soundtrack formula was already evident before Presley left for the Army: \"three ballads, one medium-tempo [number], one up-tempo, and one break blues boogie\". As the decade wore on, the quality of the soundtrack songs grew \"progressively worse\". Julie Parrish, who appeared in Paradise, Hawaiian Style (1966), says that Presley disliked many of the songs. The Jordanaires' Gordon Stoker describes how he would retreat from the studio microphone: \"The material was so bad that he felt like he couldn't sing it.\" Most of the film albums featured a song or two from respected writers such as the team of Doc Pomus and Mort Shuman. But by and large, according to biographer Jerry Hopkins, the numbers seemed to be \"written on order by men who never really understood Elvis or rock and roll\".", "title": "Life and career" }, { "paragraph_id": 51, "text": "In the first half of the decade, three of Presley's soundtrack albums were ranked number one on the pop charts, and a few of his most popular songs came from his films, such as \"Can't Help Falling in Love\" (1961) and \"Return to Sender\" (1962). However, the commercial returns steadily diminished. From 1964 through 1968, Presley had only one top-ten hit: \"Crying in the Chapel\" (1965), a gospel number recorded in 1960. As for non-film albums, between the June 1962 release of Pot Luck and the November 1968 release of the soundtrack to the television special that signaled his comeback, only one LP of new material by Presley was issued: the gospel album How Great Thou Art (1967). It won him his first Grammy Award, for Best Sacred Performance. As Marsh described, Presley was \"arguably the greatest white gospel singer of his time [and] really the last rock & roll artist to make gospel as vital a component of his musical personality as his secular songs\".", "title": "Life and career" }, { "paragraph_id": 52, "text": "Shortly before Christmas 1966, more than seven years since they first met, Presley proposed to Priscilla Beaulieu. They were married on May 1, 1967, in a brief ceremony in their suite at the Aladdin Hotel in Las Vegas. The flow of formulaic films and assembly-line soundtracks continued. It was not until October 1967, when the Clambake soundtrack LP registered record low sales for a new Presley album, that RCA executives recognized a problem. 
\"By then, of course, the damage had been done\", as historians Connie Kirchberg and Marc Hendrickx put it. \"Elvis was viewed as a joke by serious music lovers and a has-been to all but his most loyal fans.\"", "title": "Life and career" }, { "paragraph_id": 53, "text": "Presley's only child, Lisa Marie, was born on February 1, 1968, during a period when he had grown deeply unhappy with his career. Of the eight Presley singles released between January 1967 and May 1968, only two charted in the top 40, none higher than number 28. His forthcoming soundtrack album, Speedway, would rank at number 82. Parker had already shifted his plans to television: he maneuvered a deal with NBC that committed the network to finance a theatrical feature and broadcast a Christmas special.", "title": "Life and career" }, { "paragraph_id": 54, "text": "Recorded in late June in Burbank, California, the special, simply called Elvis, aired on December 3, 1968. Later known as the '68 Comeback Special, the show featured lavishly staged studio productions as well as songs performed with a band in front of a small audience—Presley's first live performances since 1961. The live segments saw Presley dressed in tight black leather, singing and playing guitar in an uninhibited style reminiscent of his early rock and roll days. Director and co-producer Steve Binder worked hard to produce a show that was far from the hour of Christmas songs Parker had originally planned. The show, NBC's highest-rated that season, captured forty-two percent of the total viewing audience. Jon Landau of Eye magazine remarked, \"There is something magical about watching a man who has lost himself find his way back home. He sang with the kind of power people no longer expect of rock 'n' roll singers. He moved his body with a lack of pretension and effort that must have made Jim Morrison green with envy.\" Marsh calls the performance one of \"emotional grandeur and historical resonance\".", "title": "Life and career" }, { "paragraph_id": 55, "text": "By January 1969, the single \"If I Can Dream\", written for the special, reached number 12. The soundtrack album rose into the top ten. According to friend Jerry Schilling, the special reminded Presley of what \"he had not been able to do for years, being able to choose the people; being able to choose what songs and not being told what had to be on the soundtrack. ... He was out of prison, man.\" Binder said of Presley's reaction, \"I played Elvis the 60-minute show, and he told me in the screening room, 'Steve, it's the greatest thing I've ever done in my life. I give you my word I will never sing a song I don't believe in.'\"", "title": "Life and career" }, { "paragraph_id": 56, "text": "Buoyed by the experience of the Comeback Special, Presley engaged in a prolific series of recording sessions at American Sound Studio, which led to the acclaimed From Elvis in Memphis. Released in June 1969, it was his first secular, non-soundtrack album from a dedicated period in the studio in eight years. As described by Marsh, it is \"a masterpiece in which Presley immediately catches up with pop music trends that had seemed to pass him by during the movie years. He sings country songs, soul songs and rockers with real conviction, a stunning achievement.\" The album featured the hit single \"In the Ghetto\", issued in April, which reached number three on the pop chart—Presley's first non-gospel top ten hit since \"Bossa Nova Baby\" in 1963. 
Further hit singles were culled from the American Sound sessions: \"Suspicious Minds\", \"Don't Cry Daddy\", and \"Kentucky Rain\".", "title": "Life and career" }, { "paragraph_id": 57, "text": "Presley was keen to resume regular live performing. Following the success of the Comeback Special, offers came in from around the world. The London Palladium offered Parker US$28,000 (equivalent to $223,000 in 2022) for a one-week engagement. He responded, \"That's fine for me, now how much can you get for Elvis?\" In May, the brand-new International Hotel in Las Vegas, boasting the largest showroom in the city, booked Presley for fifty-seven shows over four weeks, beginning July 31. Moore, Fontana, and the Jordanaires declined to participate, afraid of losing the lucrative session work they had in Nashville. Presley assembled new, top-notch accompaniment, led by guitarist James Burton and including two gospel groups, The Imperials and Sweet Inspirations. Costume designer Bill Belew, responsible for the intense leather styling of the Comeback Special, created a new stage look for Presley, inspired by his passion for karate. Nonetheless, Presley was nervous: his only previous Las Vegas engagement, in 1956, had been dismal. Parker oversaw a major promotional push, and International Hotel owner Kirk Kerkorian arranged to send his own plane to New York to fly in rock journalists for the debut performance.", "title": "Life and career" }, { "paragraph_id": 58, "text": "Presley took to the stage without introduction. The audience of 2,200, including many celebrities, gave him a standing ovation before he sang a note and another after his performance. A third followed his encore, \"Can't Help Falling in Love\" (which would be his closing number for much of his remaining life). At a press conference after the show, when a journalist referred to him as \"The King\", Presley gestured toward Fats Domino, who was taking in the scene. \"No,\" Presley said, \"that's the real king of rock and roll.\" The next day, Parker's negotiations with the hotel resulted in a five-year contract for Presley to play each February and August, at an annual salary of $1 million. Newsweek commented, \"There are several unbelievable things about Elvis, but the most incredible is his staying power in a world where meteoric careers fade like shooting stars.\" Rolling Stone called Presley \"supernatural, his own resurrection.\" In November, Presley's final non-concert film, Change of Habit, opened. The double album From Memphis to Vegas/From Vegas to Memphis came out the same month; the first LP consisted of live performances from the International, the second of more cuts from the American Sound sessions. \"Suspicious Minds\" reached the top of the charts—Presley's first U.S. pop number-one in over seven years, and his last.", "title": "Life and career" }, { "paragraph_id": 59, "text": "Cassandra Peterson, later television's Elvira, met Presley during this period in Las Vegas. She recalled of their encounter, \"He was so anti-drug when I met him. I mentioned to him that I smoked marijuana, and he was just appalled.\" Presley also rarely drank—several of his family members had been alcoholics, a fate he intended to avoid.", "title": "Life and career" }, { "paragraph_id": 60, "text": "Presley returned to the International early in 1970 for the first of the year's two-month-long engagements, performing two shows a night. Recordings from these shows were issued on the album On Stage. 
In late February, Presley performed six attendance-record–breaking shows at the Houston Astrodome. In April, the single \"The Wonder of You\" was issued—a number-one hit in the UK, it topped the U.S. adult contemporary chart as well. Metro-Goldwyn-Mayer (MGM) filmed rehearsal and concert footage at the International during August for the documentary Elvis: That's the Way It Is. Presley was performing in a jumpsuit, which would become a trademark of his live act. During this engagement, he was threatened with murder unless US$50,000 (equivalent to $377,000 in 2022) was paid. Presley had been the target of many threats since the 1950s, often without his knowledge. The FBI took the threat seriously and security was increased for the next two shows. Presley went onstage with a Derringer in his right boot and a .45 caliber pistol in his waistband, but both shows went ahead without incident.", "title": "Life and career" }, { "paragraph_id": 61, "text": "That's the Way It Is, produced to accompany the documentary and featuring both studio and live recordings, marked a stylistic shift. As music historian John Robertson noted,", "title": "Life and career" }, { "paragraph_id": 62, "text": "The authority of Presley's singing helped disguise the fact that the album stepped decisively away from the American-roots inspiration of the Memphis sessions towards a more middle-of-the-road sound. With country put on the back burner, and soul and R&B left in Memphis, what was left was very classy, very clean white pop—perfect for the Las Vegas crowd, but a definite retrograde step for Elvis.", "title": "Life and career" }, { "paragraph_id": 63, "text": "After the end of his International engagement on September 7, Presley embarked on a week-long concert tour, largely of the South, his first since 1958. Another week-long tour, of the West Coast, followed in November.", "title": "Life and career" }, { "paragraph_id": 64, "text": "On December 21, 1970, Presley engineered a meeting with U.S. President Richard Nixon at the White House, where he explained how he believed he could reach out to the hippies to help combat the drug culture he and the president abhorred. He asked Nixon for a Bureau of Narcotics and Dangerous Drugs badge, to signify official sanction of his efforts. Nixon, who apparently found the encounter awkward, expressed a belief that Presley could send a positive message to young people and that it was, therefore, important that he \"retain his credibility\". Presley told Nixon that the Beatles, whose songs he regularly performed in concert during the era, exemplified what he saw as a trend of anti-Americanism. Presley and his friends previously had a four-hour get-together with the Beatles at his home in Bel Air, California, in August 1965. Paul McCartney later said that he \"felt a bit betrayed. ... The great joke was that we were taking [illegal] drugs, and look what happened to him\", a reference to Presley's early death linked to prescription drug abuse.", "title": "Life and career" }, { "paragraph_id": 65, "text": "The U.S. Junior Chamber of Commerce named Presley one of its annual Ten Most Outstanding Young Men of the Nation on January 16, 1971. Not long after, the City of Memphis named the stretch of Highway 51 South on which Graceland is located \"Elvis Presley Boulevard\". The same year, Presley became the first rock and roll singer to be awarded the Grammy Lifetime Achievement Award (then known as the Bing Crosby Award).
Three new, non-film Presley studio albums were released in 1971. Best received by critics was Elvis Country, a concept record that focused on genre standards. The biggest seller was Elvis Sings The Wonderful World of Christmas. According to Greil Marcus,", "title": "Life and career" }, { "paragraph_id": 66, "text": "In the midst of ten painfully genteel Christmas songs, every one sung with appalling sincerity and humility, one could find Elvis tom-catting his way through six blazing minutes of \"Merry Christmas Baby\", a raunchy old Charles Brown blues. [...] If [Presley's] sin was his lifelessness, it was his sinfulness that brought him to life.", "title": "Life and career" }, { "paragraph_id": 67, "text": "MGM filmed Presley in April 1972 for Elvis on Tour, which went on to win that year's Golden Globe Award for Best Documentary Film. His gospel album He Touched Me, released that month, would earn him his second Grammy Award for Best Inspirational Performance. A fourteen-date tour commenced with an unprecedented four consecutive sold-out shows at New York's Madison Square Garden. The evening concert on July 10 was issued in LP form a week later. Elvis: As Recorded at Madison Square Garden became one of Presley's biggest-selling albums. After the tour, the single \"Burning Love\" was released—Presley's last top ten hit on the U.S. pop chart. \"The most exciting single Elvis has made since 'All Shook Up'\", wrote rock critic Robert Christgau.", "title": "Life and career" }, { "paragraph_id": 68, "text": "Presley and his wife had become increasingly distant, barely cohabiting. In 1971, an affair he had with Joyce Bova resulted—unbeknownst to him—in her pregnancy and an abortion. He often raised the possibility of Joyce moving into Graceland. The Presleys separated on February 23, 1972, after Priscilla disclosed her relationship with Mike Stone, a karate instructor Presley had recommended to her. Priscilla related that when she told him, Presley forcefully made love to her, declaring, \"This is how a real man makes love to his woman\". She later stated in an interview that she regretted her choice of words in describing the incident, and said it had been an overstatement. Five months later, Presley's new girlfriend, Linda Thompson, a songwriter and one-time Memphis beauty queen, moved in with him. Presley and his wife filed for divorce on August 18. According to Joe Moscheo of the Imperials, the failure of Presley's marriage \"was a blow from which he never recovered\". At a rare press conference that June, a reporter had asked Presley whether he was satisfied with his image. Presley replied, \"Well, the image is one thing and the human being another ... it's very hard to live up to an image.\"", "title": "Life and career" }, { "paragraph_id": 69, "text": "In January 1973, Presley performed two benefit concerts for the Kui Lee Cancer Fund in connection with a groundbreaking television special, Aloha from Hawaii, which would be the first concert by a solo artist to be aired globally. The first show served as a practice run and backup should technical problems affect the live broadcast two days later. On January 14, Aloha from Hawaii aired live via satellite to prime-time audiences in Japan, South Korea, Thailand, the Philippines, Australia, and New Zealand, as well as to U.S. servicemen based across Southeast Asia. In Japan, where it capped a nationwide Elvis Presley Week, it smashed viewing records.
The next night, it was simulcast to twenty-eight European countries, and in April an extended version aired in the U.S., receiving a fifty-seven percent share of the TV audience. Over time, Parker's claim that it was seen by one billion or more people would be broadly accepted, but that figure appeared to have been sheer invention. Presley's stage costume became the most recognized example of the elaborate concert garb with which his latter-day persona became closely associated. As described by Bobbie Ann Mason, \"At the end of the show, when he spreads out his American Eagle cape, with the full stretched wings of the eagle studded on the back, he becomes a god figure.\" The accompanying double album, released in February, went to number one and eventually sold over 5 million copies in the U.S. It was Presley's last U.S. number-one pop album during his lifetime.", "title": "Life and career" }, { "paragraph_id": 70, "text": "At a midnight show that same month, four men rushed onto the stage in an apparent attack. Security personnel came to Presley's defense, and he ejected one invader from the stage himself. Following the show, Presley became obsessed with the idea that the men had been sent by Mike Stone to kill him. Though they were shown to have been only overexuberant fans, Presley raged, \"There's too much pain in me ... Stone [must] die.\" His outbursts continued with such intensity that a physician was unable to calm him, despite administering large doses of medication. After another two full days of raging, Red West, his friend and bodyguard, felt compelled to get a price for a contract killing and was relieved when Presley decided, \"Aw hell, let's just leave it for now. Maybe it's a bit heavy.\"", "title": "Life and career" }, { "paragraph_id": 71, "text": "Presley's divorce was finalized on October 9, 1973. By then, his health was in serious decline. Twice during the year he overdosed on barbiturates, spending three days in a coma in his hotel suite after the first incident. In late 1973, he was hospitalized from the effects of a pethidine addiction. According to his primary care physician, George C. Nichopoulos, Presley \"felt that by getting drugs from a doctor, he wasn't the common everyday junkie getting something off the street\". Since his comeback, he had staged more live shows with each passing year, and 1973 saw 168 concerts, his busiest schedule ever. Despite his failing health, he undertook another intensive touring schedule in 1974.", "title": "Life and career" }, { "paragraph_id": 72, "text": "Presley's condition declined precipitously that September. Keyboardist Tony Brown remembered his arrival at a University of Maryland concert: \"He fell out of the limousine, to his knees. People jumped to help, and he pushed them away like, 'Don't help me.' He walked on stage and held onto the mic for the first thirty minutes like it was a post. Everybody's looking at each other like, 'Is the tour gonna happen'?\" Guitarist John Wilkinson recalled, \"He was all gut. He was slurring. He was so fucked up. ... It was obvious he was drugged. It was obvious there was something terribly wrong with his body. It was so bad the words to the songs were barely intelligible. ... I remember crying. He could barely get through the introductions.\"", "title": "Life and career" }, { "paragraph_id": 73, "text": "RCA began to grow anxious as his interest in the recording studio waned. 
After a session in December 1973 that produced eighteen songs, enough for almost two albums, Presley made no official studio recordings in 1974. Parker delivered RCA another concert record, Elvis Recorded Live on Stage in Memphis. Recorded on March 20, it included a version of \"How Great Thou Art\" that won Presley his third and final Grammy Award for Best Inspirational Performance. All three of his competitive Grammy wins – out of fourteen total nominations – were for gospel recordings. Presley returned to the recording studio in March 1975, but Parker's attempts to arrange another session toward the end of the year were unsuccessful. In 1976, RCA sent a mobile recording unit to Graceland that made possible two full-scale recording sessions. However, the recording process had become a struggle for him.", "title": "Life and career" }, { "paragraph_id": 74, "text": "Journalist Tony Scherman wrote that, by early 1977, \"Presley had become a grotesque caricature of his sleek, energetic former self. Grossly overweight, his mind dulled by the pharmacopoeia he daily ingested, he was barely able to pull himself through his abbreviated concerts.\" According to Andy Greene of Rolling Stone, Presley's final performances were mostly \"sad, sloppy affairs where a bloated, drugged Presley struggled to remember his lyrics and get through the night without collapsing ... Most everything from the final three years of his life is sad and hard to watch.\" In Alexandria, Louisiana, he was on stage for less than an hour and \"was impossible to understand\". On March 31, he canceled a performance in Baton Rouge, unable to get out of his hotel bed; four shows had to be canceled and rescheduled.", "title": "Life and career" }, { "paragraph_id": 75, "text": "Despite the accelerating deterioration of his health, Presley fulfilled most of his touring commitments. According to Guralnick, fans \"were becoming increasingly voluble about their disappointment, but it all seemed to go right past Presley, whose world was now confined almost entirely to his room and his spiritualism books\". Presley's cousin, Billy Smith, recalled how he would sit in his room and chat for hours, sometimes recounting favorite Monty Python sketches and his past escapades, but more often gripped by paranoid obsessions.", "title": "Life and career" }, { "paragraph_id": 76, "text": "\"Way Down\", Presley's last single issued during his lifetime, was released on June 6, 1977. That month, CBS taped two concerts for a television special, Elvis in Concert, to be broadcast in October. In the first, shot in Omaha on June 19, Presley's voice, Guralnick writes, \"is almost unrecognizable, a small, childlike instrument in which he talks more than sings most of the songs, casts about uncertainly for the melody in others, and is virtually unable to articulate or project\". Two days later, in Rapid City, South Dakota, \"he looked healthier, seemed to have lost a little weight, and sounded better, too\", though, by the conclusion of the performance, his face was \"framed in a helmet of blue-black hair from which sweat sheets down over pale, swollen cheeks\". Presley's final concert was held in Indianapolis at Market Square Arena, on June 26, 1977.", "title": "Life and career" }, { "paragraph_id": 77, "text": "On August 16, 1977, Presley was scheduled on an evening flight out of Memphis to Portland, Maine, to begin another tour. That afternoon, however, his fiancée Ginger Alden discovered him unresponsive on the bathroom floor of his Graceland mansion.
Biographer Joel Williamson suggests a sequence \"involving a reaction to the codeine\" he had taken \"and attempts to move his bowels—he experienced pain and fright while sitting on the toilet. Alarmed, he stood up ... and fell face down in the fetal position.\" Drooling on the rug and \"unable to breathe, he died.\" Attempts to revive him failed, and he was pronounced dead at Baptist Memorial Hospital at 3:30 p.m. He was 42.", "title": "Life and career" }, { "paragraph_id": 78, "text": "President Jimmy Carter issued a statement that credited Presley with having \"permanently changed the face of American popular culture\". Thousands of people gathered outside Graceland to view the open casket. One of Presley's cousins, Billy Mann, accepted US$18,000 (equivalent to $87,000 in 2022) to secretly photograph the body; the picture appeared on the cover of the National Enquirer's biggest-selling issue ever. Alden struck a $105,000 (equivalent to $507,000 in 2022) deal with the Enquirer for her story, but settled for less when she broke her exclusivity agreement. Presley left her nothing in his will.", "title": "Life and career" }, { "paragraph_id": 79, "text": "Presley's funeral was held at Graceland on August 18. Outside the gates, a car plowed into a group of fans, killing two young women and critically injuring a third. About 80,000 people lined the processional route to Forest Hill Cemetery, where Presley was buried next to his mother. Within a few weeks, \"Way Down\" topped the country and UK singles charts. Following an attempt to steal Presley's body in late August, the remains of both Presley and his mother were exhumed and reburied in Graceland's Meditation Garden on October 2.", "title": "Life and career" }, { "paragraph_id": 80, "text": "While an autopsy, undertaken the same day Presley died, was still in progress, Memphis medical examiner Jerry Francisco announced that the immediate cause of death was cardiac arrest and declared that \"drugs played no role in Presley's death\". In fact, \"drug use was heavily implicated\" in Presley's death, writes Guralnick. The pathologists conducting the autopsy thought it possible, for instance, that he had suffered \"anaphylactic shock brought on by the codeine pills he had gotten from his dentist, to which he was known to have had a mild allergy\". Lab reports filed two months later strongly suggested that polypharmacy was the primary cause of death; one reported \"fourteen drugs in Elvis' system, ten in significant quantity\". In 1979, forensic pathologist Cyril Wecht reviewed the reports and concluded that a combination of depressants had resulted in Presley's accidental death. Forensic historian and pathologist Michael Baden viewed the situation as complicated: \"Elvis had had an enlarged heart for a long time. That, together with his drug habit, caused his death. But he was difficult to diagnose; it was a judgment call.\"", "title": "Life and career" }, { "paragraph_id": 81, "text": "The competence and ethics of two of the centrally involved medical professionals were seriously questioned. Francisco had offered a cause of death before the autopsy was complete; claimed the underlying ailment was cardiac arrhythmia, a condition that can be determined only in a living person; and denied drugs played any part in Presley's death before the toxicology results were known. Allegations of a cover-up were widespread. While a 1981 trial of Presley's main physician, George C.
Nichopoulos, exonerated him of criminal liability, the facts were startling: \"In the first eight months of 1977 alone, he had [prescribed] more than 10,000 doses of sedatives, amphetamines, and narcotics: all in Elvis' name.\" Nichopoulos' license was suspended for three months. It was permanently revoked in the 1990s after the Tennessee Medical Board brought new charges of over-prescription.", "title": "Life and career" }, { "paragraph_id": 82, "text": "In 1994, the Presley autopsy report was reopened. Joseph Davis, who had conducted thousands of autopsies as Miami-Dade County coroner, declared at its completion, \"There is nothing in any of the data that supports a death from drugs. In fact, everything points to a sudden, violent heart attack.\" More recent research has revealed that Francisco did not speak for the entire pathology team. Other staff \"could say nothing with confidence until they got the results back from the laboratories, if then.\" One of the examiners, E. Eric Muirhead,", "title": "Life and career" }, { "paragraph_id": 83, "text": "could not believe his ears. Francisco had not only presumed to speak for the hospital's team of pathologists, he had announced a conclusion that they had not reached. ... Early on, a meticulous dissection of the body ... confirmed [that] Elvis was chronically ill with diabetes, glaucoma, and constipation. As they proceeded, the doctors saw evidence that his body had been wracked over a span of years by a large and constant stream of drugs. They had also studied his hospital records, which included two admissions for drug detoxification and methadone treatments.", "title": "Life and career" }, { "paragraph_id": 84, "text": "According to biographer Frank Coffey, \"other plausible causes\" include \"the Valsalva maneuver (essentially straining on the toilet leading to heart stoppage—plausible because Elvis suffered constipation, a common reaction to drug use)\". Dr Warlick, who attended the autopsy, agrees.", "title": "Life and career" }, { "paragraph_id": 85, "text": "Between 1977 and 1981, six of Presley's posthumously released singles were top-ten country hits. Graceland was opened to the public in 1982. Attracting over half a million visitors annually, it became the second-most-visited home in the United States, after the White House. The residence was declared a National Historic Landmark in 2006.", "title": "Life and career" }, { "paragraph_id": 86, "text": "Presley has been inducted into five music halls of fame: the Rock and Roll Hall of Fame (1986), the Country Music Hall of Fame (1998), the Gospel Music Hall of Fame (2001), the Rockabilly Hall of Fame (2007), and the Memphis Music Hall of Fame (2012). In 1984, he received the W. C. Handy Award from the Blues Foundation and the Academy of Country Music's first Golden Hat Award. In 1987, he received the American Music Awards' Award of Merit.", "title": "Life and career" }, { "paragraph_id": 87, "text": "A Junkie XL remix of Presley's \"A Little Less Conversation\" (credited as \"Elvis Vs JXL\") was used in a Nike advertising campaign during the 2002 FIFA World Cup. It topped the charts in over twenty countries and was included in a compilation of Presley's number-one hits, ELV1S, which was also an international success. The album returned Presley to the top of the Billboard chart for the first time in almost three decades.", "title": "Life and career" }, { "paragraph_id": 88, "text": "In 2003, a remix of \"Rubberneckin'\", a 1969 recording, topped the U.S. 
sales chart, as did a 50th-anniversary re-release of \"That's All Right\" the following year. The latter was an outright hit in Britain, debuting at number three on the pop chart; it also made the top ten in Canada. In 2005, another three reissued singles, \"Jailhouse Rock\", \"One Night\"/\"I Got Stung\", and \"It's Now or Never\", went to number one in the UK. They were part of a campaign that saw the re-release of all eighteen of Presley's previous chart-topping UK singles. The first, \"All Shook Up\", came with a collectors' box that made it ineligible to chart again; each of the other seventeen reissues hit the British top five.", "title": "Life and career" }, { "paragraph_id": 89, "text": "In 2005, Forbes magazine named Presley the top-earning deceased celebrity for the fifth straight year, with a gross income of $45 million. He was placed second in 2006, returned to the top spot the next two years, and ranked fourth in 2009. The following year, he was ranked second, with his highest annual income ever—$60 million—spurred by the celebration of his 75th birthday and the launch of Cirque du Soleil's Viva Elvis show in Las Vegas. In November 2010, Viva Elvis: The Album was released, setting his voice to newly recorded instrumental tracks. As of mid-2011, there were an estimated 15,000 licensed Presley products, and he was again the second-highest-earning deceased celebrity. Six years later, he ranked fourth with earnings of $35 million, up $8 million from 2016 due in part to the opening of a new entertainment complex, Elvis Presley's Memphis, and hotel, The Guest House at Graceland.", "title": "Life and career" }, { "paragraph_id": 90, "text": "In 2018, RCA/Legacy released Elvis Presley – Where No One Stands Alone, a new album focused on Elvis' love of gospel music. Produced by Joel Weinshanker, Lisa Marie Presley and Andy Childs, the album introduced newly recorded instrumentation along with vocals from singers who had performed in the past with Elvis. It included a reimagined duet with Lisa Marie, on the album's title track.", "title": "Life and career" }, { "paragraph_id": 91, "text": "In 2022, Baz Luhrmann's film Elvis, a biographical film about Presley's life, was released. Presley is portrayed by Austin Butler and Parker by Tom Hanks. As of August 2022, the film had grossed $261.8 million worldwide on an $85 million budget, becoming the second-highest-grossing music biopic of all time behind Bohemian Rhapsody (2018), and the fifth-highest-grossing Australian-produced film. For his portrayal of Elvis, Butler won the Golden Globe and was nominated for the Oscar for Best Actor. In January 2023, his 1962 Lockheed 1329 JetStar sold at an auction for $260,000.", "title": "Life and career" }, { "paragraph_id": 92, "text": "Presley's earliest musical influence came from gospel. His mother recalled that from the age of two, at the Assembly of God church in Tupelo attended by the family, \"he would slide down off my lap, run into the aisle and scramble up to the platform. There he would stand looking at the choir and trying to sing with them.\" In Memphis, Presley frequently attended all-night gospel singings at the Ellis Auditorium, where groups such as the Statesmen Quartet led the music in a style that, Guralnick suggests, sowed the seeds of Presley's future stage act:", "title": "Artistry" }, { "paragraph_id": 93, "text": "The Statesmen were an electric combination ...
featuring some of the most thrillingly emotive singing and daringly unconventional showmanship in the entertainment world ... dressed in suits that might have come out of the window of Lansky's. ... Bass singer Jim Wetherington, known universally as the Big Chief, maintained a steady bottom, ceaselessly jiggling first his left leg, then his right, with the material of the pants leg ballooning out and shimmering. \"He went about as far as you could go in gospel music,\" said Jake Hess. \"The women would jump up, just like they do for the pop shows.\" Preachers frequently objected to the lewd movements ... but audiences reacted with screams and swoons.", "title": "Artistry" }, { "paragraph_id": 94, "text": "As a teenager, Presley's musical interests were wide-ranging, and he was deeply informed about both white and African-American musical idioms. Though he never had any formal training, he had a remarkable memory, and his musical knowledge was already considerable by the time he made his first professional recordings aged 19 in 1954. When Jerry Leiber and Mike Stoller met him two years later, they were astonished at his encyclopedic understanding of the blues, and, as Stoller put it, \"He certainly knew a lot more than we did about country music and gospel music.\" At a press conference the following year, he proudly declared, \"I know practically every religious song that's ever been written.\"", "title": "Artistry" }, { "paragraph_id": 95, "text": "Presley played guitar, bass, and piano; he received his first guitar when he was 11 years old. He could not read or write music and had no formal lessons; he played everything by ear. Presley often played an instrument on his recordings and produced his own music. Presley played rhythm acoustic guitar on most of his Sun recordings and his 1950s RCA albums. Presley played piano on songs such as \"Old Shep\" and \"First in Line\" from his 1956 album Elvis. He is credited with playing piano on later albums such as From Elvis in Memphis and \"Moody Blue\", and on \"Unchained Melody\", which was one of the last songs that he recorded. Presley played lead guitar on his hit single \"Are You Lonesome Tonight?\". In the '68 Comeback Special, Presley took over on lead electric guitar, the first time he had ever been seen with the instrument in public, playing it on songs such as \"Baby What You Want Me to Do\" and \"One Night\". The album Elvis Is Back! features extensive acoustic guitar playing by Presley on songs such as \"I Will Be Home Again\" and \"Like a Baby\".", "title": "Artistry" }, { "paragraph_id": 96, "text": "Presley was a central figure in the development of rockabilly, according to music historians. \"Rockabilly crystallized into a recognizable style in 1954 with Elvis Presley's first release, on the Sun label,\" writes Craig Morrison. Paul Friedlander described rockabilly as \"essentially ... an Elvis Presley construction\", with the defining elements as \"the raw, emotive, and slurred vocal style and emphasis on rhythmic feeling [of] the blues with the string band and strummed rhythm guitar [of] country\". In \"That's All Right\", the Presley trio's first record, Scotty Moore's guitar solo, \"a combination of Merle Travis–style country finger-picking, double-stop slides from acoustic boogie, and blues-based bent-note, single-string work, is a microcosm of this fusion\".
While Katherine Charlton calls Presley \"rockabilly's originator\", Carl Perkins, another pioneer of rock 'n' roll, said that \"[Sam] Phillips, Elvis, and I didn't create rockabilly\". According to Michael Campbell, the first major rockabilly song was recorded by Bill Haley. In Moore's view, \"It had been there for quite a while, really. Carl Perkins was doing basically the same sort of thing up around Jackson, and I know for a fact Jerry Lee Lewis had been playing that kind of music ever since he was ten years old.\"", "title": "Artistry" }, { "paragraph_id": 97, "text": "At RCA Victor, Presley's rock and roll sound grew distinct from rockabilly with group chorus vocals, more heavily amplified electric guitars and a tougher, more intense manner. While he was known for taking songs from various sources and giving them a rockabilly/rock and roll treatment, he also recorded songs in other genres from early in his career, from the pop standard \"Blue Moon\" at Sun Records to the country ballad \"How's the World Treating You?\" on his second RCA Victor LP to the blues of \"Santa Claus Is Back in Town\". In 1957, his first gospel record was released, the four-song EP Peace in the Valley. Certified as a million-seller, it became the top-selling gospel EP in recording history. Presley would record gospel periodically for the rest of his life.", "title": "Artistry" }, { "paragraph_id": 98, "text": "After his return from military service in 1960, Presley continued to perform rock and roll, but the characteristic style was substantially toned down. His first post-Army single, the number-one hit \"Stuck on You\", is typical of this shift. RCA Victor publicity referred to its \"mild rock beat\"; discographer Ernst Jorgensen calls it \"upbeat pop\". The number five \"She's Not You\" (1962) \"integrates the Jordanaires so completely, it's practically doo-wop\". The modern blues/R&B sound captured with success on Elvis Is Back! was essentially abandoned for six years until such 1966–67 recordings as \"Down in the Alley\" and \"Hi-Heel Sneakers\". Presley's output during most of the 1960s emphasized pop music, often in the form of ballads such as \"Are You Lonesome Tonight?\", a number-one in 1960. \"It's Now or Never\", which also topped the chart that year, was a classically influenced variation of pop based on the Neapolitan song \"'O sole mio\" and concluding with a \"full-voiced operatic cadence\". These were both dramatic numbers, but most of what Presley recorded for his many film soundtracks was in a much lighter vein.", "title": "Artistry" }, { "paragraph_id": 99, "text": "While Presley performed several of his classic ballads for the '68 Comeback Special, the sound of the show was dominated by aggressive rock and roll. He recorded few new straight rock and roll songs thereafter; as he explained, they had become \"hard to find\". A significant exception was \"Burning Love\", his last major hit on the pop charts. Like his work of the 1950s, Presley's subsequent recordings reworked pop and country songs, but in markedly different permutations. His stylistic range now began to embrace a more contemporary rock sound as well as soul and funk. Much of Elvis in Memphis, as well as \"Suspicious Minds\", cut at the same sessions, reflected this new rock and soul fusion.
In the mid-1970s, many of his singles found a home on country radio, the field where he first became a star.", "title": "Artistry" }, { "paragraph_id": 100, "text": "The developmental arc of Presley's singing voice, as described by critic Dave Marsh, goes from \"high and thrilled in the early days, [to] lower and perplexed in the final months.\" Marsh credits Presley with the introduction of the \"vocal stutter\" on 1955's \"Baby Let's Play House\". When on \"Don't Be Cruel\", Presley \"slides into a 'mmmmm' that marks the transition between the first two verses,\" he shows \"how masterful his relaxed style really is.\" Marsh describes the vocal performance on \"Can't Help Falling in Love\" as one of \"gentle insistence and delicacy of phrasing\", with the line \"'Shall I stay' pronounced as if the words are fragile as crystal\".", "title": "Artistry" }, { "paragraph_id": 101, "text": "Jorgensen calls the 1966 recording of \"How Great Thou Art\" \"an extraordinary fulfillment of his vocal ambitions\", as Presley \"crafted for himself an ad-hoc arrangement in which he took every part of the four-part vocal, from [the] bass intro to the soaring heights of the song's operatic climax\", becoming \"a kind of one-man quartet\". Guralnick finds \"Stand by Me\" from the same gospel sessions \"a beautifully articulated, almost nakedly yearning performance\", but, by contrast, feels that Presley reaches beyond his powers on \"Where No One Stands Alone\", resorting \"to a kind of inelegant bellowing to push out a sound\" that Jake Hess of the Statesmen Quartet had in his command. Hess himself thought that while others might have voices the equal of Presley's, \"he had that certain something that everyone searches for all during their lifetime.\" Guralnick attempts to pinpoint that something: \"The warmth of his voice, his controlled use of both vibrato technique and natural falsetto range, the subtlety and deeply felt conviction of his singing were all qualities recognizably belonging to his talent but just as recognizably not to be achieved without sustained dedication and effort.\"", "title": "Artistry" }, { "paragraph_id": 102, "text": "Marsh praises his 1968 reading of \"U.S. Male\", \"bearing down on the hard guy lyrics, not sending them up or overplaying them but tossing them around with that astonishingly tough yet gentle assurance that he brought to his Sun records.\" The performance on \"In the Ghetto\" is, according to Jorgensen, \"devoid of any of his characteristic vocal tricks or mannerisms\", instead relying on the exceptional \"clarity and sensitivity of his voice\". Guralnick describes the song's delivery as of \"almost translucent eloquence ... so quietly confident in its simplicity\". On \"Suspicious Minds\", Guralnick hears essentially the same \"remarkable mixture of tenderness and poise\", but supplemented with \"an expressive quality somewhere between stoicism (at suspected infidelity) and anguish (over impending loss)\".", "title": "Artistry" }, { "paragraph_id": 103, "text": "Music critic Henry Pleasants observes that \"Presley has been described variously as a baritone and a tenor. An extraordinary compass ... and a very wide range of vocal color have something to do with this divergence of opinion.\" He identifies Presley as a high baritone, calculating his range as two octaves and a third, \"from the baritone low G to the tenor high B, with an upward extension in falsetto to at least a D-flat. 
Presley's best octave is in the middle, D-flat to D-flat, granting an extra full step up or down.\" In Pleasants' view, his voice was \"variable and unpredictable\" at the bottom, \"often brilliant\" at the top, with the capacity for \"full-voiced high Gs and As that an opera baritone might envy\". Scholar Lindsay Waters, who figures Presley's range as two-and-a-quarter octaves, emphasizes that \"his voice had an emotional range from tender whispers to sighs down to shouts, grunts, grumbles, and sheer gruffness that could move the listener from calmness and surrender, to fear. His voice can not be measured in octaves, but in decibels; even that misses the problem of how to measure delicate whispers that are hardly audible at all.\" Presley was always \"able to duplicate the open, hoarse, ecstatic, screaming, shouting, wailing, reckless sound of the black rhythm-and-blues and gospel singers\", writes Pleasants, and also demonstrated a remarkable ability to assimilate many other vocal styles.", "title": "Artistry" }, { "paragraph_id": 104, "text": "When Dewey Phillips first aired \"That's All Right\" on Memphis' WHBQ, many listeners who contacted the station to ask for it again assumed that its singer was black. From the beginning of his national fame, Presley expressed respect for African-American performers and their music, and disregard for the segregation and racial prejudice then prevalent in the South. Interviewed in 1956, he recalled how in his childhood he would listen to blues musician Arthur Crudup—the originator of \"That's All Right\"—\"bang his box the way I do now, and I said if I ever got to the place where I could feel all old Arthur felt, I'd be a music man like nobody ever saw.\" The Memphis World, an African-American newspaper, reported that Presley \"cracked Memphis' segregation laws\" by attending the local amusement park on what was designated as its \"colored night\". Such statements and actions led Presley to be generally hailed in the black community during his early stardom. In contrast, many white adults \"did not like him, and condemned him as depraved. Anti-negro prejudice doubtless figured in adult antagonism. Regardless of whether parents were aware of the Negro sexual origins of the phrase 'rock 'n' roll', Presley impressed them as the visual and aural embodiment of sex.\"", "title": "Public image" }, { "paragraph_id": 105, "text": "Despite the largely positive view of Presley held by African Americans, a rumor spread in mid-1957 that he had announced, \"The only thing Negroes can do for me is buy my records and shine my shoes.\" A journalist with the national African American weekly Jet, Louie Robinson, pursued the story. On the set of Jailhouse Rock, Presley granted Robinson an interview, though he was no longer dealing with the mainstream press. He denied making such a statement:", "title": "Public image" }, { "paragraph_id": 106, "text": "I never said anything like that, and people who know me know that I wouldn't have said it. ... A lot of people seem to think I started this business. But rock 'n' roll was here a long time before I came along. Nobody can sing that kind of music like colored people. Let's face it: I can't sing like Fats Domino can. I know that.", "title": "Public image" }, { "paragraph_id": 107, "text": "Robinson found no evidence that the remark had ever been made, and elicited testimony from many individuals indicating that Presley was anything but racist. 
Blues singer Ivory Joe Hunter, who had heard the rumor before he visited Graceland, reported of Presley, \"He showed me every courtesy, and I think he's one of the greatest.\" Though the rumored remark was discredited, it was still being used against Presley decades later.", "title": "Public image" }, { "paragraph_id": 108, "text": "The persistence of such attitudes was fueled by resentment over the fact that Presley, whose musical and visual performance idiom owed much to African-American sources, achieved the cultural acknowledgement and commercial success largely denied his black peers. Into the 21st century, the notion that Presley had \"stolen\" black music still found adherents. Notable among African-American entertainers expressly rejecting this view was Jackie Wilson, who argued, \"A lot of people have accused Elvis of stealing the black man's music, when in fact, almost every black solo entertainer copied his stage mannerisms from Elvis.\" Moreover, Presley acknowledged his debt to African-American musicians throughout his career. Addressing his '68 Comeback Special audience, he said, \"Rock 'n' roll music is basically gospel or rhythm and blues, or it sprang from that. People have been adding to it, adding instruments to it, experimenting with it, but it all boils down to [that].\" Nine years earlier, he had said, \"Rock 'n' roll has been around for many years. It used to be called rhythm and blues.\"", "title": "Public image" }, { "paragraph_id": 109, "text": "Presley's physical attractiveness and sexual appeal were widely acknowledged. \"He was once beautiful, astonishingly beautiful\", according to critic Mark Feeney. Television director Steve Binder reported, \"I'm straight as an arrow and I got to tell you, you stop, whether you're male or female, to look at him. He was that good looking. And if you never knew he was a superstar, it wouldn't make any difference; if he'd walked in the room, you'd know somebody special was in your presence.\" His performance style was equally responsible for Presley's eroticized image. Critic George Melly described him as \"the master of the sexual simile, treating his guitar as both phallus and girl\". In his Presley obituary, Lester Bangs credited him with bringing \"overt blatant vulgar sexual frenzy to the popular arts in America\". Ed Sullivan's declaration that he perceived a soda bottle in Presley's trousers was echoed by rumors involving a similarly positioned toilet roll tube or lead bar.", "title": "Public image" }, { "paragraph_id": 110, "text": "While Presley was marketed as an icon of heterosexuality, some critics have argued that his image was ambiguous. In 1959, Sight and Sound's Peter John Dyer described his onscreen persona as \"aggressively bisexual in appeal\". Brett Farmer places the \"orgasmic gyrations\" of the title dance sequence in Jailhouse Rock within a lineage of cinematic musical numbers that offer a \"spectacular eroticization, if not homoeroticization, of the male image\". In the analysis of Yvonne Tasker, \"Elvis was an ambivalent figure who articulated a peculiar feminised, objectifying version of white working-class masculinity as aggressive sexual display.\"", "title": "Public image" }, { "paragraph_id": 111, "text": "Reinforcing Presley's image as a sex symbol were the reports of his dalliances with Hollywood stars and starlets, from Natalie Wood in the 1950s to Connie Stevens and Ann-Margret in the 1960s to Candice Bergen and Cybill Shepherd in the 1970s. 
June Juanico of Memphis, one of Presley's early girlfriends, later blamed Parker for encouraging him to choose his dating partners with publicity in mind. Presley never grew comfortable with the Hollywood scene, and most of these relationships were insubstantial.", "title": "Public image" }, { "paragraph_id": 112, "text": "I know he invented rock and roll, in a manner of speaking, but ... that's not why he's worshiped as a god today. He's worshiped as a god today because in addition to inventing rock and roll he was the greatest ballad singer this side of Frank Sinatra—because the spiritual translucence and reined-in gut sexuality of his slow weeper and torchy pop blues still activate the hormones and slavish devotion of millions of female human beings worldwide.", "title": "Legacy" }, { "paragraph_id": 113, "text": "—Robert Christgau, December 24, 1985", "title": "Legacy" }, { "paragraph_id": 114, "text": "Presley's rise to national attention in 1956 transformed the field of popular music and had a huge effect on the broader scope of popular culture. As the catalyst for the cultural revolution that was rock and roll, he was central not only to defining it as a musical genre but also to making it a touchstone of youth culture and rebellious attitude. With its racially mixed origins—repeatedly affirmed by Presley—rock and roll's occupation of a central position in mainstream American culture facilitated a new acceptance and appreciation of black culture.", "title": "Legacy" }, { "paragraph_id": 115, "text": "In this regard, Little Richard said of Presley, \"He was an integrator. Elvis was a blessing. They wouldn't let black music through. He opened the door for black music.\" Al Green agreed: \"He broke the ice for all of us.\"", "title": "Legacy" }, { "paragraph_id": 116, "text": "President Jimmy Carter remarked on Presley's legacy in 1977: \"His music and his personality, fusing the styles of white country and black rhythm and blues, permanently changed the face of American popular culture.\" Presley also heralded the vastly expanded reach of celebrity in the era of mass communication: within a year of his first appearance on American network television, he was regarded as one of the most famous people in the world.", "title": "Legacy" }, { "paragraph_id": 117, "text": "Presley's name, image, and voice are recognized around the world. He has inspired a legion of impersonators. In polls and surveys, he is recognized as one of the most important popular music artists and influential Americans. American composer and conductor Leonard Bernstein said, \"Elvis Presley is the greatest cultural force in the twentieth century. He introduced the beat to everything and he changed everything—music, language, clothes.\" John Lennon said that \"Nothing really affected me until Elvis.\" Bob Dylan described the sensation of first hearing Presley as \"like busting out of jail\".", "title": "Legacy" }, { "paragraph_id": 118, "text": "For much of his adult life, Presley, with his rise from poverty to riches and fame, had seemed to epitomize the American Dream. In his final years, and following the revelations about his circumstances after his death, he became a symbol of excess and gluttony. Increasing attention was paid to his appetite for the rich, heavy Southern cooking of his upbringing, foods such as chicken-fried steak and biscuits and gravy.
In particular, his love of fried peanut butter, banana, and (sometimes) bacon sandwiches, now known as \"Elvis sandwiches\", came to symbolize this characteristic.", "title": "Legacy" }, { "paragraph_id": 119, "text": "Since 1977, there have been numerous alleged sightings of Presley. A long-standing conspiracy theory among some fans is that he faked his death. Adherents cite alleged discrepancies in the death certificate, reports of a wax dummy in his original coffin, and accounts of Presley planning a diversion so he could retire in peace. An unusually large number of fans have domestic shrines devoted to Presley and journey to sites with which he is connected, however faintly. On the anniversary of his death, thousands of people gather outside Graceland for a candlelight ritual. \"With Elvis, it is not just his music that has survived death\", writes Ted Harrison. \"He himself has been raised, like a medieval saint, to a figure of cultic status. It is as if he has been canonized by acclamation.\"", "title": "Legacy" }, { "paragraph_id": 120, "text": "On the 25th anniversary of Presley's death, The New York Times asserted:", "title": "Legacy" }, { "paragraph_id": 121, "text": "All the talentless impersonators and appalling black velvet paintings on display can make him seem little more than a perverse and distant memory. But before Elvis was camp, he was its opposite: a genuine cultural force. ... Elvis' breakthroughs are underappreciated because in this rock-and-roll age, his hard-rocking music and sultry style have triumphed so completely.", "title": "Legacy" }, { "paragraph_id": 122, "text": "Not only Presley's achievements but his failings as well are seen by some cultural observers as adding to the power of his legacy, as in this description by Greil Marcus:", "title": "Legacy" }, { "paragraph_id": 123, "text": "Elvis Presley is a supreme figure in American life, one whose presence, no matter how banal or predictable, brooks no real comparisons. ... The cultural range of his music has expanded to the point where it includes not only the hits of the day, but also patriotic recitals, pure country gospel, and really dirty blues. ... Elvis has emerged as a great artist, a great rocker, a great purveyor of schlock, a great heart throb, a great bore, a great symbol of potency, a great ham, a great nice person, and, yes, a great American.", "title": "Legacy" }, { "paragraph_id": 124, "text": "Having sold about 500 million records worldwide, Presley is one of the best-selling music artists of all time.", "title": "Achievements" }, { "paragraph_id": 125, "text": "Presley holds the records for most songs charting in Billboard's top 40 (115) and top 100 (152, according to chart statistician Joel Whitburn; 139, according to Presley historian Adam Victor). Presley's rankings for top ten and number-one hits vary depending on how the double-sided \"Hound Dog/Don't Be Cruel\" and \"Don't/I Beg of You\" singles, which precede the inception of Billboard's unified Hot 100 chart, are analyzed. According to Whitburn's analysis, Presley holds the top-ten record with 38, tying with Madonna; per Billboard's current assessment, he ranks second with 36. Whitburn and Billboard concur that the Beatles hold the record for most number-one hits with 20, and that Mariah Carey is second with 19. Whitburn has Presley with 18; Billboard has him third with 17.
According to Billboard, Presley has 79 cumulative weeks at number one; according to Whitburn and the Rock and Roll Hall of Fame, he stands alone at 80, with only Mariah Carey having more, at 91 weeks. He holds the records for most number-one singles on the UK chart, with 21, and most singles reaching the top ten, with 76.", "title": "Achievements" }, { "paragraph_id": 126, "text": "As an album artist, Presley is credited by Billboard with the record for the most albums charting in the Billboard 200: 129, far ahead of second-place Frank Sinatra's 82. He also holds the record for most time spent at number one on the Billboard 200: 67 weeks. In 2015 and 2016, two albums setting Presley's vocals against music by the Royal Philharmonic Orchestra, If I Can Dream and The Wonder of You, both reached number one in the UK. This gave him a new record for number-one UK albums by a solo artist with 13, and extended his record for the longest span between number-one albums by any artist—Presley had first topped the British chart in 1956 with his self-titled debut.", "title": "Achievements" }, { "paragraph_id": 127, "text": "As of 2023, the Recording Industry Association of America (RIAA) credits Presley with 146.5 million certified album sales in the US, third all time behind the Beatles and Garth Brooks. He holds the records for most gold albums (101, nearly double second-place Barbra Streisand's 51), and most platinum albums (57). His 25 multi-platinum albums rank second behind the Beatles' 26. His total of 197 album certification awards (including one diamond award) far outpaces the Beatles' second-best 122. He has the 9th-most gold singles (54, tied with Justin Bieber) and the 16th-most platinum singles (27).", "title": "Achievements" }, { "paragraph_id": 128, "text": "In 2012, the spider Paradonea presleyi was named in his honor. In 2018, President Donald Trump awarded Presley the Presidential Medal of Freedom posthumously.", "title": "Achievements" }, { "paragraph_id": 129, "text": "A vast number of recordings have been issued under Presley's name. The number of his original master recordings has been variously calculated as 665 and 711. His career began and he was most successful during an era when singles were the primary commercial medium for pop music. For his albums, the distinction between \"official\" studio records and other forms is often blurred.", "title": "Discography" }, { "paragraph_id": 130, "text": "The correct spelling of his middle name has long been a matter of debate. The physician who delivered him wrote \"Elvis Aaron Presley\" in his ledger. The state-issued birth certificate reads \"Elvis Aron Presley\". The name was chosen after the Presleys' friend and fellow congregation member Aaron Kennedy, though a single-A spelling was probably intended by Presley's parents to parallel the middle name of Presley's stillborn brother, Jesse Garon. It reads Aron on most official documents produced during his lifetime, including his high school diploma, RCA Victor record contract, and marriage license, and this was generally taken to be the proper spelling. In 1966, Presley expressed the desire to his father that the more traditional biblical rendering, Aaron, be used henceforth, \"especially on legal documents\". Five years later, the Jaycees citation honoring him as one of the country's Outstanding Young Men used Aaron. Late in his life, he sought to officially change the spelling to Aaron and discovered that state records already listed it that way.
Knowing his son's wishes for the middle name, Presley's father chose Aaron as the spelling for the tombstone, and it is the spelling his estate has designated as official.", "title": "Explanatory notes" } ]
Elvis Aaron Presley, also known mononymously as Elvis, was an American singer and actor. Known as the "King of Rock and Roll", he is regarded as one of the most significant cultural figures of the 20th century. Presley's energized interpretations of songs and sexually provocative performance style, combined with a singularly potent mix of influences across color lines during a transformative era in race relations, brought both great success and initial controversy. Presley was born in Tupelo, Mississippi; his family relocated to Memphis, Tennessee, when he was 13. His music career began there in 1954, at Sun Records with producer Sam Phillips, who wanted to bring the sound of African-American music to a wider audience. Presley, on guitar and accompanied by lead guitarist Scotty Moore and bassist Bill Black, was a pioneer of rockabilly, an uptempo, backbeat-driven fusion of country music and rhythm and blues. In 1955, drummer D. J. Fontana joined to complete the lineup of Presley's classic quartet and RCA Victor acquired his contract in a deal arranged by Colonel Tom Parker, who would manage him for more than two decades. Presley's first RCA single, "Heartbreak Hotel", was released in January 1956 and became a number-one hit in the United States. Within a year, RCA would sell ten million Presley singles. With a series of successful television appearances and chart-topping records, Presley became the leading figure of the newly popular rock and roll; though his performative style and promotion of the then-marginalized sound of African Americans led to him being widely considered a threat to the moral well-being of white American youth. In November 1956, Presley made his film debut in Love Me Tender. Drafted into military service in 1958, he relaunched his recording career two years later with some of his most commercially successful work. Presley held few concerts, however, and guided by Parker, proceeded to devote much of the 1960s to making Hollywood films and soundtrack albums, most of them critically derided. Some of his most famous films included Jailhouse Rock (1957), Blue Hawaii (1961), and Viva Las Vegas (1964). In 1968, following a seven-year break from live performances, he returned to the stage in the acclaimed television comeback special Elvis, which led to an extended Las Vegas concert residency and a string of highly profitable tours. In 1973, Presley gave the first concert by a solo artist to be broadcast around the world, Aloha from Hawaii. However, years of prescription drug abuse and unhealthy eating habits severely compromised his health, and Presley died suddenly in 1977 at his Graceland estate at the age of 42. Having sold roughly 500 million records worldwide, Presley is one of the best-selling music artists of all time. He was commercially successful in many genres, including pop, country, rhythm & blues, adult contemporary, and gospel. He won three Grammy Awards, received the Grammy Lifetime Achievement Award at age 36, and has been inducted into multiple music halls of fame. He also holds several records, including the most RIAA-certified gold and platinum albums, the most albums charted on the Billboard 200, the most number-one albums by a solo artist on the UK Albums Chart, and the most number-one singles by any act on the UK Singles Chart. In 2018, Presley was posthumously awarded the Presidential Medal of Freedom.
2001-10-13T07:54:10Z
2023-12-22T06:28:47Z
[ "Template:Lang", "Template:Cite book", "Template:Other uses", "Template:Div col", "Template:Col-2", "Template:Colbegin", "Template:Pp-semi", "Template:Efn", "Template:Inflation", "Template:Refn", "Template:Example needed", "Template:As of", "Template:Nbsp", "Template:Featured article", "Template:Listen", "Template:US$", "Template:Div col end", "Template:Refbegin", "Template:ISBN", "Template:Cite magazine", "Template:Elvis Presley singles", "Template:Short description", "Template:Infobox person", "Template:Blockquote", "Template:Navboxes", "Template:See also", "Template:Reflist", "Template:Cite journal", "Template:Authority control", "Template:\"'", "Template:Colend", "Template:Col-end", "Template:Amg name", "Template:Use mdy dates", "Template:Col-begin", "Template:Portal bar", "Template:Redirect-multi", "Template:Use American English", "Template:Further", "Template:Snd", "Template:Quote box", "Template:Sfn", "Template:'\"", "Template:Refend", "Template:Tcmdb name", "Template:Pp-move", "Template:Main", "Template:'", "Template:Cite news", "Template:IMDb name", "Template:Elvis Presley", "Template:Cite web", "Template:Commons category", "Template:Wikiquote" ]
https://en.wikipedia.org/wiki/Elvis_Presley
9,294
The Evil Dead
The Evil Dead is a 1981 American supernatural horror film written and directed by Sam Raimi (in his feature directorial debut). The film stars Bruce Campbell, Ellen Sandweiss, Richard DeManincor, Betsy Baker and Theresa Tilly. The story focuses on five college students vacationing in an isolated cabin in a remote wooded area. After they find an audio tape that, when played, releases a legion of demons and spirits, four members of the group suffer from demonic possession, forcing the fifth member, Ash Williams (Campbell), to survive an onslaught of increasingly gory mayhem. Raimi, producer Robert G. Tapert, Campbell, and their friends produced the short film Within the Woods as a proof of concept to build the interest of potential investors, which secured US$90,000 to begin work on The Evil Dead. Principal photography took place on location in a remote cabin located in Morristown, Tennessee, in a difficult filming process that proved extremely uncomfortable for the cast and crew; the film's extensive prosthetic makeup effects and stop-motion animations were created by artist Tom Sullivan. The completed film attracted the interest of producer Irvin Shapiro, who helped screen the film at the 1982 Cannes Film Festival. Horror author Stephen King gave a rave review of the film, which resulted in New Line Cinema acquiring its distribution rights. The Evil Dead grossed $2.4 million in the United States and between $2.7 and $29.4 million worldwide. Both early and later critical reception were universally positive; in the years since its release, the film has developed a reputation as one of the most significant cult films, cited among the greatest horror films of all time and one of the most successful independent films. It launched the careers of Raimi, Tapert, and Campbell, who have continued to collaborate on several films together, such as Raimi's Spider-Man trilogy. The Evil Dead spawned a media franchise, beginning with two direct sequels written and directed by Raimi, Evil Dead II (1987) and Army of Darkness (1992), a fourth film, Evil Dead (2013), which serves as a soft reboot and continuation, a follow-up television series, Ash vs Evil Dead, which aired from 2015 to 2018, and a fifth film, Evil Dead Rise (2023); the franchise also includes video games and comic books. The film's protagonist Ash Williams is considered to be a cultural icon. Five Michigan State University students – Ash Williams, his girlfriend Linda, his sister Cheryl, their friend Scott, and Scott's girlfriend Shelly – vacation at an isolated cabin in rural Tennessee. Approaching the cabin, the group notices the porch swing move on its own but suddenly stop as Scott grabs the doorknob. While Cheryl draws a picture of a clock, the clock stops, and she hears a faint, demonic voice tell her to "join us". Her hand becomes possessed, turns pale and draws a picture of a book with a demonic face on its cover. Although shaken, she does not mention the incident. When the cellar trapdoor flies open during dinner, Shelly, Linda, and Cheryl remain upstairs as Ash and Scott investigate the cellar. They find the Naturom Demonto, a Sumerian version of the Egyptian Book of the Dead, along with archaeologist Raymond Knowby's tape recorder, and they take the items upstairs. Scott plays a tape of incantations that resurrect a demonic entity. Cheryl yells for Scott to turn off the tape recorder, and a tree branch breaks one of the cabin's windows. 
Later that evening, an agitated Cheryl goes into the woods to investigate strange noises and is attacked and raped by the vines and branches of demonically possessed trees. When she escapes and returns to the cabin bruised and anguished, Ash agrees to take her back into town, only to discover that the bridge to the cabin has been destroyed. Cheryl panics as she realizes that they are now trapped and the demonic entity will not let them leave. Back at the cabin, Ash listens to more of the tape, learning that the only way to kill the entity is to dismember a possessed host. As Linda and Shelly play spades, Cheryl correctly calls out the cards without looking at them, succumbs to the entity, and levitates. In a raspy, demonic voice, she demands to know why they disturbed her sleep and threatens to kill them. She stabs Linda in the ankle with a pencil and throws Ash into a shelf. Scott knocks Cheryl into the cellar and locks her inside. The group argues about what to do. Having become paranoid upon seeing Cheryl's demonic transformation, Shelly lies down in her room but is drawn to look out of her window, where a demon crashes through and attacks her, turning her into a Deadite. She attacks Scott before he throws her into the fireplace, slashes her wrist and then stabs her in the back with a Sumerian dagger, apparently killing her. When she reanimates, Scott dismembers her with an axe. Ash and Scott then bury her remains. Shaken by the experience, Scott decides to leave in order to find a way back to town. He returns shortly after, mortally wounded by the possessed trees, and dies while warning Ash that the trees will not let them escape alive. When Ash checks on Linda, he is horrified to find that she has become possessed. She attacks him, but he stabs her with the Sumerian dagger. Unwilling to dismember her, he buries her instead. She revives and attacks him, forcing him to decapitate her with a shovel. Her headless body bleeds on his face as it tries to rape him. He manages to escape as Linda dies, and then retreats to the cabin. Back inside, Ash discovers that Cheryl has escaped the cellar. Cheryl eludes Ash and attempts to choke him. Ash escapes her grasp, then shoots Cheryl in the jaw. As Ash is barricading the door, Scott reanimates into a Deadite. Scott attacks Ash and inadvertently knocks the book close to the fireplace. Ash gouges Scott's eyes out and pulls a tree branch from Scott's stomach, causing him to bleed out and fall to the ground. Cheryl breaks through the trapdoor and knocks Ash to the floor. As Scott and Cheryl continue to attack Ash on the ground, Ash grabs the book and throws it into the fireplace. While the book burns, the Deadites freeze in place, then begin to rapidly decompose. Large appendages burst from both corpses, covering Ash in blood. The bodies of Scott and Cheryl then completely decompose. Dawn breaks, and Ash stumbles outside. As Ash walks away from the cabin, an unseen demon moves rapidly through the forest, rushes through the cabin, and attacks him as he screams in terror. Sam Raimi and Bruce Campbell grew up together and have been friends from an early age. The duo made several low-budget Super 8 mm film projects together. Several were comedies, including Clockwork and It's Murder!. Shooting a suspense scene in It's Murder! 
inspired them to approach careers in the horror genre; after researching horror cinema at drive-in theaters, Raimi was set on directing a horror film, opting to shoot a proof of concept short film – described by the director as a "prototype" – that would attract the interest of financiers, and use the funds raised to shoot a full-length project. The short film that Raimi created was called Within the Woods, which was produced for $1,600. For The Evil Dead, Raimi required over $100,000. To generate funds to produce the film, Raimi approached Phil Gillis, a lawyer to one of his friends. Raimi showed him Within the Woods, and although Gillis was not impressed by the short film, he offered Raimi legal advice on how to produce The Evil Dead. With his advice in mind, Raimi asked a variety of people for donations, and eventually even "begged" some. Campbell had to ask several of his own family members, and Raimi asked every individual he thought might be interested. He eventually raised enough money to produce a full-length film, though not the full amount he originally wanted. Raimi said the film cost $375,000. With enough money to produce the film, Raimi and Campbell set out to make what was then titled Book of the Dead, a name inspired by Raimi's interest in the fiction of H. P. Lovecraft. The film was supposed to be a remake of Within the Woods, with higher production values and a full-length running time. Raimi turned 20 just before shooting began, and he considered the project his "rite of passage". Raimi asked several of his friends and past collaborators for help in making The Evil Dead. Campbell offered to produce the film alongside Tapert, and was subsequently cast as Ash Williams, the main character, since his producing responsibilities made him the only actor willing to stay during the production's entirety. To acquire more actors for the project, Raimi put an ad in The Detroit News. Betsy Baker was one of the actresses who responded, and Ellen Sandweiss, who appeared in Within the Woods, was also cast. The crew consisted almost entirely of Raimi and Campbell's friends and family. The special make-up effects artist for Within the Woods, Tom Sullivan, was brought on to create the effects after expressing enthusiasm for working with Raimi. He helped create many of the film's foam latex and fake blood effects, and added coffee as an extra ingredient to the traditional fake blood formula of corn syrup and food coloring. Without any formal assistance from location scouts, the cast had to find filming locations on their own. The crew initially attempted to shoot the film in Raimi's hometown of Royal Oak, Michigan, but instead chose Morristown, Tennessee, because Tennessee was the only state to express enthusiasm for the project. The crew quickly found a remote cabin located several miles away from any other buildings. During pre-production, the 13 crew members had to stay at the cabin, leading to several people sleeping in the same room. The living conditions were notoriously difficult, with several arguments breaking out between crew members. Steve Frankel was the only carpenter on set, making him the sole contributor to the film's art direction. For exterior shots, Frankel had to produce several elaborate props with a circular saw. Otherwise, the cabin mostly remained the way it was found during production. The cabin had no plumbing, but phone lines were connected to it. The film was made on Kodak 16mm film stock with a rented camera. 
The inexperienced crew made filming a "comedy of errors". On the first day of filming, the crew got lost in the woods while shooting a scene on a bridge. Several crew members were injured during the shoot, and because of the cabin's remoteness, securing medical assistance was difficult. One notably gruesome moment on set involved Baker's eyelashes being ripped off during the removal of her face-mask. Because of the low budget, contact lenses as thick as glass had to be applied to the actors to achieve the "demonic eyes" effect. The lenses took ten minutes to apply, and could only be left on for about 15 minutes because the eyes could not "breathe" with them applied. Campbell later commented that to get the effect of wearing these lenses, they had to put "Tupperware" over their eyes. Raimi developed a sense of mise en scène, coming up with ideas for scenes at a fast rate. He had drawn several crude illustrations to help him break down the flow of scenes. The crew was surprised when Raimi began using Dutch angles during shots to build atmosphere during scenes. To accommodate Raimi's style of direction, several elaborate, low-budget rigs had to be built, since the crew could not afford a camera dolly. One involved the "vas-o-cam", which relied on a mounted camera that was slid down long wooden platforms to create a more fluid sense of motion. A camera trick used to emulate a Steadicam inexpensively was the "shaky cam", which involved mounting the camera to a piece of wood and having two camera operators sprint around the swamp. During scenes involving the unseen force in the woods watching the characters, Raimi had to run through the woods with the makeshift rig, jumping over logs and stones. This often proved difficult due to mist in the swamp. The film's final scene was shot with the camera mounted to a bike, while it was quickly driven through the cabin to create a seamless long take. Raimi had been a big fan of The Three Stooges during his youth, which inspired him to use "Fake Shemps" during production. In any scene that required a background shot of a character, he used another actor as a substitute if the original actor was preoccupied. During a close-up involving Richard DeManincor's hand opening a curtain, Raimi used his own hand in the scene since it was more convenient. His brother Ted Raimi was used as a "Fake Shemp" in many scenes when the original actor was unavailable. Raimi enjoyed "torturing" his actors. He believed that to capture pain and anger in his actors, he had to abuse them a little at times, saying, "if everyone was in extreme pain and misery, that would translate into a horror". Producer Robert Tapert agreed with Raimi, commenting that he "enjoyed when an actor bleeds." While shooting a scene with Campbell running down a hill, Campbell tripped and injured his leg. Raimi enjoyed poking Campbell's injury with a stick he found in the woods. Because of the copious amounts of blood in the film, the crew produced gallons of fake blood with Karo corn syrup. It took Campbell hours to remove the sticky substance from himself. Several actors were inadvertently stabbed or thrown into objects during production. During the last few days on set, the conditions became so extreme that the crew began burning furniture to stay warm. Since at that point only exterior shots needed to be filmed, they burned nearly every piece of furniture left. Several actors went days without showering, and because of the freezing conditions, several caught colds and other illnesses. 
Campbell later described the filming process as nearly "twelve weeks of mirthless exercise in agony", though he allowed that he did manage to have fun while on set. On January 23, 1980, filming was finished and almost every crew member left the set to return home, with Campbell staying with Raimi. While looking over the footage that had been shot, Raimi discovered that a few pick-ups were required to fill in missing shots. Four days of re-shoots were then done to complete the film. The final moment involved Campbell having "monster-guts" splattered on him in the basement. Summing up the production decades later, Campbell remarked: "It's low-budget, it's got rough edges," but even so, "there are parts of that movie that are visually stunning." After the extensive filming process, Raimi had a "mountain of footage" that he had to put together. To cut the film, he chose a Detroit editing association, where he met Edna Paul. Paul's assistant was Joel Coen of the Coen brothers, who helped with the film's editing. Paul edited a majority of the film, although Coen edited the shed sequence. Coen had been inspired by Raimi's Within the Woods and liked the idea of producing a prototype film to help build the interest of investors. Joel used the concept to help make Blood Simple with his brother Ethan, and he and Raimi became friends following the editing process. The film's first cut ran to around 117 minutes, which Campbell called an impressive achievement in light of the 65-minute length of the screenplay. The cut scenes focused on the main character's lamentation at being unable to save the victims from their deaths; the film was edited down to make it less "grim and depressing" and to reach a more marketable 85 minutes. Raimi was inspired by the fact that Brian De Palma was editing his own film Blow Out with John Travolta at the same sound facility. One of the most intricate moments during editing was the stop-motion animation sequence where the corpses "melted", which took hours to cut properly. The film's unusual sounds required extensive recording work from the crew. Several sounds were not recorded properly during shooting, which meant the effects had to be redone in the editing rooms. Dead chickens were stabbed to replicate the sounds of mutilated flesh, and Campbell had to scream into a microphone for several hours. Much like Within the Woods, The Evil Dead needed to be blown up to 35mm, the industry standard, to be played at movie theaters. The relatively large budget made this a much simpler process with The Evil Dead than it had been with the short film. With the film completed, Raimi and the crew decided to celebrate with a "big premiere". They chose to screen the film at Detroit's Redford Theatre, which Campbell had often visited as a child. Raimi opted to have the most theatrical premiere possible, using custom tickets and wind tracks set in the theater, and stationing ambulances outside the theater to build atmosphere. The premiere setup was inspired by horror director William Castle, who would often attempt to scare his audiences by using gimmicks. Local turnout for the premiere exceeded the cast's expectations, with a thousand patrons showing up. The audiences responded enthusiastically to the premiere, which led to Raimi's idea of "touring" the film to build hype. Raimi showed the film to anyone willing to watch it, booking meetings with distribution agents and anyone with experience in the film industry. 
Eventually Raimi came across Irvin Shapiro, the man who was responsible for the distribution of George A. Romero's Night of the Living Dead and other famous horror films. Upon first viewing the film, he joked that while it "wasn't Gone with the Wind", it had commercial potential, and he expressed an interest in distributing it. It was his idea not to use the working title Book of the Dead, because he thought it made the film sound boring. Raimi brainstormed several ideas, eventually going with The Evil Dead, deemed the "least worst" title. Shapiro also advised distributing the film worldwide to garner a larger income, though it required a further financial investment by Raimi, who managed to scrape together what little money he had. Shapiro was a founder of the Cannes Film Festival, and allowed Raimi to screen the film at the 1982 festival out of competition. Stephen King was present at its screening and gave the film a rave review. USA Today released an article about King's favorite horror films; the author cited The Evil Dead as his fifth favorite film of the genre. The film deeply affected King, who commented that while watching it at Cannes, he was "registering things [he] had never seen in a movie before". He became one of the film's largest supporters during the early efforts to find a distributor, eventually describing it as the "most ferociously original film of the year", a quote used in the film's promotional pieces. King's comments attracted the interest of critics, who otherwise would likely have dismissed the low-budget thriller. The film's press attracted the attention of British film distribution agent Stephen Woolley. Though he considered the film a big risk, Woolley decided to take on the job of releasing it in the United Kingdom. The film was promoted in an unconventional manner for a film of its budget, receiving marketing on par with that of larger-budget films. Dozens of promotional pieces, including film posters and trailers, were showcased in the UK, a level of promotion rarely expended on such a low-budget film. Woolley was impressed by Raimi, whom he called "charming", and was an admirer of the film, which led to his taking more risks with the film's promotion than he normally would have. Fangoria started covering the film in late 1982, writing several articles about its long production history. Early critical reception was very positive, and along with the approval of Fangoria, King, and Shapiro, the film generated an impressive amount of interest before its commercial premiere. New Line Cinema, one of the distributors interested in the film, negotiated an agreement to distribute it domestically. The film had several "sneak previews" before its commercial release, including screenings in New York and Detroit. Audience reception at both screenings was enthusiastic, and the film generated enough interest that wider distribution was planned. New Line Cinema wrote Raimi a check large enough to pay off all the investors, and decided to release the film in an unusual manner: simultaneously into both cinemas and onto VHS, with substantial domestic promotion. Because of its large promotional campaign, the film performed above expectations at the box office, although the initial domestic gross was described as "disappointing." The movie opened in 15 theaters and grossed $108,000 in its opening weekend. Word of mouth later spread, and the film became a "sleeper hit". 
It grossed $2,400,000 domestically, nearly eight times its production budget. Sources differ as to whether it grossed $261,944 overseas, for a worldwide gross of $2,661,944, or $27 million overseas, for a worldwide gross of $29.4 million. Raimi said in 1990 that the film "did very well overseas and did very poorly domestically" and that its investors earned a return of "about five times their initial investment." The film's release was met with controversy, as Raimi had made the film as gruesome as possible with neither interest in nor fear of censorship. Writer Bruce Kawin described The Evil Dead as one of the most notorious splatter films of its day, along with Cannibal Holocaust and I Spit on Your Grave. In the UK, the film was trimmed by 49 seconds before it was granted an X certificate for cinema release. This censored version was also released on home video; at the time there was no requirement that films had to be classified for video release. A campaign by the pro-censorship organization NVLA led to the film being labelled a "video nasty", and when the Video Recordings Act was passed in 1984, the video version was removed from circulation. In 1990, a further 66 seconds were trimmed from the already censored version and the film was granted an 18 certificate for home video release. In 2000, the uncut version was finally granted an 18 certificate for both cinema and home video. In the US, the film received an X rating. Films carrying this label were considered extremely violent and disturbing, and the rating was more commonly associated with pornographic films. The film has since been re-rated NC-17 for "substantial graphic horror violence and gore", though many recent home media releases have been issued without a rating. The film remains banned, either theatrically or on video, in some countries. The Evil Dead was first released on VHS by Thorn EMI in 1983, and Thorn's successor company HBO/Cannon Video later repackaged the film. HBO Video's former partner Congress Video, a company notable for public domain films, issued its own version in 1989. In its first week of video release, the film made £100,000 in the UK. It quickly became that week's bestselling video release, and later became the year's bestselling video in the UK, out-grossing large-budget horror releases such as The Shining. Its impressive European performance was attributed to its heavy promotion there and the more open-minded nature of European audiences. The resurgence of The Evil Dead in the home-video market came through two companies that restored the film from its negatives and issued special editions in 1998: Anchor Bay Entertainment on VHS, and Elite Entertainment on LaserDisc. Anchor Bay issued the film's first DVD release on January 19, 1999, and Elite followed with a special collector's edition DVD on March 30, 1999. Between them, Elite and Anchor Bay have released six different DVD versions of The Evil Dead, most notably the 2002 "Book Of The Dead" edition, packaged in a latex replica of the Necronomicon sculpted by Tom Sullivan, and the 2007 three-disc "Ultimate Edition", which contained the widescreen and original full-frame versions of the movie. The film made its high-definition debut with a 2010 Blu-ray release. Lionsgate Films released a 4K Ultra HD Blu-ray edition of The Evil Dead on October 9, 2018. Upon its release, contemporary critical opinion was largely positive. 
Bob Martin, editor of Fangoria, reviewed the film before its formal premiere and proclaimed that it "might be the exception to the usual run of low-budget horror films". He followed up on this praise after the film's premiere, stating: "Since I started editing this magazine, I have not seen any new film that I could recommend to our readers with more confidence that it would be loved, embraced and hailed as a new milestone in graphic horror". The Los Angeles Times called the film an "instant classic", proclaiming it as "probably the grisliest well-made movie ever." In a 1982 review, staff from Variety wrote that the film "emerges as the ne plus ultra of low-budget gore and shock effect", commenting that the "powerful" and inventive camerawork was key to creating a sense of dread. British press for the film was positive; Kim Newman of Monthly Film Bulletin, Richard Cook of NME and Julian Petley of Film and Filming all gave the film good reviews during its early release. Petley and Cook compared the film to other contemporary horror films, writing that it expressed more imagination and "youthful enthusiasm" than an average horror film. Cook described Raimi's camera work as "audacious", stating that the film's visceral nature was greatly helped by the style of direction. Woolley, Newman and several critics complimented the film for its unexpected use of black comedy, which elevated it above its genre's potential trappings. These critics compared the film to the surrealistic work of Georges Franju and Jean Cocteau, noting the cinephilic references to Cocteau's film Orpheus. Writer Lynn Schofield Clark, in her book From Angels to Aliens, compared the film to better-known horror films such as The Exorcist and The Omen, citing it as a key supernatural thriller. On the review aggregator website Rotten Tomatoes, 86% of 83 critics' reviews are positive, with an average rating of 7.70/10. The website's consensus reads: "So scrappy that it feels as illicit as a book found in the woods, The Evil Dead is a stomach-churning achievement in bad taste that marks a startling debut for wunderkind Sam Raimi." Empire stated the film's "reputation was deserved", writing that the film was impressive considering its low budget and the cast's inexperience. The magazine's reviewer commented that the film successfully blended the "bizarre" combination of Night of the Living Dead, The Texas Chain Saw Massacre and The Three Stooges. A reviewer for Film4 rated The Evil Dead four-and-a-half stars out of five, musing that the film was "energetic, original and icky" and concluding that Raimi's "splat-stick debut is a tight little horror classic that deserves its cult reputation, despite the best efforts of the censors." Slant's Ed Gonzales compared the film to Dario Argento's work, citing Raimi's "unnerving wide angle work" as an important factor in the film's atmosphere. He mused that Raimi possessed an "almost unreal ability to suggest the presence of intangible evil", which prevented the movie from being "B-movie schlock". BBC critic Martyn Glanville awarded the film four stars out of five, writing that for Raimi, it served as a better debut film than Tobe Hooper's The Texas Chain Saw Massacre or Wes Craven's The Last House on the Left. Glanville noted that other than the "ill-advised trees-that-rape scene", the film is "one of the great modern horror films, and even more impressive when one considers its modest production values." 
Filmcritic.com's Christopher Null gave the film the same rating as Glanville, writing that "Raimi's biggest grossout is schlock horror done the right way" and comparing it to Romero's Night of the Living Dead in its ability to create stark atmosphere. Chicago Reader writer Pat Graham commented that the film featured several "clever" turns on the standard horror formula, adding that Raimi's "anything-for-an-effect enthusiasm pays off in lots of formally inventive bits." Time Out critic Stephen Garrett referred to the make-up effects in the climax as "amazing", and commented that although the film was light on character development, it "blends comic fantasy" with "atmospheric horror ... to impressive effect". The same publication later cited the film as the 41st greatest horror movie ever made. Phelim O'Neill of The Guardian combined The Evil Dead and its sequel Evil Dead II and listed them as the 23rd best horror film ever made, announcing that the former film "stands above its mostly forgotten peers in the 80s horror boom." Don Summer, in his book Horror Movie Freak, and writer Kate Egan have both cited the film as a horror classic. J.C. Maçek III of PopMatters said: "What is unquestionable is that the Raimis and their pals created a monster in The Evil Dead. It started as a disastrous failure to obtain a big break with a too long, too perilous shoot (note Campbell's changing hairstyle in the various scenes of the one-day plot). The film went through name changes and bannings only to survive as not only 'the ultimate experience in grueling horror' but as an oft-imitated and cashed-in-on classic, with 30 years of positive reviews to prove it." While The Evil Dead received favorable critical comment when it was initially released, it failed to establish Raimi's reputation. It was, however, a box-office success, which led to Campbell and Raimi teaming up again for another movie. Joel Coen and his brother Ethan had collaborated as directors and released the film Blood Simple, to critical acclaim. According to Campbell, Ethan, then an accountant, expressed surprise when the duo succeeded. The Coen brothers and Raimi collaborated on a screenplay, which was filmed and released shortly after The Evil Dead as Crimewave; the film was a box-office failure. Its production was a "disaster", according to Campbell, who stated that "missteps" like Crimewave usually lead to the end of a director's career. Other people involved with the film expressed similar disappointment with the project. Raimi nonetheless had the studio support to make a sequel to The Evil Dead, which he initially decided to make out of desperation. The Evil Dead was followed by a series of sequels. The franchise is noted for each sequel featuring more comedic qualities than the last, progressing into "weirder" territory with each film. Evil Dead II: Dead by Dawn, a black comedy-horror film released in 1987, was a box-office success. It received general acclaim from critics, and is often considered superior to the first film. This was followed by Army of Darkness, a comedy fantasy-horror film released in 1992. By that time, Raimi had become a successful director, attracting Hollywood's interest. His superhero film Darkman (1990) was another box-office success, which led to an increased budget for Army of Darkness. Army of Darkness had 22.8 times the budget of the original Evil Dead, though it was not considered to be a box-office success like its two predecessors. 
It was met with mostly positive critical reception. After additional installments languished in development hell, a supernatural-horror soft reboot/legacy sequel titled Evil Dead was released in 2013, featuring Jane Levy as the main character, Mia Allen. Directed and co-written by Fede Álvarez, the film was produced by Raimi and Campbell. The film, which was a departure from the humor of the previous two films, was a moderate box office success and was praised for its dark and bloody story. While various projects went through varying stages of development, a continuation was released as a television series titled Ash vs Evil Dead. Created and executive-produced by Sam Raimi, the series aired from 2015 to 2018. After further film installments once again remained in development hell for a number of years, a fifth feature film titled Evil Dead Rise was announced to be in development. The project began filming in June 2021, with Irish filmmaker Lee Cronin serving as writer and director. Though Campbell had reprised his role as Ashley "Ash" J. Williams in each of the preceding sequels, he did not appear in the film. The film was released theatrically on April 21, 2023, by Warner Bros. Pictures. Unofficial sequels were made in Italy, where the film was known as La Casa ("The House"). Produced by Joe D'Amato's Filmirage, two mostly unrelated films were released and marketed as sequels to Evil Dead II: Umberto Lenzi's La Casa 3: Ghosthouse and La Casa 4: Witchery, starring Linda Blair and David Hasselhoff. The final film, La Casa 5: Beyond Darkness, was released in 1990. The film House II: The Second Story was reissued and retitled in Italy as La Casa 6, followed by The Horror Show, which was released in Italy as La Casa 7. The original Evil Dead trilogy of films has been recognized as one of the most successful cult film series in history. David Lavery, in his book The Essential Cult TV Reader, surmised that Campbell's "career is a practical guide to becoming a cult idol". The film launched the careers of Raimi and Campbell, who have since collaborated frequently. Raimi has worked with Campbell in virtually all of his films since, and Campbell has made cameo appearances in all three of Raimi's Spider-Man films (as well as a very brief one at the end of Darkman), which have become some of the highest-grossing films in history. Though it has often been considered an odd choice for Raimi, a director known for his violent horror films, to direct a family-friendly franchise, the hiring was mostly inspired by Raimi's childhood passion for comic books. Raimi returned to the horror-comedy genre in 2009 with Drag Me to Hell. Critics have often compared Campbell's later performances to his role in Evil Dead, which has been called his defining role. Campbell's performance as Ash has been compared to roles ranging from his performance of Elvis Presley in the film Bubba Ho-tep to the bigamous demon in The X-Files episode "Terms of Endearment". Campbell's fan base gradually developed after the release of Evil Dead II and his short-lived series The Adventures of Brisco County, Jr. He is a regular favorite at most fan conventions and often draws sold-out auditoriums at his public appearances. The Evil Dead developed a substantial cult following throughout the years, and has often been cited as a defining cult classic. The Evil Dead has spawned a media franchise. 
A video game adaptation of the same name was released for the Commodore 64 in 1984, and a trilogy of survival horror games followed in the 1990s and early 2000s: Evil Dead: Hail to the King, Evil Dead: A Fistful of Boomstick and Evil Dead: Regeneration. Ted Raimi did voices for the trilogy, and Campbell returned as the voice of Ash. Ash became the protagonist of a comic book franchise. He has fought both Freddy Krueger and Jason Voorhees in the Freddy vs. Jason vs. Ash series, Herbert West in Army of Darkness vs. Re-Animator, and zombie versions of the Marvel Comics superheroes in Marvel Zombies vs. The Army of Darkness, and has even saved the life of a fictional Barack Obama in Army of Darkness: Ash Saves Obama. In January 2008, Dark Horse Comics began releasing a four-part monthly comic book mini-series, written by Mark Verheiden and drawn by John Bolton, based on The Evil Dead. The film has also inspired a stage musical, Evil Dead: The Musical, which was produced with the permission of Raimi and Campbell. The musical has run on and off since its inception in 2003. After the film was released, many people began to trespass onto the filming location in Morristown. In 1982, the cabin was burned down by drunken trespassers. Although the cabin is now gone, the chimney remains, and many trespassers take stones from it. In 2022, a video game adaptation of the series called Evil Dead: The Game was released. In 2021, heavy metal band Ice Nine Kills released a song titled "Ex-Mørtis" on their album The Silver Scream 2: Welcome to Horrorwood, whose songs are each explicitly linked to specific horror media in the album's booklet of liner notes; "Ex-Mørtis" is stated to be inspired by The Evil Dead.
[ { "paragraph_id": 0, "text": "The Evil Dead is a 1981 American supernatural horror film written and directed by Sam Raimi (in his feature directorial debut). The film stars Bruce Campbell, Ellen Sandweiss, Richard DeManincor, Betsy Baker and Theresa Tilly. The story focuses on five college students vacationing in an isolated cabin in a remote wooded area. After they find an audio tape that, when played, releases a legion of demons and spirits, four members of the group suffer from demonic possession, forcing the fifth member, Ash Williams (Campbell), to survive an onslaught of increasingly gory mayhem.", "title": "" }, { "paragraph_id": 1, "text": "Raimi, producer Robert G. Tapert, Campbell, and their friends produced the short film Within the Woods as a proof of concept to build the interest of potential investors, which secured US$90,000 to begin work on The Evil Dead. Principal photography took place on location in a remote cabin located in Morristown, Tennessee, in a difficult filming process that proved extremely uncomfortable for the cast and crew; the film's extensive prosthetic makeup effects and stop-motion animations were created by artist Tom Sullivan. The completed film attracted the interest of producer Irvin Shapiro, who helped screen the film at the 1982 Cannes Film Festival. Horror author Stephen King gave a rave review of the film, which resulted in New Line Cinema acquiring its distribution rights.", "title": "" }, { "paragraph_id": 2, "text": "The Evil Dead grossed $2.4 million in the United States and between $2.7 and $29.4 million worldwide. Both early and later critical reception were universally positive; in the years since its release, the film has developed a reputation as one of the most significant cult films, cited among the greatest horror films of all time and one of the most successful independent films. It launched the careers of Raimi, Tapert, and Campbell, who have continued to collaborate on several films together, such as Raimi's Spider-Man trilogy.", "title": "" }, { "paragraph_id": 3, "text": "The Evil Dead spawned a media franchise, beginning with two direct sequels written and directed by Raimi, Evil Dead II (1987) and Army of Darkness (1992), a fourth film, Evil Dead (2013), which serves as a soft reboot and continuation, a follow-up television series, Ash vs Evil Dead, which aired from 2015 to 2018, and a fifth film, Evil Dead Rise (2023); the franchise also includes video games and comic books. The film's protagonist Ash Williams is considered to be a cultural icon.", "title": "" }, { "paragraph_id": 4, "text": "Five Michigan State University students – Ash Williams, his girlfriend Linda, his sister Cheryl, their friend Scott, and Scott's girlfriend Shelly – vacation at an isolated cabin in rural Tennessee. Approaching the cabin, the group notices the porch swing move on its own but suddenly stop as Scott grabs the doorknob. While Cheryl draws a picture of a clock, the clock stops, and she hears a faint, demonic voice tell her to \"join us\". Her hand becomes possessed, turns pale and draws a picture of a book with a demonic face on its cover. Although shaken, she does not mention the incident.", "title": "Plot" }, { "paragraph_id": 5, "text": "When the cellar trapdoor flies open during dinner, Shelly, Linda, and Cheryl remain upstairs as Ash and Scott investigate the cellar. 
They find the Naturom Demonto, a Sumerian version of the Egyptian Book of the Dead, along with archaeologist Raymond Knowby's tape recorder, and they take the items upstairs. Scott plays a tape of incantations that resurrect a demonic entity. Cheryl yells for Scott to turn off the tape recorder, and a tree branch breaks one of the cabin's windows. Later that evening, an agitated Cheryl goes into the woods to investigate strange noises and she's attacked and raped by the vines and branches of demonically possessed trees. When she escapes and returns to the cabin bruised and anguished, Ash agrees to take her back into town, only to discover that the bridge to the cabin has been destroyed. Cheryl panics as she realizes that they are now trapped and the demonic entity will not let them leave. Back at the cabin, Ash listens to more of the tape, learning that the only way to kill the entity is to dismember a possessed host. As Linda and Shelly play spades, Cheryl correctly calls out the cards without looking at them, succumbs to the entity, and levitates. In a raspy, demonic voice, she demands to know why they disturbed her sleep and threatens to kill them. She stabs Linda in the ankle with a pencil and throws Ash into a shelf. Scott knocks Cheryl into the cellar and locks her inside.", "title": "Plot" }, { "paragraph_id": 6, "text": "Everyone fights about what to do. Having become paranoid upon seeing Cheryl's demonic transformation, Shelly lies down in her room but is drawn to look out of her window, where a demon crashes through and attacks her, turning her into a Deadite. She attacks Scott before he throws her into the fireplace, slashes her wrist and then stabs her in the back with a Sumerian dagger, apparently killing her. When she reanimates, Scott dismembers her with an axe. Ash and Scott then bury her remains. Shaken by the experience, Scott decides to leave in order to find a way back to town. He returns shortly after, mortally wounded from the possessed trees, and dies while warning Ash that the trees will not let them escape alive. When Ash checks on Linda, he is horrified to find that she has become possessed. She attacks him, but he stabs her with the Sumerian dagger. Unwilling to dismember her, he buries her instead. She revives and attacks him, forcing him to decapitate her with a shovel. Her headless body bleeds on his face as it tries to rape him. He manages to escape as Linda dies, and then retreats back to the cabin.", "title": "Plot" }, { "paragraph_id": 7, "text": "Back inside, Ash discovers that Cheryl has escaped the cellar. Cheryl eludes Ash, and attempts to choke him. Ash escapes her grasp, then shoots Cheryl in the jaw. As Ash is barricading the door, Scott reanimates into a Deadite. Scott attacks Ash, and inadvertently knocks the book close to the fireplace. Ash gouges Scott's eyes out and pulls a tree branch from Scott's stomach, causing him to bleed out and fall to the ground. Cheryl breaks through the trapdoor and knocks Ash to the floor. As Scott and Cheryl continue to attack Ash on the ground, Ash grabs the book and throws it into the fireplace. While the book burns, the Deadites freeze in place, then begin to rapidly decompose. Large appendages burst from both corpses, covering Ash in blood. The bodies of Scott and Cheryl then completely decompose. 
Dawn breaks, and Ash stumbles outside.", "title": "Plot" }, { "paragraph_id": 8, "text": "As Ash walks away from the cabin, an unseen demon moves rapidly through the forest, rushes through the cabin, and attacks him as he screams in terror.", "title": "Plot" }, { "paragraph_id": 9, "text": "Sam Raimi and Bruce Campbell grew up together, and have been friends from an early age. The duo made several low-budget Super 8 mm film projects together. Several were comedies, including Clockwork and It's Murder!. Shooting a suspense scene in It's Murder! inspired them to approach careers in the horror genre; after researching horror cinema at drive-in theaters, Raimi was set on directing a horror film, opting to shoot a proof of concept short film – described by the director as a \"prototype\" – that would attract the interest of financiers, and use the funds raised to shoot a full-length project. The short film that Raimi created was called Within the Woods, which was produced for $1,600. For The Evil Dead Raimi required over $100,000.", "title": "Production" }, { "paragraph_id": 10, "text": "To generate funds to produce the film, Raimi approached Phil Gillis, a lawyer to one of his friends. Raimi showed him Within the Woods, and although Gillis was not impressed by the short film, he offered Raimi legal advice on how to produce The Evil Dead. With his advice in mind, Raimi asked a variety of people for donations, and even eventually \"begged\" some. Campbell had to ask several of his own family members, and Raimi asked every individual he thought might be interested. He eventually raised enough money to produce a full-length film, though not the full amount he originally wanted. Raimi said the film cost $375,000.", "title": "Production" }, { "paragraph_id": 11, "text": "With enough money to produce the film, Raimi and Campbell set out to make what was then titled Book of the Dead, a name inspired by Raimi's interest in the fiction of H. P. Lovecraft. The film was supposed to be a remake of Within the Woods, with higher production values and a full-length running time. Raimi turned 20 just before shooting began, and he considered the project his \"rite of passage\".", "title": "Production" }, { "paragraph_id": 12, "text": "Raimi asked for help and assistance from several of his friends and past collaborators to make The Evil Dead. Campbell offered to produce the film alongside Tapert, and was subsequently cast as Ash Williams, the main character, since his producing responsibilities made him the only actor willing to stay during the production's entirety. To acquire more actors for the project, Raimi put an ad in The Detroit News. Betsy Baker was one of the actresses who responded, and Ellen Sandweiss, who appeared in Within the Woods, was also cast. The crew consisted almost entirely of Raimi and Campbell's friends and family. The special make-up effects artist for Within the Woods, Tom Sullivan, was brought on to compose the effects after expressing a positive reaction to working with Raimi. He helped create many of the film's foam latex and fake blood effects, and added coffee as an extra ingredient to the traditional fake blood formula of corn syrup and food coloring.", "title": "Production" }, { "paragraph_id": 13, "text": "Without any formal assistance from location scouts, the cast had to find filming locations on their own. 
The crew initially attempted to shoot the film in Raimi's hometown of Royal Oak, Michigan, but instead chose Morristown, Tennessee, as it was the only state that expressed enthusiasm for the project. The crew quickly found a remote cabin located several miles away from any other buildings. During pre-production, the 13 crew members had to stay at the cabin, leading to several people sleeping in the same room. The living conditions were notoriously difficult, with several arguments breaking out between crew members.", "title": "Production" }, { "paragraph_id": 14, "text": "Steve Frankel was the only carpenter on set, which made him the art direction's sole contributor. For exterior shots, Frankel had to produce several elaborate props with a circular saw. Otherwise, the cabin mostly remained the way it was found during production. The cabin had no plumbing, but phone lines were connected to it.", "title": "Production" }, { "paragraph_id": 15, "text": "The film was made on Kodak 16mm film stock with a rented camera. The inexperienced crew made filming a \"comedy of errors\". The first day of filming led to them getting lost in the woods during a scene shot on a bridge. Several crew members were injured during the shoot, and because of the cabin's remoteness, securing medical assistance was difficult. One notably gruesome moment on set involved ripping off Baker's eyelashes during removal of her face-mask. Because of the low budget, contact lenses as thick as glass had to be applied to the actors to achieve the \"demonic eyes\" effect. The lenses took ten minutes to apply, and could only be left on for about 15 minutes because eyes could not \"breathe\" with them applied. Campbell later commented that to get the effect of wearing these lenses, they had to put \"Tupperware\" over their eyes.", "title": "Production" }, { "paragraph_id": 16, "text": "Raimi developed a sense of mise en scène, coming up with ideas for scenes at a fast rate. He had drawn several crude illustrations to help him break down the flow of scenes. The crew was surprised when Raimi began using Dutch angles during shots to build atmosphere during scenes. To accommodate Raimi's style of direction, several elaborate, low-budget rigs had to be built, since the crew could not afford a camera dolly. One involved the \"vas-o-cam\", which relied on a mounted camera that was slid down long wooden platforms to create a more fluid sense of motion.", "title": "Production" }, { "paragraph_id": 17, "text": "A camera trick used to emulate a Steadicam inexpensively was the \"shaky cam\", which involved mounting the camera to a piece of wood and having two camera operators sprint around the swamp. During scenes involving the unseen force in the woods watching the characters, Raimi had to run through the woods with the makeshift rig, jumping over logs and stones. This often proved difficult due to mist in the swamp. The film's final scene was shot with the camera mounted to a bike, while it was quickly driven through the cabin to create a seamless long take.", "title": "Production" }, { "paragraph_id": 18, "text": "Raimi had been a big fan of The Three Stooges during his youth, which inspired him to use \"Fake Shemps\" during production. In any scene that required a background shot of a character, he used another actor as a substitute if the original actor was preoccupied. During a close-up involving Richard DeManicor's hand opening a curtain, Raimi used his own hand in the scene since it was more convenient. 
His brother Ted Raimi was used as a \"Fake Shemp\" in many scenes when the original actor was either busy or preoccupied.", "title": "Production" }, { "paragraph_id": 19, "text": "Raimi enjoyed \"torturing\" his actors. Raimi believed that to capture pain and anger in his actors, he had to abuse them a little at times, saying, \"if everyone was in extreme pain and misery, that would translate into a horror\". Producer Robert Tapert agreed with Raimi, commenting that he \"enjoyed when an actor bleeds.\" While shooting a scene with Campbell running down a hill, Campbell tripped and injured his leg. Raimi enjoyed poking Campbell's injury with a stick he found in the woods. Because of the copious amounts of blood in the film, the crew produced gallons of fake blood with Karo corn syrup. It took Campbell hours to remove the sticky substance from himself. Several actors had inadvertently been stabbed or thrown into objects during production. During the last few days on set, the conditions had become so extreme the crew began burning furniture to stay warm. Since at that point only exterior shots needed to be filmed, they burned nearly every piece of furniture left. Several actors went days without showering, and because of the freezing conditions, several caught colds and other illnesses. Campbell later described the filming process as nearly \"twelve weeks of mirthless exercise in agony\", though he allowed that he did manage to have fun while on set. On January 23, 1980, filming was finished and almost every crew member left the set to return home, with Campbell staying with Raimi. While looking over the footage that had been shot, Raimi discovered that a few pick-ups were required to fill in missing shots. Four days of re-shoots were then done to complete the film. The final moment involved Campbell having \"monster-guts\" splattered on him in the basement.", "title": "Production" }, { "paragraph_id": 20, "text": "Summing up the production decades later, Campbell remarked: \"It's low-budget, it's got rough edges,\" but even so, \"there are parts of that movie that are visually stunning.\"", "title": "Production" }, { "paragraph_id": 21, "text": "After the extensive filming process, Raimi had a \"mountain of footage\" that he had to put together. He chose a Detroit editing association, where he met Edna Paul, to cut the film. Paul's assistant was Joel Coen of the Coen brothers, who helped with the film's editing. Paul edited a majority of the film, although Coen edited the shed sequence. Coen had been inspired by Raimi's Within the Woods and liked the idea of producing a prototype film to help build the interest of investors. Joel used the concept to help make Blood Simple with his brother Ethan, and he and Raimi became friends following the editing process.", "title": "Production" }, { "paragraph_id": 22, "text": "The film's first cut ran at around 117 minutes, which Campbell called an impressive achievement in light of the 65-minute length of the screenplay. The cut scenes were to focus on the main character's lamentation of not being able to save the victims from their deaths, but was edited down to make the film less \"grim and depressing\" and to be a more marketable 85 minutes. Raimi was inspired by the fact that Brian De Palma was editing his own film Blow Out with John Travolta at the same sound facility. One of the most intricate moments during editing was the stop-motion animation sequence where the corpses \"melted\", which took hours to cut properly. 
The film had unique sounds that required extensive recording from the crew. Several sounds were not recorded properly during shooting, which meant the effects had to be redone in the editing rooms. Dead chickens were stabbed to replicate the sounds of mutilated flesh, and Campbell had to scream into a microphone for several hours.", "title": "Production" }, { "paragraph_id": 23, "text": "Much like Within the Woods, The Evil Dead needed to be blown up to 35mm, the industry standard, to be played at movie theaters. The relatively large budget made this a much simpler process with The Evil Dead than it had been with the short film.", "title": "Production" }, { "paragraph_id": 24, "text": "With the film completed, Raimi and the crew decided to celebrate with a \"big premiere\". They chose to screen the film at Detroit's Redford Theatre, which Campbell had often visited as a child. Raimi opted to have the most theatrical premiere possible, using custom tickets and wind tracks set in the theater, and ordering ambulances outside the theater to build atmosphere. The premiere setup was inspired by horror director William Castle, who would often attempt to scare his audiences by using gimmicks. Local turnout for the premiere exceeded the cast's expectations, with a thousand patrons showing up. The audiences responded enthusiastically to the premiere, which led to Raimi's idea of \"touring\" the film to build hype.", "title": "Promotion and distribution rights" }, { "paragraph_id": 25, "text": "Raimi showed the film to anyone willing to watch it, booking meetings with distribution agents and anyone with experience in the film industry. Eventually Raimi came across Irvin Shapiro, the man who was responsible for the distribution of George A. Romero's Night of the Living Dead and other famous horror films. Upon first viewing the film, he joked that while it \"wasn't Gone with the Wind\", it had commercial potential, and he expressed an interest in distributing it. It was his idea not to use the then-title Book of the Dead, because he thought it made the film sound boring. Raimi brainstormed several ideas, eventually going with The Evil Dead, deemed the \"least worst\" title. Shapiro also advised distributing the film worldwide to garner a larger income, though it required a further financial investment by Raimi, who managed to scrape together what little money he had.", "title": "Promotion and distribution rights" }, { "paragraph_id": 26, "text": "Shapiro was a founder of the Cannes Film Festival, and allowed Raimi to screen the film at the 1982 festival out of competition. Stephen King was present at its screening and gave the film a rave review. USA Today released an article about King's favorite horror films; the author cited The Evil Dead as his fifth favorite film of the genre. The film severely affected King, who commented that while watching the film at Cannes, he was \"registering things [he] had never seen in a movie before\". He became one of the film's largest supporters during the early efforts to find a distributor, eventually describing it as the \"most ferociously original film of the year\", a quote used in the film's promotional pieces. King's comments attracted the interest of critics, who otherwise would likely have dismissed the low-budget thriller.", "title": "Promotion and distribution rights" }, { "paragraph_id": 27, "text": "The film's press attracted the attention of British film distribution agent Stephen Woolley. 
Though he considered the film a big risk, Woolley decided to take on the job of releasing the film in the United Kingdom. The film was promoted in an unconventional manner for a film of its budget, receiving marketing on par with that of larger budget films. Dozens of promotional pieces, including film posters and trailers, were showcased in the UK, heavy promotion rarely expended on such a low-budget film. Woolley was impressed by Raimi, whom he called \"charming\", and was an admirer of the film, which led to his taking more risks with the film's promotion than he normally would have.", "title": "Promotion and distribution rights" }, { "paragraph_id": 28, "text": "Fangoria started covering the film in late 1982, writing several articles about the film's long production history. Early critical reception at the time was very positive, and along with Fangoria, King and Shapiro's approval, the film generated an impressive amount of interest before its commercial premiere. New Line Cinema, one of the distributors interested in the film, negotiated an agreement to distribute it domestically. The film had several \"sneak previews\" before its commercial release, including screenings in New York and Detroit. Audience reception at both screenings was widely enthusiastic, and interest was built for the film to such an extent that wider distribution was planned. New Line Cinema wrote Raimi a check large enough to pay off all the investors, and decided to release the film in an unusual manner: simultaneously into both cinemas and onto VHS, with substantial domestic promotion.", "title": "Promotion and distribution rights" }, { "paragraph_id": 29, "text": "Because of its large promotional campaign, the film performed above expectations at the box office. However, the initial domestic gross was described as \"disappointing.\" The movie opened in 15 theaters and grossed $108,000 in its opening weekend. Word of mouth later spread, and the film became a \"sleeper hit\". It grossed $2,400,000 domestically, nearly eight times its production budget. Sources differ as to whether it grossed $261,944 overseas, for a worldwide gross of $2,661,944, or $27 million overseas, for a worldwide gross of $29.4 million. Raimi said in 1990 that the film \"did very well overseas and did very poorly domestically\" and that its investors earned a return of \"about five times their initial investment.\"", "title": "Release" }, { "paragraph_id": 30, "text": "The film's release was met with controversy, as Raimi had made the film as gruesome as possible with neither interest in nor fear of censorship. Writer Bruce Kawin described The Evil Dead as one of the most notorious splatter films of its day, along with Cannibal Holocaust and I Spit on Your Grave.", "title": "Release" }, { "paragraph_id": 31, "text": "In the UK, the film was trimmed by 49 seconds before it was granted an X certificate for cinema release. This censored version was also released on home video; at the time there was no requirement that films had to be classified for video release. A campaign by pro-censorship organization NVLA led to the film being labelled a \"video nasty\" and when the Video Recordings Act was passed in 1984, the video version was removed from circulation. In 1990, a further 66 seconds were trimmed from the already censored version and the film was granted an 18 certificate for home video release. 
In 2000, the uncut version was finally granted an 18 certificate for both cinema and home video.", "title": "Release" }, { "paragraph_id": 32, "text": "In the US, the film received an X rating, a label associated with extremely violent and disturbing material that was also commonly applied to pornographic films. The film has since been re-rated NC-17 for \"substantial graphic horror violence and gore\", though many recent home media releases have been issued without a rating.", "title": "Release" }, { "paragraph_id": 33, "text": "The film was, and in some countries still is, banned either theatrically or on video.", "title": "Release" }, { "paragraph_id": 34, "text": "The Evil Dead was first released on VHS by Thorn EMI in 1983, and Thorn's successor company, HBO/Cannon Video, later repackaged the film. HBO Video's former partner Congress Video, a company notable for public domain films, issued its own version in 1989.", "title": "Release" }, { "paragraph_id": 35, "text": "In its first week of video release, the film made £100,000 in the UK. It quickly became that week's bestselling video release, and later became the year's bestselling video in the UK, out-grossing large-budget horror releases such as The Shining. Its impressive European performance was attributed to its heavy promotion there and the more open-minded nature of European audiences.", "title": "Release" }, { "paragraph_id": 36, "text": "The resurgence of The Evil Dead in the home-video market came through two companies that restored the film from its negatives and issued special editions in 1998: Anchor Bay Entertainment on VHS, and Elite Entertainment on LaserDisc. Anchor Bay was responsible for the film's first DVD release on January 19, 1999, and Elite released a special collector's edition DVD on March 30, 1999. Between them, Elite and Anchor Bay have released six different DVD versions of The Evil Dead, most notably the 2002 \"Book Of The Dead\" edition, packaged in a latex replica of the Necronomicon sculpted by Tom Sullivan, and the 2007 three-disc \"Ultimate Edition\", which contained the widescreen and original full-frame versions of the movie. The film made its high-definition debut with a 2010 Blu-ray release.", "title": "Release" }, { "paragraph_id": 37, "text": "Lionsgate Films released a 4K Ultra HD Blu-ray edition of The Evil Dead on October 9, 2018.", "title": "Release" }, { "paragraph_id": 38, "text": "Upon its release, contemporary critical opinion was largely positive. Bob Martin, editor of Fangoria, reviewed the film before its formal premiere and proclaimed that it \"might be the exception to the usual run of low-budget horror films\". He followed up on this praise after the film's premiere, stating: \"Since I started editing this magazine, I have not seen any new film that I could recommend to our readers with more confidence that it would be loved, embraced and hailed as a new milestone in graphic horror\". The Los Angeles Times called the film an \"instant classic\", proclaiming it as \"probably the grisliest well-made movie ever.\" In a 1982 review, staff from Variety wrote that the film \"emerges as the ne plus ultra of low-budget gore and shock effect\", commenting that the \"powerful\" and inventive camerawork was key to creating a sense of dread.", "title": "Reception" }, { "paragraph_id": 39, "text": "British press for the film was positive; Kim Newman of Monthly Film Bulletin, Richard Cook of NME and Julian Petley of Films and Filming all gave the film good reviews during its early release.
Petley and Cook compared the film to other contemporary horror films, writing that it expressed more imagination and \"youthful enthusiasm\" than the average horror film. Cook described the camera work by Raimi as \"audacious\", stating that the film's visceral nature was greatly helped by the style of direction. Woolley, Newman and several other critics complimented the film for its unexpected use of black comedy, which elevated the film above its genre's potential trappings. All three critics compared the film to the surrealistic work of Georges Franju and Jean Cocteau, noting the cinephilic references to Cocteau's film Orpheus. Writer Lynn Schofield Clark, in her book From Angels to Aliens, compared the film to better-known horror films such as The Exorcist and The Omen, citing it as a key supernatural thriller.", "title": "Reception" }, { "paragraph_id": 40, "text": "On the review aggregator website Rotten Tomatoes, 86% of 83 critics' reviews are positive, with an average rating of 7.70/10. The website's consensus reads: \"So scrappy that it feels as illicit as a book found in the woods, The Evil Dead is a stomach-churning achievement in bad taste that marks a startling debut for wunderkind Sam Raimi.\" Empire stated the film's \"reputation was deserved\", writing that the film was impressive considering its low budget and the cast's inexperience. The magazine's reviewer commented that the film successfully blended the \"bizarre\" combination of Night of the Living Dead, The Texas Chain Saw Massacre and The Three Stooges. A reviewer for Film4 rated The Evil Dead four-and-a-half stars out of five, musing that the film was \"energetic, original and icky\" and concluding that Raimi's \"splat-stick debut is a tight little horror classic that deserves its cult reputation, despite the best efforts of the censors.\"", "title": "Reception" }, { "paragraph_id": 41, "text": "Slant's Ed Gonzalez compared the film to Dario Argento's work, citing Raimi's \"unnerving wide angle work\" as an important factor in the film's atmosphere. He mused that Raimi possessed an \"almost unreal ability to suggest the presence of intangible evil\", which was what prevented the movie from being \"B-movie schlock\". BBC critic Martyn Glanville awarded the film four stars out of five, writing that for Raimi, it served as a better debut film than Tobe Hooper's The Texas Chain Saw Massacre or Wes Craven's The Last House on the Left. Glanville noted that other than the \"ill-advised trees-that-rape scene\", the film is \"one of the great modern horror films, and even more impressive when one considers its modest production values.\"", "title": "Reception" }, { "paragraph_id": 42, "text": "Filmcritic.com's Christopher Null gave the film the same rating as Glanville, writing that \"Raimi's biggest grossout is schlock horror done the right way\" and comparing it to Romero's Night of the Living Dead in its ability to create stark atmosphere. Chicago Reader writer Pat Graham commented that the film featured several \"clever\" turns on the standard horror formula, adding that Raimi's \"anything-for-an-effect enthusiasm pays off in lots of formally inventive bits.\" Time Out critic Stephen Garrett referred to the make-up effects in the climax as \"amazing\", and commented that although the film was light on character development, it \"blends comic fantasy\" with \"atmospheric horror ... to impressive effect\". The same site later cited the film as the 41st greatest horror movie ever made.
Phelim O'Neill of The Guardian combined The Evil Dead and its sequel Evil Dead II and listed them as the 23rd best horror film ever made, writing that the former film \"stands above its mostly forgotten peers in the 80s horror boom.\" Don Summer, in his book Horror Movie Freak, and writer Kate Egan have both cited the film as a horror classic.", "title": "Reception" }, { "paragraph_id": 43, "text": "J.C. Maçek III of PopMatters said: \"What is unquestionable is that the Raimis and their pals created a monster in The Evil Dead. It started as a disastrous failure to obtain a big break with a too long, too perilous shoot (note Campbell's changing hairstyle in the various scenes of the one-day plot). The film went through name changes and bannings only to survive as not only 'the ultimate experience in grueling horror' but as an oft-imitated and cashed-in-on classic, with 30 years of positive reviews to prove it.\"", "title": "Reception" }, { "paragraph_id": 44, "text": "While The Evil Dead received favorable critical comment when it was initially released, it failed to establish Raimi's reputation. It was, however, a box-office success, which led to Campbell and Raimi teaming up again for the release of another movie. Joel Coen and his brother Ethan had collaborated as directors and released the film Blood Simple to critical acclaim. According to Campbell, Ethan, then an accountant, expressed surprise when the duo succeeded. The Coen brothers and Raimi collaborated on a screenplay, which was produced shortly after The Evil Dead as Crimewave; the film was a box-office failure. The film's production was a \"disaster\", according to Campbell, who stated that \"missteps\" like Crimewave usually lead to the end of a director's career. Other people involved with the film expressed similar disappointment with the project. Fortunately, Raimi had the studio support to make a sequel to The Evil Dead, which he initially decided to make out of desperation.", "title": "Aftermath" }, { "paragraph_id": 45, "text": "The Evil Dead was followed by a series of sequels. The franchise is noted for each sequel featuring more comedic qualities than the last, progressing into \"weirder\" territory with each film. Evil Dead II: Dead by Dawn, a black comedy-horror film released in 1987, was a box-office success. It received general acclaim from critics, and is often considered to be superior to the first film. This was followed by Army of Darkness, a comedy fantasy-horror film released in 1992. At that time, Raimi had become a successful director, attracting Hollywood's interest. His superhero film Darkman (1990) was another box-office success, which led to an increased budget for Army of Darkness. Army of Darkness had 22.8 times the budget of the original Evil Dead, though it was not considered to be a box-office success like its two predecessors. It was met with mostly positive critical reception. After additional installments languished in development hell, a supernatural-horror soft reboot/legacy sequel titled Evil Dead was released in 2013, featuring Jane Levy as the main character Mia Allen. Directed and co-written by Fede Álvarez, the film was produced by Raimi and Campbell. The film, which was a departure from the humor of the previous two films, was a moderate box office success and was praised for its dark and bloody story. While various projects went through varying stages of development, a continuation was released as a television series titled Ash vs.
Evil Dead. Created and executive produced by Sam Raimi, the series aired from 2015 to 2018.", "title": "Aftermath" }, { "paragraph_id": 46, "text": "After further film installments once again remained in development hell for a number of years, a fifth feature film, titled Evil Dead Rise, was announced to be in development. The project began filming in June 2021, with Irish filmmaker Lee Cronin serving as writer/director. Though Campbell reprised his role as Ashley \"Ash\" J. Williams in each of the preceding sequels, he did not appear in the film. The film was released theatrically on April 21, 2023, by Warner Bros. Pictures.", "title": "Aftermath" }, { "paragraph_id": 47, "text": "Unofficial sequels were made in Italy, where the film was known as La Casa (\"The House\"). Produced by Joe D'Amato's Filmirage, two mostly unrelated films were released and marketed as sequels to Evil Dead II: Umberto Lenzi's La Casa 3: Ghosthouse and La Casa 4: Witchery, starring Linda Blair and David Hasselhoff. The final film, La Casa 5: Beyond Darkness, was released in 1990. The film House II: The Second Story was reissued and retitled in Italy as La Casa 6, followed by The Horror Show, which was released in Italy as La Casa 7.", "title": "Aftermath" }, { "paragraph_id": 48, "text": "The original Evil Dead trilogy of films has been recognized as one of the most successful cult film series in history. David Lavery, in his book The Essential Cult TV Reader, surmised that Campbell's \"career is a practical guide to becoming a cult idol\". The film launched the careers of Raimi and Campbell, who have since collaborated frequently. Raimi has worked with Campbell in virtually all of his films since, and Campbell has appeared in cameo roles in all three of Raimi's Spider-Man films (as well as a very brief appearance at the end of Darkman), which have become some of the highest-grossing films in history. Though it has often been considered an odd choice for Raimi, a director known for his violent horror films, to direct a family-friendly franchise, the hiring was mostly inspired by Raimi's passion for comic books as a child. Raimi returned to the horror-comedy genre in 2009 with Drag Me to Hell.", "title": "Aftermath" }, { "paragraph_id": 49, "text": "Critics have often compared Campbell's later performances to his role in Evil Dead, which has been called his defining performance. Campbell's performance as Ash has been compared to roles ranging from his portrayal of Elvis Presley in the film Bubba Ho-tep to the bigamous demon in The X-Files episode \"Terms of Endearment\". Campbell's fan base gradually developed after the release of Evil Dead II and his short-lived series The Adventures of Brisco County, Jr. He is a regular favorite at most fan conventions and often draws sold-out auditoriums at his public appearances. The Evil Dead developed a substantial cult following throughout the years, and has often been cited as a defining cult classic.", "title": "Aftermath" }, { "paragraph_id": 50, "text": "The Evil Dead has spawned a media franchise. A video game adaptation of the same name was released for the Commodore 64 in 1984; a trilogy of survival horror games followed in the 1990s and early 2000s: Evil Dead: Hail to the King, Evil Dead: A Fistful of Boomstick and Evil Dead: Regeneration. Ted Raimi did voices for the trilogy, and Campbell returned as the voice of Ash. Ash became the protagonist of a comic book franchise, and has fought both Freddy Krueger and Jason Voorhees in the Freddy vs.
Jason vs. Ash series, Herbert West in Army of Darkness vs. Re-Animator, zombie versions of the Marvel Comics superheroes in Marvel Zombies vs. The Army of Darkness, and has even saved the life of a fictional Barack Obama in Army of Darkness: Ash Saves Obama. In January 2008, Dark Horse Comics began releasing a four-part monthly comic book mini-series, written by Mark Verheiden and drawn by John Bolton, based on The Evil Dead. The film has also inspired a stage musical, Evil Dead: The Musical, which was produced with the permission of Raimi and Campbell. The musical has run on and off since its inception in 2003.", "title": "Aftermath" }, { "paragraph_id": 51, "text": "After the film was released, many people began to trespass onto the filming location in Morristown. In 1982, the cabin was burned down by drunken trespassers. Although the cabin is now gone, the chimney remains, and visitors who trespass on the site often take stones from it.", "title": "Aftermath" }, { "paragraph_id": 52, "text": "In 2022, a video game adaptation of the series called Evil Dead: The Game was released.", "title": "Aftermath" }, { "paragraph_id": 53, "text": "In 2021, heavy metal band Ice Nine Kills released a song titled \"Ex-Mørtis\" on their album The Silver Scream 2: Welcome to Horrorwood, each of whose songs is explicitly linked to a specific horror work in the album's booklet of liner notes; \"Ex-Mørtis\" is stated to be inspired by The Evil Dead.", "title": "Aftermath" }, { "paragraph_id": 54, "text": "", "title": "External links" } ]
The Evil Dead is a 1981 American supernatural horror film written and directed by Sam Raimi. The film stars Bruce Campbell, Ellen Sandweiss, Richard DeManincor, Betsy Baker and Theresa Tilly. The story focuses on five college students vacationing in an isolated cabin in a remote wooded area. After they find an audio tape that, when played, releases a legion of demons and spirits, four members of the group suffer from demonic possession, forcing the fifth member, Ash Williams (Campbell), to survive an onslaught of increasingly gory mayhem. Raimi, producer Robert G. Tapert, Campbell, and their friends produced the short film Within the Woods as a proof of concept to build the interest of potential investors, which secured US$90,000 to begin work on The Evil Dead. Principal photography took place on location in a remote cabin in Morristown, Tennessee, in a difficult filming process that proved extremely uncomfortable for the cast and crew; the film's extensive prosthetic makeup effects and stop-motion animations were created by artist Tom Sullivan. The completed film attracted the interest of producer Irvin Shapiro, who helped screen the film at the 1982 Cannes Film Festival. Horror author Stephen King gave a rave review of the film, which resulted in New Line Cinema acquiring its distribution rights. The Evil Dead grossed $2.4 million in the United States and between $2.7 and $29.4 million worldwide. Both early and later critical reception were universally positive; in the years since its release, the film has developed a reputation as one of the most significant cult films, cited among the greatest horror films of all time and one of the most successful independent films. It launched the careers of Raimi, Tapert, and Campbell, who have continued to collaborate on several films together, such as Raimi's Spider-Man trilogy. The Evil Dead spawned a media franchise, beginning with two direct sequels written and directed by Raimi, Evil Dead II (1987) and Army of Darkness (1992), a fourth film, Evil Dead (2013), which serves as a soft reboot and continuation, a follow-up television series, Ash vs Evil Dead, which aired from 2015 to 2018, and a fifth film, Evil Dead Rise (2023); the franchise also includes video games and comic books. The film's protagonist Ash Williams is considered to be a cultural icon.
2001-09-10T05:40:51Z
2023-12-28T22:37:58Z
[ "Template:Use American English", "Template:'", "Template:Cite journal", "Template:Wikiquote", "Template:Authority control", "Template:AllMovie title", "Template:Rotten Tomatoes", "Template:Sam Raimi", "Template:Cite AV media", "Template:Cbignore", "Template:Webarchive", "Template:IMDb title", "Template:Evil Dead", "Template:Citation needed", "Template:Rotten Tomatoes prose", "Template:User-generated source", "Template:Reflist", "Template:Cite book", "Template:Good article", "Template:Cite video", "Template:Refbegin", "Template:Refend", "Template:Mojo title", "Template:Short description", "Template:About", "Template:Infobox film", "Template:Cite web", "Template:Cite AV media notes", "Template:Use mdy dates", "Template:Sfn", "Template:Clarify" ]
https://en.wikipedia.org/wiki/The_Evil_Dead
9,297
Economic calculation problem
The economic calculation problem (sometimes abbreviated ECP) is a criticism of using economic planning as a substitute for market-based allocation of the factors of production. It was first proposed by Ludwig von Mises in his 1920 article "Economic Calculation in the Socialist Commonwealth" and later expanded upon by Friedrich Hayek. In his first article, Mises described the nature of the price system under capitalism and explained how individual subjective values (while criticizing other theories of value) are translated into the objective information necessary for rational allocation of resources in society. He argued that economic planning necessarily leads to an irrational and inefficient allocation of resources. In market exchanges, prices reflect the supply and demand of resources, labor and products. In the article, Mises focused his criticism on the deficiencies of the socialisation of capital goods, but he later went on to elaborate on various different forms of socialism in his book Socialism. He briefly mentioned the problem in the 3rd book of Human Action: A Treatise on Economics, where he also elaborated on the different types of socialism, namely the "Hindenburg" and "Lenin" models, which he viewed as fundamentally flawed despite their ideological differences. Mises and Hayek argued that economic calculation is only possible with information provided through market prices and that bureaucratic or technocratic methods of allocation lack the means to allocate resources rationally. Mises's analysis centered on price theory while Hayek went with a more fully developed analysis of information and entrepreneurship. The debate raged in the 1920s and 1930s and that specific period of the debate has come to be known by economic historians as the socialist calculation debate. Mises's initial criticism received multiple reactions and led to the conception of trial-and-error market socialism, most notably the Lange–Lerner theorem. In the 1920 paper, Mises argued that the pricing systems in socialist economies were necessarily deficient because if a public entity owned all the means of production, no rational prices could be obtained for capital goods as they were merely internal transfers of goods and not "objects of exchange", unlike final goods. Therefore, they were unpriced and hence the system would be necessarily irrational as the central planners would not know how to allocate the available resources efficiently. He wrote that "rational economic activity is impossible in a socialist commonwealth". Mises developed his critique of socialism more completely in his 1922 book Socialism, arguing that the market price system is an expression of praxeology and cannot be replicated by any form of bureaucracy. Notable critics of both Mises's original argument and Hayek's newer proposition include anarcho-capitalist economist Bryan Caplan, computer programmer and Marxist Paul Cockshott, as well as other communists. Since capital goods and labor are highly heterogeneous (i.e. they have different characteristics that pertain to physical productivity), economic calculation requires a common basis for comparison for all forms of capital and labour. As a means of exchange, money enables buyers to compare the costs of goods without having knowledge of their underlying factors; the consumer can simply focus on his personal cost-benefit decision.
Therefore, the price system is said to promote economically efficient use of resources by agents who may not have explicit knowledge of all of the conditions of production or supply. This is called the signalling function of prices, as well as the rationing function, which prevents over-use of any resource. Without the market process to fulfill such comparisons, critics of non-market socialism say that it lacks any way to compare different goods and services and would have to rely on calculation in kind. The resulting decisions, it is claimed, would therefore be made without sufficient knowledge to be considered rational. The common basis for comparison of capital goods must also be connected to consumer welfare. It must also be able to compare the desired trade-off between present consumption and delayed consumption (for greater returns later on) via investment in capital goods. The use of money as a medium of exchange and unit of account is necessary to solve the first two problems of economic calculation. Mises (1912) applied the marginal utility theory developed by Carl Menger to money. Marginal consumer expenditures represent the marginal utility or additional consumer satisfaction expected by consumers as they spend money. This is similar to the equi-marginal principle developed by Alfred Marshall. Consumers equalize the marginal utility (amount of satisfaction) of the last dollar spent on each good. Thus, the exchange of consumer goods establishes prices that represent the marginal utility of consumers and money is representative of consumer satisfaction. If money is also spent on capital goods and labor, then it is possible to make comparisons between capital goods and consumer goods. The exchange of consumer and capital/labor goods does not imply that capital goods are valued accurately, only that it is possible for the valuations of capital goods to be made. These are foundational elements of economic calculation, namely that it requires the use of money across all goods. This is a necessary, but not sufficient, condition for successful economic calculation. Without a price mechanism, Mises argues, socialism lacks the means to relate consumer satisfaction to economic activity. The incentive function of prices allows diffuse interests, like the interest of every household in cheap, high-quality shoes, to compete with the concentrated interest of the cobblers in expensive, poor-quality shoes. Without it, a panel of experts set up to "rationalise production", likely closely linked to the cobblers for expertise, would tend to support the cobblers' interests in a "conspiracy against the public". However, if this happens to all industries, everyone would be worse off than if they had been subject to the rigours of market competition. The latter forces producers to produce superior products at appropriate prices to please their consumers. The Mises theory of money and calculation conflicts directly with the Marxist labour theory of value. Marxist theory allows for the possibility that labour content can serve as a common means of valuing capital goods, a position now out of favour with economists following the success of the theory of marginal utility. The third condition for economic calculation is the existence of genuine entrepreneurship and market rivalry. According to Israel Kirzner (1973) and Don Lavoie (1985), entrepreneurs reap profits by supplying unfulfilled needs in all markets. Thus, entrepreneurship brings prices closer to marginal costs.
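The equi-marginal condition described above can be stated compactly. The following display is an editorial illustration in standard textbook notation (goods x_1, ..., x_n, prices p_i, marginal utilities MU_i, income m); the symbols are not drawn from Mises or Marshall:

```latex
% Equi-marginal principle: a consumer with income m facing prices p_1,...,p_n
% maximizes satisfaction when the marginal utility of the last dollar spent
% is the same across all n goods (editorial notation, not the source's).
\[
\frac{MU_1}{p_1} = \frac{MU_2}{p_2} = \cdots = \frac{MU_n}{p_n} = \lambda ,
\qquad \text{subject to} \quad \sum_{i=1}^{n} p_i x_i = m .
\]
% If MU_1/p_1 > MU_2/p_2, shifting a dollar from good 2 to good 1 raises total
% satisfaction, so only the equalized allocation is stable; prices let the
% consumer make this comparison without knowing anything about production.
```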
The adjustment of prices in markets towards equilibrium (where supply and demand are equal) gives them greater utilitarian significance. The activities of entrepreneurs make prices more accurate in terms of how they represent the marginal utility of consumers. Prices act as guides to the planning of production. Those who plan production use prices to decide which lines of production should be expanded or curtailed. Entrepreneurs lack the profit motive to take risks under socialism and so are far less likely to attempt to supply consumer demands. Without the price system to match consumer utility to incentives for production, or even indicate those utilities "without providing incentives", state planners are much less likely to invest in new ideas to satisfy consumers' desires. Entrepreneurs would also lack the ability to economize within the production process, causing repercussions for consumers. The fourth condition for successful economic calculation is plan coordination among those who plan production. The problem of planning production is the knowledge problem explained by Hayek (1937, 1945), but first mentioned and illustrated by his mentor Mises in Socialism (1922), not to be confused with Socialism: An Economic and Sociological Analysis (1951). The planning could either be done in a decentralised fashion, requiring some mechanism to make the individual plans coherent, or centrally, requiring a lot of information. Within capitalism, the overall plan for production is composed of individual plans among capitalists in large and small enterprises. Since capitalists purchase labour and capital out of the same common pool of available yet scarce labor and capital, it is essential that their plans fit together in at least a semi-coherent fashion. Hayek (1937) defined an efficient planning process as one where all decision makers form plans that contain relevant data from the plans of others. Entrepreneurs acquire data on the plans of others through the price system. The price system is an indispensable communications network for plan coordination among entrepreneurs. Increases and decreases in prices inform entrepreneurs about the general economic situation, to which they must adjust their own plans. As for socialism, Mises (1944) and Hayek (1937) insisted that bureaucrats in individual ministries could not coordinate their plans without a price system due to the local knowledge problem. Opponents argued that in principle an economy can be seen as a set of equations. Thus, using information about available resources and the preferences of people, it should be possible to calculate an optimal solution for resource allocation. Friedrich von Hayek responded that the system of equations required too much information that would not be easily available and the ensuing calculations would be too difficult. This is partly because individuals possess useful knowledge but do not realise its importance, may have no incentive to transmit the information, or may have incentive to transmit false information about their preferences. He contended that the only rational solution is to utilize all the dispersed knowledge in the marketplace through the use of price signals. The early debates were conducted before the much greater calculating powers of modern computers became available, but also before research on chaos theory. In the 1980s, Alexander Nove argued that the calculations would take millions of years even with the best computers.
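The "set of equations" proposal is easy to make concrete. Below is a minimal editorial sketch, assuming Python with NumPy and SciPy: a toy planner allocating two scarce resources between two goods by linear programming. The product names, coefficients and the planner's value weights are all invented for illustration and appear nowhere in the debate itself:

```python
# A toy of the "economy as a set of equations" claim: a planner chooses output
# levels to maximize a value objective under resource constraints. Every number
# here is invented for illustration and appears nowhere in the debate.
import numpy as np
from scipy.optimize import linprog

value = np.array([3.0, 2.0])      # planner's valuation per unit of wine, oil
use = np.array([[2.0, 1.0],       # labour hours required per unit of each good
                [1.0, 3.0]])      # land required per unit of each good
avail = np.array([100.0, 90.0])   # total labour hours and land available

# linprog minimizes, so negate the objective to maximize total value.
res = linprog(c=-value, A_ub=use, b_ub=avail, bounds=[(0, None), (0, None)])
print("optimal plan (wine, oil):", res.x)   # roughly [42.0, 16.0]

# The arithmetic is trivial; the calculation argument targets the inputs:
# `value`, `use` and `avail` stand for knowledge that is dispersed, tacit and
# constantly changing, and a real economy has millions of goods, not two.
```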
It may be impossible to make long-term predictions for a highly complex system such as an economy. Hayek (1935, 1937, 1940, 1945) stressed the knowledge problem of central planning, partly because decentralized socialism seemed indefensible. Part of the reason that Hayek stressed the knowledge problem was also because he was mainly concerned with debating the proposal for market socialism and the Lange model by Oskar R. Lange (1938) and Hayek's student Abba Lerner (1934, 1937, 1938), which was developed in response to the calculation argument. Lange and Lerner conceded that prices were necessary in socialism. They thought that socialist officials could simulate some markets (mainly spot markets) and that the simulation of spot markets was enough to make socialism reasonably efficient. Lange argued that prices can be seen merely as an accounting practice. In principle, claim market socialists, socialist managers of state enterprises could use a price system, as an accounting system, in order to minimize costs and convey information to other managers. However, while this can deal with existing stocks of goods, provided a basis for values can be ascertained, it does not deal with the investment in new capital stocks. Hayek responded by arguing that the simulation of markets in socialism would fail due to a lack of genuine competition and entrepreneurship. Central planners would still have to plan production without the aid of economically meaningful prices. Lange and Lerner also admitted that socialism would lack any simulation of financial markets, and that this would cause problems in planning capital investment. However, Hayek's argumentation is not only regarding computational complexity for the central planners. He further argues that much of the information individuals have cannot be collected or used by others. First, individuals may have no or little incentive to share their information with central or even local planners. Second, the individual may not be aware that he has valuable information; and when he becomes aware, it is only useful for a limited time, too short for it to be communicated to the central or local planners. Third, the information is useless to other individuals if it is not in a form that allows for meaningful comparisons of value (i.e. money prices as a common basis for comparison). Therefore, Hayek argues, individuals must acquire data through prices in real markets. The fifth condition for successful economic calculation is the existence of well-functioning financial markets. Economic efficiency depends heavily upon avoiding errors in capital investment. The costs of reversing errors in capital investment are potentially large. This is not just a matter of rearranging or converting capital goods that are found to be of little use. The time spent reconfiguring the structure of production is time lost in the production of consumer goods. Those who plan capital investment must anticipate future trends in consumer demand if they are to avoid investing too much in some lines of production and too little in other lines of production. Capitalists plan production for profit. Capitalists use prices to form expectations that determine the composition of capital accumulation, the pattern of investment across industry. Those who invest in accordance with consumers' desires are rewarded with profits; those who do not are forced to become more efficient or go out of business. Prices in futures markets play a special role in economic calculation.
Futures markets develop prices for commodities in future time periods. It is in futures markets that entrepreneurs sort out plans for production based on their expectations. Futures markets are a link between entrepreneurial investment decisions and household consumer decisions. Since most goods are not explicitly traded in futures markets, substitute markets are needed. The stock market serves as a "continuous futures market" that evaluates entrepreneurial plans for production (Lachmann 1978). Generally speaking, the problem of economic calculation is solved in financial markets, as Mises argued: The problem of economic calculation arises in an economy which is perpetually subject to change [...]. In order to solve such problems it is above all necessary that capital be withdrawn from particular undertakings and applied in other lines of production [...]. [This] is essentially a matter of the capitalists who buy and sell stocks and shares, who make loans and recover them, who speculate in all kinds of commodities. The existence of financial markets is a necessary condition for economic calculation. The existence of financial markets itself does not automatically imply that entrepreneurial speculation will tend towards efficiency. Mises argued that speculation in financial markets tends towards efficiency because of a "trial and error" process. Entrepreneurs who commit relatively large errors in investment waste their funds by overexpanding some lines of production at the cost of other, more profitable ventures where consumer demand is higher. The entrepreneurs who commit the worst errors by forming the least accurate expectations of future consumer demands incur financial losses. Financial losses remove these inept entrepreneurs from positions of authority in industry. Entrepreneurs who commit smaller errors by anticipating consumer demand more correctly attain greater financial success. The entrepreneurs who form the most accurate opinions regarding the future state of markets (i.e. new trends in consumer demands) earn the highest profits and gain greater control of industry. Those entrepreneurs who anticipate future market trends therefore waste the least amount of real capital and find the most favorable terms for finance on markets for financial capital. Minimal waste of real capital goods implies the minimization of the opportunity costs of capital in economic calculation. The value of capital goods is brought into line with the value of future consumer goods through competition in financial markets, because competition for profits among capitalist financiers rewards entrepreneurs who value capital more correctly (i.e. anticipating future prices more correctly) and eliminates capitalists who value capital least correctly. To sum up, the use of money in trading all goods (capital/labor and consumer) in all markets (spot and financial) combined with profit-driven entrepreneurship and Darwinian natural selection in financial markets all combine to make rational economic calculation and allocation the outcome of the capitalist process. Mises insisted that socialist calculation is impossible because socialism precludes the exchange of capital goods in terms of a generally accepted medium of exchange, or money. Investment in financial markets determines the capital structure of modern industry with some degree of efficiency. The egalitarian nature of socialism prohibits speculation in financial markets.
Therefore, Mises concluded that socialism lacks any clear tendency towards improvement in the capital structure of industry. Mises gave the example of choosing between producing wine or oil, making the following point: It will be evident, even in the socialist society, that 1,000 hectolitres of wine are better than 800, and it is not difficult to decide whether it desires 1,000 hectolitres of wine rather than 500 of oil. There is no need for any system of calculation to establish this fact: the deciding element is the will of the economic subjects involved. But once this decision has been taken, the real task of rational economic direction only commences, i.e., economically, to place the means at the service of the end. That can only be done with some kind of economic calculation. The human mind cannot orient itself properly among the bewildering mass of intermediate products and potentialities of production without such aid. It would simply stand perplexed before the problems of management and location. Such intermediate products would include land, warehouse storage, bottles, barrels, oil, transport, etc. Not only would these things have to be assembled, but they would have to compete with the attainment of other economic goals. Without pricing for capital goods, Mises argues, it is essentially impossible to know how to use them rationally and efficiently. And since the absence of pricing presupposes the absence of a common standard of exchange, investment in particular becomes impossible: the potential future outputs cannot be measured by any current standard, let alone a monetary one required for economic calculation. Likewise, the value consumers place on current consumption over future consumption cannot be expressed, quantified or implemented. Some academics and economists argue that the claim that a free market is an efficient, or even the most efficient, method of resource allocation is incorrect. Alexander Nove argued that, in "Economic Calculation in the Socialist Commonwealth", Mises "tends to spoil his case by the implicit assumption that capitalism and optimum resource allocation go together". Joan Robinson argued that many prices in modern capitalism are effectively "administered prices" created by "quasi monopolies", thus challenging the connection between capital markets and rational resource allocation. Socialist market abolitionists argue that whilst advocates of capitalism, and the Austrian School in particular, recognize that equilibrium prices do not exist in real life, they nonetheless claim that these prices can be used as a rational basis when this is not the case; hence, markets are not efficient. Robin Hahnel further argued that market inefficiencies, such as externalities and excess supply and demand, arise from buyers and sellers thoughtlessly maximizing their rational interests, which free markets inherently do not deter. Nonetheless, Hahnel commended current policies pursued by free market capitalist societies against these inefficiencies (e.g. Pigouvian taxes, antitrust laws, etc.), as long as they are properly calculated and consistently enforced. Milton Friedman agreed that markets with monopolistic competition are not efficient, but he argued that it is easy to force monopolies to adopt competitive behavior by exposing them to foreign rivals.
Economic liberals and libertarian capitalists also argue that monopolies and big business are not generally the result of a free market, or that they never arise from a free market; rather, they say that such concentration is enabled by governmental grants of franchises or privileges. That said, protectionist economies can theoretically still foster competition as long as there is strong consumer switching. Joseph Schumpeter additionally argued that economic advancement, through innovation and investment, is often driven by large monopolies. Allin Cottrell, Paul Cockshott and Greg Michaelson argued that the contention that finding a true economic equilibrium is not just hard but impossible for a central planner applies equally well to a market system. As any universal Turing machine can do what any other Turing machine can, a central calculator in principle has no advantage over a system of dispersed calculators, i.e. a market, or vice versa. In some economic models, finding an equilibrium is hard, and finding an Arrow–Debreu equilibrium is PPAD-complete. If the market can find an equilibrium in polynomial time, then the equivalence above can be used to prove that P = PPAD. This line of argument thus attempts to show that any claim to impossibility must necessarily involve a local knowledge problem, because the planning system is no less capable than the market if given full information. Don Lavoie makes a local knowledge argument by taking this implication in reverse. The market socialists pointed out the formal similarity between the neoclassical model of Walrasian general equilibrium and that of market socialism, which simply replaces the Walrasian auctioneer with a planning board. According to Lavoie, this emphasizes the shortcomings of the model. By relying on this formal similarity, the market socialists must adopt the simplifying assumptions of the model. The model assumes that various sorts of information are given to the auctioneer or planning board. However, if not coordinated by a capital market, this information exists in a fundamentally distributed form, which planners would find difficult to utilize. If the planners decided to utilize the information, it would immediately become stale and relatively useless, unless reality somehow imitated the changeless monotony of the equilibrium model. The existence and usability of this information depends on its creation and situation within a distributed discovery procedure. One criticism is that proponents of the theory overstate the strength of their case by describing socialism as impossible rather than inefficient. In explaining why he is not an Austrian School economist, anarcho-capitalist economist Bryan Caplan argues that while the economic calculation problem is a problem for socialism, he denies that Mises has shown it to be fatal or that it is this particular problem that led to the collapse of authoritarian socialist states. Caplan also states that the problem is exaggerated; in his view, Mises did not manage to prove why economic calculation made the socialist economy "impossible", and even if there were serious doubts about the efficiency of cost-benefit analysis, other arguments are plentiful (Caplan gives the example of the incentive problem). Joan Robinson argued that in a steady-state economy there would be an effective abundance of means of production and so markets would not be needed.
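The Cottrell–Cockshott claim above, that a market is itself a distributed computation, can be illustrated with a tâtonnement (trial-price adjustment) loop of the kind Walrasian theory describes. This is an editorial sketch of a stylized two-consumer exchange economy with invented Cobb–Douglas demands; it is not taken from their papers:

```python
# Stylized tatonnement ("groping") over one relative price: the auctioneer,
# or equally a planner, adjusts the price until excess demand vanishes.
# The two-consumer economy below is invented for illustration; the point is
# only that equilibrium-finding is a computation whoever performs it.

def excess_demand(p: float) -> float:
    """Excess demand for good 2 at relative price p (good 1 is numeraire).
    Two Cobb-Douglas consumers: A owns 10 units of good 1, B owns 10 of good 2;
    each spends half of income on each good."""
    demand_a = 5.0 / p        # A's income is 10; half of it buys good 2 at p
    demand_b = 5.0            # B's income is 10 * p; half buys 5 units of good 2
    return demand_a + demand_b - 10.0   # total endowment of good 2 is 10 units

p, step = 2.0, 0.05
for _ in range(1000):
    z = excess_demand(p)
    if abs(z) < 1e-9:
        break
    p += step * z             # raise the price when demand exceeds supply
print(f"equilibrium relative price: {p:.4f}")   # converges to 1.0

# Worst-case economies need not converge at all; the PPAD-completeness of
# Arrow-Debreu equilibria cited above formalizes that hardness for markets
# and planners alike.
```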
Mises acknowledged such a theoretical possibility in his original tract when he said the following: "The static state can dispense with economic calculation. For here the same events in economic life are ever recurring; and if we assume that the first disposition of the static socialist economy follows on the basis of the final state of the competitive economy, we might at all events conceive of a socialist production system which is rationally controlled from an economic point of view." However, he contended that stationary conditions never prevail in the real world. Changes in economic conditions are inevitable; and even if they were not, the transition to socialism would be so chaotic as to preclude the existence of such a steady-state from the start. The purpose of the price mechanism is to allow individuals to recognise the opportunity cost of decisions. In a state of abundance, there is no such cost, which is to say that in situations where one need not economize, economics does not apply, e.g. areas with abundant fresh air and water. Otto Neurath and Hillel Ticktin argued that, with detailed use of real unit accounting and demand surveys, a planned economy could operate without a capital market in a situation of abundance. In Towards a New Socialism's "Information and Economics: A Critique of Hayek" and "Against Mises", Paul Cockshott and Allin Cottrell argued that the use of computational technology now simplifies economic calculation and allows planning to be implemented and sustained. Len Brewster replied to this by arguing that Towards a New Socialism establishes what is essentially another form of a market economy, making the following point: [A]n examination of C&C's New Socialism confirms Mises's conclusion that rational socialist planning is impossible. It appears that in order for economic planners to have any useful data by which they might be guided, a market must be hauled in, and with it analogues of private property, inequality and exploitation. In response, Cockshott argued that the economic system is sufficiently far removed from a capitalist free-market economy to not count as one, saying: Those that Hayek was arguing against like Lange and Dickinson allowed for markets in consumer goods, this did not lead Hayek to say : Oh you are not really arguing for socialism since you have conceded a market in consumer goods, he did not, because there remained huge policy differences between him and Lange even if Lange accepted consumer goods markets. It is thus a very weak argument by Brewster to say that what we advocate is not really socialist calculation because it is contaminated in some way by market influences. Leigh Phillips' and Michal Rozworski's 2019 book The People's Republic of Walmart argues that multinational corporations like Walmart and Amazon already operate centrally planned economies in a more technologically sophisticated manner than the Soviet Union, proving that the economic calculation problem is surmountable. There are some contentions to this view, however, namely how economic planning and a planned economy ought to be distinguished. Both entail formulating data-driven economic objectives, but the latter precludes this from occurring within a free-market context and delegates the task to centralized bodies. Karras J. Lambert and Tate Fegley argue that artificial intelligence systems, no matter how advanced, cannot assume the role of central planners because they do not fulfill the prerequisites of effective economic calculation.
This includes the ability to convert the ordinal preferences of producers and consumers into commensurate cardinal utility values, which are available and agreed upon, and to forecast future market interactions. One reason is their dependence on Big Data, which in turn is entirely based on past information. Hence, the system cannot draw any meaningful conclusions about future consumer preferences, which are required for optimal pricing. This necessitates the intervention of the programmer, who is highly likely to be biased in their judgments. Even the manner in which a system "predicts" consumer preferences is based on a programmer's creative bias. They further argue that even if artificial intelligence were able to rank items ordinally as humans do, it would still suffer from the same issue of being unable to conceive of a pricing structure in which meaningful pricing calculations, using a common cardinal utility unit, can be formed. Nonetheless, Lambert and Fegley acknowledge that entrepreneurs can benefit from Big Data's predictive value, provided that the data is based on past market prices and that it is used in tandem with free-market-style bidding.
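Lambert and Fegley's commensurability point rests on a standard result of preference theory: an ordinal ranking pins down a utility function only up to strictly increasing transformations, so there are no unique cardinal values for a planner, human or artificial, to aggregate. An editorial formalization, not in their notation:

```latex
% Ordinal preferences fix a utility function only up to monotone transforms.
% A ranking a \succ b \succ c is represented by any u with
\[
u(a) > u(b) > u(c),
\]
% and for every strictly increasing f : \mathbb{R} \to \mathbb{R},
\[
u(x) > u(y) \iff f(u(x)) > f(u(y)),
\]
% so f \circ u represents exactly the same preferences. The assignments
% u = (3, 2, 1) and u' = (1000, 2.5, 2.4) carry identical ordinal content;
% "how much more" a is preferred to b is undefined, which is the
% commensurability gap a planner would need to close.
```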
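For a sense of the computation Cockshott and Cottrell describe above, a kernel of labour-time planning schemes of this kind is recovering total labour content from an input–output table. The sketch below is editorial, with an invented three-sector table; it is not code from Towards a New Socialism:

```python
# Total (direct + indirect) labour values from an input-output table, the kind
# of computation labour-time planning schemes rely on. The three-sector table
# is invented for illustration.
import numpy as np

# A[i, j] = units of good i consumed in producing one unit of good j.
A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.05, 0.10],
              [0.05, 0.10, 0.15]])
labour = np.array([2.0, 1.0, 3.0])   # direct labour hours per unit of output

# Labour values v satisfy v = v A + labour, so v = labour (I - A)^(-1).
v = labour @ np.linalg.inv(np.eye(3) - A)
print("labour values per unit:", v.round(3))

# With n commodities this is an n x n linear solve; Cockshott and Cottrell
# argue such computations are feasible at the scale of a whole economy, while
# critics reply that gathering and valuing the data, not the algebra, is the
# hard part.
```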
[ { "paragraph_id": 0, "text": "The economic calculation problem (sometimes abbreviated ECP) is a criticism of using economic planning as a substitute for market-based allocation of the factors of production. It was first proposed by Ludwig von Mises in his 1920 article \"Economic Calculation in the Socialist Commonwealth\" and later expanded upon by Friedrich Hayek.", "title": "" }, { "paragraph_id": 1, "text": "In his first article, Mises described the nature of the price system under capitalism and described how individual subjective values (while criticizing other theories of value) are translated into the objective information necessary for rational allocation of resources in society. He argued that economy planning necessarily leads to an irrational and inefficient allocation of resources. In market exchanges, prices reflect the supply and demand of resources, labor and products. In the article, Mises focused his criticism on the deficiencies of the socialisation of capital goods, but he later went on to elaborate on various different forms of socialism in his book Socialism. He briefly mentioned the problem in the 3rd book of Human Action: a Treatise on Economics, where he also elaborated on the different types of socialism, namely the \"Hindenburg\" and \"Lenin\" models, which he viewed as fundamentally flawed despite their ideological differences.", "title": "" }, { "paragraph_id": 2, "text": "Mises and Hayek argued that economic calculation is only possible by information provided through market prices and that bureaucratic or technocratic methods of allocation lack methods to rationally allocate resources. Mises's analysis centered on price theory while Hayek went with a more feathered analysis of information and entrepreneurship. The debate raged in the 1920s and 1930s and that specific period of the debate has come to be known by economic historians as the socialist calculation debate. Mises' initial criticism received multiple reactions and led to the conception of trial-and-error market socialism, most notably the Lange–Lerner theorem.", "title": "" }, { "paragraph_id": 3, "text": "In the 1920 paper, Mises argued that the pricing systems in socialist economies were necessarily deficient because if a public entity owned all the means of production, no rational prices could be obtained for capital goods as they were merely internal transfers of goods and not \"objects of exchange\", unlike final goods. Therefore, they were unpriced and hence the system would be necessarily irrational as the central planners would not know how to allocate the available resources efficiently. He wrote that \"rational economic activity is impossible in a socialist commonwealth\". Mises developed his critique of socialism more completely in his 1922 book Socialism, arguing that the market price system is an expression of praxeology and can not be replicated by any form of bureaucracy.", "title": "" }, { "paragraph_id": 4, "text": "Notable critics of both Mises's original argument and Hayek's newer proposition include Anarcho-capitalist economist Bryan Caplan, computer programmer and Marxist Paul Cockshott, as well as other communists.", "title": "" }, { "paragraph_id": 5, "text": "Since capital goods and labor are highly heterogeneous (i.e. 
they have different characteristics that pertain to physical productivity), economic calculation requires a common basis for comparison for all forms of capital and labour.", "title": "Theory" }, { "paragraph_id": 6, "text": "As a means of exchange, money enables buyers to compare the costs of goods without having knowledge of their underlying factors; the consumer can simply focus on his personal cost-benefit decision. Therefore, the price system is said to promote economically efficient use of resources by agents who may not have explicit knowledge of all of the conditions of production or supply. This is called the signalling function of prices, as well as the rationing function, which prevents over-use of any resource.", "title": "Theory" }, { "paragraph_id": 7, "text": "Without the market process to fulfill such comparisons, critics of non-market socialism say that it lacks any way to compare different goods and services and would have to rely on calculation in kind. The resulting decisions, it is claimed, would therefore be made without sufficient knowledge to be considered rational.", "title": "Theory" }, { "paragraph_id": 8, "text": "The common basis for comparison of capital goods must also be connected to consumer welfare. It must also be able to compare the desired trade-off between present consumption and delayed consumption (for greater returns later on) via investment in capital goods. The use of money as a medium of exchange and unit of account is necessary to solve the first two problems of economic calculation. Mises (1912) applied the marginal utility theory developed by Carl Menger to money.", "title": "Theory" }, { "paragraph_id": 9, "text": "Marginal consumer expenditures represent the marginal utility or additional consumer satisfaction expected by consumers as they spend money. This is similar to the equi-marginal principle developed by Alfred Marshall. Consumers equalize the marginal utility (amount of satisfaction) of the last dollar spent on each good. Thus, the exchange of consumer goods establishes prices that represent the marginal utility of consumers and money is representative of consumer satisfaction.", "title": "Theory" }, { "paragraph_id": 10, "text": "If money is also spent on capital goods and labor, then it is possible to make comparisons between capital goods and consumer goods. The exchange of consumer and capital/labor goods does not imply that capital goods are valued accurately, only that it is possible for the valuations of capital goods to be made. These are foundational elements of economic calculation, namely that it requires the use of money across all goods. This is a necessary, but not sufficient, condition for successful economic calculation. Without a price mechanism, Mises argues, socialism lacks the means to relate consumer satisfaction to economic activity. The incentive function of prices allows diffuse interests, like the interest of every household in cheap, high-quality shoes, to compete with the concentrated interest of the cobblers in expensive, poor-quality shoes. Without it, a panel of experts set up to \"rationalise production\", likely closely linked to the cobblers for expertise, would tend to support the cobblers' interests in a \"conspiracy against the public\". However, if this happens to all industries, everyone would be worse off than if they had been subject to the rigours of market competition.
The latter forces producers to produce superior products at appropriate prices to please their consumers.", "title": "Theory" }, { "paragraph_id": 11, "text": "The Mises theory of money and calculation conflicts directly with the Marxist labour theory of value. Marxist theory allows for the possibility that labour content can serve as a common means of valuing capital goods, a position now out of favour with economists following the success of the theory of marginal utility.", "title": "Theory" }, { "paragraph_id": 12, "text": "The third condition for economic calculation is the existence of genuine entrepreneurship and market rivalry.", "title": "Theory" }, { "paragraph_id": 13, "text": "According to Israel Kirzner (1973) and Don Lavoie (1985), entrepreneurs reap profits by supplying unfulfilled needs in all markets. Thus, entrepreneurship brings prices closer to marginal costs. The adjustment of prices in markets towards equilibrium (where supply and demand are equal) gives them greater utilitarian significance. The activities of entrepreneurs make prices more accurate in terms of how they represent the marginal utility of consumers. Prices act as guides to the planning of production. Those who plan production use prices to decide which lines of production should be expanded or curtailed.", "title": "Theory" }, { "paragraph_id": 14, "text": "Entrepreneurs lack the profit motive to take risks under socialism and so are far less likely to attempt to supply consumer demands. Without the price system to match consumer utility to incentives for production, or even indicate those utilities \"without providing incentives\", state planners are much less likely to invest in new ideas to satisfy consumers' desires. Entrepreneurs would also lack the ability to economize within the production process, causing repercussions for consumers.", "title": "Theory" }, { "paragraph_id": 15, "text": "The fourth condition for successful economic calculation is plan coordination among those who plan production. The problem of planning production is the knowledge problem explained by Hayek (1937, 1945), but first mentioned and illustrated by his mentor Mises in Socialism (1922), not to be confused with Socialism: An Economic and Sociological Analysis (1951). The planning could either be done in a decentralised fashion, requiring some mechanism to make the individual plans coherent, or centrally, requiring a lot of information.", "title": "Theory" }, { "paragraph_id": 16, "text": "Within capitalism, the overall plan for production is composed of individual plans among capitalists in large and small enterprises. Since capitalists purchase labour and capital out of the same common pool of available yet scarce labor and capital, it is essential that their plans fit together in at least a semi-coherent fashion. Hayek (1937) defined an efficient planning process as one where all decision makers form plans that contain relevant data from the plans of others. Entrepreneurs acquire data on the plans of others through the price system. The price system is an indispensable communications network for plan coordination among entrepreneurs.
Increases and decreases in prices inform entrepreneurs about the general economic situation, to which they must adjust their own plans.", "title": "Theory" }, { "paragraph_id": 17, "text": "As for socialism, Mises (1944) and Hayek (1937) insisted that bureaucrats in individual ministries could not coordinate their plans without a price system due to the local knowledge problem. Opponents argued that in principle an economy can be seen as a set of equations. Thus, using information about available resources and the preferences of people, it should be possible to calculate an optimal solution for resource allocation. Friedrich von Hayek responded that the system of equations required too much information that would not be easily available and the ensuing calculations would be too difficult. This is partly because individuals possess useful knowledge but do not realise its importance, may have no incentive to transmit the information, or may have incentive to transmit false information about their preferences. He contended that the only rational solution is to utilize all the dispersed knowledge in the marketplace through the use of price signals. The early debates took place before the much greater calculating powers of modern computers became available, but also before research on chaos theory. In the 1980s, Alexander Nove argued that the calculations would take millions of years even with the best computers. It may be impossible to make long-term predictions for a highly complex system such as an economy.", "title": "Theory" }, { "paragraph_id": 18, "text": "Hayek (1935, 1937, 1940, 1945) stressed the knowledge problem of central planning, partly because decentralized socialism seemed indefensible. Part of the reason Hayek stressed the knowledge problem was that he was mainly concerned with debating the proposal for market socialism and the Lange model by Oskar R. Lange (1938) and Hayek's student Abba Lerner (1934, 1937, 1938), which was developed in response to the calculation argument. Lange and Lerner conceded that prices were necessary in socialism. Lange and Lerner thought that socialist officials could simulate some markets (mainly spot markets) and that the simulation of spot markets was enough to make socialism reasonably efficient. Lange argued that prices can be seen merely as an accounting practice. In principle, claim market socialists, socialist managers of state enterprises could use a price system, as an accounting system, in order to minimize costs and convey information to other managers. However, while this can deal with existing stocks of goods, provided a basis for values can be ascertained, it does not deal with the investment in new capital stocks. Hayek responded by arguing that the simulation of markets in socialism would fail due to a lack of genuine competition and entrepreneurship. Central planners would still have to plan production without the aid of economically meaningful prices. Lange and Lerner also admitted that socialism would lack any simulation of financial markets, and that this would cause problems in planning capital investment.", "title": "Theory" }, { "paragraph_id": 19, "text": "However, Hayek's argument concerns not only computational complexity for the central planners. He further argues that much of the information individuals have cannot be collected or used by others. First, individuals may have little or no incentive to share their information with central or even local planners.
Second, the individual may not be aware that he has valuable information; and when he becomes aware, it is only useful for a limited time, too short for it to be communicated to the central or local planners. Third, the information is useless to other individuals if it is not in a form that allows for meaningful comparisons of value (i.e. money prices as a common basis for comparison). Therefore, Hayek argues, individuals must acquire data through prices in real markets.", "title": "Theory" }, { "paragraph_id": 20, "text": "The fifth condition for successful economic calculation is the existence of well-functioning financial markets. Economic efficiency depends heavily upon avoiding errors in capital investment. The costs of reversing errors in capital investment are potentially large. This is not just a matter of rearranging or converting capital goods that are found to be of little use. The time spent reconfiguring the structure of production is time lost in the production of consumer goods. Those who plan capital investment must anticipate future trends in consumer demand if they are to avoid investing too much in some lines of production and too little in other lines of production.", "title": "Theory" }, { "paragraph_id": 21, "text": "Capitalists plan production for profit. Capitalists use prices to form expectations that determine the composition of capital accumulation, the pattern of investment across industry. Those who invest in accordance with consumers' desires are rewarded with profits; those who do not are forced to become more efficient or go out of business.", "title": "Theory" }, { "paragraph_id": 22, "text": "Prices in futures markets play a special role in economic calculation. Futures markets develop prices for commodities in future time periods. It is in futures markets that entrepreneurs sort out plans for production based on their expectations. Futures markets are a link between entrepreneurial investment decisions and household consumer decisions. Since most goods are not explicitly traded in futures markets, substitute markets are needed. The stock market serves as a ‘continuous futures market’ that evaluates entrepreneurial plans for production (Lachmann 1978). Generally speaking, the problem of economic calculation is solved in financial markets, as Mises argued:", "title": "Theory" }, { "paragraph_id": 23, "text": "The problem of economic calculation arises in an economy which is perpetually subject to change [...]. In order to solve such problems it is above all necessary that capital be withdrawn from particular undertakings and applied in other lines of production [...]. [This] is essentially a matter of the capitalists who buy and sell stocks and shares, who make loans and recover them, who speculate in all kinds of commodities.", "title": "Theory" }, { "paragraph_id": 24, "text": "The existence of financial markets is a necessary condition for economic calculation. The existence of financial markets itself does not automatically imply that entrepreneurial speculation will tend towards efficiency. Mises argued that speculation in financial markets tends towards efficiency because of a \"trial and error\" process. Entrepreneurs who commit relatively large errors in investment waste their funds by over-expanding some lines of production at the cost of other, more profitable ventures where consumer demand is higher. The entrepreneurs who commit the worst errors by forming the least accurate expectations of future consumer demands incur financial losses.
Financial losses remove these inept entrepreneurs from positions of authority in industry.", "title": "Theory" }, { "paragraph_id": 25, "text": "Entrepreneurs who commit smaller errors by anticipating consumer demand more correctly attain greater financial success. The entrepreneurs who form the most accurate opinions regarding the future state of markets (i.e. new trends in consumer demands) earn the highest profits and gain greater control of industry. Those entrepreneurs who anticipate future market trends therefore waste the least amount of real capital and find the most favorable terms for finance on markets for financial capital. Minimal waste of real capital goods implies minimizing the opportunity costs of capital in economic calculation. The value of capital goods is brought into line with the value of future consumer goods through competition in financial markets, because competition for profits among capitalist financiers rewards entrepreneurs who value capital more correctly (i.e. anticipating future prices more correctly) and eliminates capitalists who value capital least correctly. To sum up, the use of money in trading all goods (capital/labor and consumer) in all markets (spot and financial), combined with profit-driven entrepreneurship and Darwinian natural selection in financial markets, makes rational economic calculation and allocation the outcome of the capitalist process.", "title": "Theory" }, { "paragraph_id": 26, "text": "Mises insisted that socialist calculation is impossible because socialism precludes the exchange of capital goods in terms of a generally accepted medium of exchange, or money. Investment in financial markets determines the capital structure of modern industry with some degree of efficiency. The egalitarian nature of socialism prohibits speculation in financial markets. Therefore, Mises concluded that socialism lacks any clear tendency towards improvement in the capital structure of industry.", "title": "Theory" }, { "paragraph_id": 27, "text": "Mises gave the example of choosing between producing wine or oil, making the following point:", "title": "Example" }, { "paragraph_id": 28, "text": "It will be evident, even in the socialist society, that 1,000 hectolitres of wine are better than 800, and it is not difficult to decide whether it desires 1,000 hectolitres of wine rather than 500 of oil. There is no need for any system of calculation to establish this fact: the deciding element is the will of the economic subjects involved. But once this decision has been taken, the real task of rational economic direction only commences, i.e., economically, to place the means at the service of the end. That can only be done with some kind of economic calculation. The human mind cannot orient itself properly among the bewildering mass of intermediate products and potentialities of production without such aid. It would simply stand perplexed before the problems of management and location.", "title": "Example" }, { "paragraph_id": 29, "text": "Such intermediate products would include land, warehouse storage, bottles, barrels, oil, transport, etc. Not only would these things have to be assembled, but they would have to compete with the attainment of other economic goals. Without pricing for capital goods, Mises is essentially arguing, it is impossible to know how to use them rationally and efficiently.
And since the absence of pricing presupposes the prior absence of a current standard of exchange, investment in particular becomes impossible. In other words, the potential future outputs cannot be measured by any current standard, let alone a monetary one required for economic calculation. Likewise, the value consumers have for current consumption over future consumption cannot be expressed, quantified or implemented.", "title": "Example" }, { "paragraph_id": 30, "text": "Some academics and economists argue that the claim that a free market is an efficient, or even the most efficient, method of resource allocation is incorrect. Alexander Nove argued that Mises \"tends to spoil his case by the implicit assumption that capitalism and optimum resource allocation go together\" in Mises' \"Economic Calculation in the Socialist Commonwealth\". Joan Robinson argued that many prices in modern capitalism are effectively \"administered prices\" created by \"quasi monopolies\", thus challenging the connection between capital markets and rational resource allocation.", "title": "Criticism" }, { "paragraph_id": 31, "text": "Socialist market abolitionists argue that whilst advocates of capitalism, and the Austrian School in particular, recognize that equilibrium prices do not exist in real life, they nonetheless claim that these prices can be used as a rational basis when this is not the case; hence, markets are not efficient. Robin Hahnel further argued that market inefficiencies, such as externalities and excess supply and demand, arise from buyers and sellers thoughtlessly maximizing their rational interests, which free markets inherently do not deter. Nonetheless, Hahnel commended current policies pursued by free-market capitalist societies against these inefficiencies (e.g. Pigouvian taxes, antitrust laws, etc.), as long as they are properly calculated and consistently enforced.", "title": "Criticism" }, { "paragraph_id": 32, "text": "Milton Friedman agreed that markets with monopolistic competition are not efficient, but he argued that it is easy to force monopolies to adopt competitive behavior by exposing them to foreign rivals. Economic liberals and libertarian capitalists also argue that monopolies and big business are not generally the result of a free market, or that they never arise from a free market; rather, they say that such concentration is enabled by governmental grants of franchises or privileges. That said, protectionist economies can theoretically still foster competition as long as there is strong consumer switching. Joseph Schumpeter additionally argued that economic advancement, through innovation and investment, is often driven by large monopolies.", "title": "Criticism" }, { "paragraph_id": 33, "text": "Allin Cottrell, Paul Cockshott and Greg Michaelson argued that the contention that finding a true economic equilibrium is not just hard but impossible for a central planner applies equally well to a market system. As any universal Turing machine can do what any other Turing machine can, a central calculator in principle has no advantage over a system of dispersed calculators, i.e. a market, or vice versa.", "title": "Criticism" }, { "paragraph_id": 34, "text": "In some economic models, finding an equilibrium is hard, and finding an Arrow–Debreu equilibrium is PPAD-complete. If the market can find an equilibrium in polynomial time, then the equivalence above can be used to prove that P=PPAD.
This line of argument thus attempts to show that any claim to impossibility must necessarily involve a local knowledge problem, because the planning system is no less capable than the market if given full information.", "title": "Criticism" }, { "paragraph_id": 35, "text": "Don Lavoie makes a local knowledge argument by taking this implication in reverse. The market socialists pointed out the formal similarity between the neoclassical model of Walrasian general equilibrium and that of market socialism, which simply replaces the Walrasian auctioneer with a planning board. According to Lavoie, this emphasizes the shortcomings of the model. By relying on this formal similarity, the market socialists must adopt the simplifying assumptions of the model. The model assumes that various sorts of information are given to the auctioneer or planning board. However, if not coordinated by a capital market, this information exists in a fundamentally distributed form, which would be difficult for the planners to utilize. If the planners decided to utilize the information, it would immediately become stale and relatively useless, unless reality somehow imitated the changeless monotony of the equilibrium model. The existence and usability of this information depends on its creation and situation within a distributed discovery procedure.", "title": "Criticism" }, { "paragraph_id": 36, "text": "One criticism is that proponents of the theory overstate the strength of their case by describing socialism as impossible rather than inefficient. In explaining why he is not an Austrian School economist, anarcho-capitalist economist Bryan Caplan argues that, while the economic calculation problem is a problem for socialism, Mises has not shown it to be fatal, nor is it this particular problem that led to the collapse of authoritarian socialist states. Caplan also holds that the problem is exaggerated; in his view, Mises did not manage to prove why economic calculation made the socialist economy 'impossible', and even if there were serious doubts about the efficiency of cost-benefit analysis, other arguments are plentiful (Caplan gives the example of the incentive problem).", "title": "Criticism" }, { "paragraph_id": 37, "text": "Joan Robinson argued that in a steady-state economy there would be an effective abundance of means of production and so markets would not be needed. Mises acknowledged such a theoretical possibility in his original tract when he said the following: \"The static state can dispense with economic calculation. For here the same events in economic life are ever recurring; and if we assume that the first disposition of the static socialist economy follows on the basis of the final state of the competitive economy, we might at all events conceive of a socialist production system which is rationally controlled from an economic point of view.\" However, he contended that stationary conditions never prevail in the real world. Changes in economic conditions are inevitable; and even if they were not, the transition to socialism would be so chaotic as to preclude the existence of such a steady state from the start.", "title": "Criticism" }, { "paragraph_id": 38, "text": "The purpose of the price mechanism is to allow individuals to recognise the opportunity cost of decisions. In a state of abundance, there is no such cost, which is to say that in situations where one need not economize, economics does not apply, e.g. areas with abundant fresh air and water.
Otto Neurath and Hillel Ticktin argued that with detailed use of real unit accounting and demand surveys, a planned economy could operate without a capital market in a situation of abundance.", "title": "Criticism" }, { "paragraph_id": 39, "text": "In Towards a New Socialism's \"Information and Economics: A Critique of Hayek\" and \"Against Mises\", Paul Cockshott and Allin Cottrell argued that the use of computational technology now simplifies economic calculation and allows planning to be implemented and sustained. Len Brewster replied to this by arguing that Towards a New Socialism establishes what is essentially another form of a market economy, making the following point:", "title": "Criticism" }, { "paragraph_id": 40, "text": "[A]n examination of C&C's New Socialism confirms Mises's conclusion that rational socialist planning is impossible. It appears that in order for economic planners to have any useful data by which they might be guided, a market must be hauled in, and with it analogues of private property, inequality and exploitation.", "title": "Criticism" }, { "paragraph_id": 41, "text": "In response, Cockshott argued that the economic system is sufficiently far removed from a capitalist free-market economy to not count as one, saying:", "title": "Criticism" }, { "paragraph_id": 42, "text": "Those that Hayek was arguing against like Lange and Dickinson allowed for markets in consumer goods, this did not lead Hayek to say : Oh you are not really arguing for socialism since you have conceded a market in consumer goods, he did not, because there remained huge policy differences between him and Lange even if Lange accepted consumer goods markets. It is thus a very weak argument by Brewster to say that what we advocate is not really socialist calculation because it is contaminated in some way by market influences.", "title": "Criticism" }, { "paragraph_id": 43, "text": "Leigh Phillips' and Michal Rozworski's 2019 book The People's Republic of Walmart argues that multinational corporations like Walmart and Amazon already operate centrally planned economies in a more technologically sophisticated manner than the Soviet Union, proving that the economic calculation problem is surmountable. There are some contentions to this view, however, chiefly over how economic planning and a planned economy ought to be distinguished: both entail formulating data-driven economic objectives, but the latter precludes this from occurring within a free-market context and delegates the task to centralized bodies.", "title": "Criticism" }, { "paragraph_id": 44, "text": "Karras J. Lambert and Tate Fegley argue that artificial intelligence systems, no matter how advanced, cannot assume the role of central planners because they do not fulfill the prerequisites of effective economic calculation. These prerequisites include the ability to convert the ordinal preferences of producers and consumers into commensurate cardinal utility values that are available and agreed upon, and the ability to forecast future market interactions.", "title": "Criticism" }, { "paragraph_id": 45, "text": "One reason is that such systems depend on Big Data, which in turn is entirely based on past information. Hence, the system cannot draw any meaningful conclusions about future consumer preferences, which are required for optimal pricing. This necessitates the intervention of the programmer, who is highly likely to be biased in their judgments. Even the manner by which a system can \"predict\" consumer preferences is based on a programmer's creative bias.
They further argue that even if an artificial intelligence were able to rank items ordinally as humans do, it would still suffer from the same issue: it could not conceive of a pricing structure in which meaningful pricing calculations, using a common cardinal utility unit, can be formed. Nonetheless, Lambert and Fegley acknowledge that entrepreneurs can benefit from Big Data's predictive value, provided that the data is based on past market prices and that it is used in tandem with free-market-style bidding.", "title": "Criticism" } ]
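The equi-marginal principle invoked in the Theory paragraphs above can be written as a single condition. The following is a minimal sketch of the standard textbook formulation, not a formula taken from Mises or the article itself; the two-good numbers in the comments are purely hypothetical:

```latex
% Equi-marginal condition: a consumer allocates spending so that the
% marginal utility of the last dollar is equal across all goods.
% Hypothetical two-good check: MU_bread = 12 utils at p_bread = 3 dollars
% and MU_milk = 8 utils at p_milk = 2 dollars give 12/3 = 8/2 = 4 utils
% per dollar, so no reallocation of spending can raise total satisfaction.
\[
  \frac{MU_1}{p_1} = \frac{MU_2}{p_2} = \cdots = \frac{MU_n}{p_n} = \lambda ,
\]
% where MU_i is the marginal utility of good i, p_i its money price, and
% lambda is the marginal utility of money itself.
```

This is the sense in which, as the Theory section puts it, prices established by the exchange of consumer goods represent marginal utility and make money a common basis for comparison.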
The economic calculation problem is a criticism of using economic planning as a substitute for market-based allocation of the factors of production. It was first proposed by Ludwig von Mises in his 1920 article "Economic Calculation in the Socialist Commonwealth" and later expanded upon by Friedrich Hayek. In his first article, Mises described the nature of the price system under capitalism and explained how individual subjective values are translated into the objective information necessary for rational allocation of resources in society. He argued that economic planning necessarily leads to an irrational and inefficient allocation of resources. In market exchanges, prices reflect the supply and demand of resources, labor and products. In the article, Mises focused his criticism on the deficiencies of the socialisation of capital goods, but he later went on to elaborate on various different forms of socialism in his book Socialism. He briefly mentioned the problem in the third book of Human Action: A Treatise on Economics, where he also elaborated on the different types of socialism, namely the "Hindenburg" and "Lenin" models, which he viewed as fundamentally flawed despite their ideological differences. Mises and Hayek argued that economic calculation is only possible with information provided through market prices and that bureaucratic or technocratic methods of allocation lack the means to allocate resources rationally. Mises's analysis centered on price theory, while Hayek offered a broader analysis of information and entrepreneurship. The debate raged in the 1920s and 1930s, and that specific period of the debate has come to be known by economic historians as the socialist calculation debate. Mises's initial criticism received multiple reactions and led to the conception of trial-and-error market socialism, most notably the Lange–Lerner theorem. In the 1920 paper, Mises argued that the pricing systems in socialist economies were necessarily deficient because if a public entity owned all the means of production, no rational prices could be obtained for capital goods, as they were merely internal transfers of goods and not "objects of exchange", unlike final goods. Therefore, they were unpriced and hence the system would be necessarily irrational, as the central planners would not know how to allocate the available resources efficiently. He wrote that "rational economic activity is impossible in a socialist commonwealth". Mises developed his critique of socialism more completely in his 1922 book Socialism, arguing that the market price system is an expression of praxeology and cannot be replicated by any form of bureaucracy. Notable critics of both Mises's original argument and Hayek's newer proposition include anarcho-capitalist economist Bryan Caplan, computer programmer and Marxist Paul Cockshott, as well as other communists.
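The trial-and-error market socialism mentioned in the abstract above (the Lange–Lerner theorem) turns on a price-adjustment rule: a planning board posts accounting prices, observes shortages and surpluses, and adjusts. Below is a minimal Python sketch of that rule on a toy one-good market; the linear demand and supply curves, the adjustment rate, and all numbers are assumptions for illustration, not anything specified by Lange, Lerner, Mises, or Hayek:

```python
# A toy sketch of Lange-style trial-and-error pricing (tatonnement):
# a planning board posts an accounting price, observes excess demand,
# and adjusts. The curves and the adjustment rate are hypothetical.

def excess_demand(price: float) -> float:
    """Hypothetical one-good market: demand falls, supply rises with price."""
    demand = 100.0 - 4.0 * price
    supply = 20.0 + 2.0 * price
    return demand - supply

def adjust_price(price: float, rate: float = 0.05, steps: int = 200) -> float:
    """Raise the posted price on a shortage, lower it on a surplus."""
    for _ in range(steps):
        gap = excess_demand(price)
        if abs(gap) < 1e-9:      # the accounting price has converged
            break
        price += rate * gap      # the board's trial-and-error rule
    return price

if __name__ == "__main__":
    p = adjust_price(price=1.0)
    print(f"posted price: {p:.3f}, remaining excess demand: {excess_demand(p):.2e}")
    # Converges to p = 80/6 = 13.333..., where 100 - 4p = 20 + 2p.
```

Hayek's rejoinder in the Theory paragraphs applies directly to this sketch: the loop presumes that excess demand is a known, stable function, whereas on his account that information is dispersed, tacit, and constantly changing, so planners could never write the excess_demand function down in the first place.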
2001-03-30T21:52:00Z
2023-11-24T03:11:18Z
[ "Template:Short description", "Template:Blockquote", "Template:Cols", "Template:Reflist", "Template:Cite web", "Template:More citations needed", "Template:Citation needed", "Template:Socialism sidebar", "Template:Colend", "Template:Cite book", "Template:Cite journal", "Template:Webarchive", "Template:Austrian School sidebar", "Template:Quote" ]
https://en.wikipedia.org/wiki/Economic_calculation_problem
9,299
Erasmus Darwin
Erasmus Robert Darwin FRS (12 December 1731 – 18 April 1802) was an English physician. One of the key thinkers of the Midlands Enlightenment, he was also a natural philosopher, physiologist, slave-trade abolitionist, inventor, and poet. His poems included much natural history, including a statement of evolution and the relatedness of all forms of life. He was a member of the Darwin–Wedgwood family, which includes his grandsons Charles Darwin and Francis Galton. Darwin was a founding member of the Lunar Society of Birmingham, a discussion group of pioneering industrialists and natural philosophers. He turned down an invitation from George III to become Physician to the King. Darwin was born in 1731 at Elston Hall, Nottinghamshire, near Newark-on-Trent, England, the youngest of seven children of Robert Darwin of Elston (1682–1754), a lawyer and physician, and his wife Elizabeth Hill (1702–97). The name Erasmus had been used by a number of his family and derives from his ancestor Erasmus Earle, Common Serjeant of England under Oliver Cromwell. His siblings were: He was educated at Chesterfield Grammar School, and later at St John's College, Cambridge. He obtained his medical education at the University of Edinburgh Medical School. Darwin settled in 1756 as a physician at Nottingham, but met with little success and so moved the following year to Lichfield to try to establish a practice there. A few weeks after his arrival, using a novel course of treatment, he restored the health of a young fisherman whose death seemed inevitable. This ensured his success in the new locale. Darwin was a highly successful physician for more than fifty years in the Midlands. In 1761, he was elected to the Royal Society. George III invited him to be Royal Physician, but Darwin declined. Darwin married twice and had 14 children, including two illegitimate daughters by an employee, and, possibly, at least one further illegitimate daughter. In 1757 he married Mary (Polly) Howard (1740–1770), the daughter of Charles Howard, a Lichfield solicitor. They had four sons and one daughter, two of whom (a son and a daughter) died in infancy: The first Mrs. Darwin died in 1770. A governess, Mary Parker, was hired to look after Robert. By late 1771, employer and employee had become intimately involved and together they had two illegitimate daughters: Susanna and Mary Jr later established a boarding school for girls. In 1782, Mary Sr (the governess) married Joseph Day (1745–1811), a Birmingham merchant, and moved away. Darwin may have fathered another child, this time with a married woman. A Lucy Swift gave birth in 1771 to a baby, also named Lucy, who was christened a daughter of her mother and William Swift, but there is reason to believe the father was really Darwin. Lucy Jr. married John Hardcastle in Derby in 1792 and their daughter, Mary, married Francis Boott, the physician. In 1775, Darwin met Elizabeth Pole, daughter of Charles Colyear, 2nd Earl of Portmore, and wife of Colonel Edward Pole (1718–1780); but as she was married, Darwin could only make his feelings known for her through poetry. When Edward Pole died, Darwin married Elizabeth and moved to her home, Radbourne Hall, four miles (6.4 km) west of Derby. The hall and village are these days known as Radbourne. In 1782, they moved to Full Street, Derby. They had four sons, one of whom died in infancy, and three daughters: Darwin's personal appearance is described in unflattering detail in his Biographical Memoirs, printed by the Monthly Magazine in 1802.
Darwin, the description reads, "was of middle stature, in person gross and corpulent; his features were coarse, and his countenance heavy; if not wholly void of animation, it certainly was by no means expressive. The print of him, from a painting of Mr. Wright, is a good likeness. In his gait and dress he was rather clumsy and slovenly, and frequently walked with his tongue hanging out of his mouth." Darwin had been a Freemason throughout his life, in the Time Immemorial Lodge of Cannongate Kilwinning, No. 2, of Scotland. Later on, Sir Francis Darwin, one of his sons, was made a Mason in Tyrian Lodge, No. 253, at Derby, in 1807 or 1808. His son Reginald was made a Mason in Tyrian Lodge in 1804. Charles Darwin's name does not appear on the rolls of the Lodge but it is very possible that he, like Francis, was a Mason, as he held many Masonic beliefs such as Deism throughout his life. Darwin died suddenly on 18 April 1802, weeks after having moved to Breadsall Priory, just north of Derby. The Monthly Magazine of 1802, in its Biographical Memoirs of the Late Dr. Darwin, reports that "during the last few years, Dr. Darwin was much subject to inflammation in his breast and lungs; he had a very serious attack of this disease in the course of the last Spring, from which, after repeated bleedings, by himself and a surgeon, he with great difficulty recovered." Darwin's death, the Biographical Memoirs continues, "is variously accounted for: it is supposed to have been caused by the cold fit of an inflammatory fever. Dr. Fox, of Derby, considers the disease which occasioned it to have been angina pectoris; but Dr. Garlicke, of the same place, thinks this opinion not sufficiently well founded. Whatever was the disease, it is not improbable, surely, that the fatal event was hastened by the violent fit of passion with which he was seized in the morning." His body is buried in All Saints' Church, Breadsall. Erasmus Darwin is commemorated on one of the Moonstones, a series of monuments in Birmingham. Darwin formed 'A Botanical Society, at Lichfield' almost always incorrectly named as the Lichfield Botanical Society (despite the name, composed of only three men, Erasmus Darwin, Sir Brooke Boothby and Mr John Jackson, proctor of Lichfield Cathedral) to translate the works of the Swedish botanist Carl Linnaeus from Latin into English. This took seven years. The result was two publications: A System of Vegetables between 1783 and 1785, and The Families of Plants in 1787. In these volumes, Darwin coined many of the English names of plants that we use today. Darwin then wrote The Loves of the Plants, a long poem, which was a popular rendering of Linnaeus' works. Darwin also wrote Economy of Vegetation, and together the two were published as The Botanic Garden. Among other writers he influenced were Anna Seward and Maria Jacson. Darwin's most important scientific work, Zoonomia (1794–1796), contains a system of pathology and a chapter on 'Generation'. In the latter, he anticipated some of the views of Jean-Baptiste Lamarck, which foreshadowed the modern theory of evolution. Erasmus Darwin's works were read and commented on by his grandson Charles Darwin the naturalist. Erasmus Darwin based his theories on David Hartley's psychological theory of associationism. 
The essence of his views is contained in the following passage, which he follows up with the conclusion that one and the same kind of living filament is and has been the cause of all organic life: Would it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which THE GREAT FIRST CAUSE endued with animality, with the power of acquiring new parts, attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end! Erasmus Darwin also anticipated survival of the fittest in Zoönomia mainly when writing about the "three great objects of desire" for every organism: "lust, hunger, and security." A similar "survival of the fittest" view in Zoönomia is Erasmus' view on how a species "should" propagate itself. Erasmus' idea was that "the strongest and most active animal should propagate the species, which should thence become improved". Today, this is called the theory of survival of the fittest. His grandson Charles Darwin posited the different and fuller theory of natural selection. Charles' theory was that natural selection is the inheritance of changed genetic characteristics that are better adaptations to the environment; these are not necessarily based in "strength" and "activity", which themselves ironically can lead to the overpopulation that results in natural selection yielding nonsurvivors of genetic traits. Erasmus Darwin was familiar with the earlier proto-evolutionary thinking of James Burnett, Lord Monboddo, and cited him in his 1803 work Temple of Nature. Erasmus Darwin offered the first glimpse of his theory of evolution, obliquely, in a question at the end of a long footnote to his popular poem The Loves of the Plants (1789), which was republished throughout the 1790s in several editions as The Botanic Garden. His poetic concept was to anthropomorphise the stamen (male) and pistil (female) sexual organs, as bride and groom. In this stanza on the flower Curcuma (also Flax and Turmeric) the "youths" are infertile, and he devotes the footnote to other examples of neutered organs in flowers, insect castes, and finally associates this more broadly with many popular and well-known cases of vestigial organs (male nipples, the third and fourth wings of flies, etc.) Woo'd with long care, CURCUMA cold and shy Meets her fond husband with averted eye: Four beardless youths the obdurate beauty move With soft attentions of Platonic love. Darwin's final long poem, The Temple of Nature was published posthumously in 1803. The poem was originally titled The Origin of Society. It is considered his best poetic work. It centres on his own conception of evolution. The poem traces the progression of life from micro-organisms to civilised society. The poem contains a passage that describes the struggle for existence. His poetry was admired by Wordsworth, while Coleridge was intensely critical, writing, "I absolutely nauseate Darwin's poem". It often made reference to his interests in science; for example botany and steam engines.
The last two leaves of Darwin's A plan for the conduct of female education in boarding schools (1797) contain a book list, an apology for the work, and an advert for "Miss Parkers School". The school advertised on the last page is the one he set up in Ashbourne, Derbyshire, for his two illegitimate children, Susanna and Mary. Darwin regretted that a good education had not been generally available to women in Britain in his time, and drew on the ideas of Locke, Rousseau, and Genlis in organising his thoughts. Addressing the education of middle-class girls, Darwin argued that amorous romance novels were inappropriate and that they should seek simplicity in dress. He contends that young women should be educated in schools, rather than privately at home, and learn appropriate subjects. These subjects include physiognomy, physical exercise, botany, chemistry, mineralogy, and experimental philosophy. They should familiarise themselves with arts and manufactures through visits to sites like Coalbrookdale, and Wedgwood's potteries; they should learn how to handle money, and study modern languages. Darwin's educational philosophy took the view that men and women should have different capabilities, skills, interests, and spheres of action, where the woman's education was designed to support and serve male accomplishment and financial reward, and to relieve him of daily responsibility for children and the chores of life. In the context of the times, this program may be read as a modernising influence in the sense that the woman was at least to learn about the "man's world", although not be allowed to participate in it. The text was written seven years after A Vindication of the Rights of Woman by Mary Wollstonecraft, which has the central argument that women should be educated in a rational manner to give them the opportunity to contribute to society. Some women of Darwin's era were receiving more substantial education and participating in the broader world. An example is Susanna Wright, who was raised in Lancashire and became an American colonist associated with the Midlands Enlightenment. It is not known whether Darwin and Wright knew each other, although they definitely knew many people in common. Other women who received substantial education and who participated in the broader world (albeit sometimes anonymously) whom Darwin definitely knew were Maria Jacson and Anna Seward. These dates indicate the year in which Darwin became friends with these people, who, in turn, became members of the Lunar Society. The Lunar Society existed from 1765 to 1813. Before 1765: After 1765: Darwin also established a lifelong friendship with Benjamin Franklin, who shared Darwin's support for the American and French revolutions. The Lunar Society was instrumental as an intellectual driving force behind England's Industrial Revolution. The members of the Lunar Society, and especially Darwin, opposed the slave trade. He attacked it in The Botanic Garden (1789–1791), and in The Loves of Plants (1789), The Economy of Vegetation (1791), and the Phytologia (1800). In 1761, Darwin was elected a fellow of the Royal Society. In addition to the Lunar Society, Erasmus Darwin belonged to the influential Derby Philosophical Society, as did his brother-in-law Samuel Fox (see family tree below). He experimented with the use of air and gases to alleviate infections and cancers in patients. A Pneumatic Institution was established at Clifton in 1799 for clinically testing these ideas. 
He conducted research into the formation of clouds, on which he published in 1788. He also inspired Robert Weldon's Somerset Coal Canal caisson lock. In 1792, Darwin was elected as a member to the American Philosophical Society in Philadelphia. Percy Bysshe Shelley specifically mentions Darwin in the first sentence of the 1818 Preface to Frankenstein to support his contention that the creation of life is possible. His wife Mary Shelley, in her introduction to the 1831 edition of Frankenstein, wrote that she overheard her husband and Lord Byron talk about unspecified "experiments of Dr. Darwin" that led to the idea for the novel. Contemporary literature dates the cosmological theories of the Big Bang and Big Crunch to the 19th and 20th centuries. However, Erasmus Darwin had speculated on these sorts of events in The Botanic Garden, A Poem in Two Parts: Part 1, The Economy of Vegetation, 1791: Roll on, ye Stars! exult in youthful prime, Mark with bright curves the printless steps of Time; Near and more near your beamy cars approach, And lessening orbs on lessening orbs encroach; — Flowers of the sky! ye too to age must yield, Frail as your silken sisters of the field. Star after star from Heaven's high arch shall rush, Suns sink on suns, and systems, systems crush, Headlong, extinct, to one dark centre fall, And death and night and chaos mingle all: — Till o'er the wreck, emerging from the storm, Immortal Nature lifts her changeful form, Mounts from her funeral pyre on wings of flame, And soars and shines, another and the same! Darwin was the inventor of several devices, though he did not patent any: he believed this would damage his reputation as a doctor. He encouraged his friends to patent their own modifications of his designs. In notes dating to 1779, Darwin made a sketch of a simple hydrogen-oxygen rocket engine, with gas tanks connected by plumbing and pumps to an elongated combustion chamber and expansion nozzle, a concept not to be seen again until a century later. Erasmus Darwin House, his home in Lichfield, Staffordshire, is a museum dedicated to him and his life's work. A secondary school at Burntwood, near Lichfield, was renamed Erasmus Darwin Academy in 2011. A science building on the Clifton campus of Nottingham Trent University is named after him.
[ { "paragraph_id": 0, "text": "Erasmus Robert Darwin FRS (12 December 1731 – 18 April 1802) was an English physician. One of the key thinkers of the Midlands Enlightenment, he was also a natural philosopher, physiologist, slave-trade abolitionist, inventor, and poet.", "title": "" }, { "paragraph_id": 1, "text": "His poems included much natural history, including a statement of evolution and the relatedness of all forms of life.", "title": "" }, { "paragraph_id": 2, "text": "He was a member of the Darwin–Wedgwood family, which includes his grandsons Charles Darwin and Francis Galton. Darwin was a founding member of the Lunar Society of Birmingham, a discussion group of pioneering industrialists and natural philosophers.", "title": "" }, { "paragraph_id": 3, "text": "He turned down an invitation from George III to become Physician to the King.", "title": "" }, { "paragraph_id": 4, "text": "Darwin was born in 1731 at Elston Hall, Nottinghamshire, near Newark-on-Trent, England, the youngest of seven children of Robert Darwin of Elston (1682–1754), a lawyer and physician, and his wife Elizabeth Hill (1702–97). The name Erasmus had been used by a number of his family and derives from his ancestor Erasmus Earle, Common Sergent of England under Oliver Cromwell. His siblings were:", "title": "Early life and education" }, { "paragraph_id": 5, "text": "He was educated at Chesterfield Grammar School, then later at St John's College, Cambridge. He obtained his medical education at the University of Edinburgh Medical School.", "title": "Early life and education" }, { "paragraph_id": 6, "text": "Darwin settled in 1756 as a physician at Nottingham, but met with little success and so moved the following year to Lichfield to try to establish a practice there. A few weeks after his arrival, using a novel course of treatment, he restored the health of a young fisherman whose death seemed inevitable. This ensured his success in the new locale. Darwin was a highly successful physician for more than fifty years in the Midlands. In 1761, he was elected to the Royal Society. George III invited him to be Royal Physician, but Darwin declined.", "title": "Early life and education" }, { "paragraph_id": 7, "text": "Darwin married twice and had 14 children, including two illegitimate daughters by an employee, and, possibly, at least one further illegitimate daughter.", "title": "Personal life" }, { "paragraph_id": 8, "text": "In 1757 he married Mary (Polly) Howard (1740–1770), the daughter of Charles Howard, a Lichfield solicitor. They had four sons and one daughter, two of whom (a son and a daughter) died in infancy:", "title": "Personal life" }, { "paragraph_id": 9, "text": "The first Mrs. Darwin died in 1770. A governess, Mary Parker, was hired to look after Robert. By late 1771, employer and employee had become intimately involved and together they had two illegitimate daughters:", "title": "Personal life" }, { "paragraph_id": 10, "text": "Susanna and Mary Jr later established a boarding school for girls. In 1782, Mary Sr (the governess) married Joseph Day (1745–1811), a Birmingham merchant, and moved away.", "title": "Personal life" }, { "paragraph_id": 11, "text": "Darwin may have fathered another child, this time with a married woman. A Lucy Swift gave birth in 1771 to a baby, also named Lucy, who was christened a daughter of her mother and William Swift, but there is reason to believe the father was really Darwin. Lucy Jr. 
married John Hardcastle in Derby in 1792 and their daughter, Mary, married Francis Boott, the physician.", "title": "Personal life" }, { "paragraph_id": 12, "text": "In 1775, Darwin met Elizabeth Pole, daughter of Charles Colyear, 2nd Earl of Portmore, and wife of Colonel Edward Pole (1718–1780); but as she was married, Darwin could only make his feelings known for her through poetry. When Edward Pole died, Darwin married Elizabeth and moved to her home, Radbourne Hall, four miles (6.4 km) west of Derby. The hall and village are these days known as Radbourne. In 1782, they moved to Full Street, Derby. They had four sons, one of whom died in infancy, and three daughters:", "title": "Personal life" }, { "paragraph_id": 13, "text": "Darwin's personal appearance is described in unflattering detail in his Biographical Memoirs, printed by the Monthly Magazine in 1802. Darwin, the description reads, \"was of middle stature, in person gross and corpulent; his features were coarse, and his countenance heavy; if not wholly void of animation, it certainly was by no means expressive. The print of him, from a painting of Mr. Wright, is a good likeness. In his gait and dress he was rather clumsy and slovenly, and frequently walked with his tongue hanging out of his mouth.\"", "title": "Personal life" }, { "paragraph_id": 14, "text": "Darwin had been a Freemason throughout his life, in the Time Immemorial Lodge of Cannongate Kilwinning, No. 2, of Scotland. Later on, Sir Francis Darwin, one of his sons, was made a Mason in Tyrian Lodge, No. 253, at Derby, in 1807 or 1808. His son Reginald was made a Mason in Tyrian Lodge in 1804. Charles Darwin's name does not appear on the rolls of the Lodge but it is very possible that he, like Francis, was a Mason, as he held many Masonic beliefs such as Deism throughout his life.", "title": "Personal life" }, { "paragraph_id": 15, "text": "Darwin died suddenly on 18 April 1802, weeks after having moved to Breadsall Priory, just north of Derby. The Monthly Magazine of 1802, in its Biographical Memoirs of the Late Dr. Darwin, reports that \"during the last few years, Dr. Darwin was much subject to inflammation in his breast and lungs; he had a very serious attack of this disease in the course of the last Spring, from which, after repeated bleedings, by himself and a surgeon, he with great difficulty recovered.\"", "title": "Death" }, { "paragraph_id": 16, "text": "Darwin's death, the Biographical Memoirs continues, \"is variously accounted for: it is supposed to have been caused by the cold fit of an inflammatory fever. Dr. Fox, of Derby, considers the disease which occasioned it to have been angina pectoris; but Dr. Garlicke, of the same place, thinks this opinion not sufficiently well founded. 
Whatever was the disease, it is not improbable, surely, that the fatal event was hastened by the violent fit of passion with which he was seized in the morning.\"", "title": "Death" }, { "paragraph_id": 17, "text": "His body is buried in All Saints' Church, Breadsall.", "title": "Death" }, { "paragraph_id": 18, "text": "Erasmus Darwin is commemorated on one of the Moonstones, a series of monuments in Birmingham.", "title": "Death" }, { "paragraph_id": 19, "text": "Darwin formed 'A Botanical Society, at Lichfield' almost always incorrectly named as the Lichfield Botanical Society (despite the name, composed of only three men, Erasmus Darwin, Sir Brooke Boothby and Mr John Jackson, proctor of Lichfield Cathedral) to translate the works of the Swedish botanist Carl Linnaeus from Latin into English. This took seven years. The result was two publications: A System of Vegetables between 1783 and 1785, and The Families of Plants in 1787. In these volumes, Darwin coined many of the English names of plants that we use today.", "title": "Writings" }, { "paragraph_id": 20, "text": "Darwin then wrote The Loves of the Plants, a long poem, which was a popular rendering of Linnaeus' works. Darwin also wrote Economy of Vegetation, and together the two were published as The Botanic Garden. Among other writers he influenced were Anna Seward and Maria Jacson.", "title": "Writings" }, { "paragraph_id": 21, "text": "Darwin's most important scientific work, Zoonomia (1794–1796), contains a system of pathology and a chapter on 'Generation'. In the latter, he anticipated some of the views of Jean-Baptiste Lamarck, which foreshadowed the modern theory of evolution. Erasmus Darwin's works were read and commented on by his grandson Charles Darwin the naturalist. Erasmus Darwin based his theories on David Hartley's psychological theory of associationism. The essence of his views is contained in the following passage, which he follows up with the conclusion that one and the same kind of living filament is and has been the cause of all organic life:", "title": "Writings" }, { "paragraph_id": 22, "text": "Would it be too bold to imagine, that in the great length of time, since the earth began to exist, perhaps millions of ages before the commencement of the history of mankind, would it be too bold to imagine, that all warm-blooded animals have arisen from one living filament, which THE GREAT FIRST CAUSE endued with animality, with the power of acquiring new parts, attended with new propensities, directed by irritations, sensations, volitions, and associations; and thus possessing the faculty of continuing to improve by its own inherent activity, and of delivering down those improvements by generation to its posterity, world without end!", "title": "Writings" }, { "paragraph_id": 23, "text": "Erasmus Darwin also anticipated survival of the fittest in Zoönomia mainly when writing about the \"three great objects of desire\" for every organism: \"lust, hunger, and security.\" A similar \"survival of the fittest\" view in Zoönomia is Erasmus' view on how a species \"should\" propagate itself. Erasmus' idea was that \"the strongest and most active animal should propagate the species, which should thence become improved\". Today, this is called the theory of survival of the fittest. His grandson Charles Darwin posited the different and fuller theory of natural selection.
Charles' theory was that natural selection is the inheritance of changed genetic characteristics that are better adaptations to the environment; these are not necessarily based in \"strength\" and \"activity\", which themselves ironically can lead to the overpopulation that results in natural selection yielding nonsurvivors of genetic traits.", "title": "Writings" }, { "paragraph_id": 24, "text": "Erasmus Darwin was familiar with the earlier proto-evolutionary thinking of James Burnett, Lord Monboddo, and cited him in his 1803 work Temple of Nature.", "title": "Writings" }, { "paragraph_id": 25, "text": "Erasmus Darwin offered the first glimpse of his theory of evolution, obliquely, in a question at the end of a long footnote to his popular poem The Loves of the Plants (1789), which was republished throughout the 1790s in several editions as The Botanic Garden. His poetic concept was to anthropomorphise the stamen (male) and pistil (female) sexual organs, as bride and groom. In this stanza on the flower Curcuma (also Flax and Turmeric) the \"youths\" are infertile, and he devotes the footnote to other examples of neutered organs in flowers, insect castes, and finally associates this more broadly with many popular and well-known cases of vestigial organs (male nipples, the third and fourth wings of flies, etc.)", "title": "Writings" }, { "paragraph_id": 26, "text": "Woo'd with long care, CURCUMA cold and shy Meets her fond husband with averted eye: Four beardless youths the obdurate beauty move With soft attentions of Platonic love.", "title": "Writings" }, { "paragraph_id": 27, "text": "Darwin's final long poem, The Temple of Nature was published posthumously in 1803. The poem was originally titled The Origin of Society. It is considered his best poetic work. It centres on his own conception of evolution. The poem traces the progression of life from micro-organisms to civilised society. The poem contains a passage that describes the struggle for existence.", "title": "Writings" }, { "paragraph_id": 28, "text": "His poetry was admired by Wordsworth, while Coleridge was intensely critical, writing, \"I absolutely nauseate Darwin's poem\". It often made reference to his interests in science; for example botany and steam engines.", "title": "Writings" }, { "paragraph_id": 29, "text": "The last two leaves of Darwin's A plan for the conduct of female education in boarding schools (1797) contain a book list, an apology for the work, and an advert for \"Miss Parkers School\".", "title": "Writings" }, { "paragraph_id": 30, "text": "The school advertised on the last page is the one he set up in Ashbourne, Derbyshire, for his two illegitimate children, Susanna and Mary.", "title": "Writings" }, { "paragraph_id": 31, "text": "Darwin regretted that a good education had not been generally available to women in Britain in his time, and drew on the ideas of Locke, Rousseau, and Genlis in organising his thoughts. Addressing the education of middle-class girls, Darwin argued that amorous romance novels were inappropriate and that they should seek simplicity in dress. He contends that young women should be educated in schools, rather than privately at home, and learn appropriate subjects. These subjects include physiognomy, physical exercise, botany, chemistry, mineralogy, and experimental philosophy. They should familiarise themselves with arts and manufactures through visits to sites like Coalbrookdale, and Wedgwood's potteries; they should learn how to handle money, and study modern languages. 
Darwin's educational philosophy took the view that men and women should have different capabilities, skills, interests, and spheres of action, where the woman's education was designed to support and serve male accomplishment and financial reward, and to relieve him of daily responsibility for children and the chores of life. In the context of the times, this program may be read as a modernising influence in the sense that the woman was at least to learn about the \"man's world\", although not be allowed to participate in it. The text was written seven years after A Vindication of the Rights of Woman by Mary Wollstonecraft, which has the central argument that women should be educated in a rational manner to give them the opportunity to contribute to society.", "title": "Writings" }, { "paragraph_id": 32, "text": "Some women of Darwin's era were receiving more substantial education and participating in the broader world. An example is Susanna Wright, who was raised in Lancashire and became an American colonist associated with the Midlands Enlightenment. It is not known whether Darwin and Wright knew each other, although they definitely knew many people in common. Other women who received substantial education and who participated in the broader world (albeit sometimes anonymously) whom Darwin definitely knew were Maria Jacson and Anna Seward.", "title": "Writings" }, { "paragraph_id": 33, "text": "These dates indicate the year in which Darwin became friends with these people, who, in turn, became members of the Lunar Society. The Lunar Society existed from 1765 to 1813.", "title": "Lunar Society" }, { "paragraph_id": 34, "text": "Before 1765:", "title": "Lunar Society" }, { "paragraph_id": 35, "text": "After 1765:", "title": "Lunar Society" }, { "paragraph_id": 36, "text": "Darwin also established a lifelong friendship with Benjamin Franklin, who shared Darwin's support for the American and French revolutions. The Lunar Society was instrumental as an intellectual driving force behind England's Industrial Revolution.", "title": "Lunar Society" }, { "paragraph_id": 37, "text": "The members of the Lunar Society, and especially Darwin, opposed the slave trade. He attacked it in The Botanic Garden (1789–1791), and in The Loves of Plants (1789), The Economy of Vegetation (1791), and the Phytologia (1800).", "title": "Lunar Society" }, { "paragraph_id": 38, "text": "In 1761, Darwin was elected a fellow of the Royal Society.", "title": "Other activities" }, { "paragraph_id": 39, "text": "In addition to the Lunar Society, Erasmus Darwin belonged to the influential Derby Philosophical Society, as did his brother-in-law Samuel Fox (see family tree below). He experimented with the use of air and gases to alleviate infections and cancers in patients. A Pneumatic Institution was established at Clifton in 1799 for clinically testing these ideas. He conducted research into the formation of clouds, on which he published in 1788. He also inspired Robert Weldon's Somerset Coal Canal caisson lock.", "title": "Other activities" }, { "paragraph_id": 40, "text": "In 1792, Darwin was elected as a member to the American Philosophical Society in Philadelphia.", "title": "Other activities" }, { "paragraph_id": 41, "text": "Percy Bysshe Shelley specifically mentions Darwin in the first sentence of the 1818 Preface to Frankenstein to support his contention that the creation of life is possible. 
His wife Mary Shelley wrote in her introduction to the 1831 edition of Frankenstein that she overheard her husband and Lord Byron talk about unspecified \"experiments of Dr. Darwin\", a conversation that led to the idea for the novel.", "title": "Other activities" }, { "paragraph_id": 42, "text": "Contemporary literature dates the cosmological theories of the Big Bang and Big Crunch to the 19th and 20th centuries. However, Erasmus Darwin had speculated on these sorts of events in The Botanic Garden, A Poem in Two Parts: Part 1, The Economy of Vegetation, 1791:", "title": "Other activities" }, { "paragraph_id": 43, "text": "Roll on, ye Stars! exult in youthful prime, Mark with bright curves the printless steps of Time; Near and more near your beamy cars approach, And lessening orbs on lessening orbs encroach; — Flowers of the sky! ye too to age must yield, Frail as your silken sisters of the field. Star after star from Heaven's high arch shall rush, Suns sink on suns, and systems, systems crush, Headlong, extinct, to one dark centre fall, And death and night and chaos mingle all: — Till o'er the wreck, emerging from the storm, Immortal Nature lifts her changeful form, Mounts from her funeral pyre on wings of flame, And soars and shines, another and the same!", "title": "Other activities" }, { "paragraph_id": 44, "text": "Darwin was the inventor of several devices, though he did not patent any: he believed this would damage his reputation as a doctor. He encouraged his friends to patent their own modifications of his designs.", "title": "Other activities" }, { "paragraph_id": 45, "text": "In notes dating to 1779, Darwin made a sketch of a simple hydrogen-oxygen rocket engine, with gas tanks connected by plumbing and pumps to an elongated combustion chamber and expansion nozzle, a concept not seen again until a century later.", "title": "Other activities" }, { "paragraph_id": 46, "text": "", "title": "Family tree" }, { "paragraph_id": 47, "text": "Erasmus Darwin House, his home in Lichfield, Staffordshire, is a museum dedicated to him and his life's work. A secondary school at Burntwood, near Lichfield, was renamed Erasmus Darwin Academy in 2011.", "title": "Commemoration" }, { "paragraph_id": 48, "text": "A science building on the Clifton campus of Nottingham Trent University is named after him.", "title": "Commemoration" } ]
Erasmus Robert Darwin was an English physician. One of the key thinkers of the Midlands Enlightenment, he was also a natural philosopher, physiologist, slave-trade abolitionist, inventor, and poet. His poems included much natural history, including a statement of evolution and the relatedness of all forms of life. He was a member of the Darwin–Wedgwood family, which includes his grandsons Charles Darwin and Francis Galton. Darwin was a founding member of the Lunar Society of Birmingham, a discussion group of pioneering industrialists and natural philosophers. He turned down an invitation from George III to become Physician to the King.
2001-04-01T15:22:10Z
2023-10-07T09:09:55Z
[ "Template:Gutenberg author", "Template:Derby Museum", "Template:Authority control", "Template:Use British English", "Template:Harv", "Template:Cite ODNB", "Template:ISBN", "Template:About", "Template:Reflist", "Template:Wikisource author", "Template:Short description", "Template:Cite web", "Template:Refend", "Template:Wikiquote", "Template:Acad", "Template:Darwin", "Template:Cite book", "Template:Cite journal", "Template:Refbegin", "Template:Internet Archive author", "Template:Commons", "Template:Use dmy dates", "Template:Infobox person", "Template:Circa", "Template:Cite news", "Template:Convert", "Template:Poemquote", "Template:Webarchive", "Template:Post-nominals", "Template:Spaced ndash", "Template:Sfn", "Template:Librivox author" ]
https://en.wikipedia.org/wiki/Erasmus_Darwin
9,300
Ediacaran
The Ediacaran Period (/ˌiːdiˈækərən, ˌɛdi-/ EE-dee-AK-ər-ən, ED-ee-) is a geological period of the Neoproterozoic Era that spans 96 million years from the end of the Cryogenian Period at 635 Mya, to the beginning of the Cambrian Period at 538.8 Mya. It is the last period of the Proterozoic Eon as well as the so-called Precambrian "supereon", before the beginning of the subsequent Cambrian Period marks the start of the Phanerozoic Eon, where recognizable fossil evidence of life becomes common. The Ediacaran Period is named after the Ediacara Hills of South Australia, where trace fossils of a diverse community of previously unrecognized lifeforms (later named the Ediacaran biota) were first discovered by geologist Reg Sprigg in 1946. Its status as an official geological period was ratified in 2004 by the International Union of Geological Sciences (IUGS), making it the first new geological period declared in 120 years. Although the period takes its name from the Ediacara Hills of the Nilpena Ediacara National Park, the type section is actually located in the bed of the Enorama Creek within the Brachina Gorge of the Ikara-Flinders Ranges National Park, at 31°19′53.8″S 138°38′0.1″E, approximately 55 km (34 mi) southeast of the Ediacara Hills fossil site. The Ediacaran marks the first widespread appearance of complex multicellular fauna, known as the Avalon Explosion, following the end of the Snowball Earth glacial age; it is represented by now-extinct, relatively simple animal phyla such as Proarticulata (bilaterians with simple articulation, e.g. Dickinsonia and Spriggina), Petalonamae (sea pen-like animals, e.g. Charnia), Aspidella (radial-shaped animals, e.g. Cyclomedusa) and Trilobozoa (animals with tri-radial symmetry, e.g. Tribrachidium). Most of those organisms appeared during or after the Avalon explosion event 575 million years ago and died out during an End-Ediacaran extinction event 539 million years ago. Forerunners of some modern phyla of animals also appeared during this period, including cnidarians and early bilaterians such as Xenacoelomorpha, as well as the mollusc-like Kimberella. Fossilized organisms with shells or endoskeletons were yet to evolve, and would not appear until the superseding Cambrian Period of the Phanerozoic Eon. The supercontinent Pannotia formed and broke apart by the end of the period. The Ediacaran also witnessed several glaciation events, such as the Gaskiers and Baykonurian glaciations. The Shuram excursion also occurred during this period, but its glacial origin is unlikely. The Ediacaran Period overlaps but is shorter than the Vendian Period (650 to 543 million years ago), a name proposed earlier, in 1952, by Russian geologist and paleontologist Boris Sokolov. The Vendian concept was formed stratigraphically top-down, and the lower boundary of the Cambrian became the upper boundary of the Vendian. Paleontological substantiation of this boundary was worked out separately for the siliciclastic basin (base of the Baltic Stage of the Eastern European Platform) and for the carbonate basin (base of the Tommotian stage of the Siberian Platform). The lower boundary of the Vendian was suggested to be defined at the base of the Varanger (Laplandian) tillites. The Vendian in its type area consists of large subdivisions such as the Laplandian, Redkino, Kotlin and Rovno regional stages with the globally traceable subdivisions and their boundaries, including its lower one. 
The Redkino, Kotlin and Rovno regional stages have been substantiated in the type area of the Vendian on the basis of the abundant organic-walled microfossils, megascopic algae, metazoan body fossils and ichnofossils. The lower boundary of the Vendian could have a biostratigraphic substantiation as well, taking into consideration the worldwide occurrence of the Pertatataka assemblage of giant acanthomorph acritarchs. The Ediacaran Period (c. 635–538.8 Mya) represents the time from the end of the global Marinoan glaciation to the first appearance worldwide of somewhat complicated trace fossils (Treptichnus pedum (Seilacher, 1955)). Although the Ediacaran Period does contain soft-bodied fossils, it is unusual in comparison to later periods because its beginning is not defined by a change in the fossil record. Rather, the beginning is defined at the base of a chemically distinctive carbonate layer that is referred to as a "cap carbonate", because it caps glacial deposits. This bed is characterized by an unusual depletion of ¹³C that indicates a sudden climatic change at the end of the Marinoan ice age. The global boundary stratotype section (GSSP) of the lower boundary of the Ediacaran is at the base of the cap carbonate (Nuccaleena Formation), immediately above the Elatina diamictite in the Enorama Creek section, Brachina Gorge, Flinders Ranges, South Australia. The GSSP of the upper boundary of the Ediacaran is the lower boundary of the Cambrian on the SE coast of Newfoundland, approved by the International Commission on Stratigraphy as a preferred alternative to the base of the Tommotian Stage in Siberia, and was selected on the basis of the ichnofossil Treptichnus pedum (Seilacher, 1955). In the history of stratigraphy it was the first case of the use of bioturbations for the definition of a system boundary. Nevertheless, the definitions of the lower and upper boundaries of the Ediacaran on the basis of chemostratigraphy and ichnofossils are disputable. Cap carbonates generally have a restricted geographic distribution (due to specific conditions of their precipitation), and usually siliciclastic sediments laterally replace the cap carbonates within a rather short distance; moreover, cap carbonates do not occur above every tillite elsewhere in the world. The C-isotope chemostratigraphic characteristics obtained for contemporaneous cap carbonates in different parts of the world may vary within a wide range owing to different degrees of secondary alteration of carbonates, dissimilar criteria used for selection of the least altered samples, and, as far as the C-isotope data are concerned, due to primary lateral variations of δ¹³Ccarb in the upper layer of the ocean. Furthermore, Oman presents in its stratigraphic record a large negative carbon isotope excursion within the Shuram Formation that is clearly away from any glacial evidence, strongly questioning the systematic association of negative δ¹³Ccarb excursions and glacial events. Also, the Shuram excursion is prolonged and is estimated to last for ~9.0 Myrs. As to Treptichnus pedum, a reference ichnofossil for the lower boundary of the Cambrian, its usage for the stratigraphic detection of this boundary is always risky, because of the occurrence of very similar trace fossils belonging to the Treptichnids group well below the level of T. pedum in Namibia, Spain and Newfoundland, and possibly in the western United States. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain. 
The Ediacaran Period is not yet formally subdivided, but a proposed scheme recognises an Upper Ediacaran whose base corresponds with the Gaskiers glaciation, comprising a Terminal Ediacaran Stage starting around 550 million years ago and a preceding stage beginning around 575 Ma with the earliest widespread Ediacaran biota fossils; the proposed schemes differ on whether the lower strata should be divided into an Early and a Middle Ediacaran, because it is not clear whether the Shuram excursion (which would divide the Early and Middle) is a separate event from the Gaskiers, or whether the two events are correlated. The dating of the type section of the Ediacaran Period in South Australia has proven uncertain due to the lack of overlying igneous material. Therefore, the age range of 635 to 538.8 million years is based on correlations to other countries where dating has been possible. The base age of approximately 635 million years is based on U–Pb (uranium–lead) and Re–Os (rhenium–osmium) dating from Africa, China, North America, and Tasmania. The fossil record from the Ediacaran Period is sparse, as more easily fossilized hard-shelled animals had yet to evolve. The Ediacaran biota include the oldest definite multicellular organisms (with specialized tissues), the most common types of which resemble segmented worms, fronds, disks, or immobile bags. Auroralumina was a cnidarian. Most members of the Ediacaran biota bear little resemblance to modern lifeforms, and their relationship even with the immediately following lifeforms of the Cambrian explosion is rather difficult to interpret. More than 100 genera have been described, and well-known forms include Arkarua, Charnia, Dickinsonia, Ediacaria, Marywadea, Cephalonega, Pteridinium, and Yorgia. However, despite the overall enigmatic nature of most Ediacaran organisms, some fossils identifiable as hard-shelled agglutinated foraminifera (which are not classified as animals) are known from latest Ediacaran sediments of western Siberia. Sponges recognisable as such also lived during the Ediacaran. Four different biotic intervals are known in the Ediacaran, each being characterised by the prominence of a unique ecology and faunal assemblage. The first spanned from 635 to around 575 Ma and was dominated by acritarchs known as large ornamented Ediacaran microfossils. The second spanned from around 575 to 560 Ma and was characterised by the Avalon biota. The third spanned from 560 to 550 Ma; its biota has been dubbed the White Sea biota due to many fossils from this time being found along the coasts of the White Sea. The fourth lasted from 550 to 539 Ma and is known as the interval of the Nama biotic assemblage. There is evidence for a mass extinction during this period from early animals changing the environment, dating to the same time as the transition between the White Sea and the Nama-type biotas. Alternatively, this mass extinction has also been theorised to have been the result of an anoxic event. The relative proximity of the Moon at this time meant that tides were stronger and more rapid than they are now. The day was 21.9 ± 0.4 hours, and there were 13.1 ± 0.1 synodic months/year and 400 ± 7 solar days/year. A few English-language documentaries have featured the Ediacaran Period and biota:
[ { "paragraph_id": 0, "text": "The Ediacaran Period ( /ˌiːdiˈækərən, ˌɛdi-/ EE-dee-AK-ər-ən, ED-ee-) is a geological period of the Neoproterozoic Era that spans 96 million years from the end of the Cryogenian Period at 635 Mya, to the beginning of the Cambrian Period at 538.8 Mya. It is the last period of the Proterozoic Eon as well as the so-called Precambrian \"supereon\", before the beginning of the subsequent Cambrian Period marks the start of the Phanerozoic Eon, where recognizable fossil evidence of life becomes common.", "title": "" }, { "paragraph_id": 1, "text": "The Ediacaran Period is named after the Ediacara Hills of South Australia, where trace fossils of a diverse community of previously unrecognized lifeforms (later named the Ediacaran biota) were first discovered by geologist Reg Sprigg in 1946. Its status as an official geological period was ratified in 2004 by the International Union of Geological Sciences (IUGS), making it the first new geological period declared in 120 years. Although the period took namesake from the Ediacara Hills of the Nilpena Ediacara National Park, the type section is actually located in the bed of the Enorama Creek within the Brachina Gorge of the Ikara-Flinders Ranges National Park, at 31°19′53.8″S 138°38′0.1″E / 31.331611°S 138.633361°E / -31.331611; 138.633361, approximately 55 km (34 mi) southeast of the Ediacara Hills fossil site.", "title": "" }, { "paragraph_id": 2, "text": "The Ediacaran marks the first widespread appearance of complex multicellular fauna following the end of the Snowball Earth glacial age, known as the Avalon Explosion, which is represented by now-extinct, relatively simple animal phyla such as Proarticulata (bilaterians with simple articulation, e.g. Dickinsonia and Spriggina), Petalonamae (sea pen-like animals, e.g. Charnia), Aspidella (radial-shaped animals, e.g. Cyclomedusa) and Trilobozoa (animals with tri-radial symmetry, e.g. Tribrachidium). Most of those organisms appeared during or after the Avalon explosion event 575 million years ago and died out during an End-Ediacaran extinction event 539 million years ago. Forerunners of some modern phyla of animals also appeared during this period, including cnidarians and early bilaterians such as Xenacoelomorpha, as well as Mollusc-like Kimberella. Fossilized organisms with shells or endoskeletons were yet to evolve, and would not appear until the superseding Cambrian Period of the Phanerozoic Eon.", "title": "" }, { "paragraph_id": 3, "text": "The supercontinent Pannotia formed and broke apart by the end of the period. The Ediacaran also witnessed several glaciation events, such as the Gaskiers and Baykonurian glaciations. The Shuram excursion also occurred during this period, but its glacial origin is unlikely.", "title": "" }, { "paragraph_id": 4, "text": "The Ediacaran Period overlaps but is shorter than the Vendian Period (650 to 543 million years ago), a name that was earlier, in 1952, proposed by Russian geologist and paleontologist Boris Sokolov. The Vendian concept was formed stratigraphically top-down, and the lower boundary of the Cambrian became the upper boundary of the Vendian.", "title": "Ediacaran and Vendian" }, { "paragraph_id": 5, "text": "Paleontological substantiation of this boundary was worked out separately for the siliciclastic basin (base of the Baltic Stage of the Eastern European Platform) and for the carbonate basin (base of the Tommotian stage of the Siberian Platform). 
The lower boundary of the Vendian was suggested to be defined at the base of the Varanger (Laplandian) tillites.", "title": "Ediacaran and Vendian" }, { "paragraph_id": 6, "text": "The Vendian in its type area consists of large subdivisions such as the Laplandian, Redkino, Kotlin and Rovno regional stages with the globally traceable subdivisions and their boundaries, including its lower one.", "title": "Ediacaran and Vendian" }, { "paragraph_id": 7, "text": "The Redkino, Kotlin and Rovno regional stages have been substantiated in the type area of the Vendian on the basis of the abundant organic-walled microfossils, megascopic algae, metazoan body fossils and ichnofossils.", "title": "Ediacaran and Vendian" }, { "paragraph_id": 8, "text": "The lower boundary of the Vendian could have a biostratigraphic substantiation as well, taking into consideration the worldwide occurrence of the Pertatataka assemblage of giant acanthomorph acritarchs.", "title": "Ediacaran and Vendian" }, { "paragraph_id": 9, "text": "The Ediacaran Period (c. 635–538.8 Mya) represents the time from the end of the global Marinoan glaciation to the first appearance worldwide of somewhat complicated trace fossils (Treptichnus pedum (Seilacher, 1955)).", "title": "Upper and lower boundaries" }, { "paragraph_id": 10, "text": "Although the Ediacaran Period does contain soft-bodied fossils, it is unusual in comparison to later periods because its beginning is not defined by a change in the fossil record. Rather, the beginning is defined at the base of a chemically distinctive carbonate layer that is referred to as a \"cap carbonate\", because it caps glacial deposits.", "title": "Upper and lower boundaries" }, { "paragraph_id": 11, "text": "This bed is characterized by an unusual depletion of ¹³C that indicates a sudden climatic change at the end of the Marinoan ice age. The global boundary stratotype section (GSSP) of the lower boundary of the Ediacaran is at the base of the cap carbonate (Nuccaleena Formation), immediately above the Elatina diamictite in the Enorama Creek section, Brachina Gorge, Flinders Ranges, South Australia.", "title": "Upper and lower boundaries" }, { "paragraph_id": 12, "text": "The GSSP of the upper boundary of the Ediacaran is the lower boundary of the Cambrian on the SE coast of Newfoundland, approved by the International Commission on Stratigraphy as a preferred alternative to the base of the Tommotian Stage in Siberia, and was selected on the basis of the ichnofossil Treptichnus pedum (Seilacher, 1955). 
In the history of stratigraphy it was the first case of the use of bioturbations for the definition of a system boundary.", "title": "Upper and lower boundaries" }, { "paragraph_id": 13, "text": "Nevertheless, the definitions of the lower and upper boundaries of the Ediacaran on the basis of chemostratigraphy and ichnofossils are disputable.", "title": "Upper and lower boundaries" }, { "paragraph_id": 14, "text": "Cap carbonates generally have a restricted geographic distribution (due to specific conditions of their precipitation), and usually siliciclastic sediments laterally replace the cap carbonates within a rather short distance; moreover, cap carbonates do not occur above every tillite elsewhere in the world.", "title": "Upper and lower boundaries" }, { "paragraph_id": 15, "text": "The C-isotope chemostratigraphic characteristics obtained for contemporaneous cap carbonates in different parts of the world may vary within a wide range owing to different degrees of secondary alteration of carbonates, dissimilar criteria used for selection of the least altered samples, and, as far as the C-isotope data are concerned, due to primary lateral variations of δ¹³Ccarb in the upper layer of the ocean.", "title": "Upper and lower boundaries" }, { "paragraph_id": 16, "text": "Furthermore, Oman presents in its stratigraphic record a large negative carbon isotope excursion within the Shuram Formation that is clearly away from any glacial evidence, strongly questioning the systematic association of negative δ¹³Ccarb excursions and glacial events. Also, the Shuram excursion is prolonged and is estimated to last for ~9.0 Myrs.", "title": "Upper and lower boundaries" }, { "paragraph_id": 17, "text": "As to Treptichnus pedum, a reference ichnofossil for the lower boundary of the Cambrian, its usage for the stratigraphic detection of this boundary is always risky, because of the occurrence of very similar trace fossils belonging to the Treptichnids group well below the level of T. pedum in Namibia, Spain and Newfoundland, and possibly in the western United States. The stratigraphic range of T. pedum overlaps the range of the Ediacaran fossils in Namibia, and probably in Spain.", "title": "Upper and lower boundaries" }, { "paragraph_id": 18, "text": "The Ediacaran Period is not yet formally subdivided, but a proposed scheme recognises an Upper Ediacaran whose base corresponds with the Gaskiers glaciation, comprising a Terminal Ediacaran Stage starting around 550 million years ago and a preceding stage beginning around 575 Ma with the earliest widespread Ediacaran biota fossils; the proposed schemes differ on whether the lower strata should be divided into an Early and a Middle Ediacaran, because it is not clear whether the Shuram excursion (which would divide the Early and Middle) is a separate event from the Gaskiers, or whether the two events are correlated.", "title": "Subdivisions" }, { "paragraph_id": 19, "text": "The dating of the type section of the Ediacaran Period in South Australia has proven uncertain due to the lack of overlying igneous material. Therefore, the age range of 635 to 538.8 million years is based on correlations to other countries where dating has been possible. The base age of approximately 635 million years is based on U–Pb (uranium–lead) and Re–Os (rhenium–osmium) dating from Africa, China, North America, and Tasmania.", "title": "Absolute dating" }, { "paragraph_id": 20, "text": "The fossil record from the Ediacaran Period is sparse, as more easily fossilized hard-shelled animals had yet to evolve. 
The Ediacaran biota include the oldest definite multicellular organisms (with specialized tissues), the most common types of which resemble segmented worms, fronds, disks, or immobile bags. Auroralumina was a cnidarian.", "title": "Biota" }, { "paragraph_id": 21, "text": "Most members of the Ediacaran biota bear little resemblance to modern lifeforms, and their relationship even with the immediately following lifeforms of the Cambrian explosion is rather difficult to interpret. More than 100 genera have been described, and well-known forms include Arkarua, Charnia, Dickinsonia, Ediacaria, Marywadea, Cephalonega, Pteridinium, and Yorgia. However, despite the overall enigmatic nature of most Ediacaran organisms, some fossils identifiable as hard-shelled agglutinated foraminifera (which are not classified as animals) are known from latest Ediacaran sediments of western Siberia. Sponges recognisable as such also lived during the Ediacaran.", "title": "Biota" }, { "paragraph_id": 22, "text": "Four different biotic intervals are known in the Ediacaran, each being characterised by the prominence of a unique ecology and faunal assemblage. The first spanned from 635 to around 575 Ma and was dominated by acritarchs known as large ornamented Ediacaran microfossils. The second spanned from around 575 to 560 Ma and was characterised by the Avalon biota. The third spanned from 560 to 550 Ma; its biota has been dubbed the White Sea biota due to many fossils from this time being found along the coasts of the White Sea. The fourth lasted from 550 to 539 Ma and is known as the interval of the Nama biotic assemblage.", "title": "Biota" }, { "paragraph_id": 23, "text": "There is evidence for a mass extinction during this period from early animals changing the environment, dating to the same time as the transition between the White Sea and the Nama-type biotas. Alternatively, this mass extinction has also been theorised to have been the result of an anoxic event.", "title": "Biota" }, { "paragraph_id": 24, "text": "The relative proximity of the Moon at this time meant that tides were stronger and more rapid than they are now. The day was 21.9 ± 0.4 hours, and there were 13.1 ± 0.1 synodic months/year and 400 ± 7 solar days/year.", "title": "Astronomical factors" }, { "paragraph_id": 25, "text": "A few English-language documentaries have featured the Ediacaran Period and biota:", "title": "Documentaries" } ]
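The day-length figures quoted in the astronomical paragraph above admit a quick consistency check. As a back-of-the-envelope calculation (not from the source), multiplying the Ediacaran day length by the number of solar days per year should roughly reproduce the modern length of the year in hours, since tidal friction slows Earth's rotation without significantly changing the orbital period:

\[
21.9\,\tfrac{\mathrm{h}}{\mathrm{day}} \times 400\,\tfrac{\mathrm{days}}{\mathrm{yr}} \approx 8{,}760\ \mathrm{h}
\quad\text{versus}\quad
24\,\tfrac{\mathrm{h}}{\mathrm{day}} \times 365.25\,\tfrac{\mathrm{days}}{\mathrm{yr}} \approx 8{,}766\ \mathrm{h}\ \text{today.}
\]

The two totals agree to within the stated uncertainties (±0.4 h/day and ±7 days/yr), so the quoted day length and day count are mutually consistent with a year of essentially modern duration.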
The Ediacaran Period is a geological period of the Neoproterozoic Era that spans 96 million years from the end of the Cryogenian Period at 635 Mya, to the beginning of the Cambrian Period at 538.8 Mya. It is the last period of the Proterozoic Eon as well as the so-called Precambrian "supereon", before the beginning of the subsequent Cambrian Period marks the start of the Phanerozoic Eon, where recognizable fossil evidence of life becomes common. The Ediacaran Period is named after the Ediacara Hills of South Australia, where trace fossils of a diverse community of previously unrecognized lifeforms were first discovered by geologist Reg Sprigg in 1946. Its status as an official geological period was ratified in 2004 by the International Union of Geological Sciences (IUGS), making it the first new geological period declared in 120 years. Although the period takes its name from the Ediacara Hills of the Nilpena Ediacara National Park, the type section is actually located in the bed of the Enorama Creek within the Brachina Gorge of the Ikara-Flinders Ranges National Park, at 31°19′53.8″S 138°38′0.1″E, approximately 55 km (34 mi) southeast of the Ediacara Hills fossil site. The Ediacaran marks the first widespread appearance of complex multicellular fauna, known as the Avalon Explosion, following the end of the Snowball Earth glacial age; it is represented by now-extinct, relatively simple animal phyla such as Proarticulata, Petalonamae, Aspidella and Trilobozoa. Most of those organisms appeared during or after the Avalon explosion event 575 million years ago and died out during an End-Ediacaran extinction event 539 million years ago. Forerunners of some modern phyla of animals also appeared during this period, including cnidarians and early bilaterians such as Xenacoelomorpha, as well as the mollusc-like Kimberella. Fossilized organisms with shells or endoskeletons were yet to evolve, and would not appear until the superseding Cambrian Period of the Phanerozoic Eon. The supercontinent Pannotia formed and broke apart by the end of the period. The Ediacaran also witnessed several glaciation events, such as the Gaskiers and Baykonurian glaciations. The Shuram excursion also occurred during this period, but its glacial origin is unlikely.
2001-08-03T05:26:28Z
2023-12-02T10:12:44Z
[ "Template:Cvt", "Template:Vague", "Template:Reflist", "Template:Dictionary.com", "Template:Cite web", "Template:Cite journal", "Template:Use dmy dates", "Template:IPAc-en", "Template:Respell", "Template:See also", "Template:Ma", "Template:Cite news", "Template:Commons category", "Template:Clear", "Template:Infobox geologic timespan", "Template:Proterozoic footer", "Template:Coord", "Template:Main", "Template:Annotated link", "Template:Webarchive", "Template:Authority control", "Template:Short description", "Template:Citation", "Template:Cite book", "Template:Clarify" ]
https://en.wikipedia.org/wiki/Ediacaran
9,301
Erotica
Erotica is literature or art that deals substantively with subject matter that is erotic, sexually stimulating or sexually arousing. Some critics regard pornography as a type of erotica, but many consider it to be different. Erotic art may use any artistic form to depict erotic content, including painting, sculpture, drama, film or music. Erotic literature and erotic photography have become genres in their own right. Erotica also exists in a number of subgenres including gay, lesbian, women's, bondage, monster and tentacle erotica. The term erotica is derived from the feminine form of the ancient Greek adjective: ἐρωτικός (erōtikós), from ἔρως (érōs)—words used to indicate lust, and sexual love. Curiosa are curiosities or rarities, especially unusual or erotic books. In the antiquarian book trade, pornographic works are often listed under "curiosa", "erotica" or "facetiae". Erotica exists in many different forms, both modern and ancient. Erotic art dates back to the Paleolithic times, with cave paintings and carvings of female genitalia being a point of immense interest to prehistorians. Ancient Greek and Roman art depicted erotic acts or figures, often using phallic or erotic imagery to convey ideas of fertility. Modern depictions of erotic art are often intertwined with erotic photography, including boudoir photography, and erotic film. Discussions of modern erotic art are also often merged with discussions on pornography. More specifically, erotic photography found its mass-market roots in pornographic magazines. The most iconic of these magazines is Playboy, a men's magazine founded in the 1950s that helped to shape the modern Western perception on sex and sexuality in the media. Pornographic magazines could also include boudoir photography or pin-up models, though pin-up models are not definitively sexual by nature. Erotic film has evolved greatly with modern filmmaking capabilities, including developing a large subgenre of cartoon pornography, the most popular form of which is Japanese hentai. Erotic film is the form of erotica most often seen as interchangeable with pornography due to their similarities in form and function. Erotic literature also dates back to ancient times, though not quite as far. Arguably the most iconic erotic piece of literature, the Kama Sutra is a Sanskrit text largely describing and depicting ideas of sex, sexuality, love, and human emotion. Eroticism in ancient Greece and Rome was not contained to only visual art, as poets such as the Greek Sappho and the Roman Catullus and Ovid wrote erotic verse and lyrical poems. Modern erotic literature, often called 'smut', is quite popular, especially among women. A popular form of modern erotic literature is fan fiction, or fan-generated content about characters in a pre-existing media series or franchise. Stories on online websites like Archive of Our Own and FanFiction.Net account for a large percentage of modern erotic fan fiction literature. The topic of sex is often taboo in modern culture, especially in media. Censorship is an issue often faced by creators of erotic work, be it art, film, or literature. The legality of creating and publishing erotic works differs in different parts of the world, but it is not uncommon to see heavy regulations placed on the publication of erotic or pornographic media. The legality of cartoon pornography or animated erotic films is one of the most controversial aspects of erotic censorship. 
This is because of the gray area surrounding the portrayal of animated, fictional minors engaging in erotic or sexual acts. The legality of pornography with non-animated individuals is only slightly more definitive. Legal and moral issues regarding pornography and erotica can tie into arguments regarding the legalization or decriminalization of prostitution and sex work at large, a topic that is hotly debated. Pornography is often far less regulated than sex work and has fewer legal barriers to production, though it is still a morally controversial profession to some. The Obscene Publications Act of 1857 made the selling of "obscene" materials a statutory offense. This act has been criticized heavily, not just in retrospect, but at the time of enacting. Topics of erotic media have been brought to U.S. state and federal courts for centuries. Some notable cases include People v. Freeman, in which the state of California upheld that hiring actors to engage in sexual activity for the sake of creating erotic films did not constitute prostitution, and Miller v. California, in which the idea of erotic work providing serious artistic or literary value was introduced to the legal sphere. A majority of erotica centers women as the object of sexual desire, as demonstrated by the sharp rise in popularity of pornographic magazines centering women in the mid-1900s. Pornhub, one of the most popular porn-sharing sites, released data on the porn consumed by viewers in 2021. Lesbian porn held one of the top spots among the most searched-for genres, followed closely by MILF, or sexually attractive older women. All of the most popular pornstars on the site were women. In the 20th century, a cadre of female artists, authors, and other creatives began to create a new kind of erotica. Women's erotica exists to cater towards the sexual gratification of women consuming erotic material. Feminist erotic media often centers female pleasure instead of catering to the male gaze. Feminist erotic art had a boom in the mid-20th century, most iconically transforming the idea of the nude female figure from an object of sexual pleasure to a symbol for a woman's sexual liberation. Martha Edelheit was a pioneer of modern women's erotica, flipping the genre on its head by focusing her art on the nude male figure. It was not unusual for a man to be seen as an object of sexual desire in erotic media, but these portrayals were often found in gay pornography, and were often created or published by another man. Edelheit's work as a woman and as an artist was foundational for modern-day feminist erotic media. A distinction is often made between erotica and pornography (and the lesser-known genre of sexual entertainment, ribaldry), although some viewers may not distinguish between them. A key distinction, some have argued, is that pornography's objective is the graphic depiction of sexually explicit scenes. At the same time, erotica "seeks to tell a story that involves sexual themes", including a more plausible depiction of human sexuality than in pornography. Additionally, works considered degrading or exploitative tend to be classified by those who see them as such, as "porn" rather than as "erotica", and consequently pornography is often described as exploitative or degrading. Many countries have laws banning or at least regulating what is considered pornographic material, a situation that generally does not apply to erotica. 
For the anti-pornography activist Andrea Dworkin, "Erotica is simply high-class pornography; better produced, better conceived, better executed, better packaged, designed for a better class of consumer." Feminist writer Gloria Steinem distinguishes erotica from pornography, writing: "Erotica is as different from pornography as love is from rape, as dignity is from humiliation, as partnership is from slavery, as pleasure is from pain." Steinem's argument hinges on the distinction between reciprocity and domination, as she writes: "Blatant or subtle, pornography involves no equal power or mutuality. In fact, much of the tension and drama comes from the clear idea that one person is dominating the other."
[ { "paragraph_id": 0, "text": "Erotica is literature or art that deals substantively with subject matter that is erotic, sexually stimulating or sexually arousing. Some critics regard pornography as a type of erotica, but many consider it to be different. Erotic art may use any artistic form to depict erotic content, including painting, sculpture, drama, film or music. Erotic literature and erotic photography have become genres in their own right. Erotica also exists in a number of subgenres including gay, lesbian, women's, bondage, monster and tentacle erotica.", "title": "" }, { "paragraph_id": 1, "text": "The term erotica is derived from the feminine form of the ancient Greek adjective: ἐρωτικός (erōtikós), from ἔρως (érōs)—words used to indicate lust, and sexual love.", "title": "" }, { "paragraph_id": 2, "text": "Curiosa are curiosities or rarities, especially unusual or erotic books. In the antiquarian book trade, pornographic works are often listed under \"curiosa\", \"erotica\" or \"facetiae\".", "title": "" }, { "paragraph_id": 3, "text": "Erotica exists in many different forms, both modern and ancient. Erotic art dates back to the Paleolithic times, with cave paintings and carvings of female genitalia being a point of immense interest to prehistorians. Ancient Greek and Roman art depicted erotic acts or figures, often using phallic or erotic imagery to convey ideas of fertility. Modern depictions of erotic art are often intertwined with erotic photography, including boudoir photography, and erotic film. Discussions of modern erotic art are also often merged with discussions on pornography.", "title": "Forms of erotica" }, { "paragraph_id": 4, "text": "More specifically, erotic photography found its mass-market roots in pornographic magazines. The most iconic of these magazines is Playboy, a men's magazine founded in the 1950s that helped to shape the modern Western perception on sex and sexuality in the media. Pornographic magazines could also include boudoir photography or pin-up models, though pin-up models are not definitively sexual by nature.", "title": "Forms of erotica" }, { "paragraph_id": 5, "text": "Erotic film has evolved greatly with modern filmmaking capabilities, including developing a large subgenre of cartoon pornography, the most popular form of which is Japanese hentai. Erotic film is the form of erotica most often seen as interchangeable with pornography due to their similarities in form and function.", "title": "Forms of erotica" }, { "paragraph_id": 6, "text": "Erotic literature also dates back to ancient times, though not quite as far. Arguably the most iconic erotic piece of literature, the Kama Sutra is a Sanskrit text largely describing and depicting ideas of sex, sexuality, love, and human emotion. Eroticism in ancient Greece and Rome was not contained to only visual art, as poets such as the Greek Sappho and the Roman Catullus and Ovid wrote erotic verse and lyrical poems. Modern erotic literature, often called 'smut', is quite popular, especially among women. A popular form of modern erotic literature is fan fiction, or fan-generated content about characters in a pre-existing media series or franchise. Stories on online websites like Archive of Our Own and FanFiction.Net account for a large percentage of modern erotic fan fiction literature.", "title": "Forms of erotica" }, { "paragraph_id": 7, "text": "The topic of sex is often taboo in modern culture, especially in media. 
Censorship is an issue often faced by creators of erotic work, be it art, film, or literature. The legality of creating and publishing erotic works differs in different parts of the world, but it is not uncommon to see heavy regulations placed on the publication of erotic or pornographic media.", "title": "Views on Erotica" }, { "paragraph_id": 8, "text": "The legality of cartoon pornography or animated erotic films is one of the most controversial aspects of erotic censorship. This is because of the gray area surrounding the portrayal of animated, fictional minors engaging in erotic or sexual acts. The legality of pornography with non-animated individuals is only slightly more definitive. Legal and moral issues regarding pornography and erotica can tie into arguments regarding the legalization or decriminalization of prostitution and sex work at large, a topic that is hotly debated. Pornography is often far less regulated than sex work and has fewer legal barriers to production, though it is still a morally controversial profession to some.", "title": "Views on Erotica" }, { "paragraph_id": 9, "text": "The Obscene Publications Act of 1857 made the selling of \"obscene\" materials a statutory offense. This act has been criticized heavily, not just in retrospect, but at the time of enacting. Topics of erotic media have been brought to U.S. state and federal courts for centuries. Some notable cases include People v. Freeman, in which the state of California upheld that hiring actors to engage in sexual activity for the sake of creating erotic films did not constitute prostitution, and Miller v. California, in which the idea of erotic work providing serious artistic or literary value was introduced to the legal sphere.", "title": "Views on Erotica" }, { "paragraph_id": 10, "text": "A majority of erotica centers women as the object of sexual desire, as demonstrated by the sharp rise in popularity of pornographic magazines centering women in the mid-1900s. Pornhub, one of the most popular porn-sharing sites, released data on the porn consumed by viewers in 2021. Lesbian porn held one of the top spots among the most searched-for genres, followed closely by MILF, or sexually attractive older women. All of the most popular pornstars on the site were women. In the 20th century, a cadre of female artists, authors, and other creatives began to create a new kind of erotica.", "title": "Views on Erotica" }, { "paragraph_id": 11, "text": "Women's erotica exists to cater towards the sexual gratification of women consuming erotic material. Feminist erotic media often centers female pleasure instead of catering to the male gaze. Feminist erotic art had a boom in the mid-20th century, most iconically transforming the idea of the nude female figure from an object of sexual pleasure to a symbol for a woman's sexual liberation. Martha Edelheit was a pioneer of modern women's erotica, flipping the genre on its head by focusing her art on the nude male figure. It was not unusual for a man to be seen as an object of sexual desire in erotic media, but these portrayals were often found in gay pornography, and were often created or published by another man. Edelheit's work as a woman and as an artist was foundational for modern-day feminist erotic media.", "title": "Views on Erotica" }, { "paragraph_id": 12, "text": "A distinction is often made between erotica and pornography (and the lesser-known genre of sexual entertainment, ribaldry), although some viewers may not distinguish between them. 
A key distinction, some have argued, is that pornography's objective is the graphic depiction of sexually explicit scenes. At the same time, erotica \"seeks to tell a story that involves sexual themes\", including a more plausible depiction of human sexuality than in pornography. Additionally, works considered degrading or exploitative tend to be classified by those who see them as such, as \"porn\" rather than as \"erotica\", and consequently pornography is often described as exploitative or degrading. Many countries have laws banning or at least regulating what is considered pornographic material, a situation that generally does not apply to erotica.", "title": "Erotica and pornography" }, { "paragraph_id": 13, "text": "For the anti-pornography activist Andrea Dworkin, \"Erotica is simply high-class pornography; better produced, better conceived, better executed, better packaged, designed for a better class of consumer.\" Feminist writer Gloria Steinem distinguishes erotica from pornography, writing: \"Erotica is as different from pornography as love is from rape, as dignity is from humiliation, as partnership is from slavery, as pleasure is from pain.\" Steinem's argument hinges on the distinction between reciprocity and domination, as she writes: \"Blatant or subtle, pornography involves no equal power or mutuality. In fact, much of the tension and drama comes from the clear idea that one person is dominating the other.\"", "title": "Erotica and pornography" } ]
Erotica is literature or art that deals substantively with subject matter that is erotic, sexually stimulating or sexually arousing. Some critics regard pornography as a type of erotica, but many consider it to be different. Erotic art may use any artistic form to depict erotic content, including painting, sculpture, drama, film or music. Erotic literature and erotic photography have become genres in their own right. Erotica also exists in a number of subgenres including gay, lesbian, women's, bondage, monster and tentacle erotica. The term erotica is derived from the feminine form of the ancient Greek adjective: ἐρωτικός (erōtikós), from ἔρως (érōs)—words used to indicate lust, and sexual love. Curiosa are curiosities or rarities, especially unusual or erotic books. In the antiquarian book trade, pornographic works are often listed under "curiosa", "erotica" or "facetiae".
2001-09-26T19:56:00Z
2023-12-14T19:46:41Z
[ "Template:Other uses", "Template:Div col", "Template:Reflist", "Template:Cite web", "Template:Cite magazine", "Template:Commons category", "Template:Sex", "Template:Citation needed", "Template:Cite news", "Template:Short description", "Template:Lang", "Template:Portal", "Template:Div col end", "Template:Cite journal", "Template:Human sexuality", "Template:Cite book", "Template:Authority control" ]
https://en.wikipedia.org/wiki/Erotica
9,302
Existence
Existence is the state of being real or participating in reality. The terms "being", "reality", and "actuality" are often used as close synonyms. Existence contrasts with nonexistence, nothingness, and nonbeing. A common distinction is between the existence of an entity and its essence, which refers to the entity's nature or essential qualities. The main philosophical discipline studying existence is called ontology. The orthodox view is that existence is a second-order property or a property of properties. According to this view, to say that a thing exists means that its properties are instantiated. A different view holds that existence is a first-order property or a property of individuals. This means that existence has the same ontological status as other properties of individuals, like color and shape. Meinongians accept this idea and hold that not all individuals have this property: they state that there are some individuals that do not exist. This view is rejected by universalists, who see existence as a universal property of every individual. Various types of existence are discussed in the academic literature. Singular existence is the existence of individual entities while general existence refers to the existence of concepts or universals. Other distinctions are between abstract and concrete existence, between possible, contingent, and necessary existence, and between physical and mental existence. A closely related issue is whether different types of entities exist in different ways or to different degrees. A key question in ontology is whether there is a reason for existence in general or why anything at all exists. The concept of existence is relevant to various fields, including logic, epistemology, philosophy of mind, philosophy of language, and existentialism. Existence is the state of being real. To exist means to have being or to participate in reality. Existence is what sets real entities apart from imaginary ones. It can refer both to individual entities and to the totality of reality. The word "existence" entered the English language in the late 14th century from Old French. It has its roots in the medieval Latin term ex(s)istere, which means to stand forth, to appear, and to arise. Existence is studied by the subdiscipline of metaphysics known as ontology. The terms "being", "reality", and "actuality" are closely related to existence. They are usually used as synonyms of "existence" but their meanings as technical terms may come apart. According to the metaphysician Alexius Meinong, for example, all entities have being but not all have existence. He argues that merely possible objects, like Santa Claus, have being but lack existence. Ontologist Takashi Yagisawa contrasts existence with reality. He sees "reality" as the more fundamental term since it characterizes all entities equally. He defines existence as a relative term that connects an entity to the world that it inhabits. According to Gottlob Frege, actuality is narrower than existence. He holds that actual entities can produce and undergo changes. He states that some existing entities are non-actual, like numbers and sets. Existence contrasts with nonexistence, which refers to a lack of reality. It is controversial whether objects can be divided into existent and nonexistent objects. This distinction is sometimes used to explain how it is possible to think of fictional objects, like dragons and unicorns. But the concept of nonexistent objects is not generally accepted. 
Closely related contrasting terms are nothingness and nonbeing. Another contrast is between existence and essence. Essence refers to the intrinsic nature or defining qualities of an entity. The essence of something determines what kind of entity it is and how it differs from other kinds of entities. Essence corresponds to what an entity is while existence corresponds to the fact that it is. For instance, it is possible to understand what an object is and grasp its nature even if one does not know whether this object exists. Some philosophers, like Edmund Husserl and Quentin Boyce Gibson, hold that existence is an elementary concept. This means that it cannot be defined in other terms without involving circularity. This would imply that it may be difficult or impossible to characterize existence or to talk about its nature in a non-trivial manner. A closely related issue concerns the distinction between thin and thick concepts of existence. Thin concepts understand existence as a logical property that every existing thing shares. It does not include any substantial content about the metaphysical implications of having existence. An example of a thin concept of existence is to state that existence is the same as the logical property of self-identity. Thick concepts of existence encompass a metaphysical analysis of what it means that something exists and what essential features existence implies. For example, George Berkeley's claim that esse est percipi presents a thick concept of existence. It can be translated as "to be is to be perceived" and highlights the mental nature of all existence. Some philosophers emphasize that there is a difference between the entities that exist and existence itself. A similar distinction plays a central role in the philosophy of Martin Heidegger, who calls it the ontological difference and contrasts the individual beings that exist with their being or the horizon of meaning of their existence. Theories of the nature of existence aim to explain what it means for something to exist. The central dispute regarding the nature of existence is whether it should be understood as a property of individuals. The two main theories of existence are first-order theories and second-order theories. First-order theories understand existence as a property of individuals. Some first-order theories see it as a property of all individuals while others hold that there are some individuals that do not exist. Second-order theories hold that existence is a second-order property, that is, a property of properties. A central challenge for the different theories of the nature of existence is to understand how it is possible to coherently deny the existence of something. An example is the sentence "Santa Claus does not exist". One difficulty consists in explaining how the name "Santa Claus" can be meaningful even though there is no Santa Claus. Second-order theories are often seen as the orthodox position. They understand existence as a second-order property rather than a first-order property. For instance, the Empire State Building is an individual object and being 443.2 meters tall is a first-order property of it. Being instantiated is a property of being 443.2 meters tall and therefore a second-order property. According to second-order theories, to talk about existence is to talk about which properties have instances. For example, this view states that the sentence "God exists" does not claim that God has the property of existing. Instead, it means "Godhood is instantiated". 
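The contrast drawn in the preceding paragraph can be written out in predicate logic. The following formalization is a standard textbook-style sketch rather than any particular author's notation; the symbols E (a hypothetical first-order existence predicate), g (an individual constant for God), and God(x) (the property of being God) are ad hoc choices for illustration:

\[
\text{first-order reading: } E(g) \qquad\qquad \text{second-order reading: } \exists x\, \mathrm{God}(x)
\]

On the second-order reading, a negative existential such as "Santa Claus does not exist" likewise becomes a claim about instantiation, \( \neg\exists x\, \mathrm{SantaClaus}(x) \), rather than the denial of a property to an individual; the paragraphs below spell out how this quantifier analysis handles affirmative, negative, and singular existential sentences.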
A key motivation of second-order theories is that existence is in important ways different from regular properties like being a building and being 443.2 meters tall: regular properties express what an object is like but existence does not. According to this view, existence is more fundamental than regular properties since, without it, objects cannot instantiate any properties. Second-order theorists usually hold that quantifiers rather than predicates express existence. Quantifiers are terms that talk about the quantity of objects that have certain properties. Existential quantifiers express that there is at least one object. Examples are expressions like "some" and "there exists", as in "some cows eat grass" and "there exists an even prime number". In this regard, existence is closely related to counting since to claim that something exists is to claim that the corresponding concept has one or more instances. Second-order views imply that a sentence like "egg-laying mammals exist" is misleading since the word "exist" is used as a predicate in them. They hold instead that their true logical form is better expressed in reformulations like "there exist entities that are egg-laying mammals". This way, existence has the role of a quantifier while egg-laying mammals is the predicate. Quantifier constructions can also be used to express negative existential statements. For instance, the sentence "talking tigers do not exist" can be expressed as "it is not the case that there exist talking tigers". Many ontologists accept that second-order theories provide a correct analysis of many types of existential sentences. However, it is controversial whether it is correct for all cases. One difficulty is caused by so-called negative singular existentials. Negative singular existentials are statements that deny that a particular object exists. An example is the sentence "Ronald McDonald does not exist". Singular terms, like Ronald McDonald, seem to refer to individuals. This poses a difficulty since negative singular existentials deny that this individual exists. This makes it unclear how the singular term can refer to the individual in the first place. One influential solution to this problem was proposed by Bertrand Russell. He holds that singular terms do not directly refer to individuals but are instead descriptions of individuals. Positive singular existentials affirm that an object matching the descriptions exists while negative singular existentials deny that an object matching the descriptions exists. According to this view, the sentence "Ronald McDonald does not exist" expresses the idea that "it is not the case that there is a unique happy hamburger clown". First-order theories claim that existence is a property of individuals. They are less widely accepted than second-order theories but also have some influential proponents. There are two types of first-order theories. According to Meinongianism, existence is a property of some but not all entities. This view implies that there are nonexistent entities. According to universalism, existence is a universal property instantiated by every entity. Meinongianism is a view about existence defended by Meinong and his followers. Its main claim is that there are some entities that do not exist. This means that objecthood is independent of existence. Proposed examples of nonexistent objects are merely possible objects, like flying pigs, as well as fictional and mythical objects, like Sherlock Holmes and Zeus. 
According to this view, these objects are real and have being even though they do not exist. Meinong states that there is an object for any combination of properties. For example, there is an object that only has the single property of being a singer without any additional properties. This means that neither the attribute of wearing a dress nor the absence of it applies to this object. Meinong also includes impossible objects, like round squares.

Meinongians state that sentences describing what Sherlock Holmes and Zeus are like refer to nonexisting objects. They are true or false depending on whether these objects have the properties ascribed to them. For instance, the sentence "Pegasus has wings" is true because having wings is a property of Pegasus, even though Pegasus lacks the property of existing.

One key motivation of Meinongianism is to explain how negative singular existentials, like "Ronald McDonald does not exist", can be true. Meinongians accept the idea that singular terms, like "Ronald McDonald", refer to individuals. For them, a negative singular existential is true if the individual it refers to does not exist.

Meinongianism has important implications for how to understand quantification. According to an influential view defended by Willard Van Orman Quine, the domain of quantification is restricted to existing objects. This view implies that quantifiers carry ontological commitments about what exists and what does not exist. Meinongianism differs from this view by holding that the widest domain of quantification includes both existing and nonexisting objects.

Some aspects of Meinongianism are controversial and have received substantial criticism. According to one objection, one cannot distinguish between being an object and being an existing object. A closely related criticism rests on the idea that objects cannot have properties if they do not exist. A further objection is that Meinongianism leads to an "overpopulated universe" since there is an object corresponding to any combination of properties. A more specific criticism rejects the idea that there are incomplete and impossible objects.

Universalists agree with Meinongians that existence is a property of individuals. But they deny that there are nonexistent entities. They state instead that existence is a universal property: all entities have it, meaning that everything exists. One approach is to hold that existence is the same as self-identity. According to the law of identity, every object is identical to itself or has the property of self-identity. This can be expressed in predicate logic as ∀x(x = x).

An influential argument in favor of universalism rests on the claim that to deny the existence of something is contradictory. This conclusion follows from the premises that one can only deny the existence of something by referring to that entity and that one can only refer to entities that exist.

Universalists have proposed different ways of interpreting negative singular existentials. According to one view, names of fictional entities like "Ronald McDonald" refer to abstract objects. Abstract objects exist even though they do not exist in space and time. This means that, when understood in a strict sense, all negative singular existentials are false, including the claim that "Ronald McDonald does not exist". However, universalists can interpret such sentences slightly differently in relation to the context.
In everyday life, for example, people use sentences like "Ronald McDonald does not exist" to express the idea that Ronald McDonald does not exist as a concrete object, which is true. A different approach is to claim that negative singular existentials lack a truth value since their singular terms do not refer to anything. According to this view, they are neither true nor false but meaningless.

Different types of existing entities are discussed in the academic literature. Many discussions revolve around the questions of what those types are, whether entities of a specific type exist, how entities of different types are related to each other, and whether some types are more fundamental than others. Examples are questions like whether souls exist, whether there are abstract, fictional, and universal entities, and whether, besides the actual world and its objects, there are also possible worlds and objects.

One distinction is between singular and general existence. Singular existence is the existence of individual entities. For example, the sentence "Angela Merkel exists" expresses the existence of one particular person. General existence pertains to general concepts, properties, or universals. For instance, the sentence "politicians exist" states that the general term "politician" has instances without referring to any one politician in particular.

Singular and general existence are closely related to each other and some philosophers have tried to explain one as a special case of the other. For example, Frege held that general existence is more basic. One argument in favor of this position is that general existence can be expressed in terms of singular existence. For instance, the sentence "Angela Merkel exists" can be expressed as "entities exist that are identical to Angela Merkel", where the expression "being identical to Angela Merkel" is understood as a general term. A different position is defended by Quine, who gives primacy to singular existence. A related question is whether there can be general existence without singular existence. According to philosophers like Henry S. Leonard, a property only has general existence if there is at least one actual object that instantiates it. A different view, defended by Nicholas Rescher, holds that properties can exist even if they have no actual instances, like the property of being a unicorn.

This question has a long philosophical tradition in relation to the existence of universals. Platonists claim that universals have general existence as Platonic forms independently of the particulars that instantiate them. According to this view, the universal of redness exists independently of whether there are any red objects. Aristotelianism also accepts that universals exist. However, it holds that their existence depends on particulars that instantiate them and that they are unable to exist by themselves. According to this view, a universal that has no instances in the spatio-temporal world does not exist. Nominalists claim that only particulars have existence and deny that universals exist.

Another influential distinction in ontology is between concrete and abstract objects. Many concrete objects are encountered in regular everyday life, like rocks, plants, and other people. They exist in space and time and influence each other: they have causal powers and are affected by other concrete objects. Abstract objects exist outside space and time and lack causal powers. Examples of abstract objects are numbers, sets, and types.
The distinction between concrete and abstract objects is sometimes treated as the most general division of being.

There is wide agreement that concrete objects exist but opinions are divided in regard to abstract objects. Realists accept the idea that abstract objects have independent existence. Some of them claim that abstract objects have the same mode of existence as concrete objects while others maintain that they exist but in a different way. Antirealists state that abstract objects do not exist. This is often combined with the view that existence requires a location in space and time or the ability to causally interact.

Fictional objects, like dragons and centaurs, are closely related to abstract objects and pose similar problems. However, the two terms are not identical. For example, the expression "the integer between two and three" refers to a fictional abstract object while the expression "the integer between two and four" refers to a non-fictional abstract object. Similarly, besides abstract fictional objects there are also concrete fictional objects, like the winged horse of Bellerophon.

A further distinction is between merely possible, contingent, and necessary existence. An entity has necessary existence if it must exist or could not fail to exist. Entities that exist but could fail to exist are contingent. Merely possible entities are entities that do not exist but could exist. These notions can be stated formally in quantified modal logic, as sketched below.

Most entities encountered in ordinary experience, like telephones, sticks, and flowers, have contingent existence. It is an open question whether any entities have necessary existence. According to one view, all concrete objects have contingent existence while all abstract objects have necessary existence. According to some theorists, one or several necessary beings are required as the explanatory foundation of the cosmos. For instance, philosophers like Avicenna and Thomas Aquinas follow this idea and claim that God has necessary existence.

There are many academic debates about whether there are merely possible objects. According to actualism, only actual entities have being. This includes both contingent and necessary entities. But it excludes merely possible entities. This view is rejected by possibilists, who state that there are also merely possible objects besides actual objects. For example, David Lewis argues that possible objects exist in the same way as actual objects. According to him, possible objects exist in possible worlds while actual objects exist in the actual world. Lewis holds that the only difference between possible worlds and the actual world is the location of the speaker: the term "actual" refers to the world of the speaker, similar to how the terms "here" and "now" refer to the spatial and temporal location of the speaker.

A further distinction is between entities that exist on a physical level in contrast to mental entities. Physical entities include objects of regular perception, like stones, trees, and human bodies, as well as entities discussed in modern physics, like electrons and protons. Physical entities can be observed and measured. They possess mass and a location in space and time. Mental entities belong to the realm of the mind, like perceptions, experiences of pleasure and pain, as well as beliefs, desires, and emotions. They are primarily associated with conscious experiences but also include unconscious states, like unconscious beliefs, desires, and memories.
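
The distinction between necessary, contingent, and merely possible existence introduced above is often made precise in quantified modal logic. The following rendering is one standard sketch rather than a definitive analysis; the constant a stands for an arbitrary entity, □ abbreviates "necessarily", and ◇ abbreviates "possibly":

    Necessary existence of a:       □∃x(x = a)
    Contingent existence of a:      ∃x(x = a) ∧ ◇¬∃x(x = a)
    Merely possible existence of a: ¬∃x(x = a) ∧ ◇∃x(x = a)

On this rendering, the open question of whether any entity has necessary existence is the question of whether □∃x(x = a) holds for some constant a.
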
The ontological status of physical and mental entities is a frequent topic in metaphysics and philosophy of mind. According to materialists, only physical entities exist on the most fundamental level. Materialists usually explain mental entities in terms of physical processes, for example, as brain states or as patterns of neural activation. Idealists reject this view and state that mind is the ultimate foundation of existence. They hold that physical entities have a derivative form of existence, for instance, that they are mental representations or products of consciousness. Dualists believe that both physical and mental entities exist on the most fundamental level. They state that the two are connected to one another in various ways but that neither can be reduced to the other.

Closely related to the problem of different types of entities is the question of whether they also differ concerning their mode of existence. This is the case according to ontological pluralism. In this view, entities belonging to different types do not just differ in their essential features but also in the way they exist.

This position is sometimes found in theology. It states that God is radically different from his creation and emphasizes his uniqueness by holding that the difference affects not just God's features but also God's mode of existence.

Another form of ontological pluralism distinguishes the existence of material objects from the existence of spacetime. This view holds that material objects have relative existence since they exist in spacetime. It further states that the existence of spacetime itself is not relative in this sense since it just exists without existing within another spacetime.

The topic of degrees of existence is closely related to the issue of modes of existence. This topic is based on the idea that some entities exist to a higher degree or have more being than other entities, similar to how some properties, like heat and mass, come in degrees. According to Plato, for example, unchangeable Platonic forms have a higher degree of existence than physical objects.

While the view that there are different types of entities is common in metaphysics, the idea that they differ from each other concerning their modes or degrees of existence is not generally accepted. For instance, philosopher Quentin Gibson maintains that a thing either exists or does not exist. This means that there is no alternative in between and that there are no degrees of existence. Peter van Inwagen uses the idea that there is an intimate relation between existence and quantification to argue against different modes of existence. Quantification is related to how people count objects. Van Inwagen argues that if there were different modes of existence then people would need different types of numbers to count entities that exist in different ways. Since the same numbers can be used to count all types of entities, he concludes that all entities have the same mode of existence.

A central question in ontology is why anything exists at all or why there is something rather than nothing. Similar questions are "why is there a world?" and "why are there individual things?". These questions focus on the idea that many things that exist are contingent, meaning they could have failed to exist. The question is whether this applies to existence as a whole as well or whether there is a reason why something exists instead of nothing.
This question is different from scientific questions that seek to explain the existence of one thing, like life, in relation to the existence of another thing, like the primordial soup which may have been its origin. It is also different from most religious creation myths that explain the existence of the material world in relation to a god or gods that created it. The difference lies in the fact that these theories explain the existence of one thing in terms of the existence of another thing instead of trying to explain existence in general. The additional difficulty of the ontological question lies in the fact that an explanation of existence in general cannot appeal to any existing entity without engaging in circular reasoning.

One answer to the question of why there is anything at all is called the statistical argument. It is based on the idea that besides the actual world, there are many possible worlds. They differ from the actual world in various respects. For example, the Eiffel Tower exists in the actual world but there are possible worlds without the Eiffel Tower. There are countless variations of possible worlds but there is only one possible world that is empty, i.e., one that does not contain any entities. This means that, if it were up to chance which possible world becomes actual, the chance that there is nothing is exceedingly small. A closely related argument in physics explains the existence of the world as the result of random quantum fluctuations.

Another response is to deny that a reason or an explanation for existence in general can be found. According to this view, existence as a whole is absurd since it is there without a reason for being there.

Not all theorists accept this as a valid or philosophically interesting question. Some philosophers, like Graham Priest and Kris McDaniel, have suggested that the term "nothing" refers to a global absence, which can itself be understood as a form of existence. According to this view, the answer to the question is trivial, since there is always something, even if this something is just a global absence. A closely related response is to claim that an empty world is metaphysically impossible. According to this view, there is something rather than nothing because it is necessary for some things to exist.

Western philosophy originated with the Presocratic philosophers, who explored the foundational principles of all existence. Some, like Thales and Heraclitus, suggested that concrete principles, like water or fire, are the root of existence. This position was opposed by Anaximander, who held that the source must lie in an abstract principle beyond the world of human perception.

Plato argued that different types of entities have different degrees of existence. He held that shadows and images exist in a weaker sense than regular material objects. He claimed that the unchangeable Platonic forms have the highest type of existence. He saw material objects as imperfect and impermanent copies of Platonic forms.

While Aristotle accepted Plato's idea that forms are different from matter, he challenged the idea that forms have a higher type of existence. Instead, he held that forms cannot exist without matter. Aristotle further claimed that different entities have different modes of existence. For example, he distinguished between substances and their accidents and between potentiality and actuality.

Neoplatonists, like Plotinus, suggested that reality has a hierarchical structure.
They held that a transcendent entity called "the One" or "the Good" is responsible for all existence. From it emerges the intellect, which in turn gives rise to the soul and the material world.

In medieval philosophy, Anselm of Canterbury formulated the influential ontological argument. This argument aims to deduce the existence of God from the concept of God. Anselm defined God as the greatest conceivable being. He reasoned that an entity that did not exist outside the mind would not be the greatest conceivable being. This led him to the conclusion that God exists.

Thomas Aquinas distinguished between the essence of a thing and its existence. According to him, the essence of a thing constitutes its fundamental nature. He argued that it is possible to understand what an object is and grasp its essence even if one does not know whether this object exists. He concluded from this observation that existence is not part of the qualities of an object and should instead be understood as a separate property. Aquinas also considered the problem of creation from nothing. He claimed that only God has the power to truly bring new entities into existence. These ideas later inspired Gottfried Wilhelm Leibniz's theory of creation. Leibniz held that to create is to confer actual existence on possible objects.

Both David Hume and Immanuel Kant rejected the idea that existence is a property. According to Hume, objects are bundles of qualities. He held that existence is not a property since there is no impression of existence besides the bundled qualities. Kant came to a similar conclusion in his criticism of the ontological argument. According to him, this proof fails because one cannot deduce from the definition of a concept whether entities described by this concept exist. He held that existence does not add anything to the concept of the object; it only indicates that this concept is instantiated.

Franz Brentano agreed with Kant's criticism and his claim that existence is not a real predicate. He used this idea to develop his theory of judgments. According to him, all judgments are existential judgments: they either affirm or deny the existence of something. He stated that judgments like "some zebras are striped" have the logical form "there is a striped zebra" while judgments like "all zebras are striped" have the logical form "there is not a non-striped zebra".

Gottlob Frege and Bertrand Russell aimed to refine the idea of what it means that existence is not a regular property. They distinguished between regular first-order properties of individuals and second-order properties of other properties. According to this view, to talk about existence is to talk about the second-order property of being instantiated. For instance, to deny that dinosaurs exist means that the property of being a dinosaur has the property of not being instantiated. According to Russell, the fundamental form of predication happens by applying a predicate to the proper name of an individual. An example of this type of atomic proposition is "Laika is a dog". Russell held that talk of existence in the form of sentences like "dogs exist" is less fundamental since it means that there is an individual to which this predicate applies without naming this individual.

Willard Van Orman Quine followed Frege and Russell in accepting that existence is a second-order property. He drew a close link between existence and the role of quantification in formal logic.
He applied this idea to scientific theories and stated that a scientific theory is committed to the existence of an entity if the theory quantifies over this entity. For example, if a theory in biology claims that "there are populations with genetic diversity" then this theory has an ontological commitment to the existence of populations with genetic diversity.

Despite the influence of second-order theories, this view was not universally accepted. Alexius Meinong rejected it and claimed that existence is a property of individuals and that not all individuals have this property. This led him to the thesis that there is a difference between being and existence: all individuals have being but only some of them also exist. This implies that there are some things that do not exist, like merely possible objects and impossible objects.

Many schools of thought in Eastern philosophy discuss the problem of existence and its implications. For instance, the ancient Hindu school of Samkhya developed a metaphysical dualism. According to this view, there are two types of existence: pure consciousness (Purusha) and matter (Prakriti). Samkhya explains the manifestation of the universe as the interaction between these two principles. A different approach was developed by Adi Shankara in his school of Advaita Vedanta. He defended a metaphysical monism by claiming that the divine (Brahman) is the ultimate reality and the only existent. According to this view, the impression that there is a universe consisting of many distinct entities is an illusion (Maya). The essential features of ultimate reality are described as Sat Chit Ananda, meaning existence, consciousness, and bliss.

A central doctrine in Buddhist philosophy is called the three marks of existence. The three marks are aniccā (impermanence), anattā (absence of a permanent self), and dukkha (suffering). Aniccā is the doctrine that all of existence is subject to change. This means everything transforms at some point and nothing lasts forever. Anattā expresses a similar idea in relation to persons: the claim that people do not have a permanent identity or a separate self. Ignorance about aniccā and anattā is seen as the main cause of dukkha by leading people to form attachments that cause suffering.

A central idea in many schools of Chinese philosophy, like Laozi's Daoism, is that a fundamental principle known as dao is the source of all existence. The term is often translated as "the Way" and is understood as a cosmic force that governs the natural order of the world. One position in Chinese metaphysics holds that dao is itself a form of being while another contends that it is non-being that gives rise to being.

The concept of existence played a central role in Arabic-Persian philosophy. Avicenna and Al-Ghazali discussed the relation between existence and essence. According to them, the essence of an entity is prior to its existence. The additional step of instantiating the essence is required for the entity to come into existence. Mulla Sadra rejected this priority of essence over existence. He argued that essence is only a concept used by the mind to grasp existence. Existence, by contrast, encompasses the whole of reality, according to his view.

Formal logic studies which arguments are deductively valid. First-order logic is the most commonly used system of formal logic. In it, existence is expressed using the existential quantifier (∃).
For example, the formula ∃x Horse(x) can be used to state that horses exist. The variable x ranges over all elements in the domain of quantification and the existential quantifier expresses that at least one element in this domain is a horse. In first-order logic, all singular terms, like names, refer to objects in the domain and imply that the object exists. Because of this, one can deduce ∃x Honest(x) (someone is honest) from Honest(Bill) (Bill is honest). Many logical systems that are based on first-order logic also follow this idea. Free logic is an exception. It allows there to be empty names that do not refer to any object in the domain. One motivation for this modification is that reasoning is not limited to regular objects but can also be applied to fictional objects. In free logic, for instance, one can express that Pegasus is a flying horse using the formula Flyinghorse(Pegasus). One consequence of this modification is that one cannot infer from this type of statement that something exists. This means that the inference from Flyinghorse(Pegasus) to ∃x Flyinghorse(x) is invalid in free logic even though it would be valid in first-order logic. Free logic uses an additional existence predicate (E!) to express that a singular term refers to an existing object. For example, the formula E!(Homer) can be used to express that Homer exists while the formula ¬E!(Pegasus) states that Pegasus does not exist. A schematic model of this behavior is sketched at the end of the article.

The disciplines of epistemology, philosophy of mind, and philosophy of language aim to understand the nature of knowledge, mind, and language. A key issue in these fields is the problem of reference. This problem concerns the question of how mental or linguistic representations can refer to existing objects. Examples of such representations are beliefs, thoughts, perceptions, words, and sentences. For instance, in the sentence "Barack Obama is a Democrat", the name "Barack Obama" refers to a particular individual. In relation to perception, the problem of reference concerns the question of whether or to what extent perceptual impressions bring the perceiver into contact with reality by presenting existing objects rather than illusions.

Closely related to the problem of reference is the relation between truth and existence. Representations can be true or false. According to truthmaker theory, true representations require a truthmaker. A truthmaker of a representation is the entity whose existence is responsible for the fact that the representation is true. For example, the sentence "kangaroos live in Australia" is true because there are kangaroos in Australia: the existence of these kangaroos is the truthmaker of the sentence. Truthmaker theory states that there is a close relation between truth and existence: there exists a truthmaker for every true representation.

Existentialism is a school of thought that explores the nature of human existence. One of its key ideas is that existence precedes essence. This claim expresses the notion that existence is more basic than essence and that the nature and purpose of human beings are not pregiven but develop in the process of living.
According to this view, humans are thrown into a world that lacks preexistent intrinsic meaning. They have to determine for themselves what their purpose is and what meaning their life should have. Existentialists use this idea to focus on the role of freedom and responsibility in actively shaping one's life.
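
The contrast between classical first-order logic and free logic described above can be illustrated with a small computational sketch. The following Python model is an illustration only: the names, domain, and stipulated facts are invented, and it corresponds to a positive free logic, in which atomic claims involving empty names can be true.

    # A toy model of free logic: names may fail to denote, quantifiers range
    # over existing objects only, and E! marks names that denote.
    # Names, domain, and facts are invented for illustration.

    domain = {"homer"}                                   # the existing objects
    denotation = {"Homer": "homer", "Pegasus": None}     # "Pegasus" is an empty name

    # Atomic truths are stipulated directly for names; a positive free logic
    # allows such truths even for non-denoting names (here: facts drawn from myth).
    atomic_facts = {("Flyinghorse", "Pegasus"), ("Honest", "Homer")}

    def E(name):
        """Existence predicate E!: the name denotes an existing object."""
        return denotation[name] in domain

    def holds(pred, name):
        return (pred, name) in atomic_facts

    def exists_x(pred):
        """Existential quantifier: only existing objects can serve as witnesses."""
        return any(holds(pred, n) and E(n) for n in denotation)

    print(holds("Flyinghorse", "Pegasus"))  # True: Flyinghorse(Pegasus)
    print(exists_x("Flyinghorse"))          # False: existential generalization blocked
    print(exists_x("Honest"))               # True: Honest(Homer) and E!(Homer)
    print(E("Homer"), E("Pegasus"))         # True False

In this model, the quantifier only accepts existing objects as witnesses, so the truth of Flyinghorse(Pegasus) does not license ∃x Flyinghorse(x), mirroring the failure of existential generalization in free logic.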
[ { "paragraph_id": 0, "text": "Existence is the state of being real or participating in reality. The terms \"being\", \"reality\", and \"actuality\" are often used as close synonyms. Existence contrasts with nonexistence, nothingness, and nonbeing. A common distinction is between the existence of an entity and its essence, which refers to the entity's nature or essential qualities.", "title": "" }, { "paragraph_id": 1, "text": "The main philosophical discipline studying existence is called ontology. The orthodox view is that it is a second-order property or a property of properties. According to this view, to say that a thing exists means that its properties are instantiated. A different view holds that existence is a first-order property or a property of individuals. This means that existence has the same ontological status as other properties of individuals, like color and shape. Meinongians accept this idea and hold that not all individuals have this property: they state that there are some individuals that do not exist. This view is rejected by universalists, who see existence as a universal property of every individual.", "title": "" }, { "paragraph_id": 2, "text": "Various types of existence are discussed in the academic literature. Singular existence is the existence of individual entities while general existence refers to the existence of concepts or universals. Other distinctions are between abstract and concrete existence, between possible, contingent, and necessary existence, and between physical and mental existence. A closely related issue is whether different types of entities exist in different ways or to different degrees.", "title": "" }, { "paragraph_id": 3, "text": "A key question in ontology is whether there is a reason for existence in general or why anything at all exists. The concept of existence is relevant to various fields, including logic, epistemology, philosophy of mind, philosophy of language, and existentialism.", "title": "" }, { "paragraph_id": 4, "text": "Existence is the state of being real. To exist means to have being or to participate in reality. Existence is what sets real entities apart from imaginary ones. It can refer both to individual entities or to the totality of reality. The word \"existence\" entered the English language in the late 14th century from old French. It has its roots in the medieval Latin term ex(s)istere, which means to stand forth, to appear, and to arise. Existence is studied by the subdiscipline of metaphysics known as ontology.", "title": "Definition and related terms" }, { "paragraph_id": 5, "text": "The terms \"being\", \"reality\", and \"actuality\" are closely related to existence. They are usually used as synonyms of \"existence\" but their meanings as technical terms may come apart. According to metaphysicist Alexius Meinong, for example, all entities have being but not all have existence. He argues that merely possible objects, like Santa Claus, have being but lack existence. Ontologist Takashi Yagisawa contrasts existence with reality. He sees \"reality\" as the more fundamental term since it characterizes all entities equally. He defines existence as a relative term that connects an entity to the world that it inhabits. According to Gottlob Frege, actuality is more narrow than existence. He holds that actual entities can produce and undergo changes. 
He states that some existing entities are non-actual, like numbers and sets.", "title": "Definition and related terms" }, { "paragraph_id": 6, "text": "Existence contrasts with nonexistence, which refers to a lack of reality. It is controversial whether objects can be divided into existent and nonexistent objects. This distinction is sometimes used to explain how it is possible to think of fictional objects, like dragons and unicorns. But the concept of nonexistent objects is not generally accepted. Closely related contrasting terms are nothingness and nonbeing.", "title": "Definition and related terms" }, { "paragraph_id": 7, "text": "Another contrast is between existence and essence. Essence refers to the intrinsic nature or defining qualities of an entity. The essence of something determines what kind of entity it is and how it differs from other kinds of entities. Essence corresponds to what an entity is while existence corresponds to the fact that it is. For instance, it is possible to understand what an object is and grasp its nature even if one does not know whether this object exists.", "title": "Definition and related terms" }, { "paragraph_id": 8, "text": "Some philosophers, like Edmund Husserl and Quentin Boyce Gibson, hold that existence is an elementary concept. This means that it cannot be defined in other terms without involving circularity. This would imply that it may be difficult or impossible to characterize existence or to talk about its nature in a non-trivial manner.", "title": "Definition and related terms" }, { "paragraph_id": 9, "text": "A closely related issue concerns the distinction between thin and thick concepts of existence. Thin concepts understand existence as a logical property that every existing thing shares. It does not include any substantial content about the metaphysical implications of having existence. An example of a thin concept of existence is to state that existence is the same as the logical property of self-identity. Thick concepts of existence encompass a metaphysical analysis of what it means that something exists and what essential features existence implies. For example, George Berkeley's claim that esse est percipi presents a thick concept of existence. It can be translated as \"to be is to be perceived\" and highlights the mental nature of all existence.", "title": "Definition and related terms" }, { "paragraph_id": 10, "text": "Some philosophers emphasize that there is a difference between the entities that exist and existence itself. A similar distinction plays a central role in the philosophy of Martin Heidegger, who calls it the ontological difference and contrasts the individual beings that exist with their being or the horizon of meaning of their existence.", "title": "Definition and related terms" }, { "paragraph_id": 11, "text": "Theories of the nature of existence aim to explain what it means for something to exist. The central dispute regarding the nature of existence is whether it should be understood as a property of individuals.", "title": "Theories of the nature of existence" }, { "paragraph_id": 12, "text": "The two main theories of existence are first-order theories and second-order theories. First-order theories understand existence as a property of individuals. Some first-order theories see it as a property of all individuals while others hold that there are some individuals that do not exist. 
Second-order theories hold that existence is a second-order property, that is, a property of properties.", "title": "Theories of the nature of existence" }, { "paragraph_id": 13, "text": "A central challenge for the different theories of the nature of existence is to understand how it is possible to coherently deny the existence of something. An example is the sentence \"Santa Claus does not exist\". One difficulty consists in explaining how the name \"Santa Claus\" can be meaningful even though there is no Santa Claus.", "title": "Theories of the nature of existence" }, { "paragraph_id": 14, "text": "Second-order theories are often seen as the orthodox position. They understand existence as a second-order property rather than a first-order property. For instance, the Empire State Building is an individual object and being 443.2 meters tall is a first-order property of it. Being instantiated is a property of being 443.2 meters tall and therefore a second-order property. According to second-order theories, to talk about existence is to talk about which properties have instances. For example, this view states that the sentence \"God exists\" does not claim that God has the property of existing. Instead, it means \"Godhood is instantiated\".", "title": "Theories of the nature of existence" }, { "paragraph_id": 15, "text": "A key motivation of second-order theories is that existence is in important ways different from regular properties like being a building and being 443.2 meters tall: regular properties express what an object is like but existence does not. According to this view, existence is more fundamental than regular properties since, without it, objects cannot instantiate any properties.", "title": "Theories of the nature of existence" }, { "paragraph_id": 16, "text": "Second-order theorists usually hold that quantifiers rather than predicates express existence. Quantifiers are terms that talk about the quantity of objects that have certain properties. Existential quantifiers express that there is at least one object. Examples are expressions like \"some\" and \"there exists\", as in \"some cows eat grass\" and \"there exists an even prime number\". In this regard, existence is closely related to counting since to claim that something exists is to claim that the corresponding concept has one or more instances.", "title": "Theories of the nature of existence" }, { "paragraph_id": 17, "text": "Second-order views imply that a sentence like \"egg-laying mammals exist\" is misleading since the word \"exist\" is used as a predicate in them. They hold instead that their true logical form is better expressed in reformulations like \"there exist entities that are egg-laying mammals\". This way, existence has the role of a quantifier while egg-laying mammals is the predicate. Quantifier constructions can also be used to express negative existential statements. For instance, the sentence \"talking tigers do not exist\" can be expressed as \"it is not the case that there exist talking tigers\".", "title": "Theories of the nature of existence" }, { "paragraph_id": 18, "text": "Many ontologists accept that second-order theories provide a correct analysis of many types of existential sentences. However, it is controversial whether it is correct for all cases. One difficulty is caused by so-called negative singular existentials. Negative singular existentials are statements that deny that a particular object exists. An example is the sentence \"Ronald McDonald does not exist\". 
Singular terms, like Ronald McDonald, seem to refer to individuals. This poses a difficulty since negative singular existentials deny that this individual exists. This makes it unclear how the singular term can refer to the individual in the first place. One influential solution to this problem was proposed by Bertrand Russell. He holds that singular terms do not directly refer to individuals but are instead descriptions of individuals. Positive singular existentials affirm that an object matching the descriptions exists while negative singular existentials deny that an object matching the descriptions exists. According to this view, the sentence \"Ronald McDonald does not exist\" expresses the idea that \"it is not the case that there is a unique happy hamburger clown\".", "title": "Theories of the nature of existence" }, { "paragraph_id": 19, "text": "First-order theories claim that existence is a property of individuals. They are less widely accepted than second-order theories but also have some influential proponents. There are two types of first-order theories. According to Meinongianism, existence is a property of some but not all entities. This view implies that there are nonexistent entities. According to universalism, existence is a universal property instantiated by every entity.", "title": "Theories of the nature of existence" }, { "paragraph_id": 20, "text": "Meinongianism is a view about existence defended by Meinong and his followers. Its main claim is that there are some entities that do not exist. This means that objecthood is independent of existence. Proposed examples of nonexistent objects are merely possible objects, like flying pigs, as well as fictional and mythical objects, like Sherlock Holmes and Zeus. According to this view, these objects are real and have being even though they do not exist. Meinong states that there is an object for any combination of properties. For example, there is an object that only has the single property of being a singer without any additional properties. This means that neither the attribute of wearing a dress nor the absence of it applies to this object. Meinong also includes impossible objects, like round squares.", "title": "Theories of the nature of existence" }, { "paragraph_id": 21, "text": "Meinongians state that sentences describing what Sherlock Holmes and Zeus are like refer to nonexisting objects. They are true or false depending on whether these objects have the properties ascribed to them. For instance, the sentence \"Pegasus has wings\" is true because having wings is a property of Pegasus, even though Pegasus lacks the property of existing.", "title": "Theories of the nature of existence" }, { "paragraph_id": 22, "text": "One key motivation of Meinongianism is to explain how negative singular existentials, like \"Ronald McDonald does not exist\", can be true. Meinongians accept the idea that singular terms, like \"Ronald McDonald\" refer to individuals. For them, a negative singular existential is true if the individual it refers to does not exist.", "title": "Theories of the nature of existence" }, { "paragraph_id": 23, "text": "Meinongianism has important implications for how to understand quantification. According to an influential view defended by Willard Van Orman Quine, the domain of quantification is restricted to existing objects. This view implies that quantifiers carry ontological commitments about what exists and what does not exist. 
Meinongianism differs from this view by holding that the widest domain of quantification includes both existing and nonexisting objects.", "title": "Theories of the nature of existence" }, { "paragraph_id": 24, "text": "Some aspects of Meinongianism are controversial and have received substantial criticism. According to one objection, one cannot distinguish between being an object and being an existing object. A closely related criticism rests on the idea that objects cannot have properties if they do not exist. A further objection is that Meinongianism leads to an \"overpopulated universe\" since there is an object corresponding to any combination of properties. A more specific criticism rejects the idea that there are incomplete and impossible objects.", "title": "Theories of the nature of existence" }, { "paragraph_id": 25, "text": "Universalists agree with Meinongians that existence is a property of individuals. But they deny that there are nonexistent entities. They state instead that existence is a universal property: all entities have it, meaning that everything exists. One approach is to hold that existence is the same as self-identity. According to the law of identity, every object is identical to itself or has the property of self-identity. This can be expressed in predicate logic as ∀ x ( x = x ) {\\displaystyle \\forall x(x=x)} .", "title": "Theories of the nature of existence" }, { "paragraph_id": 26, "text": "An influential argument in favor of universalism rests on the claim that to deny the existence of something is contradictory. This conclusion follows from the premises that one can only deny the existence of something by referring to that entity and that one can only refer to entities that exist.", "title": "Theories of the nature of existence" }, { "paragraph_id": 27, "text": "Universalists have proposed different ways of interpreting negative singular existentials. According to one view, names of fictional entities like \"Ronald McDonald\" refer to abstract objects. Abstract objects exist even though they do not exist in space and time. This means that, when understood in a strict sense, all negative singular existentials are false, including the claim that \"Ronald McDonald does not exist\". However, universalists can interpret such sentences slightly differently in relation to the context. In everyday life, for example, people use sentences like \"Ronald McDonald does not exist\" to express the idea that Ronald McDonald does not exist as a concrete object, which is true. A different approach is to claim that negative singular existentials lack a truth value since their singular terms do not refer to anything. According to this view, they are neither true nor false but meaningless.", "title": "Theories of the nature of existence" }, { "paragraph_id": 28, "text": "Different types of existing entities are discussed in the academic literature. Many discussions revolve around the questions of what those types are, whether entities of a specific type exist, how entities of different types are related to each other, and whether some types are more fundamental than others. Examples are questions like whether souls exist, whether there are abstract, fictional, and universal entities, and whether besides the actual world and its objects, there are also possible worlds and objects.", "title": "Types of existing entities" }, { "paragraph_id": 29, "text": "One distinction is between singular and general existence. Singular existence is the existence of individual entities. 
For example, the sentence \"Angela Merkel exists\" expresses the existence of one particular person. General existence pertains to general concepts, properties, or universals. For instance, the sentence \"politicians exist\" states that the general term \"politician\" has instances without referring to any one politician in particular.", "title": "Types of existing entities" }, { "paragraph_id": 30, "text": "Singular and general existence are closely related to each other and some philosophers have tried to explain one as a special case of the other. For example, Frege held that general existence is more basic. One argument in favor of this position is that general existence can be expressed in terms of singular existence. For instance, the sentence \"Angela Merkel exists\" can be expressed as \"entities exist that are identical to Angela Merkel\", where the expression \"being identical to Angela Merkel\" is understood as a general term. A different position is defended by Quine, who gives primacy to singular existence. A related question is whether there can be general existence without singular existence. According to philosophers like Henry S. Leonard, a property only has general existence if there is at least one actual object that instantiates it. A different view, defended by Nicholas Rescher, holds that properties can even exist if they have no actual instances, like the property of being a unicorn.", "title": "Types of existing entities" }, { "paragraph_id": 31, "text": "This question has a long philosophical tradition in relation to the existence of universals. Platonists claim that universals have general existence as Platonic forms independently of the particulars that instantiate them. According to this view, the universal of redness exists independent of whether there are any red objects. Aristotelianism also accepts that universals exist. However, it holds that their existence depends on particulars that instantiate them and that they are unable to exist by themselves. According to this view, a universal that has no instances in the spacio-temporal world does not exist. Nominalists claim that only particulars have existence and deny that universals exist.", "title": "Types of existing entities" }, { "paragraph_id": 32, "text": "Another influential distinction in ontology is between concrete and abstract objects. Many concrete objects are encountered in regular everyday life, like rocks, plants, and other people. They exist in space and time and influence each other: they have causal powers and are affected by other concrete objects. Abstract objects exist outside space and time and lack causal powers. Examples of abstract objects are numbers, sets, and types. The distinction between concrete and abstract objects is sometimes treated as the most general division of being.", "title": "Types of existing entities" }, { "paragraph_id": 33, "text": "There is wide agreement that concrete objects exist but opinions are divided in regard to abstract objects. Realists accept the idea that abstract objects have independent existence. Some of them claim that abstract objects have the same mode of existence as concrete objects while others maintain that they exist but in a different way. Antirealists state that abstract objects do not exist. 
This is often combined with the view that existence requires a location in space and time or the ability to causally interact.", "title": "Types of existing entities" }, { "paragraph_id": 34, "text": "Fictional objects, like dragons and centaurs, are closely related to abstract objects and pose similar problems. However, the two terms are not identical. For example, the expression \"the integer between two and three\" refers to a fictional abstract object while the expression \"integer between two and four\" refers to a non-fictional abstract object. In a similar sense, there are also concrete fictional objects besides abstract fictional objects, like the winged horse of Bellerophon.", "title": "Types of existing entities" }, { "paragraph_id": 35, "text": "A further distinction is between merely possible, contingent, and necessary existence. An entity has necessary existence if it must exist or could not fail to exist. Entities that exist but could fail to exist are contingent. Merely possible entities are entities that do not exist but could exist.", "title": "Types of existing entities" }, { "paragraph_id": 36, "text": "Most entities encountered in ordinary experience, like telephones, sticks, and flowers, have contingent existence. It is an open question whether any entities have necessary existence. According to one view, all concrete objects have contingent existence while all abstract objects have necessary existence. According to some theorists, one or several necessary beings are required as the explanatory foundation of the cosmos. For instance, philosophers like Avicenna and Thomas Aquinas follow this idea and claim that God has necessary existence.", "title": "Types of existing entities" }, { "paragraph_id": 37, "text": "There are many academic debates about whether there are merely possible objects. According to actualism, only actual entities have being. This includes both contingent and necessary entities. But it excludes merely possible entities. This view is rejected by possibilists, who state that there are also merely possible objects besides actual objects. For example, David Lewis argues that possible objects exist in the same way as actual objects. According to him, possible objects exist in possible worlds while actual objects exist in the actual world. Lewis holds that the only difference between possible worlds and the actual world is the location of the speaker: the term \"actual\" refers to the world of the speaker, similar to how the terms \"here\" and \"now\" refer to the spatial and temporal location of the speaker.", "title": "Types of existing entities" }, { "paragraph_id": 38, "text": "A further distinction is between entities that exist on a physical level in contrast to mental entities. Physical entities include objects of regular perception, like stones, trees, and human bodies as well as entities discussed in modern physics, like electrons and protons. Physical entities can be observed and measured. They possess mass and a location in space and time. Mental entities belong to the realm of the mind, like perceptions, experiences of pleasure and pain as well as beliefs, desires, and emotions. They are primarily associated with conscious experiences but also include unconscious states, like unconscious beliefs, desires, and memories.", "title": "Types of existing entities" }, { "paragraph_id": 39, "text": "The ontological status of physical and mental entities is a frequent topic in metaphysics and philosophy of mind. 
According to materialists, only physical entities exist on the most fundamental level. Materialists usually explain mental entities in terms of physical processes, for example, as brain states or as patterns of neural activation. Idealists reject this view and state that mind is the ultimate foundation of existence. They hold that physical entities have a derivative form of existence, for instance, that they are mental representations or products of consciousness. Dualists believe that both physical and mental entities exist on the most fundamental level. They state that they are connected to one another in various ways but that one cannot be reduced to the other.", "title": "Types of existing entities" }, { "paragraph_id": 40, "text": "Closely related to the problem of different types of entities is the question of whether they differ also concerning their mode of existence. This is the case according to ontological pluralism. In this view, entities belonging to different types do not just differ in their essential features but also in the way they exist.", "title": "Modes and degrees of existence" }, { "paragraph_id": 41, "text": "This position is sometimes found in theology. It states that God is radically different from his creation and emphasizes his uniqueness by holding that the difference affects not just God's features but also God's mode of existence.", "title": "Modes and degrees of existence" }, { "paragraph_id": 42, "text": "Another form of ontological pluralism distinguishes the existence of material objects from the existence of spacetime. This view holds that material objects have relative existence since they exist in spacetime. It further states that the existence of spacetime itself is not relative in this sense since it just exists without existing within another spacetime.", "title": "Modes and degrees of existence" }, { "paragraph_id": 43, "text": "The topic of degrees of existence is closely related to the issue of modes of existence. This topic is based on the idea that some entities exist to a higher degree or have more being than other entities. It is similar to how some properties have degrees, like heat and mass. According to Plato, for example, unchangeable Platonic forms have a higher degree of existence than physical objects.", "title": "Modes and degrees of existence" }, { "paragraph_id": 44, "text": "While the view that there are different types of entities is common in metaphysics, the idea they differ from each other concerning their modes or degrees of existence is not generally accepted. For instance, philosopher Quentin Gibson maintains that a thing either exists or does not exist. This means that there is no alternative in between and that there are no degrees of existence. Peter van Inwagen uses the idea that there is an intimate relation between existence and quantification to argue against different modes of existence. Quantification is related to how people count objects. Inwagen argues that if there were different modes of entities then people would need different types of numbers to count them. Since the same numbers can be used to count different types of entities, he concludes that all entities have the same mode of existence.", "title": "Modes and degrees of existence" }, { "paragraph_id": 45, "text": "A central question in ontology is why anything exists at all or why there is something rather than nothing. Similar questions are \"why is there a world?\" and \"why are there individual things?\". 
These questions focus on the idea that many things that exist are contingent, meaning they could have failed to exist. It asks whether this applies to existence as a whole as well or whether there is a reason why something exists instead of nothing.", "title": "Why anything exists at all" }, { "paragraph_id": 46, "text": "This question is different from scientific questions that seek to explain the existence of one thing, like life, in relation to the existence of another thing, like the primordial soup which may have been its origin. It is also different from most religious creation myths that explain the existence of the material world in relation to a god or gods that created it. The difference lies in the fact that these theories explain the existence of one thing in terms of the existence of another thing instead of trying to explain existence in general. The additional difficulty of the ontological question lies in the fact that one cannot refer to any other existing entity without engaging in circular reasoning.", "title": "Why anything exists at all" }, { "paragraph_id": 47, "text": "One answer to the question of why there is anything at all is called the statistical argument. It is based on the idea that besides the actual world, there are many possible worlds. They differ from the actual world in various respects. For example, the Eiffel Tower exists in the actual world but there are possible worlds without the Eiffel Tower. There are countless variations of possible worlds but there is only one possible world that is empty, i.e., that does not contain any entities. This means that, if it was up to chance which possible world becomes actual, the chance that there is nothing is exceedingly small. A closely related argument in physics explains the existence of the world as the result of random quantum fluctuations.", "title": "Why anything exists at all" }, { "paragraph_id": 48, "text": "Another response is to deny that a reason or an explanation for existence in general can be found. According to this view, existence as a whole is absurd since it is there without a reason for being there.", "title": "Why anything exists at all" }, { "paragraph_id": 49, "text": "Not all theorists accept this question as a valid or philosophically interesting question. Some philosophers, like Graham Priest and Kris McDaniel, have suggested that the term nothing refers to a global absence, which can itself be understood as a form of existence. According to this view, the answer to the question is trivial, since there is always something, even if this something is just a global absence. A closely related response is to claim that an empty world is metaphysically impossible. According to this view, there is something rather than nothing because it is necessary for some things to exist.", "title": "Why anything exists at all" }, { "paragraph_id": 50, "text": "Western philosophy originated with the Presocratic philosophers, who explored the foundational principles of all existence. Some, like Thales and Heraclitus, suggested that concrete principles, like water or fire, are the root of existence. This position was opposed by Anaximander, who held that the source must lie in an abstract principle beyond the world of human perception.", "title": "History" }, { "paragraph_id": 51, "text": "Plato argued that different types of entities have different degrees of existence. He held that shadows and images exist in a weaker sense than regular material objects. 
He claimed that the unchangeable Platonic forms have the highest type of existence. He saw material objects as imperfect and impermanent copies of Platonic forms.", "title": "History" }, { "paragraph_id": 52, "text": "While Aristotle accepted Plato's idea that forms are different from matter, he challenged the idea that forms have a higher type of existence. Instead, he held that forms cannot exist without matter. Aristotle further claimed that different entities have different modes of existence. For example, he distinguished between substances and their accidents and between potentiality and actuality.", "title": "History" }, { "paragraph_id": 53, "text": "Neoplatonists, like Plotinus, suggested that reality has a hierarchical structure. They held that a transcendent entity called \"the One\" or \"the Good\" is responsible for all existence. From it emerges the intellect, which in turn gives rise to the soul and the material world.", "title": "History" }, { "paragraph_id": 54, "text": "In medieval philosophy, Anselm of Canterbury formulated the influential ontological argument. This argument aims to deduce the existence of God from the concept of God. Anselm defined God as the greatest conceivable being. He reasoned that an entity that existed only in the mind would not be the greatest conceivable being, since an entity that also existed in reality would be greater. This led him to the conclusion that God exists.", "title": "History" }, { "paragraph_id": 55, "text": "Thomas Aquinas distinguished between the essence of a thing and its existence. According to him, the essence of a thing constitutes its fundamental nature. He argued that it is possible to understand what an object is and grasp its essence even if one does not know whether this object exists. He concluded from this observation that existence is not part of the qualities of an object and should instead be understood as a separate property. Aquinas also considered the problem of creation from nothing. He claimed that only God has the power to truly bring new entities into existence. These ideas later inspired Gottfried Wilhelm Leibniz's theory of creation. Leibniz held that to create is to confer actual existence on possible objects.", "title": "History" }, { "paragraph_id": 56, "text": "Both David Hume and Immanuel Kant rejected the idea that existence is a property. According to Hume, objects are bundles of qualities. He held that existence is not a property since there is no impression of existence besides the bundled qualities. Kant came to a similar conclusion in his criticism of the ontological argument. According to him, this proof fails because one cannot deduce from the definition of a concept whether entities described by this concept exist. He held that existence does not add anything to the concept of the object; it only indicates that this concept is instantiated.", "title": "History" }, { "paragraph_id": 57, "text": "Franz Brentano agreed with Kant's criticism and his claim that existence is not a real predicate. He used this idea to develop his theory of judgments. According to him, all judgments are existential judgments: they either affirm or deny the existence of something. He stated that judgments like \"some zebras are striped\" have the logical form \"there is a striped zebra\" while judgments like \"all zebras are striped\" have the logical form \"there is not a non-striped zebra\".", "title": "History" }, { "paragraph_id": 58, "text": "Gottlob Frege and Bertrand Russell aimed to refine the idea of what it means that existence is not a regular property.
They distinguished between regular first-order properties of individuals and second-order properties of other properties. According to this view, to talk about existence is to talk about the second-order property of being instantiated. For instance, to deny that dinosaurs exist means that the property of being a dinosaur has the property of not being instantiated. According to Russell, the fundamental form of predication consists in applying a predicate to the proper name of an individual. An example of this type of atomic proposition is \"Laika is a dog\". Russell held that talk of existence in the form of sentences like \"dogs exist\" is less fundamental since it means that there is an individual to which this predicate applies without naming this individual.", "title": "History" }, { "paragraph_id": 59, "text": "Willard Van Orman Quine followed Frege and Russell in accepting that existence is a second-order property. He drew a close link between existence and the role of quantification in formal logic. He applied this idea to scientific theories and stated that a scientific theory is committed to the existence of an entity if the theory quantifies over this entity. For example, if a theory in biology claims that \"there are populations with genetic diversity\" then this theory has an ontological commitment to the existence of populations with genetic diversity.", "title": "History" }, { "paragraph_id": 60, "text": "Despite the influence of second-order theories, this view was not universally accepted. Alexius Meinong rejected it and claimed that existence is a property of individuals and that not all individuals have this property. This led him to the thesis that there is a difference between being and existence: all individuals have being but only some of them also exist. This implies that there are some things that do not exist, like merely possible objects and impossible objects.", "title": "History" }, { "paragraph_id": 61, "text": "Many schools of thought in Eastern philosophy discuss the problem of existence and its implications. For instance, the ancient Hindu school of Samkhya developed a metaphysical dualism. According to this view, there are two types of existence: pure consciousness (Purusha) and matter (Prakriti). Samkhya explains the manifestation of the universe as the interaction between these two principles. A different approach was developed by Adi Shankara in his school of Advaita Vedanta. He defended a metaphysical monism by claiming that the divine (Brahman) is the ultimate reality and the only existent. According to this view, the impression that there is a universe consisting of many distinct entities is an illusion (Maya). The essential features of ultimate reality are described as Sat Chit Ananda, meaning existence, consciousness, and bliss.", "title": "History" }, { "paragraph_id": 62, "text": "A central doctrine in Buddhist philosophy is called the three marks of existence. The three marks are aniccā (impermanence), anattā (absence of a permanent self), and dukkha (suffering). Aniccā is the doctrine that all of existence is subject to change. This means everything transforms at some point and nothing lasts forever. Anattā expresses a similar idea in relation to persons. It is the claim that people do not have a permanent identity or a separate self.
Ignorance about aniccā and anattā is seen as the main cause of dukkha: it leads people to form attachments that result in suffering.", "title": "History" }, { "paragraph_id": 63, "text": "A central idea in many schools of Chinese philosophy, like Laozi's Daoism, is that a fundamental principle known as dao is the source of all existence. The term is often translated as \"the Way\" and is understood as a cosmic force that governs the natural order of the world. One position in Chinese metaphysics holds that dao is itself a form of being while another contends that it is non-being that gives rise to being.", "title": "History" }, { "paragraph_id": 64, "text": "The concept of existence played a central role in Arabic-Persian philosophy. Avicenna and Al-Ghazali discussed the relation between existence and essence. According to them, the essence of an entity is prior to its existence. The additional step of instantiating the essence is required for the entity to come into existence. Mulla Sadra rejected this priority of essence over existence. He argued that essence is only a concept used by the mind to grasp existence. Existence, by contrast, encompasses the whole of reality, according to his view.", "title": "History" }, { "paragraph_id": 65, "text": "Formal logic studies which arguments are deductively valid. First-order logic is the most commonly used system of formal logic. In it, existence is expressed using the existential quantifier (∃). For example, the formula ∃x Horse(x) can be used to state that horses exist. The variable x ranges over all elements in the domain of quantification and the existential quantifier expresses that at least one element in this domain is a horse. In first-order logic, all singular terms, like names, refer to objects in the domain and imply that the object exists. Because of this, one can deduce ∃x Honest(x) (someone is honest) from Honest(Bill) (Bill is honest).", "title": "In various disciplines" }, { "paragraph_id": 66, "text": "Many logical systems that are based on first-order logic also follow this idea. Free logic is an exception. It allows there to be empty names that do not refer to any object in the domain. One motivation for this modification is that reasoning is not limited to regular objects but can also be applied to fictional objects. In free logic, for instance, one can express that Pegasus is a flying horse using the formula FlyingHorse(Pegasus). One consequence of this modification is that one cannot infer from this type of statement that something exists. This means that the inference from FlyingHorse(Pegasus) to ∃x FlyingHorse(x) is invalid in free logic even though it would be valid in first-order logic. Free logic uses an additional existence predicate (E!) to express that a singular term refers to an existing object. For example, the formula E!(Homer) can be used to express that Homer exists while the formula ¬E!(Pegasus) states that Pegasus does not exist.", "title": "In various disciplines" }, { "paragraph_id": 67, "text": "The disciplines of epistemology, philosophy of mind, and philosophy of language aim to understand the nature of knowledge, mind, and language. A key issue in these fields is the problem of reference. This problem concerns the question of how mental or linguistic representations can refer to existing objects. Examples of such representations are beliefs, thoughts, perceptions, words, and sentences. For instance, in the sentence \"Barack Obama is a Democrat\", the name \"Barack Obama\" refers to a particular individual. In relation to perception, the problem of reference concerns the question of whether or to what extent perceptual impressions bring the perceiver into contact with reality by presenting existing objects rather than illusions.", "title": "In various disciplines" }, { "paragraph_id": 68, "text": "Closely related to the problem of reference is the relation between truth and existence. Representations can be true or false. According to truthmaker theory, true representations require a truthmaker. A truthmaker of a representation is the entity whose existence is responsible for the fact that the representation is true. For example, the sentence \"kangaroos live in Australia\" is true because there are kangaroos in Australia: the existence of these kangaroos is the truthmaker of the sentence. Truthmaker theory states that there is a close relation between truth and existence: there exists a truthmaker for every true representation.", "title": "In various disciplines" }, { "paragraph_id": 69, "text": "Existentialism is a school of thought that explores the nature of human existence. One of its key ideas is that existence precedes essence. This claim expresses the notion that existence is more basic than essence and that the nature and purpose of human beings are not pregiven but develop in the process of living. According to this view, humans are thrown into a world that lacks preexistent intrinsic meaning. They have to determine for themselves what their purpose is and what meaning their life should have. Existentialists use this idea to focus on the role of freedom and responsibility in actively shaping one's life.", "title": "In various disciplines" } ]
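The two inference patterns described in the logic paragraphs above (existential generalization in first-order logic, and its failure for empty names in free logic) can be made concrete in a proof assistant. The following is a minimal sketch in Lean 4, not part of the article itself: every name in it (Person, bill, Honest, Term, pegasus, FlyingHorse, and the predicate E standing in for the article's E!) is a hypothetical illustration. Since Lean's own quantifier behaves like classical first-order logic, where every term denotes, free logic is only emulated here by one common device: an explicit existence predicate that restricts generalization to existents.

-- First-order logic: a predication of a named individual licenses
-- existential generalization, mirroring "Honest(Bill) ⊢ ∃x Honest(x)".
variable (Person : Type) (bill : Person) (Honest : Person → Prop)

example (h : Honest bill) : ∃ x, Honest x :=
  ⟨bill, h⟩  -- the witness is Bill himself

-- Free logic, sketched with an explicit existence predicate E (the
-- article's E!). A name such as "pegasus" may figure in predications
-- without existential import; concluding that some existing thing is
-- a flying horse additionally requires a proof of E pegasus.
variable (Term : Type) (pegasus : Term)
variable (FlyingHorse : Term → Prop) (E : Term → Prop)

example (h : FlyingHorse pegasus) (hE : E pegasus) :
    ∃ x, E x ∧ FlyingHorse x :=
  ⟨pegasus, hE, h⟩

Dropping the hypothesis hE makes the second example unprovable, which is exactly the free-logic point: FlyingHorse(Pegasus) alone carries no commitment to an existing flying horse.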
Existence is the state of being real or participating in reality. The terms "being", "reality", and "actuality" are often used as close synonyms. Existence contrasts with nonexistence, nothingness, and nonbeing. A common distinction is between the existence of an entity and its essence, which refers to the entity's nature or essential qualities. The main philosophical discipline studying existence is called ontology. The orthodox view is that existence is a second-order property or a property of properties. According to this view, to say that a thing exists means that its properties are instantiated. A different view holds that existence is a first-order property or a property of individuals. This means that existence has the same ontological status as other properties of individuals, like color and shape. Meinongians accept this idea and hold that not all individuals have this property: they state that there are some individuals that do not exist. This view is rejected by universalists, who see existence as a universal property of every individual. Various types of existence are discussed in the academic literature. Singular existence is the existence of individual entities while general existence refers to the existence of concepts or universals. Other distinctions are between abstract and concrete existence, between possible, contingent, and necessary existence, and between physical and mental existence. A closely related issue is whether different types of entities exist in different ways or to different degrees. A key question in ontology is whether there is a reason for existence in general or why anything at all exists. The concept of existence is relevant to various fields, including logic, epistemology, philosophy of mind, philosophy of language, and existentialism.
2001-06-01T18:01:46Z
2023-12-30T17:07:41Z
[ "Template:Redirect", "Template:Portal", "Template:Notelist", "Template:Multiref", "Template:Cite book", "Template:Short description", "Template:Other uses", "Template:Main", "Template:Clear", "Template:Reflist", "Template:Cite journal", "Template:Philosophy topics", "Template:Metaphysics", "Template:Sfn", "Template:Lang", "Template:Multiple image", "Template:Wiktionary", "Template:Authority control", "Template:Good article", "Template:Efn", "Template:Cite web", "Template:Wikiquote" ]
https://en.wikipedia.org/wiki/Existence
9,303
Economy (disambiguation)
An economy is an area of the production, distribution, and trade, as well as consumption, of goods and services by different agents in a given geographical location. Economy may also refer to:
[ { "paragraph_id": 0, "text": "An economy is an area of the production, distribution, or trade, and consumption of goods and services by different agents in a given geographical location in various countries", "title": "" }, { "paragraph_id": 1, "text": "Economy may also refer to:", "title": "" } ]
An economy is an area of the production, distribution, and trade, as well as consumption, of goods and services by different agents in a given geographical location. Economy may also refer to:
2023-05-29T17:57:47Z
[ "Template:Wiktionary", "Template:TOC right", "Template:Disambiguation" ]
https://en.wikipedia.org/wiki/Economy_(disambiguation)